Log yet another portable pc / laptop thingy


Trash Compacter
Original poster
Jan 19, 2021
This is yet another more or less mobile pc type project, of which there are a few now.
The goal is to build an easily upgradeable laptop/portable pc suitable for running a virtual machine with gpu passthrough.
Total power target is around 400W.
I can't remember.
No really, I don't know.
Somehow I have it noted everywhere that I need to be able to supply 400W for it to work but that doesn't make any sense.
Maybe it was the result of a powersupply calculator?
Maybe I just wanted 200W with a really really big safety margin?
Either way I think I will find the reason eventually.
For now I will try to build towards 400W and see how far I get.
hint: not very far

This one is more of a feasibility test or tech demonstrator than an actual build.
As such, one of the ideas was to keep costs as low as possible in case things don't work out after all. (Things work, but costs went high.)
Therefore the tests here are done with a 200W power target for now.
I blame gpu prices.....

Do I know what I am doing?
I think I should mention that this is my first ever pc build.
I have mostly only dealt with laptops.
My pc building experience is limited to drive, ram and cpu swaps on pentium 4 machines.
I have also not done anything related to power electronics before.
Soo... I am perfectly qualified for this build. :p
It's a learn as you do kind of thing and I would say I learned quite a lot.
At least enough to get this to where it is right now.

Photos are following, eventually, hopefully. Maybe?

Here is the parts list:
  • System
    • Motherboard: biostar x470nh
    • GPU: rx480 4gb *dead*
    • CPU: a8-9600
    • CPU-Cooler: nh-l9a-am4
    • RAM: 8g ddr4-2400t
    • Storage: sata ssd + sata hdd
  • Power
    • AC-DC-PSU: "480w" 12v led driver (S-480-12), later replaced by a dps-600pb b
    • DC-ATX-PSU: "400w" ebay directplug (264w)
    • Buck-Converter: szwengao 360W 24v-12V
    • Boost-Converter: "600w" ebay boost converter
    • Battery: INR18650-30Q * 12 (used)
    • BMS: 6s 18650 holder + bms * 2
    • Mosfets: IRF5210PBF * 2
    • Diode: ebay 50A ideal diode
  • Display
    • Display-Board: nt68676 board
    • Display: LP173WF1(TL)(B3)
    • Display-mux: ebay hdmi bidirectional switch
    • Display-frame: emachines g625 display frame
  • Misc
    • Wifi: rt5572 (5V)
    • Keyboard: thinkpad compact keyboard (KU-1255)
    • Case: dvb-s receiver case + u-shaped alu plate

Operating system
The host operating system is archlinux and the guest operating system is windows 10.
As a result, some things will be linux specific, which isn't too useful for most people here.
Still I hope that some information here will be of use to someone somewhere at some point. :D
The choice of linux as the operating system also prevents the use of nvidia gpus.

How it all connects.
                                             |                 +--GPU
                                             |                 |
12.4V          |        (ideal diode)        |                 |
               |                             | (out: 12.3V)    +--CPU
             BOOST                         BUCK
             CONVERTER                     CONVERTER
               | (out: 25.3V 1-2A)           | (in: 18-36V)
               |                             |
               |                             |
          MOSFET_SWITCH                      |
               |                         VOLTAGE
              BMS                        INDICATOR
           6S BATTERY
The grounds are all connected together and to mains earth inside the ac-dc psu.

                           |  (drain)
                     | |---+--+
                     |        |
                     | |->-+  V
             (gate)  |     |  T
                _____| |---+--+
               |           |  (source)
  +-- [100k]---+--[200k]---+
  |                        |
   \                       |
  |                        |
GND                      BMS
There are 2 mosfets in parallel to reduce total resistance and improve heat dissipation.
The resistor values might not be accurate.
The resistors are there so that Vgs doesn't exceed -20V which is the max according to the datasheet.
They should also prevent a 24V short in case the mosfet fails with a drain/source short to gate.
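As a sanity check, the divider math can be worked through in a few lines. The 100k/200k values are the ones from the sketch and, as noted, might not match the actual build:

```python
# Gate divider sanity check for the p-channel battery switch (IRF5210).
# Resistor values are taken from the schematic above and may not be accurate.
def gate_source_voltage(v_batt, r_pulldown, r_gate_source):
    """Vgs when the bottom of the divider is pulled to GND (fet turned on)."""
    v_gate = v_batt * r_pulldown / (r_pulldown + r_gate_source)
    return v_gate - v_batt  # negative: gate below source turns a p-ch fet on

vgs = gate_source_voltage(v_batt=25.2, r_pulldown=100e3, r_gate_source=200e3)
print(round(vgs, 1))  # -16.8
```

Even at a full 6S charge of 25.2V this stays a few volts inside the -20V absolute maximum while being well past the gate threshold.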

There are also other ways to wire everything up.
Here are some alternative designs that weren't chosen for one reason or another.
                   (ideal diode)                                      |
24V            |                   |     (out: 12V)      |   (in 12V) |
               |                   |                     |            +--CPU
             BUCK                  _                     |
             CONVERTER             ^ (ideal diode)       |
               | (out: 21V 1-2A)   |                     |
               |                   |               DISPLAY_BOARD
               |                   |
          MOSFET_SWITCH            |
               |                VOLTAGE
              BMS              INDICATOR
           5S BATTERY
A 24V version.
Also works with other ac-dc and battery voltages as long as the 2 voltage ranges don't overlap and the ac-dc voltage is higher than the battery voltage.
                  (ideal diode)        (wide input)   |
18-36V         |                       |              |
               |                  MOSFET_SWITCH       +--CPU
           BUCK-BOOST                  |              |
           CONVERTER                   _              +--DISPLAY_BOARD
               |         (ideal diode) ^
               |                       |
               |                       |
          MOSFET_SWITCH                |
               |                    VOLTAGE
              BMS                  INDICATOR
Similar to before, with the buck converters being replaced by a wide input dc-atx unit and a buck-boost converter.
Its main advantage is independent and flexible voltage selection for the ac-dc unit and the battery.
There is uncertainty regarding the mosfet switching logic.

None of this was tested.

Choosing the correct dc-dc converters and a matching ac-dc supply can be hit and miss.
This design is somewhat dependent on finding the correct boost and buck converters.
Since all the grounds are connected together, a boost converter that does its current limiting through a low side sense resistor will not work properly.
It will still supply power but not limit its current which can turn into a too-fast charger if one is not careful.

Buck converters seem a bit random.
I have only tested 2 buck converters.
The first one had some strange behaviour.
If the converter was powered (for example by the battery) and the ac-dc supply voltage dropped down to the buck's output voltage, the buck converter would work in reverse as a boost converter, generating 40V on the input side.
This too will work as a "too-fast-for-battery-life" charger.
The charging power was measured at 300W.
I don't know what exactly causes this.
The other one doesn't have this problem.

The combination of the other buck converter with an appropriate ac-dc supply allows load sharing between the ac-dc supply and the battery.
I know that msi did something similar with some of its laptops, calling it hybrid power supply function or something: https://us.msi.com/support/technical_details/NB_Battery_Charge
I think not many people liked it because it prevented the proper use of higher rated power bricks in those laptops.
In this build, it can be disabled through the mosfet switch.
Doing so will prevent the battery from being fully charged as there is a 0.6V drop across the mosfet's body diode.
It might also warm up the mosfet by a bit depending on the set charge current.

For "normal" operation the ac-dc supply voltage should be higher than the buck output voltage.
The droop characteristics as well as the idle voltage of each decide the load sharing behaviour.

Information regarding power delivery and battery performance.
The ac-dc supply has substantial voltage droop of 0.4V at 200W.
With the ac-dc output set to 12.6V and the buck output fixed at 12.3V, at 200W load, 150W is supplied by the ac-dc supply and 50W by the battery.
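A crude linear-droop model reproduces that split. The slope and setpoints below come from the measurements above; real converters are neither perfectly linear nor perfectly stiff:

```python
# Load-sharing sketch: a drooping ac-dc supply in parallel with a stiff
# buck output. Numbers are from the measurements above, not a datasheet.
def share(p_total, v_acdc_idle=12.6, droop_v=0.4, droop_at_w=200, v_buck=12.3):
    """Split a total load between the ac-dc supply and the battery buck."""
    slope = droop_v / droop_at_w                # volts of droop per watt
    p_acdc = (v_acdc_idle - v_buck) / slope     # ac-dc droops to the buck setpoint
    p_acdc = min(p_acdc, p_total)               # battery only kicks in past that
    return p_acdc, p_total - p_acdc

p_ac, p_bat = share(200)
print(round(p_ac), round(p_bat))  # 150 50
```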
3 tests without load sharing were done.
Running at 200W without load sharing crashed the system after 30min each time.
AC input current is 12.4A with load sharing and 18.7A without load sharing.
AC rms voltage 230V.
Powerfactor is unknown.
Efficiency is unknown.
Ripple voltage is unknown.
They are assumed to be "not great".
The dps-600pb psu has a default open voltage of 12.44V.
It has very low voltage droop and does not cause any load sharing with tested loads up to 80W.
The voltage is adjustable and can (probably) be lowered to ~12.3V to allow load sharing with the battery.
Input current (and power) is unknown.
Powerfactor is unknown.
Efficiency is unknown.
Ripple voltage is unknown.
They are assumed to be good.

On battery power only, with a 200W load, the battery gets 30min of runtime.
The test started at 25V and was stopped at 19V.
Total efficiency at 200W is 92%.
This is calculated by measuring power going from the bms to the converter and comparing it to power going into the dc-atx unit and the display board.
The measurement accuracy is unknown.
The buck converter has a stated efficiency of 95%.
Ripple voltage is stated as 70mVp-p.
The accuracy of these statements is unknown.
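A back-of-envelope check of the 30min result, assuming INR18650-30Q cells at roughly 3.0Ah / 3.6V nominal and a 6s2p pack of 12 cells:

```python
# Rough energy balance for the 200W battery test (cell specs assumed).
cells, ah, v_nom = 12, 3.0, 3.6
pack_wh = cells * ah * v_nom                 # nominal pack capacity
load_w, runtime_h, efficiency = 200, 0.5, 0.92
drawn_wh = load_w * runtime_h / efficiency   # energy pulled from the pack
print(round(pack_wh, 1), round(drawn_wh, 1), round(drawn_wh / pack_wh, 2))
# 129.6 108.7 0.84
```

So roughly 84% of the nominal capacity was used before the 19V cutoff, which is plausible for a high-rate discharge stopped early.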

Idle power consumption is 30W with the display on and 18W with the display off.
Idle runtime was not measured.
It is expected to be 4h at 30W.
The system does not support any C states deeper than C6.

A 16h standby test was done.
Battery voltage dropped from unknown (likely 25.2) to 23.4V.
The display board was not powered off.
The keyboard was not powered off to allow wake-on-usb.
Standby time is calculated to be 60h.
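The 60h figure presumably comes from a linear extrapolation along these lines. The cutoff voltage is an assumption, and lithium discharge curves are not linear, so treat it as a rough estimate:

```python
# Standby extrapolation sketch (assumed ~18.5V cutoff, assumed linear decline).
v_start, v_after, hours = 25.2, 23.4, 16
v_cutoff = 18.5
rate = (v_start - v_after) / hours          # volts per hour during standby
print(round((v_start - v_cutoff) / rate))   # ~60 hours
```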

A test consisting only of web browsing was done.
The browser in use is palemoon-29.1 without any hardware acceleration.
The battery voltage dropped from 25V to 18.8V, at which point the system shut down.
This was due to one of the cells being out of balance.
The total runtime was 150min.
Average power consumption is unknown.
It is expected to be ~45W.

Running a virtual machine with gpu passthrough is important. Yes, very important.
The motherboard ignores the video output device selection in the bios.
Instead it uses the first gpu that has a display connected to it and calls it a day.
This is somewhat random when both gpus are connected to the same monitor.
The hdmi switch can be used to set the boot gpu because it doesn't relay edid information to the non-active port.
Surround view needs to be enabled in the bios, otherwise the non-boot gpu is disabled.
Once the gpu driver is loaded, the non-boot gpu gets powered down until there is work to do on it.
Runpm (amdgpu.runpm=1) needs to be enabled to allow the gpu to power off while it has edid access.

DRI_PRIME offloading works with both the dgpu and the igpu.
The same is true for running separate accelerated Xorg sessions.
Other xorg configurations were not tested.
GPU passthrough only works with the dGPU.
Using the iGPU for passthrough was tested only once and did not work.
GPU passthrough works with dynamic unbind and rebind but causes a kernel oops on shutdown.
Systemd deals badly with this, causing several minutes of useless waiting for Xorg to die.
Doing a force shutdown using magic sysrq (with sync and unmount) or using a better init is an option.
(Recompiling systemd with a different wait timeout might also work.)
During gpu passthrough, the display can be switched between host and guest using the hdmi switch.
Using a kvmfr such as looking-glass was not tested.
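For reference, the dynamic unbind/rebind mentioned above boils down to a few sysfs writes using the kernel's driver_override mechanism. This is only a sketch: the PCI address is a placeholder, it must run as root, and whether amdgpu unbinds cleanly is exactly the part that causes the oops described above:

```python
# Sketch of rebinding a gpu to vfio-pci via sysfs. The PCI address is a
# placeholder; dry_run collects the writes instead of performing them so
# the sequence can be inspected without root or real hardware.
def rebind(pci_addr="0000:01:00.0", to="vfio-pci", dry_run=True):
    writes = [
        (f"/sys/bus/pci/devices/{pci_addr}/driver/unbind", pci_addr),
        (f"/sys/bus/pci/devices/{pci_addr}/driver_override", to),
        ("/sys/bus/pci/drivers_probe", pci_addr),
    ]
    if not dry_run:
        for path, value in writes:
            with open(path, "w") as f:
                f.write(value)
    return writes

for path, value in rebind():
    print(path, "<-", value)
```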

Audio passthrough can be done by using the hdmi-audio output of the gpu and connecting a playback device to the display board's 3.5mm output port.
This works without any noticeable problems.
A 3.5mm male to male cable can be connected between the display board's 3.5mm audio output and the motherboard's 3.5mm input port to allow audio playback using a loopback configuration on the host.
This was not tested.
Using scream over ivshmem works and allows output switching using standard software methods.

A bit about win10 memory allocation. Offtopic
Windows 10, or any windows version, does not support memory overcommitment (allocating more than is available).
The reasoning of "better safe than sorry" makes sense and is understandable.
It just turns out that it isn't safe at all.
It seems that microsoft didn't reserve some allocatable space for core os functions, which leads to the entire system crapping out, even if there are a few gigabytes of unused memory just sitting there doing absolutely nothing.
It basically means that you have to create a pagefile to increase the amount of allocatable space, even if said pagefile is never to be used.
This is generally not a problem, disk space is cheap.
A default installation has a decently sized pagefile, usually on the boot drive. (I think, I am not too sure about this.)
However since the pagefile is never to be used anyways (as long as you have memory), the pagefile can be on the slowest drive imaginable.
I wonder if it's possible to create a pagefile on a network drive.

Some say bulldozer is to amd as netburst is to intel.
In VM use, the cpu is underpowered for gaming but surprisingly effective for general tasks.
For vm tasks 3 cores were given to the vm and 1 to the host.
VM and host tasks were isolated from each other with the exception of a few host housekeeping tasks.
Code compilation takes long but not excessively so.
It does not support certain useful functions such as pcie3.0 atomics, pcie p2p dma across bridges and deep C states.
Power reporting is off by several orders of magnitude.
This might be a driver problem.
Attempts at overclocking had questionable results.
BIOS overclocking changes the tsc frequency but disables boost clocks and reduces performance in all cases.
Overclocking with amdctl works, occasionally for a limited time before the cpu resets its clocks.
It increases performance by a small amount.
A replacement in the form of a 3400g or preferably a 4650g is planned.

Not much to be done without these.
The keyboard is connected with a 1-2m long usb cable to the back io.
During "laptop" use the cable is wrapped around the display and the keyboard placed on the lid, over the gpu vent holes.
The keyboard is held in place by a clip made from cut cable duct.
The cable can be unwrapped to use the keyboard further away from the laptop.
The keyboard has been updated with a modified firmware to swap fn and ctrl.

The display is pulled from a dell m6700.
It is connected to the display frame of an emachines g625.
The lvds cable for the display was supplied with the display board.
The combination of the display and the display board does not allow any backlight control.
There is information regarding backlight control in both the display datasheet and the controller datasheet.
No attempts at getting backlight control working was done.

The display board is connected with an hdmi ribbon cable to the output of the hdmi-switch.
The motherboard and gpu are both connected with hdmi ribbon cables to the inputs of the hdmi-switch.
The switch switches between motherboard and gpu to the display board.

The wifi is connected to the usb 2.0 header on the motherboard.
Its antenna ports are connected to the antenna cables coming from the display frame.
The wifi chip can only rx/tx on one channel at a time.
It is slow at switching channels (100-300ms).

The box with all the fun stuff inside.
The case outer dimensions are 285mm * 370mm * 60mm without the screen and the psu.
It consists of a dvb-s receiver case from the early 2000s and a u-shaped aluminium plate screwed to the back (originally front) of it.
The case is extended to the left by the power supply to a total of 285mm * 410mm * 68mm (285mm * 450mm * 60mm with the dps-600pb).
There is space for a gpu up to 275mm long and a mini-itx board. (a r9 290 just barely fits with some trouble)
The motherboard is screwed in place by 3 screws.
Access to the 4th screw hole is not good.
The gpu is being held down by a small tab on the edge of the case used for connecting the alu plate.
Additionally the gpu is prevented from sliding backwards by the power connector of the display board.
There is space for a battery and bms up to 170mm * 90mm * 55mm total.
The area around the battery was insulated with double sided tape without uncovering the 2nd sticky side.
The battery just sits there on a piece of foam that was used to package the motherboard.
It is not physically connected to any part of the case.
I don't know why it doesn't slip but I think asking questions might make it change its mind.
There is not much space for sata drives or, more importantly, sata cables.
The sata drives are currently sitting on top of the gpu, taped together and held onto the case by a single screw.
This also prevents the rear of the gpu from making contact with the case.
Total weight is (definitely more than) 6.6kg.

The fun box is filled with fire apparently.
The cpu and gpu fans are in exhaust configuration.
The cpu exhausts through the top right and gpu through the bottom left.
More than half of the cpu fan is covered by the case, which makes it blow hot air out the sides and partially into the case.
One of the 2 gpu fans is fully covered which makes it theoretically blow hot air into the case.
Somehow it still improves gpu and case air temperatures so it is left in there.
The gpu fans are 2 pin sunon fans.
A modification intended to allow pwm control was done.
It does not work fully because there is no tach signal so the gpu outputs 100% pwm.
Under 200W load the gpu reaches up to 81degC.
Cpu temperatures can be kept below 70degC depending on the fan rpm.
Case air temperature exceeds 45degC.
Case surface temperature exceeded 60degC near the gpu.
A case fan was not added.

At the end of the 30min battery test, the battery stayed below 45degC with the lid removed.
There were no tests with the lid in place.
It is expected to exceed 50degC.
During the same test, the area around the buck converter became noticeably hot.
Temperature was not measured.
It is estimated to be 40-60degC.
The mosfets did not get appreciably warm.

The battery and several capacitors are very sensitive to high case air temperatures.
The reliability of other components is also negatively affected by high temperatures.
The buck converter has a rated working temperature of 80degC.
The boost converter has a rated working temperature of 85degC.
The ac-dc psu has a rated working temperature of 40degC.
The ac-dc psu is placed outside the case and therefore less affected by case air temperature.
The display board has a rated working temperature of 40degC.
The display board sits on top of the buck converter.
The display board and buck converter are covered by the pcie riser cable.
It is important to keep case temperature below 40degC.

What about 400W then?
Running the design mostly unchanged with a ~400W total load is unfeasible.
The ac-dc supply is woefully unsuited for computing loads.
The buck converter is only rated up to 360W.
The dc-atx unit is only rated for 22A at its 12V rail.

Thermally it already runs into problems regarding case air temperature and gpu temperature.
These are not expected to improve with higher power consumption.
The mosfets are expected to heat up significantly with higher loads.

Battery runtime at 400W is expected to be 10-12 minutes.
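That estimate follows from the 200W test: roughly 100Wh were delivered there, and at 400W the usable energy shrinks further (higher IR losses, earlier voltage cutoff). The derating factors below are assumptions, not measurements:

```python
# Rough 400W runtime bracket based on the ~100Wh delivered in the 200W test.
usable_wh_200w = 100
for derate in (0.67, 0.8):   # assumed 20-33% less usable energy at 400W
    minutes = usable_wh_200w * derate / 400 * 60
    print(round(minutes))    # prints 10, then 12
```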

The current system is not capable of drawing a lot more than 200W.

Where to go now?
The ac-dc power supply needs to be replaced. *done*
The dps-600pb seems like a good choice.
It has the correct dimensions and power ratings.
Its pinout is known and its voltage and fan speed are adjustable.

There is no easy solution regarding the buck converter in sight.
Higher power buck converters seem to be bigger and often have higher ripple voltage.
Some might also have weird quirks such as the reverse boost quirk of the tested 480W buck converter.
Running multiple converters in parallel might be an option.
As the battery runtime at high loads is limited, running the buck converter at overload might work.

The dc-atx unit needs to be replaced by a higher power one.
Alternatively the gpu power cable can be connected to the 12V rail (before the dc-atx unit) using a switch module.
Given that the pcb and cables can handle it, replacing the mosfet on the dc-atx unit might be enough.

The mosfets need to be replaced by lower resistance ones.
Alternatively more mosfets need to be added to reduce total resistance as well as improve power dissipation.
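Some rough numbers on why this matters: conduction loss scales with the square of the current but only inversely with the number of parallel fets. The IRF5210's Rds(on) is around 60 mOhm per the datasheet; the ~3 mOhm figure for a modern low-resistance replacement is an assumption from the part number, so check the actual datasheet:

```python
# Conduction loss in the battery switch at a 400W load on a ~24V bus.
def fet_loss(power_w, bus_v, rds_on, n_parallel):
    i = power_w / bus_v                 # total current through the switch
    return i * i * rds_on / n_parallel  # I^2 * R, shared across parallel fets

print(round(fet_loss(400, 24, 0.06, 2), 1))   # ~8.3W with two IRF5210s
print(round(fet_loss(400, 24, 0.003, 2), 2))  # ~0.42W with two ~3mOhm parts
```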

The case needs additional vent holes and maybe a case fan. *done*
Ducting around the cpu and gpu might reduce the amount of exhaust air getting into the rest of the case.

The gpu needs to be replaced by a higher power one for testing the 400W configuration once the above changes are made.
Without much further information, a r9 290x seems like a good choice.

The battery will have to stay.
An upgrade to 7S2P might be possible.
However 10min at 400W isn't too bad.
In the worst case, it is possible to connect a car battery to the 12V rail.

Battery powered Portable PC
There is quite a lot of very useful information regarding battery powering a pc at https://smallformfactor.net/forum/threads/stalled-nfc-s4m-c-524-project-🐦roadrunner-battery-powered-portable-pc.6146/.
I highly recommend checking it out if you are interested in building a battery powered pc.

I think this is it for now.
EDIT: some very basic psu information and a battery run time test.


Trash Compacter
Original poster
Jan 19, 2021
A text only update... I should fix my phone camera.

I ordered a bunch of new mosfets (IPI120P04P4L-03) and a new psu.
The psu (dps-600pb b) arrived and is working well.
It did make the unit 30-40mm wider but the length fit just right.
The cables stick out by a decent bit but it still made overall cable management somewhat easier.

The mosfets will still take a good while to arrive, I think.

It turns out that the only thing between the cpu/gpu power rails and the main 12V line is a single p-ch mosfet on the dc-atx unit.
(That, and a sub 1 milliohm n-ch mosfet on the low side which seems to be only for reverse polarity protection.)
The plan is to replace the p-ch mosfet with a lower resistance one as a very cheap way to make the dc-atx unit usable for 400W.
Not sure how well the cables and pcb traces will do but oh well...

The new mosfets should also replace the current ones between the dc-dc converters and the battery.

In the meantime I added a 3pin 50mm case fan to the battery.
It made a 10degC difference in battery surface temperature on average.
It does cause some vibrational noise which is quite annoying but it might go away once the fan is properly secured to something instead of sitting on a single screw.

I also cut additional vents into the case for the cpu and gpu.
It didn't make too much of a difference for the cpu temperature but did reduce noise by quite a bit.
It did seem to make a big difference to the gpu.
Until I shorted it out on something and killed it.

Now isn't really the best time to look for a new gpu but what else is there to do?
At least it is a good excuse to buy a r9 290, if I get the chance. :D


Cable-Tie Ninja
Apr 21, 2019
Sorry to hear about your GPU, market sucks with the prices now.

Really cool idea on improving the dc-atx plug-in, hope all goes well with resoldering. Looking forward to updates.

Really impressive for a first build. Your qualifications are spot-on, had a good chuckle :D


Trash Compacter
Original poster
Jan 19, 2021
Update, more or less...
still no photos

I managed to buy a r9-290 somehow.
It looks like this https://www.newegg.com/powercolor-radeon-r9-290-axr9-290-4gbd5-tdhe-oc/p/N82E16814131569.
It is 20mm wider but 8mm shorter than a reference card.
I tried putting the reference cooler on it but it has some weird heat spreaders on the vram as well as a slightly different pcb layout which requires some heatsink modifications.
For now the fans and shroud are replaced by a single 80*25mm fan held on with cable ties.
This isn't enough to cool the gpu at any decent load.
The idea is that the gpu overheats and throttles before it can pull enough power to kill the dc-atx unit.

The fan and the back plate cause the height of the entire card to increase slightly.
It still fits just about inside the case with some spare space in length and height but none in width.
However there is no more space for the sata drives now which used to sit on top of the gpu.
It is possible to remove the back plate and make space for the drives.
However I don't want to kill another gpu by shorting it out on something again, so I will leave it on for now.

The gpu works for all "normal" use cases it seems.
It does suffer from the amd gpu reset bug which is a bit of a pain but I have a few patches that might fix it.
There is a workaround in qemu but with it, for some reason, restarting the guest restarts the host. :p
This only happens when the gpu was bound to the amdgpu driver before binding it to vfio.
Everything works fine if the system is suspended to ram after binding to vfio but before starting the vm.
But this is too much of a pain to be usable.

I found out that the system seems to work ok with the 0V cable going to the ac-dc psu disconnected.
The current is probably flowing through the motherboard mounting screws, through the laptop case, to the psu case and back to psu internal ground.
This path doesn't seem capable of carrying high currents, however some modifications here and there might make it usable.
This would allow the omission of the cable going from the ac-dc unit to the ground terminal block just below the center of the screen.

It also seems that one of the pairs in the battery has a very high self discharge rate and always gets charged to a lower voltage than the other cells.
This might be because the bms was shorted out a few times and now it has gone a bit crazy or maybe one of the cells has gone bad.
Luckily I can replace up to 2 cells or cut one of the 2 bms out, depending on where the fault is.

As for the sata drives, I am thinking of placing them on the outside of the case.
Sata is supposed to be hotplug capable under certain configurations but who knows.
I have plenty of space on top of the ac-dc psu, if only I could get the sata cables all the way there.

The wifi board was also being a bit weird here and there, throwing usb errors and occasionally hanging the entire system on boot.
I think it was overheating because of all the tape wrapped around it.
I removed the tape on top of the chip but left the rest on the board with the hope of preventing short circuits.
It hasn't caused any problems since, but the rate at which it did was very low to begin with.
Being next to the buck converter and the display board on one side and the pcie riser on the other probably doesn't do it any good either.

The real fun should start once the mosfets arrive.