Well, it seems the sale is over. I'll probably still go with the 2400G, since I want better gaming capabilities.
By the way, I'm guessing most of you are using it with Win10, but are there any Linux users here with this system? I'm thinking of downsizing my build (already small at 6 liters) because I have no need for a discrete GPU at this time. Gaming on Linux through a DirectX compatibility layer is an interesting proposition, but Linux + gaming + a Raven Ridge APU? Would that be a tall order for this system?
I still have some issues playing 4K60 vids using VLC and mpv hardware acceleration; the CPU alone can't do it. An interesting article on Phoronix claims the 3400G actually works better on Linux than the 2200G/2400G (as in, those weren't stable) in their test suite.
mpv is mostly OK with some config tweaks, but I get lots of dropped frames on vids that play smoothly on a J5005 iGPU with the same stack (just Intel instead of AMD). VLC still seems to blacklist Vega 11, but has no issues with the Intel iGPU.
If anyone is still looking for a cooling solution, doesn't like the crunchy noise generated by the stock cooler, doesn't like the colour scheme of the L9a/i or of the alternatives (Jonsbo HP-400, ID-Cooling IS-40X), or fears the noise a 12mm fan would make (ID-Cooling IS-30): a couple of months ago I hacked up a custom cooling solution based on a) a Noctua NF-B9 redux, b) an ID-Cooling IS-30, and c) an AM4 backplate (AMD style).
As for the component selection:
a) It's a Noctua: 25mm thick, with similar airflow/pressure at 1600 RPM to the 14mm fan on the L9a. The colours of the redux series are much nicer than those of the regular line.
b) At 18mm tall, the heatsink easily allows for a 25mm fan while still fitting the A300 case (maybe the L9a heatsink @ 23mm would fit with a 25mm fan? On paper it's 2mm too tall).
c) A blogger based in Taiwan did a nice cooler rundown for the A300. They found the ID-Cooling offerings (IS-40X, IS-30) lacking in cooling performance, but later retested with a backplate after some indication that it would improve things; temperatures with the backplate were much lower thanks to better mounting pressure (http://www.fox-saying.com/blog/post/46697484). The cheapest option was to eBay an original-style AMD backplate from Hong Kong and take a Dremel to it.
Components ready. On the left, the original AMD backplate. On the right, the eBayed one, with the threaded standoffs already Dremeled off and the holes widened; I applied some red permanent marker to cover the bare metal (could not find my black one). I had removed the plastic screen beforehand using force, then copious amounts of isopropyl alcohol to get the nasty glue off; I do put the plastic screen back when mounting everything. In the middle, the IS-30 heatsink seen from the bottom.
On the left, the original IS-30 fan; being that small, it has to be loud. In the middle, the IS-30 heatsink with the NF-B9 redux already installed. It needs M3x5 screws (M3x6 would probably be fine as well; I couldn't measure properly at first and ended up with M2.5 screws, which I'll never need). On the right, the stock cooler with its shroud removed.
Installed with the backplate in place. Looks ok.
At first it wouldn't fit: I had to move the VRM heatsink to the edge of the board by loosening its screws and sliding it as far as the slack in the mounting holes allowed.
Does look like it belongs there once installed.
Aligns perfectly with the fan grille.
Noise is better than the stock fan; at similar RPM, the stock unit and the Noctua are worlds apart. Cooling performance appears to be comparable at best; I don't have AC, and room temps for my tests have varied wildly. The IS-30 heatsink is really light, so there is not a lot of thermal mass to soak up spikes. I have set the fan to ~950 RPM up to 50°C CPU temperature; at that setting it does not really spin up during browsing/productivity and is barely audible. Idle is around 8–10°C over ambient. Performance at 100% fan speed is as follows: at 21–22°C room temp, I ran 10 minutes of the CPU-Z stress test and my 2200G sat at 72°C without throttling. Adding FurMark to the mix, it reached 82–83°C after a couple of minutes, with the processor no longer boosting. That is an extreme load, obviously. As for noise at 100%: the airflow is noticeable, but there is no motor noise or anything else too aggravating.
Overall this may not be as good as the L9a (while maybe being marginally cheaper and definitely better looking); I have no way to compare, as I do not own that HSF. It will absolutely be better than a stock IS-30. And I personally prefer it to the stock cooler (obviously; I mean, I have spent time on this).
Not sure if anyone can answer this; it's kind of an odd question.
If I try to power on the A300 with no CPU installed at all, what should I expect?
I'm going to be building my system slowly and would like to test it upon receipt if possible, before I get the CPU.
You're the best. Thank you.
I figured as much, but the most recent hardware I've owned is my 3570K/Z77 board, so I wasn't sure if newer hardware had any neat new quirks.
I can tell you what it does with an incompatible CPU (anything without an iGPU): it spins all the fans up, but the power light doesn't come on and it doesn't POST.
Heya, voices from the internet. I'm new here, and I, too, have something to compensate for with a small form factor computer.
Jest aside, the ASRock A300 with a Ryzen 3200G (stock cooler, Intel 660p drive, Optane M10 as pmem, 16 GB in 2×DDR4-3000 CL16) has been a problem-free experience for me on Linux 5.3, specifically Ubuntu 19.10 (“Eoan”, or however that Welsh-looking codename is spelled). Out of the box.
I didn't try other distributions, though, because this one does it to my satisfaction.
The 3200G is “soldered”, the 12nm chips are less picky about memory, and their IGP can do higher clocks; that's why it's a 3200G. Overclocking the IGP or removing the power limits is super easy on Linux, but that's for a later post, perhaps.
To save power, as I leave the box on 24/7 for downloads and cron-jobs, …
I disabled PBO and “turbo” in the BIOS (v3.50). Much to my surprise, that dropped the voltage, even at idle, to what I now recognize as “reasonable” levels. I suspect it was a BIOS flaw while they were on.
Don't write anything to pci*/*/power/control, or some devices just won't enter (deeper) power-saving modes no matter which ASPM policy you set below. At least not with Linux 5.3.0.
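You can still peek at the current runtime-PM settings read-only, e.g.:

    # read-only check of each PCI device's runtime power management knob
    grep . /sys/bus/pci/devices/*/power/control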
… but do add to your kernel command line: pcie_aspm.policy=powersave
… and if you don't have a buggy NVMe drive (Samsung, *cough cough*), you might as well add this, because some distros artificially lower the value as a workaround (any higher number [µs] will do): nvme_core.default_ps_max_latency_us=25000
My /etc/default/grub reads like this (excerpt); I remembered to run update-grub after changing it:
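Something along these lines, with both parameters from above on the default command line (“quiet splash” being the stock Ubuntu default):

    # /etc/default/grub (excerpt)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm.policy=powersave nvme_core.default_ps_max_latency_us=25000"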
Now tweak the threshold between “keep in memory” and “write to disk” to decrease how often your NVMe needs to wake up and work (/etc/sysctl.d/99-local.conf; the “dirty background ratio” assumes you have a fast drive, so skip the first two lines if you're on SATA SSDs):
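A sketch of what that file could contain; the exact numbers here are my own and should be tuned to taste:

    # /etc/sysctl.d/99-local.conf (values are illustrative)
    vm.dirty_background_ratio = 20        # start background writeback late (assumes a fast NVMe)
    vm.dirty_ratio = 50                   # allow plenty of dirty pages before writers block
    vm.dirty_writeback_centisecs = 6000   # flush at most every 60 s
    vm.dirty_expire_centisecs = 6000      # dirty data may sit in RAM for up to 60 s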
While at it, have the kernel bundle writes with whenever it needs to read something new from disk anyway (same file, new line; increases read latency though!): vm.laptop_mode=5
I don't care about keeping logs from before the last boot. If I ever need to diagnose something I'd change this, but for now logs go to memory (and swap if need be; ask me about my zram scripts) instead of waking up disks (/etc/systemd/journald.conf):
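The relevant knob is Storage; a minimal sketch (the size cap is my own addition):

    # /etc/systemd/journald.conf (excerpt)
    [Journal]
    Storage=volatile    # journal lives in /run (RAM) only, gone after reboot
    RuntimeMaxUse=64M   # cap for the in-memory journal (illustrative value)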
Do you have “slow internet” and no fast LAN devices anyway? Have your router/switch/whatever negotiate 100 Mbit/s instead of Gigabit Ethernet. That'll save another 0.nnn W!
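If you'd rather pin it from the Linux side, ethtool can do it (the interface name enp2s0 is an assumption; check ip link for yours):

    # negotiate 100 Mbit/s full duplex on the wired NIC
    sudo ethtool -s enp2s0 speed 100 duplex full autoneg on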
I've switched off the screen, unplugged peripherals including my headset, removed one memory stick, pulled the fan: it draws 3.5–3.9W at idle (with the occasional process doing something). No coil whine, btw. A Raspberry Pi owner's dream.
With both sticks and the fan back in, everyday idle, browsing, and chatting has it at about 7.2–8.3W. :-)
The lines below the first one are for the last 7 and 30 days. I did some experiments and didn't have the device configured properly then, so those numbers are a bit higher.
Given that a gaming GPU alone would draw this or more, I find those numbers quite impressive.
4K videos play fine. I did try VLC and had to disable the always-on deinterlacing. Consumption then is about 20W, as it would be when watching transmissions of the Mormon church, hence I cannot recommend doing that.
The question I would have at that point: if I dual-boot Linux and Windows, would the clocks stick? I feel like no, but if they did, it would be 200% worth dual-booting for.
So did you get VLC to use iGPU hardware acceleration for H.265 4K vids? None of the switches comparable to what worked in mpv did anything in VLC for me; the CPU roasted trying to keep up with 4K60, but managed 4K30, albeit warmly.
Where are you based? Are DeskMinis in short supply there? In the UK we seem to have steady stock, but the price has gone up: I was paying around £130, now it's £150.
You are only using 3.9W? Impressive, and you still have the graphics (GPU) on the APU enabled? The Pi 4 draws a bit less in real-life use, but only by fractions of a watt, and it will go to about 5W if you really push it (you can overclock it to 2GHz). I want to know more. I don't have a 3200G, unfortunately; I do have everything else, with a 2400G, still sitting in boxes (someone wanted a build, then cancelled). It'll give me a chance to play with Ubuntu Server as well, I suppose.
My thinking now is: make it too easy and some users might start doing it without using their heads, conceivably exceeding the limits of the power brick. On one hand you can decouple the CPU and IGP power limits; on the other, the APU has further mechanisms that trigger throttling, such as ingress power. I need to examine all that further.
Anyway, I spilled it in another post: someone discovered that issuing ACPI STPM commands has the desired effect, which I can merely confirm.
No, the clocks won't stick if you dual-boot, for these reasons: (1) Windows. (2) Goto 1. (3) The BIOS resets the values. (4) Games perform differently under Linux; some run better, some worse, some are native, some need Steam's Proton. (5) The IGP will go from 1250 to 1500 MHz no problem, but my impression is that it gets stalled by memory anyway.
You can restart Linux (→ kexec) without going through the BIOS, and server-bros do this to shave off downtime, but I'm not aware of anyone managing the same into Windows. I do recall there's a tool for Windows that sends those ACPI commands, though.
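For the curious, a typical kexec round trip on Ubuntu looks roughly like this (paths assume the stock kernel layout):

    # load the currently running kernel and reboot into it, skipping the BIOS
    sudo kexec -l /boot/vmlinuz-$(uname -r) --initrd=/boot/initrd.img-$(uname -r) --reuse-cmdline
    sudo systemctl kexec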
I will share my VLC configuration with you. If that doesn't work, let's compare differences over chat or Discord before hijacking this thread, and share our notes here later.
Let's agree on using the “Elysium 2013” sample (420 MiB) from 4ksamples.com to verify and as a metric. It plays fine, no frames lost or dropped. htop has the all-core average at 20–50%; no idea how to have VLC display whether it's GPU-accelerated. Update: acceleration is active.
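One way to probe that from a terminal (the sample filename is a placeholder):

    # does the driver expose the needed profiles at all? (package: vainfo)
    vainfo
    # run VLC verbosely and look for VA-API mentions in the log
    vlc -vv elysium-2013.mkv 2>&1 | grep -i vaapi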
You can install a particular version of a package by appending =A.B.C to its name, and pick packages from a particular codename (such as xenial or yakkety or eoan) with the -t eoan flag to apt, or by appending /eoan to the package name.
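Two hedged examples (the exact version string will differ on your machine; apt-cache policy vlc shows what's available):

    # pin a specific version (version string is illustrative)
    sudo apt-get install vlc=3.0.8-2
    # or pull the package from a specific release
    sudo apt-get install -t eoan vlc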
No. You could install the “Hyper-V feature” under Windows 10 Pro and up and run Linux in a Hyper-V VM; that will work as such. It won't get you any closer to overclocking, though, since you need the bare metal, with nothing virtualized intercepting whatever you do.
As some virus authors like to thwart detection by having their malware check whether it is running in a VM, I for one recommend installing Hyper-V anyway, if and where possible.
Many of us here are tinkerers so this is right up our alley!
Do you think you could run a quick before & after GPU benchmark to show us that the iGPU overclock works? I don't have an A300 unfortunately but I've been very interested in it. (or link us to someone who has already done so)
Coolio, thanks for the rundown. I'm not surprised it wouldn't work. Personally, I'd be too worried about overloading my PSU, since I'm planning to power my A300 with a Mean Well RPS-400 plus a step-up converter at some point.
Just a quick reply: I'm 1–3 minor revisions off most of your package list, in some cases just a sub-revision. The funny thing is that the Elysium sample has the following text right under it on the download page: “*NOTE* This file is encoded using x265 and will require a player such as MPC-BE to play them, It will not work in VLC”. That's probably meant for older versions, as v3.0.8 on my machine runs it. Anyway, I get the same 20–50% core loads as you with VLC and no dropped frames (those only happen on 4K60 clips from 4kmedia.org). In mpv with hardware acceleration, core loads are mostly in the low single digits, with spikes up to 12% using vaapi-copy. It looks/sounds better in mpv as well, but it needs the following config, and probably only the first line in reality:
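For anyone following along, a minimal sketch of that kind of mpv config, assuming ~/.config/mpv/mpv.conf (the hwdec line is the one that matters; the other two are common companions I'm adding for completeness):

    # ~/.config/mpv/mpv.conf
    hwdec=vaapi-copy    # hardware decode via VA-API (vaapi-copy, per the core loads above)
    vo=gpu              # mpv's GPU video output
    hwdec-codecs=all    # permit hardware decoding for every supported codec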