Prebuilt [SFFn] ASRock's DeskMini A300 - Finally!

Stevo_

Master of Cramming
Jul 2, 2015
449
304
Well, it seems that the sale is over. I'll probably still go with the 2400G, since I want better gaming capability.

By the way, I'm guessing most of you are using this with Win10, but are there any Linux users here with this system? I'm thinking of downsizing my build (already smallish at 6 liters) because I have no need for a discrete GPU at this time. Gaming on Linux through a DirectX compatibility layer is an interesting proposition, but Linux + gaming + a Raven Ridge APU? Would that be a tall order for this system?

I still have some issues playing 4K60 videos using VLC and mpv hardware acceleration; the CPU alone can't do it. An interesting article on Phoronix claims the 3400G actually works better on Linux than the 2200G/2400G (as in, those were not stable) in their test suite:

https://www.phoronix.com/scan.php?page=article&item=amd-ryzen5-3400g&num=1
 

Stevo_

Master of Cramming
Jul 2, 2015
449
304
I have no issue playing 4kx60 @ Arch Linux with 3400g and latest kernel
mpv is mostly OK with some config tweaks, but there are lots of dropped frames on videos that play smoothly on a J5005 iGPU with the same stack (except Intel instead of AMD). VLC still seems to blacklist the Vega 11, but has no issues with the Intel iGPU.
 

ConsolidatedResults

Average Stuffer
May 4, 2019
66
72
If anyone is still looking for a cooling solution, doesn't like the crunchy noise generated by the stock cooler, doesn't like the colour scheme of the L9a/i or of the alternatives (Jonsbo HP-400, ID-Cooling IS-40X), or fears the noise a 12mm fan would make (ID-Cooling IS-30): a couple of months ago I hacked up a custom cooling solution based on a) a Noctua NF-B9 redux, b) an ID-Cooling IS-30, c) an AM4 backplate (AMD style).

As for the component selection:

a) It's Noctua: 25mm thick, with similar airflow/pressure at 1600 RPM to the 14mm fan on the L9a. The colour of the redux series is much better than that of the normal ones.
b) At 18mm tall it easily allows for a 25mm fan while still fitting the A300 case (maybe the 23mm L9a heatsink would fit with a 25mm fan? On paper it is 2mm too tall.)
c) Some blogger based in Taiwan did a nice cooler rundown for the A300. They found the ID-Cooling offerings (IS-40X, IS-30) lacking in cooling performance, but later retested with a backplate, as there was some indication it would improve things. Temperatures with the backplate were much lower due to better mounting pressure (http://www.fox-saying.com/blog/post/46697484). The cheapest option was to eBay an original(-style) AMD backplate from Hong Kong and take a Dremel to it.

Components ready. On the left, the original AMD backplate. On the right, the eBayed one, with the threaded standoffs already Dremeled off and the holes widened; I applied some red permanent marker to cover the bare metal (could not find a black one). I had removed the plastic screen beforehand, using force and then copious amounts of isopropyl alcohol to get the nasty glue off; I do reinstall it when mounting everything. In the middle, the IS-30 heatsink seen from the bottom.



On the left, the original IS-30 fan; it has to be loud, being that small. In the middle, the IS-30 heatsink with the NF-B9 redux already installed. It needs M3x5 screws (M3x6 is probably fine as well; I couldn't measure properly at first and ended up getting M2.5 screws, which I'll never need). The stock cooler with its shroud removed is on the right.



Installed with the backplate in place. Looks ok.



At first, it wouldn't fit. I had to move the VRM heatsink to the edge of the board by loosening the screws and sliding the heatsink as much as the slack in the mounting holes allowed.



Does look like it belongs there once installed.



Aligns perfectly with the fan grille.



Noise is better than the stock fan; at similar RPM the stock fan and the Noctua are worlds apart. Cooling performance appears comparable at best. I don't have AC, so room temps for my tests have varied wildly. I can say that the IS-30 heatsink is really light, so there is not a lot of thermal mass to soak up spikes. I have set the fan to ~950 RPM up to 50°C CPU temperature; at that setting it does not really spin up during browsing/productivity and such, and is barely audible. Idle is around 8-10°C over ambient. Performance at 100% fan speed: at 21-22°C room temp, after 10 minutes of CPU-Z stress test my 2200G was at 72°C and not throttling yet. Adding FurMark to the mix, 82-83°C after a couple of minutes, with the processor no longer boosting; that is an extreme load, obviously. In terms of noise at 100%, the airflow is noticeable, but there is no motor noise or anything else too aggravating.

Overall this may not be as good as the L9a (while maybe being marginally cheaper and definitely better looking); I have no way to compare, as I do not own that HSF. It will absolutely be better than a stock IS-30, and I personally prefer it to the stock cooler (obviously, given the time I have spent on this).
 

Curiosity

Too busy figuring out if I can to think if I shoul
Platinum Supporter
Bronze Supporter
M...M...M...M...Multi-Tier...Subscriber...
Apr 30, 2016
708
827
Not sure if anyone can answer this; it's kind of an odd question.

If I try to power on the A300 with no CPU installed at all, what should I expect?
I'm going to be building my system slowly and would like to test the unit upon receipt, if possible, before I get the CPU.
 

Curiosity

Too busy figuring out if I can to think if I shoul
Platinum Supporter
Bronze Supporter
M...M...M...M...Multi-Tier...Subscriber...
Apr 30, 2016
708
827
Nothing will happen at all.
Source: I tried it myself
You're the best. Thank you.
I figured as much, but the most recent hardware I've owned is my 3570K/Z77 board, so I wasn't sure whether newer hardware had neat new quirks
 

W4RR10R

Cable-Tie Ninja
Jan 29, 2019
211
211
You're the best. Thank you.
I figured as much, but the most recent hardware I've owned is my 3570k/z77 board, so I wasn't sure if new hardware had neat new quirks
I can tell you what it does with an incompatible CPU (anything without an iGPU): it spins all the fans up, but the power light doesn't come on and it doesn't POST.
 

LostEnergy

Caliper Novice
Sep 25, 2019
31
22
Heya, voices from the internet. I'm new and I, too, have something to compensate for with a small form factor computer.

Jest aside, the ASRock A300 with a Ryzen 3200G (stock cooler, Intel 660p drive, Optane M10 as pmem, 16 GB as 2×DDR4-3000 CL16) is a problem-free experience for me on Linux 5.3, specifically Ubuntu 19.10 (“eoan”, or some similarly Welsh-sounding codename). Out of the box.
I didn't try other distributions, though, because this one does it to my satisfaction.

The 3200G is “soldered”, the 12nm chips are less picky about memory, and their IGP can do higher clocks; that's why it's a 3200G. Overclocking the IGP or removing power limits is super easy on Linux, but that's perhaps for a later post.

To save power, as I leave the box on 24/7 for downloads and cron-jobs, …
  1. I disabled PBO and “turbo” in the BIOS (v3.50). Much to my surprise, that dropped the voltage, even at idle, to what I now recognize as “reasonable” levels. I suspect a BIOS flaw when they are enabled.
  2. Don't write anything to /sys/bus/pci/devices/*/power/control, or no matter what you set in (3) some devices just won't enter (deeper) power-saving modes. At least not with Linux 5.3.0.
  3. … but do add to your kernel command line: pcie_aspm.policy=powersave
  4. … and if you don't have a buggy NVMe drive (Samsung *cough cough*) you might as well set nvme_core.default_ps_max_latency_us=25000 (any higher number, in µs, will do); some distros artificially lower that value as a workaround.
    My /etc/default/grub reads like this (excerpt) and I remembered to run update-grub after changing it:
    Bash:
    GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1 pcie_aspm.policy=powersave acpihp.disable=1 nvme_core.default_ps_max_latency_us=25000"
  5. Now tweak the “keep in memory”/“write to disk” thresholds to decrease how often your NVMe needs to wake up and work (/etc/sysctl.d/99-local.conf; the “dirty background ratio” assumes you have a fast drive, so skip the first two lines with SATA SSDs):
    Bash:
    vm.dirty_ratio=40
    vm.dirty_background_ratio=30
    vm.dirty_writeback_centisecs=18000
    vm.dirty_expire_centisecs=18000
    
    fs.xfs.filestream_centisecs=18000
    fs.xfs.xfssyncd_centisecs=18000
  6. While at it, have the kernel bundle writes with whenever it needs to read something new from disk anyway (same file, new line; increases read latency though!):
    vm.laptop_mode=5
  7. I don't care about logs from last boot. If I need to diagnose something, I'd change this, but for now logs go to memory (and swap if need be; ask me about my zram scripts) not waking up disks (/etc/systemd/journald.conf):
    INI:
    [Journal]
    Storage=volatile
    SyncIntervalSec=5m
    RateLimitIntervalSec=60s
    RateLimitBurst=1000
    SystemMaxUse=1G
    RuntimeMaxUse=512M
    MaxFileSec=1day
    ForwardToSyslog=no
  8. Do you have “slow internet” and no fast LAN devices anyway? Have your router/switch/whatever negotiate 100 Mbit/s instead of Gigabit Ethernet. That'll save some more 0.nnn W!
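The boot-time parameters from steps (3) and (4) are easy to sanity-check after a reboot (the sysctl file from step (5) is picked up with `sudo sysctl --system`, or automatically at boot). A minimal sketch; the `has_param` helper is my own invention, not from the post:

```shell
# Check whether a kernel command line contains an exact key=value parameter.
has_param() {
  # $1 = full command line, $2 = parameter to look for
  case " $1 " in
    *" $2 "*) return 0 ;;
    *)        return 1 ;;
  esac
}

# On a live system, the parameters the kernel actually booted with
# are in /proc/cmdline (empty fallback keeps this harmless elsewhere):
cmdline=$(cat /proc/cmdline 2>/dev/null || true)
if has_param "$cmdline" "pcie_aspm.policy=powersave"; then
  echo "ASPM powersave was requested at boot"
fi
```

If the parameter is missing there, the grub edit didn't take (forgot update-grub, or edited the wrong file).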

I've switched off the screen, unplugged peripherals including my headset, removed one memory stick, pulled the fan—it draws 3.5–3.9W when idle (with the occasional process doing something). No coil whine btw. Raspberry Pi owners' dream.

Both sticks, fan, …—everyday idle, browsing, chatting, has it at about 7.2–8.3W. :-)
The lines below the first are for the last 7 and 30 days. I ran some experiments back then and hadn't configured the device properly yet, so those numbers are a bit higher.

Asrock A300 idle power 7.2W

Given that a gaming GPU alone would draw this or more, I find those numbers quite impressive.

4K videos play fine; I did try VLC and had to disable the always-on deinterlacing. Consumption then is about 20W, as it would be when watching transmissions of the Mormon church, hence I cannot recommend doing it.
 
Last edited:

Curiosity

Too busy figuring out if I can to think if I shoul
Platinum Supporter
Bronze Supporter
M...M...M...M...Multi-Tier...Subscriber...
Apr 30, 2016
708
827
The question I would have at that point: if I dual-boot Linux and Windows, would the clocks stick? I feel like no, but if they did it would be 200% worth dual-booting for.
 

Stevo_

Master of Cramming
Jul 2, 2015
449
304
4k videos play fine, I did try VLC and had to disable the always-on interlacing. Consumption then is about 20W, as it would be when watching transmissions of the Mormon church, hence I cannot recommend doing it.

So did you get VLC to use iGPU hardware acceleration for H.265 4K vids? None of the comparable command-line switches that worked in mpv worked in VLC for me; the CPU roasted trying to keep up with 4K60, but was able to do 4K30, albeit warmly.
 

SFF EOL

Cable-Tie Ninja
Dec 9, 2018
154
36
Heya voices from the internet. I'm new and, I too, do have to compensate something by a small form factor computer.

…

I've switched off the screen, unplugged peripherals including my headset, removed one memory stick, pulled the fan—it draws 3.5–3.9W when idle (with the occasional process doing something). No coil whine btw. Raspberry Pi owners' dream.

Both sticks, fan, …—everyday idle, browsing, chatting, has it at about 7.2–8.3W. :-)
You are only using 3.9W? Impressive, and you still have the graphics (GPU) on the APU running/enabled? The Pi 4 draws a bit less in real-life use, but only by fractions, and it will go to about 5W if you really push it (as you can overclock it to 2GHz). I want to know more. I don't have a 3200G unfortunately; I do have everything else, but with a 2400G, still sat in boxes (someone wanted a build, then cancelled). It'll give me a chance to play with Ubuntu Server as well, I suppose.
 

LostEnergy

Caliper Novice
Sep 25, 2019
31
22
I think several people here would be very interested in hearing the details on that :)

My thinking now is: make it too easy, and some users might start doing it without using their heads, conceivably exceeding the limits of the power brick. On the one hand you can divorce the CPU and IGP power limits; on the other, the APU has some other mechanisms that trigger throttling, such as ingress power. I need to examine all that further.

Anyway, I spilled it in another post: someone discovered that using ACPI STPM commands has the desired effect, which I merely confirm.

The question I would have at that point would be- if I dual boot Linux and windows would the clocks stick? I feel like no but if they did it would be 200% worth dual booting for

No, for these reasons: (1) Windows. (2) Goto 1. (3) The BIOS resets the values. (4) Games perform differently under Linux: some run better, some worse; some are native, some need Steam's Proton. (5) The IGP will go from 1250 to 1500 MHz no problem, but my impression is that it gets stalled by memory anyway.

You can restart Linux (→ kexec) without going through the BIOS, and server-bros do this to shave off downtime, but I'm not aware of anyone having managed this into Windows. I do recall there's a tool for Windows which can send those ACPI commands, though.

So did you get VLC to use iGPU hardware acceleration for H.265 4K vids? None of the comparable command-line switches that worked in mpv worked in VLC for me; the CPU roasted trying to keep up with 4K60, but was able to do 4K30, albeit warmly.

I will share my configuration with you. Ideally, if that doesn't work, let's compare differences in chat (Discord) before hijacking this thread, and later share our notes here.

Let's agree on using the “Elysium 2013” sample (420 MiB) from 4ksamples.com to verify with and as a metric. It plays fine, no frames lost or dropped. htop has the all-core average at 20–50%; no idea how to have VLC display whether it's GPU-accelerated. Update: acceleration is active.
Bash:
uname -rv
5.3.0-10-generic #11-Ubuntu SMP Mon Sep 9 15:12:17 UTC 2019
You're looking for lines with “HEVC” here:
Bash:
vainfo

error: XDG_RUNTIME_DIR not set in the environment.
libva info: VA-API version 1.5.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/radeonsi_drv_video.so
libva info: Found init function __vaDriverInit_1_5
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.5 (libva 2.5.0)
vainfo: Driver version: Mesa Gallium driver 19.1.6 for AMD RAVEN (DRM 3.33.0, 5.3.0-10-generic, LLVM 8.0.1)
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            :    VAEntrypointVLD
      VAProfileMPEG2Main              :    VAEntrypointVLD
      VAProfileVC1Simple              :    VAEntrypointVLD
      VAProfileVC1Main                :    VAEntrypointVLD
      VAProfileVC1Advanced            :    VAEntrypointVLD
      VAProfileH264ConstrainedBaseline:    VAEntrypointVLD
      VAProfileH264ConstrainedBaseline:    VAEntrypointEncSlice
      VAProfileH264Main               :    VAEntrypointVLD
      VAProfileH264Main               :    VAEntrypointEncSlice
      VAProfileH264High               :    VAEntrypointVLD
      VAProfileH264High               :    VAEntrypointEncSlice
      VAProfileHEVCMain               :    VAEntrypointVLD
      VAProfileHEVCMain               :    VAEntrypointEncSlice
      VAProfileHEVCMain10             :    VAEntrypointVLD
      VAProfileJPEGBaseline           :    VAEntrypointVLD
      VAProfileVP9Profile0            :    VAEntrypointVLD
      VAProfileVP9Profile2            :    VAEntrypointVLD
      VAProfileNone                   :    VAEntrypointVideoProc
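A quick way to boil such a listing down to a number (the `count_hevc_decode` helper name is made up for illustration): VAEntrypointVLD marks hardware decode support, EncSlice the encoder.

```shell
# Count the hardware-decodable HEVC profiles in a vainfo listing.
# VAEntrypointVLD = decode entry point; EncSlice would be the encoder.
count_hevc_decode() {
  grep 'HEVC' | grep -c 'VAEntrypointVLD'
}

# Against the live system: vainfo 2>/dev/null | count_hevc_decode
```

On the listing above this reports 2 (HEVCMain and HEVCMain10 decode); anything above 0 means the driver advertises hardware HEVC decode.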
Bash:
egrep '^[^#\n]' .config/vlc/vlcrc | sed -ne '/^\[/{N;/=/p};/^\[/!p'

[visual] # Visualizer filter
effect-fft-window=none
[qt] # Qt interface
qt-pause-minimized=1
qt-name-in-title=0
qt-recentplay=0
qt-privacy-ask=0
sout-x264-preset=ultrafast
sout-x264-tune=film
svg-width=1
svg-height=1
a52-dynrng=0
soxr-resampler-quality=3
[headphone] # Headphone virtual spatialization effect
headphone-compensate=1
[core] # core program
stereo-mode=1
aout=any
audio-resampler=any
video-title-show=0
vout=xcb_xv
text-renderer=any
access=any
mux=any
access_output=any
packetizer=any
vod-server=any
one-instance-when-started-from-file=0
playlist-enqueue=1
advanced=1
Bash:
dpkg-query -f '${db:Status-Abbrev} ${Package}=${Version}\n' --show 'vlc*' 'linux*' '*amdgpu*' '*vdpau*' 'libav[cfu]*' '*265*' | grep -F ii | column -t | cut -d ' ' -f 3

libavc1394-0=0.5.4-5
libavcodec58=7:4.1.3-1
libavfilter7=7:4.1.3-1
libavformat58=7:4.1.3-1
libavutil56=7:4.1.3-1
libdrm-amdgpu1=2.4.99-1
libdrm-amdgpu1=2.4.99-1
libvdpau1=1.2-1ubuntu1
libx265-165=2.9-4
libx265-176=3.1.1-2
linux-base=4.5ubuntu2
linux-firmware=1.181
linux-image-5.3.0-10-generic=5.3.0-10.11
linux-image-generic=5.3.0.10.11
linux-libc-dev=5.2.0-15.16
linux-modules-5.3.0-10-generic=5.3.0-10.11
linux-modules-extra-5.3.0-10-generic=5.3.0-10.11
linux-signed-image-generic=5.3.0.10.11
linux-sound-base=1.0.25+dfsg-0ubuntu5
linux-tools-common=5.3.0-10.11
mesa-vdpau-drivers=19.1.6-1ubuntu1
vdpau-driver-all=1.2-1ubuntu1
vlc=3.0.8-2
vlc-bin=3.0.8-2
vlc-data=3.0.8-2
vlc-l10n=3.0.8-2
vlc-plugin-base=3.0.8-2
vlc-plugin-notify=3.0.8-2
vlc-plugin-qt=3.0.8-2
vlc-plugin-samba=3.0.8-2
vlc-plugin-skins2=3.0.8-2
vlc-plugin-video-output=3.0.8-2
vlc-plugin-video-splitter=3.0.8-2
vlc-plugin-visualization=3.0.8-2
xserver-xorg-video-amdgpu=19.0.1-1

You can install a particular version of a package by appending =A.B.C to its name, and pick packages from a particular codename (such as xenial, yakkety, or eoan) with the flag -t eoan to apt, or by appending /eoan to the package name. Example: apt-get install -t eoan 'vlc>=3.0.8' (quote the >=, or the shell will treat it as an output redirection).
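Those `package=version` strings follow Debian version ordering (including the `7:` epoch prefix seen on the libav entries), which `dpkg` can compare directly; a small sketch, with `newer_of` being a made-up helper name:

```shell
# Print the newer of two Debian package versions using dpkg's own ordering.
# "gt" is one of the comparison operators accepted by dpkg --compare-versions.
newer_of() {
  if dpkg --compare-versions "$1" gt "$2"; then
    echo "$1"
  else
    echo "$2"
  fi
}
```

For example, `newer_of 7:4.1.3-1 4.1.3-1` prints `7:4.1.3-1`, because an explicit epoch beats the implicit epoch 0.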

fin.
 
Last edited:

SFF EOL

Cable-Tie Ninja
Dec 9, 2018
154
36
My thinking now is, make it too easy and some users might start doing it not using their head, conceivably exceeding the limits of the power brick. …

No, for this reasons: (1) Windows. (2) Goto 1. (3) BIOS resets the values. …

You can restart Linux (→ kexec) without going through the BIOS, and server-bros do this to shave off downtime, but I'm not aware anyone managed this to Windows. …
What about running Linux from inside Windows? That is doable now, isn't it?
 

LostEnergy

Caliper Novice
Sep 25, 2019
31
22
No. You could install the “Hyper-V feature” under Windows 10 Pro and up and run Linux in a Hyper-V VM; that'll work. It won't get you any closer to overclocking, though, for that you need bare-metal access with nothing virtualized intercepting whatever you do.

As some virus authors like to thwart detection by having their malware check whether it is running in a VM, I for one recommend installing Hyper-V anyway, if and where possible.
 

NateDawg72

Master of Cramming
Aug 11, 2016
398
302
My thinking now is, make it too easy and some users might start doing it not using their head, conceivably exceeding the limits of the power brick. You can divorce the power limits CPU-IGP on one hand, the other the APU has some other mechanisms triggering throttling, such as ingress power. I need to examine all that further.

Anyway, I spilled it in another post: Someone discovered using ACPI STPM commands has the desired effect, which I merely confirm.
Many of us here are tinkerers, so this is right up our alley!

Do you think you could run a quick before-and-after GPU benchmark to show us that the iGPU overclock works? (Or link us to someone who has already done so.) I don't have an A300 unfortunately, but I've been very interested in it.
 

Curiosity

Too busy figuring out if I can to think if I shoul
Platinum Supporter
Bronze Supporter
M...M...M...M...Multi-Tier...Subscriber...
Apr 30, 2016
708
827
Coolio, thanks for the rundown. I'm not surprised it wouldn't work. Personally, I would be too worried about overloading my PSU, since I'm planning to power my A300 from a Meanwell RPS-400 plus a step-up converter at some point.
 

Stevo_

Master of Cramming
Jul 2, 2015
449
304
I will share my configuration with you. Ideally, if that doesn't work, let's compare differences using chat, Discord, before hijacking this thread, to later share our notes here.

Let's agree on using the “Elysium 2013” sample (420 MiB) from 4ksamples.com to verify and as metric. Plays fine, no frames lost or dropped. htop has the all-core average at 20–50%, no idea how to have VLC display it's gpu-accelerated. Update: Acceleration is active.

fin.

Just a quick reply: I'm 1-3 minor revisions off most of your package list, in some cases just a sub-revision. The funny thing is that the Elysium sample has the following text right under it on the download page: "*NOTE* This file is encoded using x265 and will require a player such as MPC-BE to play them, It will not work in VLC". That's probably meant for older versions, as v3.0.8 on my machine runs it. Anyway, I get the same 20-50% core loads as you with VLC, and no dropped frames (those only happen on 4K60 clips from 4kmedia.org). In mpv with hardware acceleration, core loads are mostly in the low single digits, with spikes up to 12% using vaapi-copy. It looks/sounds better in mpv as well, but it needs the following config (and probably only the first line, in reality):

hwdec=vaapi-copy
hwdec-codecs=all
opengl-hwdec-interop=vaapi-egl

Anyway, I was just curious; not important enough to spend more time on, as I was just using the 4K60 vids as a GPU test.
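Side note, not from the posts above: mpv's config keys map one-to-one onto `--key=value` command-line flags, so lines like those can also be tried ad hoc before committing them to mpv.conf. A tiny sketch; `conf_to_flags` is a made-up helper:

```shell
# Turn mpv.conf "key=value" lines into the equivalent --key=value CLI flags,
# e.g. for a one-off test run:  mpv $(conf_to_flags < mpv.conf) video.mkv
conf_to_flags() {
  sed 's/^/--/' | tr '\n' ' '
}
```

(The filename is a placeholder; the unquoted substitution relies on the config containing no spaces, which holds for simple key=value files like this one.)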
 