
Accessory Firebat Dock-X Thunderbolt 3 Titan Ridge NVME Dock - eGPU tests and benchmarks

REVOCCASES

Shrink Ray Wielder
Original poster
REVOCCASES
Silver Supporter
Apr 2, 2020
1,716
2,343
www.revoccases.com
probably more interesting for the eGPU.io community but maybe some fellow SFF members are also interested in Thunderbolt and eGPU stuff...

long story short, I've been looking for an *affordable* Thunderbolt hub with PCIe x4 / NVMe support and a Titan Ridge or Maple Ridge controller for quite some time now. I almost got myself one of those WD_BLACK D50 docks, but luckily I found another one with even more features for less $$$ - the Firebat Dock-X



Looking at the price vs. the features this dock promises to offer I was a bit skeptical at first but after I got it I was really impressed...

The hub is made from anodized aluminum, uses the JHL7440 Thunderbolt controller, and has TB3 passthrough plus one PCIe x4 (M.2) interface for NVMe drives or GPUs (via riser cable).





With my previous hub I always had issues with mouse stutter and poor eGPU performance when daisy chained via the dock. In order to see how the Firebat Dock-X performs, I did a couple of tests...

1st test setup: GTX1650 hooked up via the M.2 slot of the hub to a TB4 compatible notebook



Results without any other devices connected to the HUB:



Results with USB mouse and keyboard connected to the hub and "simulating gaming" during benchmark by constantly moving mouse and pressing keys:



Summarizing test 1: way better than my old dock... no stutter when moving the mouse and little to no performance penalty

-----------

2nd test setup: eGPU daisy chained to the hub via TB3, NVMe drive installed in the hub, mouse and a USB stick connected to the hub



results without moving the mouse, no access to the NVME drive or other actions during the benchmark:



results doing the ultimate stress test: TimeSpy running from the hub's NVME drive while copying a large amount of files from the host to the hub plus moving the mouse during benchmark:



wow! no mouse stuttering during the benchmark and only a 5% performance penalty compared to the M.2 eGPU connection, despite stressing the NVMe drive at the same time...
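The "5% penalty" is just the usual percent-change arithmetic on the two graphics scores; a trivial helper for anyone repeating these runs (the scores below are made-up placeholders, the real numbers are in the screenshots):

```python
def penalty_pct(direct_score: float, egpu_score: float) -> float:
    """Performance lost relative to the direct (baseline) connection, in percent."""
    return (direct_score - egpu_score) / direct_score * 100

# Hypothetical TimeSpy graphics scores, purely for illustration:
print(penalty_pct(3500, 3325))  # -> 5.0
```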

------------

just for reference: GTX1650 plugged into a "normal" PCIe x16 slot



------------

final thoughts: IMHO that hub is awesome for the price and has great potential for different setups / DIY projects... still not sure how I'll end up using it, but I can think of at least two scenarios...
  • "ThunderDockGTX" dock with internal GTX1650
  • "one cable" solution for WorkStation on-the-go / gaming at home
 

SFFMunkee

Airflow Optimizer
Jul 7, 2021
283
244
Fantastic results, missed this previously but nice to see that eGPU performance using TB connectivity has effectively caught up to M.2 risers!

Looks like the newer controllers really kick ass :)

My mind certainly considered the same two scenarios. Pity my newer laptop doesn’t have thunderbolt, only USB-C!
 
  • Like
Reactions: REVOCCASES

REVOCCASES

Shrink Ray Wielder
Original poster
Fantastic results, missed this previously but nice to see that eGPU performance using TB connectivity has effectively caught up to M.2 risers!

Looks like the newer controllers really kick ass :)

Intel removed most of the TB bottleneck from Tiger Lake onwards because they integrated the TB controller directly in the CPU (on mobile). Paired with a Titan Ridge (or newer) controller on the accessories side you'll get the best possible performance for your eGPU setup.

Based on my experience I'd say the sweet spot for eGPU via TB is the 3050 / 3060 performance class. Anything faster and the TB/PCIe 4x limitation becomes much more noticeable.
 
  • Like
Reactions: SFFMunkee

SFFMunkee

Airflow Optimizer
Intel removed most of the TB bottleneck from Tiger Lake onwards because they integrated the TB controller directly in the CPU (on mobile). Paired with a Titan Ridge (or newer) controller on the accessories side you'll get the best possible performance for your eGPU setup.

Based on my experience I'd say the sweet spot for eGPU via TB is the 3050 / 3060 performance class. Anything faster and the TB/PCIe 4x limitation becomes much more noticeable.
That's pretty much the next question people (well, I) have, which is "what's the upper limit before PCIe x4 becomes too limiting?"
I remember Tom's Hardware did some PCIe bandwidth/performance scaling tests, comparing PCIe lane counts back in 2007 and then PCIe versions (2.0 to 4.0) more recently.

What I found interesting was that in the 2019 test, using a 5700 XT, PCIe 2.0 x16 only lost ~2% performance vs. PCIe 4.0 x16 - and PCIe 2.0 x16 is theoretically similar in bandwidth to PCIe 4.0 x4 (or PCIe 3.0 x8). Then TechPowerUp also published a great article on 6500 XT performance and PCIe bandwidth scaling. Of course that card maxes out at x4, so it's really just looking at the difference between PCIe v2/3/4 again.

As we know, Thunderbolt 3 effectively provides PCIe 3.0 x4 (with additional functionality and a bit of encapsulation overhead), and as demonstrated in those articles, a 5700 XT performs nearly the same at PCIe 2.0 x16 (or 3.0 x8, or 4.0 x4) as it does at PCIe 4.0 x16. This says to me that:
1) low/mid range GPUs can saturate PCIe3.0 x4 fairly easily
2) mid/high end GPUs still do not saturate PCIe x8 consistently

Thus, a newer version of Thunderbolt using more lanes or a later PCIe revision (or indeed PCIe 4.0 M.2 adapters) would dramatically reduce eGPU limitations. You'd lose only a few percentage points compared to directly installed GPUs, even on relatively high-end cards. That would really open up the market for better gaming eGPUs!

For what it's worth, I'm fairly sure I've got the numbers right here, but please correct me if not:
Version | Data rate | Frequency | Encoding | Per lane/direction | Total BW x1 | Total BW x4 | Total BW x8 | Total BW x16
PCIe v1.1 | 2.5 GT/s | 2.5 GHz | 8b/10b (80%) | 2 Gbps (0.25 GB/s) | 0.5 GB/s | 2.0 GB/s | 4.0 GB/s | 8.0 GB/s
PCIe v2.0 | 5.0 GT/s | 5.0 GHz | 8b/10b (80%) | 4 Gbps (0.5 GB/s) | 1.0 GB/s | 4.0 GB/s | 8.0 GB/s | 16 GB/s
PCIe v3.0 | 8.0 GT/s | 8.0 GHz | 128b/130b (98.5%) | 8 Gbps (1.0 GB/s) | 2.0 GB/s | 8.0 GB/s | 16 GB/s | 32 GB/s
PCIe v4.0 | 16 GT/s | 16 GHz | 128b/130b (98.5%) | 16 Gbps (2.0 GB/s) | 4.0 GB/s | 16 GB/s | 32 GB/s | 64 GB/s
PCIe v5.0 | 32 GT/s | 32 GHz | 128b/130b (98.5%) | 32 Gbps (4.0 GB/s) | 8.0 GB/s | 32 GB/s | 64 GB/s | 128 GB/s

Thunderbolt 1: 10 Gbps = 1.25 GB/s
Thunderbolt 2: 20 Gbps = 2.50 GB/s
Thunderbolt 3: 40 Gbps = 5.0 GB/s
Thunderbolt 4: 40 Gbps = 5.0 GB/s
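The per-lane numbers can be derived rather than memorized; here's a quick Python sanity check (my assumption: the "Total BW" columns count both directions of the full-duplex link, which is how x1 ends up at twice the per-lane figure):

```python
# Sanity check for the table above: usable bandwidth per lane is the raw
# data rate (GT/s) multiplied by the encoding efficiency; the totals then
# count both directions of the full-duplex link.
GENS = {
    "1.1": (2.5, 8 / 10),       # PCIe gen -> (GT/s, encoding efficiency)
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def lane_gbps(gen: str) -> float:
    """Usable Gbps per lane, per direction."""
    rate, eff = GENS[gen]
    return rate * eff

def total_gb_s(gen: str, lanes: int) -> float:
    """Total GB/s for a link: lanes * both directions, 8 bits per byte."""
    return lane_gbps(gen) * lanes * 2 / 8

for gen in GENS:
    print(f"PCIe {gen}: {lane_gbps(gen):.2f} Gbps/lane, x4 total = {total_gb_s(gen, 4):.1f} GB/s")
```

Note the 128b/130b generations land slightly below the rounded table values (e.g. PCIe 3.0 is really 7.88 Gbps per lane, not a flat 8).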
 
Last edited:

robbee

King of Cable Management
Sep 24, 2016
700
940
As far as I know, 4 lanes of PCIe 4.0 should still be okay even for today's high-end GPUs. I can't find direct comparisons, but I found the following videos:

RTX 3070 comparing PCIe 3.0 x4, x8 and x16: no difference between x8 (same bandwidth as 4.0 x4) and x16.



RTX 3070 Ti comparing PCIe 2.0, 3.0 and 4.0, all at x16. Gen 2.0 (same bandwidth as 4.0 x4) shows a small drop of about 5%, but only in average fps, not in the % lows.



eGPU over TB is hard to compare with these videos though, as the cable adds latency and reduces signal integrity.
 

REVOCCASES

Shrink Ray Wielder
Original poster
Thunderbolt performance in general is hard to compare and you can't get a fixed percentage of performance (loss) vs. PCIe.

I've spent a lot of time finding the "best case scenario" for my Thunderbolt projects, and a good way to see what works best is benchmarking with a fast M.2 SSD. Long story short, I've seen everything between 2600 MB/s and 3800 MB/s with TB4.
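If you want to replicate that SSD-based link check without a benchmark suite, a minimal sketch (Python; the path is a placeholder for a large test file on the dock's drive, and the OS page cache will inflate results unless the file is much bigger than RAM or caches are dropped first):

```python
import os
import time

def read_throughput_mb_s(path: str, block: int = 8 * 1024 * 1024) -> float:
    """Sequential read speed of `path` in MB/s (wall-clock, unbuffered)."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while f.read(block):
            pass  # discard data; we only care about elapsed time
    return size / (time.perf_counter() - start) / 1e6

# Example (placeholder path):
# print(read_throughput_mb_s("/mnt/dock-nvme/testfile.bin"))
```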

Best case - rule of thumb: Titan Ridge (or newer) TB controller on your accessories (eGPU), Tiger Lake Mobile platform (or later) on the host, <=0.7m TB cable

If you go for a desktop platform - even if it has TB4 (like the MSI Z590i / Z690i) - you'll lose another 10+ percent of performance compared to Tiger Lake Mobile, simply because there is one more controller plus the chipset in between.
 
  • Like
Reactions: SFFMunkee

REVOCCASES

Shrink Ray Wielder
Original poster
  • Like
Reactions: SFFMunkee

SFFMunkee

Airflow Optimizer
M.2 PCIe4 x4 vs TB4 performance comparison:


Thanks to @msystems for finding this.

Now I kinda want to get a NUC12 "Wall Street Canyon" to try that out XD
Why don’t you send us your old one, so you have an excuse to upgrade haha

I wish they’d integrate Thunderbolt ( on-die along with the USB4 controller ) for desktop CPUs too.

You’d think it would be popular in the creative industries where Apple has a good market share and TB accessories are common (though it’s mostly for large, high-throughput storage IIRC)
 
  • Like
Reactions: REVOCCASES

msystems

Master of Cramming
Apr 28, 2017
591
1,021
A little earlier I was doing some more research on TB and M.2, but I just saw your post a little higher up, and the conclusion is exactly what you already said:

Best case - rule of thumb: Titan Ridge (or newer) TB controller on your accessories (eGPU), Tiger Lake Mobile platform (or later) on the host, <=0.7m TB cable

Or basically:

If the TB host controller is not on-die then it's going to be shit.

What remains to be seen is whether the on-die TB4 implementation on 12th gen Intel will be any better (as in: more efficient) than the implementation on Tiger Lake. I'm not sure if we know the answer to this yet? I found a Reddit post comparing Ice Lake (on-die) TB3 vs Tiger Lake (on-die) TB4 and it was ~1% difference / within margin of error. So I don't know if the situation can improve more.

The new NUC12 looks very good for a tiny eGPU rig though. It finally has a strong enough CPU to really go with a serious GPU. You can either use the TB4 link or make a custom case to use the single M.2 slot (and use a SATA SSD) for the most compact eGPU performance.

Although if going M.2, the DeskMini B660 is what I have my eye on. Slightly bigger and more power-hungry though.
 
Last edited: