Buy a used 1080Ti now for 500 before the price goes back up!
That’s what I’m getting after reading all this.
Was gonna get a nice small 2070 but side graded to a 1070 itx, waiting for 7nm instead.
The motherboard HRTF is fake because it uses the 5.1 channel mix instead of being native to the engine: audio engine -> 5.1 channel 3D mix -> HRTF, versus the audio engine handling the HRTF itself. It uses the crappy 5.1 channel mix instead. I also include environmental modeling in HRTF, but you're right that people consider that separate.
Real HRTF uses the player's position inside the game engine to generate a 2-channel output directly, instead of mixing 5.1 back down to 2 channels. It should also model head shape to improve spatial awareness. CS:GO is the only major game I know of with this built into the engine.
There is still much to be done in textures. We don't even need RAM chips; onboard SSD texture swapping could also work.
Any game I've played, even with textures maxed out, still looks fuzzy and not crisp near my character. Increasing the texture quality also doesn't kill your fps.
Again, you're getting mixed up between two different things:
The HRTF is a static transform that takes into account things that don't change dynamically (i.e. your head remains your head) and applies it to an audio stream. This can be applied to a stereo or a 5.1 (or 7.1, or however many discrete channels you want to downmix to). The modelling of sound propagation around the head is done once offline to produce the transform, and this transform is used for all real-time processing. Because it's a fixed transform it is very simple and efficient to apply, but it also only applies to the head-related effects of sound (i.e. those that allow easier discrimination of source direction).
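Roughly what I mean by "fixed transform", as a minimal Python/numpy sketch: each source direction has a pre-measured pair of impulse responses (HRIRs), and applying the HRTF is just one convolution per ear. The HRIR data below is a random placeholder, not a real measured set; a real implementation would load something like the MIT KEMAR measurements.

```python
import numpy as np

# Placeholder HRIRs: in reality these are measured impulse responses,
# one left/right pair per source direction, produced once offline.
# That offline step is the "static" part of the HRTF.
rng = np.random.default_rng(0)
hrir_left = rng.normal(size=128) * np.exp(-np.arange(128) / 20.0)
hrir_right = rng.normal(size=128) * np.exp(-np.arange(128) / 20.0)

def apply_hrtf(mono_source: np.ndarray) -> np.ndarray:
    """Apply a fixed HRTF to a mono stream: one convolution per ear.

    No geometry, no reflections, just the head-related filtering.
    Returns a (num_samples, 2) stereo buffer.
    """
    left = np.convolve(mono_source, hrir_left)[: len(mono_source)]
    right = np.convolve(mono_source, hrir_right)[: len(mono_source)]
    return np.stack([left, right], axis=1)

# 0.1 s of a 440 Hz tone as a stand-in for a game sound effect.
t = np.linspace(0, 0.1, 4410, endpoint=False)
binaural = apply_hrtf(np.sin(2 * np.pi * 440 * t))
print(binaural.shape)  # (4410, 2)
```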
The sound engine is what deals with the dynamic effects of the (relative to the head) moving environment on sound sources within the environment. Reflections, scattering, frequency-dependent attenuation, etc. It's what generates the stereo (or multichannel) mix that then gets fed to the HRTF. You could in theory model the head in this engine and use that in lieu of an HRTF, but you'd just be wasting processing time for no benefit (and, depending on the fidelity your real-time engine is capable of, possibly getting worse results).
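To make the split concrete, here's a toy two-stage pipeline, reusing apply_hrtf from the sketch above: the "engine" stage gets recomputed every frame as the source moves (only distance gain and delay here, all numbers invented), then hands its output to the fixed HRTF stage, which never changes.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 44100

def engine_stage(source: np.ndarray, source_pos, listener_pos) -> np.ndarray:
    """Dynamic part: recomputed whenever the source or listener moves.

    Only distance attenuation and propagation delay are modeled here;
    a real engine also does reflections, occlusion, air absorption, etc.
    """
    distance = float(np.linalg.norm(np.asarray(source_pos) - np.asarray(listener_pos)))
    gain = 1.0 / max(distance, 1.0)                       # inverse-distance falloff
    delay = int(distance / SPEED_OF_SOUND * SAMPLE_RATE)  # propagation delay in samples
    return np.concatenate([np.zeros(delay), source * gain])

# Per frame: dynamic engine pass, then the fixed HRTF pass (unchanged frame to frame).
dry = np.sin(2 * np.pi * 440 * np.arange(4410) / SAMPLE_RATE)
wet = engine_stage(dry, source_pos=(3.0, 0.0, 1.0), listener_pos=(0.0, 0.0, 0.0))
# binaural = apply_hrtf(wet)  # static stage from the previous sketch
```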
Putting an SSD on the card has no benefit for gaming use (and is functionally equivalent to the DMA access over the PCIe bus that has been used for years). Getting textures from backing storage over the PCIe bus to the card is not a bottleneck on performance. If you were to try to keep textures out of vRAM and only load them on the fly (as opposed to the current practice of caching every level texture in vRAM until you run out of vRAM or run out of textures, and aggressively flushing those cached textures if that space is needed for active tasks), then you would only see a performance regression.
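As a rough illustration of that caching behaviour (upload over PCIe on first use, keep everything resident, evict only under pressure), here's a toy LRU-style resident set in Python. The class name, texture names and sizes are all made up for the example; nothing vendor- or driver-specific.

```python
from collections import OrderedDict

class VramTextureCache:
    """Toy model of the 'cache until full, flush under pressure' policy."""

    def __init__(self, capacity_mb: int):
        self.capacity_mb = capacity_mb
        self.used_mb = 0
        self.resident = OrderedDict()  # texture name -> size in MB

    def bind(self, name: str, size_mb: int) -> str:
        if name in self.resident:
            self.resident.move_to_end(name)        # cache hit: no PCIe traffic at all
            return "hit"
        while self.used_mb + size_mb > self.capacity_mb and self.resident:
            _, freed = self.resident.popitem(last=False)
            self.used_mb -= freed                  # aggressive flush of least-recently-used textures
        self.resident[name] = size_mb              # DMA upload over PCIe happens here, once
        self.used_mb += size_mb
        return "miss (uploaded over PCIe)"

cache = VramTextureCache(capacity_mb=8192)
for tex in ["rock_diffuse", "rock_normal", "grass_atlas", "rock_diffuse"]:
    print(tex, "->", cache.bind(tex, size_mb=512))
```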
That's down to texture filtering; increasing texture resolution would only make the problem worse (namely by introducing aliasing). MIPmap level selection is driven by the pixel-to-texel ratio (how many texels a pixel samples), which is a function of absolute MIPmap size, not relative MIPmap size. If the optimum MIPmap for a given draw distance is 64x64, the 64x64 MIPmap will be used regardless of what the MIP level 0 ('native' texture) resolution is. Thus, increasing texture resolution enables higher fidelity for closer and closer objects, but does not affect fidelity once you start stepping down MIP levels. See this image for example: anything at the 512x512 MIP or below would be unaffected by any increase in texture size.
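A quick worked example of why bumping the base resolution doesn't help past the MIP cutoff: the level is picked from the texel-to-pixel ratio, so the same absolute MIP resolution gets selected for the same screen footprint no matter how big level 0 is. Minimal sketch, the function and numbers are mine and purely illustrative.

```python
import math

def selected_mip_resolution(base_res: int, texels_per_pixel: float) -> int:
    """Pick the MIP whose texel density best matches one texel per pixel.

    texels_per_pixel is how many base-level texels land on one screen
    pixel (along one axis) for the current draw distance.
    """
    lod = max(0.0, math.log2(texels_per_pixel))  # MIP level, level 0 = full resolution
    lod = min(lod, math.log2(base_res))          # can't step below a 1x1 MIP
    return max(1, base_res >> int(round(lod)))   # resolution actually sampled

# Same surface, same distance: 8 base texels per pixel at 512; a 2048 base
# texture at that distance is sampled 4x more densely, so 32 texels per pixel.
print(selected_mip_resolution(512, 8))    # -> 64  (the 64x64 MIP)
print(selected_mip_resolution(2048, 32))  # -> 64  (same absolute MIP, no gain)
print(selected_mip_resolution(2048, 2))   # -> 1024 (up close, higher base res does help)
```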
You're probably much better served by discussing this in the Unity forum and not cluttering this thread. I love game design, don't get me wrong, but it feels tangential at best to the discussion at hand.
Concerning HRTF, I see what you mean now; by its technical term it is just a fixed function. Would the correct term be 3D spatial audio? Then yes, I absolutely would love that. (CS:GO implements 3D audio and they call it HRTF mode.)
The mix of 5.1 through an HRTF is surround sound, not 3D sound. Unless I'm missing something, 5.1 does not account for sounds on the Z axis, so there are losses in fidelity there. I would say audio adds a huge immersion benefit; our response to audio stimuli is much faster than to visual ones. I realize marketing says visuals sell more, but I'd love a game with 3D spatial audio and environmental processing. Head tracking for audio is something that hasn't been implemented outside of VR, and it's also something I'd like to see in our games.
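Just to illustrate the Z-axis point: the standard 5.1 layout only defines speaker positions in the horizontal plane (rough ITU-style azimuths below, purely illustrative), so any height information is already gone by the time an HRTF sees the mix.

```python
# Nominal 5.1 speaker directions (azimuth, elevation) in degrees,
# roughly following the ITU-R BS.775 layout. Every elevation is 0:
# the format simply has nowhere to encode height.
SPEAKERS_5_1 = {
    "front_left":     (-30, 0),
    "front_right":    (+30, 0),
    "center":         (0, 0),
    "lfe":            (None, 0),   # LFE carries no directional information
    "surround_left":  (-110, 0),
    "surround_right": (+110, 0),
}

# A sound directly above the player has to be folded down into these six
# horizontal channels before any HRTF pass, so the elevation cue is lost
# at the downmix stage.
print({channel: position[1] for channel, position in SPEAKERS_5_1.items()})
```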
So if we were to add an enormous amount of texture swapping, the PCIe bus would be able to keep up? And yes, I'm talking about increasing the texture quality of up-close objects. Obviously objects more than 20 feet away look fine, but crawling in grass or walking in corridors the textures look ass. Of course the weapon model's texture looks great, but then you look at the other characters and they look much lower resolution. And it's all flat; real foliage has volume.
The textures are unnatural and repeat, and our eye is very good at recognizing patterns. The game world, by design, is very inorganic. We notice the repeating grass polygons and textures, and that the walls are all the same. If you've ever seen an ancient building, you wouldn't expect every stone to be perfectly in place, or the walls to be at perfect 90-degree angles. Perhaps those AI cores could be used to insert imperfections.
By the way, I think we can say goodbye to the dream of an ITX-sized 2080, given the TDP. Even a 2070 is unlikely, though Gigabyte could probably adapt their current design. And obviously the 2080 Ti is out of the question.
I think the 2070 has high potential for ITX-size variants. But I notice one thing: the 8-pin power connector is positioned at the end of the card rather than on top, while the 2080 and 2080 Ti have their power connectors in the more traditional top position.
I wonder if this FE design will be carried over to AIB cards.
4K.
Global high settings.
Pick a title.
1080ti vs 2080ti.
What are the FPS?
RTX is new. Bleeding edge and not worth the cost of entry if the above comparisons aren’t extremely favorable for the 2080ti.
Don’t preorder a 60+ dollar game.
Don’t preorder a 1200 dollar gpu with zero real world performance data.
My 2c