"It's too expensive and the new features don't work yet" is somewhere we've been before, with the introduction of Hardware Texture & Lighting, and with the introduction of fully programmable shaders (CUDA cores). The 2080 and 2080Ti sit right on the price/perf curve with the 10xx series in regular raster performance:
(and for comparison, the last time around):
DLSS and other raster performance improvements (like Variable Rate Shading) are only going to shift that up, not down. While the ILM stormtrooper demo looks neat, practical raytracing applications are going to be along the hybrid lines proposed at the
GDC dev talks, augmenting the areas where raster and screen-space effects fail (most reflection cases, plus lots of lighting issues when objects are near each other). For VR specifically, screen-space effects don't work properly, so raytracing is going to be the only game in town. More importantly for adoption, raytracing vastly simplifies things for developers, who no longer need to work around the limitations of raster hacks to achieve a certain look (e.g. being super careful about where screen-space reflection surfaces are placed, to avoid exposing the 'holes' behind culled dynamic scene objects).
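To make the hybrid idea concrete, here's a minimal C++ sketch of a per-pixel reflection lookup that tries the cheap screen-space path first and only falls back to a ray-traced query where that path fails. Everything in it (types, function names, the miss condition) is a made-up stand-in for illustration, not any engine's or API's actual interface:

```cpp
#include <iostream>
#include <optional>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// Stand-in for a screen-space reflection march through the depth buffer.
// Returns nothing when the reflected ray leaves the screen or lands in a
// region the depth buffer simply doesn't contain (the 'holes' mentioned above).
std::optional<Color> traceScreenSpace(const Vec3& /*origin*/, const Vec3& dir) {
    if (dir.z > 0.0f) return std::nullopt;  // pretend rays heading off-screen miss
    return Color{0.2f, 0.2f, 0.2f};         // pretend everything else resolves
}

// Stand-in for a full ray-traced scene query (the part RT cores accelerate).
// No screen-space blind spots, but far more expensive per pixel.
Color traceRay(const Vec3& /*origin*/, const Vec3& /*dir*/) {
    return Color{0.8f, 0.8f, 0.8f};
}

// Hybrid reflection: raster/screen-space where it works, raytracing where it doesn't.
Color shadeReflection(const Vec3& origin, const Vec3& dir) {
    if (auto ssr = traceScreenSpace(origin, dir)) {
        return *ssr;               // cheap path covered this pixel
    }
    return traceRay(origin, dir);  // fall back to the expensive path
}

int main() {
    Color a = shadeReflection({0, 0, 0}, {0, 0, -1});  // takes the SSR path
    Color b = shadeReflection({0, 0, 0}, {0, 0,  1});  // takes the ray-traced fallback
    std::cout << a.r << " vs " << b.r << "\n";
}
```

The developer-side win is that the fallback is always there, so nobody has to hand-tune scenes just to hide the cases the screen-space path can't handle.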
I don't buy the "this just means future cards will cost even more!" idea either. The original GTX Titan arrived 5 years ago with the 7xx series (and had been preceded by other near-$1k halo cards like the GTX 690 or 8800 Ultra), after all. The very top of the 'top end' hasn't really shifted up, and more cards will be released to fill out the rest of the range in time, as with every other generation. Even if those cards lack the RT cores or even the Tensor cores, the other improvements to the CUDA cores (e.g. simultaneous FP+INT execution) will still be there. I rather expect Ampere to still feature a number of Tensor cores to take advantage of DLSS, just not in the numbers needed to denoise raytraced output.
Unless you need or want new cards right away, waiting is generally a winning strategy.