I don't know why everyone is crying doom & gloom here, but I think people are over-analyzing the presentation and overlooking a few key points that weren't emphasized when they should have been.
The presentation held up the ray-tracing architecture as an analog to the unified shaders of the GeForce 8800 GTX, when it really should have been compared to the hardware T&L unit of the GeForce 256. The 8800 GTX had an immediate impact on the games of its day without much software support (although DX10 certainly improved performance further), but hardware T&L required graphics-engine support and industry adoption. Once adopted, T&L completely changed the way lighting was handled on GPUs and offloaded the CPU for many other tasks, accelerating 3D quality over the 2001-2005 period. Other vendors tried to compete, and most fell flat (3dfx's Voodoo5 5500, notably, shipped without hardware T&L at all).
One of the few things movie production companies do that video games can't handle in real time is high-density ray tracing. Ray tracing is coming one day, whether that day is today with the RTX announcement or sometime in the future. Nvidia is trying to grab hold of the next logical step the market will eventually take by creating a ray-tracing standard before AMD and others can, just as it did in the early hardware T&L days. Hopefully, people who buy RTX GPUs now won't end up with useless RT units if the industry settles on another solution.
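To put the cost in perspective, here's a minimal ray-casting sketch (plain Python; every name and number is my own illustration, nothing from Nvidia's actual pipeline). Even the cheapest possible scene, one sphere and one primary ray per pixel, already means hundreds of thousands of intersection tests per frame, and real ray tracing adds bounces, shadow rays, and reflections on top of that:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return distance to the nearest hit, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2 for t,
    assuming direction is normalized (so the quadratic's a == 1).
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# One primary ray per pixel. Even at a modest 640x360 that is ~230,000
# intersection tests per frame; at 1080p it would be ~2 million, and a
# real renderer multiplies that by bounces, shadow rays, and samples.
width, height = 640, 360
hits = 0
for y in range(height):
    for x in range(width):
        # Pinhole camera: map the pixel to a normalized ray direction.
        dx = x / width - 0.5
        dy = (y / height - 0.5) * (height / width)
        norm = math.sqrt(dx * dx + dy * dy + 1.0)
        if ray_sphere_hit((0.0, 0.0, 0.0),
                          (dx / norm, dy / norm, 1.0 / norm),
                          (0.0, 0.0, 5.0), 1.0) is not None:
            hits += 1
print(f"{hits} of {width * height} primary rays hit the sphere")
```

That's why the dedicated RT cores matter: this workload scales with resolution and scene complexity in a way general-purpose shaders struggle to keep up with.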
The increased die space costs more. It's not as easy as duplicating sections of the silicon either (as AMD does with Threadripper's multi-die packages). A price increase is logical, especially if they're maintaining the performance of the previous generation and then some. We also have low GDDR6 yields and higher import taxes almost across the board... and we still have competition from miners (even though it's subdued somewhat). You can tell Nvidia tried to de-emphasize the price with their tricky tactics. (We all saw through that slide with the picture of the 2080 Ti that said "Starting at $499", Nvidia...)
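For a sense of scale, here's a back-of-the-envelope sketch of why one huge monolithic die costs disproportionately more than small ones. The wafer cost, defect density, and die areas below are illustrative assumptions, not published figures; only the Poisson yield model and the dies-per-wafer approximation are standard:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common approximation for usable dies on a round wafer."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_rate(die_area_mm2, defects_per_mm2):
    """Poisson yield model: a bigger die is far more likely to catch a defect."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

wafer_cost = 6000.0      # assumed wafer cost in dollars (illustrative)
defect_density = 0.001   # assumed defects per mm^2 (illustrative)

for area in (300, 450, 750):  # midrange-sized dies vs. a roughly Turing-sized die
    good = dies_per_wafer(area) * yield_rate(area, defect_density)
    print(f"{area} mm^2: ~{good:.0f} good dies/wafer, ~${wafer_cost / good:.0f} each")
```

Under those assumptions, a die 2.5x the area ends up costing roughly 4-5x as much per good chip, because you get fewer candidates per wafer and lose a larger fraction of them to defects.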
I'm assuming Nvidia's Volta is a similar architecture to Turing. Let's look at the Titan V, which has a similar number of CUDA cores to the RTX 2080 Ti, just without the RT cores and with a few fewer optimizations here and there (e.g. hardware support for foveated rendering). It showed at least a 30% increase in gaming performance over the Titan Xp across the board. Let's expect the performance of this card to match Volta at the very least.
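Written out as arithmetic (the 100 fps baseline is arbitrary; the 30% uplift is the across-the-board figure above):

```python
titan_xp_fps = 100.0    # arbitrary baseline frame rate in some game
volta_uplift = 0.30     # ~30% gain the Titan V showed over the Titan Xp

titan_v_fps = titan_xp_fps * (1 + volta_uplift)

# If Turing at least matches Volta core-for-core, this is the 2080 Ti's floor.
print(f"Expected floor: {titan_v_fps:.0f} fps vs. {titan_xp_fps:.0f} on Titan Xp")
```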
The increased TDP most likely comes from the much-overlooked 35 W that can be supplied through the USB-C/VirtualLink port. The TDP is likely calculated including this load, which is why it's exactly 35 W above the 250 W standard of most cards today. It's unlikely this will add much to the heat produced by the card itself, even in use. This card will likely run much cooler than the current generation just from process improvements.
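The arithmetic behind that guess, written out (250 W and 35 W are the figures from the paragraph above; whether Nvidia actually rates the board this way is speculation on my part):

```python
BASELINE_TDP_W = 250   # typical high-end board power today
VIRTUALLINK_W = 35     # max power deliverable over the USB-C/VirtualLink port

rated_tdp = BASELINE_TDP_W + VIRTUALLINK_W
print(f"Rated board power: {rated_tdp} W")  # 285 W

# Crucially, that extra 35 W would be passed through to a VR headset,
# not dissipated by the GPU, so it shouldn't show up as extra heat.
```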
If they put more RAM into the RTX 2080 Ti, it would encroach on the market for their Quadros (which have fewer CUDA cores to begin with but more RAM). Want space to do lots of data crunching? Get a Quadro. Want a lot of textures for gaming? Your 11 GB RTX 2080 Ti will have more than enough memory for a while.
(I was going to make an account here eventually since I'm building an SFF PC, but I figured this was a good time to start.)