While the goal of rendering for VR is to never drop a frame below VSYNC (and ideally never spike above the frame budget either, both through careful timing of rendering), the method for dealing with missed frames in VR is a technique originally called 'asynchronous timewarp' by Carmack and Abrash when the concept was first described for VR. (Carmack had experimented with it for Quake, but it was unnecessary and didn't work too well then.) Some companies have named it something else; Sony calls it 'asynchronous reprojection', for example.
Basically, a few milliseconds before every VSYNC, the latest rendered frame is taken from the buffer, warped based on the latest possible measurement of the user's head orientation and position (the warp is almost perfect for orientation changes, with minor but acceptable artefacts for changes in position), and then handed over to the display. Ideally, you would perform this warping as the buffer is being read out ('racing the beam') to minimise latency. If you always have a new frame ready for each refresh, this is regular old 'synchronous timewarp'. If you are synthesising a frame because a new one isn't ready yet, it's 'asynchronous timewarp'.
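To make the timing concrete, here is a minimal sketch of such a compositor loop in C++. Everything in it is illustrative: Quat, pollHeadOrientation() and nextVsync() are stand-ins for what a real tracker and display driver would provide, not any actual SDK's API.

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Minimal quaternion for head orientation (a stand-in type; real VR
// runtimes expose their own pose structs).
struct Quat { float w, x, y, z; };

Quat conjugate(const Quat& q) { return {q.w, -q.x, -q.y, -q.z}; }

Quat mul(const Quat& a, const Quat& b) {
    return {a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
            a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
            a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
            a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w};
}

// Stubbed tracker/display services; a real compositor gets these from
// the driver.
Quat pollHeadOrientation() { return {1, 0, 0, 0}; }
std::chrono::steady_clock::time_point nextVsync() {
    using namespace std::chrono;
    return steady_clock::now() + milliseconds(11); // ~90 Hz refresh
}

int main() {
    using namespace std::chrono;
    const auto warpBudget = milliseconds(2); // time reserved for the warp pass

    for (int frame = 0; frame < 3; ++frame) {
        // Sleep until a couple of milliseconds before the next VSYNC.
        std::this_thread::sleep_until(nextVsync() - warpBudget);

        // The pose the most recent frame was rendered with (stored with it).
        Quat renderPose = {1, 0, 0, 0};

        // Freshest head sample, taken just before scanout begins.
        Quat latestPose = pollHeadOrientation();

        // Delta rotation: how far the head turned since the frame was
        // drawn. A warp shader would reproject the frame by this rotation.
        Quat delta = mul(latestPose, conjugate(renderPose));

        // submitWarpPass(latestFrame, delta);  // GPU pass, then scanout
        std::printf("frame %d warped with delta w=%.2f\n", frame, delta.w);
    }
}
```

The key point is the ordering: the head pose is sampled as late as possible, and only the cheap warp pass stands between that sample and scanout.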
Another method of render time management is to change the resolution you render at dynamically from frame to frame, so 'harder' scenes render at a slightly lower resolution. Because EVERY frame gets warped before display, and thus there is no such thing as 1:1 pixel display on a VR headset anyway, the image quality hit is pretty small compared to the rendering performance gain.
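One way to drive that (my own illustration, not a description of any particular engine) is a simple feedback controller on measured GPU frame time, using the rough rule that GPU cost scales with pixel count, i.e. with the square of the resolution scale:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Sketch of frame-to-frame dynamic resolution; all names are illustrative.
struct DynamicResolution {
    double budgetMs;                // frame budget, e.g. 11.1 ms at 90 Hz
    double scale = 1.0;             // current render-resolution scale
    double minScale = 0.6, maxScale = 1.0;

    // Call once per frame with the measured GPU time of the last frame.
    void update(double gpuMs) {
        // Estimate the scale that would have hit ~90% of budget, assuming
        // GPU cost is roughly proportional to pixel count (scale squared).
        double target = scale * std::sqrt(0.9 * budgetMs / gpuMs);
        // Move gently toward the target to avoid visible oscillation.
        scale = std::clamp(scale + 0.3 * (target - scale), minScale, maxScale);
    }

    int width(int base) const { return int(base * scale); }
};

int main() {
    DynamicResolution dr{11.1};
    // Simulated GPU timings: a cheap scene, then a 'harder' one.
    for (double gpuMs : {8.0, 9.0, 13.0, 14.0, 12.0, 10.0}) {
        dr.update(gpuMs);
        std::printf("gpu %.1f ms -> scale %.2f (width %d of 2160)\n",
                    gpuMs, dr.scale, dr.width(2160));
    }
}
```

Damping the step toward the target scale matters: snapping straight to it makes the resolution visibly pump from frame to frame.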