Like every other system that streams imagery from a host PC to a smartphone for VR (e.g. TriniusVR), it's going to be a pretty abysmal experience for one reason above all others: latency.
What could be considered the 'critical number' for VR is 20ms motion-to-photons latency: the time between your head moving and the image on the screen being updated to show the correct view for that new position should be no more than 20ms.
Within that 20ms, you must sample your tracking system, perform all game engine computations (e.g. audio, character animations, physics, netcode, etc.), render the image, warp the image, scan that image out to the display, have the display drive it to the pixels, and finally those pixels start emitting photons and you stop the clock. Another way to think of it is that you have a 20ms 'budget'. You can spend this budget on rendering, or on moving the rendered image around.
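To make the 'budget' framing concrete, here's a rough back-of-the-envelope sketch in Python. The individual stage timings are invented purely for illustration, not measurements from any real headset, but they show how quickly 20ms disappears even before you add a wireless link:

```python
# Rough sketch of the motion-to-photons budget.
# Stage timings are made-up illustrative numbers, not measured values.
BUDGET_MS = 20.0

stages_ms = {
    "tracking sample":    0.5,
    "game engine update": 3.0,
    "render":             8.0,
    "warp":               1.0,
    "scanout":            5.0,  # driving the frame out to the panel
    "pixel response":     2.0,  # panel switching time
}

total = sum(stages_ms.values())
print(f"total: {total:.1f} ms of {BUDGET_MS:.0f} ms budget "
      f"({BUDGET_MS - total:+.1f} ms slack)")
```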
The problem is that WiFi has both an extremely unreliable link latency and a pretty large minimum latency. In many cases the link latency alone will be above 20ms and the system is totally nonviable from the start. But even if the latency is within 20ms, every ms you waste on link latency is a ms you lose for actual useful rendering work. For example, if you have a WiFi network in a Faraday cage (no external interference) with no clients other than the smartphone and PC (no contention), you might be able to stay reliably under 20ms. But even if you could hit 15ms, that means you have spent 15ms of your 20ms budget on transmission latency, and have left a mere 5ms for the host PC to complete all rendering tasks. Or to put it another way: to get the same level of functional performance, you need a PC roughly 4 times as powerful. Even an extremely good 10ms transmission time would still need the PC to double in performance to deliver the same image.
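The multipliers above are just the ratio of the full budget to whatever is left after the link, assuming render cost scales linearly with the time available (a simplification, but good enough for the point):

```python
# How much faster the host PC needs to be when the link eats part of the
# 20ms budget. Assumes rendering throughput scales linearly with the time
# available, which is a simplification.
BUDGET_MS = 20.0

for link_ms in (15.0, 10.0, 5.0):
    render_ms = BUDGET_MS - link_ms
    multiplier = BUDGET_MS / render_ms
    print(f"link {link_ms:>4.0f} ms -> {render_ms:>4.0f} ms left to render "
          f"-> ~{multiplier:.1f}x the PC performance needed")
```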
But that's not even the whole story. No WiFi link has sufficient bandwidth to carry uncompressed video for VR (even if your smartphone only handles 1920x1080 at 60FPS, that's about 3 gigabit/s of uncompressed 24-bit video). So you need to slap two extra stages onto the end of the rendering pipeline: video compression on the PC end and video decompression on the smartphone end. Neither is really optimised for latency, so you have two extra chunks of wasted time taken out of your 20ms budget on top of the transmission latency itself, plus a loss in image quality, when VR is already starving for every pixel you can get.
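The bandwidth figure is straightforward arithmetic, using 24 bits per pixel (8 bits per colour channel):

```python
# Uncompressed video bandwidth for the example smartphone panel:
# 1920x1080 at 60 FPS, 24-bit colour (8 bits per channel).
width, height, fps, bits_per_pixel = 1920, 1080, 60, 24

bits_per_second = width * height * fps * bits_per_pixel
print(f"{bits_per_second / 1e9:.2f} Gbit/s uncompressed")  # ~2.99 Gbit/s
```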
This is also a problem with every RF video transmission protocol available to consumers, which is why I'm extremely sceptical of the current crop of 'wireless VR' devices (e.g. the TPCast) that repackage the old SiBeam WirelessHD modules designed for remote AV receivers.
tl;dr: I wouldn't touch it, let alone spend money on it. You're fundamentally limited to it being a bad VR experience, and bad VR just isn't worth trying.