Tuesday, May 7, 2013

Notes on first person virtual reality / implementation in Unity.


I've been integrating the Oculus Rift VR goggles into my first person projects, and the process has been... interesting.

Valve's excellent introduction to working with virtual reality (based on their experience porting Team Fortress 2) is the definitive primer on working with VR in a first person context, and more or less the state of the art. However, it also ends with a powerful notion: we're only just beginning with VR, and it's going to take time before we know what we're doing. Here's a very brief summary of general design guidelines, as defined by Valve:

Stuff that feels GREAT:
  • Moving forward under player control.
  • Unrestricted head movement, 1:1 correspondence between head tracking and camera rotation. (Therefore, head tracking should have no influence over torso facing or gun aiming or anything.)
... stuff that feels OKAY:
  • Moving forward for any reason.
  • Rotating the camera slowly under player control.
  • Overlaying a HUD element in front of the player, in the middle of the viewport.
... stuff that feels QUEASY:
  • Moving quickly.
  • Interrupting or disabling head tracking.
  • Using a non-stereoscopic crosshair.
  • Using non-distorted HUD elements.
  • Overlaying information on the edges of the viewport.
... stuff that feels BAD / AWFUL:
  • Rotating the camera quickly.
  • Rolling the camera.
  • Rotating upside down.
NOTE: something feeling "bad" does NOT mean "don't do it" -- it just means that if you do it, you should do it consciously for a specific effect; otherwise it is indistinguishable from a poor interface implementation. (e.g. ragdoll decapitation death cam in TF2 vs. head rolling in a guillotine simulator.)

Now, here are some notes / my experiences with integrating the Oculus VR stuff into Unity:

Character control. The default OVRCharacterController is kind of terrible and I recommend not using it. Maybe it's okay for very preliminary prototyping... but in the end, you should definitely just roll your own.
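
For reference, here's a minimal sketch of what "rolling your own" can look like. It assumes the OVR camera rig is a child of this object and handles head tracking by itself; moveSpeed and turnSpeed are just illustrative values:

    using UnityEngine;

    // Minimal hand-rolled first person controller. The OVR camera rig is
    // assumed to be a child of this object and to do its own head tracking --
    // we only ever rotate the body, so head tracking stays 1:1 (per Valve's
    // guidelines above).
    [RequireComponent(typeof(CharacterController))]
    public class SimpleVRController : MonoBehaviour
    {
        public float moveSpeed = 3f;   // meters per second
        public float turnSpeed = 90f;  // degrees per second -- keep turns SLOW

        CharacterController body;

        void Awake()
        {
            body = GetComponent<CharacterController>();
        }

        void Update()
        {
            // Yaw the body, never the camera.
            transform.Rotate(0f, Input.GetAxis("Horizontal") * turnSpeed * Time.deltaTime, 0f);

            // Move along the body's facing; SimpleMove applies gravity for us.
            body.SimpleMove(transform.forward * Input.GetAxis("Vertical") * moveSpeed);
        }
    }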

HUD. OnGUI, GUIText, and GUITexture are now dead, assuming they weren't already. Unless you want to implement separate HUD systems for your games, you should just stick to the "HUD in world" solution -- parent a bunch of always-rendered quads to the camera, a short distance in front of it.
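
A minimal version of that setup might look like the following. The hudMaterial and distance fields are assumptions to tune, it assumes a Unity version with the built-in Quad primitive, and in practice the material wants an overlay-style shader so the quad draws on top of world geometry:

    using UnityEngine;

    // Sketch of the "HUD in world" setup: spawn a quad and hang it off the
    // camera, a couple of meters out. hudMaterial should use an overlay-style
    // shader (high render queue, no depth test) so it always renders.
    public class WorldSpaceHud : MonoBehaviour
    {
        public Camera hudCamera;
        public Material hudMaterial;
        public float distance = 2f;  // meters in front of the camera

        void Start()
        {
            GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
            Destroy(quad.GetComponent<Collider>());  // don't block gameplay raycasts
            quad.GetComponent<Renderer>().material = hudMaterial;

            quad.transform.parent = hudCamera.transform;
            quad.transform.localPosition = new Vector3(0f, 0f, distance);
            quad.transform.localRotation = Quaternion.identity;
        }
    }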

Skyboxes. Static skyboxes are now dead, because they don't appear in the depth buffer, so the Rift doesn't know how to correct for how "far" they should appear. You now pretty much have to model / texture a giant skydome mesh, and either scale it really big to minimize parallax / perspective effects, or render it really early in the render queue and hack the depth testing. Right now, I have a script that just scales it up x 100 on Awake(), but keeps it miniature when editing so I can work with it on a human scale.
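
The script itself is almost nothing -- mine is essentially this:

    using UnityEngine;

    // Skydome scaling hack: the dome stays miniature in the editor (Awake only
    // runs in play mode), then gets blown up at runtime. The factor of 100 is
    // just what works for my scene scale.
    public class SkydomeScaler : MonoBehaviour
    {
        public float runtimeScale = 100f;

        void Awake()
        {
            transform.localScale *= runtimeScale;
        }
    }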

Crosshair. I'm not a fan of the included stereoscopic crosshair... it uses OnGUI, there's a pile of stuff to configure, and it just seems over-engineered and inflexible. Just make your own simple one: a plane parented to CameraRight (don't parent HUD elements to CameraLeft; there's a slight delay in tracking / updating) that changes its local position and scale based on RaycastHit.distance.
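
Here's a sketch of that crosshair, with made-up tuning values (maxDistance, scalePerMeter) -- parent the quad to CameraRight and point eyeCamera at that same transform:

    using UnityEngine;

    // Simple depth-correct crosshair: each frame, raycast from the eye and snap
    // the quad to the hit distance, scaling it so its apparent size stays
    // constant. Runs in LateUpdate so it follows the head tracking update.
    public class SimpleCrosshair : MonoBehaviour
    {
        public Transform eyeCamera;       // CameraRight -- also this object's parent
        public float maxDistance = 50f;
        public float scalePerMeter = 0.02f;

        void LateUpdate()
        {
            float d = maxDistance;
            RaycastHit hit;
            if (Physics.Raycast(eyeCamera.position, eyeCamera.forward, out hit, maxDistance))
                d = hit.distance;

            // Pull in slightly so the quad doesn't z-fight with the hit surface.
            transform.localPosition = new Vector3(0f, 0f, d * 0.95f);
            transform.localScale = Vector3.one * d * scalePerMeter;
        }
    }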

Head tracking offset. The "YRotation" variable in OVRCameraController.cs is very useful: it sets a "parent offset" for the head tracking. I found the GetYRotation() and SetYRotation() accessors cumbersome, so I just changed the YRotation variable from private to public.
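
With the variable public, layering a body-turn offset under the head tracking is basically a one-liner. A sketch, assuming YRotation is a yaw angle in degrees and cameraController points at the scene's OVRCameraController:

    using UnityEngine;

    // Body turning via the head tracking "parent offset". The head tracking
    // itself stays untouched and 1:1 on top of this yaw.
    public class BodyTurn : MonoBehaviour
    {
        public OVRCameraController cameraController;
        public float turnSpeed = 90f;  // degrees per second

        void Update()
        {
            cameraController.YRotation +=
                Input.GetAxis("Horizontal") * turnSpeed * Time.deltaTime;
        }
    }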

Unity workflow / testing. Stretch or extend your desktop to include the Rift, then drag the Unity "Game" tab onto the Rift's desktop and maximize it. You can now test in-editor, which is much more convenient than building a standalone version each time you want to test something.