I'm making a small Thief-like with an integrated level editor (3rd time's the charm!) to test (and teach myself) four main feature-sets of the Unity 5 beta -- physically-based shading, integrated image-based lighting, real-time global illumination, and the UI system that was introduced late in Unity 4.6 but might as well have shipped in Unity 5. Here's my opinion so far:
- Physically-based shading (PBS / PBR) is decent once you start internalizing the new workflow / new way of thinking about materials. Unity needs to do a lot more education with this thing; right now you have to do a lot of digging on random websites to figure out how exactly the various parts of PBS work. Pop quiz -- how does a "metalness map" work? (Hint: you just paint the bits that are metal 100% white, paint the bits that aren't metal 100% black, and stick it in the red channel.) It's symptomatic of the wider problem with PBS in game dev as a whole: it's still mostly early adopters, and the documentation and resources are fragmented. "What is the roughness value of iron? Well, this is what some Unreal 4 devs thought... but wait, Unity uses 'smoothness' instead, so do I invert the value? But why does albedo matter so much? I thought metalness de-emphasizes albedo..." Everyone is pretending that they're talking about the same thing, and conceptually they are -- but in technical terms, they're not.
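For what it's worth, here's my mental model of what the metallic value actually does downstream, as a shader-style sketch (this is the standard metallic-workflow math, not Unity's actual Standard shader code): metalness mostly *re-routes* albedo rather than discarding it.

```c
// Sketch of the metallic workflow -- not Unity's actual code.
// Dielectrics (metallic = 0) get a fixed ~4% specular and keep albedo as diffuse;
// metals (metallic = 1) tint their specular with albedo and lose their diffuse.
half3 dielectricSpec = half3(0.04, 0.04, 0.04);
half3 specColor = lerp(dielectricSpec, albedo, metallic);
half3 diffColor = albedo * (1.0 - metallic);
```

So albedo still matters a lot on metals -- it just ends up coloring the reflections instead of the diffuse term.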
What worries me most about PBS is that it seems really difficult to mod the "standard shader" model. If you look at the built-in shader source, it's full of arcane #pragma directives and Cg includes, and there's very little actual drawing code in there. Maybe the actual code is so complex they had to break it up like that... but what if I just want to make a really simple triplanar PBS shader? How do I even approach that? I have a feeling the legacy Lambert / Blinn-Phong models probably aren't going away anytime soon.
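The approach I'd try (untested sketch, assuming Unity 5's surface shaders still work the way they did in 4.x): skip the Standard shader source entirely and write a *surface* shader with the `Standard` lighting model, which lets Unity generate all the #pragma/include plumbing for you while you only fill in albedo / metallic / smoothness.

```shaderlab
// Hypothetical minimal triplanar PBS surface shader -- a sketch, not production code.
Shader "Custom/TriplanarStandard"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        _Metallic ("Metallic", Range(0,1)) = 0.0
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }

        CGPROGRAM
        // "Standard" tells Unity to plug surf() into the PBS lighting model.
        #pragma surface surf Standard fullforwardshadows
        #pragma target 3.0

        sampler2D _MainTex;
        half _Glossiness;
        half _Metallic;

        struct Input
        {
            float3 worldPos;
            float3 worldNormal;
        };

        void surf(Input IN, inout SurfaceOutputStandard o)
        {
            // Blend three world-axis planar projections by the world-space normal.
            float3 blend = abs(IN.worldNormal);
            blend /= (blend.x + blend.y + blend.z);

            fixed4 cx = tex2D(_MainTex, IN.worldPos.yz);
            fixed4 cy = tex2D(_MainTex, IN.worldPos.xz);
            fixed4 cz = tex2D(_MainTex, IN.worldPos.xy);
            fixed4 c = cx * blend.x + cy * blend.y + cz * blend.z;

            o.Albedo = c.rgb;
            o.Metallic = _Metallic;
            o.Smoothness = _Glossiness;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```

If that works, it sidesteps the arcane includes completely -- you never touch the BRDF code at all.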
- Image-based lighting (IBL) is basically the old Half-Life 2 / Source Engine approach of placing panoramic cubemap probes everywhere so that nearby models "catch" them and become shiny. The Unity implementation is a bit better than Source's because it blends between different reflection probes; the cubemap popping in Source was kind of annoying. This is all conceptually similar to the existing Light Probe system in Unity. There's also the option to re-render reflection probes in real time for easy mirrors and whatnot, which sounds incredibly expensive to me. Anyway, this pretty much works the way I'd expect it to work, and it resembles my own cubemap system that I coded a year and a half ago.
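One way to dodge that expense, I think, is to only re-render a real-time probe on demand instead of every frame. A hedged sketch (the `ProbeRefresher` class and trigger setup are my invention; it assumes a ReflectionProbe set to Realtime mode with its refresh mode set to "Via scripting" in the inspector):

```csharp
using UnityEngine;

// Hypothetical example: re-render a reflection probe once when the player
// enters its room, rather than refreshing the cubemap every frame.
public class ProbeRefresher : MonoBehaviour
{
    public ReflectionProbe probe;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            probe.RenderProbe(); // queues a cubemap re-render
    }
}
```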
- Real-time global illumination (GI) uses the "Enlighten" middleware, as seen in Dragon Age: Inquisition, etc., and replaces the old Beast lightmapper, even for baked static lightmaps. In theory, this should be really cool, allowing really nice light bounce effects as you adjust lights in real time... in practice, there are severe limitations on how you can use it, especially in its half-finished state in this beta. The "light systems" precompute is very, very slow and makes iteration basically impossible during level construction, and the real-time GI currently only supports bounces from directional lights, no other light type. It's also debatable whether this is actually "real-time" when you have to precompute so much data offline. Anyway, this thing is really disappointing and I consider it kind of unusable, for now. What's been more useful to me is the Valve approach to lighting -- standardize some light prefabs, place them in your level, and come back to lighting later when you're done with the rest of your game.
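To be fair, the one directional-light case does buy you something: a runtime sun cycle with updating bounce light. A sketch of what I mean (assuming the slow precompute has already run on your static geometry, and the sun is a Realtime directional light):

```csharp
using UnityEngine;

// Sketch: rotate a directional "sun" at runtime and let Enlighten update
// the bounced lighting. No extra API calls needed -- the precomputed
// realtime GI picks up the change, though the bounce lags a few frames.
public class SunCycle : MonoBehaviour
{
    public Light sun;                  // directional light, set to Realtime
    public float degreesPerSecond = 1f;

    void Update()
    {
        sun.transform.Rotate(degreesPerSecond * Time.deltaTime, 0f, 0f, Space.World);
    }
}
```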
- The 4.6 UI system was supposedly lead-designed by the guy who made the popular nGUI plugin. (This was after Unity tried and dropped Scaleform, a UI middleware popular in AAA that requires you to author all your UI in Flash (!)... I'm really glad they dropped it.) If you're familiar with a lot of old OnGUI or nGUI concepts, then transitioning is fairly straightforward. The new Events system is also surprisingly robust, with possible uses outside the UI system. Dynamic instantiation of UI elements still requires you to manage your own objects and pooling, and it's still a pain in the ass -- which was probably the only reason to use OnGUI before... anyway, the most important thing for you to know is that to detect whether the cursor is over a UI element, call EventSystem.current.IsPointerOverGameObject() -- there, I just saved you 30 minutes of googling, you're welcome.
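Here's that call in context (the `ClickHandler` class is just a made-up example; the point is that it's an instance method, reached through the `EventSystem.current` singleton):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Example: ignore in-world clicks when the pointer is over any UI element.
public class ClickHandler : MonoBehaviour
{
    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            if (EventSystem.current != null &&
                EventSystem.current.IsPointerOverGameObject())
                return; // pointer is on the UI -- don't also fire the world click

            // ... handle the in-world click here ...
        }
    }
}
```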