Thursday, May 30, 2013

Design futures: AutoBrushes, levels that build themselves, and the politics of procedurality.



If Bethesda's detailed GDC presentation about modular level design kits for Fallout / Skyrim showed me anything, it's that modularity is actually a huge pain in the ass -- and not the good kind of ass pain either. Why should we keep building 3D levels in this slow, totally unnecessary way, with a workflow that's at least a decade old?

I remember a time when level design was slightly faster, and that time was the time of the brush. What if we could combine the benefits of modularity (variety / adaptability / abstracting detail out of design) with the benefits of a brush-based workflow (simplicity / speed / focus on "platonic forms" of design)?

Tuesday, May 28, 2013

Cubemapped Environment Probes: Source Engine-style cubemap implementation in Unity 4 Pro


I wanted a static baked-in solution for doing cubemap-based reflections in Unity. Using cubemaps instead of (or with) traditional Blinn-Phong specular is great for games if (a) the light sources / environments stay static, and if (b) the player's camera will frequently be close enough to see small surface details. If that sounds like your game, maybe baked cubemaps are the way to go for you.
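To sketch what the baking step might look like (the wizard class, menu path, and field names here are my own invention; Camera.RenderToCubemap is the actual Unity 4 Pro call), an editor script can render the scene into a cubemap asset from a chosen probe position, much like placing an env_cubemap entity in Source:

```csharp
// Editor-only sketch: bake the scene into a cubemap asset from a probe
// position, roughly in the spirit of Source's env_cubemap entities.
// Assumes Unity 4 Pro (Camera.RenderToCubemap is a Pro-only feature)
// and that this file lives in an Editor folder.
using UnityEngine;
using UnityEditor;

public class CubemapProbeBaker : ScriptableWizard
{
    public Transform probePosition;  // where the "probe" sits in the scene
    public Cubemap targetCubemap;    // the cubemap asset to bake into

    [MenuItem("Tools/Bake Cubemap Probe")]
    static void CreateWizard()
    {
        ScriptableWizard.DisplayWizard<CubemapProbeBaker>("Bake Cubemap Probe", "Bake");
    }

    void OnWizardCreate()
    {
        // Render all six faces from a temporary camera at the probe position.
        GameObject go = new GameObject("TempCubemapCamera");
        go.transform.position = probePosition.position;
        Camera cam = go.AddComponent<Camera>();
        cam.RenderToCubemap(targetCubemap);
        DestroyImmediate(go);
    }
}
```

From there, a reflective material can sample the baked asset through its cubemap slot -- for example, Unity's built-in Reflective shaders expose a "_Cube" texture property you can assign it to.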

Friday, May 24, 2013

Thoughts on VR aesthetics



The current working standard for first person games is Valve's VR implementation in Team Fortress 2: the player uses the mouse to move an on-screen reticule, and if the reticule leaves the middle "dead space" screen area, the player's torso rotates to follow it. Head tracking does not change where you're aiming -- and outside of giving you peripheral vision, it is somewhat meaningless within the context of the game.
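For concreteness, here's a minimal sketch of that dead-zone aiming model in Unity, yaw only -- the class name, angle values, and sensitivity are my own assumptions, not Valve's actual implementation:

```csharp
using UnityEngine;

// Sketch of TF2-style "dead zone" aiming: the mouse moves a reticule freely
// inside a central screen region, and only when the reticule pushes past the
// region's edge does the torso rotate to follow. Head tracking (not shown)
// would rotate the camera independently of all this. Drawing the reticule
// itself is omitted.
public class DeadZoneAim : MonoBehaviour
{
    public float deadZoneAngle = 15f;   // half-width of the dead zone, in degrees
    public float mouseSensitivity = 2f;

    float reticuleYaw = 0f;             // reticule offset from the body's forward

    void Update()
    {
        reticuleYaw += Input.GetAxis("Mouse X") * mouseSensitivity;

        // If the reticule leaves the dead zone, rotate the torso to catch up
        // and clamp the reticule back to the zone's edge.
        if (Mathf.Abs(reticuleYaw) > deadZoneAngle)
        {
            float overflow = reticuleYaw - Mathf.Sign(reticuleYaw) * deadZoneAngle;
            transform.Rotate(0f, overflow, 0f);
            reticuleYaw -= overflow;
        }
    }
}
```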

Is that the best VR implementation we can do? To render it meaningless?

Right now, the Oculus Rift exists mainly in this realm of performance art -- where most of the interesting stuff happens when watching other people use the Rift, and imagining what they see and what their experiences are. The vast majority of videos out there focus on the player instead of the game ("This is the future of gaming. It looks like a dog trying to escape from under a duvet."), and the best Rift games focus on the intersection between virtual and non-virtual, like players who physically kneel on the ground to play the guillotine sim Disunion. I think much of this dynamic is informed by the growing dominance of Let's Play (LP) culture... which is to say that the "best players" are now the ones who can "perform" the game in the most compelling way and reveal aspects of the game we didn't notice before -- a context that usually exists outside of getting high scores / headshots. What it means to be "good at a game" is slowly shifting from sport to sport-as-theater.

Virtual head animation has never been more human and true. We are all now cinematographers who can directly share our fields of vision in extremely subtle ways; the act of looking is now the most expressive input in video games today. And right now, the "best VR standard" is rendering it meaningless?

Don't forget that the Rift isn't just a display -- it is also a controller. Let's do stuff with it.

Tuesday, May 21, 2013

Post-partum: teaching Unity.

Here are some reflections on my first semester teaching Unity at an art and design school, to a mixed undergrad / grad class. They take the form of rough guidelines that I'm giving myself for next semester:

» Don't assume mastery of coding fundamentals. Some students will be able to formulate and solve abstract problems with little help, while other students will need to be taught the difference in syntax between variables and functions, repeatedly. Course prerequisites, even if enforced by your school's registrar (at Parsons, they usually aren't), are no guarantee of mastery. In my class, I put a code comprehension section on the midterm exam, only to realize that some students didn't understand nested for() loops, which implies they didn't fully grasp how unnested for() loops work either; but it was the midterm, and it was already too late. Some students didn't know how to use AND or OR, and some didn't understand scoping. I should've caught these problems earlier instead of unintentionally punishing students for them.
Recommendation: In the first or second week, conduct a diagnostic quiz that asks code comprehension questions (something like the sketch below), and assess which students need which kinds of help.
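As one possible diagnostic question -- the MonoBehaviour wrapper is just a hypothetical harness so it runs in Unity -- ask students to trace the output by hand:

```csharp
using UnityEngine;

// "What does this print, and how many times does the inner loop body run?"
// One small question quietly tests nested for() loops, AND logic, and scope.
public class DiagnosticQuiz : MonoBehaviour
{
    void Start()
    {
        for (int i = 0; i < 3; i++)
        {
            for (int j = 0; j < 2; j++)
            {
                if (i > 0 && j == 1)
                    Debug.Log(i + "," + j);   // prints "1,1" then "2,1"
            }
        }
    }
}
```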

» Cover vector math, every day. Do light drilling. Even students with significant code experience may have trouble conceptualizing how vectors work together / what vector operations do, especially in 3D. I don't think I'd impose grade school drilling, like worksheets with 50 problems and 1 minute to solve them all, but a few minutes of short drills every day or week will help a lot.
Recommendation: At the start and end of each class, do some vector math problems together as a class. Practice thinking about vectors in multiple modes: visually as spatial coordinates, abstractly as sets of numbers, and procedurally as variables in code -- see the sketch below.
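For instance, a single drill can hit all three modes at once; the class name and numbers here are mine, but the Unity calls are standard:

```csharp
using UnityEngine;

// One vector, three modes: a set of numbers, a spatial length/direction,
// and a variable that code can operate on.
public class VectorDrill : MonoBehaviour
{
    void Start()
    {
        Vector3 a = new Vector3(3f, 0f, 4f);     // abstractly: just three numbers
        Debug.Log(a.magnitude);                  // 5 -- the Pythagorean distance
        Debug.Log(a.normalized);                 // same direction, length 1
        Debug.Log(Vector3.Dot(a.normalized, Vector3.forward));  // 0.8 -- "how much do these directions agree?"
    }
}
```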

» Teach Maya and Unity in parallel. I front-loaded the syllabus with Unity stuff and only started Maya in the second half of the course. I think this was a mistake: we ended up with a 2 week stretch where students did very little code and focused on Maya, and it felt like we were moving "backwards." I should've paced the class better to prevent this dead time.
Recommendation: When teaching the basics of 3D transformations in Unity, also teach the basics of 3D transformations in Maya, and emphasize the many similarities in interface and project organization: scene hierarchies, hotkeys, lights, materials handling, etc.

» Don't teach coroutines. I tried to teach them early in the course, and they ended up confusing a lot of people. Personally, I use coroutines constantly because I find them really useful for timing things... but maybe I shouldn't have projected my own practices onto my students.
Recommendation: Teach the use of timer variables and bookkeeping variables with Time.time instead, as in the sketch below. It is sometimes worse practice, but it is a more immediately intuitive way of timing things, and it reinforces the fundamentals of controlling logic flow.
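Here's the kind of pattern I mean -- a coroutine-free timer; the door behavior and names are just a hypothetical example:

```csharp
using UnityEngine;

// Timing with bookkeeping variables and Time.time instead of a coroutine.
// Clunkier, but every step of the logic flow is visible in Update().
public class DoorTimer : MonoBehaviour
{
    public float openDuration = 3f;
    float timeOpened = -999f;    // bookkeeping: when did we last open?
    bool isOpen = false;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.E) && !isOpen)
        {
            isOpen = true;
            timeOpened = Time.time;   // stamp the moment we opened
        }

        // Close again once enough time has passed since the stamp.
        if (isOpen && Time.time > timeOpened + openDuration)
        {
            isOpen = false;
        }
    }
}
```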

» End with procedural mesh generation / mutation? I really want this to be an "a-ha" moment of the course -- when students realize that everything is just a different way of storing data, and artists are just people who can figure out how to get the data looking and performing the way they want. Considering the emphasis on 3D, I think this is a much more coherent endpoint than my previous emphasis on AI and behavior.
Recommendation: If students have been working in Maya for a while and they understand for() loops, they might be ready to iterate through arrays of mesh data -- maybe implementing some Perlin noise displacement or a simple sculpting tool, along the lines of the sketch below.
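As a sketch of that endpoint exercise -- the component name and noise values are mine; Mathf.PerlinNoise and the Mesh API are standard Unity -- push every vertex outward along its normal by a little noise:

```csharp
using UnityEngine;

// Treat a mesh as arrays of data and mutate it: displace each vertex
// along its normal by 2D Perlin noise. Assumes a MeshFilter is attached.
[RequireComponent(typeof(MeshFilter))]
public class NoiseSculpt : MonoBehaviour
{
    public float noiseScale = 1f;
    public float noiseStrength = 0.25f;

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;   // .mesh gives us an editable copy
        Vector3[] verts = mesh.vertices;
        Vector3[] normals = mesh.normals;

        for (int i = 0; i < verts.Length; i++)
        {
            float noise = Mathf.PerlinNoise(verts[i].x * noiseScale, verts[i].z * noiseScale);
            verts[i] += normals[i] * noise * noiseStrength;
        }

        mesh.vertices = verts;         // write the mutated array back
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
    }
}
```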

This summer, I'm going to try to put these ideas into practice when teaching 6 week Unity intensives at NYU Game Center. Feel free to check up on us on the class GitHubs (session 1) / (session 2).

Wednesday, May 15, 2013

On focalization, and against convenient understandings of immersion / flow.


This post is significantly changed from a talk I gave at Different Games. It was prompted by Jon Stokes approaching me and helpfully telling me that my talk made no sense, so hopefully this post will be more clear. SPOILER WARNING: I spoil Brendon Chung's excellent Thirty Flights of Loving.

As a self-proclaimed developer of "personal games", one thing that puzzles me about these games and empathy is that no one really knows how emotional transfer between players and games works -- like, what's really happening when you control a character in a game? Do you sympathize directly with the narrative situation, or are you role-playing, or do you think more in strategic terms, or what's going on? These words -- flow, immersion, empathy, role-playing -- how much do they really explain or predict how we, as humans, experience video games?

There's very little actual research on this because, I think, the game industry isn't interested in funding it and finding out. About the only researcher I've heard of is Jonas Linderoth, and he argues for severe skepticism -- that games don't actually teach you anything outside of games -- which isn't something the game industry would want people to hear.

Tuesday, May 7, 2013

Notes on first person virtual reality / implementation in Unity.


I've been implementing the Oculus Rift VR goggles into my first person projects, and the process has been... interesting.

Valve's excellent introduction to working with virtual reality (per their experiences in porting Team Fortress 2) is the definitive primer to working with VR in a first person context, and more or less the state of the art. However, it also ends with a powerful notion: we're only just beginning with VR, and it's going to take time before we know what we're doing. Here's a very brief summary of general design guidelines, as defined by Valve:

Thursday, May 2, 2013

Game development as drawing; gesture, iteration, and practice.


(NOTE: There are sketches of nude human figures in this post, with their anatomy intact.)

If you ask any great AAA game artist about the single most important thing you can do to get better at art, they'll probably start mumbling about "foundation." Photoshop, Maya -- these are just newfangled versions of pencils or paintbrushes or clay. They won't really teach you how to paint, or how to sculpt, or how to look at things and represent them. In this way, a bit of traditional, non-digital fine arts education can sometimes be an extremely useful tool.

In the pretty casual drawing class I took -- 12 weeks, 2 hours a week -- the teacher presented two ways of thinking about drawing: