Monday, December 24, 2012

Amnesia Fortnight design notes / analysis: "Autonomous"

I played the Double Fine Amnesia Fortnight prototypes without watching their pitches or videos or reading anything at all about them, so my descriptions / genre framing might be different from the "official" language used. MECHANICS SPOILERS BELOW...

Autonomous is a first person game where you build and "program" robot NPCs to battle hostile NPCs / mine resources for you.

There are some very smart design choices in the robot programming and the affordances of each robot part, once you finally figure out how it all works and what each type is good at. They could've implemented a pseudo-language of modular triggers and verbs (probably the most straightforward, obvious way, like the perpetually clumsy "gambit" systems of Final Fantasy XII or Dragon Age: Origins) but instead they opted for more designed constraint: preset trigger categories, each with 3 possible verbs to choose from.

Some robot heads have some verbs, others have other verbs, e.g. a "sentry" head probably doesn't have the Idle: Walk or Idle: Wander verbs. This creates meaningful variety / specific robot roles while designing for emergent behaviors. (It kind of reminds me of the late MIT-Gambit's robot ecosystem prototype "Robotany", which had a more novel graph-based programming interface that was much less discrete and allowed for visual debugging.)
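To make the constraint concrete, here's a minimal sketch of that kind of system -- fixed trigger slots, a short verb menu per slot, and each head type exposing only a subset. All of the names and verb lists below are my own invention for illustration, not the game's actual data:

```python
# Hypothetical sketch of a constrained trigger/verb system. Each head type
# maps the trigger slots it supports to the verbs it affords; a robot's
# whole "program" is just one chosen verb per trigger slot.
HEAD_VERBS = {
    "sentry": {                      # a sentry stays put: no walking verbs
        "idle":      ["stand"],
        "see_enemy": ["shoot", "alarm", "flee"],
    },
    "scout": {
        "idle":      ["walk", "wander", "stand"],
        "see_enemy": ["flee", "alarm", "shoot"],
    },
}

def build_robot(head, program):
    """Validate a program (trigger -> chosen verb) against the head's affordances."""
    for trigger, verb in program.items():
        allowed = HEAD_VERBS[head].get(trigger, [])
        if verb not in allowed:
            raise ValueError(f"a {head} head has no {trigger}:{verb} verb")
    return {"head": head, "program": program}

scout = build_robot("scout", {"idle": "wander", "see_enemy": "flee"})

try:
    build_robot("sentry", {"idle": "walk"})   # sentries can't walk
except ValueError as e:
    print(e)   # the menu itself rules this loadout out
```

The point of the design is in that last line: because the verb menu is per-head rather than free-form, nonsensical loadouts mostly can't be expressed in the first place.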

I couldn't imagine a use case where you'd want a robot to run into a wall and do nothing, yet that is an option. I wonder whether there's value in letting the player fail and make bad robot designs -- but if they're obviously bad, then why leave those options in?

And unfortunately, the interface doesn't really work for this type of thing. The best they could do in this first person game was a modal cursor-based toggle menu. It's not smooth. It's impossible to use in combat -- but that's exactly when you'd need to watch your robots do their thing to understand the behaviors and debug the loadout and design! So it seems like, in its own way, it discourages you from getting feedback or iterating, or at least makes it really inconvenient.

It feels like stacking a tower of grapefruits sometimes -- needlessly frustrating to build robots while you're under fire, or when your robots are stuck in a chokepoint and keep dying, forcing you to rebuild them with the same tedious process over and over. Ironically, there's very little automation in the actual building process, which undercuts the promise of the concept. It needs more abstraction, more recursion, more computer science: there should be a robot type that builds other robots.

Then your prized robots wander off, and because all the scenery looks the same, you lose them and never find them again -- and their camera feed ends up being pretty useless for figuring out their whereabouts.

I also wish it displayed robot part names / stats on hover, and that the frob highlight were more prominent and forgiving.

I also wish robots, by default, faced the direction you're facing rather than towards you. A robot's initial facing (especially if it uses the "turn 90 degrees upon bump" behavior) is very, very important: it's the difference between a robot that robustly explores and attacks and one that wanders in a circle the whole time.
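To see why facing matters so much, here's a toy grid simulation (my own sketch, nothing from the actual prototype) of a robot that walks forward one cell per tick and turns 90 degrees clockwise when it bumps into a wall. The two runs differ only in starting facing:

```python
# Toy sketch of a bump-turn robot in a corridor. '#' is wall, '.' is open.
GRID = [
    "##########",
    "#........#",   # open cells at x = 1..8, y = 1
    "##########",
]

# (dx, dy) per facing, listed in clockwise order (screen coords, y down)
ORDER = ["E", "S", "W", "N"]
DELTA = {"E": (1, 0), "S": (0, 1), "W": (-1, 0), "N": (0, -1)}

def cells_reached(x, y, facing, ticks):
    """Count unique cells visited in `ticks` ticks of the bump-turn program."""
    visited = {(x, y)}
    for _ in range(ticks):
        dx, dy = DELTA[facing]
        nx, ny = x + dx, y + dy
        if GRID[ny][nx] == "#":
            # bumped: spend this tick turning 90 degrees clockwise
            facing = ORDER[(ORDER.index(facing) + 1) % 4]
        else:
            x, y = nx, ny
            visited.add((x, y))
    return len(visited)

# Same spot, same program -- only the initial facing differs.
print(cells_reached(1, 1, "E", 4))  # facing down the corridor -> 5 cells
print(cells_reached(1, 1, "W", 4))  # facing the wall behind it -> 3 cells
```

Even in this deterministic toy, the wrong initial facing burns the opening ticks just turning in place; in the actual game, with stochastic wandering and enemies shooting at you, that head start (or lack of one) compounds.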

... These are mostly some very solvable interface issues though.

The bigger design problems now, I think, are more conceptual: (1) how do you make maintaining a robot ecosystem less tedious and encourage more player improvisation, rather than rebuilding the same robots over and over in some random nook, and (2) why does this have to be first person -- is the perspective worth such a bad robot-building interface (assuming you want to keep it similar)? The level design has these towers that seem to gesture toward perspective and line of sight, but right now their main function is to let you survey the maze layout and figure out where your robots have wandered off to. (That's another thing: your robots wander off, disappear, and die, and you never know what happened to them.)

Design suggestions:
  • try some kind of multi-camera "Space Hulk"-style interface... if it works, great; if it doesn't, oh well. I think this would work because you want to take better advantage of the first person conceit while letting the player collect data from a safe place.
  • a HUD indicator (maybe on their first person cam viewport) that shows each robot's fulfilled trigger condition and the action they're taking (if you expect us to debug, then you need to give more feedback about the parser... the raytraced line of sight lasers are a good start, but not enough)
  • many more unique landmarks for wayfinding in the level... your setting is a 1980s holographic space magic world, you can literally put any mesh floating anywhere and it'll fit
  • overhaul the robot construction interface / process; merge the torso types (the least interesting part type to use) into the head types, making heads the base component -- they're the most important thing to start from.

I think there's value in a game where you're paranoid about the robot parts, send them off and have no idea what happens to them ("they grow up so fast!"), which leaves you fumbling around in robot design -- maybe in more of a survival-horror bent -- but I don't think Autonomous wants to be that game.

For the purposes of AF though, I think this was a successful experiment and proves the viability of this concept. I'd just take more risks with the interface.

(Hello, Double Fine intranet...)