Showing posts with label resources.

Thursday, March 16, 2023

Double Fine PsychOdyssey recaps / viewing guide, episodes 01-17


Last month, game industry documentary makers 2 Player Productions debuted Double Fine PsychOdyssey, a massive 32-part YouTube game dev documentary series chronicling the development of Psychonauts 2 from its earliest glimmers of pre-production in 2015 to its final release in 2021.

I assumed it was mostly for fans, but after watching all 32 episodes (on 2x speed, skipping some parts) I've changed my mind: now I think it's essential viewing for all game designers / devs. It shows the everyday work of medium-scale commercial game dev in unprecedented detail: the creative high of successful collaboration as well as the ugly prototypes, grueling bug fixes, and painful miscommunication. There's also a thrill of access, where the camera captures vulnerable moments it wasn't quite supposed to see. The most epic public post-mortem ever.

As a public service, I've written a short text summary and some notes for each episode. This recap post / viewing guide covers only the first half of the series (episodes 01-17) and I'll try to write up the second half later.

SPOILER WARNING: obviously, these recaps spoil what happens in each episode.

Friday, April 9, 2021

Getting started with HaxeFlixel in 2021

Warning: this is a fairly technical game developer-y post. If you came here for gay sex, I'm sorry.

For an upcoming project commission, I'm making a 2D game with crowd simulation and simple controls that works well on mobile browsers. (Reminder: for iOS, that means WebGL 1.0 and no WASM.) The engine should be able to render and simulate 200+ lightweight game objects -- frame-animated sprites with simple collision, no fancy physics or shaders.

Which game engine should I use to maximize ease of learning and compatibility, and manage hundreds of simple objects on-screen? Here was my thought process:

  • Unity WebGL: way too heavy and slow for mobile browsers, and maybe overkill for a no-physics 2D game anyway. (Although the Lil Nas X 3D twerking game runs surprisingly well on iOS's WebGL 1.0, I wonder how much they had to optimize?)
  • Unity Project Tiny: as far as I can tell, Project Tiny and its DOTS dependency are still in early development. The random caveats and various in-dev inconsistencies with regular Unity would also be frustrating. And as with many other Unity side projects, its long-term future feels really hazy.
  • Construct: seems ok, and I think I could've gotten used to the visual block scripting, but overall the pricing and licensing feels weirdly restrictive. I have to pay to use more than 2 JS files? I have to pay to use more than 1 font, or make an animation more than 5 seconds long? These are some really bizarre artificial resource limits.
  • Phaser: seems popular enough with decent TypeScript support, but I want the option of building out to a native executable without a weird Electron wrapper or something. Their monetization model (free open source base but you pay for "premium plugins" and tools) is one of the more generous ways to go about this, I get it, but it still feels weird to me and reminds me of Construct.
  • Godot: I've wanted to try Godot for ages, but in the end I felt like I didn't have a good sense of what its HTML5 Web export could do + learning enough of the "Godot way" and GDScript would've taken a while. It's also in the middle of a big break between v3.0 and v4.0, and ideally I'd like to wait until like v4.2 to commit to learning it.
  • Heaps: promising and some people get great results with it, but maybe still too early in its public lifecycle for a total newbie like me, with not enough samples / docs / robust official tutorials to learn from yet. If or when I do try out Heaps, I'll probably try using Deepnight's gameBase project template.

In the end, I chose to build this particular project with HaxeFlixel. This post details my early impressions, thoughts, confusion, advice, etc. from learning it.

Thursday, March 12, 2020

Living in interesting times

Hello all. It's 2020. The world feels... different. Hopefully you're all doing OK!

A recap of what I've been up to --

In these days of social distancing, remote classes, and quarantines, I taught my class about streaming on Twitch... by streaming the class on Twitch. Some writeups:

I'm also getting into Quake 1 mapping. The modern tools are great, the video tutorials are on point, and the community is lovely. Come join us. I recommend Andrew Yoder's comprehensive guide for getting started.


Until next time...
-- R

Monday, November 11, 2019

Practical primer to using Unity Timeline / Playables


I recently used Unity Timeline to do cutscenes in a game. Once you figure out how to use it, it works great, but that learning curve of expectations and setup is pretty annoying.

To review: Timeline is a sequencing tool. It's good for higher-level logic where you need to coordinate a bunch of objects at once. For many games, that usually means choreographing cutscenes or sequences. Many different engines and toolsets have sequencer tools and they all generally have the same workflow -- you make tracks and you put actions on those tracks. (see also: UDK Matinee, UE4 Sequencer, Source 1 Faceposer, Witcher 3's cinematic tool)

Note that Timeline is not an animation tool; it's higher level than that. Think of it like a movie director: it coordinates animation, audio, characters, and FX together, but doesn't actually make or process those assets.

In this intro workflow post, I'll start with SETUP TIMELINE, then SETUP DIRECTOR and MAKE CUTSCENES and CONTROL THE DIRECTOR VIA C# SCRIPT, and lastly how to MAKE CUSTOM TIMELINE TRACKS.
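To give a rough sense of what the C# control part looks like, here's a minimal hedged sketch of driving a Timeline from script via PlayableDirector -- the class name and structure below are placeholders for illustration, not code from this post:

    using UnityEngine;
    using UnityEngine.Playables;

    // Hypothetical example: play a Timeline cutscene on demand and react when it ends.
    public class CutsceneTrigger : MonoBehaviour
    {
        public PlayableDirector director;   // assign the Timeline's PlayableDirector in the inspector

        void Start()
        {
            // "stopped" fires when the timeline finishes playing or is stopped manually
            director.stopped += OnCutsceneFinished;
        }

        public void PlayCutscene()
        {
            director.time = 0;   // rewind to the beginning
            director.Play();     // start evaluating the timeline
        }

        void OnCutsceneFinished(PlayableDirector d)
        {
            Debug.Log("cutscene finished -- hand control back to the player here");
        }
    }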

Monday, March 25, 2019

new Unity tool: Bobbin


I wanted to be able to write game dialogue in Google Docs (from my phone or tablet, or to share with external collaborators) and then automatically send those changes into the Unity project on my laptop.

To help me do that, I made a free open source tool called Bobbin, which is a relatively simple Unity Editor plugin that can automatically download the data at URLs, and import that data as a file in your Unity project. Again, it's very simple: every X seconds, it opens a list of URLs (as if it were a web browser) and then it saves all the bytes as a .txt, .csv, .png -- or, in theory, whatever file type you want. Note that this is just an automated download manager; you will still need to write your own game code to actually import, process, and use these files in your game.
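To illustrate the general idea, here's a rough sketch of what that kind of periodic editor-side download can look like. This is not Bobbin's actual source code; the URL, path, and class name are placeholders (and the script would live in an Editor folder):

    using System.IO;
    using UnityEditor;
    using UnityEngine;
    using UnityEngine.Networking;

    // Rough sketch of the core idea (not Bobbin's actual code): every so often,
    // download a URL and save the bytes as a file inside the Unity project.
    [InitializeOnLoad]
    public static class SimpleUrlSync
    {
        const string url = "https://example.com/dialogue.txt";   // placeholder URL
        const string savePath = "Assets/Data/dialogue.txt";      // placeholder path
        const double intervalSeconds = 60.0;
        static double nextCheckTime;

        static SimpleUrlSync()
        {
            EditorApplication.update += Poll;
        }

        static void Poll()
        {
            if (EditorApplication.timeSinceStartup < nextCheckTime) return;
            nextCheckTime = EditorApplication.timeSinceStartup + intervalSeconds;

            var request = UnityWebRequest.Get(url);
            var op = request.SendWebRequest();
            op.completed += _ =>
            {
                Directory.CreateDirectory(Path.GetDirectoryName(savePath));
                File.WriteAllBytes(savePath, request.downloadHandler.data);
                AssetDatabase.ImportAsset(savePath);   // tell Unity to reimport the updated file
                request.Dispose();
            };
        }
    }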

The main audience for this tool is narrative designers, writers, localizers / translators, and designers / developers who need something fast and lightweight for syncing files with external collaborators. I imagine it also pairs well with text-based narrative plugins like Yarn Spinner, where, in theory, you could collaboratively write Yarn scripts in a Google Doc and then use this tool to automatically bring the script into your game.

(But if you're making a game that's going to make heavy use of spreadsheets, you should probably use something more robust like Meta Sheets or CastleDB-Unity-Importer, which can import your spreadsheet data as C# types with Intellisense auto-completion in your IDE.)

Anyway, I'm planning on a few more feature updates, like runtime support and/or better Google Sheets support, but personally I'm probably not going to expand the feature set much beyond that.

I hope you find it useful! And as always, feel free to submit any bug reports (or small feature requests) by opening an issue on the GitHub repo.

Monday, November 26, 2018

Notes on "Sparkling Dialogue", a great narrative design / game writing talk by Jon Ingold at AdventureX 2018


My colleague Clara Fernandez-Vara pointed me towards this great game writing talk that Jon Ingold gave this year at AdventureX, an excellent narrative design conference in London. Unfortunately the Twitch video of the talk is hard to follow, and the YouTube version was still forthcoming when I wrote this, so I thought I'd summarize the talk here because I found it very useful. (UPDATE: As of December 1st, the YouTube version is now online!)

(NOTE: This post isn't a transcript of Ingold's talk. It's a summary with my interpretations, and I might be wrong or misunderstanding.)

Ingold begins with something that should be obvious and uncontroversial to everyone: generally, most video game dialogue is poorly written. This isn't to say video games are bad, or that we shouldn't try to do any dialogue at all. There are also many reasons why game writers are forced to write poorly, whether it's because of a lack of resources, last-minute changes in the design, or other production constraints.

The point is not to blame writers. The point is to highlight a problem in the craft and to define a better ideal. So, how can we write more competent game dialogue that is slightly less embarrassing?

To demonstrate the problem of typical video game writing, Ingold shows us this conversation from the first hour of Assassin's Creed Odyssey in the starting mission "So It Begins":

Sunday, November 4, 2018

The first person shooter is a dad in mid-life crisis

OK I know Heavy Rain isn't an FPS but I like this screenshot so I don't care
Every semester for our introductory Games 101 historical survey class, a different NYU Game Center faculty member presents a survey of a game genre. Matt Parker lectures on sports, Clara Fernandez-Vara talks about adventure games, Mitu Khandaker talks about simulations, and so on.

My personal lecture happens to be on the first person shooter (FPS) genre. In my lecture, I trace five main currents through the FPS genre:

Thursday, July 12, 2018

Tips for working with VideoPlayer and VideoClips in Unity


Traditionally, game developers use Unity for real-time 2D and 3D games and shun any use of pre-rendered video, partly out of design dogma but also because the MovieTexture system was a nightmare. However, the recently overhauled VideoPlayer functionality means that *video* games are now much more doable. You can now feasibly make that Her Story clone you always dreamed of!

I'm currently making a video game that makes heavy use of video, chopped into many different video clips. It's been fun trying to figure out how to build basic video player functionality, like playlists and clean transitions between clips, inside Unity.

The thing they don't tell you about re-inventing wheels is that it's fun and exciting to re-invent the wheel, and it gives you much more appreciation for the craft that goes into wheels. It was fun to think about how a live telecast cues up video footage on multiple monitors, and how a real-world broadcast works, and I learned a lot about why they do it like that.

Let's talk video in Unity.
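As a starting point -- and this is just a minimal hedged sketch with placeholder names, not code from my project -- the basic pattern for clean clip playback is to Prepare() a clip before playing it, so the first frame doesn't hitch:

    using UnityEngine;
    using UnityEngine.Video;

    // Minimal sketch: pre-buffer a VideoClip with Prepare() before playing it,
    // so playback starts cleanly. Class and field names are placeholders.
    public class SimpleClipPlayer : MonoBehaviour
    {
        public VideoPlayer videoPlayer;   // assign in the inspector

        public void QueueClip(VideoClip clip)
        {
            videoPlayer.clip = clip;
            videoPlayer.prepareCompleted += OnPrepared;   // fires once decoding is ready
            videoPlayer.Prepare();                        // buffer the clip without playing it yet
        }

        void OnPrepared(VideoPlayer source)
        {
            source.prepareCompleted -= OnPrepared;
            source.Play();   // starts immediately, since the clip is already buffered
        }
    }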

Thursday, May 24, 2018

Games as Research symposium, after-action report


A month ago I attended a one-day Games As Research symposium, hosted by TAG at Concordia University and organized by Rilla Khaled and Pippin Barr. If you want my rawest thoughts, here's my live tweet thread from that day.

I learned a lot about design history and current methodologies for studying how a game is made. Here are some of the common topics and threads that we kept coming back to, and a brief summary of each presentation:

Tuesday, April 24, 2018

Advice for making a game design / game dev portfolio


After advising several game dev students on their portfolio websites, I realized I was basically giving the same advice and pointing out the same kinds of issues, and maybe I should write about it. So here's some thoughts for students making a portfolio:

First, figure out your audience. Who is your portfolio primarily for? If you want to get a AAA game job, you should try to tailor it to the norms of that industry -- if it's for some sort of school admissions application, then think about what an admissions committee would want to see. A small indie studio will want to see that you're versatile and that you won't need much supervision to solve problems. Each situation will have different expectations for a portfolio.

A portfolio is more curated than a personal site. A personal site can be whatever you want, and represent all your diverse interests and complex personality. In contrast, someone hiring for a programmer job doesn't care whether you play guitar or whether you draw sometimes, and anyway, they have 50 more applicants to look at! Your still life paintings are important to your identity, but probably irrelevant to a gameplay programming position, so you might want to consider keeping your portfolio separate from your personal website.

Next, here's some more specific advice:

Friday, March 30, 2018

GDC 2018: How To Light A Level, slides and transcript


This post is aimed at beginner / intermediate designers. It's a summary of the talk I gave at the GDC 2018 Level Design Workshop with David Shaver (Naughty Dog) for the "Invisible Intuition" double-feature session.

David's slides on blockmesh / layout are here (PDF) with case studies from The Last of Us / Uncharted. You can also get my full slide deck PDF here, and my speaker notes PDF here... but I don't know when GDC will upload the talk to YouTube, sorry.


A very brief and simple history of light starts with the sun. Then let's not forget about fire, controlled burning in gas lamps, incandescent light bulbs with a filament... and these days, there’s a stronger focus on more energy-efficient fluorescent lights, and LED lighting is also becoming more common.

It’s tempting to think of this as a story about technology and progress and older light sources becoming obsolete... but the light bulb did not make the sun obsolete, and the LED does not make fire obsolete! We still use fire as a light source all the time -- in our birthday candles, in our campfires, in our romantic candle-lit dinners -- in fact, I hate those little fake flickering LED candles, because a real flame has a unique quality to it, you know?

Fire hasn’t disappeared from the world, but rather our culture around fire has changed. That is, fire used to be a common and practical task light in Shakespeare's time, but now it feels more like special decoration for a special occasion. As a designer, you need to sensitize yourself to how light feels and conveys these ideas, because this is how you communicate those moods to the player.

Friday, September 29, 2017

Adventures in VR sculpting

I've been sculpting a lot in VR lately (via Oculus Medium) trying to figure out whether it's "the future" or not.

While I've worked in 3D for a long time, I'm used to building levels in a low polygon style with a 2D interface -- so for me, working "natively" in 3D VR has been strange and confusing, as I try to figure out how sculpting workflows work with 3D motion control interfaces.

When you are 3D modeling in a 2D interface, you can only move in two dimensions at once for every operation. Every stroke is constrained to 2 directions, so you learn to limit how much "each stroke" is supposed to accomplish. You begin seeing 3D in a specific "2D" kind of way. A lot of existing modeling software has evolved to fit this workflow, using operational systems that are non-linear and asynchronous -- what I mean is that each time you move a vertex or apply a bevel in Maya, you can always tweak or adjust that action later. Need to twist a tentacle in a weird way? You set up a spline, and 10 clicks later, you have a twist. It's very accurate because you're working very methodically in super super slow motion, decompressing time.

Current VR sculpting software doesn't really capture this "bullet time" dimension of working in 3D. Instead, it's very immediate and continuous. It's unclear whether VR will ever be able to support the high text density / menu complexity that most 3D modeling software needs.

If you have shaky inexperienced hands, too bad! You can't fine-tune or adjust your tool movements after you perform them, you just have to get better at doing more fluid, cleaner hand gestures.

Before, with a mouse, I could sort of do 100 different strokes and take the best bits of each one, and assemble the perfect stroke. But in VR, I feel like I can't do 100 takes, I get only 1 take, and I better not fuck it up! (Ugh. Why is this "natural" interface supposed to be so much better? Fuck nature!)

So now I basically have to become a much better fine artist, and learn how to move my body around the sculpture, instead of simply trying to develop the eye of a fine artist. Some of this frustration is due to the difference between a sculpting workflow and a polygon workflow, but the inability to rest a mouse on a table certainly exacerbates it.

It also probably doesn't help that I'm taking on one of the most difficult topics of visual study possible, a human head. It's very easy to sculpt a "wrong-looking" blobby sculpture, as you can see in my screenshots! Fine artists usually spend many years in figure drawing workshops to train themselves how to "see" people and understand the many different shapes of our bones and muscles.

But I think this challenge has been helpful, and it keeps me focused on figuring out which skills I need to develop. How do I get clean sharp edges and defined planes in VR? Should I sculpt with blobby spheres and flatten it out afterwards, or should I sculpt with flat cubes and build up my planes from the beginning? I'm still trying to figure it all out.

And if VR sculpting truly is the future, I do wonder how this will factor into a game development workflow. Maybe we'll sculpt basic forms in VR, and then bring them into Maya for fine-tuning -- or maybe it makes more sense the other way, to make basic forms in Maya, and then use VR only for detail?

I don't know of any game artists who seriously use VR as part of their workflow, but if you know of any, let me know so I can figure out what they're doing and copy it!!

(And hopefully in another month, my sculpts won't be so scary...)

Saturday, September 23, 2017

Writing stories / dialogue for Unity games with Yarn

I've been using Yarn for a little while, and I've grown to prefer it as my "talking to NPCs" solution for game development. If you're not familiar, Yarn and Yarn Spinner together make a pretty powerful Twine-like dialogue plugin for Unity (though it could technically work in any C# game engine) that's geared towards writing video game dialogue, and it was most famously used for Night In The Woods.

Yarn is fairly lightweight, extensible, and it basically gets out of your way. Want to make a really big long monologue, or 100 little dialogue snippets? Yarn works well for both of those use cases. (If you want something that's more focused on manipulating very long dense passages of text, you might want something more like inkle/ink, the system that powers the huge 750,000-word narrative game 80 Days.)

To try to provide more resources for other Yarn users, or potential Yarn users, here's a write-up with some advice and a short guide to working with Yarn...

Sunday, September 17, 2017

Level With Me, Half-Life 2, complete!


I've just finished playing through all of Half-Life 2 on my level design streaming show, Level With Me. Much like with my playthrough of Half-Life 1, I've played through this sequel several times already, and I thought I knew it pretty well -- but there were still sequences where I was surprised, impressed, or disappointed.

There were several main themes throughout this playthrough:

1. The current version of Half-Life 2, the only one now available on Steam, has been poorly updated and maintained. When Valve added HDR lighting to Source Engine 1, someone dutifully went through Half-Life 2 and updated all the maps -- but that process only involved recompiling the maps with HDR lighting. That broke several things: there are no LDR lightmaps (it's impossible to play Half-Life 2 without HDR now), and the unchanged settings are poorly calibrated for HDR, often being too bright / too dark / with lots of halo-y hotspots everywhere. If you want to play a better version of Half-Life 2, I recommend the Half-Life 2 Update mod, which fixes a lot of these issues.

2. Another frequent theme has been how Half-Life 2 keeps mixing itself up; one chapter is a horror survival segment, and then 2 minutes later the next chapter is a road trip driving section. This is pretty unusual in 2017, where AAA action games usually feel more consistent, systemic, and homogeneous. (Of the big franchises, maybe only Call of Duty maintains this roller coaster setpiece structure.) You could argue that Half-Life 2 sort of tries to do 10 different things, and doesn't really excel at any of them. Or on the flip side, maybe the Valve of 2000-2004 was really impatient and bursting with ideas, and in the end, executed all of these ideas decently enough.

3. Rugs!!!

Check out the full Level With Me archived playlist for Half-Life 2 on YouTube, or watch future broadcasts live on Twitch.

Thursday, September 14, 2017

How to Graybox / Blockout a 3D Video Game Level

from de_crown, by FMPONE and Volcano

UPDATE, 11 NOVEMBER 2021: this blog post is OK, but I would recommend reading the "Blockout" page in my free online work-in-progress level design book instead.

ORIGINAL POST:

While planning a level design class, I googled for a good article about blocking-out or grayboxing a 3D level design prototype. I didn't really find one that went into how you might actually go about grayboxing a level, so I guess I have to write it.

Grayboxing is a level design practice where you build a rough block-out version of your level using blocks (usually gray boxes) so that you can iterate and test the layout as soon as possible. Almost every 3D game engine has some sort of box primitive tool -- if you know how to use that, then you can graybox.

Before you graybox, you must make sure you've established a general game design direction. You should know roughly how this level might fit into your game or workflow. There's no point in grayboxing if you don't even know what the player should be doing, or what this level is supposed to convey. Is the level supposed to be easy or hard? Does it focus on combat or non-combat? Should it feel scary or safe? Level design must always exist in the context of a larger game design, or else you're just wasting your time.

Then, open up your 3D game engine, and let's start laying down some boxes...

Thursday, August 24, 2017

Road trip sketches; notes on extracting and visualizing Half-Life 2 levels in Maya


So I'm working on (another) article about level design in Half-Life 2. I chose the d2_coast03 map of the Highway 17 chapter, which is the first real "coastline" road trip section of the game, and is probably the most successful. Look at how big and open it is. Would you believe this is a map in a game celebrated as a meticulous roller-coaster? In my mind, it's contemporary with a lot of vehicle-based first-person open world game design trends that started around the same time in 2004, and they even pulled it off in an engine architecture that's still kinda based on Quake 1.

Sunday, July 30, 2017

new tool: Yarn Weaver


I'm working on a game that uses the excellent Yarn and YarnSpinner narrative toolkit for Unity. For this project, I'm also collaborating with a narrative designer -- unfortunately, the Yarn editor doesn't actually have a play mode or a testing mode built into it -- which makes it difficult to collaborate, because the designer can't even run through the Yarn scripts without downloading the entire Unity editor and project source! What if she just wants to test a short conversation script or two?

So, I basically duct-taped the YarnSpinner example setup to this excellent UnityStandaloneFileBrowser (for native file open dialogs at runtime) to make a very small simple tool to open and run through Yarn scripts. It can display your text, parse all your variables, and render up to 4 choices.

I call this tool "Yarn Weaver". The project source files are on GitHub under MIT License, or you can download Windows and Mac OSX release builds here. I hope it's useful for people!

Tuesday, June 7, 2016

Working with custom ObjectPreviews and SkinnedMeshRenderers in Unity


Unity's blendshape controls -- basically just a list of textboxes -- were going to cause me a lot of pain. After wrestling with broken AnimationClips for my previous attempt at facial expressions in my game Stick Shift, I decided to actually invest a day or two into building better tools for myself, inspired partly by Valve's old Faceposer tool for Source Engine 1.

To do that, I scripted the Unity editor to draw a custom inspector with sliders (based on Chris Wade's BlendShapeController.cs), along with an interactive 3D face preview at the bottom of the inspector.
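As a hedged illustration of the slider part (this is a simplified sketch, not my actual tool -- the real thing was built around a separate controller component, while this just overrides the SkinnedMeshRenderer inspector to show the core API):

    using UnityEditor;
    using UnityEngine;

    // Simplified sketch: replace the default blendshape textboxes with sliders,
    // using SkinnedMeshRenderer's blendshape API. Not the actual tool described above.
    [CustomEditor(typeof(SkinnedMeshRenderer))]
    public class BlendShapeSliderEditor : Editor
    {
        public override void OnInspectorGUI()
        {
            var smr = (SkinnedMeshRenderer)target;
            var mesh = smr.sharedMesh;
            if (mesh == null) return;

            for (int i = 0; i < mesh.blendShapeCount; i++)
            {
                string shapeName = mesh.GetBlendShapeName(i);
                float weight = smr.GetBlendShapeWeight(i);
                float newWeight = EditorGUILayout.Slider(shapeName, weight, 0f, 100f);
                if (!Mathf.Approximately(weight, newWeight))
                {
                    Undo.RecordObject(smr, "Change BlendShape Weight");
                    smr.SetBlendShapeWeight(i, newWeight);
                }
            }
        }
    }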

The workflow I wanted was this:

Friday, February 12, 2016

Oculus Rift DK2s kind of (secretly) do work on laptops (sometimes) and you can make VR stuff in Unity (maybe)

This is a rant + technical guide about how to get an Oculus Rift DK2 to work with Unity 5 so that you can make stuff with it. Maybe.

I'm teaching two virtual reality classes this semester, and I was dreading having to tell all my students that Oculus (in all their wisdom) has a public policy of no longer supporting Mac OSX, or any laptop, for the foreseeable future. Even now, when I tell my colleagues about this, they react with incredulous shock. With this single move, Oculus basically alienated the entire creative coding / technologist community, and 99% of the design / programming community in New York City.

The core of the issue is in how Oculus wants to synchronize (a) the image in the VR HMD (head-mounted display, or headset) with (b) the very subtle motions your head makes. If these two sensations aren't synchronized, then people usually suffer "simulator sickness." So, the VR industry generally wants to make sure these two things are synchronized as closely as possible, to make sure people don't vomit when using this glorious new technological medium.

In order to synchronize those things as fast as possible (90 frames per second is the minimum, 120 fps is the ideal) the HMD needs "direct access" to your graphics card.

Most laptops are engineered purposely to cut off direct access like that, mostly because they have two different graphics processors -- one weak energy-efficient GPU, and one higher performance power-hungry GPU. For day-to-day non-VR use, the weak one is more than good enough, so that one is in charge.

From a VR developer perspective, we were early adopters and happily making Oculus prototypes for years, and our "weak inadequate laptops" were good enough. Then around runtime 0.5, Oculus discontinued OSX support and began insisting that all laptops were just inherently inferior and didn't deserve any attention. From our perspective, Oculus basically took away something that seemed to be functioning fine, for basically no good reason. It's really really really annoying.

If you search "oculus laptop", it's mostly going to be forum posts from the Oculus community manager telling people that laptops aren't supported... so I was pleasantly surprised when I was prepping to teach these VR classes and it turns out runtime 0.8 actually does work on my Windows laptop! My suspicion is that the GPU vendors Nvidia and AMD both updated their drivers to give Oculus what they wanted... well, kind of.

Tuesday, October 13, 2015

Tips for implementing / coding an in-game options or pause menu functionality in Unity

 

I recently implemented an in-game options menu in Unity. Mine looks something like the thing above. A surprising amount of the required functionality is already implemented in Unity; you just have to write some code to hook into it. When Unity didn't already have a static variable for a particular setting, like mouse sensitivity or menu language, I'd implement my own static variable backed by a specific PlayerPrefs key.
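Here's a minimal sketch of that pattern (the key name and default value are placeholders, not from my actual project):

    using UnityEngine;

    // Sketch of a static setting backed by a PlayerPrefs key, for options that
    // Unity doesn't already expose (e.g. mouse sensitivity). Names are placeholders.
    public static class GameSettings
    {
        const string MouseSensitivityKey = "Options_MouseSensitivity";

        public static float MouseSensitivity
        {
            get { return PlayerPrefs.GetFloat(MouseSensitivityKey, 1.0f); }
            set
            {
                PlayerPrefs.SetFloat(MouseSensitivityKey, value);
                PlayerPrefs.Save();   // flush to disk so the setting survives a crash
            }
        }
    }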

Anyway, here's a bunch of workflow / specific API calls that I found very useful when I did it...