Showing posts with label pipeline. Show all posts

Monday, April 18, 2022

Why I still use Unity

There's been some game dev twittering about Unity vs. Unreal lately. Why use Unity when Unreal is better?

The basic consensus is that Unity's advantages have been crumbling for years, and its attempt to challenge Unreal on high-end graphics has meant neglect everywhere else. But if you want high-end then UE5 Nanite / Lumen is light years beyond Unity HDRP anyway? And if you're making the typical aspirational photorealistic action game, you'll probably want UE's gameplay architecture and free photoscan assets too.

Most recently, respected developer Ethan Lee has weighed in. For him it's not about the graphics, it's about access to engine source code and engineering processes. Being able to pinpoint bugs in the core Unreal Engine code, fix them, and submit patches to Epic is how modern software development works. By comparison, Unity is closed source, and even if you go to the trouble of filing a bug report you'll still have to wait a year for an official bug fix if you're lucky. This matters during the second half of a game dev cycle, when game making becomes a terrible slog -- when your game randomly crashes on Nintendo Switch for some reason and you have to figure out why but you're already so so tired.

So why on earth would anyone still use Unity? Everyone has their own situation, and here's mine:

Friday, April 9, 2021

Getting started with HaxeFlixel in 2021

Warning: this is a fairly technical game developer-y post. If you came here for gay sex, I'm sorry.

For an upcoming project commission, I'm making a 2D game with crowd simulation and simple controls that works well on mobile browsers. (Reminder: for iOS, that means WebGL 1.0 and no WASM.) The engine should be able to render and simulate 200+ lightweight game objects -- frame-animated sprites with simple collision, no fancy physics or shaders.

Which game engine should I use to maximize ease of learning and compatibility, and manage hundreds of simple objects on-screen? Here was my thought process:

  • Unity WebGL: way too heavy and slow for mobile browsers, and maybe overkill for a no-physics 2D game anyway. (Although the Lil Nas X 3D twerking game runs surprisingly well on iOS's WebGL 1.0, I wonder how much they had to optimize?)
  • Unity Project Tiny: as far as I can tell, Project Tiny and its DOTS dependency are still in early development. The random caveats and various in-dev inconsistencies with regular Unity would also be frustrating. And as with many other Unity side projects, its long term future feels really hazy.
  • Construct: seems ok, and I think I could've gotten used to the visual block scripting, but overall the pricing and licensing feels weirdly restrictive. I have to pay to use more than 2 JS files? I have to pay to use more than 1 font, or make an animation more than 5 seconds long? These are some really bizarre artificial resource limits.
  • Phaser: seems popular enough with decent TypeScript support, but I want the option of building out to a native executable without a weird Electron wrapper or something. Their monetization model (free open source base but you pay for "premium plugins" and tools) is one of the more generous ways to go about this, I get it, but it still feels weird to me and reminds me of Construct.
  • Godot: I've wanted to try Godot for ages, but in the end I felt like I didn't have a good sense of what its HTML5 Web export could do + learning enough of the "Godot way" and GDScript would've taken a while. It's also in the middle of a big break between v3.0 and v4.0, and ideally I'd like to wait until like v4.2 to commit to learning it.
  • Heaps: promising and some people get great results with it, but maybe still too early in public lifecycle for a total newbie like me, with not enough samples / docs / robust official tutorials to learn from yet. If or when I do try out Heaps, I'll probably try using Deepnight's gameBase project template.

In the end, I chose to build this particular project with HaxeFlixel. This post details my early impressions, thoughts, confusion, advice, etc. with learning it.

Monday, March 25, 2019

new Unity tool: Bobbin


I wanted to be able to write game dialogue in Google Docs (from my phone or tablet, or to share with external collaborators) and then automatically send those changes into the Unity project on my laptop.

To help me do that, I made a free open source tool called Bobbin, which is a relatively simple Unity Editor plugin that can automatically download the data at URLs, and import that data as a file in your Unity project. Again, it's very simple: every X seconds, it opens a list of URLs (as if it were a web browser) and then it saves all the bytes as a .txt, .csv, .png -- or in-theory, whatever file type you want. Note that this is just an automated download manager, you will still need to write your own game code to actually import, process, and use these files in your game.
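To make that loop concrete, here's a hedged sketch of what an editor-side URL poller like this might look like. This is illustrative only, not Bobbin's actual source -- the class name, URL, paths, and interval are all placeholders:

```csharp
// Hypothetical sketch of a Bobbin-style polling loop (NOT the real Bobbin code):
// every X seconds, fetch each URL and write the bytes into the project.
using System.Collections.Generic;
using System.IO;
using UnityEditor;
using UnityEngine;
using UnityEngine.Networking;

[InitializeOnLoad]
public static class UrlSyncSketch
{
    // URL -> project-relative output path (both are made-up examples)
    static readonly Dictionary<string, string> urls = new Dictionary<string, string>
    {
        { "https://docs.google.com/document/d/EXAMPLE/export?format=txt", "Assets/Dialogue/intro.txt" },
    };

    const double intervalSeconds = 60.0; // "every X seconds"
    static double nextPoll;

    static UrlSyncSketch() { EditorApplication.update += Poll; }

    static void Poll()
    {
        if (EditorApplication.timeSinceStartup < nextPoll) return;
        nextPoll = EditorApplication.timeSinceStartup + intervalSeconds;

        foreach (var pair in urls)
        {
            var request = UnityWebRequest.Get(pair.Key);
            var path = pair.Value;
            // save whatever bytes come back (.txt, .csv, .png...), then reimport
            request.SendWebRequest().completed += _ =>
            {
                File.WriteAllBytes(path, request.downloadHandler.data);
                AssetDatabase.ImportAsset(path);
                request.Dispose();
            };
        }
    }
}
```

As the post says, this is just the download half -- your game code still has to parse whatever lands in those files.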

The main audience for this tool is narrative designers, writers, localizers / translators, and designers / developers who need something fast and lightweight for syncing files with external collaborators. I imagine it also pairs well with text-based narrative plugins like Yarn Spinner, where in-theory, you could collaboratively write Yarn scripts in a Google Doc and then use this tool to automatically bring the script into your game.

(But if you're making a game that's going to make heavy use of spreadsheets, you should probably use something more robust like Meta Sheets or CastleDB-Unity-Importer, which can import your spreadsheet data as C# types with Intellisense auto-completion in your IDE.)

Anyway, I'm planning on a few more feature updates, like runtime support and/or better Google Sheets support, but personally I'm probably not going to expand the feature set much beyond that.

I hope you find it useful! And as always, feel free to submit any bug reports (or small feature requests) by opening an issue on the github.

Thursday, February 14, 2019

Thick skin: complexion, realism, and labor in games


In Dublin, I visited the Lucian Freud Project at IMMA.

If you're not familiar with painters (who is these days?) Lucian Freud is often held up as one of the greatest realist painters in the 20th century. And like many other artist men of the 20th century, his work also has a lot of racist and sexist baggage to deal with.

The IMMA curators figured out a pretty clever solution here -- they basically surrounded his stuff with women artists and intersectional feminist political theory. Instead of pretending to be a "neutral" celebration of a Great Male Painter, the curators did their job, and made an argument for real interpretation and criticism in the 21st century. It felt responsible and complicated.


The main basement gallery has two monitors in the middle of the room, running constant loops of John Berger's iconic feminist media studies primer Ways of Seeing. Specifically, it's Ways of Seeing episode 2, the one about the difference between nudity and nakedness, especially within the long history of European oil paintings depicting nude/naked women.

The second half of the episode is famous: the male narrator and host (Berger) shuts up and just listens to a panel of women critique patriarchy and art through their own experience. At first it seems like they're talking about the art shown in the film 30 years ago, but in the style of the Frankfurt School, they might as well be critiquing Freud's many paintings hanging on the walls today.

If you want to read more about the various artists and works, this Quietus post by Cathy Wade is a thorough walkthrough of it all. In this post, I'm just going to talk about one of the paintings and how I relate its form and politics to games:

For some reason, I gravitated towards a small painting hanging in the corner, a portrait simply called "Kai".

Thursday, July 12, 2018

Tips for working with VideoPlayer and VideoClips in Unity


Traditionally, game developers use Unity for real-time 2D and 3D games and shun any use of pre-rendered video, partly out of design dogma but also because the MovieTexture system was a nightmare. However, the recently overhauled VideoPlayer functionality means that *video* games are now much more doable. You can now feasibly make that Her Story clone you always dreamed of!

I'm currently making a video game that makes heavy use of video, chopped into many different video clips. It's been fun trying to figure out how to build basic video functionality like playlists and clean transitions between clips, except in Unity.
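As an example of the kind of plumbing involved: a common way to get a clean cut between clips is to keep two VideoPlayers and pre-buffer the next clip on the idle one. This is a generic sketch from the public VideoPlayer API, not necessarily how my game actually does it:

```csharp
// Hedged sketch: double-buffered VideoPlayers for gapless clip transitions.
// Both players would render to the same target (e.g. one RenderTexture).
using UnityEngine;
using UnityEngine.Video;

public class ClipSwitcherSketch : MonoBehaviour
{
    public VideoPlayer playerA, playerB;
    VideoPlayer current, standby;

    void Awake() { current = playerA; standby = playerB; }

    // call this well before the cut, so decoding happens in the background
    public void QueueNext(VideoClip next)
    {
        standby.clip = next;
        standby.Prepare();
    }

    public void CutToNext()
    {
        if (!standby.isPrepared) return; // still buffering, try again next frame
        standby.Play();
        current.Stop();
        var tmp = current; current = standby; standby = tmp;
    }
}
```

It's the same trick a live telecast uses: the next source is already rolling on a second deck before the director cuts to it.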

The thing they don't tell you about re-inventing wheels is that it's fun and exciting to re-invent the wheel, and it gives you much more appreciation for the craft that goes into wheels. It was fun to think about how a live telecast cues up video footage on multiple monitors, and how a real-world broadcast works, and I learned a lot about why they do it like that.

Let's talk video in Unity.

Tuesday, June 5, 2018

Remastering Rinse and Repeat

Rinse and Repeat remastered, for Radiator 3 (2018)
Rinse and Repeat (2015)
I'm nearing the end of the remaster process for my shower game Rinse and Repeat, as part of a future re-release planned for late 2018. As I've said before: if you have the time and energy, I highly recommend remastering your games -- you get to revisit all the compromises and sacrifices you inflicted on yourself, and now you're not desperate to get the game out the door -- you can finally do things calmly and properly.

The time difference helps you see the project with new eyes. In my case, it's been about two and a half years since the original Rinse and Repeat release in October 2015. Game engine technology has changed, my skills and tastes have changed, and it's surprisingly therapeutic to revisit my past decisions. Like, why did I give everything a weird green tinge? I don't remember. Maybe I had good reasons that I've now forgotten.

Here's some of the specific changes I made to the shower scene, and some of my reasoning:

Thursday, May 24, 2018

Games as Research symposium, after-action report


A month ago I attended a one-day Games As Research symposium, hosted by TAG at Concordia University and organized by Rilla Khaled and Pippin Barr. If you want my rawest thoughts, here's my live tweet thread from that day.

I learned a lot about design history and current methodologies for studying how a game is made. Here's some of the common topics and threads that we kept coming back to, and a brief summary of each presentation:

Wednesday, November 22, 2017

Postcards from Unreal


I'm building an Unreal Tournament 4 level in preparation for a level design studio class I'm teaching next year. I've been using Unity for a few years and now I feel very comfortable with using Unity for my projects, but I don't really have much experience with Unreal Engine 4. To try to learn how to use it, I thought I'd make a small UT deathmatch map.

Honestly, I think Unreal Tournament is a colossal over-designed mess of a game -- players can slide, wall run, dodge -- use 10 different weapons each with primary and secondary fire modes... I prefer the simplicity (and elegance?) of Quake 3 and its successors. Basically, Quake feels like soccer, while Unreal Tournament feels more like American football with 100 extra rules tacked on.

Nevertheless, it's important to be able to internalize how a game plays, even if you don't like it very much. I've tried to provide opportunities for sliding and wall running, and I've focused on what seems like the core three weapons in UT (Flak, Rocket, Shock) while attempting to channel the UT series' sci-fi urban industrial aesthetic.

Friday, September 29, 2017

Adventures in VR sculpting

I've been sculpting a lot in VR lately (via Oculus Medium) trying to figure out whether it's "the future" or not.

While I've worked in 3D for a long time, I'm used to building levels in a low polygon style with a 2D interface -- so for me, working "natively" in 3D VR has been strange and confusing, as I try to figure out how sculpting workflows work with 3D motion control interfaces.

When you are 3D modeling in a 2D interface, you can only move in two dimensions at once for every operation. Every stroke is constrained to 2 directions, so you learn to limit how much "each stroke" is supposed to accomplish. You begin seeing 3D in a specific "2D" kind of way. A lot of existing modeling software has evolved to fit this workflow, using operational systems that are non-linear and asynchronous -- what I mean is that each time you move a vertex or apply a bevel in Maya, you can always tweak or adjust that action later. Need to twist a tentacle in a weird way? You set up a spline, and 10 clicks later, you have a twist. It's very accurate because you're working very methodically in super super slow motion, decompressing time.

Current VR sculpting software doesn't really capture this "bullet time" dimension of working in 3D. Instead, it's very immediate and continuous. It's unclear whether VR will ever be able to support the high text density / menu complexity that most 3D modeling software needs.

If you have shaky inexperienced hands, too bad! You can't fine-tune or adjust your tool movements after you perform them, you just have to get better at doing more fluid, cleaner hand gestures.

Before, with a mouse, I could sort of do 100 different strokes and take the best bits of each one, and assemble the perfect stroke. But in VR, I feel like I can't do 100 takes, I get only 1 take, and I better not fuck it up! (Ugh. Why is this "natural" interface supposed to be so much better? Fuck nature!)

So now I basically have to become a much better fine artist, and learn how to move my body around the sculpture, instead of simply trying to develop the eye of a fine artist. Some of this frustration is due to the difference between a sculpting workflow and a polygon workflow, but the inability to rest a mouse on a table certainly exacerbates it.

It also probably doesn't help that I'm taking on one of the most difficult topics of visual study possible, a human head. It's very easy to sculpt a "wrong-looking" blobby sculpture, as you can see in my screenshots! Fine artists usually spend many years in figure drawing workshops to train themselves how to "see" people and understand the many different shapes of our bones and muscles.

But I think this challenge has been helpful, and it keeps me focused on figuring out which skills I need to develop. How do I get clean sharp edges and defined planes in VR? Should I sculpt with blobby spheres and flatten it out afterwards, or should I sculpt with flat cubes and build-up my planes from the beginning? I'm still trying to figure it all out.

And if VR sculpting truly is the future, I do wonder how this will factor into a game development workflow. Maybe we'll sculpt basic forms in VR, and then bring them into Maya for fine-tuning -- or maybe it makes more sense the other way, to make basic forms in Maya, and then use VR only for detail?

I don't know of any game artists who seriously use VR as part of their workflow, but if you know of any, let me know so I can figure out what they're doing and copy it!!

(And hopefully in another month, my sculpts won't be so scary...)

Thursday, August 24, 2017

Road trip sketches; notes on extracting and visualizing Half-Life 2 levels in Maya


So I'm working on (another) article about level design in Half-Life 2. I chose the d2_coast03 map of the Highway 17 chapter, which is the first real "coastline" road trip section of the game, and is probably the most successful. Look at how big and open it is. Would you believe this is a map in a game celebrated as a meticulous roller-coaster? In my mind, it's contemporary with a lot of vehicle-based first-person open world game design trends that started around the same time in 2004, and they even pulled it off in an engine architecture that's still kinda based on Quake 1.

Tuesday, June 7, 2016

Working with custom ObjectPreviews and SkinnedMeshRenderers in Unity


Unity's blendshape controls -- basically just a list of textboxes -- were going to cause me a lot of pain. After wrestling with broken AnimationClips for my previous attempt at facial expressions in my game Stick Shift, I decided to actually invest a day or two into building better tools for myself, inspired partly by Valve's old Faceposer tool for Source Engine 1.

To do that, I scripted the Unity editor to draw a custom inspector with sliders (based on Chris Wade's BlendShapeController.cs) along with an interactive 3D face preview at the bottom of the inspector.
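The core of that slider inspector is simple enough to sketch. To be clear, this is my generic reconstruction of the pattern, not Chris Wade's BlendShapeController.cs or my actual tool -- it just shows the shape of the idea:

```csharp
// Hedged sketch: replace the bare text boxes with one 0-100 slider per
// blendshape. (A real tool would likely wrap its own component rather than
// override the built-in SkinnedMeshRenderer inspector like this.)
using UnityEditor;
using UnityEngine;

[CustomEditor(typeof(SkinnedMeshRenderer))]
public class BlendShapeSliderSketch : Editor
{
    public override void OnInspectorGUI()
    {
        var smr = (SkinnedMeshRenderer)target;
        var mesh = smr.sharedMesh;
        if (mesh == null) return;

        for (int i = 0; i < mesh.blendShapeCount; i++)
        {
            float weight = smr.GetBlendShapeWeight(i);
            float newWeight = EditorGUILayout.Slider(mesh.GetBlendShapeName(i), weight, 0f, 100f);
            if (!Mathf.Approximately(weight, newWeight))
            {
                Undo.RecordObject(smr, "Blend Shape Weight");
                smr.SetBlendShapeWeight(i, newWeight);
            }
        }
    }
}
```

The interactive 3D preview is the harder part -- that's where ObjectPreview comes in.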

The workflow I wanted was this:

Monday, March 7, 2016

A history (and the triumph) of the environment artist: on The Witness and Firewatch


This post vaguely spoils random bits of Firewatch and The Witness. I wouldn't worry about it.

Only a few years ago, hiking games (first person games with a focus on traversing large naturalistic landscapes) were rather fringe. Early indie masterpieces like Proteus and Eidolon abstracted the landscape into pixelated symbols, with a special interest in simulating weather and wildlife to make it feel real. But it took "mid-period" hiking blockbusters like The Vanishing of Ethan Carter, Everybody's Gone to the Rapture, and Dear Esther (2012 remake) to monetize the genre with all their glossy near-photorealistic graphics.

Now we are entering a later period of hiking games, epitomized by The Witness and Firewatch's less realistic visuals. It represents these environment artists finally asserting their control over a project and their identities as artists, within older traditions of gardening and landscape painting. To better understand this latest shift, let's think about the social and technical history of the environment artist in 3D games.

Monday, November 9, 2015

"Rinse and Repeat" technical post-partum / how to do over-complicated wet skin shower shader effects in Unity


This is a technical overview of how I built certain parts of Rinse and Repeat. It spoils the game, so you should probably play it first if you care about stuff like that.

Rinse and Repeat took about 1-2 months to make. For these sex games, my development process can basically be summarized as "art first" -- my very first in-engine prototypes are usually about establishing mood and texture, and setting up the character you'll be staring at, and these are by far the most important parts of the game.

Thursday, September 10, 2015

Scripting the Unity Editor to automatically build to Windows / OSX / Linux and package the files into ZIP files


I'm getting ready to release my next gay sex game, which means a lot of builds and testing. This game in particular has a lot of framework and infrastructure that involves copying specific files directly into the built data folder. I don't want to copy those files over manually again and again, so I bit the bullet and wrote an editor script that automatically does all this stuff for me, for all 3 desktop platforms I'm targeting. This is really nice because it saves me a lot of time when making builds, and it also makes the whole process more foolproof since it prevents me from forgetting any files -- Unity is automated to include them for me!
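Here's a condensed sketch of the overall idea before the real snippets -- build each target, copy the extra files in, zip the result. This is illustrative, not the post's full script; the scene list, folder names, and BuildTarget names are placeholders and vary by Unity version:

```csharp
// Hedged sketch of a build-all editor script: one BuildPlayer call per
// platform, then copy hand-maintained extra files, then zip each folder.
using System.IO;
using System.IO.Compression; // note: may require a newer scripting runtime than 2015-era Unity had
using UnityEditor;

public static class BuildAllSketch
{
    static readonly string[] scenes = { "Assets/Scenes/main.unity" }; // placeholder

    [MenuItem("Tools/Build All Desktop")]
    public static void BuildAll()
    {
        Build(BuildTarget.StandaloneWindows, "Builds/Windows/game.exe", "Builds/Windows");
        Build(BuildTarget.StandaloneOSX,     "Builds/OSX/game.app",     "Builds/OSX");
        Build(BuildTarget.StandaloneLinux64, "Builds/Linux/game.x86_64", "Builds/Linux");
    }

    static void Build(BuildTarget target, string exePath, string folder)
    {
        BuildPipeline.BuildPlayer(scenes, exePath, target, BuildOptions.None);

        // copy the extra framework files so nothing gets forgotten
        foreach (var file in Directory.GetFiles("ExtraFiles")) // placeholder folder
            File.Copy(file, Path.Combine(folder, Path.GetFileName(file)), true);

        ZipFile.CreateFromDirectory(folder, folder + ".zip");
    }
}
```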

Here are the main snippets + explanations of those parts of the pipeline, with the full script at the end of this post...

Sunday, July 26, 2015

PSA: free (and COMPLETE) photorealistic 3D character workflow from Mixamo


Mixamo got bought out by Adobe... as part of the merge, they've turned off all their billing systems... which means almost everything they have is now free.

"Fuse" is their (free) character modeling / texturing / creation tool that is miles ahead of the old Autodesk Character Generator -- from there, you send the character mesh out to their Auto-Rigger cloud service (also free) with 60+ bone skeletons and facial blend shape support -- and with every (free) account you register, you get 20 (free) animations, and you can potentially make unlimited free accounts. This is a complete character art solution from mesh to skin weights to rigging to animation, for free. It's pretty impressive, and you can easily make a game that looks like a prestige AAA FPS from late 2013. (These assets don't have the accuracy of photoscanned models or DX11 procedural hair, but they're very well crafted.)

Tentatively, they're going to shut down this infrastructure on December 31, 2015 (I think, according to a cryptic e-mail I got a few months ago) when they've finalized more of the merge with Adobe, so make sure you grab as much stuff as possible while you can.

To celebrate, look at the brunch hunk I made in Fuse (above) and exported out to Unity. Again, it's pretty high resolution stuff with no restrictions. Make use of it for your games while you can.

I'm documenting this resource as a "PSA" because making the tools of photorealism accessible and widespread helps (a) sabotage game industry machinery that privileges fidelity as something valuable, and (b) re-contextualize realism as a stylistic choice rather than a "default" marketing tactic.

Have fun!

Wednesday, February 18, 2015

We are drugs; speculative dev tools and psychedelic hologram futures.

This post is adapted from a talk I gave at Indiecade East 2015, where the theater was way too small for the crowd, so not many people got to see the talk... sorry / oh well. Here's basically what I said:

Our story begins on October 8th, 2014, on a very special episode of the Late Show with David Letterman. He was ending that episode with a musical guest from Japan -- a holographic vocaloid named Hatsune Miku. Pay attention to Letterman's barely-veiled incredulity as he introduces her. He can't believe the words coming out of his mouth:



But what really makes this moment is the ending, after the performance. Letterman doesn't even know what to say, and he knows he doesn't know what to say. The experience was completely overwhelming, so Letterman has to somehow pivot back to interpret it for his audience (mostly moms and dads from Milwaukee) and all he can muster is a facile comparison to "being on Willie Nelson's bus." (Willie Nelson, if you're not familiar, is a celebrity notorious for his drug use, among other things.)

The meaning is both clear and agreeable: Hatsune Miku is drugs.

Thursday, January 29, 2015

Lighting design theory for 3D games, part 1: light sources and fixtures

Contemporary Jewish Museum (San Francisco, California)
Here's how I generally, theoretically, approach lighting in my games and game worlds. Part 1 is about the general concept of lighting design.

Mood is the most important end result of your lighting. The "functional school" of game lighting maintains that lighting exists primarily to make a space readable so that the player can navigate it and shoot people. In my eyes, that approach is useful only insofar as the gameplay is tactical violence, and only when that violence supports evoking a mood. The rest of the time, some designers seem content to light their spaces like a furniture catalog, or even leave lighting as a total after-thought. Lights can do more than show off your normal maps and mark where to walk to trigger the next cutscene, okay?

So let's begin: lighting design is a discipline that has existed since the beginning of sunlight.

Monday, January 5, 2015

Some Unity 5 beta impressions


I'm making a small Thief-like with integrated level editor (3rd time's the charm!) to test (and teach myself) four (4) main feature-sets of the Unity 5 beta -- physically-based shading, integrated image-based lighting, real-time global illumination, and the UI system introduced late in Unity 4.6 but might as well have shipped in Unity 5. Here's my opinion so far:

Tuesday, September 23, 2014

Introducing: Mural (v0.2) a simple 3D scribbling tool


EDIT: v0.21 adds .OBJ export from the webplayer; you can now actually use this to make models and import it into whatever you want. (If you want to use this in Unity, you will need to apply a material / shader that uses vertex colors and doesn't cull backfaces, so pretty much any of the "Particle" shaders)

There are 2 common modes in 3D polygonal modeling: vertex manipulation and sculpting. But for many of these workflows, a 3D mass exists mostly as a surface to be unwrapped and painted. If all we need is a 3D canvas to paint upon, why can't we just go straight to the painting part?

"Mural" is an experimental freehand 3D modeling tool similar to SketchUp's "Freehand" tool or the impressive Tilt Brush, except SketchUp imagines it more as a tracing aid and Tilt Brush relies on VR hardware and doesn't readily export geometry.

I want Mural to be an accessible 3D tool that borrows game UI metaphors (specifically, first person mouselook) and directly exports the resulting 3D models for use in games, or anything, really. Many of the models made in Mural will not look like "traditionally" modelled 3D objects, and they intentionally embrace glitchy non-representational aesthetics, twisted normals, vertex colors, and z-sorting artifacts. If it hasn't already occurred, I imagine the "politics of 3D" will shift to embrace these phenomena as artistic features rather than aesthetic flaws.

(I am also indebted to Rich Edwards' early research with "3d concepts" using semi-transparent planes.)

CHANGELOG
v0.22
  • decoupled canvas movement from painting (thanks for suggestion @Dewb) so you can now move the painting surface WHILE painting
v0.21
  • added simple .OBJ export for webplayer; press F12 to save a .OBJ to your computer
v0.20
  • fixed stroke shader, colors now render properly
  • added a color picker hue / saturation circle, adapted from code in UnityPaint
  • replaced line renderers with generated meshes from Vectrosity
  • added .OBJ export
  • added very basic undo support (press [Z] to delete most recent stroke(s) )

FUTURE DIRECTIONS FOR MURAL: make it into a complete 3D world maker / game maker; add cooperative modelling / network multiplayer session support; better painting tools and interface; add file-writing and OBJ export in webplayer via JS hooks
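As an aside, the core of a minimal .OBJ export like the one in the changelog fits in a few lines. This is my generic reconstruction, not Mural's actual exporter -- flipping x and swapping the triangle winding is the usual convention for going from Unity's left-handed coordinates to OBJ's right-handed ones, and OBJ face indices are 1-based:

```csharp
// Hedged sketch: dump a Unity Mesh as OBJ text (positions + faces only,
// no normals/UVs/vertex colors, which a real exporter would also handle).
using System.Text;
using UnityEngine;

public static class ObjExportSketch
{
    public static string MeshToObj(Mesh mesh)
    {
        var sb = new StringBuilder();
        foreach (var v in mesh.vertices)
            sb.AppendLine($"v {-v.x} {v.y} {v.z}"); // flip x for handedness

        var tris = mesh.triangles;
        for (int i = 0; i < tris.Length; i += 3)
            // 1-based indices, reversed winding to match the x-flip
            sb.AppendLine($"f {tris[i] + 1} {tris[i + 2] + 1} {tris[i + 1] + 1}");

        return sb.ToString();
    }
}
```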

Wednesday, July 2, 2014

Game engine review roundup

Unreal Engine 4. Very good high-end support, integrated vertex-painter, great for making 3D shooty games in huge landscapes. But it's very heavy and assumes you're making a 3D shooty game in a huge landscape, and it feels very bloated if you're not. 7/10.

Unity 4. Good medium-weight engine, with very few game genre assumptions. But that flexibility turns into tedium when you have to re-implement NPC AI / basic movement / damage systems / camera controls / etc. for the hundredth time. Very bad stock controller and GUI support. 7/10.

CryEngine 3. Very good high-end support that assumes you're making a 3D shooty drivey game in a huge landscape surrounded by water. Fantastic foliage and rock placement tools that are useless when that's not what your game's about. 7/10.

Source 1. The 2000-era engine that has aged the best, with its smart bets on image-based rendering and lightmapping. Physics feel tuned so well that Titanfall used the engine pretty much for that. However, has a horribly bad 3D asset pipeline that forces artists to learn an obscure "Quake C" syntax from the early 90s in order to import art -- which, in a 3D engine, is totally inexcusable. 7/10.

Twine. Best-in-class text support, exports seamlessly to all platforms, very little technical friction and learning curve. Very diverse and helpful user community. But text markup scheme feels patched-together and inconsistent, requires users to learn Javascript (?!) for more advanced features. No built-in 3D or multiplayer support. 7/10.