This is a technical overview of how I built certain parts of Rinse and Repeat. It spoils the game, so you should probably play it first if you care about stuff like that.
Rinse and Repeat took about 1-2 months to make. For these sex games, my development process can basically be summarized as "art first" -- my very first in-engine prototypes are usually about establishing mood and texture and setting up the character you'll be staring at, since those are by far the most important parts of the game.
As I mentioned before in my more design-oriented write-up, R&R began as a technical test for fluid simulation. The water droplets are physically simulated Shuriken particles that collide with certain mesh colliders parented to the dude's body. (So, each of his butt cheeks was individually modeled in Maya, as well as his pecs, etc.)
I also have several other water particle systems going: some mist particles on top of his head, some mist particles for his shoulders, and two small waterfall particle systems for his elbows.
I quickly realized, though, that the water particles weren't enough to sell the idea of "wetness." Real-life showers proved surprisingly disappointing in this regard -- from what I can tell, most real-life "wetness" cues are about matted down hair and water dripping down from your silhouette. On the surface of the body itself, water is surprisingly static and boring, sitting and accumulating on your skin before occasionally rolling down.
It's important not to let your realism be constrained by reality, so I set about exaggerating this wetness. My two main shader inspirations were the sweat shader from the NBA2K1x series, and a kind-of-weird shower scene in Life Is Strange. I couldn't find a technical breakdown for either, so I hacked my own basic implementations together, which are all about animating the body's normal map to distort the cubemapped reflections.
My "sweat" trickles are basically a normal-mapped cubemap mask that scrolls downwards based on a planar projection (my model's UVs weren't uniformly upright, so I had to reproject.) You can see here that it doesn't exactly run down the contours of his body.
The result isn't nearly as sophisticated or realistic as in the NBA games, and in fact it almost makes it seem like he's being slathered in thick honey or oil, but combined with other conceptual cues it reads like water, and most players won't really notice the strange distortion going on.
None of these three effects (particles, trickles, droplets) was good enough by itself, but layered on top of each other with some shower sounds, they mostly get the job done.
Here's the scene setup in the editor. Notice I only bothered modeling the showers and the visible part of the hallway -- there's literally nothing else beyond what the player can see. Also notice how the NPCs begin in their final positions; this makes it easier to tune and pose them, and when the game starts I just teleport them out of sight as the screen fades in. I follow the same rule for the ending sequence with all the hands that appear on his face; I keep the hands visible in-editor to help me pose them, and simply toggle them on and off when the time comes.
The one exception here is the disco ball, which I keep hidden by default. The disco ball reflections, as you may have guessed, are just a point light with a 6-sided cubemapped light cookie.
Most of the other specific details are based on techniques I've used in previous games...
The intro title dissolve is the old alpha test cutoff trick I also used in Stick Shift, except timed closely to the dude's walking pace. To go into slow motion, I set Time.timeScale to 0.1f, and then from there I just did some trial and error with tuning the exact timing. I built my own custom patrol-path system to manage all the dudes' movements and choreography, and their walking speed runs on a lerp, to ensure consistency -- using CharacterController, Rigidbody, or NavMesh movement seemed like overkill and too little at the same time, given how specific my needs were.
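A patrol-path system like this is mostly just constant-speed lerps between waypoints. Here's a rough language-agnostic sketch in Python of that core idea (the function shape and waypoint format are my own illustration, not his actual code):

```python
import math

def patrol_position(waypoints, speed, t):
    """Return the 2D position along a waypoint path after t seconds,
    moving at a constant speed and lerping between consecutive points."""
    dist = speed * t  # total distance traveled so far
    for (ax, ay), (bx, by) in zip(waypoints, waypoints[1:]):
        seg = math.hypot(bx - ax, by - ay)
        if dist <= seg:
            f = dist / seg  # fraction along this segment
            return (ax + (bx - ax) * f, ay + (by - ay) * f)
        dist -= seg
    return waypoints[-1]  # past the end of the path: clamp to the last point
```

Because position is a pure function of elapsed time, the choreography stays perfectly consistent run to run -- which is exactly what physics-driven movement (Rigidbody, CharacterController) makes hard to guarantee.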
I track the showering schedule using the player's system clock and ample use of TimeSpan, much as how I tracked real-life timer cooldowns in Hurt Me Plenty and Stick Shift. I talk about how to do it in this post.
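The general pattern with system-clock scheduling is: snap the current wall-clock time to some slot boundary, then subtract to get a TimeSpan-style wait. A hypothetical Python sketch (the top-of-the-hour slot rule here is invented for illustration, not the game's actual schedule):

```python
from datetime import datetime, timedelta

def seconds_until_next_slot(now, slot_minutes=60):
    """How long until the next shower slot, assuming (hypothetically)
    that slots start at the top of every real-world hour."""
    # snap down to the current slot boundary, then step one slot forward
    next_slot = (now.replace(minute=0, second=0, microsecond=0)
                 + timedelta(minutes=slot_minutes))
    return (next_slot - now).total_seconds()
```

Because this reads the real system clock rather than game time, the cooldown keeps ticking even when the game is closed -- the same trick as the Hurt Me Plenty timers.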
One of my big coding regrets is overengineering this system by storing the schedule generation seeds via Szudzik pairing, because I wanted an easy way to tie in a possible "Midas Gym" website. I also had plans for Google Calendar integration and .ICS file export support, so people could potentially enter the shower schedules into their personal calendars, but I ultimately scrapped all that stuff. Figuring out OAuth procedures for Google Calendar looked annoying, and the .ICS file format specification looked like a mess. Szudzik pairing was meant as an easy way to synchronize the game and the website without manually storing a database, but the fact that it involved implementing my own time / day format means there are still all these little edge-case bugs with the scheduling, even now. Ugh.
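For reference, Szudzik pairing maps two non-negative integers to a single unique integer and back, which is why it's handy for collapsing a (day, slot)-style pair into one shareable seed. A small Python sketch of the standard formulation:

```python
from math import isqrt

def szudzik_pair(x, y):
    """Map two non-negative ints to one unique int (Szudzik's pairing)."""
    return x * x + x + y if x >= y else y * y + x

def szudzik_unpair(z):
    """Invert the pairing: recover the original (x, y) from z."""
    s = isqrt(z)       # the larger of the two original values
    r = z - s * s
    return (r, s) if r < s else (s, r - s)
```

Unlike the older Cantor pairing, Szudzik's version packs values more tightly (the result never exceeds the square of the larger input), which keeps the seeds short enough to paste into a URL.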
Because I have the lights switching on and off in the game, lightmaps were not ideal. However, I still wanted some nice occlusion along certain edges, as well as subtle shadows. To achieve this, I relied on a pretty old game art trick: manually placing semi-transparent "shadow cards" to paint specific shadows where I wanted them.
This "shadow card" is basically a quad with a very faint Particles/Multiply material. It proved very versatile... I use it for blob shadows beneath the NPCs, I use it for the edges of the cylindrical pillars (I really love shiny tiled 3D cylinders by the way), and I even use it beneath the towel table and on the bulletin board to make certain elements pop a bit more.
My "lip sync" solution is very rudimentary, based partly on how Valve did it in Half-Life 1 -- in LateUpdate(), I lerp the model's mouth bone position based on the voice over audio's overall volume. It isn't terribly expensive-looking, but I felt it was important to have some sort of mouth movement to accompany the voice over. Having worked with more sophisticated methods, like Half-Life 2's MSAPI phoneme extraction to WAV metadata with blendshapes, convinced me that a complex method was a lot more work than it was worth. (Besides, I imagined this bro dude shouting most of the time anyway.)
My implementation is mostly based on Wil Giesler's volume-based lip sync code, except I replaced the blend shape weight with my own mouth bone lerp %.
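The core of any volume-based approach is: take the current window of audio samples, compute an RMS-ish loudness, scale it into a 0-1 "mouth open" value, and smooth it so the jaw doesn't pop frame to frame. A Python sketch of that logic (the sensitivity and smoothing constants are placeholders I picked for illustration):

```python
import math

def mouth_open_fraction(samples, sensitivity=4.0, smooth=0.5, prev=0.0):
    """Per-frame: RMS volume of the current audio window drives how far
    the mouth lerps between its 'closed' and 'open' poses (0.0 to 1.0)."""
    if not samples:
        target = 0.0  # silence: close the mouth
    else:
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        target = min(1.0, rms * sensitivity)
    # lerp toward the target instead of snapping, to avoid jaw flutter
    return prev + (target - prev) * smooth
```

In-engine, that returned fraction would drive a bone position (or a blend shape weight, as in Giesler's version) every LateUpdate().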
The audio files are randomly stitched together at runtime. The result is a stilted, inhuman rhythm / cadence to the thing, which I'm OK with -- the idea of an obviously computerized hunky shower stud is much funnier than a completely flawless 3D-photoscanned shower stud who speaks perfectly.
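Runtime stitching along these lines can be as simple as queueing random clips back to back. A hypothetical Python sketch (the no-immediate-repeat rule is my own guess at a sensible detail, not necessarily what the game does):

```python
import random

def stitch_playlist(clips, n, rng=None):
    """Build a voice-over 'sentence' by queueing n randomly chosen clips
    back to back, skipping immediate repeats so the same bark never
    plays twice in a row. Assumes at least two distinct clips."""
    rng = rng or random.Random()
    out = []
    while len(out) < n:
        c = rng.choice(clips)
        if not out or c != out[-1]:
            out.append(c)
    return out
```

Playing the clips with no crossfade or pause between them is what produces that stilted, computerized cadence.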
* * *
If I had to articulate a common thread in my techniques here, it would involve understanding a particular "AAA" phenomenon (sweat, dynamic GI, lip sync) and then approximating my own "AA" solution (scrolling cubemap masks, hand-placed shadow cards, volume-based mouth flaps)... what's important is the gesture, the idea, the fact that these details are here at all. When you're working on details, don't get caught up in the details... (or your own inferiority!)