
Remaking Riven: The Characters

I love how excited and passionate folks are about Riven and its beloved cast. Your enthusiasm shows! I thought many of you might appreciate more of a look into the thought process behind it.

So why aren’t we using the original videos?

Simply put, we can’t integrate them into the 3D environments in a believable way. At the very best they would look like holograms, even with ‘AI upscaling’. (Enhance!) And if we used the original videos, there would be no way to improve or expand the scenes or story; we’d be locked into the directing, framing, editing, and acting choices of the existing footage.

We can’t improve on real people acting in a stage-like setting, so we have a lot to overcome when using CGI characters. There are no camera angles or editing to hide behind, because these are 8+ minute monologues, with no cuts, that can be viewed in VR. Even the original performances had cuts, a luxury we don’t have now. It’s been a journey getting here.

Why didn’t we do new live-action then? Why 3D? What about volumetric capture?

We felt that replacing the entire cast and every single performance wouldn’t work well for a couple of different reasons. 

The first is that one of our top priorities was maintaining John Keston’s original performance in a way that made sense for the format of the game. The actor cast to perform his mocap in this new version studied the role and acted to John’s original audio. This was one of the only ways we could realistically keep John’s performance alive while meeting this new version of Riven’s requirements for interaction; presented in its original format alongside everything else, the footage would instantly take the player out of the experience.

The second reason is that volumetric capture is expensive, offers comparatively small capture volumes, and still requires substantial cleanup to remove artifacts. A previous employer of mine did lots of volumetric capture, and the raw scans are not clean enough to throw into a game and call it a day. Especially not at 60 Hz. If every single frame of every single character and prop is a unique, fully textured mesh, the data gets heavy, and it would be difficult to deliver at the resolution we’d like on all of our target platforms.
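
To get a feel for why per-frame meshes get heavy so fast, here’s a rough back-of-envelope sketch. Every number in it is an illustrative assumption I picked for the example (vertex counts, bone counts, byte sizes), not an actual figure from our pipeline, and it ignores textures and compression entirely:

```python
# Back-of-envelope comparison: volumetric capture (a unique mesh per
# frame) vs. a single skinned mesh driven by per-frame bone transforms.
# All constants below are assumptions for illustration only.

BYTES_PER_VERTEX = 32        # assumed: packed position, normal, UV
VERTS_PER_SCAN = 40_000      # assumed vertex count of one cleaned scan
FPS = 60
SECONDS = 8 * 60             # one 8-minute monologue

# Volumetric: every frame is a unique, fully textured mesh.
frames = FPS * SECONDS
volumetric_bytes = frames * VERTS_PER_SCAN * BYTES_PER_VERTEX

# Skinned character: one mesh, plus per-frame keys for each bone.
BONES = 200                  # assumed skeleton size, incl. face bones
BYTES_PER_BONE_KEY = 48      # assumed: rotation + translation + scale
skinned_bytes = (VERTS_PER_SCAN * BYTES_PER_VERTEX
                 + frames * BONES * BYTES_PER_BONE_KEY)

print(f"volumetric: {volumetric_bytes / 1e9:.1f} GB")
print(f"skinned:    {skinned_bytes / 1e6:.1f} MB")
```

Even with these generous assumptions, the per-frame-mesh approach comes out two orders of magnitude heavier for a single monologue, before textures are even counted.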

And that’s before we render the environments, FX, game logic, etc. on top of it. Gotta share that precious VRAM! Speaking of mocap, we shot all the performance capture in-house with off-the-shelf hardware. No specialized optical capture volumes. Our face capture was shot on an iPhone. We used industry-standard software for processing the mocap and getting it onto the characters before animation cleanup. For those in the know: Xsens, Faceware, MotionBuilder, and Maya, along with Unreal.

Which brings me to the point of all this…

Riven ’24 is a multi-platform release, which means it needs to work on lots of hardware, and that dictates which features we’re able to use in the game engine. Moving characters, for example, can’t receive baked lighting the way static environment meshes can. We put a lot of effort into making our game look as good as possible for as many people as possible, and many have commented on how good the demo looks on ‘low’ graphics settings.

In conclusion

The new version of Riven is going to be different from the original, but that’s also precisely why we’re keeping the original around. They are two different experiences, and people can enjoy them in different ways. The character performances will be both different and familiar in many ways. That doesn’t mean one is inherently better or worse… they are just different.