I love how excited and passionate folks are about Riven and its beloved cast. Your enthusiasm shows! I thought many of you might appreciate more of a look into the thought process behind it.
So why aren’t we using the original videos?
Simply put, we can’t integrate them into the 3D environments in a believable way. At the very best they would look like holograms, even with ‘AI upscaling’. (Enhance!) And if we used the original videos, there would be no way to improve or expand the scenes or story; we’d be locked into the directing, framing, editing, and acting choices in the existing footage.
Why didn’t we do new live-action then?
We can’t improve on real people acting in a stage-like setting, so we have a lot to overcome when using CGI characters. There are no camera angles or editing to hide behind, because these are 8+ minute monologues, with no cuts, that can be viewed in VR. Even the original performances had cuts, a luxury we don’t have now. It’s been a journey getting here.
Why 3D? What about volumetric capture?
We felt that replacing the entire cast and every single performance wouldn’t work well for a couple of different reasons.
The first is that one of our top priorities was maintaining John Keston’s original performance in a way that made sense for the format of the game. The actor who was cast to perform his mocap in this new version studied his role and acted to John’s original audio. This was one of the only ways we could realistically keep John’s performance alive while meeting this new version of Riven’s requirements for interaction; presented in its original format alongside everything else, that footage would have instantly taken the player out of the experience.
The second reason is that volumetric capture is expensive, offers comparatively small capture volumes, and still requires substantial cleanup to remove artifacts. One of my previous jobs did a lot of volumetric capture, and the raw scans are not clean enough to throw into a game and call it a day. Especially not at 60 Hz.
If every single frame of every single character and prop is a unique, fully textured mesh, it gets data-heavy fast and would be difficult to deliver at the resolution we’d like on all of our target platforms. And that’s before we account for rendering the environments, FX, game logic, etc. on top of it. Gotta share that precious VRAM!
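To put very rough numbers on “data-heavy”, here’s a back-of-envelope sketch. Every figure in it (mesh density, texture size, clip length) is an illustrative assumption rather than a number from our actual pipeline, but it shows why a unique mesh and texture for every frame adds up so quickly:

```cpp
// Rough, illustrative arithmetic only -- every number below is an assumption
// made for the sake of the example, not a figure from the actual production.
#include <cstdio>

int main()
{
    const double verticesPerFrame = 30000.0;                // assumed mesh density for one character
    const double bytesPerVertex   = 32.0;                   // position + normal + UV, uncompressed
    const double textureBytes     = 2048.0 * 2048.0 * 4.0;  // one uncompressed 2K RGBA texture per frame
    const double framesPerSecond  = 60.0;
    const double seconds          = 8.0 * 60.0;             // an 8-minute monologue

    const double perFrame  = verticesPerFrame * bytesPerVertex + textureBytes;
    const double perSecond = perFrame * framesPerSecond;
    const double total     = perSecond * seconds;

    std::printf("~%.0f MB/s streamed, ~%.0f GB for one uncompressed scene\n",
                perSecond / 1e6, total / 1e9);
    // Real volumetric pipelines compress aggressively, but even a couple of
    // orders of magnitude in savings still leaves a very large per-frame
    // data stream competing with environments, FX, and game logic for memory.
    return 0;
}
```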
Speaking of mocap, we shot all the performance capture in-house, with off-the-shelf hardware. No specialized optical capture volumes. Our face capture was shot on an iPhone. We used industry-standard software for processing the mocap and getting it onto the characters before animation cleanup. For those in the know: Xsens, Faceware, MotionBuilder, and Maya, along with Unreal.
Why do we see some classic FMVs in Myst (’20/’21), then? Why can’t classic FMVs be added to Riven, too?
We were only able to swap some of the original FMVs (Full Motion Videos) into Myst because of the way the characters were presented to the player.
The scenes where we could do this were the ones already presented on flat surfaces in the original Myst: specifically, via “imager” devices or framed inside small linking-book panels. That made it relatively easy to swap in a different two-dimensional media asset as desired.
We could do it for most of the character encounters in Myst but, importantly, not all of them. Players still encounter a CG character in the game’s endings.
Which brings me to the point of all this…
Riven ’24 is a multi-platform release, which means it needs to work on lots of hardware, and that dictates which features we’re able to use in the game engine. Moving characters can’t receive baked lighting the way static environment meshes can, for example.
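For a sense of what that constraint looks like in practice, here’s a minimal Unreal-style C++ sketch; the function and component setup are hypothetical rather than our actual code, but the mobility settings shown are how the engine distinguishes meshes that can use precomputed lighting from ones that must be lit at runtime:

```cpp
// Hypothetical setup function, for illustration only: static scenery can be
// marked Static and baked into lightmaps, while an animated character has to
// stay Movable and be lit dynamically every frame.
#include "Components/StaticMeshComponent.h"
#include "Components/SkeletalMeshComponent.h"

void ConfigureLightingMobility(UStaticMeshComponent* EnvironmentMesh,
                               USkeletalMeshComponent* CharacterMesh)
{
    // Static environment geometry: eligible for precomputed (baked) lighting.
    EnvironmentMesh->SetMobility(EComponentMobility::Static);

    // A moving character can't be baked; it must remain Movable and be lit
    // dynamically, which is part of the per-platform cost mentioned above.
    CharacterMesh->SetMobility(EComponentMobility::Movable);
}
```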
Added to that is the need to support VR, where we (by definition) cannot control the precise position of the player’s camera. Those complexities make it impossible to place a two-dimensional, low-resolution video card into the scene at anything approaching reasonable quality.
In Riven, the characters are not contained as neatly… they occupy the same space as the player and are not “framed” or “flattened” in the same way. For Riven we wanted characters that were in the scene with you and could be seen in 3D with a player-controlled camera. That was also the only way to bring the game to VR with a real feeling of presence with the characters. And it makes for a more cohesive gameplay experience, rather than jumping between the scenes we could and couldn’t have done with live actors.
We put a lot of effort into making the game look as good as possible for as many people as possible, and many have commented on how good the demo looks even on ‘low’ graphics settings.
In conclusion
The new version of Riven is going to be different from the original, but that’s also precisely why we still keep the original around. They are two different experiences, and therefore people can enjoy them in different ways. The character performances will be both different and familiar in many ways. It doesn’t necessarily mean that one is inherently worse, or better… They are just different.