
Discussion - Vertex Facial Animations


Recommended Posts

OK, so this is just a discussion about what the best method would be to implement vertex facial animations in JKA. It's been briefly talked about, but never fully discussed to the point where there's a solid idea.

 

From what I remember, the Source engine's method is to:

 

- export your character as you normally would (skin-weighted to a skeleton) to the uncompiled model format, SMD

- export a VTA file, which stores the vertex morphs (a rough sketch of the layout is below)

 

Would this method be the best way (obviously with different model formats for JKA), or does someone have another idea?
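
For reference, and going from memory (so the details may be slightly off), a VTA file is just a plain-text companion to the SMD: frame 0 of the vertexanimation block is the basis shape, and each later frame is a morph target listing only the vertices that changed, as vertex index, position and normal. Roughly:

```
version 1
nodes
  0 "root" -1
end
skeleton
  time 0
  0  0 0 0  0 0 0
end
vertexanimation
  time 0 # basis (neutral) shape
  0  0.000 0.000 0.000  0.000 0.000 1.000
  1  1.250 0.000 0.000  0.000 0.000 1.000
  time 1 # morph target, e.g. "jaw_open"
  1  1.250 -0.400 0.000  0.000 0.000 1.000
end
```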

Link to comment
  • 2 weeks later...

That would look very odd, as it would have to be an MD3 head bolted onto a GLM neck/body. You would clearly see where the head is bolted on.

Why? Just do it properly. :P

 

It's the only way that does not involve a shitload of coding, new model formats and whatnot.

Link to comment

Another idea besides vertex animation: facial animation scripting.

 

Setting up facial expression animations (with a complex facial bone setup) similar to what is already available in JKA, but with a lot more of them and in more detail. Then have a third-party lip-syncing program that lets the user map facial expressions (for a voice audio clip) to a script file. This script file is then loaded by the game (possibly using ICARUS).

 

A nice feature for the lip-syncing program would be an "auto create" mode that attempts to auto-map facial expressions to a voice audio clip. The user could then manually edit the parts that don't quite match up.
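
Purely as an illustration of what such a script file might contain (the extension, keywords and expression names below are all invented; nothing like this exists in JKA yet): timestamps against the audio clip, each keying a viseme or an expression with a blend weight.

```
// jaden_stopthat_01.fas -- hypothetical facial animation script
sound "sound/chars/jaden/stopthat_01.mp3"
{
    0     viseme  SIL     0.0
    120   viseme  S       0.8
    240   viseme  T       0.7
    360   viseme  AH      1.0
    360   expr    ANGRY   0.6     // expressions can overlap visemes
    900   viseme  SIL     0.0
}
```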

Link to comment

I like this last idea of a complex bone face rig... I was planning to do just that. We can create a complete library of visemes to be used, with the lip-sync tool auto-fitting the visemes to the sound file. What about modifying/reusing the lip-sync code from the game? We would also need a library of facial expression poses that the user would manually set against the soundtrack.

Link to comment

The lip-sync code from the game is ancient at best, and was designed essentially with 2D faces in mind (which should already give you some idea of how bad it is).

 

Essentially, what you need is something akin to this (video of it in action), which allows you to generate miniature scripts for audio files. When a script for a particular audio file is detected for a particular language, the game would interpret the script (or fall back to the old method if none is detected), which in turn would drive the facial animation through a number of different facial states, which I assume the system would require as input. To keep the animation smooth and crisp, splines or linear interpolation (aka lerping; you might have heard these terms before) would be used (maybe embedded in the script itself? I have no idea).
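
To illustrate the lerping part: assuming each facial state is just a set of per-pose weights and the script supplies keyframes along the audio clip, the blend is only a handful of lines. Everything named here is hypothetical, not existing JKA code:

```c
// Hypothetical sketch, not existing JKA code: blend between two facial-state
// keyframes.  A "facial state" here is just a weight per facial pose
// (viseme or expression).
#define NUM_FACE_POSES 16

typedef struct {
	int   timeMs;                   // keyframe time within the audio clip
	float weights[NUM_FACE_POSES];  // 0..1 weight per facial pose
} faceKey_t;

// Linear interpolation between the two keyframes bracketing 'nowMs'.
static void Face_LerpKeys( const faceKey_t *a, const faceKey_t *b, int nowMs, float *out )
{
	float t = 0.0f;
	int   i;

	if ( b->timeMs > a->timeMs ) {
		t = (float)( nowMs - a->timeMs ) / (float)( b->timeMs - a->timeMs );
		if ( t < 0.0f ) t = 0.0f;
		if ( t > 1.0f ) t = 1.0f;
	}
	for ( i = 0; i < NUM_FACE_POSES; i++ ) {
		out[i] = a->weights[i] + t * ( b->weights[i] - a->weights[i] );
	}
}
```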

Link to comment

@eezstreet -- there are a number of scripts, tools, and plugins that do this inside 3ds Max. We can make a "mouth area" animation for all of the standard visemes (see http://aidreams.co.uk/forum/index.php?page=Visemes_-_for_Character_Animation ) and map the phonemes to the visemes. Doesn't the game already do this, only with a few visemes?
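
For the phoneme-to-viseme mapping itself, a rough sketch of the kind of reduction table involved (the groupings follow the usual Preston Blair-style mouth shapes; the enum and function names are made up for illustration):

```c
// Hypothetical reduction of phonemes to a small set of mouth shapes.
typedef enum {
	VISEME_SIL,   // silence / rest
	VISEME_MBP,   // m, b, p  (lips pressed together)
	VISEME_FV,    // f, v     (lower lip under teeth)
	VISEME_LDT,   // l, d, t, n
	VISEME_AH,    // a, ah    (open mouth)
	VISEME_EE,    // e, i
	VISEME_OH,    // o
	VISEME_OO,    // u, w, oo (pursed lips)
	NUM_VISEMES
} viseme_t;

static viseme_t MapPhonemeToViseme( char phoneme ) {
	switch ( phoneme ) {
	case 'm': case 'b': case 'p':            return VISEME_MBP;
	case 'f': case 'v':                      return VISEME_FV;
	case 'l': case 'd': case 't': case 'n':  return VISEME_LDT;
	case 'a':                                return VISEME_AH;
	case 'e': case 'i':                      return VISEME_EE;
	case 'o':                                return VISEME_OH;
	case 'u': case 'w':                      return VISEME_OO;
	default:                                 return VISEME_SIL;
	}
}
```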

 

As for facial expressions to go along with speech... that's pretty complicated. Perhaps we could modify the ROFF system code and the existing game lip-sync code to achieve all of the above. ROFF files can already process and use sound files, and we could add a new notetrack type called "expression"; you would then place expression notetrack keys along the sound file animation inside 3ds Max or Blender to play the facial expressions. In-game, you'd check for expression keys in the ROFF and play the facial expression animation as the sound plays. The only thing is that ROFF sounds currently play only on the BODY channel, so that would need a slight modification to play on the VOICE channel. What do you guys think of that?
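
To sketch what handling an "expression" notetrack key might look like in code (the note layout and G_PlayFacialExpression are invented for illustration; gentity_t and Q_stricmpn are the existing engine types/functions):

```c
// Hypothetical sketch of handling an "expression" notetrack key during ROFF playback.
typedef struct {
	int  timeMs;     // offset into the sound/animation
	char text[64];   // e.g. "expression angry"
} roffNote_t;

static void ProcessExpressionNotes( gentity_t *ent, const roffNote_t *notes, int numNotes, int elapsedMs )
{
	int i;

	for ( i = 0; i < numNotes; i++ ) {
		if ( notes[i].timeMs > elapsedMs ) {
			break;  // notes are assumed sorted by time
		}
		if ( !Q_stricmpn( notes[i].text, "expression ", 11 ) ) {
			// Start the named facial expression on the entity's face bones.
			// A real implementation would also remember which notes have
			// already fired so they don't retrigger every frame.
			G_PlayFacialExpression( ent, notes[i].text + 11 );  // hypothetical helper
		}
	}
}
```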

Link to comment
  • 1 month later...

I did a search for all the face animations and could only find code for FACE_TALK0 and FACE_TALK1; nothing about 2, 3, or 4 except in the anim lists.

It doesn't mention the anims directly. It uses FACE_TALK0 for mouth closed and FACE_TALK1 + amplitude for all other sequences.
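
For what it's worth, if anyone wanted to put FACE_TALK2-4 to use, the obvious (purely hypothetical) change would be to bucket the voice amplitude into the higher anims instead of always using FACE_TALK1. The thresholds below are made up:

```c
// Purely hypothetical: pick a talk anim from the voice amplitude instead of
// always using FACE_TALK1.  FACE_TALK0-4 exist in the anim list; the
// thresholds here are made up.
static int FaceAnimForAmplitude( float amplitude ) {
	if ( amplitude <= 0.05f ) return FACE_TALK0;  // mouth closed
	if ( amplitude <= 0.30f ) return FACE_TALK1;
	if ( amplitude <= 0.55f ) return FACE_TALK2;
	if ( amplitude <= 0.80f ) return FACE_TALK3;
	return FACE_TALK4;
}
```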

Link to comment

I've been a bit more open-minded recently to this, and I've begun to look around and find the best method to handle this.

Basically I've been thinking that instead of doing an MD3 head, Ghoul2 is perfectly fine as it does store raw vertex data. MD3 and Ghoul2 are actually very similar in format; Ghoul2 has some extra crap to handle dismemberment and bone-based animation, but otherwise the fundamentals of the two formats are much the same.

So...basically I've looked into something like this:

http://www.isca-speech.org/archive_open/archive_papers/avsp01/av01_110.pdf

 

However, I'd like to modify this a little bit so that we use a Kinect sensor, which would track motion as well as depth. The idea is to use a series of about 10-20 facial markers with unique identifiers. You would then record a series of dialogue using the Kinect sensor and a special program, probably covering the actual dialogue in the game ("Stop that, Jaden!"). This would export a file, which would be read by the game.

 

Then, for each model, you would specify tags that correlate to each marker, and the difference in the marker positions would be used to drive the face.

I'm thinking that we would have to modify the Ghoul2 format entirely, as it's too jittery at the moment to handle something like this.
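
To make that concrete, here's a minimal sketch of how tracked marker deltas might be mapped onto the model via tags. None of this is existing Ghoul2 code; the struct, the falloff radius and the function are assumptions for illustration (only vec3_t and the vector helpers are from the engine's q_shared/q_math):

```c
// Hypothetical sketch: drive face vertices from tracked marker deltas.
// Each marker maps to a tag on the head; vertices near that tag are offset
// by the marker's movement relative to its neutral position.
typedef struct {
	vec3_t neutral;    // marker position in the neutral (rest) capture
	vec3_t current;    // marker position in the current capture frame
	vec3_t tagOrigin;  // matching tag position on the model
	float  radius;     // vertices within this distance are affected
} faceMarker_t;

static void ApplyMarkerDeltas( vec3_t *verts, int numVerts, const faceMarker_t *markers, int numMarkers )
{
	int    m, v;
	vec3_t delta;

	for ( m = 0; m < numMarkers; m++ ) {
		VectorSubtract( markers[m].current, markers[m].neutral, delta );

		for ( v = 0; v < numVerts; v++ ) {
			float dist = Distance( verts[v], markers[m].tagOrigin );
			if ( dist < markers[m].radius ) {
				// full delta at the tag, fading to zero at the radius
				float w = 1.0f - ( dist / markers[m].radius );
				VectorMA( verts[v], w, delta, verts[v] );
			}
		}
	}
}
```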

Link to comment

You should be able to use any camera. I could have sworn I read somewhere that someone created UDK facial animations this way, and not with a Kinect.

Yeah, probably. But I don't understand how you'd get depth data either way, actually. If you use only a single camera, there would be almost no way to determine the depth... right? Maybe you could run a simple calibration and then interpret the difference in marker size versus the baseline as depth.

 

You could also place two cameras at a fixed angle to each other and calculate the depth via some basic trigonometry. However, facial features (the nose) might occlude some of the markers.
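
For the two-camera setup, the trigonometry reduces to the standard rectified-stereo relation: depth = focal length x baseline / disparity. A tiny sketch, assuming the cameras are level and the marker's horizontal pixel position is known in both images (the function and parameter names are made up):

```c
// Hypothetical: depth of a marker seen by two cameras a known distance apart.
// disparity = difference in the marker's horizontal pixel position between
// the left and right images.  Names and units are illustrative.
static float MarkerDepth( float focalLengthPx, float baselineMm, float xLeftPx, float xRightPx )
{
	float disparity = xLeftPx - xRightPx;

	if ( disparity <= 0.0f ) {
		return -1.0f;  // marker occluded or effectively at infinity
	}
	return focalLengthPx * baselineMm / disparity;  // depth in millimetres
}
```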

Link to comment

Ghoul2 is perfectly fine as it does store raw vertex data.

Could you elaborate on this? Are you thinking about an extra file per face state?

 

Yeah, probably. But I don't understand how you'd get depth data either way, actually.

Maybe you can ignore the depth? The only thing that causes outward movement on a face is blowing out one's cheeks, right?
Link to comment
