Tempust85 Posted November 10, 2013
Ok, so this is a discussion about the best method to implement vertex facial animations in JKA. It's been briefly talked about, but never really discussed to the point where there's a solid idea. From what I remember, the Source engine's method is to:
- export your character as you normally would (skin weights to a skeleton) to the uncompiled model format, SMD
- export a VTA file which stores the vertex morphs
Would this method be the best way (obviously with different model formats for JKA), or does someone have another idea?
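For reference, the core of the VTA/morph-target approach is just a weighted sum of per-vertex offsets over a neutral pose. A minimal sketch of that blending, assuming illustrative names and layout (this is not the actual Source or Ghoul2 data structure):

```cpp
// Minimal sketch of morph-target ("vertex morph") blending.
// Each target stores per-vertex position deltas from the neutral face; at runtime the
// final vertex position is the neutral pose plus the weighted sum of active deltas.
#include <cstddef>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

struct MorphTarget {
    // sparse list of affected vertices and their offsets from the neutral pose
    std::vector<std::size_t> indices;
    std::vector<Vec3>        deltas;   // same length as indices
};

// Blend any number of (target, weight) pairs onto the neutral mesh.
void ApplyMorphs(const std::vector<Vec3>& neutral,
                 const std::vector<std::pair<const MorphTarget*, float>>& active,
                 std::vector<Vec3>& out)
{
    out = neutral;                                  // start from the neutral/bind pose
    for (const auto& [target, weight] : active) {
        for (std::size_t i = 0; i < target->indices.size(); ++i) {
            Vec3& v = out[target->indices[i]];
            v.x += target->deltas[i].x * weight;    // accumulate weighted offsets
            v.y += target->deltas[i].y * weight;
            v.z += target->deltas[i].z * weight;
        }
    }
}
```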
eezstreet Posted November 10, 2013
Or use a different lip-syncing algo first ;/
mrwonko Posted November 10, 2013
Put an MD3 head on your model.
Tempust85 Posted November 19, 2013
That would look very odd as it would have to be an MD3 head bolted onto a GLM neck/body. You would clearly see where the head is bolted on.
mrwonko Posted November 19, 2013
> That would look very odd as it would have to be an MD3 head bolted onto a GLM neck/body. You would clearly see where the head is bolted on.
Why? Just do it properly. It's the only way that does not involve a shitload of coding, new model formats and whatnot.
eezstreet Posted November 20, 2013
How is that any different from two separate objects in a modeling program?
Tempust85 Posted November 20, 2013
This is an example of a static head model attached onto an animated body: [image attachment]
eezstreet Posted November 20, 2013
Looks more like the artist's fault than the fault of the system itself. The tones of the texture don't match well, thus creating a seam.
Tempust85 Posted November 20, 2013
Another idea besides vertex animation: facial animation scripting. Set up facial expression animations (with a complex facial bone setup), similar to what is already available in JKA but with far more expressions and more detail. Then have a 3rd-party lip-syncing program that lets the user map facial expressions (for a voice audio clip) to a script file. This script file is then loaded by the game (possibly using ICARUS). A nice lip-syncing program feature would be an "auto create" mode, which attempts to auto-map facial expressions for a voice audio clip; the user can then manually edit the parts that don't quite match up.
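A rough sketch of what the playback side of such a script could look like, assuming a hypothetical format of timed (time, expression, weight) keys exported by the lip-sync tool; none of these names exist in the JKA codebase:

```cpp
// Hypothetical facial-expression script playback: the 3rd-party tool exports a list of
// timed keys, and the game picks the key active at the current time of the voice clip.
// Names and structures are illustrative only.
#include <string>
#include <vector>

struct ExpressionKey {
    float       time;        // seconds into the voice clip
    std::string expression;  // e.g. "viseme_AA", "brow_raise" - animation to blend in
    float       weight;      // 0..1 blend amount at this key
};

struct FacialScript {
    std::vector<ExpressionKey> keys;   // sorted by time
};

// Return the key that should drive the face at clipTime. Simplest possible version:
// hold the most recent key; a real system would blend toward the next one.
const ExpressionKey* CurrentKey(const FacialScript& script, float clipTime)
{
    const ExpressionKey* current = nullptr;
    for (const auto& key : script.keys) {
        if (key.time > clipTime)
            break;
        current = &key;
    }
    return current;
}
```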
Archangel35757 Posted November 22, 2013
I like this last idea of a complex bone face rig... I was planning to do just that. We can create a complete library of visemes to be used with the lip-sync tool, auto-fitting the visemes to the sound file. What about modifying/using the lip-sync code from the game? We would also need a facial pose library for facial expressions, which the user would manually set against the soundtrack.
eezstreet Posted November 22, 2013
The lip-sync code from the game is ancient at best, and was designed essentially with 2D faces in mind (so that should already give you some idea of how bad it is). Essentially, what you need is something akin to this (video of it in action), which allows you to generate miniature scripts for audio files. When a script for a particular audio file is detected for a particular language, the game would interpret the script (or fall back to the old method if none is detected), which in turn would drive the face through a number of different facial states, which I assume the system would require as input. To keep the animation smooth and crisp, splines or linear interpolation (aka lerping; you might have heard these terms before) would be used (maybe embedded in the script itself? I have no idea).
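The interpolation mentioned here is straightforward. A minimal sketch of lerping an expression weight between two script keys (a spline such as Catmull-Rom would replace the lerp and use four keys instead of two); the function names are illustrative:

```cpp
// Linear interpolation ("lerping") of an expression weight between two timed keys.
// This is the simplest way to keep scripted facial animation smooth between keys.
float Lerp(float a, float b, float t)
{
    return a + (b - a) * t;
}

// Given the surrounding keys' times/weights and the current clip time, return the
// blended weight for the facial state.
float EvalWeight(float timeA, float weightA, float timeB, float weightB, float now)
{
    if (now <= timeA) return weightA;
    if (now >= timeB) return weightB;
    const float t = (now - timeA) / (timeB - timeA);
    return Lerp(weightA, weightB, t);
}
```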
Archangel35757 Posted November 22, 2013
@eezstreet -- there are a number of scripts, tools, and plugins that do this inside 3dsMax. We can make a "mouth area" animation for all of the standard visemes (see http://aidreams.co.uk/forum/index.php?page=Visemes_-_for_Character_Animation ) and map the phonemes to the visemes. Doesn't the game already do this, just with only a few visemes?
As for facial expressions to go along with speech... that's pretty complicated. Perhaps we could modify the ROFF system code and the existing game lip-sync code to achieve all of the above. ROFF files can already process and use sound files. We could add a new notetrack type called "expression"; you would then place expression notetrack keys along the sound file animation inside 3dsMax or Blender to play the facial expressions. In-game you'd check for expression notes in the ROFF and play the corresponding facial expression animation as the sound plays. The only catch is that ROFF sounds currently only play on CHANNEL BODY, so that would need a slight modification to play on the VOICE channel.
What do you guys think of that?
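To make the "expression" notetrack idea concrete, here is a hypothetical sketch of the playback check; the real ROFF structures and playback loop differ, and every name below is made up for illustration:

```cpp
// Hypothetical "expression" notetrack for a ROFF-driven facial system: each update,
// fire any expression keys whose frame was crossed since the last update.
#include <string>
#include <vector>

struct ExpressionNote {
    int         frame;        // ROFF frame the note sits on
    std::string expression;   // facial expression/viseme animation to trigger
};

void CheckExpressionNotes(const std::vector<ExpressionNote>& notes,
                          int lastFrame, int curFrame,
                          void (*playFacialAnim)(const std::string&))
{
    for (const auto& note : notes) {
        // fire notes we crossed since the last update
        if (note.frame > lastFrame && note.frame <= curFrame)
            playFacialAnim(note.expression);
    }
}
```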
eezstreet Posted November 22, 2013
It already does that, but it's super primitive. Don't use it. Seriously.
Archangel35757 Posted November 22, 2013
> It already does that, but it's super primitive. Don't use it. Seriously.
Then maybe I can understand it. Which C++ files govern the in-game facial animations that get played when a sound plays on the VOICE channel?
eezstreet Posted November 22, 2013
I don't remember - one of the ones in client/ (snd_dma maybe).
Tempust85 Posted November 22, 2013
I did a search for all the face animations and could only find code for FACE_TALK0 & FACE_TALK1. Nothing about 2, 3, 4 except in the anim lists.
Archangel35757 Posted January 19, 2014
Any news on this? I noticed that in the …\gamesource\anims.h file there's a comment above the death anims that mentions them being MD3, no?
eezstreet Posted January 19, 2014
> I did a search for all the face animations and could only find code for FACE_TALK0 & FACE_TALK1. Nothing about 2, 3, 4 except in the anim lists.
It doesn't mention the anims directly. It uses FACE_TALK0 for mouth closed and FACE_TALK1 + amplitude for all other sequences.
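Paraphrasing the idea rather than the literal engine code: the voice-channel amplitude selects which FACE_TALK state to play. The FACE_TALK0..FACE_TALK4 names exist in anims.h, but the thresholds below are a guess for illustration only:

```cpp
// Rough sketch of amplitude-driven talk-face selection; the mapping is assumed,
// not taken from the engine source. FACE_TALK0 means "mouth closed".
enum FaceAnim { FACE_TALK0, FACE_TALK1, FACE_TALK2, FACE_TALK3, FACE_TALK4 };

FaceAnim FaceAnimForAmplitude(float amplitude /* 0..1 from the voice channel */)
{
    if (amplitude < 0.05f) return FACE_TALK0;   // effectively silent -> mouth closed
    if (amplitude < 0.25f) return FACE_TALK1;
    if (amplitude < 0.50f) return FACE_TALK2;
    if (amplitude < 0.75f) return FACE_TALK3;
    return FACE_TALK4;
}
```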
eezstreet Posted January 21, 2014
I've been a bit more open-minded recently to this, and I've begun to look around and find the best method to handle this. Basically I've been thinking that instead of doing an MD3 head, Ghoul2 is perfectly fine as it does store raw vertex data. MD3 and Ghoul2 are actually very similar in format; Ghoul2 has some extra crap to it to handle dismemberment and bone-based animation, but the fundamentals behind the formats are very similar.
So basically I've looked into something like this: http://www.isca-speech.org/archive_open/archive_papers/avsp01/av01_110.pdf
However, I'd like to modify this a little bit so that we use a Kinect sensor, which would track motion as well as depth. The idea is that we use a series of about 10-20 facial markers, which use unique identifiers. Then, you would record a series of dialogue using the Kinect sensor and a special program, probably relating to the actual dialogue in the game ("Stop that, Jaden!"). This would export a file, which would be read by the game. Then for each model, you would specify tags which would correlate to each marker, and the difference in the positions would be used.
I'm thinking that we would have to modify the Ghoul2 format entirely, as it's too jittery atm to handle something like this.
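A sketch of the marker-to-tag mapping described above; all names here are hypothetical, and how the offset actually gets applied to a Ghoul2 bone/tag is left abstract:

```cpp
// Each tracked facial marker is matched by ID to a tag on the model, and the marker's
// offset from its calibration (neutral-face) position is applied to that tag per frame.
#include <string>
#include <unordered_map>

struct Vec3 { float x, y, z; };

struct MarkerSample {
    Vec3 restPos;    // position captured during calibration (neutral face)
    Vec3 currentPos; // position in the current captured frame
};

// markerToTag: e.g. "marker_07" -> "face_tag_lip_corner_l", set up per model.
void ApplyMarkers(const std::unordered_map<std::string, MarkerSample>& markers,
                  const std::unordered_map<std::string, std::string>& markerToTag,
                  void (*offsetTag)(const std::string& tag, const Vec3& delta))
{
    for (const auto& [markerId, sample] : markers) {
        auto it = markerToTag.find(markerId);
        if (it == markerToTag.end())
            continue;                              // marker not mapped on this model
        const Vec3 delta = { sample.currentPos.x - sample.restPos.x,
                             sample.currentPos.y - sample.restPos.y,
                             sample.currentPos.z - sample.restPos.z };
        offsetTag(it->second, delta);              // move the tag by the marker's delta
    }
}
```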
Tempust85 Posted January 21, 2014
You should be able to use any camera. I could have sworn I read somewhere that someone created UDK facial animations this way, and not with a Kinect.
eezstreet Posted January 22, 2014
> You should be able to use any camera. I could have sworn I read somewhere that someone created UDK facial animations this way, and not with a Kinect.
Yeah, probably. But I don't understand how you'd get depth data either way, actually. If you use only a single camera, there would be almost no way to determine the depth... right? Maybe you could run a simple calibration and then interpret the difference in marker size vs. baseline as the depth. You could also place two different cameras at a fixed angle and calculate the depth via some basic trigonometry. However, facial features (the nose) might occlude some of the markers.
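For the two-camera case the trigonometry reduces to standard stereo triangulation. The sketch below assumes the simpler parallel-camera setup (cameras a fixed baseline apart, same orientation); a fixed-angle rig would need rectification first:

```cpp
#include <cmath>

// Parallel-axis stereo triangulation: two cameras separated by `baseline` metres,
// focal length `focalPx` in pixels. A marker seen at horizontal pixel positions
// xLeft / xRight gives depth Z = f * B / disparity.
float DepthFromStereo(float xLeft, float xRight, float baseline, float focalPx)
{
    const float disparity = xLeft - xRight;        // shift of the marker between images
    if (std::fabs(disparity) < 1e-6f)
        return -1.0f;                              // marker effectively at infinity
    return focalPx * baseline / disparity;
}
```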
mrwonko Posted January 22, 2014
> Ghoul2 is perfectly fine as it does store raw vertex data.
Could you elaborate on this? Are you thinking about an extra file per face state?
> Yeah, probably. But I don't understand how you'd get depth data either way, actually.
Maybe you can ignore the depth? The only thing that causes outward movement on your face is blowing out one's cheeks, right?