I present a challenge.



Posted

If a modeller makes a new Jan model for this mod, I will add in better lip synching. As in, phenomenal lip synching that will blow your pants off. But naturally I need a model to test this out on that isn't JKA quality. It'll also need to be rigged special.

 

Details to be provided on progress report of the model.

 

@@DT85, @@minilogoguy18 and @@Psyk0Sith might be interested..

Posted

Count me in with regards to better lip-synching and rigging... I intend (once I get finished with the Max2013/2014 dotXSI exporter) to get back to my advanced character rig setup in 3ds Max... and my plan is to use ALL of the facial bones (JO/JA/SoF2) that are there in the released skeleton-- maybe add a few more for the tongue as well.  I also found some lip-synching code that might interest you.  I was also thinking we need to add in more facial animations to the .GLA for all (or the most common set) of the phonemes/visemes.

Posted

I know exactly what I need to do. I've researched the subject over and over again, and I have an exact plan made up. I need to make sure it's actually feasible before I do anything about it though. If someone makes the model, I will divulge my plan in full. No point in a theoretical debate about this sort of subject again.

Posted

I know exactly what I need to do. I've researched the subject over and over again, and I have an exact plan made up. I need to make sure it's actually feasible before I do anything about it though. If someone makes the model, I will divulge my plan in full. No point in a theoretical debate about this sort of subject again.

...It'll also need to be rigged special.

Exactly -- Not sure what your plan is... guess we won't be finding out anytime soon; but I plan to use all of the facial bones that are there in the skeleton I uploaded. They should be more than sufficient -- unless one wants additional detail for the tongue for some of the phoneme/viseme combinations. My rig will store a library of visemes to choose from/play for the phonemes when the voice audio is analyzed. I was planning to export these as additional facial animations to be merged with the GLA. I hope what I'm planning can fit in with your plan as well.

Posted

I guess there really isn't any harm in posting it, then.

 

Source.

No, I'm not kidding. This is what I'm bringing to the table. While playing Vampire: The Masquerade - Bloodlines (see below example), I was completely enthralled by the level of detail in character expressions. I did some digging and it turns out that the VCD and LIP files from Source can be read/parsed the same way that .sabs, .shaders, and other things are read. As a result of this, you can use stuff like FacePoser etc in order to generate the positions of the face. Since you'd be making your own model, you can convert the model to whatever format FacePoser likes (.obj or .fbx, I guess?) and preview it that way.

 

Example:

 

If you're not familiar with Source's method, it involves a shared skeleton for the body, plus a set of transforms which get fed into it, and these transforms get certain aliases (e.g. an 'ah' sound).

 

Process

So, let's start with the nitty-gritty of how this works. Whenever the level loads, target_scriptrunners which point to a script with a sound block in it trigger what is called a precache event. On precache, the sound file is loaded. My plan is to hook this process and load our VCD and LIP files at the same time. The LIP files control the mouth positions matched to the corresponding .mp3, while the VCD file controls the scene direction: gestures, emotions, etc.

 

Here's an example of a .lip file:

VERSION 1.2
PLAINTEXT
{
   "Whoa"
}
WORDS
{
   WORD Whoa 0.000 1.000
   {                                               
       119 w 0.000 0.250 1.000 0                  
       652 ah 0.250 0.750 1.000 0
       596 ao 0.750 1.000 1.000 0
   }
}
EMPHASIS
{
}
CLOSECAPTION
{
   english
   {
      PHRASE unicode 12 " W h o a "  0.000 1.000
   }
}
OPTIONS
{
   voice_duck 1
   speaker_name Neo
}

And an example of a VCD file:

      actor "Vandal"
      {
        channel "My unique channel name"
        {
          event speak "My unique event name"
          {
            time 0.000000 14.461179
            param "character/dlg/santa monica/vandal/line551_col_e.wav"
            param2 "70dB"
            fixedlength
          }
        }


        channel "Gestures"
        {
           event gesture "A little something extra"
           {
                time 0.000000 14.666667
                param "ACT_CONVERSE_NORMAL_TALK"
           }
        }

        channel "Expressions"
        {
             event expression "A smiling finish"
             {
                 time 12.000000 14.666667
                 param "vandal"
                 param2 "Joy"
                 event_ramp
                 {
                     1.0000 0.0000
                 }
             }
         }
}

So the plan is to parse this data and store it for later, when we're actually in dialogue.

You might be wondering now how all of this is meant to be rigged. Again, there's another plan for this.

 

Split Skeleton

I noted something curious in the _humanoid.GLA, and perhaps you did as well. Remember how facial animations only modified the head, not the rest of the body? Well, my solution involves two distinct skeletons. One is the current _humanoid.GLA, and the other is a _lipsync.GLA. The lipsync GLA has many more bones than the _humanoid.GLA, and only contains data/animations for lipsynching. Essentially, you would rig your new model to the _lipsync.GLA, since it has all the original bones of the _humanoid, plus certain bones for detail, like in the face (maybe hands or something too? I dunno). I'm sure all of you will be saying "BUT BUT BUT...all the animations are in the _humanoid!!"

 

Well, yes. All of the body animations.

 

The idea is that the new skeleton will be able to mimic the _humanoid animations since it has all of the same body bones, but none in the face or head. Basically, the bone transformations from the _humanoid.GLA are run on the _lipsync.GLA, and since the _lipsync.GLA has all the same bones, it's no problem. The game won't allow for this unless you recode it to do so. ;) But the capability exists based on principle as it is. The same thing could be done for capes too, actually.

 

What I Need

So basically, here's what I need to be made in order to get anything conclusive done with this:

  • _lipsync.GLA (same as _humanoid.GLA, but with more bones for the face, plus phoneme/viseme animations and others, like rolling of the eyes, moving of the eyebrows, etc.)
  • A model (in GLM format, rigged to _lipsync, as well as in FBX/OBJ for faceposer)

I'll probably also need to be able to contact someone directly via IM whenever it gets close to model rigging/animation time.

Posted

I will probably need to re-read what you wrote... but in essence don't we already have a split skeleton in the GLA-- it's already broken down into bone hierarchy subgroups: FACE, TORSO, LEGS, BOTH (meaning torso and legs).

 

Facial animations only play on the facial bones, while the torso or body (I.e., BOTH) can be playing other anims, no? So wouldn't it be the same as creating all of the viseme animations and adding them to the current GLA as: FACE_ah, FACE_xx, etc.?

 

Likewise you could make gesture animations that only target the TORSO, etc. So, couldn't we script different combos on the different groups?

Posted

I will probably need to re-read what you wrote... but in essence don't we already have a split skeleton in the GLA-- it's already broken down into bone hierarchy subgroups: FACE, TORSO, LEGS, BOTH (meaning torso and legs).

 

Facial animations only play on the facial bones, while the torso or body (I.e., BOTH) can be playing other anims, no? So wouldn't it be the same as creating all of the viseme animations and adding them to the current GLA as: FACE_ah, FACE_xx, etc.?

 

Likewise you could make gesture animations that only target the TORSO, etc. So, couldn't we script different combos on the different groups?

You're correct, to some extent. We don't have a split skeleton; rather we have FACE, TORSO, LEGS, BOTH. (The names are arbitrary and have no bearing on anything at all, actually. Most animations are run on the torso.) What I'm suggesting is that we have a separate GLA entirely, and then rig new models to this GLA. The new GLA would have corrected face bones, but it would still be able to run animations from the _humanoid skeleton because TORSO, etc. don't modify the facial bones. The bones in the new GLA are run through the same transforms as the old GLA.

 

Adding visemes to the existing GLA won't really work though because you need many more bones to get the proper effects.

Posted

TORSO anims only play on the bones in the TORSO group, while LEGS animations play on the LEGS group-- this is how they do strafing and other TORSO/LEGS combinations, if I'm not mistaken ...meanwhile the FACE group can be playing facial animations... I could be wrong, but I thought there's a lot of setup for this in/via Assimilate.

 

IMO the additional SoF2 bones should be sufficient... I will start on my facial rig so you can see what it can achieve... but it seems your plan can work on a new GLA that has all the current bones & perhaps a few more.

 

I guess I'm not grasping your need for a second facial GLA when it can reside in the one current/expanded GLA with new facial bones & facial anims (the extra SoF2 facial bones are already in the root model).

Posted

New bones can be added in with a recompile of the JKA animations, I've done this a few times before. All you need is a new root.xsi that contains the model + updated skeleton & a 1 frame animation containing the new bones at the top of the animation list in the model.car file. I'm sure @@minilogoguy18 could handle this. ;)

 

But hang on, can't we just use vertex animation like the Source engine does?

Posted

You need more bones. You can't add new bones to _humanoid without fucking it up.

... the SoF2 facial bones are already in the root.xsi file... it's just that nothing is weighted to them... so it is like DT said-- create all the new Viseme animations ( even add more bones if necessary) and recompile the GLA.

Posted

If a modeller makes a new Jan model for this mod, I will add in better lip synching. As in, phenomenal lip synching that will blow your pants off. But naturally I need a model to test this out on that isn't JKA quality. It'll also need to be rigged special.

 

Details to be provided on progress report of the model.

 

@@DT85, @@minilogoguy18 and @@Psyk0Sith might be interested..

 

How about a retopo of the stock head with proper loops? And why Jan in particular?

Posted

New bones can be added in with a recompile of the JKA animations, I've done this a few times before. All you need is a new root.xsi that contains the model + updated skeleton & a 1 frame animation containing the new bones at the top of the animation list in the model.car file. I'm sure @@minilogoguy18 could handle this. ;)

 

I think I can handle it as well... :winkthumb:

 

I will begin working on my facial rig and GUI in 3dsMax and start a WIP thread for it.

 

It will include a pose library for existing FACE_*** animations (for backwards compatibility) and at least the 14 primary visemes. It will also include a pose library for the 6 primary facial expressions: joy, sadness, anger, fear, disgust, surprise.

 

Hmmm... @@eezstreet, to blend the different visemes and facial expressions-- may need to create a MOUTH group to play the new Visemes on top of the expressions targeting the FACE group. Legacy animations would play on the FACE group. Otherwise, we'd have to make anims for all mouth/facial expression combinations...

Posted

Perhaps someone should use @@DT85's improved DF2 Kyle model and finish it up for this? Since it's already nearly finished.

Posted

@@eezstreet -- one attractive benefit of having a separate facial GLA would be the ability to make better alien facial animations-- since many aliens don't conform well to the _humanoid facial bones... but I believe we'd still have to have independent MOUTH and FACE groups for the reason I already stated above. Not to mention that most aliens will likely have different visemes/expressions.

 

Pursuing your original idea... you could use the face helper bone, that is co-located with the _humanoid head bone, as the main root node for a separate facial skeleton/GLA.

 

Then, in simple terms, I believe all you would need to do is multiply the face helper bone transform matrix by the head bone transform matrix (so face follows the head/body).

Posted

tl;dr @ all

 

My whole point in using two skeletons is that merging with _humanoid isn't necessary. A couple of reasons:

 

- You can easily tell which skeleton the model is rigged to based on its MDXM header (_humanoid.gla vs _lipsync.gla). This is important because otherwise you'd have no way to detect whether those bones are rigged correctly for that model. It keeps the facial animation working for older models that aren't updated. Lazy, but I doubt anyone feels like rerigging literally every model in the game.

 

- By using a separate skeleton, you skip the need to merge to _humanoid. Just compile the GLA, no need to mess with _humanoid.

 

- A separate skeleton is more easily extensible if we decide to add capes, etc.

(Sorry if I was brash before. Was running on low sleep)

Posted

tl;dr @ all

My whole point in using two skeletons is that merging with _humanoid isn't necessary. A couple of reasons:

- You can easily tell which skeleton the model is rigged to based on its MDXM header (_humanoid.gla vs _lipsync.gla). This is important because otherwise you'd have no way to detect whether those bones are rigged correctly for that model. It keeps the facial animation working for older models that aren't updated. Lazy, but I doubt anyone feels like rerigging literally every model in the game.

- By using a separate skeleton, you skip the need to merge to _humanoid. Just compile the GLA, no need to mess with _humanoid.

- A separate skeleton is more easily extensible if we decide to add capes, etc. (Sorry if I was brash before. Was running on low sleep.)

In my last suggestion you wouldn't touch or merge anything with the _humanoid.gla. You'd only need the transform matrix of the _humanoid head bone from the _humanoid.gla to put the lipsync.gla in the proper model space.

 

Inside 3dsMax, however, this could still be one rigged skeleton file. You would simply do an "Export-Selected" and export out only the head and face helper bone and all of the facial bones (with a prefix on the bone names so as not to conflict with legacy face bones). Then compile these 84+ animations (combinations of 14 visemes and 6 facial expressions) into their own GLA.

 

So then you would be using the legacy _humanoid.gla for the body and the new lipsync.gla for new facial animations, no?

 

In like manner you could have a separate cloak.GLA for capes, cloaks, robes...

Posted

Yes.

But I believe we would need the head bone to be included in the lipsync.gla because the outer vertices on the face need to be weighted 100% to the head bone.

 

So the head bone would be common to both the lipsync.GLA and legacy _humanoid.GLA (but you would not be touching/modifying it-- only getting the head bone transform matrix to drive the lipsync.GLA).

 

...and you could do similarly for a cloak.GLA.

Posted

In 3dsMax it'd be the JK2 skeleton(?) But it might be better to keep lipsync facial skeletons as separate 3dsMax files and only reference in the lipsync head/face you need based on a selection menu of head types (I.e., humanoid or alien races...) Maybe @@Psyk0Sith will let me rig his custom Ongree head to better show what I mean in my WIP thread.
