
I present a challenge.



I still think you should just make your own facial bone set, rather than using what's in the JO skeleton. But since I doubt you will:

 

- Use one bone per mouth corner; having two bones there wouldn't give any benefit.

- I don't know about you, but I can't move the tip of my nose, so ditch that one.

Having two bones in the corner of the mouth will, in my opinion, better keep the mouth from pinching-- even if there is only one control object that influences both of them...

 

The nose-tip bone, I believe, is there to correct the nose tip if need be... we'll see. Put your finger on the tip of your nose and pucker/purse your lips. Also try moving your mouth from side to side. The nose tip does flex a little.

 

Also, what is wrong with their bone placement? It seems pretty good to me... based on what I've been reading about facial expressions...


@@eezstreet -- one of the questions I have is: how will you determine the amplitude of a given expression? For example, let's say I create a "joy" animation that goes from a neutral face to a 100% expression of joy. But what if you want to stop at only 25% or 50% and then blend to another expression? How will intermediate expressions be handled?


Expressions - I'm not 100% sure how it's handled, but if I'm right, it's just another viseme. So, we have several poses, including a pure neutral. Each pose represents an expression at 100% (joy, anger, etc.) with no viseme applied. So for a 25% expression, we take the bone changes relative to the neutral pose, apply only 25% of them (since each expression pose is at 100%), and overlay them onto the speaking face.

 

(I really hope that makes sense)
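For what it's worth, that delta-overlay idea could be sketched like this. This is a minimal sketch under my own assumptions: names are illustrative (not actual JO/OpenJK code), and bone rotations are simplified to three Euler angles where a real rig would use quaternions:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Simplified bone pose: three Euler angles per bone.
using BonePose = std::array<float, 3>;
using FacePose = std::vector<BonePose>;

// Overlay `weight` (0..1) of an expression onto the current speaking face:
// take the per-bone difference between the 100% expression pose and the
// neutral pose, scale it by the weight, and add it on top of whatever the
// viseme animation has already done.
FacePose applyExpression(const FacePose& speaking,
                         const FacePose& neutral,
                         const FacePose& expression,
                         float weight)
{
    FacePose out = speaking;
    for (std::size_t b = 0; b < out.size(); ++b)
        for (int i = 0; i < 3; ++i)
            out[b][i] += weight * (expression[b][i] - neutral[b][i]);
    return out;
}
```

Because only the *difference* from neutral is added, a 25% overlay leaves the viseme shape intact and just biases it toward the expression.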


Expressions - I'm not 100% sure how it's handled, but if I'm right, it's just another viseme. So, we have several poses, including a pure neutral. Each pose represents an expression at 100% (joy, anger, etc.) with no viseme applied. So for a 25% expression, we take the bone changes relative to the neutral pose, apply only 25% of them (since each expression pose is at 100%), and overlay them onto the speaking face.

(I really hope that makes sense)

 

So to get 50% anger, I think you'd just grab the bone transforms on the frame that equates to 50% of the anger-expression animation... wouldn't that be easier and more accurate? Likewise, take a frame at some percentage of another facial expression to blend with the 50% anger. Getting the bone transforms would also apply to individual facial motions, like an eyebrow raise, in addition to a premade entire facial animation (e.g., joy, sadness, anger).

 

Edit: I get what you're saying with 100% static expressions...
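The frame-sampling alternative could be as simple as mapping the expression amount onto a frame index. Hypothetical sketch (my own names; assumes frame 0 is neutral and the last frame is the 100% expression):

```cpp
#include <cmath>

// Map an expression amount (0..1) to a frame index in the expression
// animation, where frame 0 is neutral and the last frame is the full
// expression.
int expressionFrame(float amount, int numFrames)
{
    if (numFrames <= 1)
        return 0;
    // Round to the nearest whole frame; a fancier version would
    // interpolate the bone transforms between the two nearest frames.
    int frame = static_cast<int>(std::lround(amount * (numFrames - 1)));
    if (frame < 0) frame = 0;
    if (frame > numFrames - 1) frame = numFrames - 1;
    return frame;
}
```

The two approaches only agree if the expression animation ramps linearly from neutral to full; if the animator eases in/out, sampling a frame follows that curve while scaling the deltas does not.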


So to get 50% anger, I think you'd just grab the bone transforms on the frame that equates to 50% of the anger-expression animation... wouldn't that be easier and more accurate? Likewise, take a frame at some percentage of another facial expression to blend with the 50% anger.

 

Edit: I get what you're saying with 100% static expressions...

Yes, that's precisely what I was talking about :P


Getting the bone transforms on a particular frame would also apply to individual facial motions, like an eyebrow raise (left, right, or both), in addition to an entire premade facial animation (e.g., joy, sadness, anger). That way people can experiment...


I can have the 3ds Max facial rig GUI generate the text files you need (once the user is satisfied with the facial animation and lip-syncing)... but for those who don't have or use 3ds Max, it would be nice if the MODView code were modified to incorporate this open-source SAPI lip-sync tool (for audio analysis and viseme placement):

 

http://www.annosoft.com/sapi_lipsync/docs/

 

MODView could then generate these text files as well.

  • 2 weeks later...

Thanks to @@ensiform (and @@eezstreet) I have a working C++ function that parses the phoneme-data UTF-8 text file... now I just need to restructure my code so that it stores the phoneme data (phoneme, startTime, endTime) in a struct rather than in separate vector containers.

 

I've been downloading and reading papers on phonemes, visemes, facial expressions, etc.

 

Based on the IPA (International Phonetic Alphabet)... the phoneme string can actually consist of one or more characters, depending on vowel/consonant accent marks and diacritics.

 

So the next function needs to take the phoneme and map it to a viseme (possibly an enum), which would then key a stored viseme facial pose onto the animation timeline. Additional facial expressions/emotions would be handled separately.
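That mapping step could be a simple lookup table. The viseme set and phoneme spellings below are purely illustrative (real sets, such as the SAPI viseme set, differ):

```cpp
#include <string>
#include <unordered_map>

// Illustrative viseme set -- several phonemes collapse onto one mouth shape.
enum class Viseme { Neutral, MBP, FV, AI, O, EE };

// Many-to-one phoneme -> viseme lookup; unknown phonemes fall back to
// the neutral pose rather than breaking the animation.
Viseme phonemeToViseme(const std::string& phoneme)
{
    static const std::unordered_map<std::string, Viseme> table = {
        {"m", Viseme::MBP}, {"b", Viseme::MBP}, {"p", Viseme::MBP},
        {"f", Viseme::FV},  {"v", Viseme::FV},
        {"a", Viseme::AI},  {"ai", Viseme::AI},
        {"o", Viseme::O},   {"i", Viseme::EE},
    };
    auto it = table.find(phoneme);
    return it != table.end() ? it->second : Viseme::Neutral;
}
```

Keying the table on `std::string` (rather than `char`) also handles the multi-character IPA phoneme strings mentioned above.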


Ok... so I've decided to use Mandy Amano ( http://www.mandyamano.com/Welcome.html ) as the reference model for the new Hi-def Jan Ors head model...

 

[reference image: mandy_front_side_03_flattened -- front and side views]

 

The only issue I'm having in setting up my reference templates is that this is the best side profile picture I could find... and I had to scale it up to match the front view... but I'm having a bit of trouble getting things aligned.  It all seems to scale well-- except for the ear.  In the above photo, I copied the ear to a new layer and shifted it up to a more reasonable match.  What do you guys think?  Does it look alright?  @@Psyk0Sith & @@DT85 -- any further suggestions?


This is a technique that seems to be rarely used nowadays; it can get you off to a start, but a lot will have to be done in the 3D views to really make it look right.

 

It doesn't line up because in the front view her head is tilted to her right; rotating the image may fix this. You also have depth of field making the ears in the front view smaller. 2D rotoscoping will only get you a basic start; most people who rely on it from start to finish usually end up with heads that have poor facial structure, especially in the cheek area.


Why not just use Angela Harry?

 

My suggestion is to make a generic female head, then take it into ZBrush to make a good likeness. Or if you're really pro, make the head from start to finish in ZBrush.

Edited by DT85

This is a technique that seems to be rarely used nowadays; it can get you off to a start, but a lot will have to be done in the 3D views to really make it look right.

 

It doesn't line up because in the front view her head is tilted to her right; rotating the image may fix this. You also have depth of field making the ears in the front view smaller. 2D rotoscoping will only get you a basic start; most people who rely on it from start to finish usually end up with heads that have poor facial structure, especially in the cheek area.

Yes, I noticed that her head was ever so slightly tilted to her right... but when I rotated it to get the inside corners of her eyes level... the nose looked tilted the other way... nobody is perfectly symmetrical... I do have headshot pics from various angles that I can use to check the 3D progress of the head mesh. Besides, in the front image I plan to mirror the right side of her face over to the left so it will be symmetric.


Why not just use Angela Harry?

 

My suggestion is to make a generic female head, then take it into ZBrush to make a good likeness.

I could not find any good reference images of Angela Harry that would give me solid front and side views.  I have zero experience with ZBrush... I did just buy 3D-Coat (though I still have zero sculpting experience...), but I prefer to model a good low-poly model first-- which I could then take into 3D-Coat to try to sculpt more details.  But does my upward shift of her ear in the side view look natural?  Both of these reference images are in the gallery on her website.

  • 2 weeks later...

For all of you English speakers... let's have a little fun...

 

Please tell me if you make a distinction when you pronounce these word pairs: do you pronounce the 'wh' sound as "(h)w" versus just "w", and what English-speaking region are you from? In the Southern USA we do distinguish these two phonemes: /hw/ vs. /w/.

 

Here are some contrasting examples that come to mind:

which / witch

whale / wail

wheel / will

whine / wine

whet / wet

whether / weather

whurry / worry

what / watt

when / win

where / wear

while / wile

 

Can anybody come up with other examples?


Southern English accent here (dunno if you were only looking for American accents?)... I don't personally pronounce the wh- and w- prefixes any differently in any of those words, but some people might. Also, I don't know if you were looking for this, but some of those word pairs differ in more than the "wh", e.g. wheel/will, whurry/worry, when/win.

 

An interesting one to think about maybe is one and won. Some people pronounce "one" beginning with a /hw/ sound, and won sounds similar but with only the /w/ sound.


From what I've been told... only Scotland distinguishes between the /hw/ and /w/ phonemes. But according to the dictionary, I believe it's proper to pronounce the "(h)w" sound... though many regions have merged the two.

 

All those word pairs are distinguished in my region of the US (or at least by me-- I pronounce the "wh"). They definitely can vary by region. Play these sounds:

 

http://www.google.com/search?ei=8DqzU7ycHoeXqAb7qoKgBg&q=define+while&oq=define+while&gs_l=mobile-gws-serp.3...35018.37930.0.38915.6.6.0.0.0.0.0.0..0.0....0...1c.1.48.mobile-gws-serp..6.0.0.QWR1jUOCog4

 

 

http://www.google.com/search?site=&source=hp&ei=TDyzU_SkA8uKqAan24HwAw&q=define+wile&oq=define+wile&gs_l=mobile-gws-hp.3..0l3j0i10l2.5005.13290.0.14394.17.16.0.3.3.0.637.4456.2-1j3j2j4.10.0....0...1c.1.48.mobile-gws-hp..5.12.4063.3.dyfSBEMTZX8

  • 2 weeks later...

Apart from the different-sounding (differently shaped) words Xycaleth pointed out, I think even if there are slight differences, the distinction is so small that you wouldn't notice it in an animation. There is the human suspension-of-disbelief factor to take into account as well. The essence is more about the length and the shape of the syllables than the finely exact lip movement. That said, English accents between regions can change the shape/sound of a syllable drastically, most notably in the case of the US and British "ask". But I expect any sound analysis would find the difference between an "eh" and an "ah"...

