Archangel35757 Posted May 25, 2014

Quote: "I still think you should just make your own facial bone set, rather than using what's in the JO skeleton. But since I doubt you will:
- Use 1 bone per mouth corner; having 2 bones there wouldn't give any benefit.
- I don't know about you, but I can't move the tip of my nose, so ditch that one."

Having two bones in the corner of the mouth will, in my opinion, better keep the mouth from pinching, even though there may be only one control object that influences both of them. The nose-tip bone, I believe, is there to correct/fix the nose tip if need be... we'll see. Put your finger on the tip of your nose and pucker/purse your lips; also try moving your mouth from side to side. The nose tip does flex a little. Also, what is wrong with their bone placement? It seems pretty good to me, based on what I've been researching on facial expressions...
Archangel35757 Posted May 25, 2014

@@eezstreet -- one of the questions I have is: how will you determine the amplitude of a given expression? For example, let's say I create a "joy" animation that goes from a neutral face to 100% expression of joy. But what if you only want to stop at 25% or 50% and then blend to another expression? How will intermediate expressions be handled?
eezstreet (Author) Posted May 25, 2014

Expressions - I'm not 100% sure how it's handled, but if I'm right, an expression is treated just like another viseme. So we have several poses, including a pure neutral. Each pose represents an expression at 100% (joy, anger, etc.) with no viseme applied. For a 25% expression, we take the bone changes relative to the neutral pose, apply only 25% of those changes (since each stored expression is at 100%), and overlay them onto the speaking face. (I really hope that makes sense.)
Archangel35757 Posted May 25, 2014

Quote (eezstreet): "...for 25% expression, we take the bone changes from the neutral pose and apply only 25% of the changes... and overlay them onto the speaking face."

So to get 50% anger, I think you'd just grab the bone transforms on the frame that equates to 50% of the anger expression animation... wouldn't that be easier/more accurate? Likewise, take a frame at some percentage of another facial expression to blend with the 50% anger. Grabbing the bone transforms would also apply to individual facial motions like an eyebrow raise, etc., in addition to a premade entire-face animation (i.e., joy, sadness, anger, etc.). Edit: I get what you're saying with 100% static expressions...
eezstreet (Author) Posted May 25, 2014

Quote (Archangel35757): "So to get 50% anger I think you'd just grab the bone transforms on the frame that equates to 50% of the anger expression animation..."

Yes, that's precisely what I was talking about.
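The two approaches amount to the same thing whenever the expression animation ramps linearly from neutral to the full pose. As a purely illustrative sketch (the types and names below are hypothetical, not the actual JKA/GLA code), blending a pose at a given amplitude might look like this:

// Hypothetical sketch: blend a facial expression at a given amplitude.
// Assumes both poses contain the same bones; Euler angles are lerped
// here for brevity, where real code would likely slerp quaternions.
#include <map>
#include <string>

struct BonePose {
    float pitch, yaw, roll; // rotation offsets from the bind pose
};

using FacePose = std::map<std::string, BonePose>; // bone name -> pose

// amount = 0.25f gives the "25% joy" case discussed above; sampling
// the animation at its 25% frame gives the same result when the
// animation interpolates linearly from neutral to the full expression.
FacePose BlendExpression(const FacePose& neutral,
                         const FacePose& expression,
                         float amount)
{
    FacePose out = neutral;
    for (const auto& [boneName, target] : expression) {
        const BonePose& base = neutral.at(boneName);
        BonePose& dst = out[boneName];
        dst.pitch = base.pitch + (target.pitch - base.pitch) * amount;
        dst.yaw   = base.yaw   + (target.yaw   - base.yaw)   * amount;
        dst.roll  = base.roll  + (target.roll  - base.roll)  * amount;
    }
    return out;
}

The blended pose would then be overlaid onto whatever viseme the face is currently holding, as eezstreet describes.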
Archangel35757 Posted May 26, 2014

Getting the bone transforms on a particular frame would also apply to individual facial motions like an eyebrow raise (left, right, or both), etc., in addition to an entire premade facial animation (i.e., joy, sadness, anger, etc.). That way people can experiment...
eezstreet (Author) Posted May 26, 2014

I suppose one could define, in the .vld file, the set of bones that the expression applies to.
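Purely as an illustration of that idea (no such format exists yet, so the syntax, bone names, and keywords below are all made up), a .vld entry scoping an expression to a subset of facial bones might look something like:

// hypothetical .vld syntax -- illustrative only
expression "anger"
{
    bones   lbrow rbrow cheekL cheekR jaw   // bones this expression may move
    frames  0 30                            // neutral -> 100% anger
}

Scoping like this would keep, say, a brow-only expression from fighting over the jaw bones that the viseme layer is driving.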
Archangel35757 Posted May 26, 2014

I can have the 3dsMax facial rig GUI generate the text files you need (once the user is satisfied with the facial animation & lip-syncing)... but for those that don't have/use 3dsMax, it would be nice if the MODView code were modified to incorporate this open-source SAPI lip-sync tool (for audio analysis & viseme placement): http://www.annosoft.com/sapi_lipsync/docs/ -- and to have MODView generate these text files as well.
Archangel35757 Posted May 26, 2014

My PC's power supply unit has died... so my rigging will be delayed until I get a replacement PSU.
Archangel35757 Posted May 29, 2014

The new PSU is on its way... in the meantime I've been researching and playing with various speech analysis tools. For example, in the attached screenshot the bottom row shows the phoneme breakdown. These phonemes will be exported to a text file and mapped to visemes in the character rig's lip-sync GUI.
Archangel35757 Posted June 6, 2014

Thanks to @@ensiform (and @@eezstreet) I have a working C++ function that parses the phoneme-data UTF-8 text file... now I just need to restructure my code so that it stores the phoneme data (phoneme, startTime, endTime) in a struct rather than in separate vector containers. I've also been downloading/reading papers on phonemes, visemes, facial expressions, etc. Based on the IPA (International Phonetic Alphabet), a phoneme string can actually consist of one or more characters, depending upon vowel/consonant accent marks and diacritics. So the next function needs to take the phoneme and map it to a viseme (possibly an enum), which would then put/key a stored viseme facial pose on the animation timeline. Additional facial expressions/emotions would be handled separately.
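A minimal sketch of what that struct and lookup might look like (the viseme names and table entries here are placeholders, not the final set):

#include <string>
#include <unordered_map>
#include <vector>

// One entry per phoneme parsed from the UTF-8 text file.
struct PhonemeEntry {
    std::string phoneme;   // IPA symbol; may be multi-character (diacritics)
    double      startTime; // seconds
    double      endTime;   // seconds
};

// A whole utterance; replaces the parallel vector containers.
using PhonemeTrack = std::vector<PhonemeEntry>;

// Placeholder viseme set -- the real set would match the rig's stored poses.
enum class Viseme { Neutral, AaAh, Ee, Oh, Oo, FV, MBP, L };

// Look the IPA string up whole, so multi-character phonemes map correctly.
Viseme PhonemeToViseme(const std::string& phoneme)
{
    static const std::unordered_map<std::string, Viseme> table = {
        { "m", Viseme::MBP }, { "b", Viseme::MBP }, { "p", Viseme::MBP },
        { "f", Viseme::FV },  { "v", Viseme::FV },
        { "i", Viseme::Ee },  { "u", Viseme::Oo },
        { "ɑ", Viseme::AaAh } // UTF-8 IPA symbols work fine as map keys
        // ...remaining IPA symbols...
    };
    auto it = table.find(phoneme);
    return it != table.end() ? it->second : Viseme::Neutral;
}

Each parsed PhonemeEntry would then key the corresponding viseme pose on the timeline between its startTime and endTime.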
Archangel35757 Posted June 10, 2014

OK... I finally got my replacement PSU from EVGA today... installed, and I'm back in business. I've imported the Jan Ors .glm model and will attempt to make a hi-poly version of her head to use for my part of the lip-sync GLA effort.
Archangel35757 Posted June 12, 2014

Ok... so I've decided to use Mandy Amano ( http://www.mandyamano.com/Welcome.html ) as the reference model for the new hi-def Jan Ors head model. The only issue I'm having in setting up my reference templates is that this is the best side-profile picture I could find, and I had to scale it up to match the front view, but I'm having a bit of trouble getting things aligned. It all seems to scale well except for the ear. In the above photo, I copied the ear to a new layer and shifted it up to a more reasonable match. What do you guys think? Does it look alright? @@Psyk0Sith & @@DT85 -- any further suggestions?
minilogoguy18 Posted June 13, 2014

This is a technique that seems to be rarely used nowadays; it can get you off to a start, but a lot will have to be done in the 3D views to really make it look right. It doesn't line up because, in the front view, her head is tilted to her right; rotating the image may fix this. You also have depth of field making the ears in the front view smaller. 2D rotoscoping will only get you a basic start; most people who rely on it from start to finish usually turn out heads with poor facial structure, especially in the cheek area.
Tempust85 Posted June 13, 2014 (edited)

Why not just use Angela Harry? My suggestion is to make a generic female head, then take it into ZBrush to make a good likeness. Or, if you're really pro, make the head from start to finish in ZBrush.

Edited June 13, 2014 by DT85
Archangel35757 Posted June 13, 2014

Quote (minilogoguy18): "It doesn't line up because in the front view her head is tilted to her right..."

Yes, I noticed that her head was ever so slightly tilted to her right... but when I rotated it to get the inside corners of her eyes level, the nose looked tilted the other way. Nobody is perfectly symmetrical. I do have headshot pics from various angles that I can use to check the 3D progress of the head mesh. Besides, in the front image I plan to mirror the right side of her face over to the left so it will be symmetric.
Archangel35757 Posted June 13, 2014

Quote (Tempust85): "Why not just use Angela Harry? My suggestion is to make a generic female head, then take it into ZBrush to make a good likeness."

I could not find any good reference images of Angela Harry giving me solid front and side views. I have zero experience with ZBrush... I did just buy 3D-Coat (though I still have zero sculpting experience), but I prefer to model a good low-poly model first, which I could then take into 3D-Coat and try to sculpt in more details. But does my upward shift of her ear in the side view look natural? Both of these reference images are from her website gallery.
Archangel35757 Posted June 13, 2014

Ok... so I found my answer: when viewed from the front, the ears are as long as the distance from the top of the eyes to the bottom of the nose (ref. "Drawing the Head & Figure" by Jack Hamm). So I just need to tweak it a little more...
Archangel35757 Posted June 21, 2014

Working on the head mesh... hopefully I'll have something to show soon. I've also mapped all of the FacePoser phoneme IDs to their respective International Phonetic Alphabet font symbols, so I can map the IPA phonemes identified by PRAAT to FacePoser values.
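As an illustration of that mapping (the IDs and symbols below are examples only; the full table would cover FacePoser's entire phoneme list), it could be stored as a simple lookup and reversed for the PRAAT-to-FacePoser direction:

#include <string>
#include <unordered_map>

// Illustrative mapping of FacePoser phoneme IDs to IPA symbols.
static const std::unordered_map<std::string, std::string> kFacePoserToIPA = {
    { "aa", "ɑ" },  // as in "father"
    { "iy", "i" },  // as in "feel"
    { "uw", "u" },  // as in "too"
    { "m",  "m" },
    // ...remaining IDs...
};

// Reverse lookup: find the FacePoser ID for an IPA phoneme from PRAAT.
std::string IPAToFacePoser(const std::string& ipa)
{
    for (const auto& [id, symbol] : kFacePoserToIPA)
        if (symbol == ipa)
            return id;
    return ""; // unmapped phoneme -- fall back to a neutral mouth shape
}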
Tempust85 Posted June 21, 2014

For now, I wouldn't worry about making a Jan Ors head. Just make a good head to use with the new facial bones.
Archangel35757 Posted July 1, 2014

For all of you English speakers, let's have a little fun. Please tell me whether you make a distinction when you pronounce these word pairs -- do you pronounce the 'h' sound, as in "(h)w", versus just "w" -- and what English-speaking region are you from? In the Southern USA we do distinguish these two phonemes, /hw/ vs. /w/. Here are some contrasting examples that come to mind:

which / witch
whale / wail
wheel / will
whine / wine
whet / wet
whether / weather
whurry / worry
what / watt
when / win
where / wear
while / wile

Can anybody come up with other examples?
Xycaleth Posted July 1, 2014

Southern English accent here (dunno if you were only looking for American accents?)... I don't personally pronounce the wh- and w- prefixes any differently in any of those words, but some people might. Also, I don't know if you were looking for this, but some of those word pairs are pronounced differently anyway, e.g. wheel/will, whurry/worry, when/win. An interesting one to think about is "one" and "won": some people pronounce "one" beginning with a /hw/ sound, and "won" sounds similar but with only the /w/ sound.
Archangel35757 Posted July 1, 2014

From what I've been told, only Scotland distinguishes between the /hw/ and /w/ phonemes. But going by the dictionary, I believe it's proper to pronounce the "(h)w" sound, though many regions have merged the two. All those word pairs are pronounced similarly in my region of the US (or at least by me -- I distinguish the "wh"). They definitely can vary by region. Play these sounds:

http://www.google.com/search?ei=8DqzU7ycHoeXqAb7qoKgBg&q=define+while&oq=define+while&gs_l=mobile-gws-serp.3...35018.37930.0.38915.6.6.0.0.0.0.0.0..0.0....0...1c.1.48.mobile-gws-serp..6.0.0.QWR1jUOCog4

http://www.google.com/search?site=&source=hp&ei=TDyzU_SkA8uKqAan24HwAw&q=define+wile&oq=define+wile&gs_l=mobile-gws-hp.3..0l3j0i10l2.5005.13290.0.14394.17.16.0.3.3.0.637.4456.2-1j3j2j4.10.0....0...1c.1.48.mobile-gws-hp..5.12.4063.3.dyfSBEMTZX8
Flynn Posted July 15, 2014

I don't distinguish them, although I've definitely heard some people speak with the h. I don't know if it's based on location or just personal preference.
Cerez Posted July 20, 2014

Apart from the differently sounding (differently shaped) words Xycaleth pointed out, I think that even where there are slight differences, the distinction is so small that you wouldn't notice it in an animation. There is the human suspension-of-disbelief factor to take into account as well. The essence is more about the length and the shape of the syllables than about finely exact lip movement. That said, English accents between regions can change the shape/sound of a syllable drastically, most notably in the case of the US and the British "ask". But I expect any sound analysis would find the difference between an "eh" and an "ah"...