PCA talk

Real-world applications of PDMs in 2D and 3D from the Eastman Dental Institute

Tim J. Hutton, Research Fellow & PhD student,
MINORI project (http://www.eastman.ucl.ac.uk/~dmi/MINORI),
Biomedical Informatics Unit (http://www.eastman.ucl.ac.uk/~dmi),
Eastman Dental Institute (http://www.eastman.ucl.ac.uk),
UCL.

2D Face Images:

Data: 17 face images of the same subject (Marcus) landmarked with 71 landmarks by TJH
Example: marcusexample.jpg
Method: standard Cootes et al. PDMs (Procrustes alignment + PCA); a code sketch follows below
Mode 1 (48.0%): marcus2dm1.gif (between -3 and +3 standard deviations, all other modes kept at zero)
Mode 2 (29.1%): marcus2dm2.gif (between -3 and +3 standard deviations, all other modes kept at zero)
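
For reference, a minimal Python sketch of the Procrustes + PCA step, assuming the landmarks are already loaded as an (n_shapes, n_points, 2) NumPy array; the function names are illustrative rather than from any particular toolkit, and a full implementation would re-align against the current mean until convergence.

  import numpy as np

  def procrustes_align(shape, ref):
      # Similarity-align one shape (n_points x 2) to a reference shape.
      a = shape - shape.mean(axis=0)            # remove translation
      b = ref - ref.mean(axis=0)
      a = a / np.linalg.norm(a)                 # remove scale
      b = b / np.linalg.norm(b)
      u, s, vt = np.linalg.svd(a.T @ b)
      return a @ (u @ vt)                       # apply the optimal rotation

  def build_pdm(shapes, var_keep=0.98):
      # shapes: (n_shapes, n_points, 2) -> mean vector, modes, eigenvalues.
      n, p, d = shapes.shape
      aligned = np.array([procrustes_align(s, shapes[0]) for s in shapes])
      x = aligned.reshape(n, p * d)
      mean = x.mean(axis=0)
      evals, evecs = np.linalg.eigh(np.cov(x - mean, rowvar=False))
      order = np.argsort(evals)[::-1]           # largest variance first
      evals, evecs = evals[order], evecs[:, order]
      k = int(np.searchsorted(np.cumsum(evals) / evals.sum(), var_keep)) + 1
      return mean, evecs[:, :k], evals[:k]

  def synthesise(mean, modes, evals, b):
      # Generate a shape from mode weights b, given in standard deviations.
      k = len(b)
      x = mean + modes[:, :k] @ (np.asarray(b) * np.sqrt(evals[:k]))
      return x.reshape(-1, 2)

The mode animations above correspond to sweeping one entry of b between -3 and +3 with all other weights at zero.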

2D Lateral Cephalometric Radiograph ('ceph') Tracings:

Data: 131 cephs landmarked with 148 points by an Eastman orthodontist (ZNM)
Example: cephexample.jpg
Method: standard Cootes et al. PDMs (Procrustes + PCA)
Mode 1 (37.0%): zmode1.avi (between -3 and +3 standard deviations, all other modes kept at zero)
Mode 2 (18.7%): zmode2.avi (between -3 and +3 standard deviations, all other modes kept at zero)

2D Shape-Free Face Images:

Data: ~40 face images landmarked with ~90 landmarks by TJH
Method: standard Cootes et al. approach of piecewise-affine warping each image to the mean landmarks, then applying PCA to the intensities; a code sketch follows below
Mode 1 (?%): sfm1.gif (between -3 and +3 standard deviations, all other modes kept at zero)
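
A sketch of the shape-free patch construction, assuming scikit-image's PiecewiseAffineTransform for the warp (any triangulation-based warp would do the same job) and landmark coordinates in (x, y) = (column, row) order; in practice one would also mask the patch to the convex hull of the landmarks.

  import numpy as np
  from skimage.transform import PiecewiseAffineTransform, warp

  def shape_free_patch(image, landmarks, mean_shape):
      # Warp an image so its landmarks land on the mean shape (a shape-free patch).
      # landmarks, mean_shape: (n_points, 2) arrays in (x, y) order.
      tform = PiecewiseAffineTransform()
      tform.estimate(mean_shape, landmarks)     # warp() wants output -> input coords
      return warp(image, tform)

  def build_intensity_model(images, landmark_sets, mean_shape, n_modes=10):
      # PCA on the intensities of the shape-normalised images.
      g = np.array([shape_free_patch(im, lm, mean_shape).ravel()
                    for im, lm in zip(images, landmark_sets)])
      mean_g = g.mean(axis=0)
      # SVD of the centred data avoids forming the huge pixel covariance matrix
      u, s, vt = np.linalg.svd(g - mean_g, full_matrices=False)
      evals = (s ** 2) / (len(g) - 1)
      return mean_g, vt[:n_modes].T, evals[:n_modes]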

2D Combined Appearance Face Images:

Data: ~40 face images landmarked with ~90 landmarks by TJH
Method: a combined PCA of the shape and intensity mode weights, made commensurate by maximising entropy [explain this] (an alternative is the ratio of eigensums [explain this]; a sketch of that weighting follows below). The older Cootes et al. method (perturbing landmarks and finding the weighting by regression) is a little perverse and doesn't extend to 3D.
Mode 1 (?%): cam2dm1.gif (between -3 and +3 standard deviations, all other modes kept at zero)
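
A sketch of the ratio-of-eigensums weighting, one way of making the shape and grey-level blocks commensurate before a second-level PCA on the concatenated mode weights; the entropy-based weighting is not shown, and the scalar weight below is an assumption about how the eigensum ratio is applied.

  import numpy as np

  def combined_appearance_model(b_shape, b_grey, shape_evals, grey_evals):
      # Second-level PCA on concatenated per-example shape and grey-level weights.
      # b_shape: (n_examples, n_shape_modes), b_grey: (n_examples, n_grey_modes).
      # Scale the shape block so the two blocks carry comparable total variance.
      w = np.sqrt(grey_evals.sum() / shape_evals.sum())
      b = np.hstack([w * b_shape, b_grey])
      mean_b = b.mean(axis=0)
      u, s, vt = np.linalg.svd(b - mean_b, full_matrices=False)
      evals = (s ** 2) / (len(b) - 1)
      return w, mean_b, vt.T, evals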

3D Combined Appearance Face Surfaces:

Data type: textured 3D surfaces (VRML mesh + separate texture image)
Example: rick.gif (animation of rotation to show 3D nature)

Data: ~30 face surfaces landmarked with ~20 landmarks by TJH
Method: TJH's older method (Procrustes with scale removed + PCA on the 3D landmarks; TPS-warping the 3D surfaces to the mean landmarks; averaging the vertices to form a mean mesh; combining the PCA of the 3D landmark modes with the earlier 2D grey-level model modes using the ratio of eigensums; examples are generated by TPS-warping the mean mesh to the desired 3D landmarks and texture-mapping the synthesised image). A sketch of the 3D TPS warp follows below.
Mode 1 (?%): cam3dm1.gif (between -3 and +3 standard deviations, all other modes kept at zero)
Mode 2 (?%): cam3dm2.gif (between -3 and +3 standard deviations, all other modes kept at zero)
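
A sketch of the 3D thin-plate spline warp used to pull each surface's vertices onto the mean landmarks (and, with the landmark sets swapped, to warp the mean mesh out to a desired landmark configuration); in 3D the thin-plate kernel reduces to U(r) = r. Assumes the landmarks are not coincident or all coplanar.

  import numpy as np

  def tps_warp_3d(src_landmarks, dst_landmarks, points):
      # Warp arbitrary 3D points with the thin-plate spline that carries
      # src_landmarks onto dst_landmarks; inputs are (n, 3) and (m, 3) arrays.
      n = len(src_landmarks)
      K = np.linalg.norm(src_landmarks[:, None] - src_landmarks[None, :], axis=2)
      P = np.hstack([np.ones((n, 1)), src_landmarks])
      L = np.zeros((n + 4, n + 4))
      L[:n, :n] = K                             # U(r) = r in 3D
      L[:n, n:] = P
      L[n:, :n] = P.T
      rhs = np.zeros((n + 4, 3))
      rhs[:n] = dst_landmarks
      coeffs = np.linalg.solve(L, rhs)          # spline weights + affine part
      w, a = coeffs[:n], coeffs[n:]
      U = np.linalg.norm(points[:, None] - src_landmarks[None, :], axis=2)
      return U @ w + np.hstack([np.ones((len(points), 1)), points]) @ a

Each surface would be resampled via tps_warp_3d(example_landmarks, mean_landmarks, example_vertices); synthesised examples go the other way.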

Data: 17 scans of a single person (Marcus) in a variety of facial expressions landmarked with 71 landmarks by TJH
Method: TJH's Dense Surface Model (Procrustes with scale left in on the 3D landmarks; TPS-warping the 3D surfaces to the mean landmarks; dense correspondence via a nearest-point search from a base mesh; inverse TPS-warping the resampled meshes back to their original positions; Procrustes on the meshes; PCA on the meshes; combining the PCA of these modes with the earlier 2D grey-level model modes using the ratio of eigensums). A sketch of the dense-correspondence and PCA steps follows below.
Mode 1 (29.8%): marcusm1.gif (between -3 and +3 standard deviations, all other modes kept at zero)
Mode 2 (18.4%): marcusm2.gif (between -3 and +3 standard deviations, all other modes kept at zero)
Mode 3 (13.2%): marcusm3.gif (between -3 and +3 standard deviations, all other modes kept at zero)
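
A sketch of the dense-correspondence and PCA steps, using a vertex-to-vertex nearest-point search via scipy's cKDTree; the inverse TPS warp back to the original positions and the second Procrustes pass described above are left out for brevity.

  import numpy as np
  from scipy.spatial import cKDTree

  def dense_correspondence(base_vertices, warped_vertices):
      # For each base-mesh vertex, take the nearest vertex of the TPS-warped
      # surface, giving every example the base mesh's vertex count and ordering.
      _, idx = cKDTree(warped_vertices).query(base_vertices)
      return warped_vertices[idx]

  def pca_on_meshes(resampled_meshes):
      # PCA over the densely corresponded meshes, each an (n_vertices, 3) array.
      x = np.array([m.ravel() for m in resampled_meshes])
      mean = x.mean(axis=0)
      u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
      evals = (s ** 2) / (len(x) - 1)
      return mean, vt.T, evals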

Data: 31 images of a single person (Siben) in an amusing variety of facial expressions landmarked with 21 landmarks by TJH
Method: TJH's Dense Surface Model (DSM)
Mode 1 (?%): sibenmode1.gif (between -3 and +3 standard deviations, all other modes kept at zero)
Mode 2 (?%): sibenmode2.gif (between -3 and +3 standard deviations, all other modes kept at zero)
Mixed mode: sibenmixedmode.gif (mode 1 = -2.0, mode 2 = +2.0, mode 3 between -3 and +3 standard deviations, all others zero)

Using 3D Surface Models to Fit to New Images:

Data: DSM (without texture) built using the 17 scans of Marcus as above
Method: the fit is initialised with the average mesh in an approximate position; a combination of ICP and ASM search (using the DSM) translates, scales, rotates and deforms the mesh onto the target mesh. A sketch of the fitting loop follows below.
fit1.mpg: in-training-set search to image of Marcus with his mouth open (target in semitransparent grey, model in green)
fit4.mpg: fit to unseen example (other person) (target in semitransparent grey, model in green)
fit5.mpg: in-training-set search to image of Marcus mid-blink (target in semitransparent grey, model textured with average image to better show fitting - eyes close to match target)
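
A sketch of the kind of ICP/ASM loop described above, assuming the texture-free DSM is given as a mean vector plus modes and eigenvalues and that the meshes start roughly aligned; the details of the actual search will differ.

  import numpy as np
  from scipy.spatial import cKDTree

  def similarity_align(src, dst):
      # Least-squares similarity transform mapping the src points onto dst.
      sc, dc = src.mean(axis=0), dst.mean(axis=0)
      a, b = src - sc, dst - dc
      u, s, vt = np.linalg.svd(a.T @ b)
      d = np.ones(len(s))
      if np.linalg.det(u @ vt) < 0:             # avoid a reflection
          d[-1] = -1
      rot = u @ np.diag(d) @ vt
      scale = (s * d).sum() / (a ** 2).sum()
      return scale * a @ rot + dc

  def fit_dsm(mean, modes, evals, target_vertices, n_iters=20):
      # mean: (3*n,), modes: (3*n, k), evals: (k,), target_vertices: (m, 3).
      b = np.zeros(len(evals))
      tree = cKDTree(target_vertices)
      for _ in range(n_iters):
          model = (mean + modes @ b).reshape(-1, 3)
          _, idx = tree.query(model)            # ICP: closest target points
          corr = similarity_align(target_vertices[idx], model)
          b = modes.T @ (corr.ravel() - mean)   # ASM: project onto the modes
          b = np.clip(b, -3 * np.sqrt(evals), 3 * np.sqrt(evals))
      return (mean + modes @ b).reshape(-1, 3), b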

Using 3D Combined Appearance Surfaces to Model Aging:

Data: ~70 images of different people (ages 1 - 60) landmarked with 9 landmarks by TJH
Method: the DSM gives a scatter of examples in n dimensions; kernel smoothing on age is used to find the average 1-year-old, 2-year-old, 3-year-old, etc. (the kernel is triangular with a width of 20 years, so the estimates at the extremes of the age range are pulled in towards the middle). A sketch of the smoothing follows below.
aging.avi: note the large shape change in youth and very little later in life, the more subtle skin-tone changes with age, and that the nose grows throughout life.
ageline.jpg: screen capture of a 3D scatter plot (examples coloured by age, blue = young and red = old, on the first three modes, with mode 1 stretched for clarity) and the estimated age trajectory in green.
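
A sketch of the kernel smoothing, assuming the combined-model weights and ages are already to hand; whether the stated 20-year width is the half-width or the full support of the triangular kernel is an assumption here (half-width below).

  import numpy as np

  def age_trajectory(weights, ages, target_ages, half_width=20.0):
      # weights: (n_examples, n_modes) combined-model weights, ages: (n_examples,).
      # Returns the kernel-smoothed average face (in model space) at each target age.
      out = []
      for t in target_ages:
          w = np.clip(1.0 - np.abs(ages - t) / half_width, 0.0, None)  # triangular
          out.append((w[:, None] * weights).sum(axis=0) / w.sum())
      return np.array(out)

  # e.g. age_trajectory(b_all, ages, np.arange(1, 61)) for an age sweep like aging.avi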