Instrumentation Laser Optique Appliquée Rapide

Seeing the Invisible – High speed imaging and visualisation techniques, Motion capture – IMT Mines Alès CERIS I3A EuroMov DHM

Motion Capture

Post under development, as of June 2023

Editor's note: This post reflects my own thinking about motion capture and its applications, nourished by fruitful discussions with friends and colleagues from abroad and from my institution.

Motion capture, also known as MOCAP, is dedicated to collecting the data needed to compute the displacement of the object under investigation. This object can be a rigid solid or a complex assembly of different parts, including living subjects. Knowledge of the trajectories/displacements enables the biomechanical analysis of the subject, or the modification of its appearance for animation purposes. MOCAP can also be considered a form of computational photography, as the first captured frame is completely different from the final result after computational pre- and post-processing. MOCAP data can also be used to drive digital twins or to generate digital companions… or to magnify and transmit incredible gestures, from artists to top-level athletes.

Empowering spinning wheel of computational photography and motion capture

Summary

Main Goals

Some History

Motion Cartoons

Visual Effects VFX

Marker based systems

Markerless systems

Markerless motion capture: Other

Practical Systems

Main Goals 2020

  • Practical systems for capturing motion: with or without optics, marker-based or markerless
  • Allow (some) editing of motion
    • Can now be used as measuring tools / reference data
    • Applications:
      • Digital Health in Motion: Gait, Prostheses, Bionics, Well-Being, Performance, Synchronization…
      • Sports and leisure: Optimal position, Interactions, Trajectories…
      • VFX (Visual Effects) Cinema: 3D Characters and Avatars
      • Cobotics: Fitting models on real 3D motion, Exoskeletons
      • e-Games and Virtual Reality
      • HMI (Human-Machine Interface)… and more

Some history

Motion was rendered on screen by Louis and Auguste Lumière with « Sortie de l’Usine Lumière à Lyon » (45 s) in 1895…

Sortie de l’Usine Lumière

… a claim contested by Edison and Dickson, whose Kinetoscope box showed Dickson’s greeting (10 s) in motion… but not on a screen.

Dickson’s greeting

In 1901, Étienne-Jules Marey froze the movement of smoke from 57 channels flowing past various objects, producing one of the first experimental studies in fluid mechanics.

Triangular prism presenting its edges to the flow, fourth and last version of the smoke machine, fitted with 57 channels. Modern enlargement from the gelatin silver bromide glass-plate negative (1901), Paris, Cinémathèque française.

The first motion analysis by photography, however, was realized by Eadweard Muybridge at « The Farm », Stanford (1878). Sixteen « old-fashioned » cameras (one shot each, on silver halide glass plates) were triggered by trip ropes as the horse ran in front of a white-painted wall.

Leland Stanford kept funding research afterwards and founded Stanford University on « The Farm » site.

  • This can now be done in colour slow motion, using dedicated 4×4 vehicles with a counterbalanced seat and a gimbal-stabilized camera.
Photo courtesy BBC Unplugged, 2013
  • or even live, for biomechanical analysis during competitions, by placing reflective markers on the rider and the horse, surrounded by cameras (illustration from Qualisys, Göteborg, Sweden).
Photos from Qualisys.com

Motion Cartoons

The first animated cartoon, « Fantasmagorie », was delivered by Émile Cohl in France (1908):

  • Each frame drawn on paper
  • Each frame shot onto negative film => a picture with a blackboard look
  • Made up of 700 drawings, each double-exposed (animated « on twos »)
  • Running time ~ 2 minutes

Then the rotoscope appeared in 1915, thanks to Max Fleischer.

The rotoscope, Fleischer, 1915.

Animators traced characters over a recorded actor’s motion, frame by frame. Rotoscopy was used for the human characters in Snow White and the Seven Dwarfs (1937).

In 1937, the multiplane camera made it possible to displace foreground and background independently to simulate the perception of relief; at the time, the illusion was perfect.

It creates an illusion of depth, of true 3D.

Background and foreground moving in opposite directions create the effect of a camera rotation.

Animated puppets and robotics

In 1993, the movie Jurassic Park was created thanks to animated dinosaur puppets: stop-motion armatures equipped with sensors measuring the joint angles. Computer graphics models were then driven by key frames obtained from the armatures.

Dinosaur puppets for the first Jurassic Park (1993)

Virtual visual effects… VFX

Visual effects using « green background » shooting, image synthesis and image computing really appeared in the late 1990s with The Matrix (1999). Combining multiple viewing angles from high-resolution digital still cameras producing crisp images made it possible to show the same movement from different angles at the same instant. In 2021, in « The Matrix Resurrections », high-speed cameras even replaced the single digital shots of the 1990s… Facial expression capture dedicated to character performance was introduced in particular with Avatar (J. Cameron, 2009). These integrations require huge graphics processing units and tremendous data storage.

Visual effects in The Matrix and Avatar

Basic and complex motion capture with passive markers

Photogrammetric observation of passive markers appeared at the beginning of the 2000s. Basic retroreflective markers are placed on the subject and reflect the light shone from around the camera lens straight back to that same lens. If a marker is seen by at least two cameras, its 3D location can be computed after intrinsic and extrinsic calibration of the measurement volume. More complex markers are also used in the film industry to ease the localisation of the subject’s parts, e.g. for Marvel’s Iron Man in the late 2010s.

Single and complex passive markers
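To give an idea of the geometry involved, here is a minimal numpy sketch of linear (DLT) triangulation from two calibrated views. It assumes the 3×4 projection matrices P1 and P2 come from the intrinsic/extrinsic calibration and that the 2D marker centroids have already been detected; it is a textbook sketch, not the pipeline of any commercial system.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker seen by two calibrated cameras.

    P1, P2 : 3x4 projection matrices (intrinsics @ extrinsics) obtained from
             the calibration of the measurement volume.
    x1, x2 : (u, v) pixel coordinates of the marker centroid in each view.
    Returns the 3D point in the calibration reference frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise to (x, y, z)
```

With more than two cameras the same construction simply stacks two rows per view, which is why adding cameras improves both coverage and accuracy.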

Active markers

Active markers appeared in the 2010s: driven LEDs are placed on the subject instead of retroreflective markers (e.g. PhaseSpace). The LEDs flash in synchronization with the cameras’ exposure clock.

Phasespace active markers

More recently, Qualisys also developed dedicated active (or passive) markers to ensure rigid-body tracking and skeleton generation using only six marker clusters (Traqrs, Qualisys, Göteborg, Sweden).

Traqr VR, passive or active marker cluster for Virtual Reality rendering, Qualisys, Sweden
QTM streaming in VR, from AIHM, IMT Mines Alès, EuroMov DHM.

The setup was also used to measure the synchronization of « dancers » listening to the Macarena… in the AIHM facility.

Markerless motion capture: Optical

The main problem with marker-based setups is positioning the markers on the subjects… when that is possible at all. For « in the wild » measurements, and for non-cooperative subjects, markers must be avoided. Thanks to live image processing, even on board mobile phones, either compact systems with low-cost cameras or huge systems with many cameras can be used. The user then trades the accuracy of the 3D results against the simplicity and cost of the system. A recent paper from our team (Desmarais et al., 2021) reviews the algorithms available for human pose estimation. This research is part of the KeenMT project, which aims at 3D estimation with low-cost systems in harsh environments, with measurement errors controlled by comparison to a gold standard, e.g. a marker-based Qualisys MOCAP.

Example of video analysis rendering using HumanPose
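As an illustration of how accessible markerless 2D pose estimation has become, the sketch below uses the open-source MediaPipe library on an ordinary video file (the file name walking.mp4 is a hypothetical placeholder). It is just one off-the-shelf estimator among those reviewed by Desmarais et al., not the KeenMT pipeline.

```python
import cv2
import mediapipe as mp

# Minimal markerless pose sketch with MediaPipe (one estimator among many).
pose = mp.solutions.pose.Pose(static_image_mode=False)

cap = cv2.VideoCapture("walking.mp4")  # hypothetical input video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # MediaPipe expects RGB images; OpenCV delivers BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # 33 landmarks, normalised to [0, 1]; scale to pixel units.
        for lm in results.pose_landmarks.landmark:
            print(f"x={lm.x * w:6.1f}px  y={lm.y * h:6.1f}px  "
                  f"visibility={lm.visibility:.2f}")
cap.release()
pose.close()
```

Such single-camera estimators output 2D (or rough relative 3D) joints; metric 3D with controlled errors still requires multiple views and calibration, hence the comparison to a marker-based gold standard.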

Carnegie Mellon University (Pittsburgh, Pennsylvania) also developed the digital dome… using 480 cameras with simultaneous streams to ensure 3D recording… in a dedicated room.

The digital Dome, Carnegie Mellon University, Pittsburgh, Pennsylvania

Markerless motion capture: Other

At the AIHM facility, markerless mocap can also be achieved without optics.

Markerless, ecological mocap can be done with IMUs (Inertial Measurement Units), which retrieve positions from accelerometers, gyroscopes, magnetometers and even barometers, coupled with powerful data-processing software, e.g. MVN Awinda from Xsens.

IMU main characteristics:

17 g, 47 × 30 × 13 mm, including:
– 3-axis accelerometer: ±160 m/s²
– 3-axis gyroscope: ±2000 deg/s
– 3-axis magnetometer: ±1.9 Gauss
– 1 barometer
Connection to the PC base station by Wi-Fi, up to 50 m range in open areas

Xsens Awinda + MVN software: 17 IMUs + 1 IMU for a prop (object linked to a chosen segment)

Example of « walking » reconstruction with 17 IMUs during a single-level walk.

Accelerations, velocities, positions and angles can be computed and displayed in real time for the various segments.
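For intuition only, here is a naive sketch of how positions can be recovered from accelerations by double integration. The real MVN engine fuses gyroscope and magnetometer data and applies biomechanical constraints, because plain integration drifts within seconds; the function and variable names below are illustrative assumptions.

```python
import numpy as np

def integrate_segment(acc_world, dt=1 / 100):
    """Naive strapdown integration for one IMU-equipped segment (toy example).

    acc_world : (N, 3) array of accelerations already rotated into the world
                frame (using the fused orientation) with gravity subtracted.
    dt        : sampling period in seconds (100 Hz here).
    Returns velocity (m/s) and position (m) over time; both drift quickly
    without the zero-velocity updates used by real products such as MVN.
    """
    vel = np.cumsum(acc_world * dt, axis=0)  # integrate acceleration -> velocity
    pos = np.cumsum(vel * dt, axis=0)        # integrate velocity -> position
    return vel, pos
```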

MOCAP – Applications in the IMT Mines Alès, AIHM facility

A basic application: study of the upper right body (head, arm, hand, including fingers). The focus is put on finger movements by pointing dedicated cameras at the hand region for extended resolution, while the rest of the upper body is measured by the other cameras. The measurement volume is about 12 m³. The average 3D accuracy is about 0.6 mm at a 100 Hz recording rate.

Fig. 1: 8 Miqus M3 cameras with Qualisys Track Manager (Qualisys, Sweden). Display of the calibrated volume, the axes of the calibration reference system and the positions of the cameras.
Fig. 2: Positioning of the retroreflective markers (10 mm and 4 mm diameter, shown without and with flash) for the study of right-arm movements.
Fig. 3: Detection and validation of the markers (head, shoulder, arm, hand, fingers).
Fig. 4: Example trajectory of the first joint of the little finger: coordinates (x, y, z) versus frame number, 10 s capture at 100 Hz. Distances in mm, expressed in the calibration reference system.
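A trajectory plot like the one in Fig. 4 can be reproduced from data exported out of QTM. The sketch below assumes a hypothetical tab-separated export (little_finger_joint1.tsv, columns x, y, z in mm, one row per frame, one header line); adjust to the actual export format.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical QTM export: columns x, y, z in mm, one row per captured frame.
data = np.loadtxt("little_finger_joint1.tsv", skiprows=1)
frames = np.arange(len(data))  # 10 s at 100 Hz -> 1000 frames

fig, ax = plt.subplots()
for i, label in enumerate(("x", "y", "z")):
    ax.plot(frames, data[:, i], label=label)
ax.set_xlabel("frame number (100 Hz capture)")
ax.set_ylabel("position (mm, calibration reference system)")
ax.legend()
plt.show()
```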

Best wishes for 2024

Xmas box under control

11 Miqus M3 cameras, 2 Miqus Video cameras, and Qualisys Track Manager

Post under development, as of December 2024

I’MTech: Capture de mouvement pour la santé et le sport (Motion capture for health and sport)

References

Open lectures
  • Leonid Sigal, University of British Columbia, Lecture 3, 15-869, Human Motion Modeling and Analysis, Fall 2012
  • Kavita Bala, Cornell University, Lecture 32 CS4620/5620 Fall 2012
  • Parag Chaudhuri, IIT Bombay, CS 775: Advanced Computer Graphics, Lecture 11 : Motion Capture
  • Prof. Vladlen Koltun, Stanford University, Motion Capture CS 448D: Character Animation

Papers
  • Desmarais, Y., Mottet, D., Slangen, P. and Montesinos, P., “A review of 3D human pose estimation algorithms for markerless motion capture,” Comput. Vis. Image Underst. 212, 103275 (2021).