What if photographers had robotic assistants capturing happy moments at events such as weddings, birthdays, or graduation ceremonies? 'Robographers' is a preliminary effort aimed at developing such autonomous assistants, ones that not only take photos but also recognize human expressions accurately. The core of the project is facial expression recognition and accurate head pose tracking using a swarm of robots. Instead of working individually, a swarm of mobile robots works collaboratively to estimate human expressions and capture photographs accordingly.
By using multiple cameras, it is possible to improve the estimate of facial expression and head pose by fusing redundant noisy measurements and by better handling real-world issues such as occlusion. By adding mobility to the cameras, making each camera a dynamically actuated information source, it is possible to improve the estimate further and to keep tracking the human as they move through an uncontrolled environment.
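As a rough illustration of why redundant measurements help, the minimal Python sketch below fuses noisy readings of a single head-pose quantity (here, the yaw angle) with inverse-variance weighting, so that less reliable or partially occluded cameras contribute less. The numeric readings and noise levels are made up for illustration and are not measurements from the project.

```python
import numpy as np

def fuse_measurements(estimates, variances):
    """Fuse redundant noisy measurements of the same quantity
    (e.g., head yaw angle seen by several cameras) using
    inverse-variance weighting: more reliable cameras get more weight."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused = np.sum(weights * estimates) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)  # fused estimate is less uncertain than any single camera
    return fused, fused_variance

# Hypothetical yaw readings (degrees) of the same head from three cameras;
# the third camera is partially occluded and therefore much noisier.
yaw_readings = [12.0, 15.5, 9.0]
yaw_variances = [4.0, 2.0, 25.0]
fused_yaw, fused_var = fuse_measurements(yaw_readings, yaw_variances)
print(f"fused yaw = {fused_yaw:.1f} deg, variance = {fused_var:.2f}")
```

Note how the fused variance is smaller than the smallest individual variance, which is the quantitative sense in which adding cameras improves the estimate.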
The Robographers project grew out of the shared vision of a group of students who believe that robots perform better when they collaborate with each other and with humans, while ensuring the safety and comfort of the people in the environment. The goals of the project are as follows:
1. The first component involves evaluating multiple cameras and applying computer vision and sensor fusion techniques, using ROS and IntraFace, to combine the information from multiple static cameras into a more accurate facial pose and expression estimate (see the fusion-node sketch after this list). IntraFace is facial expression recognition software developed by the Human Sensing Lab at Carnegie Mellon University.
2. The second component involves designing and building accurate pan-tilt units for the cameras and incorporating multi-camera head tracking into the sensor fusion software developed in the first component (a simple pan-tilt control sketch also follows this list).
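For the first goal, the following is a minimal sketch of how a multi-camera fusion node might look in ROS. The topic names, the use of geometry_msgs/PoseStamped for per-camera head poses, and the assumption that each camera driver republishes IntraFace output in a common world frame are illustrative assumptions rather than the project's actual interfaces; the fusion here is a plain average rather than the full sensor fusion described above.

```python
#!/usr/bin/env python
# Sketch of a ROS node that averages head-pose estimates from several cameras.
# Topic names and message choices are assumptions made for illustration.
import rospy
from geometry_msgs.msg import PoseStamped

CAMERA_TOPICS = ["/camera_1/head_pose", "/camera_2/head_pose", "/camera_3/head_pose"]

class HeadPoseFuser(object):
    def __init__(self):
        self.latest = {}  # topic name -> most recent PoseStamped from that camera
        self.pub = rospy.Publisher("/fused_head_pose", PoseStamped, queue_size=1)
        for topic in CAMERA_TOPICS:
            rospy.Subscriber(topic, PoseStamped, self.callback, callback_args=topic)

    def callback(self, msg, topic):
        self.latest[topic] = msg
        self.publish_fused()

    def publish_fused(self):
        poses = list(self.latest.values())
        if not poses:
            return
        fused = PoseStamped()
        fused.header.stamp = rospy.Time.now()
        fused.header.frame_id = "world"  # assumes all cameras report in a common frame
        n = float(len(poses))
        fused.pose.position.x = sum(p.pose.position.x for p in poses) / n
        fused.pose.position.y = sum(p.pose.position.y for p in poses) / n
        fused.pose.position.z = sum(p.pose.position.z for p in poses) / n
        # Orientation is simply copied from one camera here; a fuller
        # implementation would average quaternions or weight by confidence.
        fused.pose.orientation = poses[0].pose.orientation
        self.pub.publish(fused)

if __name__ == "__main__":
    rospy.init_node("head_pose_fusion")
    HeadPoseFuser()
    rospy.spin()
```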
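For the second goal, a pan-tilt unit can keep a tracked face near the image center with a simple proportional law. The sketch below is a hypothetical illustration; the image size, gains, and the idea of commanding absolute pan/tilt angles are assumptions and do not describe the project's actual hardware interface.

```python
# Proportional pan-tilt controller sketch: steer the camera so a detected
# face drifts toward the image center. All constants are illustrative.

IMAGE_WIDTH, IMAGE_HEIGHT = 640, 480
KP_PAN, KP_TILT = 0.002, 0.002  # proportional gains (radians per pixel of error)

def pan_tilt_command(face_x, face_y, current_pan, current_tilt):
    """Return updated pan/tilt angles (radians) that move the camera so the
    face detected at pixel (face_x, face_y) moves toward the image center."""
    error_x = face_x - IMAGE_WIDTH / 2.0   # positive: face is right of center
    error_y = face_y - IMAGE_HEIGHT / 2.0  # positive: face is below center
    new_pan = current_pan - KP_PAN * error_x
    new_tilt = current_tilt - KP_TILT * error_y
    return new_pan, new_tilt

# Example: face detected at pixel (500, 200) while the unit is at its neutral pose.
print(pan_tilt_command(500, 200, current_pan=0.0, current_tilt=0.0))
```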