Modeling and Analysis for the Spring Validation Experiment

The analysis and testing were carried out with the requirements in mind, so we restate the requirements here. The two major subsystems to be verified were

  1. Human Detection and Navigation
  2. Face and Expression Detection           

Subsystem 1: Human Detection & Navigation

Requirements to be fulfilled

  •  Detect human in the vicinity
  •  Approach the human once detected
  •  Move as a flock at a speed of 15 cm/s
  •  Stop 1 meter away from the human
  •  Align around the human at -45, 0, and +45 degrees (geometry sketched after this list)
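
To make the last two requirements concrete, the following sketch computes the three goal poses they imply. This is a minimal illustration under stated assumptions, not the team's actual planner: it presumes the human's position and heading are already estimated in a shared world frame, and every name in it (STANDOFF_M, flock_goal_poses, ...) is invented for this example.

    import math

    STANDOFF_M = 1.0                    # required stopping distance from the human
    FORMATION_DEG = (-45.0, 0.0, 45.0)  # required alignment angles around the human

    def flock_goal_poses(human_xy, human_heading_rad):
        """Return one (x, y, yaw) goal per robot on a 1 m arc around the
        human, spread at -45/0/+45 degrees relative to the human's heading;
        each robot ends up facing the human."""
        hx, hy = human_xy
        goals = []
        for offset in FORMATION_DEG:
            a = human_heading_rad + math.radians(offset)  # bearing human -> goal
            gx = hx + STANDOFF_M * math.cos(a)
            gy = hy + STANDOFF_M * math.sin(a)
            yaw = math.atan2(hy - gy, hx - gx)            # face back toward the human
            goals.append((gx, gy, yaw))
        return goals

    # Example: human at the origin, facing along +x.
    for gx, gy, yaw in flock_goal_poses((0.0, 0.0), 0.0):
        print("goal: x=%5.2f  y=%5.2f  yaw=%6.1f deg" % (gx, gy, math.degrees(yaw)))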

The following questions were framed such that answering them would confirm fulfilment of the requirements above. If the subsystem answers all of them with a "yes" in performance, we can confidently say that it works. A sketch of how the quantitative criteria could be scored over repeated trials follows the list.

Testing Criteria for Human detection and navigation subsystem:

  1. Does the central robot rotate in place?
  2. Does the central robot stop rotating in place immediately when an AprilTag is detected, in at least 70 percent of trials?
  3. Does the robot flock move toward the AprilTag?
  4. Does the central robot stop 1 meter away from the human?
  5. How much does the final distance deviate from 1 m?
  6. Do the robots align themselves around the human?
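
Criteria 2, 4, and 5 are quantitative, so they are naturally scored over repeated trials. The snippet below is one hedged way to tabulate such a log; the trial tuples shown are dummy values for illustration only, not measured results.

    import statistics

    def score_trials(trials, stop_rate_req=0.70, standoff_m=1.0):
        """trials: one (stopped_on_detection, final_distance_m) tuple per run.
        Reports the stop-on-detection rate and the deviation from 1 m."""
        stop_rate = sum(1 for ok, _ in trials if ok) / len(trials)
        deviations = [abs(d - standoff_m) for ok, d in trials if ok]
        print("stop-on-detection rate: %.0f%% (requirement: >= %.0f%%)"
              % (100 * stop_rate, 100 * stop_rate_req))
        print("mean |final distance - 1 m|: %.3f m" % statistics.mean(deviations))
        print("max  |final distance - 1 m|: %.3f m" % max(deviations))

    # Dummy trial log, for illustration only.
    score_trials([(True, 1.04), (True, 0.97), (False, 1.20), (True, 1.02),
                  (True, 0.95), (True, 1.08), (True, 0.99), (False, 1.15)])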

Subsystem 2: Face & Smile Expression Detection

Requirements checked

  • Detect the face
  • Pan-tilt units track the face using the head-pose estimate from IntraFace (a tracking sketch follows this list)
  • Accurate smile-expression detection and head-pose detection
  • The best photo is captured
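
The requirement above specifies tracking from IntraFace's head-pose estimate; as one concrete illustration, the sketch below uses the simpler pixel-offset formulation of the same idea, since criterion 2 in the next list asks precisely for the face to sit at the center of the frame. The resolution, gains, sign conventions, and limits are all assumptions made for this example.

    FRAME_W, FRAME_H = 640, 480    # assumed camera resolution
    KP_PAN, KP_TILT = 0.05, 0.05   # illustrative gains, degrees per pixel
    PAN_LIMIT = TILT_LIMIT = 90.0  # assumed mechanical range, +/- degrees

    def pan_tilt_step(face_cx, face_cy, pan_deg, tilt_deg):
        """One proportional step: nudge the pan-tilt unit so the detected
        face center (face_cx, face_cy, in pixels) moves toward the center
        of the frame. Correction signs depend on how the unit is mounted."""
        err_x = face_cx - FRAME_W / 2.0
        err_y = face_cy - FRAME_H / 2.0
        pan_deg = max(-PAN_LIMIT, min(PAN_LIMIT, pan_deg - KP_PAN * err_x))
        tilt_deg = max(-TILT_LIMIT, min(TILT_LIMIT, tilt_deg - KP_TILT * err_y))
        return pan_deg, tilt_deg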

Similarly, questions were framed for this subsystem such that answering them would confirm fulfilment of the requirements above. If the subsystem answers all of them with a "yes" in performance, we can confidently say that it works. One possible way to operationalize the "best photo" criterion is sketched after the list.

Testing criteria for face detection and expression detection

  1. Does IntraFace detect the face at least 80 percent of the time?
  2. Do the pan-tilt units adjust themselves such that the face is in the center of the frame?
  3. Do we get an expression output every time?
  4. Is the best photo, chosen on the basis of expression and head pose, captured every time?
  5. How long does the above task take?
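
Criterion 4 presupposes a ranking of candidate frames. Since "best" combines expression and head pose, one plausible scoring rule is sketched below; the weighting and the example values are illustrative assumptions, not the rule used by the system.

    def photo_score(smile_conf, yaw_deg, pitch_deg, pose_weight=0.02):
        """Higher is better: reward smile confidence (assumed in [0, 1])
        and penalize head poses far from frontal. pose_weight is an
        illustrative knob, not a value from this report."""
        return smile_conf - pose_weight * (abs(yaw_deg) + abs(pitch_deg))

    def pick_best(frames):
        """frames: iterable of (frame_id, smile_conf, yaw_deg, pitch_deg),
        e.g. accumulated while the pan-tilt units track the face."""
        return max(frames, key=lambda f: photo_score(f[1], f[2], f[3]))[0]

    # Made-up values: frame 2 wins (strong smile, near-frontal pose).
    print(pick_best([(1, 0.40, 5.0, 2.0), (2, 0.90, 3.0, 1.0), (3, 0.85, 30.0, 10.0)]))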

Complete System

Requirements checked

  • All the requirements mentioned for the above sub-systems
  • Face detection starts only after the flock has aligned itself around the person
  • The same person is not photographed again
  • The system restarts after one successful photo (a supervisor sketch covering these behaviors follows this list)
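
Taken together, these integration requirements describe a small supervisory state machine. The sketch below is one hedged reading of it, not the actual controller; in particular, identifying a person by their AprilTag ID is an assumption made only for this illustration.

    SEARCH, APPROACH, ALIGN, SHOOT = range(4)  # supervisor states
    photographed = set()                       # IDs of people already photographed

    def step(state, tag_id, at_standoff, in_formation, photo_taken):
        """One tick of a simplified supervisor. tag_id identifies the person
        (assumption: each person carries a unique AprilTag)."""
        if state == SEARCH:
            # Central robot rotates in place until an unphotographed tag appears.
            if tag_id is not None and tag_id not in photographed:
                return APPROACH
            return SEARCH
        if state == APPROACH:
            # The flock moves toward the tag and stops 1 m away.
            return ALIGN if at_standoff else APPROACH
        if state == ALIGN:
            # Spread to -45/0/+45 degrees; face detection must NOT start yet.
            return SHOOT if in_formation else ALIGN
        if state == SHOOT:
            # Pan-tilt tracking, expression detection, and capture run only here.
            if photo_taken:
                photographed.add(tag_id)  # never photograph this person again
                return SEARCH             # autonomous restart after a success
            return SHOOT
        return SEARCH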

Testing criteria for the complete system

  1. Does face detection start after the robots have reached the person of interest?
  2. Do the pan-tilts start tracking only when the robots have stopped?
  3. Does the swarm avoid photographing the same person twice?
  4. Does the system restart autonomously after each successful run?