Performance

Full system test on 4/12/21, with the visualizer on the left and test footage on the right.

SVD Performance

Required validations:

  1. Locomotion (M.P.5)
    • Procedure:
      • Operator powers on the robot and drives it forwards and backwards with the remote control at a minimum speed of 0.25 m/s.
      • Operator turns the robot in place.
    • Validation:
      • Robot connects to and is controllable by the remote control.
      • Robot moves forwards and backwards at a minimum speed of 0.25 m/s.
      • Robot turns in place at a minimum angular speed of 36 degrees per second.
    • Performance: The robot connected to the remote control successfully and comfortably exceeded both specifications: forward/backward motion at 0.25 m/s and turning at 36 degrees per second (see the speed-measurement sketch after this list).
  2. Perception (M.P.4)
    • Procedure:
      • Operator initializes perception system and visualizer.
      • Robot is placed in one of two scenarios determined by a coin flip.
    • Validation:
      • Jetson AGX connects to the RealSense, Seek Thermal, and VLP-16.
      • RealSense depth, RGB, and Seek Thermal streams are operational and stream to the visualizer on the operator’s laptop.
      • Visualizer displays the 2D map and video streams at a frequency of at least 10 Hz.
      • Subsystem health status monitor is visible and displays an “OK” status for all components.
    • Performance: All data streams reached the visualizer on the operator’s laptop at a frequency of at least 10 Hz. Additionally, all major subsystems were successfully monitored, and their health status was displayed continuously (see the rate-monitor sketch after this list).
  3. SLAM and Basic Human Detection (M.P.0, M.P.1, M.P.3)
    • Procedure:
      • Operator moves the robot at a maximum speed of 0.2 m/s through the obstacles in the room for at least 2 minutes, using only the 2D map and video streams displayed on the visualizer.
      • Robot detects and localizes a single human (location determined by coin flip).
    • Validation:
      • 2D map of the room for up to 10 m from the robot is displayed and updated on the visualizer; the shapes of major obstacles are captured in the map.
      • The position and travelled path of the robot are displayed on a 2D map on the visualizer.
      • The robot detects and localizes a human lying down horizontally and facing the robot within 5 seconds of complete, unobstructed entrance into the RGB FOV.
      • The location of the human is displayed on the 2D map of the visualizer. The centroidal accuracy of the localization is within 0.5 meters relative to the mapped room geometry.
    • Performance: The robot localized humans to within 0.5 m and detected people lying down after navigating through an obstacle field. We showed that the robot successfully detected people and remembered their locations after leaving the area (see the centroid-error sketch after this list).
  4. Human Detection Conditionality (M.P.0, M.P.1)
    • Procedure:
      • Robot is placed at a distance of 7 m, facing a single well-lit human.
      • The human transitions between various poses to demonstrate the failure and success modes of human detection at this distance.
    • Validation:
      • Robot is able to detect and localize the subject within 5 seconds in each of the following positions at 7 m distance: standing, seated, and side-lying with a limb separated.
    • Performance: Detailed under the “Component Testing and Results” section of “Documents”.
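
The locomotion thresholds above (0.25 m/s linear, 36 deg/s angular) are simple to check from logged pose data. Below is a minimal sketch of such a check, assuming a hypothetical `get_pose` callback that returns the robot’s (x, y, yaw); it is not the project’s actual tooling.

```python
import math
import time

def measure_speeds(get_pose, duration_s=5.0, period_s=0.1):
    """Estimate mean linear (m/s) and angular (deg/s) speed by sampling
    a pose source. `get_pose` is an assumed callback returning (x, y, yaw)
    in meters/radians, e.g. from wheel odometry or the SLAM estimate."""
    samples = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        samples.append((time.time(), get_pose()))
        time.sleep(period_s)
    if len(samples) < 2:
        raise ValueError("need at least two pose samples")

    dist, turn = 0.0, 0.0
    for (_, (x0, y0, yaw0)), (_, (x1, y1, yaw1)) in zip(samples, samples[1:]):
        dist += math.hypot(x1 - x0, y1 - y0)
        # Wrap the yaw difference into [-pi, pi) before accumulating.
        dyaw = (yaw1 - yaw0 + math.pi) % (2 * math.pi) - math.pi
        turn += abs(dyaw)

    elapsed = samples[-1][0] - samples[0][0]
    return dist / elapsed, math.degrees(turn) / elapsed

# Pass/fail against the M.P.5 thresholds (stub pose source for illustration).
v, w = measure_speeds(get_pose=lambda: (0.0, 0.0, 0.0))
print(f"linear: {v:.2f} m/s (>= 0.25?)  angular: {w:.1f} deg/s (>= 36?)")
```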
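
The perception validation’s 10 Hz criterion reduces to measuring per-stream frame rate over a rolling window and flagging anything that falls below threshold. A minimal sketch follows; the stream names, the `RateMonitor` class, and the “OK”/“STALE” labels are illustrative assumptions rather than the system’s actual health monitor.

```python
import time
from collections import deque

class RateMonitor:
    """Rolling-window frequency estimate for one sensor stream."""

    def __init__(self, window=30):
        self.stamps = deque(maxlen=window)  # timestamps of recent frames

    def tick(self):
        """Call once from each frame callback (RGB, depth, thermal, LiDAR)."""
        self.stamps.append(time.monotonic())

    def hz(self):
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0

# One monitor per stream feeding the visualizer; names are illustrative.
monitors = {name: RateMonitor() for name in ("rgb", "depth", "thermal", "lidar")}

def health_status(min_hz=10.0):
    """Map each stream to 'OK' or 'STALE' against the 10 Hz requirement."""
    return {name: "OK" if mon.hz() >= min_hz else "STALE"
            for name, mon in monitors.items()}
```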
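
Similarly, the 0.5 m centroidal-accuracy criterion amounts to a Euclidean distance between the detected human centroid and a surveyed ground-truth position, both expressed in the map frame. A short sketch with made-up coordinates for illustration (not measurements from the actual test):

```python
import math

def centroid_error(detected_xy, truth_xy):
    """Euclidean error (meters) between a detected centroid and ground
    truth, both expressed in the map frame."""
    return math.hypot(detected_xy[0] - truth_xy[0],
                      detected_xy[1] - truth_xy[1])

# Illustrative values only: detection at (3.2, 1.1) vs. surveyed (3.0, 1.4).
err = centroid_error((3.2, 1.1), (3.0, 1.4))
assert err <= 0.5, f"localization error {err:.2f} m exceeds the 0.5 m budget"
print(f"centroid error: {err:.2f} m (within 0.5 m)")
```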

Additional validations:

  1. Verified detection beyond 7 m (up to 13 m for a standing human)
  2. Performed multi-human tracking and localization
  3. Verified IR human detection up to 13 m