| ID | Functional Requirements | Performance Requirements | Justification/Assumption |
| --- | --- | --- | --- |
| M.F.1 | Receive commands from the user via preset speech primitives or the handheld interface | Word-error rate <= 10%; latency for control commands < 5 s | Robot should understand what the user wants it to do |
| M.F.2 | Perform basic (pre-defined) social engagement with the user | Fallback rate < 20% | User chats with the robot |
| M.F.3 | Localize itself in the environment | Error threshold < 25 cm | Real-time visual data and a precomputed map are available |
| M.F.4 | Plan and navigate through the pre-mapped environment | Plan a global path to the desired location within 2 minutes; navigate at a speed of 0.4 m/s | Accounts for latency in receiving user input, obstacle detection, and path planning; assumes the goal location is 20 m from the robot |
| M.F.5 | Autonomously avoid obstacles in the environment | Avoids 80% of obstacles in range | Assumes obstacles lie within the FoV of the visual sensors |
| M.F.6 | Detect objects for grasping | mAP >= 80% across 10 object categories (e.g., bottle, remote, medicines) | Predefined classes of objects are placed in expected locations under appropriate lighting |
| M.F.7 | Manipulate predefined objects to/from planar surfaces at known locations in the environment | Greater than 70% successful picks and places | Manipulation algorithms are tuned beforehand for our set of objects |
| M.F.9 | Allow approved operators to teleoperate the robot | Communication latency < 5 s | Accounts for connection initialization, transmission delays, and command-interpretation time |
| M.F.10 | Provide the user with robot metrics and a video feed of the robot on the handheld interface | Latency < 2 s; resolution >= 720p | Robot should provide a real-time experience to the user |
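As an illustrative sanity check (not itself a requirement), the figures in rows M.F.1 and M.F.4 imply a worst-case command-to-arrival time budget. The sketch below is an assumption-laden back-of-the-envelope calculation: the constants come straight from the table, while summing them into a single end-to-end bound is a simplification introduced here (it ignores obstacle-avoidance stops and teleoperation).

```python
# Back-of-the-envelope time budget derived from the requirements table.
# Constants are taken from rows M.F.1 and M.F.4; the additive model is an
# illustrative assumption, not part of the specification.

COMMAND_LATENCY_S = 5.0      # M.F.1: control-command latency bound (< 5 s)
PLANNING_TIME_S = 2 * 60.0   # M.F.4: global path planned within 2 minutes
GOAL_DISTANCE_M = 20.0       # M.F.4 assumption: goal is 20 m from the robot
NAV_SPEED_MPS = 0.4          # M.F.4: navigation speed

def worst_case_task_time_s() -> float:
    """Upper bound on command-to-arrival time, ignoring obstacle stops."""
    travel_time = GOAL_DISTANCE_M / NAV_SPEED_MPS  # 50 s of driving
    return COMMAND_LATENCY_S + PLANNING_TIME_S + travel_time

print(worst_case_task_time_s())  # 175.0 s, i.e. under 3 minutes end to end
```

Under these assumptions the user waits at most about three minutes from issuing a command to the robot reaching its goal, which is dominated by the 2-minute planning budget rather than the 0.4 m/s travel speed.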