System Design 5/9/2018

Functional Architecture

 

There are three inputs to the system. First, a well-defined map of the environment in which the rovers will operate is provided as input to the system; obstacles, along with their locations, are represented in this map. Second, a goal waypoint is provided as input, which is the location to which the rovers shall attempt to navigate. Third, the rovers' initial start position in the given environment map is entered as an input, which is later used to update the ‘Rovers Current Position’ module. The environment map, the rovers' current positions, and the goal location are fed into the ‘Path Planner’ block to determine a valid path for each rover to reach its goal waypoint. The ‘Path Planner’ block provides path-planning functionality for the symbiotic rover system depending on the specific scenarios the rovers encounter as they traverse the environment. These are explained as follows:

  1. A ‘Preferred Path’ is planned from the rover's current position to the goal position before the rovers start traversing the environment.
  2. Once the rovers start navigating through the environment (using the previously determined ‘Preferred Path’), the ‘Behavior Planning’ module will modify the plan if either of two scenarios occurs: a rover becomes stuck, or the current plan passes through a potentially ‘Hazardous Zone’. If a stuck rover is observed, the ‘Behavior Planning’ module shall generate an action plan for rescuing the stuck rover. If a rover perceives an approaching ‘Hazardous Zone’, the module shall generate an action plan for the two rovers to couple together and traverse that hazardous zone in a ‘safe’ manner.
  3. If neither of these special scenarios is encountered, the ‘Behavior Planning’ module shall simply output the ‘Preferred Path’ planned previously.
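The scenario handling above can be sketched as a simple selection function. This is a minimal illustration, not the project's implementation; the scenario names and the two placeholder planners (`plan_rescue`, `plan_coupled_traverse`) are assumptions introduced for the sketch.

```python
from enum import Enum, auto

class Scenario(Enum):
    NOMINAL = auto()
    STUCK_ROVER = auto()
    HAZARDOUS_ZONE = auto()

def plan_rescue(stuck_pose, preferred_path):
    # Placeholder: route the free rover to the stuck rover's pose first,
    # then resume the preferred path after the rescue.
    return [stuck_pose] + list(preferred_path)

def plan_coupled_traverse(preferred_path):
    # Placeholder: keep the preferred path, but execute it in coupled mode.
    return list(preferred_path)

def plan_behavior(scenario, preferred_path, stuck_pose=None):
    """Return the trajectory to execute for the observed scenario."""
    if scenario is Scenario.STUCK_ROVER:
        return plan_rescue(stuck_pose, preferred_path)
    if scenario is Scenario.HAZARDOUS_ZONE:
        return plan_coupled_traverse(preferred_path)
    # No special scenario: pass the preferred path through unchanged.
    return preferred_path
```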

The ‘Behavior Planning’ module outputs the planned path, i.e., a trajectory of rover positions (a set of waypoints, or an array of poses) that the rover should follow in order to reach the desired goal point efficiently. This trajectory is fed into the ‘Motion Controller’ module for physical execution of the planned path on the rovers. The rovers then navigate through the environment following the planned path obtained from the ‘Behavior Planning’ module. While navigating, they continuously localize themselves and update their current positions (the ‘Rovers Current Position’ module) in the existing map. Throughout their navigation, ‘Entrapment Detection’ is carried out continuously; detection of the entrapment of any rover marks the observation of the ‘Unforeseen Stuck Rover’ scenario.
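A motion controller that tracks a trajectory of poses can be reduced to a per-step waypoint-following law. The sketch below is an assumption-laden simplification (proportional gains `k_lin` and `k_ang` are illustrative values, not tuned parameters of the actual controller): given the rover's planar pose and the next waypoint, it returns a forward speed and a steering command.

```python
import math

def follow_waypoint(pose, waypoint, k_lin=0.5, k_ang=1.5):
    """One step of a minimal closed-loop controller: given the rover pose
    (x, y, heading) and the next waypoint (x, y), return (speed, steering)."""
    x, y, th = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    dist = math.hypot(dx, dy)
    heading_err = math.atan2(dy, dx) - th
    # Wrap the heading error into [-pi, pi] so the rover turns the short way.
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
    return k_lin * dist, k_ang * heading_err

def reached(pose, waypoint, tol=0.1):
    """True once the rover is within `tol` of the waypoint, so the controller
    can advance to the next pose in the trajectory."""
    return math.hypot(waypoint[0] - pose[0], waypoint[1] - pose[1]) < tol
```

In use, the controller iterates over the trajectory from the ‘Behavior Planning’ module, calling `follow_waypoint` each cycle and advancing the target index whenever `reached` is true.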

As the rovers traverse the environment, an on-board camera provides a live video feed, which forms part of the output of the Functional Architecture. In addition, because the ‘Motion Controller’ module enables the rovers to follow the planned path, the other outputs of the Functional Architecture are the rovers safely navigating through the hazardous zones and performing rescue operations as the situation demands.

 

Cyberphysical Architecture

 

At a high level, the system is divided into two parts: mobile rovers which execute motion commands and collect information about their environment, and a stationary base station which communicates with the rovers wirelessly and performs data synthesis and any computationally complex calculations.

Our base station laptop will load our stored map of the test environment, and is responsible for hosting our roscore process. After loading the map and receiving initial pose data from the rovers, the base station will generate a path through the environment. The base station also analyzes rover telemetry with a Bayesian classifier to determine if one of the rovers is stuck and, if so, adjusts the free rover's path to reach the stuck rover. The planned trajectory is sent to each rover via Wi-Fi, where an on-board computer uses it, together with the rover's own telemetry data, to perform rudimentary closed-loop position control. Both the base station and the rovers have control over the towing winch and claw. Towing behaviors are controlled from the base station, but to avoid potential latency issues the claws will close automatically if they detect that something has entered their grabbing envelope.
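The Bayesian stuck/free decision can be illustrated with a one-feature Gaussian classifier. Everything numeric below is an assumption for the sketch: the feature (ratio of measured speed to commanded speed), the per-class means and standard deviations, and the priors are illustrative, not the trained values used on the base station.

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian likelihood of observing x under N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Illustrative per-class models (mean, std) of measured/commanded speed ratio.
MODELS = {
    "stuck": (0.05, 0.1),  # a stuck rover barely moves despite being commanded
    "free":  (0.95, 0.1),  # a free rover tracks the commanded speed
}
PRIORS = {"stuck": 0.1, "free": 0.9}

def p_stuck(speed_ratio):
    """Posterior probability that the rover is stuck, by Bayes' rule."""
    joint = {c: PRIORS[c] * gaussian(speed_ratio, *MODELS[c]) for c in MODELS}
    return joint["stuck"] / sum(joint.values())
```

A threshold on `p_stuck` (e.g. 0.5) would then trigger the stuck-rover behavior plan.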

The rovers carry the following sensors: wheel encoders, RTK GPS, an IMU, an RGB camera, and an HTC Vive VR system lighthouse with associated photodiodes. The primary purpose of every sensor except the camera is localization, and all but the camera are fused through an EKF. The rovers use a double Ackermann drive system for steering, and both the motor attached to the rack-and-pinion and the motor attached to the transmission are controlled by a Roboclaw motor driver, which performs its own PID control on the drive motor velocity. On-board processing is performed by an ODROID XU4 single-board computer, and control of the towing winch and claw servos is achieved over a serial connection to an Arduino UNO.
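The fusion idea behind the EKF can be shown with a deliberately simplified one-dimensional Kalman filter: wheel-odometry velocity drives the prediction, and each absolute position fix (RTK GPS or Vive) corrects the estimate. The real filter is an EKF over the rover's full planar pose; the noise values `q` and `r` here are placeholder assumptions.

```python
def predict(x, p, v, dt, q=0.05):
    """Propagate the position estimate x (with variance p) forward by dt
    using the velocity v measured from wheel odometry; q is process noise."""
    return x + v * dt, p + q

def update(x, p, z, r=0.02):
    """Fuse an absolute position fix z (measurement variance r) into the
    estimate, shrinking the variance in proportion to the Kalman gain."""
    k = p / (p + r)                 # Kalman gain
    return x + k * (z - x), (1 - k) * p
```

A localization loop would alternate `predict` at the odometry rate with `update` whenever a GPS or Vive fix arrives.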