Spring Subsystems

Localization

The localization subsystem determines the pose of each agent given a map of the environment.
Inputs:
- Initial estimated pose
- Occupancy grid map of the environment
- Wheel encoder ticks
- Planar LIDAR range and bearing measurements

Output:
- Robot pose
The system must be able to determine the pose of every agent in the map. Knowing the agents’ poses is crucial for determining victim pose, task allocation, swarm control, and navigation. To accomplish this, the sensors on the agents are used; the user must also provide a map of the environment, as well as the initial positions of the agents in the map. First, the wheel encoders are used to compute odometry. This initial estimate is then refined by laser scan matching, which aligns laser scans from two different time frames to produce a better estimate of the change in pose. The improved estimate is used to place the current laser scan in the map frame, where it is compared against the provided map, and loop closure is performed by updating the agent pose. This process runs independently for each agent, with all agents using the same user-provided map.
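As a concrete illustration of the first stage, the sketch below shows how a differential-drive pose can be dead-reckoned from encoder ticks. All constants are placeholder values rather than the Khepera’s actual parameters, and the sketch only captures the math, not the deployed implementation.

    import math

    # Illustrative dead-reckoning from differential-drive encoder ticks.
    # All constants below are placeholders, not the Khepera's real values.
    TICKS_PER_REV = 1000.0   # encoder ticks per wheel revolution (assumed)
    WHEEL_RADIUS = 0.021     # wheel radius in meters (assumed)
    WHEEL_BASE = 0.105       # distance between the wheels in meters (assumed)

    def integrate_encoders(x, y, theta, left_ticks, right_ticks):
        """Propagate pose (x, y, theta) given encoder tick deltas."""
        d_left = 2.0 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
        d_right = 2.0 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
        ds = (d_left + d_right) / 2.0             # distance traveled by the center
        dtheta = (d_right - d_left) / WHEEL_BASE  # change in heading
        # Midpoint integration of the unicycle model.
        x += ds * math.cos(theta + dtheta / 2.0)
        y += ds * math.sin(theta + dtheta / 2.0)
        return x, y, theta + dtheta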

Accuracy testing was done in the VICON area in Professor Sycara’s lab, using the VICON motion capture system as ground truth. An artificial environment was set up and mapped, then provided to the localization system. Testing revealed that the localization system was able to keep agent position error below 6 cm RMS, which was sufficient for our application.

Once localization was proven to work in an artificial and controlled environment, we moved to testing in the real world. Testing near TechSpark in Scott Hall revealed the need to tune the localization parameters. We found multi-robot localization to be robust even in the presence of dynamic obstacles or obstacles that were not included in the map. We also found that we could make the agents move very quickly and aggressively without delocalization occurring, demonstrating that localization is both accurate and robust enough for our application.

Task Generation

The task generation subsystem generates interest points, or tasks, for the robots to explore given an input map.
Input:
- Occupancy grid map

Output:
- Set of (x, y) nodes to be visited
The subsystem is modeled as a maximization problem which aims to maximize the visibility of the room with the fewest points possible. Its input is an occupancy grid map with free space represented as 1 and obstacles represented as 0, and its output is a set of (x, y) nodes to visit.
For this subsystem, various methods were analyzed to compute the required points for search. After trying out different types of algorithms, we settled on a frontier-based search algorithm, namely Voronoi search. The base algorithm was provided to our team by the authors of “Improving Autonomous Exploration Using Reduced Approximated Generalized Voronoi Graphs”. This subsystem acts as a service and is invoked by the Task Allocator. As mentioned earlier, it takes an occupancy grid map as input and outputs (x, y) nodes to visit. The stack has been modified to control the number of points to explore: as few or as many as needed.
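To make the objective concrete, the toy sketch below greedily picks the free cell whose sensing disk covers the most uncovered free space until everything is covered. It ignores occlusion and is not the Voronoi-graph method the subsystem actually uses; it only illustrates the “maximum visibility with fewest points” objective.

    import numpy as np

    # Toy greedy coverage: choose the fewest points whose sensing disks
    # cover all free space. Free space is 1 and obstacles are 0, matching
    # our OGM convention. Ignores occlusion; illustrative only.
    def generate_tasks(grid, sensor_range=5):
        free = [tuple(c) for c in np.argwhere(grid == 1)]
        uncovered = set(free)
        tasks = []
        while uncovered:
            best, best_cov = None, set()
            for (r, c) in free:
                cov = {p for p in uncovered
                       if (p[0] - r) ** 2 + (p[1] - c) ** 2 <= sensor_range ** 2}
                if len(cov) > len(best_cov):
                    best, best_cov = (r, c), cov
            tasks.append(best)
            uncovered -= best_cov
        return tasks  # list of (row, col) nodes to visit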

Task Allocation

The task allocation subsystem efficiently assigns tasks to each robot such that all tasks are completed in minimal time.
Inputs:
- Occupancy grid map
- n robots
- m tasks
- Robot initial positions

Outputs:
- Next assigned task to robot i
We have implemented two different task allocation algorithms: greedy assignment and Multiple Traveling Salesman Problem (mTSP) assignment. While the greedy task allocator simply assigns each robot the nearest unvisited task, the mTSP task allocator formulates task allocation as an optimization problem. It uses Constraint Programming (CP) to find a solution that satisfies all constraints of the problem (every task is visited exactly once, subtour elimination) and minimizes the maximum path cost. The path costs are calculated as the A* path length between each pair of tasks.
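For intuition about the min-max objective, the brute-force sketch below enumerates every task assignment and visiting order for a tiny instance and keeps the plan with the smallest maximum route cost. It is exponential and purely illustrative; the actual subsystem solves the same objective with CP, and in the real system cost[a][b] would hold the A* path length between nodes a and b.

    import itertools

    # Brute-force illustration of the min-max mTSP objective (exponential;
    # tiny instances only). 'cost' is a nested dict of pairwise path
    # lengths; 'start' maps each robot index to its start node.
    def minmax_mtsp(num_robots, tasks, start, cost):
        best_plan, best_obj = None, float("inf")
        for assign in itertools.product(range(num_robots), repeat=len(tasks)):
            routes = [[t for t, r in zip(tasks, assign) if r == i]
                      for i in range(num_robots)]
            obj = 0.0
            for i, route in enumerate(routes):
                # Best visiting order for this robot's share of the tasks.
                route_cost = min(
                    sum(cost[a][b] for a, b in zip((start[i],) + p, p))
                    for p in itertools.permutations(route))
                obj = max(obj, route_cost)
            if obj < best_obj:
                best_obj, best_plan = obj, routes
        return best_plan, best_obj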

By testing the task allocation algorithms in simulation, we found that the mTSP algorithm produced solutions that were on average 29% better than the greedy algorithm, exceeding our mandatory performance requirement of a 20% improvement.

The task allocation subsystem is also able to dynamically reallocate tasks. For example, if an agent runs out of battery or crashes during the search, the system takes that agent’s unfinished tasks and reallocates them to the remaining agents so that all tasks are still completed.
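One simple way to realize this, sketched below with hypothetical data structures, is to return all pending tasks (including the failed agent’s) to the pool and re-run the allocator over the surviving agents; whether the real system re-plans every task or only the orphaned ones is a detail this sketch glosses over.

    # Hypothetical reallocation hook: on agent failure, pool all pending
    # tasks (including the failed agent's) and re-run the allocator.
    def reallocate(assignments, failed_agent, allocate):
        """assignments: dict mapping agent -> list of pending tasks;
        allocate: callable (agents, tasks) -> new assignments."""
        orphaned = assignments.pop(failed_agent, [])
        pool = orphaned + [t for ts in assignments.values() for t in ts]
        return allocate(list(assignments.keys()), pool)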

Navigation

The navigation subsystem is responsible for generating paths as well as velocity commands for all the agents, such that the agents can navigate to their assigned tasks while avoiding collisions with each other and traverse the environment in the shortest time possible.
Inputs:
- Occupancy grid map of the environment
- Initial locations of the agents
- Goal locations of the agents

Outputs:
- Velocity commands
As shown in the figure below, the navigation stack takes input from the FMS regarding the current agents in the fleet, the initial and goal positions of the agents from the task allocator, and the map of the environment, and first computes the paths that the agents need to follow. Once the paths for all the agents are computed, the mission executive sends them as goals to the controllers of the individual agents.
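For intuition, the sketch below is a plain single-robot A* on the occupancy grid; the deployed planner is a prioritized multi-A* over a 3D (space-time) graph, described next, which additionally handles inter-agent collisions.

    import heapq

    # Single-robot A* on a 4-connected occupancy grid (free cells are 1).
    # Illustrative only: the deployed planner is a prioritized multi-A*
    # over a space-time graph that also avoids inter-agent collisions.
    def astar(grid, start, goal):
        def h(p):  # Manhattan distance heuristic
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        frontier = [(h(start), 0, start, None)]
        parents, best_g = {}, {start: 0}
        while frontier:
            _, g, node, parent = heapq.heappop(frontier)
            if node in parents:
                continue              # already expanded at a lower cost
            parents[node] = parent
            if node == goal:          # reconstruct the path back to start
                path = []
                while node is not None:
                    path.append(node)
                    node = parents[node]
                return path[::-1]
            r, c = node
            for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nbr[0] < len(grid) and 0 <= nbr[1] < len(grid[0])
                        and grid[nbr[0]][nbr[1]] == 1
                        and g + 1 < best_g.get(nbr, float("inf"))):
                    best_g[nbr] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nbr), g + 1, nbr, node))
        return None  # goal unreachable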

The design of the navigation stack is outlined in Fig. 11. The prioritized multi-A* planner plans the trajectories for each of the agents on top of the 3D graph. These paths are sent to the pure pursuit controller, which computes linear and angular velocities based on the error between the current position of the agent and the desired position. These linear and angular velocities are sent to the low-level controller of the Khepera, which converts them to wheel speeds.
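The sketch below shows a minimal pure-pursuit step for a differential-drive base, with illustrative parameters rather than our tuned values: it computes the curvature needed to reach a lookahead point on the path and converts it to linear and angular velocity commands.

    import math

    # Minimal pure-pursuit step (illustrative parameters, not our tuned
    # values): steer toward a lookahead point on the planned path.
    def pure_pursuit(x, y, theta, look_x, look_y, v_max=0.2):
        dx, dy = look_x - x, look_y - y
        dist = math.hypot(dx, dy)               # distance to lookahead point
        if dist < 1e-6:
            return 0.0, 0.0                     # already at the point
        # Heading error to the lookahead point, wrapped to [-pi, pi].
        alpha = math.atan2(dy, dx) - theta
        alpha = math.atan2(math.sin(alpha), math.cos(alpha))
        kappa = 2.0 * math.sin(alpha) / dist    # pure-pursuit curvature
        v = min(v_max, dist)                    # slow down near the point
        return v, v * kappa                     # linear, angular velocity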

Fleet Management

The fleet management system (FMS) is the bridge between the server and all the agents in the field. As depicted in the diagram, the green functions are those already implemented in the spring; the red ones are scheduled for the fall semester.

At a high level, the FMS accepts a config file from the operators which contains information on all the robots in the fleet, then uses it to set up communication channels for transmitting to and receiving from all the robots. During operation it monitors these connections, decodes sensor data for the ROS network, logs this data in rotating log files, and informs all the other subsystems of any changes in the fleet.
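As one example of the logging piece, rotating log files can be built directly on Python’s standard library, as sketched below; the file name, size cap, backup count, and record fields are illustrative, not the FMS’s actual settings.

    import logging
    from logging.handlers import RotatingFileHandler

    # Illustrative rotating log setup for decoded sensor data. File name,
    # size cap, and backup count are placeholders, not the FMS's settings.
    logger = logging.getLogger("fms")
    handler = RotatingFileHandler("fms_sensor_data.log",
                                  maxBytes=1_000_000, backupCount=5)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    # Example record for one decoded agent message (hypothetical fields).
    logger.info("agent %s: pose=(%.2f, %.2f) battery=%.0f%%",
                "khepera_0", 1.20, 0.40, 87)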

Simulation

The RoboSAR system also includes a multi-robot simulator based on the ROS Stage simulator, which we can use for testing and debugging our algorithms before transferring them to real hardware. This saved us a lot of time in the spring semester and will continue to prove instrumental in our plans for the fall semester!