Below are the SVE goals and process I am envisioning:
1, Manually/automatically drive a rover, R1, onto an obstacle and get it stuck (high-centered).
– type:
– (1) core goal
– success:
– (1) the rover drives onto the obstacle by itself (without the operator placing it onto the obstacle);
– (2) after driving forward for 5 seconds and backward for 5 seconds, the rover still cannot escape.
– reasons:
– (1) shows that this is a valid and reasonable stuck scenario, rather than one where the rover drops from the sky and gets stuck; (the obstacle shall be smaller than the log)
– (2) restricts our “stuck” definition to “high-centered”, without generalizing it to cases such as slipping in sand, etc. (or shall we generalize the stuck scenarios?)
2, The stuck rover, R1, shall detect entrapment automatically and broadcast an SOS to notify the other rover, R2.
– type:
– (1) core goal
– success:
– (1) after R1 gets stuck, we shall show the status of R1 being “entrapped” on RVIZ (using a marker, perhaps with the corresponding likelihood)
– (2) after R1 gets stuck, we shall show the status of R2 being “SOS received” on RVIZ
– reasons:
– (1) demonstrates that our rovers have the capability of detecting entrapment autonomously and automatically;
– (2) demonstrates that the SOS signal can be broadcast over the rover network and received by the partner rover (a minimal sketch follows this list).
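To make this concrete, here is a minimal sketch of what R1's SOS broadcast could look like in rospy. The topic names (/sos, /r1/status_marker), the frame id, and the message content are all hypothetical placeholders, not settled design:

    #!/usr/bin/env python
    # Minimal SOS broadcast sketch for R1 (topic names are assumptions).
    import rospy
    from std_msgs.msg import String
    from visualization_msgs.msg import Marker

    def broadcast_sos():
        rospy.init_node('r1_sos_broadcaster')
        # Latched so R2 still receives the SOS if it subscribes later.
        sos_pub = rospy.Publisher('/sos', String, queue_size=1, latch=True)
        marker_pub = rospy.Publisher('/r1/status_marker', Marker,
                                     queue_size=1, latch=True)
        sos_pub.publish(String(data='R1:ENTRAPPED'))

        # Text marker so RVIZ shows R1's status as "entrapped".
        m = Marker()
        m.header.frame_id = 'r1/base_link'  # assumed frame name
        m.header.stamp = rospy.Time.now()
        m.type = Marker.TEXT_VIEW_FACING
        m.text = 'ENTRAPPED'
        m.scale.z = 0.2            # text height in meters
        m.color.r = m.color.a = 1.0
        m.pose.position.z = 0.5    # float the text above the rover
        m.pose.orientation.w = 1.0
        marker_pub.publish(m)
        rospy.spin()

    if __name__ == '__main__':
        broadcast_sos()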
3, A human operator runs a ROS node/service to signal the start of the autonomous rescue procedure.
– type:
– (1) necessary operation (core goal)
– success:
– (1) R2 status becomes “rescuing R1” on RVIZ
– reasons:
– (1) we need to drive R1 forward and backward to show that it is really stuck, so we must make sure the panel is aware of this before the rescue procedure starts (a minimal trigger sketch follows below).
(after step 3, no human operator intervention)
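For the operator trigger, a minimal sketch using the standard std_srvs/Trigger service could look like the following; the service name /start_rescue and the status handling are assumptions:

    #!/usr/bin/env python
    # Rescue-start trigger sketch. The operator would call it with
    #   rosservice call /start_rescue
    # once the panel agrees that R1 is really stuck.
    import rospy
    from std_srvs.srv import Trigger, TriggerResponse

    def handle_start_rescue(req):
        # Here R2's state machine would switch to "rescuing R1"
        # (state handling elided in this sketch).
        rospy.loginfo('R2 status -> "rescuing R1"')
        return TriggerResponse(success=True, message='rescuing R1')

    if __name__ == '__main__':
        rospy.init_node('rescue_trigger')
        rospy.Service('/start_rescue', Trigger, handle_start_rescue)
        rospy.spin()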
4, Within a 5m x 5m vicinity with at most 2 avoidable obstacles, R2 shall plan a path, autonomously approach R1, and stop at an acceptable pre-tow pose.
– type:
– (1) core goal
– success:
– (1) R2 shall plan a path, and display it on RVIZ;
– (2) the planned path shall end at a pose such that the male docking mechanism of R2 aligns with the female docking mechanism of R1;
– (3) R2 shall follow the planned path and stop at a pose with error such that err_angle < 6 degrees and err_|p| < 10cm (a sketch of this tolerance check follows this list);
– (4) R2 shall avoid all obstacles;
– (5) the estimated error of R2's end pose shall be displayed on RVIZ;
– (6) the estimated path of R2 shall be displayed on RVIZ.
– reasons:
– (1) demonstrates that the planner can come up with a path that avoids all obstacles and ends up at a valid pre-tow pose;
– (2) demonstrates that the rover motion controller can accurately execute the path plan within acceptable error (under closed-loop control);
– (3) demonstrates that the rescuer rover, R2, starting from a random pose, can approach the stuck rover, R1; (a generalization of the original rover poses)
– (4) restricts the rescue mechanism to towing, without generalizing it to alternatives such as nudging or hitting.
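The tolerance check in success criterion (3) is simple enough to pin down now. Below is a sketch, assuming poses come as (x, y, yaw) tuples in a shared world frame (the pose representation and function name are assumptions):

    # Pre-tow pose tolerance check (pose representation is assumed).
    import math

    ERR_ANGLE_MAX = math.radians(6.0)  # err_angle < 6 degrees
    ERR_POS_MAX = 0.10                 # err_|p| < 10 cm

    def pre_tow_pose_ok(actual, target):
        """actual, target: (x, y, yaw) of R2 vs. the desired pre-tow pose."""
        err_p = math.hypot(actual[0] - target[0], actual[1] - target[1])
        # Wrap the yaw difference into [-pi, pi] before comparing.
        err_angle = (actual[2] - target[2] + math.pi) % (2 * math.pi) - math.pi
        return err_p < ERR_POS_MAX and abs(err_angle) < ERR_ANGLE_MAX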
5, The rescuer rover R2 shall dock to R1 autonomously.
– type:
– (1) core goal
– potential upgrade:
– (1) the docking mechanisms are not at the same height, which may require using an actuated manipulator with computer vision to perform the alignment and docking.
– success:
– (1) the docking mechanism shall reach its post-docking status (e.g. if we use a claw and a ring, the claw shall close).
– reasons:
– (1) demonstrates that the rescuer rover, R2, can autonomously dock to the stuck rover, R1.
6, The rescuer rover R2 shall (release its winch, go to a safe location and) tow, while the stuck rover R1 collaborates (by driving in the same direction).
– type:
– (1) core goal
– success:
– (1) with the help of R2, the stuck rover R1 shall escape the stuck situation and move to a safe location;
– reasons:
– (1) demonstrates that the stuck rover R1 can escape with external help from R2;
– (2) restricts the towing plan to towing along the direction R1 was originally heading, without generalizing it to, e.g., towing at an angle (a sketch of the coordinated tow follows this list).
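A minimal sketch of the coordinated tow, assuming hypothetical command topics /r1/cmd_vel and /r2/cmd_vel and that both rovers simply drive along R1's original heading at a fixed speed (speed, rate, and duration are placeholders, not tested values):

    #!/usr/bin/env python
    # Coordinated tow sketch: R2 tows while R1 collaborates by driving
    # in the same direction (topic names and parameters are assumptions).
    import rospy
    from geometry_msgs.msg import Twist

    def coordinated_tow(speed=0.1, duration=10.0):
        r1_pub = rospy.Publisher('/r1/cmd_vel', Twist, queue_size=1)
        r2_pub = rospy.Publisher('/r2/cmd_vel', Twist, queue_size=1)
        cmd = Twist()
        cmd.linear.x = speed  # both drive forward along the tow direction
        rate = rospy.Rate(10)
        end = rospy.Time.now() + rospy.Duration(duration)
        while not rospy.is_shutdown() and rospy.Time.now() < end:
            r1_pub.publish(cmd)
            r2_pub.publish(cmd)
            rate.sleep()

    if __name__ == '__main__':
        rospy.init_node('coordinated_tow')
        coordinated_tow()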
7, Both rovers shall release the docking mechanisms, reset them to their original status, and drive apart by at least 1m.
– type:
– (1) core goal
– success:
– (1) the docking mechanisms are released and reset to the original status;
– (2) the two rovers drive apart by at least 1m.
– reasons:
– (1) demonstrates a closed-loop use case (all devices are reset to the pre-stuck status, so the rescue procedure is repeatable);
– (2) demonstrates that the rovers can resume the original mission, by showing that they can drive apart by at least 1m.
8, Make the scenario more “real”:
– substeps:
– (1) before [step 1], set/plan a path for R1 and R2 outside the Gates highbay, so that they drive for more than one minute;
– (2) R1 comes back to the highbay and performs [step 1], while R2 continues what it was doing;
– (3) after R2 receives the SOS signal, it waits for [step 3].
– type:
– (1) stretch goal (priority: 2nd)
– reasons:
– (1) shows that the rovers were on some (collaborative) mission before R1 got stuck.
9, Visual odometry in the loop (for EKF and entrapment detection).
– type:
– (1) stretch goal (priority: 1st)?
– (2) this seems to be a core goal rather than a stretch goal, because if R1 only has a Vive lighthouse, it cannot detect entrapment without an extra reference for its velocity.
– success:
– (1) shows that the entrapment detection does not use Vive (it seems Vive cannot actually be used in almost all realistic situations);
– (2) shows that the visual odometry and wheel odometry are updating the EKF without Vive.
– reasons:
– (1) more realistic for planetary missions;
– (2) it seems that we have to have this if we want to do entrapment detection for R1 (a minimal detection sketch follows this list).
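For the detection itself, the high-centered signature is wheels spinning while the body does not move. Below is a minimal sketch, assuming hypothetical topics /r1/wheel_odom and /r1/odometry/filtered (the EKF output fused from visual + wheel odometry, no Vive); the thresholds are placeholders, and a real detector would also require the condition to persist over a time window:

    #!/usr/bin/env python
    # Entrapment detection sketch: flag R1 as entrapped when wheel
    # odometry reports motion but the EKF says the body is stationary.
    # Topic names and thresholds are assumptions.
    import rospy
    from nav_msgs.msg import Odometry
    from std_msgs.msg import Bool

    WHEEL_SPEED_MIN = 0.05  # m/s: wheels clearly spinning
    BODY_SPEED_MAX = 0.01   # m/s: body essentially stationary

    class EntrapmentDetector(object):
        def __init__(self):
            self.wheel_speed = 0.0
            self.pub = rospy.Publisher('/r1/entrapped', Bool, queue_size=1)
            rospy.Subscriber('/r1/wheel_odom', Odometry, self.on_wheel)
            rospy.Subscriber('/r1/odometry/filtered', Odometry, self.on_ekf)

        def on_wheel(self, msg):
            self.wheel_speed = abs(msg.twist.twist.linear.x)

        def on_ekf(self, msg):
            body_speed = abs(msg.twist.twist.linear.x)
            stuck = (self.wheel_speed > WHEEL_SPEED_MIN
                     and body_speed < BODY_SPEED_MAX)
            self.pub.publish(Bool(data=stuck))

    if __name__ == '__main__':
        rospy.init_node('entrapment_detector')
        EntrapmentDetector()
        rospy.spin()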
10, Points of interest discovery: drive the rover outside (e.g. in Lafarge) and collect the video stream while it is driving. (not necessarily with the actual rovers; an iPhone + a cart would be good)
– type:
– (1) stretch goal (priority: 3rd); this is actually a problem NASA is trying to solve (a genuinely valuable problem)
– success:
– (1) with unsupervised learning, shall classify “different” scenes (e.g. mountains, sand, rocks, …), above 10% accuracy bonus?
– (2) with active learning / semi-supervised learning, shall classify “different” scenes, or pick “interesting” scenes, above 15% accuracy bonus?
– reasons:
– (1) shows that during a mission, the rovers do not throw away potentially useful information along the way to their destinations;
– (2) shows that we do not need a prior knowledge base for this bonus task (a minimal clustering sketch follows this list).
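As a starting point for the unsupervised variant, here is a sketch that clusters frames by coarse color histograms with k-means. The feature choice is a placeholder (CNN embeddings or texture descriptors would likely replace it), and frame extraction from the video is assumed to be done already:

    # Unsupervised scene clustering sketch (features are a placeholder).
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def frame_feature(path, bins=8):
        img = cv2.imread(path)
        # Coarse 3D color histogram as a cheap, label-free descriptor.
        hist = cv2.calcHist([img], [0, 1, 2], None,
                            [bins, bins, bins], [0, 256] * 3)
        return cv2.normalize(hist, hist).flatten()

    def cluster_scenes(frame_paths, n_scenes=4):
        feats = np.array([frame_feature(p) for p in frame_paths])
        return KMeans(n_clusters=n_scenes, n_init=10).fit_predict(feats)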
11, Taking pictures of each other.
– type:
– (1) stretch goal (priority: 4th)
– success:
– (1) send back some pictures, with 60%+ of them having the entire target inside the frame.
– reason:
– (1) demonstrates an advantage of having a multi-agent robotic system on a planetary exploration mission.
Why restrict the scenarios instead of generalizing them:
– (1) we can first pass all the above test cases robustly, and then come up with generalizations as “surprises” during the SVE, if we have extra time;
– (2) demonstrating a complete and fully autonomous use case (like the above) is more appealing than diving deeper into one step;
– (3) there ARE necessary generalizations in the above test cases, such as initial poses and high-centering scenarios (R1 may get high-centered in different ways in the above case).
The core goals above correspond to the below story (use case):
(a) two rovers collaborate together in a planetary exploration mission;
(b) one rover, for example the rover AK1, is entrapped during the mission; the entrapped rover is not capable of extricating itself, and the entrapment is detected autonomously;
(c) the entrapped rover, AK1, broadcasts an SOS signal across the rover network to request rescue;
(d) the other rover, for example the rover AK2, receives the SOS signal, so it suspends its current task and becomes the rescuer rover;
(e) the rescuer rover, AK2, approaches the entrapped rover, AK1, in an autonomous manner;
(f) after getting close enough to the entrapped rover, AK1, the rescuer rover, AK2, launches the autonomous rescue procedure to extricate the entrapped rover, AK1;
(g) after the entrapped rover, AK1, is extricated, the two rovers, AK1 and AK2, resume their original tasks.
I propose that:
– (1) we shall discuss the SVE goals before the beginning of the next semester, so that we have solid and concrete goals to strive for and can better plan our schedule;
– (2) after discussion, we shall lock down our “maximum” SVE core goals before the next semester begins;
– (3) after locking down the goals, we will only remove core goals, and never add any;
– (4) if we have extra time, we can add new goals, but make them stretch goals, and present them as “surprises” without putting them down on the test sheet. (I am open to ambitious goals, but I suggest we have the “MVP” before implementing them, so that we keep a “robust” pace towards a functional and robust system.)
– (5) we do need to more or less “lock down” something for the CDR, because the CDR generally means we shall not modify the design of our system after that.
What do you think about the SVE goals?
—
David (Dicong Qiu), MRSD Student
Robotics Institute, Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213