Feedback 2017-10-21: Grades and Comments for Task 5 (CoDR)

Dear Team I: Moon Wreckers,

Below are grades and comments for Task 5 (CoDR).

Team I:

Score: 18.23/20

Comments:

Deductions:
1. Prefatory info 0.00
3. Use case/System graphical representation 0.06
4. System-level requirements 0.44
5. Functional architecture 0.08
6. System-level trade studies 0.03
7. Cyberphysical architecture 0.04
8. Subsystem descriptions 0.37
9a. Project management: Work plan 0.38
9b. Project management: System validation experiments 0.23
9d. Project management: Parts list and budget 0.03
Formatting and language penalties 0.10
Total deductions: 1.77

J. Dolan comments:
Summary: Good report. The requirements need some refinement, and some diagrams illustrating your docking and towing subsystems would be helpful.
Proj descr: Well stated, gives a good idea of the project goals.
Use case: The prose description is good. You should refer to Fig. 2 in the text, and also expand the caption to make the sequence fully intelligible to the reader.
Reqts: The basic ideas are here, but greater clarity is needed (see the following comments). MP.0: More info is needed on what “stuck” means, since it could range from very simple to very difficult scenarios. MP.1: What will it mean exactly to successfully coordinate rescue actions? Presumably it stops short of executing those actions, since execution falls into MP.0, but in that case, how is it measured? I think “perceptible to detecting” should be something like “able to perceive”. MP.3: How will the marking be done?
Trade studies: The process represented in Fig. 4 is a worthy idea. Do you have any conclusions from Fig. 5?
Cyberphys arch: The Fig. 8 caption is provisional and needs to be edited. What does “editable from the figure” mean here?
Subsys descr: The descriptions are reasonably good. However, particularly for the docking mechanism/process, it would be quite helpful to have one or more diagrams. Section 7.2: What does the phrase “in essentially” mean in the second paragraph here?
Work plan: Include end-of-month milestones for the entire spring semester. PR1 and PR2 should be on Oct. 19 and Oct. 26, respectively.
Sys val expts: These are well detailed. Do you have accuracy metrics in mind for the SVE localization testing?
Risk mgmt: Good analysis and discussion.

Language: p. 1: “broader scale” –> “broader-scale” (as adjective); “a singular rover” –> “singular rovers”; “to localize” –> “in localizing”; “other,this” –> “other, this”; “cross reference” –> “cross-reference”; p. 2: “considering” –> “Considering”; “cross referencing” –> “cross-referencing”; “high centered” –> “high-centered”; p. 5: “for rover’s” –> “for the rover’s”; “comprises of” –> “comprises” (did you read the language guidelines?); “elicits” –> “causes”; “plan path” –> “plan a path”; “once of” –> “once one of”; p. 6: “Third” –> “The third”; “rescuing other” –> “rescuing the other”; p. 7: “none…are” –> “none…is”; p. 9: “Figure 2” –> “Figure 8”; “course of mission” –> “course of the mission”; “camera, coupled” –> “a camera, coupled”; p. 10: “These sequence” –> “This sequence”; “Goal” –> “The goal”; “on ground” –> “on-ground”; “user defined” –> “user-defined”; “of exploration” –> “of the exploration”; “Once sequence” –> “Once a sequence”; “have been” –> “has been”; “involve” –> “involves”; p. 11: “scenarios; when” –> “scenarios: when”; “is provided” –> “are provided”; “to region” –> “to the region”; “assisting rover’s” –> “assisting the rover’s”; “category” –> “categories”; “sampling based” –> “sampling-based”; “in a” –> “in”; “to search” –> “to a search”; p. 12: “although performs” –> “performs”; “to their” –> “to its”; “do not” –> “does not”; “integrate” –> “integrate the”; p. 13: “of rescued” –> “of the rescued”; “rover move” –> “rover moves”; “so there” –> “there”; “PID based” –> “PID-based”; “based the” –> “based on the”; p. 15: “of a several” –> “of several”; “high centered” –> “high-centered”; p. 16: “high centering” –> “high-centering”; p. 20: “build a good” –> “build good”

Y. Nadaraajan comments:
For MP0 and MP1, what exactly does “80% of the time” mean? For MN1, what does 60% mean? The functional architecture looks good. The trade studies are good but could be elaborated a little further. Good subsection on path planning. The maximum cost seems to exceed the MRSD budget. Do you have a backup plan if that happens?

D. Apostolopoulos comments:
The Mandatory Performance Requirements should be refined into a shorter, more direct form that will make them simpler to validate. For example, MP.1 should clarify what “coordinating rescuing actions” means. Do the robots share information, or do they share responsibilities in rescuing each other? For MP2.2, the time scale may not be relevant: if a robot is moving at 5 cm/s, it might take a long time to get entrapped and possibly a long time to detect the entrapment.
It is good that your requirements include project-related constraints.
The Functional Architecture has the appearance of a simplified Cyberphysical Architecture! As it is, it is hard to follow the functional flow of the system. Are the pink boxes outputs? What captures the engagement of the rovers before towing? The Cyberphysical Architecture should flow from the Functional Architecture and be in close agreement with it.
PM sections are fine; consider the scope of the project and a realistic plan for the Fall and Spring semesters.

J.P. Vega comments:
Clear description and use case. Try to use the storyboard while telling the story.
Requirements: What is a failed coordinated rescuing action? What about a time requirement for the maneuver to succeed? Some desirable performance requirements could move to the mandatory list. What about planning and navigation?
Functional architecture: Photograph processing is missing from the requirements, and communication between the rovers is largely missing.
Trade studies: OK, as long as the studies are realistic and unbiased.
Cyber-physical architecture: OK; a more direct mapping between the functional architecture and the cyber-physical architecture in terms of the spatial layout of parts would help reduce the reader’s cognitive load.
Subsystems: I would have liked a little more detail about photography, localization, towing, and the ground platforms themselves.
WBS: Missing. It is hard for me to know whether you are missing steps, and it is also probably hard for you to update a WBS that does not exist.
Schedule: Looks a bit packed toward the end of the semester, and no information is provided for the Spring semester. How confident are you about your estimates?
Validation experiments: Try to add more context, detail, and structure to your tests. Which step of the process is testing which requirement, and how? Try to integrate your tests for the SVE into one or two tests.
Responsibilities: Is Table 3 the missing WBS? If so, is it complete, and why is it placed there in the document?
Risks: Extensive study of each risk. Keep a running list of your top 10 or more risks and update it once a week. Be realistic about this; it is in no one’s interest to overlook the risks associated with your project.

L. Wan comments:
Team I: Good to have a storyboard, but please redraw it for clarity. Are functional requirements missing? Build time into the schedule for debugging and testing. More detail is needed in testing: what are you testing, and which requirements are you validating?

The MRSD Instructors