Software architecture

20 02 2007

After hashing out all the hardware-software intricacies, we’ve constructed a service diagram (a class diagram, essentially) for all the high-level software. MS Robotics Studio lends itself to an object-oriented design where MSRS services are the objects. Though each service will certainly contain several classes, the “service view” creates an easily comprehensible starting point.


Starting from the simplest end, each sensor service at the bottom represents a sensor (obviously). These services are intended to relieve the rest of the system from the burden of low-level communication. They handle serial communications, data packet encoding/decoding, timing issues, access issues, and any other low-level tasks that might benefit the rest of the system. For example, the cameras we are using this year are not plug-n-play; they must be accessed through the manufacturer’s SDK. The CAMERA service adds an element of acquisition transparency so all the other services may grab camera data without having to deal with the manufacturer’s code.
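To make the idea concrete, here is a rough sketch of that wrapping pattern. It’s Python rather than our actual C#/MSRS code, and `VendorCameraSDK` is a made-up stand-in for the manufacturer’s SDK (which I haven’t named), but the shape is the same: subscribers only ever talk to the service, never to the vendor API.

```python
class VendorCameraSDK:
    """Stand-in for the manufacturer's proprietary camera API."""
    def open_channel(self):
        self._open = True

    def grab_raw(self):
        return b"\x00" * 16  # pretend frame bytes


class CameraService:
    """Wraps the vendor SDK so subscribers never touch it directly."""
    def __init__(self, sdk):
        self._sdk = sdk
        self._sdk.open_channel()   # low-level access handled once, here
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def poll(self):
        # Decoding, timestamping, and access arbitration would all
        # happen here in the real service.
        frame = self._sdk.grab_raw()
        for cb in self._subscribers:
            cb(frame)
```

If the manufacturer ships a new SDK next year, only `CameraService` changes; every subscriber keeps working.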

Each of the sensor services feeds into a cloud called “Sensor Fusion.” In an earlier post, I explained sensor fusion as a way to intelligently combine sensor data into valuable information. The sensor fusion cloud represents the set of services that either extract information from the data of one sensor, or extract information from a combination of data from several sensors. The NavGrid must subscribe to each of these in order to receive the processed information.

NavGrid and Arbiter subscribe to each other so that 1) Arbiter can trigger an event to request specific information from NavGrid, and 2) NavGrid can broadcast general state information to its subscribers. NavGrid’s state information may contain all events on the absolute navigation grid, or perhaps only obstacles and interests in its immediate area– the exact structure has not been decided. As an example of a specific request, Arbiter may ask for the interest closest to SubjuGator’s current position.
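That “closest interest” request might look something like the toy sketch below. This is an illustration only; the real grid’s structure (and whether it stores interests this way at all) is still undecided, and the names here are invented.

```python
import math


class NavGrid:
    """Toy navigation grid: stores mission 'interests' as (x, y) points."""
    def __init__(self):
        self.interests = []

    def add_interest(self, x, y):
        self.interests.append((x, y))

    def closest_interest(self, pos):
        # The kind of specific query the Arbiter could trigger,
        # as opposed to passively receiving broadcast state.
        return min(self.interests,
                   key=lambda p: math.hypot(p[0] - pos[0], p[1] - pos[1]))
```

So if the grid holds interests at (3, 4) and (1, 1), a query from the sub’s position at (0, 0) returns (1, 1).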

Arbiter makes all the important decisions regarding movement, mission planning and control, and any other task that might require human logic if SubjuGator were manually controlled. The blue square representing the Arbiter displays only a stub for the purpose of this high-level abstraction. In actuality, the Arbiter will be composed of several services– I will provide further detail about this in another post.

NavActions subscribes to Arbiter in order to receive navigation commands. Arbiter will issue navigation commands as changes in state. That is, when Arbiter changes the state of SubjuGator’s current course (40 degrees at a speed of 50, for example) to a new course directive (stopped, facing 35 degrees), the service will broadcast the new course of action to its subscribers. Only the subscribers that can handle such a message will accept the broadcast (so, NavGrid will discard this message). NavActions works with the underlying embedded thruster-driver board to move the sub.
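A bare-bones sketch of that broadcast-and-filter pattern, with invented names and in Python rather than our MSRS services: the Arbiter pushes a new course state to every subscriber, and each subscriber decides whether it handles that message type.

```python
from dataclasses import dataclass


@dataclass
class CourseDirective:
    heading_deg: float
    speed: float  # 0 = stopped


class Arbiter:
    def __init__(self):
        self._subs = []

    def subscribe(self, service):
        self._subs.append(service)

    def set_course(self, directive):
        # A course change is broadcast as a state change.
        for s in self._subs:
            s.receive(directive)


class NavActions:
    """Accepts course directives; would drive the thruster board."""
    def __init__(self):
        self.current = None

    def receive(self, msg):
        if isinstance(msg, CourseDirective):
            self.current = msg


class NavGridStub:
    """Discards course directives, as described above."""
    def receive(self, msg):
        pass
```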

It has not yet been decided which services will benefit from the Heartbeat service (hence the barrage of anonymous subscribers to it), but the idea is that a heartbeat message would prove useful as a relative clock for interested services and as a check on system status.
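The heartbeat idea is simple enough to sketch in a few lines. This is an illustrative Python toy, not the eventual service: a monotonic tick that interested services subscribe to.

```python
class Heartbeat:
    """Broadcasts a monotonically increasing tick to subscribers,
    acting as a relative clock. Names here are illustrative."""
    def __init__(self):
        self.tick = 0
        self._subs = []

    def subscribe(self, callback):
        self._subs.append(callback)

    def beat(self):
        # In the real service this would fire on a timer; a missed
        # beat could also flag a system-status problem.
        self.tick += 1
        for cb in self._subs:
            cb(self.tick)
```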

For debugging purposes, the NavGridGUI will be able to subscribe to the NavGrid merely as an observer. This will allow us to see what SubjuGator sees in regards to mission interests (for example, an orange pipeline). Also for debugging purposes, NavActions subscribes (indirectly) to the XBox 360 Controller, which allows the operator to manually control the movement of SubjuGator. The speed/dir mappings service, as you may notice, connects NavActions and XBox 360 Controller. This service is necessary to map values fired off by the controller to actual speed and direction values accepted by the underlying embedded thruster-driver. The speed/dir mappings service could have been merged with either of the services adjacent to it, but past experience tells us that speed and direction mappings (more so speed) will change often to accommodate different environments. Whereas in the test pool we want full throttle to be a safe speed that will not throw SubjuGator into a nearby wall, the TRANSDEC is much larger and full throttle should efficiently drive the sub from one end to the other.
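Here is a minimal sketch of what that mapping service does, assuming a normalized controller axis in [-1, 1] and a thruster command range of [-100, 100]. The profile names and scale factors are invented for illustration; the real values will come out of pool testing.

```python
# Environment profiles: the same full-throttle stick input maps to a
# gentle speed near pool walls, but to real speed at TRANSDEC.
# (Profile names and factors are illustrative, not our actual numbers.)
PROFILES = {
    "test_pool": 0.35,
    "transdec": 1.0,
}


def map_speed(stick_value, profile):
    """Map a controller axis in [-1.0, 1.0] to a thruster command
    in [-100, 100], scaled by the active environment profile."""
    if not -1.0 <= stick_value <= 1.0:
        raise ValueError("controller axis out of range")
    return round(stick_value * 100 * PROFILES[profile])
```

Swapping environments then means changing one profile, not touching the controller service or NavActions on either side of it, which is exactly why the mapping lives in its own service.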


XBox meets SubjuGator

5 02 2007

Every team implements some debugging interface to their sub. You wouldn’t guess it, but there are as many ways to interface with an autonomous vehicle as there are AVs in the world. Some less-developed subs require a dry, close-proximity connection. Others tow a 100+ foot ethernet cable from a wet connection on their sub. Still others build a wireless card into their vehicle. SubjuGator has traditionally used a wireless connection via a router. The router, packed into a 100% waterproof Pelican case, is connected to the sub with a wetplug connector and a length of compatible wire. I know it sounds crazy, but it’s actually very convenient to drag the case behind the sub. The hazards are minimal, and the cord allows SubjuGator to venture underwater at least a few feet (unlike subs with a built-in wireless card, which must stay at the surface to preserve the connection).

In autonomous vehicles, manual control combined with sensor feedback aids debugging more than any software IDE or hardware CAD program. The ability to note what the sensors see while the vehicle does “x” puts the developer in place of the robot’s brain and lets him make the decisions and note their consequences. This invaluable point of view often uncovers situations that an autonomous robot’s current behaviors do not handle. For example, what happens when your ranging sonar detects a wall ahead but accumulated positioning error over time caused your navigation grid to note a mission item just behind the wall? If the arbiter assigned a higher priority to the behavior that moves the sub to the location of the mission point and a lower priority to the ranging sonar… well, you should have thought of that. Situations like these sound contrived, but I assure you they happen in real life. With manual control, a thorough testing of the interactions between behaviors, sensors, and thrusters is not only feasible but practical.
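That sonar-versus-mission-point failure is easy to see in a toy priority arbiter. The behavior names, the priority scheme, and the commands below are all invented for illustration; the point is only that the highest-priority active behavior wins, so mis-ordered priorities drive the sub at the wall.

```python
def arbitrate(behaviors):
    """Return the command of the highest-priority active behavior."""
    active = [b for b in behaviors if b["active"]]
    return max(active, key=lambda b: b["priority"])["command"]


# The pitfall described above: goto-mission-point outranks wall
# avoidance, so the sub keeps driving toward a wall the sonar sees.
behaviors = [
    {"name": "goto_mission_point", "priority": 2, "active": True,
     "command": "forward"},
    {"name": "sonar_wall_avoid", "priority": 1, "active": True,
     "command": "stop"},
]
```

As written, `arbitrate(behaviors)` returns `"forward"` (the bug); swap the two priorities and it returns `"stop"`. Manual control is how you find out which ordering you actually shipped.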

After a recent developmental push, team SubjuGator can now control the sub with a standard XBox 360 controller. The new sub’s hardware is still incomplete, but the old sub uses a subset of the new sub’s sensors, boards, and computers, and so we’re able to develop software before the electrical and mechanical teams finish. Using a smooth stone and some ash from the night’s fire, I put together a crude abstraction of how an XBox 360 controller will communicate with SubjuGator.


After you’re done doting upon this extraordinary graphic, I’ll explain to you that using services (as per MS Robotics Studio) and a PC driver for XBox 360 controllers, I put together a means by which users can turn off SubjuGator’s autonomy and interface with the service that regulates the thrusters. With this capability in place, we can really begin breaking in some of the new sensors and cameras that will adorn the new SubjuGator. Plus, we’ll get a better feel for the accuracy of each sensor, what type of information they can provide, and how reliable they are overall, before we set the beast off into the wild on its own.

Sensor fusion

28 01 2007

    From Wikipedia: “Sensor fusion is the combining of sensory data or data derived from sensory data from disparate sources such that the resulting information is in some sense better than would be possible when these sources were used individually.” Simply put, when you have several sensors that spit data at you, sensor fusion links the data in a meaningful way to produce information valuable to the task at hand. This year, like none before, SubjuGator will utilize the idea of sensor fusion.

Last year, SubjuGator operated around (and it really kills me to mention this to anyone) just one loop with just one thread. Multi-threading, in my opinion, buries most autonomous robots before they have a chance to explore their surroundings. Like a driver on a cellphone with a coffee in hand, threads don’t always work nicely together, and they may not appear to disrupt each other until your autonomous sub rams the bottom of the swimming pool. Threads are also more difficult to debug in a not-quite-built autonomous robot– you can’t test the thruster threads with the camera threads until those items have been added. It’s a poor excuse, but one loop was easier to conceptualize, develop, debug, and test. Drawing information from sensors was a serialized task and efficient sensor fusion was nearly impossible.

With the dawn of Microsoft Robotics Studio, all this has changed. The best thing Microsoft brought to the table with Robotics Studio was a way to gain all the benefits of multithreading without all the hassle. In defining services, protocols for communication between the services, and background processes to host the services, Robotics Studio makes multithreading inherent in a software system. Sensor fusion now occurs as a service that subscribes to two or more other services that distribute sensor data or represent some source of information. The multithreaded, event-driven nature of Robotics Studio eliminates the loop and offers a viable way to interpret and analyze data from multiple sensors.
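The pattern is worth a sketch. The following is a language-agnostic Python toy, not MSRS code: a fusion service subscribes to two sensor services and recomputes its state whenever either one fires an event, with no polling loop anywhere. The sensor names and the fused quantity are invented for illustration.

```python
class SensorService:
    """Minimal publish/subscribe sensor source."""
    def __init__(self):
        self._subs = []

    def subscribe(self, callback):
        self._subs.append(callback)

    def publish(self, value):
        for cb in self._subs:
            cb(value)


class DepthHeadingFusion:
    """Fuses the latest depth and heading readings into one state.
    Updates are event-driven: no loop serializes the two sensors."""
    def __init__(self, depth_svc, heading_svc):
        self.state = {}
        depth_svc.subscribe(lambda v: self._update("depth", v))
        heading_svc.subscribe(lambda v: self._update("heading", v))

    def _update(self, key, value):
        self.state[key] = value
```

Contrast this with last year’s single loop, where each sensor had to wait its turn to be read before any combining could happen.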

Service architecture

25 01 2007

We have compiled a list of core services (of the MS Robotics Studio variety) that will run onboard the sub. As you can imagine, each sensor maps to one service that extracts and packages its raw data. One or more of these services will map to a group of task-specific services that will derive information from the data delivered by the sensor services to which it subscribes. These task-specific services feed polished mission information to a traversability grid. At the top of the hierarchy, a controlling arbiter will make navigational decisions based on the abstracted information in the traversability grid and on which phase of the mission it is executing. The arbiter will relay navigation orders to a steering service dedicated to working the thrusters. And, ideally, like a sail riding the wind, the steering service will guide the sub to its ordered destination.

Other secondary services have also been discussed. One of the most notable has already been implemented– a simple manual control of the sub’s movement via an XBox 360 controller. Also, since logging mission data (images taken by the cameras, decisions made by the arbiter, etc) has always helped us at past competitions, a logging service will be implemented.

Microsoft Robotics Studio (MSRS)

13 01 2007

    Team SubjuGator has adopted MSRS as the communications platform between all hardware components within SubjuGator.  MSRS provides a (theoretically) simple way to access any part of your robot, hardware or software, as a service. So, for example, if SubjuGator uses two webcams, each of those webcams can be accessed as a service. As a service, I can turn one webcam off, have other services subscribe to that service (requesting frames, live feed, a change in frame rate, etc), and access a plethora of diagnostic information.
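To give a feel for the service idea without any MSRS specifics, here’s a toy Python stand-in for that webcam example. The operation names (`power_off`, `set_frame_rate`, `diagnostics`) are invented, not MSRS or SDK calls; the point is just that a service exposes state you can query, change, and observe from outside.

```python
class WebcamService:
    """Toy stand-in for a service-wrapped webcam: its state can be
    queried, changed, and reported as diagnostics. Names are
    illustrative, not real MSRS operations."""
    def __init__(self):
        self.powered = True
        self.frame_rate = 30

    def power_off(self):
        self.powered = False

    def set_frame_rate(self, fps):
        # A subscriber requesting a frame-rate change, per the example.
        self.frame_rate = fps

    def diagnostics(self):
        return {"powered": self.powered, "frame_rate": self.frame_rate}
```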

MSRS presents a slight learning curve, especially to a beginner C# user such as myself. In an attempt to bump brains with some others in similar situations, I created a Google group dedicated to MSRS.