XBox meets SubjuGator

5 02 2007

Every team implements some debugging interface to their sub. You wouldn’t guess it, but there are as many ways to interface an autonomous vehicle as there are AVs in the world. Some less-developed subs require a dry, close-proximity connection. Others tow a 100+ foot Ethernet cable from a wet connection on their sub. Still others build a wireless card into their vehicle. SubjuGator has traditionally used a wireless connection via a router. The router, packed into a 100% waterproof Pelican case, is connected to the sub with a wetplug connector and a length of compatible wire. I know it sounds crazy, but it’s actually very convenient to drag the case behind the sub. The hazards are minimal, and the cord allows SubjuGator to venture underwater at least a few feet (unlike subs with a built-in wireless card, which must stay at the surface to preserve the connection).

In autonomous vehicles, manual control combined with sensor feedback aids debugging more than any software IDE or hardware CAD program. The ability to note what the sensors see while the vehicle does “x” puts the developer in place of the robot’s brain and lets him make the decisions and note their consequences. This invaluable point of view often uncovers situations that an autonomous robot’s current behaviors do not handle. For example, what happens when your ranging sonar detects a wall ahead, but accumulated positioning error has caused your navigation grid to place a mission item just behind that wall? If the arbiter assigned a higher priority to the behavior that moves the sub to the location of the mission point and a lower priority to the ranging sonar… well, you should have thought of that. Situations like these sound contrived, but I assure you they happen in real life. With manual control, thorough testing of the interactions between behaviors, sensors, and thrusters becomes not just feasible, but practical.
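The wall-versus-waypoint conflict above comes down to how the arbiter ranks its behaviors. Here is a minimal sketch of priority-based arbitration; all class and behavior names are hypothetical illustrations, not SubjuGator’s actual code:

```python
# Minimal priority-based behavior arbitration sketch (hypothetical names,
# not SubjuGator's actual implementation).

class Behavior:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

    def propose(self, sensors):
        """Return a desired command, or None if this behavior is inactive."""
        raise NotImplementedError

class AvoidWall(Behavior):
    def propose(self, sensors):
        # Ranging sonar sees a wall closer than 2 m: demand a reverse.
        if sensors.get("range_m", float("inf")) < 2.0:
            return "reverse"
        return None

class GoToWaypoint(Behavior):
    def propose(self, sensors):
        # Always wants to drive toward the (possibly mislocated) mission point.
        return "forward"

def arbitrate(behaviors, sensors):
    # The highest-priority active behavior wins control of the thrusters.
    for b in sorted(behaviors, key=lambda b: b.priority, reverse=True):
        cmd = b.propose(sensors)
        if cmd is not None:
            return b.name, cmd
    return None, "hold"

behaviors = [AvoidWall("avoid_wall", priority=10),
             GoToWaypoint("goto_waypoint", priority=5)]

# Wall detected ahead: avoidance wins only because its priority is higher.
print(arbitrate(behaviors, {"range_m": 1.5}))   # ('avoid_wall', 'reverse')
print(arbitrate(behaviors, {"range_m": 10.0}))  # ('goto_waypoint', 'forward')
```

Swap the two priority numbers and the waypoint behavior drives the sub straight at the wall, which is exactly the failure mode described above.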

After a recent developmental push, team SubjuGator can now control the sub with a standard XBox 360 controller. The new sub’s hardware is still incomplete, but the old sub uses a subset of the new sub’s sensors, boards, and computers, and so we’re able to develop software before the electrical and mechanical teams finish. Using a smooth stone and some ash from the night’s fire, I put together a crude abstraction of how an XBox 360 controller will communicate with SubjuGator.


After you’re done doting upon this extraordinary graphic, I’ll explain that, using services (as per MS Robotics Studio) and a PC driver for XBox 360 controllers, I put together a means by which users can switch off SubjuGator’s autonomy and interface directly with the service that regulates the thrusters. With this capability in place, we can really begin breaking in some of the new sensors and cameras that will adorn the new SubjuGator. Plus, we’ll get a better feel for the accuracy of each sensor, what type of information it can provide, and how reliable it is overall, before we set the beast off into the wild on its own.
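The core of the manual-control path is just a mapping from controller axes to thruster outputs. A sketch of one common mixing scheme, with hypothetical axis layout and scaling (not the actual SubjuGator service interface):

```python
# Sketch of mixing two gamepad stick axes into differential thruster
# commands (hypothetical layout/scaling, not SubjuGator's real code).

def stick_to_thrust(surge, yaw):
    """Mix forward/turn stick axes (each in [-1, 1]) into left/right
    thruster outputs, clamped to [-1, 1]."""
    left = surge + yaw
    right = surge - yaw
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(left), clamp(right)

# Full forward, no turn: both thrusters at max.
print(stick_to_thrust(1.0, 0.0))   # (1.0, 1.0)
# Spin in place to the right: thrusters oppose each other.
print(stick_to_thrust(0.0, 1.0))   # (1.0, -1.0)
# Forward plus hard turn saturates one side.
print(stick_to_thrust(1.0, 1.0))   # (1.0, 0.0)
```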


Sensor fusion

28 01 2007

    From Wikipedia: “Sensor fusion is the combining of sensory data or data derived from sensory data from disparate sources such that the resulting information is in some sense better than would be possible when these sources were used individually.” Simply put, when you have several sensors that spit data at you, sensor fusion links the data in a meaningful way to produce information valuable to the task at hand. This year, like none before, SubjuGator will utilize the idea of sensor fusion.

Last year, SubjuGator operated around (and it really kills me to mention this to anyone) just one loop with just one thread. Multithreading, in my opinion, buries most autonomous robots before they have a chance to explore their surroundings. Like a driver on a cellphone with a coffee in hand, threads don’t always work nicely together, and they may not appear to disrupt each other until your autonomous sub rams the bottom of the swimming pool. Threads are also more difficult to debug in a not-quite-built autonomous robot– you can’t test the thruster threads against the camera threads until those components have been added. It’s a poor excuse, but one loop was easier to conceptualize, develop, debug, and test. Drawing information from the sensors was a serialized task, and efficient sensor fusion was nearly impossible.

With the dawn of Microsoft Robotics Studio, all this has changed. The best thing Microsoft brought to the table with Robotics Studio was a way to gain all the benefits of multithreading without all the hassle. By defining services, protocols for communication between the services, and background processes to host the services, Robotics Studio makes multithreading inherent in a software system. Sensor fusion now occurs as a service that subscribes to two or more other services that distribute sensor data or represent some source of information. The multithreaded, event-driven nature of Robotics Studio eliminates the loop and offers a viable way to interpret and analyze data from multiple sensors.
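The subscribe-and-merge idea can be boiled down to a few lines. This is a deliberately toy, event-driven sketch of a fusion service subscribing to two sensor sources — the structure and names are hypothetical illustrations, not the Robotics Studio API:

```python
# Event-driven fusion sketch: a "service" subscribes to two sensor
# sources and merges each update into one estimate, with no polling
# loop. (Hypothetical structure, not the Robotics Studio API.)

class SensorService:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, reading):
        for cb in self.subscribers:
            cb(reading)

class FusionService:
    """Blends a fast-but-drifting source with a slow-but-stable one."""
    def __init__(self, fast, stable, weight=0.8):
        self.estimate = None
        self.weight = weight
        fast.subscribe(self.on_fast)      # event-driven: no loop needed
        stable.subscribe(self.on_stable)

    def on_fast(self, value):
        if self.estimate is None:
            self.estimate = value
        else:
            # Nudge the estimate toward each fast reading.
            self.estimate = self.weight * self.estimate + (1 - self.weight) * value

    def on_stable(self, value):
        # Trust the stable source outright when it reports.
        self.estimate = value

gyro = SensorService()
dvl = SensorService()
fusion = FusionService(gyro, dvl)

gyro.publish(10.0)
gyro.publish(12.0)   # estimate drifts toward the new reading: 10.4
dvl.publish(11.0)    # stable source overrides: 11.0
print(fusion.estimate)
```

A Kalman filter would replace the naive weighted blend in practice, but the service shape — subscribe, react to events, publish a better estimate — is the point.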

Service architecture

25 01 2007

We have compiled a list of core services (of the MS Robotics Studio variety) that will run onboard the sub. As you can imagine, each sensor maps to one service that extracts and packages its raw data. One or more of these services will map to a group of task-specific services that will derive information from the data delivered by the sensor services to which it subscribes. These task-specific services feed polished mission information to a traversability grid. At the top of the hierarchy, a controlling arbiter will make navigational decisions based on the abstracted information in the traversability grid and which phase of the mission it is executing. The arbiter will relay navigation orders to a steering service dedicated to working the thrusters. And, ideally, like a sail riding the wind, the steering service will guide the sub to its ordered destination.
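The whole chain — sensors writing into a grid, the arbiter reading it, the steering service acting on it — can be sketched in miniature. Every name and data shape below is a hypothetical stand-in for the real services:

```python
# Rough sketch of the sensor -> grid -> arbiter -> steering chain.
# (All names and data shapes are hypothetical stand-ins.)

grid = {}  # (x, y) cell -> "obstacle" | "mission"

def sensor_report(cell, label):
    # Task-specific services deposit polished information in the grid.
    grid[cell] = label

def arbiter_next_goal(position):
    # The arbiter picks the nearest known mission cell.
    missions = [c for c, v in grid.items() if v == "mission"]
    if not missions:
        return None
    dist = lambda c: abs(c[0] - position[0]) + abs(c[1] - position[1])
    return min(missions, key=dist)

def steering_order(position, goal):
    # The steering service reduces a goal cell to a coarse heading.
    dx, dy = goal[0] - position[0], goal[1] - position[1]
    if abs(dx) >= abs(dy):
        return "east" if dx > 0 else "west"
    return "north" if dy > 0 else "south"

sensor_report((3, 0), "mission")
sensor_report((1, 0), "obstacle")
goal = arbiter_next_goal((0, 0))
print(goal, steering_order((0, 0), goal))  # (3, 0) east
```

The real arbiter would of course plan around the obstacle cell rather than ignore it; the sketch only shows how the layers hand information upward and orders downward.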

Other secondary services have also been discussed. One of the most notable has already been implemented– a simple manual control of the sub’s movement via an XBox 360 controller. Also, since logging mission data (images taken by the cameras, decisions made by the arbiter, etc) has always helped us at past competitions, a logging service will be implemented.

A potential design (but not really)

21 01 2007

Here are four different angles of the original CAD mock-up for SubjuGator:

right side


front view from top left

left side, nose facing down


The original idea called for two tubes, connected by a series of wet plugs, mounted to a metal frame. The DVL would hang below a strafing thruster. When we designed this version, our goal was a compartmentalized submersible with a diverse range of motion. One compartment would house all the electronics while the other carried the batteries (which add significant amounts of weight to smaller subs). The metal frame, somewhat incomplete in these drawings, would support the sub while on land and act as a mounting point for the various sensors we foresaw ourselves possibly using in the future.

Needless to say, this design exhibited some unignorable problems. Firstly, it’s a bad idea to mount thrusters to removable endcaps. No one wants to disconnect and reconnect the thruster every time they require access to the sub’s innards. Next, two tubes spell added complexity, more circuitry, and a mess of external cables. Each motor is attached via a wet plug/cable combo, and each tube must be wired to the next in order to transfer power and/or data. Apart from the mess of external lines to each thruster and submersible sensor, the conversion plugs required inside each tube waste valuable space. Furthermore, we based the design on the assumption that the DVL unit (sans power supply) was fully submersible and should hang from the center of the turning radius (so turning in place yields no acceleration). It turns out that the DVL must rest partly in a dry hull.

So now, the trend has returned to a healthy, one-tube design. Though, this year, SubjuGator will be longer, stronger, and more capable.

Sensor overload

19 01 2007

This year, SubjuGator, in addition to a complete software overhaul, will undergo a body transplant. The traditional Lexan-crafted, cylindrical shell has been traded in for a more cost-effective and heat-dispersive aluminum shell. Lexan’s primary advantage is transparency– you can manually check for leaks, place dry cameras internally, and display onboard debug information easily from within the dry hull. The material also offers a good amount of durability (proportional to wall thickness) compared to other transparent materials. Aluminum, though it lacks the transparency and the lightness of Lexan, boasts durability, good heat dispersion, and low cost. I had originally pushed for an aluminum endcap since our batteries would shut down after the sub had run for an extended period in the sun. It looks like the mechanical engineers took it up a notch.

SubjuGator will also sport a variety of new sensors come competition time. The main feature is a new DVL, donated by one of our sponsors. Despite the $15000 price tag and the five additional pounds, it should be worth the navigational enhancement. The sub has also finally graduated from web cameras as its primary vision source. In addition to two high-quality USB cams, several students are working on both ranging and imaging sonar devices. I am skeptical of the usefulness-versus-overhead ratio of the imaging sonar. The ranging sonar, however, will help us determine the location of the walls in SPAWAR‘s TRANSDEC, and possibly obstacles in our path. The hydrophones have not changed since last year, and similarly, most of the custom board designs will be the same.

Microsoft Robotics Studio (MSRS)

13 01 2007

    Team SubjuGator has adopted MSRS as the communications platform between all hardware components within SubjuGator.  MSRS provides a (theoretically) simple way to access any part of your robot, hardware or software, as a service. So, for example, if SubjuGator uses two webcams, each of those webcams can be accessed as a service. As a service, I can turn one webcam off, have other services subscribe to that service (requesting frames, live feed, a change in frame rate, etc), and access a plethora of diagnostic information.
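To make the “everything is a service” idea concrete, here is a toy webcam service driven by a small message protocol. The operation names and return values are invented for illustration; the real MSRS service contract looks different:

```python
# Toy illustration of a webcam exposed as a service through a small
# message protocol (hypothetical operations, not the MSRS contract).

class WebcamService:
    def __init__(self):
        self.enabled = True
        self.frame_rate = 15
        self.subscribers = []

    def handle(self, op, payload=None):
        if op == "Enable":
            self.enabled = bool(payload)     # turn the camera on/off
            return self.enabled
        if op == "SetFrameRate":
            self.frame_rate = int(payload)   # change the frame rate
            return self.frame_rate
        if op == "Subscribe":
            self.subscribers.append(payload) # register for frame events
            return len(self.subscribers)
        if op == "Get":
            # Diagnostic snapshot of the service's state.
            return {"enabled": self.enabled, "frame_rate": self.frame_rate}
        raise ValueError(f"unknown operation: {op}")

cam = WebcamService()
cam.handle("SetFrameRate", 30)
cam.handle("Enable", False)      # turn one webcam off
print(cam.handle("Get"))         # {'enabled': False, 'frame_rate': 30}
```

Because every interaction goes through `handle`, any other service (or a remote debugger) can drive the camera the same way — which is the appeal of the service model.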

MSRS presents a slight learning curve, especially for a beginner C# user such as myself. In an attempt to bump brains with others in similar situations, I created a Google group dedicated to MSRS.


11 01 2007

    This blog will note the developmental decisions and progress of Team Subjugator— the two-time national champion of AUVSI and ONR‘s autonomous underwater vehicle (AUV) competition. I aim to catalogue the team’s work for the sake of our peers, the competition judges, the robotics community, and most especially, the team itself. What the team reads here as a review of our successes and failures, you can use as an inside look at how a team robotics effort operates.

Discussions on SubjuGator’s hardware, software, and operating algorithms will be limited in their detail until the 2007 competition nears. Though we encourage peer learning, this is still a competition. Within this subset of discussions, the majority will focus on SubjuGator’s software because software and algorithm design is my primary role on Team Subjugator.

Post often and I’ll answer any questions I am capable of answering.