Yearly Archives: 2016

Giving Wall-E2 A Sense of Direction, Part V

Posted 06/18/16

My last few posts have described my efforts to create an easy-to-use magnetometer calibration utility to allow for as-installed magnetometer calibration.  In situ calibration is necessary for magnetometers because they can be significantly affected by nearby ‘hard’ and ‘soft’ iron interferers.  In my research on this topic, I discovered there were two main magnetometer calibration methods.  In the first, 3-axis magnetometer data is acquired with the entire assembly containing the magnetometer placed in a small set of well-known positions, and the data is then manipulated to generate calibration values that can be used to correct magnetometer data at any arbitrary position.  The other method involves acquiring a large amount of data (hundreds or thousands of points) while the assembly is rotated arbitrarily around all three axes.  The compensation method assumes the acquired data is sufficiently varied to cover the entire 3D sphere, and finds the best fit of the data to a perfect sphere centered at the origin.  This produces an upper triangular 3×3 matrix of multiplicative values and an offset vector that can be used to convert any raw magnetometer reading to a compensated one.  I decided to create a tool using the second method, mainly because I had available a MATLAB script that would do most of the work for me, and Octave, the free open-source application that can execute most MATLAB scripts.  Moreover, Octave for Windows can be called from C#/.NET programs, making it a natural fit for my needs.  In any case, I was able to implement the utility (twice!!) over the course of a couple of months, getting it to the point where I am now ready to try calibrating my CK Devices ‘Mongoose’ IMU, as installed on my ‘Wall-E2’ four-wheel drive robot.

However, before mounting the IMU on the robot and going for ‘the big Kahuna’ result, I decided to essentially re-create my original experiment with the IMU rotated in the X-Y plane on my bench-top, as described in the post ‘Giving Wall-E2 A Sense of Direction – Part III‘.  My 4-inch compass rose had long since bitten the dust, but I had saved the print file (did I tell you that I never throw anything away?), so I just printed a new one.

Mongoose IMU on 4-Inch Compass Rose

So, I basically re-created the original heading error test from back in March, and got similar (but not identical) results, as shown below:

Heading Error, Compensation, and Comp+Error

06/19/16 Mongoose 'Desktop' Heading Error

Then I used my newly minted magnetometer calibration utility to generate a calibration matrix and center offset, so I could apply them to the above data.  However, before I could do that I had to go back into CK Devices’ original code to find out where the calibration should be applied – more digging :-(.

In the original Mongoose IMU code, the function ‘ReadCompass()’ in HMC5883L.ino gets the raw values from the magnetometer and generates compensated values using whatever values the user places in two ‘struct’ objects (all zeros by default).  However, I was clever enough to send only the ‘raw’ uncalibrated magnetometer data to the serial port, so that is what I’ve been using as ‘raw’ data for my mag calibration tool – so far, so good.  But what I need for my robot is compensated values, so (hopefully) I can (accurately?) determine Wall-E2’s heading.

So, it appears I have two options here; I can continue to emit ‘raw’ data from the Mongoose and perform any needed compensation externally, or I can do the compensation internally to the Mongoose and emit only corrected mag data.  The problem with the latter option (internal to the Mongoose) is that I would have to defeat it each time the robot configuration changed, with its inevitable change to the magnetometer’s surroundings.  If I write an external routine to do the compensation based on the results from the calibration tool, then it is only that one routine that will require an update.  OTOH, if the compensation is internal to the Mongoose, then modularity is maximized – a very good feature.  The deciding factor is that if the routine is internal to the Mongoose, then I can remove it from the robot and still have a complete setup for magnetometer work.  So, I decided to write it into the Mongoose code, but have the ability to switch it in/out with a compile-time switch (something like NO_MAGCOMP?)

The compensation expression being implemented is:

W = U*(V-C), where U = spherical compensation matrix, V = raw mag values, C = center offset value

Since U is always upper triangular (don’t ask – I don’t know why, though I suspect it’s a by-product of the Cholesky factorization used in the fitting script), the above matrix expression simplifies to:

Wx = U11*(Vx-Cx) + U12*(Vy-Cy) + U13*(Vz-Cz)
Wy = U22*(Vy-Cy) + U23*(Vz-Cz)
Wz = U33*(Vz-Cz)

I implemented the above expression in the Mongoose firmware by adding a new function ‘CalibrateMagData()’ as follows:

Using the already existing s_sensor_data struct which is defined as follows:

Then I created another ‘print’ routine, ‘PrintMagCalData()’ to print out the calibrated (vs raw) magnetometer data. Also, after an overnight dream-state ‘aha’ moment, I realized I don’t have to incorporate a compile-time #ifdef statement to switch between ‘raw’ and ‘calibrated’ data readout from the Mongoose – I simply attach a jumper from either GND or +3.3V to one of the I/O pins, and implement code that calls either ‘PrintMagCalData()’ or ‘PrintMagRawData()’ depending on the HIGH/LOW state of the monitor pin. Now  that’s elegant! 😉

After making these changes, I fired up just the Mongoose using VS2015 in debug mode, which includes a port monitor function.  As soon as the Mongoose came up, it started spitting out 3D magnetometer data – YAY!!

It’s been a few days since I got this going – my wife and I went off to a weekend bridge tournament in Kentucky and we got back late last night – so I didn’t get  a chance to compare the ‘after-calibration’ heading performance with the ‘before’ version until today.

After Calibration Magnetic Heading Error

Comparing the above chart to the one from 6/19, it is clear that they are virtually identical.  I guess what this means is that, at least for the ‘free space’ case with no nearby interferers, calibration doesn’t do much.  It also implies that the heading errors observed above have nothing to do with external influences – they are ‘baked in’ to the magnetometer itself.  The good news is, a sine function correction table should take most of this error out, assuming more accurate heading measurements are required (I don’t).

In summary, at this point I have a working magnetometer calibration tool, and I have used it successfully to generate calibration matrix/center offset values for my Mongoose IMU’s HMC5883 magnetometer component.  After calibration, the ‘free space’ heading performance is essentially unchanged, as there were no significant ‘hard’ or ‘soft’ iron interferers to calibrate out.

Next up – remount the Mongoose on my 4WD robot, where there are  plenty of hard/soft iron interference sources, and see whether or not calibration is useful.


Magnetometer Calibration, Part III

Posted 06/14/16

In my last post on the subject of Magnetometer Calibration, I described an entirely complete and wonderful calibration utility I wrote in C#/.NET using Windows Forms, an old version of devDept’s EyeShot 3D viewport library, and calls into the Octave libraries to execute a MATLAB calibration script.  Unfortunately, at the end of the project I discovered to my horror that my redistribution rights for the EyeShot libraries had expired some time ago, and re-upping them was prohibitively expensive – so I could use my masterpiece, but no one else could! :-(.

So, it was ‘back to the drawing board‘ for me.  I needed an (ideally free) 3D visualization capability with reasonably easy-to-implement view manipulation tools (pan, zoom, rotate, coordinate axis, etc.) that was compatible with C#/.NET.  After a fair bit of research, I found that Microsoft’s WPF (Windows Presentation Foundation) platform advertised ‘full’ 3D visualization capability, so that was encouraging.  Unfortunately, I had never used WPF at all, having done all my C#/.NET programming in the Windows.Forms namespace.  There were some posts suggesting that a WPF 3D viewport window could be hosted in a Forms-based app via WPF’s ‘ElementHost’ integration control, but after trying this a bit I decided it was going to be too hard to build up the required 3D viewport and associated view manipulation tools ‘from scratch’.  Eventually I ran across the 3D Helix Toolkit at  http://www.helix-toolkit.org/, and this looked very promising, but with the downside of having to re-create the entire application in WPF-land.  Actually, this appealed to me in a masochistic sort of way, as I would have the opportunity to learn two completely new packages/skills – WPF programming in general (which I had been ignoring for years in the hopes it would go away) and the feature-rich (but somewhat rough around the edges) Helix Toolkit.

So, off I went, reading as much as I could get my hands on about WPF and .NET visual programming.  It was initially very difficult to wrap my head around the way that WPF combines XAML with C# ‘code-behind’ to achieve the desired results.  At first I tried desperately to stick to my WinForms technique of drag/dropping tools onto a work surface and then modifying properties as desired.  This worked up to a point, but I rapidly got lost due to the marked difference between WinForms’ ‘everything is a child of the main window’ philosophy and WPF’s hierarchical layout as described in XAML.  So, my first effort to build a WPF app isn’t very pretty, and definitely violates any number of rules for WPF elegance!  However, the use of WPF and the Helix Toolkit made it reasonably easy to implement the ‘raw’ and ‘calibrated’ 3D views, and I had no real trouble porting the comm port and Octave implementation logic from my previous app to this one.  And of course, the entire object of the exercise was to create an app that could be shared, and the WPF version (hopefully) does that.

My plan for the future – at least with respect to the Magnetometer Calibration Utility – is to share the app within the robotics/drone community, and to continue to support it as necessary to fix bugs and/or implement requested enhancements.  I also plan to set up a public GitHub repository as soon as I can figure out how to do it ;-).


Magnetometer Calibration, Part II

Posted 06/13/16

In my last post on this subject back in April, I had managed to figure out that my feeble attempts to compensate my on-robot magnetometer for hard/soft iron errors weren’t going to work, and that I was going to have to actually do a ‘whole sphere’ calibration to get any meaningful azimuth values from my magnetometer as installed on my robot.

As noted back in April, I had discovered two different tools for full-sphere magnetometer calibration (Yuri Matselenak’s program from the DIY Drones site, and Alain Barraud’s MATLAB script from 2008), but neither of them really filled the bill for an easy-to-use GUI for dummies like me.  At the end of April’s post, I had actually built up a partial GUI based on devDept’s EyeShot 3D viewport technology that I had lying around from a previous lifetime as a scientific software developer.  All I had to do to complete the project was to figure out how to integrate Alain’s MATLAB code into the EyeShot-based GUI and I’d be all set – or so I thought! ;-).

Between that last post in April and now, I have been busy with various insanities – competitive bridge, trying to develop a 3-point basketball shot, and generally screwing off, but I did manage to spend some time researching the issue of MATLAB-to-C# code porting.  At first I thought I would be able to simply port the MATLAB code to C# line-by-line.  I had done this in the past with some computational electromagnetics codes, so how hard could it be, anyway?  Well, I found out that it was pretty fricking hard, as in effectively impossible – at least for me; I just couldn’t figure out how to relate  the advanced matrix manipulations in Alain’s code to the available math tools in C#.  I even downloaded the Math.NET Numerics toolkit from  http://numerics.mathdotnet.com/ and tried it for a while, but I just could not make the connection between MATLAB matrix manipulation concepts and the corresponding ones in the Numerics toolkit – argghhh!!!.

After failing miserably, I decided to try and skin the MATLAB cat a different way.  I researched the Gnu Octave community, and discovered that not only was there a nice Octave GUI available for Windows, but that some developers had been successful at making calls into the Octave DLLs from C# .NET code – exactly what I needed!

So, it was full steam ahead (well, that’s not saying much for me, but…) with the idea of a C#.NET GUI that used my EyeShot 3D viewport for visualization, and Octave calls for the compensation math, and  within a few weeks I had the whole thing up and running – a real thing of beauty that I wouldn’t mind sharing with the world, as shown in the following video clip.

 

Unfortunately, after doing all this work I discovered that my EyeShot redistribution license for the 3D viewport library had long since expired, and although I can run the program happily on my laptop, I can’t distribute the libraries anywhere :-(((((.

Ah, well, back to the drawing board!

Frank

(author’s note: Although I did this work back in the April/May timeframe, I didn’t post about it until now.  I decided to go ahead and post it  now as a ‘prequel’ to the next post about my ‘final solution’ to the magnetometer calibration utility challenge)


Magnetometer Calibration, Part I

Posted 20 April, 2016

I have been trying to add a magnetometer to my Wall-E2 robot for a while now, and have been plagued by installation-induced errors.  A magnetometer works fine on the bench in isolation, but not when installed on the robot.  So far, I have determined that the DC drive motors are a big contributor to these errors, and the only way to address this is with magnetometer calibration.  At first I tried just a simple lookup-table based approach, since I only needed to correct for azimuth errors (my wheeled robot very rarely departs from a horizontal orientation).  Again, this worked great ‘on the bench’, but failed miserably in the installed configuration.

So, after being forcibly convinced that ‘the easy way’ wasn’t going to work, I began researching magnetometer calibration techniques and tools.  At the DIY Drones website, I found a very informative article by Yury Matselenak with a great explanation, and a set of two tools (a visualizer and a calibration tool).  I thought I was ‘in like Flynn’ and downloaded the tools.  Unfortunately the visualizer tool didn’t recognize two-digit comm port numbers, so once again I was stuck (Yury later gave me a link to the visualizer source code, but I haven’t yet had the time to fix it for the higher-numbered ports).  In the meantime, I found another calibration tool, written in MATLAB by Alain Barraud, so I decided to try my hand at a magnetometer calibration manager.  I have a fair bit of experience with MATLAB from a prior lifetime as a researcher, and I have previously ported MATLAB code to C++, so how hard could it be?  From a previous project I had access to an older version of the neat EyeShot 3D visualization libraries, so it was easy to add a 3D viewport to a standard C# Windows form.  My plan was to have a ‘raw’ and a ‘calibrated’ view, so it would be easy to see the effects of the calibration process, and to use the free MathNet.Numerics matrix manipulation tools to port the MATLAB code.  Unfortunately, the MATLAB matrix manipulation routines used in the calibration code didn’t correspond closely enough to MathNet’s library functions, so I got stuck during the port and wasn’t able to finish the ‘calibrated’ side of the application – bummer!

However, I did get far enough along to notice another problem; the raw data from my magnetometer exhibited a lot of ‘spread’ in one of the principal planes, where a small percentage of the data points formed a rough circle at about twice the radius of all the other points.  When I ran this dataset through Alain’s original MATLAB code (using Octave on my Linux box), the ‘calibrated’ dataset actually looked worse than the original.  This prompted me to add some point-editing capability to my visualizer, so I could visually remove outlier data prior to attempting calibration.

Raw vs Calibrated data before pruning outliers

Raw vs Calibrated data after pruning outliers

Here are some shots of the process of pruning the dataset using my calibration app

Raw magnetometer data before any pruning

All data beyond a settable radius selected

All selected outliers removed

Pruned data written to text file for later processing through MATLAB calibration app

So, even though I couldn’t (and still can’t) figure out how to port Alain’s MATLAB code into my calibration tool, I  was able to use the tool to visually prune outliers from my data prior to running it through the calibration routine, and so all my work wasn’t a complete waste – whoopee! (not).

Frank


Giving Wall-E2 A Sense of Direction, Part IV

Posted 03/20/16

In my last post, I described my efforts to integrate a CK Devices Mongoose 9DOF IMU module into my Wall-E2 wall-following robot. In that post, I had collected  a set of azimuth (heading) data from the Mongoose showing that the Mongoose was operating properly and that significant error compensation was possible using a simple sine function.  This led me to believe that it would be feasible to install the Mongoose on the robot and create a similar compensation function.  Unfortunately, that turned out not to be the case.  When I collected a similar set of heading data in the installed case, the data showed non-linear behavior for which function-based compensation  would be difficult, if not impossible to achieve.

The image  below shows results from the ‘bare’ (desktop) uninstalled configuration, where it is clear that the Mongoose raw heading values closely follow the actual magnetic heading, and that a simple sine function is sufficient to reduce heading error to approximately +/- 2 deg.

Mongoose Mag Heading & Error on desk before installation on robot

The next set of  images shows the Mongoose installed on the robot between the upper and lower decks.  The idea was to place the Mongoose in a location reasonably well protected physically, but away from high-current elements.  I also wanted to avoid placing it on the upper deck to avoid the additional inter-deck wiring and attendant maintenance/troubleshooting complexity.

Mongoose installed on robot, side view

Mongoose installed on robot, top view

160318MongooseInstalled1_Annotated2

 

After installation, the same set of measurements was taken, with the results shown in the following plots.  As can be seen, the Mongoose heading readings in this case are hugely different from the desktop run.  Instead of being able to compensate the heading error to within +/- 2 deg, the compensated error is greater than the uncompensated one!  Looking at the Installed Raw Heading plot, it appears that the readings are relatively linear (but heavily suppressed) out to about 315 deg, where something bad happens and the reported heading falls rapidly to near zero.  I made another run in the installed configuration with the magnetometer gain turned down as far as possible, but this did not materially improve the situation.  Clearly something on the robot was drastically affecting the Mongoose magnetometer, to the extent that compensation was impossible.  Moreover, due to the retrograde readings between 315 and 360 degrees, even a lookup table solution seems problematic.

160318MongooseInstalledHdgCal1Results

As I often do when faced with what appears to be an insurmountable obstacle, I tabled the problem and did something else while my subconscious worked on it.  After a couple of days of this, I decided to go back to the uninstalled (desktop) case, re-establish my measurement baseline, and then see if I could determine what on the robot was causing such huge magnetic heading variations.  After playing around for a while, I was able to determine that the cause of the problem was the permanent magnets in the DC wheel motors – well DUH!!  After smacking myself on the forehead a couple of times for not thinking of this days ago, I realized that I had carefully determined that Wall-E’s chassis was constructed of aluminum, which shouldn’t (I thought) cause problems with the magnetometer, forgetting entirely that in addition to not affecting the magnetometer, aluminum also wouldn’t shield it from the strong magnetic fields generated by the motor magnets – oops!

So, what to do?  As much as I would like to avoid it, it appears now that the only viable solution (other than abandoning the magnetometer idea entirely) is to put the Mongoose on the top deck, as far away from the motors as possible.  I am at least a little optimistic that this will work, for two reasons: with the gain turned down, the Mongoose almost worked where it was, and because dipole magnetic fields fall off as 1/R³, a few cm could make a significant difference.

Stay tuned!

Frank


Giving Wall-E2 A Sense of Direction – Part III

Posted  March 14, 2016

In my last post from about a week ago, I described my ongoing efforts to integrate the CK Devices Mongoose 9DOF IMU into my ‘Wall-E2’ wall-following robot.  Since that time, I have gotten the Mongoose successfully integrated into the robot, and am able to see magnetometer & accelerometer readings being passed through the host Arduino Mega to the PC via the Mega’s serial port.

After getting all the hardware and software issues worked out, I have now started on the issue of getting the magnetometer calibrated for its new home in the robot.  All magnetometers need to be calibrated after installation to compensate for errors caused by nearby magnetic and metallic objects; otherwise the reported magnetic heading can be substantially off.  I have considered just using a 360-element lookup table containing offset values, but that’s a bit tacky even for one with my low standards, so I have been researching available magnetometer calibration techniques and tools.  I found a nice discussion at DIY Drones here, but I have been having trouble getting the tools to work.  The discussion (and tools) center around the widely available GY-273 HMC5883L breakout board, and this ain’t quite the same animal as the Mongoose.

Following the general line of the discussion at DIY Drones, I downloaded the ‘MagMaster’ ZIP files, and attempted to get the MagViewer visualizer program linked up with my Mega/Mongoose combination, without much success.  After flailing around for a while with the robot/Mongoose setup, I decided to simplify things by isolating the Mega/Mongoose combination from everything else going on with the robot.  I had a spare Mega, so I simply connected the Mongoose to Tx1/Rx1 on the spare (same as on the robot) and loaded a modified version of the robot controller onto the Mega.

MongooseCalSetup1

The modifications basically stripped away everything to do with the robot, leaving only the code that interfaces with the Mongoose.  According to the DIY Drones discussion, this  should have allowed the MagViewer program to see the same magnetometer data as for the GY-273, but apparently not.  The viewer program never shows any activity – bummer!

Posted  March 16, 2016

Well, I’m still not sure why the MagViewer program didn’t recognize the Mongoose mag data (I posted to the DIY Drones forum, but no replies yet), so  I decided I would try a variant on my original idea of a lookup table.

First, I needed a way to accurately orient  my Mongoose module to different headings.  I Googled around a bit, and found a protractor printing site  that offered 360-degree heading graphics like the one shown below

4-inch heading circle, calibrated in degrees

Then I taped a narrow strip of paper to the bottom centerline of the Mongoose module to make an accurate pointer, and proceeded to record the actual Mongoose heading reading for each 5-degree increment around the circle.  I started this process by orienting the paper heading circle so that the 0/360 point corresponded to a Mongoose heading reading of 0 degrees, i.e. aligned with magnetic north as measured by the Mongoose.  The data were recorded in an Excel spreadsheet, and the error term (difference between the nominal heading value and the Mongoose reading) was calculated and graphed, as shown below.

 

Mongoose reading versus actual magnetic heading, with error plot

Well, when the graph first popped up, I just about fell off my chair, as I recognized an almost perfect sine wave graph.  This immediately told me two things:

  1. The Mongoose sensor, the test setup, and the data were all valid.  There is no way I could have managed that smooth a curve by accident, and also no way it could have been that smooth if there had been any significant mag field distortion.
  2. At least for this case, with no significant installation errors, almost perfect heading compensation could be accomplished with just a simple sine function plus a slight negative DC offset corresponding to the average value of all errors (about -1.35 deg according to Excel).

To test this theory, I used Excel to calculate the required sine function values, and added the result to the calculated error values for each measured angle.  Then I plotted the compensation and comp+error curves on the same plot as before, as shown below.

Angle Error, Comp values, and Comp+Error

From the plot, it is clear that the compensation  is effective, although not perfect. The compensated error amplitude looks to be about 2-2.5 degrees, more than adequate for my purposes.  I think the remaining error is due to the fact that the sensor data traces out an ellipse rather than a circle.
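For reference, the Excel compensation can be sketched in code like this – the sine amplitude here is a placeholder eyeballed from the error plot; only the -1.35 deg average offset comes from the numbers above:

```cpp
#include <cmath>

// Compensation modeled as err(h) ~= A*sin(h) + B, so the corrected
// heading is the measured heading minus that model.
// A (amplitude) is a placeholder; B = -1.35 deg is the average error
// reported by Excel.
const double PI = 3.14159265358979323846;
const double A = 8.0;     // deg, placeholder amplitude (a guess)
const double B = -1.35;   // deg, average error (DC offset)

double CorrectedHeading(double measuredDeg)
{
    double errEst = A * std::sin(measuredDeg * PI / 180.0) + B;
    double corrected = measuredDeg - errEst;
    // wrap into [0, 360)
    return std::fmod(corrected + 360.0, 360.0);
}
```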

So, the next step is to install the Mongoose sensor back on the robot, and do a similar test utilizing an 8″ diameter version of the heading graphic.  Assuming I get similar compensation results, I’ll probably call it a day and start running field tests.  After all, I’m not sure I care if Wall-E thinks a hallway is oriented at 270, 260, or 280 degrees magnetic, as long as the next time it goes down the same hallway it gets more or less the same results!

Stay tuned!

Frank


Giving Wall-E2 A Sense of Direction – Part II

Posted 03/07/16

In my last post on this subject, I described my efforts to renew my acquaintance with CK Devices’ Mongoose 9DOF IMU board, with the intent of integrating it into my wall-following robot project.  This post describes the first step toward that integration goal.

The Mongoose board operates on 3.3V and communicates via logic-level (TTL) asynchronous serial over a 6-pin 0.1″ header block, compatible with the CK Devices ‘FTDI Pro’ programmer module, as shown in the following photo.

Mongoose 9DOF IMU board, with FTDI Pro USB-Serial adapter

Integrating this board into Wall-E requires addressing two distinct but related issues – the voltage difference between the 5V Arduino Mega controller and the 3.3V Mongoose, and  the coding changes required to retrieve the desired heading/attitude information from the Mongoose.

The Hardware

The 5V/3.3V issue requires that 5V serial data levels from the Arduino Mega be stepped down to 3.3V, and 3.3V serial data levels from the Mongoose be stepped up to 5V.  Simply ignoring these problems does seem to work (the 5V data from the Mega doesn’t appear to harm the Mongoose, and the 3.3V data from the Mongoose does seem to be recognized by the Mega), but it is inelegant to say the least.  Also, a recent addition to Wall-E’s superpowers included Pololu’s ‘Wixel Shield’ module, which fortuitously included some spare circuitry for just this purpose, as shown in the schematic drawing excerpt below.  The upper right corner shows the full step-up circuit used to connect the 3.3V Wixel Tx line to the 5V Arduino Rx line, and the top center shows the step-down circuit used to connect the 5V Arduino Tx line to the 3.3V Wixel Rx line.  The lower area shows the spare parts available on the Wixel shield – and the important thing is that there are two general-purpose N-channel MOSFETs.  These can be used to construct the same step-up circuit to connect the 3.3V Mongoose Tx line to the 5V Arduino Mega Rx1 line, and one of the 4 available 2/3 voltage dividers can be used to step down the 5V Arduino Mega Tx line to the 3.3V Mongoose Rx line.

SchematicDetail

The following image  is a small section of my overall Scheme-it (Digikey’s free online schematic capture app) schematic for Wall-E, showing the addition of the Mongoose IMU.

MongooseTxRxDetail

The photo below shows the serial up/down converter connections on the Wixel Shield board.  On the near side, Red = Mongoose Tx, Org = Mongoose Rx.  On the far side, Org = Mega Rx1, White = Mega Tx1.

Mongoose Serial Up/Down Converter Connections. Near side Red = Mongoose Tx, Org = Mongoose Rx. Far side: Org = Mega Rx1, White = Mega Tx1.  Yel = Mongoose 3.3V, Grn = Mongoose Gnd.

 

The Software:

After getting the hardware wired up and tested, I naturally thought all I had to do was plug in the Mongoose with the full set of reporting software, and I’d be ready to go – NOT!  So, after the obligatory yelling and cursing and the appropriate number of random code changes hoping something would work, I was forced to fall back on my somewhat rusty (but not completely worthless) trouble-shooting skills.  I started by creating a new, blank Arduino project.  Into this project I placed code to simply echo whatever I typed on the serial command line to the serial view window.  To this I added a few lines to make the laser pointer blink on/off, just to provide a visual “I’m alive!” indication, and then tested it.  After all errors were made/corrected, etc., the final code was as shown below.

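The echo/blink sketch went something like the following – reconstructed here in plain C++ with a tiny mock of Arduino’s HardwareSerial (the mock and the blink flag are my own scaffolding, not the original code) so the echo logic can be exercised off the board:

```cpp
#include <queue>
#include <string>

// Minimal stand-in for Arduino's HardwareSerial, so the echo logic
// can run off the board; on the Mega this is just the real Serial.
struct MockSerial {
    std::queue<char> rx;  // bytes waiting to be read
    std::string tx;       // bytes written out
    void begin(long) {}
    int  available() { return (int)rx.size(); }
    int  read() { char b = rx.front(); rx.pop(); return b; }
    void write(char b) { tx.push_back(b); }
} Serial;

bool laserOn = false;   // stands in for digitalWrite() on the laser pin

// One pass of loop(): echo any command-line bytes back to the serial
// monitor, and toggle the laser as an "I'm alive!" indicator.
void loopOnce()
{
    while (Serial.available())
        Serial.write(Serial.read());
    laserOn = !laserOn;
}
```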

The next step was to extend the echo loop to include Serial1, with a simple jumper between Tx1 and Rx1 on the Mega board.  In the software, all this required was a Serial1.begin(9600) in setup(), plus code to route command-line bytes out Tx1 and to route bytes received on Rx1 to Tx0 so they would get displayed on the serial monitor.  The code to do this is shown below:
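Something like the following – again modeled in plain C++ with mocked serial ports so the pass-through logic can be checked off-board; on the Mega these would just be the real Serial and Serial1 objects:

```cpp
#include <queue>
#include <string>

// Stand-in for Arduino's HardwareSerial so the pass-through logic
// can be exercised off the board.
struct MockSerial {
    std::queue<char> rx;  // bytes waiting to be read
    std::string tx;       // bytes written out
    void begin(long) {}
    int  available() { return (int)rx.size(); }
    int  read() { char b = rx.front(); rx.pop(); return b; }
    void write(char b) { tx.push_back(b); }
} Serial, Serial1;

// setup() would call Serial.begin(9600) and Serial1.begin(9600).
// One pass of loop(): command-line bytes go out Tx1, and anything
// arriving on Rx1 is routed to Tx0 so it shows in the serial monitor.
void loopOnce()
{
    while (Serial.available())
        Serial1.write(Serial.read());
    while (Serial1.available())
        Serial.write(Serial1.read());
}
```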

The next step was to extend the loop beyond Serial1 and into the Mongoose, by replacing the jumper from Tx1 to Rx1 with the Mongoose board’s Rx & Tx lines respectively, and replacing the normal Mongoose code with the turn-around (echo) code.  No change should be required to the Arduino code, as it will still be echoing whatever shows up at Rx1 to Tx0.

The Mongoose end of the link simply needs to echo everything it receives on its serial port back out again.
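A turn-around sketch along these lines is about as simple as Arduino code gets; the baud rate here is assumed to match the Mega’s Serial1 setting:

```cpp
// Turn-around (echo) sketch for the Mongoose: anything received on its
// serial port is sent right back out. Baud rate assumed to be 9600 to
// match the Mega's Serial1 setting.
void setup()
{
  Serial.begin(9600);
}

void loop()
{
  while (Serial.available() > 0)
  {
    Serial.write(Serial.read());
  }
}
```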

 

When this code was loaded into the Mongoose and run stand-alone, I got the following output by typing ‘this is a test’ on the serial port command line.

MongooseEcho1

After the Mongoose was disconnected and placed into the robot circuit, I got the following output from the Robot project on a different comm port.

MongooseEcho2

So, it is clear from the above that the Mongoose board connected (through the step-up/down circuitry on the Wixel shield) to Rx1/Tx1 is successfully receiving and echoing whatever shows up on its serial port.

Now it was time to reload my modified version of CK Devices ‘Mongoose_base’ firmware back onto the Mongoose and get it running with Wall-E’s firmware.  After the usual number of hiccups and such, I got it running and nicely reporting ‘Tilt’ (X-axis accelerometer) and ‘Heading’ (derived from 3-axis magnetometer values) along with the normal robot telemetry values, as shown in the printout below.

Final version showing Mongoose Tilt & Hdg values integrated into normal robot telemetry


Although the Mongoose module has been successfully integrated into the robot system from a technical standpoint, there is still plenty of work to be done to complete the project.  At the moment (see the photo below), it is just kind of ‘hanging around’ (literally), so some additional work will be required to mount it properly.  Stay tuned! ;-).

Mongoose installed loosely on Wall-E2


 

Giving Wall-E2 A Sense of Direction

Posted 02/16/16

Wall-E2, my 4WD wall-following robot, is doing pretty well these days.  He can navigate autonomously around the house quite nicely, and almost never gets irretrievably stuck.  Up until the addition of front wheel guards a couple of months ago, Wall-E2 was quite adept at literally climbing the walls and winding up in the ‘scared tractor’ (from ‘Cars’) pose, or turning himself completely over on his back.  Since then he has been much better behaved, but has still managed to very occasionally get himself into trouble (he has, on more than one occasion, managed to hang himself on a loose power or data cable, kinda like a horse rider getting scraped off by a low branch).  When this happens, Wall-E2 winds up on his back with his wheels spinning uselessly in the air.

So, my new ‘great idea’ is to give Wall-E2 a sense of direction, literally.  About 5 years ago I ginned up a pretty cool helmet-mounted attitude sensing device for my dressage-riding wife using a ‘Mongoose’ 9DOF board from CK Devices (I would post a link, but I don’t think they are being made anymore – see the Sparkfun ‘Razor’ IMU instead).  Anyway, I still had this miraculous little board hanging around, and decided to see if I could integrate it into Wall-E2.  The idea is that if I could detect an incipient ‘scared tractor’ event, I could short-circuit it by stopping or reversing the motors, or maybe taking some other action if that didn’t work.  In addition, I’m thinking maybe I could use the gyro & magnetometer sensors to have Wall-E2 report his current magnetic heading.  If I were to couple this with left/right/front distance readings, Wall-E2 *might* be able to determine where he was in the house.  And, if he could do that, then maybe he could tell when he was close to a charging station, and hook himself up for a quick electron meal (charging station yet to be designed/implemented, but hey – one thing at a time!).

So, I dug out the Mongoose board, and tried (unsuccessfully) to remember how I had gotten the darned thing to work 5 years ago (I can’t remember what happened 5 minutes ago, so 5 years was more than a stretch!).  Fortunately, I never, ever, throw files away (disk storage being effectively infinite, you know), so I was eventually able to track down my old Arduino ‘Motion Tracker’ project and bootstrap myself back up.  I did have a bit of a kerfuffle when I couldn’t get my Mongoose board to talk to the CK Devices Visualizer program, but that got solved after some head-scratching and a few emails to Cory (last name unknown) of CK Devices.

Using the very nice Visualizer program, as shown in the movie clip below, I was able to verify proper Mongoose operation.  I was also able to track down my old ‘Motion Tracker’ program (basically a very rudimentary hack of the ‘base’ Arduino program supplied by CK Devices) and verify that it still worked.  The next step(s) will be to figure out how to mount the Mongoose on Wall-E2, and how to integrate the IMU information into Wall-E2’s operations.

Stay tuned!

Evolution of a ‘Thank You’ Present

Posted January 22, 2016

As I have noted in previous posts, one of the really cool things about current 3D printing technology is the way it allows me to rapidly iterate through design options to arrive at an ‘optimum’ (where the definition of ‘optimum’ can be somewhat arbitrary) solution.

In this particular case, my wife Jo Anne was planning a trip to Florida to do some serious dressage training.  When Jo is in Florida she stays at the house of our good friends Mike and Pauline Hall, and she wanted some sort of ‘Thank You’ present for them.  She had seen something on the internet about filling a small round plastic globe with candy and putting it on top of an inverted plastic cup, and this struck a chord; she knew that Pauline Hall was a retired ‘Martian’ – the term used by long-time dedicated Mars employees to describe themselves – and the most famous Mars product is ‘M&M’ candies.  So, she commissioned me to create a customized M&M candy stand, with the words “Mars” and “Hall” inscribed somehow.

As I have learned through previous design/print iterations, the fastest way to get from idea to finished product is to simply start building prototypes; it doesn’t take long, is incredibly cheap, and the process usually converges rapidly to a very good (if not necessarily ‘optimum’) solution.  As I do in many of my designs, I first created a model in TinkerCad and then printed it at half (50%) scale.  Jo Anne was able to look at the half-scale model and see right away whether or not I was on the right track.  In this case she liked the first model, so I printed a full-scale one, and thought I was done.  Unfortunately, I had forgotten about the inscribed “Mars” and “Hall” text, so I was assuredly NOT done!  So, I simply had my wife write the text on the full-scale model with a Sharpie, and partied on.

Next was a full-scale model with the text cut out of the material, but this turned out to be a disaster; I had used ‘support’ structures to keep the text edges sharp, but the support material got so well attached to the main body that I couldn’t get it off (In the past I have tried dissolvable support material, but with very limited success).  So, I suggested that we try a two-color model, with the body in red and the text in white, and Jo agreed.

Next was a half-scale two-color model to prove the concept, followed by a full-scale ‘finished’ product.  Unfortunately, a “time-saving” modification I had made to the text portion of the design caused the text to ‘run’, and I had to make another print to get a real ‘finished’ item.

In the end I got something that looked very good, and is now a completely unique gift for the Halls; it may not be super expensive or jewel-encrusted or anything, but it is something that says “Thank You” in a uniquely Paynter-ish way 😉

The image below shows the evolution of the design from plastic cup through the half-scale models to the final product on the left, shown in front of the PowerSpec 3D printer used for the work.

Hall present design evolution, shown in front of my PowerSpec dual-extruder 3D printer


 

Making Wall-E2 Smarter Using Karnaugh Maps

Posted 01/12/16

A few weeks ago I had what I thought was a great idea – a way of making Wall-E2 react more intelligently to upcoming obstacles as it tracked along a wall.

As it stood at the time, Wall-E2 would continue to track the nearest wall until it got within a set obstacle clearance distance (about 9 cm at present), at which point it would stop, back up, and make a 90-deg turn away from the last-tracked wall direction.  For example, if it was tracking a wall to its left and detected a forward obstacle within 9 cm, it would stop, back up, and then turn 90 deg to the right before proceeding again.  This worked fine, but was a bit crude IMHO (and in my robot universe MY opinion is the only one that matters – Heh Heh!)

So, my idea was to give Wall-E2 the ability to detect an upcoming obstacle early enough that it could make a smooth turn away from the currently tracked wall, allowing it to intelligently navigate the typical concave 90-deg internal corners found in a house.  This required that Wall-E2’s navigation code recognize a third distinct forward distance ‘band’ in addition to the current two (less than 9 cm and greater than 9 cm).  This third band would run from the obstacle avoidance distance of 9 cm out to some larger range (currently set at 8 times the obstacle avoidance distance, or 72 cm).
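The three-band scheme is easy to capture in a small helper function.  A sketch (the names here are mine, not from Wall-E2’s actual code):

```cpp
#include <cassert>

// The three forward-distance bands described above (boundaries per the text):
//   NEAR : closer than the 9 cm obstacle-avoidance distance
//   STEP : from 9 cm out to 8 x 9 cm = 72 cm (start a smooth turn away)
//   FAR  : beyond 72 cm (keep tracking normally)
enum DistBand { NEAR_BAND, STEP_BAND, FAR_BAND };

const int OBSTACLE_DIST_CM = 9;
const int STEP_TURN_DIST_CM = 8 * OBSTACLE_DIST_CM;  // 72 cm

DistBand GetDistBand(int fwdDistCm)
{
    if (fwdDistCm < OBSTACLE_DIST_CM) return NEAR_BAND;
    if (fwdDistCm <= STEP_TURN_DIST_CM) return STEP_BAND;
    return FAR_BAND;
}
```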

After coding this up and setting Wall-E2 loose on some more test runs, I was able to see that this idea really worked – but not without the usual unintended consequences.  In fact, after a number of test runs I began to realize that the addition of the third distance ‘band’ had complicated the situation to the point where I simply couldn’t acquire (or maintain) a sufficiently good understanding of all the subtleties of the logic; every time I thought I had it figured out, I discovered all I had done was to exchange one failure mode for another – bummer!

So, I did what I always do when faced with a problem that simply refuses to be solved – I quit!  Well, not actually, but I did quit trying to solve the problem by changing the program; instead I put it aside, and began thinking about it in the shower, and as I was lying in bed waiting to go to sleep.  I have found over the years that when a problem seems intractable, it usually means there is a piece or pieces missing from the puzzle, and until I ferret it or them out, there is no hope of arriving at a complete solution.

So, after some quality time in the showers and during the ‘drifting off to sleep’ periods, I came to realize that I was not only missing pieces, but I was trying to use some pieces in two different contexts at the same time – oops!  I decided that I needed to go back to the drawing board (literally) and try to capture  all the variables  that comprise the input set to the logic process that results in a new set of commands to the motors.  The result is the diagram below.

Overall Logic Diagram


As shown in the above diagram, all Wall-E2 has to work with are the inputs from three distance sensors.  The left & right sensors are acoustic ‘ping’ sensors, and the forward one is a PulsedLight ‘Blue Label’ (V2) LIDAR sensor.  All the other ‘inputs’ on the left side are derived in some way from the distance sensor inputs.  The operating logic uses the sensor information, along with knowledge of the previous operating state, to produce the next operating state – i.e. a set of motor commands.  The processor then updates the previous operating state, and does it all over again.

The logic diagram breaks the ‘inputs’ into four different categories. First and foremost is the raw distance data from the sensors, followed (in no particular order) by the current operating mode (i.e. what the motors are doing at the moment), the current tracking state (left, right, or neither), and the current distance ‘band’ (less than 9cm, between 9 and 72cm, and greater than 72cm).  The processor uses this information to generate a new operating mode and updates the current distance band and current tracking state.

After getting a handle on the inputs, outputs, and state variables, I decided to try my hand at using the Karnaugh mapping trick I learned back in my old logic circuit design days 40 or 50 years ago.  The technique involves mapping the inputs onto one or more two-dimensional grids, where every cell in the grid represents a possible output of the logic process being investigated.  In its ‘pure’ implementation, the outputs are all ‘1’ or ‘0’, but in my implementation, the outputs are one of the 8 motor operation modes (tracking left/right, backup-and-rotate left/right, step-turn left/right, and rotate-90-deg left/right).  The full set of Karnaugh maps for this system is shown in the following image.

Karnaugh Map using variables from logic diagram


The utility of Karnaugh maps lies in their ability to expose possible simplifications to the logic equations for the various desired outputs.  In a properly constructed K-map, adjacent cells with the same output indicate a potential simplification in the logic for that output.  For instance, in the diagram above, the ‘Backup-and-Rotate-Right’ output occurs in all four cells in the top row of the ‘Tracking Left’ map (shown in green above).  This indicates that the logic equation for that desired output simplifies down to simply “distance band == NEAR”.  In addition, the ‘Backup-and-Rotate-Right’ output occurs for all four cells in the ‘Stuck Recovery’ column, indicating that the logic equation is simply “operating mode == Stuck Recovery”.  The sum (OR) of these two equations gives the complete logic equation for the ‘Backup-and-Rotate-Right’ motor operating mode, i.e.

Backup-and-Rotate-Right = Tracking Left && (NEAR || STUCK)

The above example is admittedly the least complicated, but the complete logic equations for all the other motor operation modes can be similarly derived, and are shown at the bottom of the K-map diagram above.  Note that while for completeness I mapped out the K-map for ‘Tracking Neither’, it became evident that it doesn’t really add anything to the logic.  It can simply be ignored for purposes of generating the desired logic equations.
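The equation above drops straight into code.  As a sketch (the enum and function names are my own inventions, not Wall-E2’s actual identifiers):

```cpp
#include <cassert>

// Names are assumptions; the post gives only the logic equation itself.
enum TrackState { TRACK_LEFT, TRACK_RIGHT, TRACK_NEITHER };
enum DistBand   { NEAR_BAND, STEP_BAND, FAR_BAND };

// Backup-and-Rotate-Right = Tracking Left && (NEAR || STUCK)
bool BackupAndRotateRight(TrackState track, DistBand band, bool stuckRecovery)
{
    return track == TRACK_LEFT && (band == NEAR_BAND || stuckRecovery);
}
```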

Now that I have what I hope and believe is the complete solution for the level of intelligence I wish to implement with Wall-E2, actually coding and testing it should be MUCH easier.  At the moment though, said implementation and testing will have to wait until I and my wife return from a week-long duplicate bridge tournament in Cleveland, OH.

Stay tuned! ;-))

Frank

January 16 Update:

As I was coding up the results of the above study, I realized that the original Karnaugh map shown above wasn’t an entirely accurate description of the problem space.  In particular, I realized that if Wall-E2 encounters an ‘open corner’ (i.e. both left & right distances are > max) just at the Far/Near boundary, it is OK to assign this condition to either the ‘Step-Turn’ case (i.e. start a turn away from the last-tracked wall) or the ‘Open Corner’ case (i.e. start a turn toward the last-tracked wall).  And if I were to arbitrarily (but cleverly!) assign this to ‘Step-Turn’, then the K-map changes from the above layout to the revised one shown below, where the ‘Open Corner’ condition has been reduced to just the one cell in the lower right-hand corner of both the left and right K-maps.

Revised Motor Control Logic Karnaugh Map


So now the logic expressions for the two  ‘Open Corner’ motor response cases (i.e. start a turn  toward the last-tracked wall) are:

Rotate 90 Left = Tracking Left && Open Corner && Far
Rotate 90 Right = Tracking Right && Open Corner && Far

But the  other implication of this change is that now the ‘Step-Turn’ expression can be simplified from the  ‘sum’ (logical  OR) of two 3-term expressions to a single 3-term one, as shown by the dotted-line groupings in the revised K-map, and the following expressions for the ‘Left Tracking’ case:

previous: Step-turn Right = Tracking Left && Step && (Wall Tracking || Step Turn)
new: Step-turn Right = Tracking Left && Step && !Stuck

much easier to implement!
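Assuming the K-map’s operating-mode axis holds the four values Wall Tracking, Step Turn, Open Corner, and Stuck Recovery (a guess at the exact column labels), a quick truth-table check confirms the new expression covers everything the old one did, with the reassigned Open Corner cell as the only difference.  The common ‘Tracking Left && Step’ factor is dropped here since it appears in both versions:

```cpp
#include <cassert>

// The four operating-mode columns are a guess at the K-map's labels.
enum OpMode { WALL_TRACKING, STEP_TURN, OPEN_CORNER, STUCK_RECOVERY };

// previous: Step-turn Right = Tracking Left && Step && (Wall Tracking || Step Turn)
bool StepTurnRightOld(OpMode m) { return m == WALL_TRACKING || m == STEP_TURN; }

// new: Step-turn Right = Tracking Left && Step && !Stuck
bool StepTurnRightNew(OpMode m) { return m != STUCK_RECOVERY; }
```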

OK, back to coding…..

Frank