Tag Archives: Arduino

Basic Arduino/MPU6050 (GY-521) test

Posted 29 September 2019,

In my quest to figure out WTF happened to my ability to acquire real-time relative heading information on both my 2-wheel and 4-wheel robots, I have been trying to start from scratch with very simple controller/IMU hardware configurations.  After succeeding with a basic functionality demonstration using a Teensy 3.2 and a Sparkfun MPU6250 IMU breakout board, I decided the next step would be to do the same thing with an Arduino Mega controller and a GY-521 (MPU6050 clone) to more closely replicate the hardware configuration on my 2-wheel and 4-wheel robots.

As usual I started this project with a web search for basic MPU6050/Arduino examples, and I found this YouTube video showing just what I was after.  After going through the video several times to make sure I understood what was going on, I decided to try and duplicate it so I could compare my (hopefully) working demo code with my (currently non-working) robot code.

In my past efforts with the MPU6050, I had struggled with the complexities of using Jeff Rowberg’s wonderful (but quite massive and convoluted) I2CDevLib GitHub repository. There was always something that didn’t quite fit the situation, and making it fit invariably required a trip down the rabbit hole into Alice’s wonderland.  Getting the right combination of files in the right places seemed to be more a matter of luck than skill.  However, this particular video does a nice job of explicitly demonstrating what has to go where.  Essentially the magic steps are:

  • Download Jeff Rowberg’s I2CDevLib repository from GitHub as a ZIP file.
  • Unzip the repository files into a temporary folder.
  • Copy the Arduino/I2CDev and Arduino/MPU6050 folders into the Arduino/Libraries folder. This makes them available to the Arduino IDE (and the VS2017/Visual Micro setup I use).
  • Open a new sketch in the Arduino IDE (or a new project in the VS/VisMicro environment) and then:
    • In the Arduino IDE, select ‘File->Examples’, scroll down to the ‘Examples from Custom Libraries’ section, and then select ‘MPU6050->MPU6050_DMP6’. This will load the example code into the sketch.
    • In the VS/VM environment, select the Visual Micro Explorer (under the vMicro tab). Then click on the Examples tab, expand the ‘MPU6050’ section and then select the MPU6050_DMP6 example. This will load the code into the edit window.

Assuming the wiring is set up correctly, the example should run ‘out of the box’ with no required modifications.  However, after verifying that everything was working, I made the following changes:

  • The unmodified MPU6050_6Axis_MotionApps20.h file configures the MPU6050 DMP to send data packets to the controller at a fairly high rate – like 100Hz.  This is way too high for my robot application, so I changed the configuration to send packets at a 10Hz rate, by changing the MPU6050_DMP_FIFO_RATE_DIVISOR constant from 0x01 to 0x09 (lines 271-274) as shown below
  • The Arduino I2C library (Wire.h) has a well-known and documented flaw that causes the I2C bus to hang up on an intermittent basis, so I modified I2CDev.h lines 50-57 to use the SBWIRE library, which includes timeouts to prevent this problem from happening (also shown in the snippet below)
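For reference, the two library edits look something like the snippet below. This is a paraphrase rather than a verbatim copy of the library source, and the exact macro name used to select the SBWIRE implementation in I2Cdev.h is an assumption; use whatever option name your modified copy of the library defines.

// In MPU6050_6Axis_MotionApps20.h (around lines 271-274):
// a larger divisor slows the DMP output packet rate (0x01 is the stock value)
#ifndef MPU6050_DMP_FIFO_RATE_DIVISOR
    #define MPU6050_DMP_FIFO_RATE_DIVISOR 0x09   // was 0x01
#endif

// In I2Cdev.h (around lines 50-57): select the SBWIRE-based implementation instead of Wire.h
//#define I2CDEV_IMPLEMENTATION       I2CDEV_ARDUINO_WIRE       // stock setting
#define I2CDEV_IMPLEMENTATION       I2CDEV_BUILTIN_SBWIRE       // assumed option name for the SBWIRE path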

And the last change I made was to disable the interrupt service routine (ISR) and use a polling technique.  Instead of waiting for an interrupt, I simply poll the DMP register with

‘mpuIntStatus = mpu.getIntStatus();’

every time through the loop.  If the return value indicates that a data packet is ready, it is read; otherwise it does nothing.  This appears to be entirely equivalent to the interrupt technique as long as the loop is fast enough to service the DMP’s FIFO.
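Here’s a minimal sketch of what that polling block looks like, using the same I2CDevLib calls and data-ready bit check (0x02) as the stock MPU6050_DMP6 example; the variable names are mine:

// inside loop(), with no ISR attached -- poll the MPU6050 status register instead
mpuIntStatus = mpu.getIntStatus();                    // read INT_STATUS over I2C
fifoCount = mpu.getFIFOCount();

if ((mpuIntStatus & 0x02) && fifoCount >= packetSize) // DMP data ready and a full packet available
{
    mpu.getFIFOBytes(fifoBuffer, packetSize);         // pull one packet off the FIFO
    mpu.dmpGetQuaternion(&q, fifoBuffer);
    mpu.dmpGetGravity(&gravity, &q);
    mpu.dmpGetYawPitchRoll(ypr, &q, &gravity);
    yawDeg = ypr[0] * 180.0 / M_PI;                   // latest relative heading, in degrees
}
// otherwise do nothing this pass and check again next time through loop()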

30 September Update:

Well, something’s not equivalent, as the yaw values are fine for a few minutes, but then start showing up as ‘179.000’.  From my previous work I know this means that the line

mpu.getFIFOBytes(fifoBuffer, packetSize);

is getting out of sync with the DMP and isn’t reading a complete packet.  When I then changed the code back to the original interrupt-driven model, the yaw values stay valid forever.

03 October Update:

I modified the code to break the ‘put other programming stuff here’ block out of the ‘if()’ within a ‘while()’ within a ‘loop()’ structure for two reasons:

  • It gave me a headache every time I tried to figure out how it worked
  • I wanted to do ‘the programming stuff’ only once every K Msec where K was something like 100 or 200.  With the above nested structure, that would never work.

After removing extraneous comments and unused code, the resulting program is shown below:
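(The full listing isn’t reproduced here; the sketch below is a condensed reconstruction of its structure.  Variable and function names follow the stock MPU6050_DMP6 example except for GetIMUHeadingDeg() and the timing constant, which are mine, and the calibration/offset boilerplate in setup() is omitted.)

#include "Wire.h"
#include "I2Cdev.h"
#include "MPU6050_6Axis_MotionApps20.h"

MPU6050 mpu;
const int MPU6050_INT_PIN = 2;                          // Mega external interrupt pin (assumed)
const unsigned long IMU_CHECK_INTERVAL_MSEC = 200;      // 'other programming stuff' runs at this rate

bool dmpReady = false;                                  // true if DMP init succeeded
volatile bool mpuInterrupt = false;                     // set by the ISR, cleared when serviced
uint8_t fifoBuffer[64];
uint16_t packetSize;
float GlobalYawDeg = 0.0;                               // latest relative heading from the DMP
unsigned long lastAdjustMsec = 0;

void dmpDataReady() { mpuInterrupt = true; }            // ISR: just set the flag

float GetIMUHeadingDeg()                                // read one packet and convert to yaw in degrees
{
    Quaternion q; VectorFloat gravity; float ypr[3];
    mpu.getFIFOBytes(fifoBuffer, packetSize);
    mpu.dmpGetQuaternion(&q, fifoBuffer);
    mpu.dmpGetGravity(&gravity, &q);
    mpu.dmpGetYawPitchRoll(ypr, &q, &gravity);
    return ypr[0] * 180.0 / M_PI;
}

void setup()
{
    Wire.begin();
    Serial.begin(115200);
    mpu.initialize();
    if (mpu.dmpInitialize() == 0)                       // 0 = success
    {
        mpu.setDMPEnabled(true);
        attachInterrupt(digitalPinToInterrupt(MPU6050_INT_PIN), dmpDataReady, RISING);
        packetSize = mpu.dmpGetFIFOPacketSize();
        dmpReady = true;
    }
}

void loop()
{
    // Block 1: bail out if the MPU6050/DMP didn't initialize correctly
    if (!dmpReady) return;

    // Block 2: whenever the ISR has fired, service the DMP FIFO
    if (mpuInterrupt)                                   // fifoCount comparison removed (see notes below)
    {
        mpuInterrupt = false;
        uint8_t  mpuIntStatus = mpu.getIntStatus();
        uint16_t fifoCount = mpu.getFIFOCount();
        if ((mpuIntStatus & 0x10) || fifoCount == 1024) mpu.resetFIFO();    // overflow - start over
        else if ((mpuIntStatus & 0x02) && fifoCount >= packetSize) GlobalYawDeg = GetIMUHeadingDeg();
    }

    // Block 3: the 'everything else' block, run once every IMU_CHECK_INTERVAL_MSEC
    if (millis() - lastAdjustMsec >= IMU_CHECK_INTERVAL_MSEC)
    {
        lastAdjustMsec = millis();
        // in the robot programs, motor speed adjustments using GlobalYawDeg go here
    }
}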

Notes about the above program:

  • I used the SBWIRE library vs the normal Arduino WIRE library to avoid the well-known and well-documented infinite blocking problems in the WIRE code.  This was accomplished by editing the I2C interface implementation section in I2Cdev.h (the same edit shown in the snippet near the top of this post)

  • I lowered the MPU6050 interrupt rate to 20Hz (I don’t need anything faster for my wall-following robot) by modifying MPU6050_6Axis_MotionApps20.h (again via the FIFO rate divisor change shown earlier)
  • The loop() function has just three blocks
    • if (!dmpReady) return; this bypasses everything else if the MPU6050 didn’t init correctly

    • All this section does is call GetIMUHeadingDeg() whenever an interrupt has been processed in the ISR

    • This section is the ‘everything else’ block. In my robot programs, this section runs the robot, using the yaw value output from the MPU6050 as appropriate.
  • I discovered that the local variable ‘fifoCount’ can become desynchronized from the actual FIFO count resulting in a situation where the line:

if (mpuInterrupt && fifoCount < packetSize)

in the loop() function fails with fifoCount == packetSize.  The fix for this was to remove the fifoCount comparison from the if() statement, making it just ‘if (mpuInterrupt)’.  This means the if() block will execute every time the interrupt occurs, whether or not there is data in the FIFO.

With the above modifications, the program has run for many hours with no problems, so I’m convinced I have most, if not all, the problems licked.  I’m still using the interrupt-driven version rather than the polling version I would prefer, but that’s a small price to pay for the demonstrated stability of the interrupt-driven version.

Future Work:

Next I plan to try the new MotionDriver 6.12 version of the MPU6050 DMP firmware, which is reputed to be faster, better, and more stable than the present 2.0 version.

04 October Update.

As it happens, the only thing required to change from MotionApps V2 to MotionApps V6.12 was to change #include “MPU6050_6Axis_MotionApps20.h” to #include “MPU6050_6Axis_MotionApps_V6_12.h” in my little test program.  This compiled and ran fine, and the only difference I could see is that V6.12 has a fixed interrupt rate of about 200Hz, whereas V2.0 could be adjusted down to about 20Hz.  According to some Invensense documentation, the newer version has better/faster calibration capabilities and (maybe?) lower drift rates.

Stay Tuned

 

Frank


Back to the future with Wall-E2. Wall-following Part VI

Posted 13 August 2019

In my last post on this subject, I discussed the idea of using orientation information to compensate raw wall offset distance values to account for the errors associated with robot orientation.  The idea was that if I could do that, then Wall-E2 would know how far he was away from the wall regardless of orientation, and would be able to make appropriate corrections to get to and stay at a predetermined offset from the wall.

Well, it didn’t really work out that way.  After getting through the geometry analysis and the math, it turned out that in order to use the compensation algorithm, I have to know the initial robot orientation with respect to the wall, and I don’t :-(.  Without knowing this, it is basically impossible to apply the correct compensation.  For example, if the robot is originally oriented 30º away from the wall, then a ‘toward-wall’ rotation will cause the measured distance to go down, and an upward compensation is required.  However, if the robot is initially oriented toward the wall, then that same ‘toward-wall’ rotation will cause the measured distance to go up and a downward compensation is required – bummer!

However, all is not lost;  the ability to perform relatively precise angular rotations means that I can use incremental rotations for acquiring and then tracking a predetermined offset distance.  In the acquisition phase, the robot orientation is changed in 10º increments in the appropriate direction, and an N-point slope calculation is performed to determine whether or not the current ‘cut angle’ will allow the robot to eventually reach the predetermined offset distance.   As the robot approaches the offset line, the cut angle is reduced until it is zero, in theory resulting in the robot travelling parallel to the wall at the offset distance.  At this point the robot transitions from ‘capture’ to ‘track’ mode, and the response to distance deviations becomes more robust.
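(The actual code isn’t shown here, but the capture-phase logic boils down to something like the following sketch.  All of the names and thresholds are illustrative placeholders, not the real implementation.)

// Capture-phase sketch: adjust the 'cut angle' based on the trend of recent wall distances.
const float OFFSET_TARGET_CM   = 30.0;   // predetermined wall offset
const int   CUT_ANGLE_STEP_DEG = 10;     // incremental rotation size

float DistSlope(const float* dist, int n)          // simple N-point slope, cm per sample
{
    return (dist[n - 1] - dist[0]) / (n - 1);
}

int UpdateCutAngle(float currentDistCm, float slope, int cutAngleDeg)
{
    float err = currentDistCm - OFFSET_TARGET_CM;  // + means still outside the target offset
    bool closing = (err > 0 && slope < 0) || (err < 0 && slope > 0);
    if (!closing)
        cutAngleDeg += CUT_ANGLE_STEP_DEG;         // steepen the cut toward the offset line
    else if (fabs(err) < 5.0)
        cutAngleDeg = max(0, cutAngleDeg - CUT_ANGLE_STEP_DEG);  // flatten out as the offset line nears
    return cutAngleDeg;                            // 0 deg = parallel; hand off to 'track' mode
}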

This strategy was implemented using my 2-motor robot, and seems to work well once the normal crop of bugs was eradicated.  The following Excel plots show the results of two short runs where the robot first captured and then tracked a 30cm offset setting.

Capture and track a 30cm wall offset starting from the outside

Capture and track a 30cm wall offset starting from the inside

So far I have only implemented this completely for the right side, but as the left side is identical, I anticipate no problems in this regard.

Future Work:

So far I have demonstrated the ability to capture and then track a predetermined wall offset distance, starting from either inside or outside the desired offset distance. This represents a quantum leap in performance, as Wall-E2 currently can only track whatever distance it first measures – it has no capability to capture a desired offset distance.  However, there are still some ‘edge’ cases that need to be dealt with one way or the other.  For instance, if the robot orientation is too far away from parallel, the current algorithm won’t be able to rotate it enough to capture the desired offset or the measured distance will exceed the max range gate of the ping sensors (currently set at 200cm).  These conditions may not be all that deleterious, as eventually Wall-E2 will get close enough to something to trigger an avoidance response, thereby resetting the entire orientation picture (hopefully to something a little more parallel).

In addition to the wall tracking problem, the new capability to make reasonably precise angular rotations should significantly improve Wall-E2’s performance in handling ‘open-corner’ and ‘closed-corner’ situations; currently these cases are handled with timed turns, which are only correct for one floor covering type (hard vs soft) and battery state.  With the heading measurement capability, a 90º corner turn will always be (approximately) 90º whether it is on carpet or hard flooring.  In addition, now I can program in obstacle avoidance step-turns for approaching obstacles instead of relying entirely on the ‘backup-and-turn’ approach.

Stay tuned!

Frank

 

 

MPU6050 IMU Motor Noise Troubleshooting

Posted 24 July 2019

For a while now I’ve been investigating ways of improving the wall following performance of my autonomous wall-following robot Wall-E2.  At the heart of the plan is the use of a MPU6050 IMU to sense relative angle changes of the robot so that changes in the distance to the nearest wall due only to the angle change itself can be compensated out, leaving only the actual offset distance to be used for tracking.

As the test vehicle for this project, I am using my old 2-motor robot, fitted with new Pololu 125:1 metal-geared DC motors and Adafruit DRV8871 motor drivers, as shown in the photo below.

2-motor test vehicle on left, Wall-E2 on right

The DFRobots MPU6050 IMU module is mounted on the green perfboard assembly near the right wheel of the 2-motor test robot, along with an Adafruit INA169 high-side current sensor and an HC-05 Bluetooth module used for remote programming and telemetry.

This worked great at first, but then I started experiencing anomalous behavior where the robot would lose track of the relative heading and start turning in circles.  After some additional testing, I determined that this problem only occurred when the motors were running.  It would work fine as long as the motors weren’t running, but since the robot had to move to do its job, not having the ability to run the motors was a real ‘buzz-kill’.  I ran some experiments on the bench to demonstrate the problem, as shown in the Excel plots below:

Troubleshooting:

There were a number of possibilities for the observed behavior:

  1. The extra computing load required to run the motors was causing heading sensor readings to get missed (not likely, but…)
  2. Motor noise of some sort was feeding back into the power & ground lines
  3. RFI created by the motors was getting into the MPU6050 interrupt line to the Arduino Mega and causing interrupt processing to overwhelm the Mega
  4. RFI created by the motors was interfering with I2C communications between the Mega and the MPU6050
  5. Something else

Extra Computing Load:

This one was pretty easy to eliminate.  The main loop does nothing most of the time, and only updates system parameters every 200 mSec.  If the extra computing load was the problem, I would expect to see no ‘dead time’ between adjacent adjustment function blocks.  I had some debug printing code in the program that displayed the result of the ‘millis()’ function at various points in the program, and it was clear that there was still plenty of ‘dead time’ between each 200 mSec adjustment interval.

Motor noise feeding back into power/ground:

I poked around on the power lines with my O’scope with the motors running and not running, but didn’t find anything spectacular; there was definitely some noise, but IMHO not enough to cause the problems I was seeing.  So, in an effort to completely eliminate this possibility, I removed the perfboard sub-module from the robot entirely, and connected it to a separate Mega microcontroller. Since this setup used completely different power circuits (the onboard battery for the robot, PC USB cable for the second Mega), power line feedback could not possibly be a factor.  With this setup I was able to demonstrate that the MPU6050 output was accurate and reasonable until I placed the perfboard sub-module in close proximity to the robot; then it started acting up just as it did when mounted on the robot.

So it was clear that the interference was radiated RFI, not noise conducted through any wiring.

RFI created by the motors was getting into the MPU6050 interrupt line to the Arduino Mega and causing interrupt processing to overwhelm the Mega

This one seemed very possible.  The MPU6050 generates interrupts at a 20Hz rate, but I only use measurements at a 5Hz (200mSec) rate.  Each interrupt causes the Interrupt Service Routine (ISR) to fire, but the actual heading measurement only occurs every 200 mSec. I reasoned that if motor-generated RFI was causing the issue, I should see many more activations of the ISR than could be explained by the 20Hz MPU6050 interrupt generation rate.  To test this theory, I placed code in the ISR that pulsed a digital output pin, and then monitored this pin with my O’scope.  When I did this, I saw many extra ISR activations, and was convinced I had found the problem.  In the following short video clip, the top trace is the normal interrupt line pulse frequency, and the bottom trace is the ISR-generated pulse train.  In normal operation, these two traces would be identical, but as can be seen, many extra ISR activations are occurring when the motors are running.

So now I had to figure out what to do with this information.  After Googling around for a while, I ran across some posts that described using the MPU6050/DMP setup without using the interrupt output line from the module; instead, the MPU6050 was polled whenever a new reading was required.  As long as this polling takes place at a rate greater than the normal DMP measurement frequency, the DMP’s internal FIFO shouldn’t overflow.  If the polling rate is less than the normal rate, then FIFO management is required.  After thinking about this for a while, I realized I could easily poll the MPU/DMP at a higher rate than the configured 20Hz rate by simply polling it each time through the main loop – not waiting for the 200mSec/5Hz motor speed adjustment interval.  I would simply poll the MPU/DMP as fast as possible, and whenever new data was ready I would pull it off the FIFO and put it into a global variable.  The next time the motor adjustment function ran, it would use the latest relative heading value and everyone would be happy.

So, I implemented this change and tested it off the robot, and everything worked OK, as shown in the following Excel plot.

And then I put it on the robot and ran the motors…

Crap!  I was back to the same problem!  So, although I had found evidence that the motor RFI was causing additional ISR activations, that clearly wasn’t the entire problem, as the polling method completely eliminates the ISR.

RFI created by the motors was interfering with I2C communications between the Mega and the MPU6050

I knew that the I2C control channel could experience corruption due to noise, especially with ‘weak’ pullup resistor values and long wire runs.  However, I was using short (15cm) runs and 2.2K pullups on the MPU6050 end of the run, so I didn’t think that was an issue.  However, since I now knew that the problem wasn’t related to wiring issues or ISR overload, this was the next item on the list.  So, I shortened the I2C runs from 15cm to about 3cm, and found that this did indeed suppress (but not eliminate) the interference.  However, even with this modification and with the MPU6050 module located as far away from the motors as possible, the interference was still present.

Something else

So, now I was down to the ‘something else’ item on my list, having run out of ideas for suppressing the interference.  After letting this sit for a few days, I realized that I didn’t have this problem (or at least didn’t notice it) on my 4-motor Wall-E2 robot, so I started wondering about the differences between the two robot configurations.

  1. Wall-E2 uses plastic-geared 120:1 ‘red cap’ motors, while the 2-motor robot uses Pololu 125:1 metal-geared motors
  2. Wall-E2 uses L298N linear drivers while the 2-motor version uses the Adafruit DRV8871 switching drivers.

So, I decided to see if I could isolate these two factors and see if it was the motors, or the drivers (or both/neither?) responsible for the interference. To do this, I used my new DPS5005 power supply to generate a 6V DC source, and connected the power supply directly to the motors, bypassing the drivers entirely.  When I did this, all the interference went away!  The motors aren’t causing the interference – it’s the drivers!

In the first plot above, I used a short (3cm) I2C wire pair and the module was located near, but not on, the robot. As can be seen, no interference occurred when the motors were run.  In the second plot I used a long (15cm) I2C wire pair and mounted the module directly on the robot in its original position.  Again, no interference when the motors were run.

So, at this point it was pretty definite that the main culprit in the MPU6050 interference issue is the Adafruit DRV8871 switch-mode driver.  Switch-mode drivers are much more efficient than L298N linear-mode drivers, but the cost is high switching transients and debilitating interference to any I2C peripherals.

As an experiment, I tried reducing the cable length from the drivers to the motors, reasoning that the cables must be acting like antennae, and reducing their length should reduce the strength of the RFI.  I re-positioned the drivers from the top surface of the robot to the bottom right next to the motors, thereby reducing the drive cable length from about 15cm to about 3cm (a 5:1 reduction).  Unfortunately, this did not significantly reduce the interference.

So, at this point I’m running out of ideas for eliminating the MPU6050 interference due to switch-mode driver use.

  • I read at least one post where the poster had eliminated motor interference by eliminating the I2C wiring entirely – he used a MPU6050 ‘shield’ where the I2C pins on the MPU6050 were connected directly to the I2C pins on the Arduino microcontroller.  The poster didn’t mention what type of motor driver (L298N linear-mode style or DRV8871 switch-mode style), but apparently a (near) zero I2C cable length worked for him.  Unfortunately this solution won’t work for me as Wall-E2 uses three different I2C-based sensors, all located well away from the microcontroller.
  • It’s also possible that the motors and drivers could be isolated from the rest of the robot by placing them in some sort of metal box that would shield the rest of the robot from the switching transients caused by the drivers.  That seems a bit impractical, as it would require metal fabrication capabilities I don’t have.  OTOH, I might be able to print a plastic enclosure, and then cover it with metal foil of some sort.  If I go this route, I might want to consider the use of optical isolators on the motor control lines, in order to break any conduction path back to the microcontroller, and capacitive feed-throughs for the power lines.

27 July 19 Update:

I received a new batch of GY-521 MPU6050 breakout boards, so I decided to try a few more experiments.  With one of the GY-521 modules, I soldered the SCL/SDA header pins to the ‘bottom’ (non-label side) and the PWR/GND pins to the ‘top’.  With this setup I was able to plug the module directly into the Mega’s SCL/SDA pins, thereby reducing the I2C cable length to zero.  The idea was that if the I2C cable length was contributing significantly to RFI susceptibility, then a zero length cable should reduce this to the minimum  possible, as shown below:

MPU6050 directly on Mega pins, normal length power wiring

In the photo above, the Mega with the MPU6050 connected is sitting atop the Mega that is running the motors. The GND and +5V leads are normal 15cm jumper wires.  As shown in the plots below, this configuration did reduce the RFI susceptibility some, but not enough to allow normal operation when lying atop the robot’s Mega.

GY-521 MPU6050 module mounted directly onto Mega, normal length power leads

I was at least a little encouraged by this plot, as it showed that the MPU6050 (and/or the Mega) was recovering from the RFI ‘flooding’ more readily than before.  In previous experiments, once the MPU6050/Mega lost sync, it never recovered.

Next I tried looping the power wiring around an ‘RF choke’ magnetic core to see if raising the effective impedance of the power wiring to high-frequency transients had any effect, as shown in the following photo.

GND & +5V leads looped through an RF Choke.

Unfortunately, as far as I could tell this had very little positive effect on RFI susceptibility.

Next I tried shortening the GND & +5V leads as much as possible.  After looking at the Mega pinout diagram, I realized there was GND & +5V very close to the SCL/SDA pins, so I fabricated the shortest possible twisted-pair cable and installed it, as shown in the following photo.

MPU6050 directly on Mega pins, shortest possible length power wiring

With this configuration, I was actually able to get consistent readings from the MPU6050, whether or not the motors were running – yay!!

In the plot above, the vertical scale is only from -17 deg to -17.8 deg, so all the variation is due to the MPU6050, and there are no apparent deleterious effects due to motor RFI – yay!

So, at this point it’s pretty clear that a significant culprit in the MPU6050’s RFI susceptibility is the GND/+5V and I2C cabling acting as antennae and coupling the RFI into the MPU6050 module.  Reducing the effective length of these antennas reduced the amount of RFI reaching the module.

With the above in mind, I also tried adding a 0.01uF ‘chip’ capacitor directly at the power input leads, thinking this might be just as effective (if not more so) than shortening the power cabling.  Unfortunately, this experiment was inconclusive. The normal length power cabling with the capacitor seemed to cause just as much trouble as the setup without the cap, as shown in the following plot.

Having determined that the best configuration so far was the zero-length I2C cable and the shortest possible GND/+5V cable, I decided to try moving the MPU6050 module from the separate test Mega to the robot’s Mega. This required moving the motor drive lines to different pins, but this was easily accomplished.  Unfortunately, when I got everything together, it was apparent that the steps taken so far were not yet effective enough to prevent RFI problems due to the switch-mode motor drivers.

The good news, such as it is, is that the MPU6050/Mega seems to recover fairly quickly after each ‘bad data’ excursion, so maybe we are most of the way there!

As a next step, I plan to replace the current DRV8871 switch-mode motor drivers with a single L298N dual-motor linear driver, to test my theory that the RFI problem is mostly due to the high-frequency transients generated by the drivers and not the motors themselves.  If my theory holds water, replacing the drivers should eliminate (or at least significantly suppress) the RFI problems.

28 July 2019 Update:

So today I got the L298N driver version of the robot running, and I was happy (but not too surprised) to see that the MPU6050 can operate properly with the motors ON or OFF when mounted on the robot’s Mega controller, as shown in the following photo and Excel plots.

2-motor robot with L298N motor driver installed.

However, there does still seem to be one ‘fly in the ointment’ left to consider.  When I re-installed the wireless link to allow me to reprogram the 2-motor robot remotely and to receive wireless telemetry, I found that the MPU6050 exhibited an abnormally high yaw drift rate unless I allowed it to stabilize for about 10 sec after applying power and before the motors started running, as shown in the following plots.

2-motor robot with HC-05 wireless link re-installed.

I have no idea what is causing this behavior.

31 July 2019 Update

So, I found a couple of posts that refer to some sort of auto-calibration process that takes on the order of 10 seconds or so, and that sounds like what is happening with my project.  I constructed the following routine that waits for the IMU yaw output values to settle:
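(The routine itself isn’t reproduced here; the sketch below captures the basic idea, with illustrative names and thresholds.  It assumes some function, here called GetIMUHeadingDeg(), that returns the current yaw value in degrees.)

// Wait for the MPU6050 yaw output to settle after power-up.  Returns when the yaw value
// has stopped changing by more than a small threshold for ~1 second.
void WaitForIMUToSettle()
{
    const float SETTLE_THRESHOLD_DEG = 0.1;            // illustrative threshold
    const unsigned long SAMPLE_INTERVAL_MSEC = 100;
    float lastYaw = GetIMUHeadingDeg();
    int stableCount = 0;
    while (stableCount < 10)                           // 10 consecutive quiet samples
    {
        delay(SAMPLE_INTERVAL_MSEC);
        float yaw = GetIMUHeadingDeg();
        if (fabs(yaw - lastYaw) < SETTLE_THRESHOLD_DEG) stableCount++;
        else stableCount = 0;
        lastYaw = yaw;
    }
}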

This was very effective in determining when the MPU6050 output had settled, but it turned out to be unneeded for my application.  I’m using the IMU output for relative yaw values only, and over a very short time frame (5-10 sec), so even high yaw drift rates aren’t deleterious.  In addition, this condition only lasts for a 10-15 sec from startup, so not a big deal in any case.

At this point, the MPU6050 IMU on my little two-motor robot seems to be stable and robust, with the following adjustments (in no particular order of significance)

  • Changed out the motor drivers from 2ea switched-mode DRV8871 motor drivers to a single dual-channel L298N linear mode motor driver.  This is probably the most significant change, without which none of the other changes would have been effective.  This is a shame, as the voltage drop across the L298N is significantly higher than with the switch-mode types.
  • Shortened the I2C cable to zero length by plugging the GY-521 breakout board directly into the I2C pins on the Mega.  This isn’t an issue on my 2-motor test bed, but will be on the bigger 4-motor robot
  • Shortened the IMU power cable from 12-15cm to about 3cm, and installed a 10V 1uF capacitor right at the PWR & GND pins on the IMU breakout board.  Again, this was practical on my test robot, but might not be on my 4-motor robot.
  • Changed from an interrupt driven architecture to a polling architecture.  This allowed me to remove the wire from the module to the Mega’s interrupt pin, thereby eliminating that possible RF path.  In addition, I revised the code to be much stricter about using only valid packets from the IMU.  Now the code first clears the FIFO, and then waits for a data ready signal from the IMU (available every 50 mSec at the rate I have it configured for).  Once this signal is received, the code immediately reads a packet from the FIFO if and only if it contains exactly one packet (42 bytes in this configuration).  The code shown below is the function that does all of this.
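The function itself isn’t reproduced here, but based on the description above it does something like the following (a sketch using the stock I2CDevLib calls; the function and global names are mine, and a real version would add a timeout to the wait loop):

// Poll the MPU6050 for a fresh heading value.  The FIFO is cleared first, then the code waits
// for the data-ready indication and reads a packet only if the FIFO holds exactly one
// complete 42-byte packet.
float GetIMUHeadingDeg()
{
    Quaternion q; VectorFloat gravity; float ypr[3];

    mpu.resetFIFO();                                   // start from a known-empty FIFO

    // wait for the next data-ready indication (every 50 mSec at the configured rate)
    while ((mpu.getIntStatus() & 0x02) == 0) { }

    if (mpu.getFIFOCount() == packetSize)              // exactly one 42-byte packet?
    {
        mpu.getFIFOBytes(fifoBuffer, packetSize);
        mpu.dmpGetQuaternion(&q, fifoBuffer);
        mpu.dmpGetGravity(&gravity, &q);
        mpu.dmpGetYawPitchRoll(ypr, &q, &gravity);
        GlobalYawDeg = ypr[0] * 180.0 / M_PI;          // update the global heading value
    }
    return GlobalYawDeg;                               // otherwise return the last good value
}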

Here’s a short video of the robot making some planned turns using the MPU6050 for turn management.  In the video, the robot executes the following set of maneuvers:

  1. Straight for 2 seconds
  2. CW for 20 deg, starting an offset maneuver to the right
  3. CCW for 20 deg, finishing the maneuver
  4. CCW for 20 deg, starting an offset maneuver to the left
  5. CW for 20 deg, finishing the maneuver
  6. 180 deg turn CW
  7. Straight for 3 sec
  8. 20 deg turn CCW, finishing at the original start point

So, I think it’s pretty safe to say at this point that although both the DFRobots and GY-521 MPU6050 modules have some serious RFI/EMI problems, they can be made to be reasonably robust and reliable, at least with the L298N linear mode motor drivers.  Maybe now that I have killed off this particular ‘alligator’, I can go back to ‘draining the swamp’ – i.e. using relative heading information to make better decisions during wall-following operations.

Stay tuned!

Frank

 

Arduino Remote Programming Using A HC-05 Bluetooth Module

Posted 10 June 2019

As part of my recent Wall-E2 Motor Controller Study, I reincarnated my old 2-motor robot as a test platform for Pololu’s ’20D’ metal gear motors.  When I got the robot put together and started testing the motors, I realized I needed a way to remotely program the Arduino controller and remotely receive telemetry, just as I currently do with my 4-wheel Wall-E2 robot.

On my Wall-E2 robot, remote programming/telemetry is accomplished using the very nice Pololu Wixel Shield.  However, I have been playing around with the cheap and small HC-05 Bluetooth module,  and decided to see if there was maybe a way to use this module as a replacement for the Wixel.

As I usually do, I started with LOTS of web research.  I found some posts claiming to have succeeded in remotely programming an Arduino using a HC-05 module, but the information was sketchy and incomplete, so I decided I would try and pull all the various sources together into a (hopefully) more complete tutorial for folks like me who want to use a HC-05 module for this purpose.

Overall Approach:

In order to remotely program an Arduino using a HC-05, the following basic parts are required:

  • A wireless link (obviously) between the PC and the HC-05.
  • A serial link between the PC and the Arduino and between the Arduino and the HC-05. This part is also well established, and the Arduino-to-HC-05 link can be done with either a hardware port (as with the Mega 2560) or a SoftwareSerial port using the SoftwareSerial library.  My tutorial uses the Mega 2560, so I use Tx/Rx1 (pins 18/19) for the Arduino-to-HC-05 link
  • A way of resetting the Arduino to put it back into programming mode, so the new firmware can be uploaded.
  • A serial connection between the HC-05 and Tx/Rx0 on the microcontroller – more about this later.

The Wireless Link

The HC-05 is a generic Bluetooth device, and as such is compatible with just about everybody’s Bluetooth setup – phones and PC’s.  I plan to use this with my Dell XPS15 9570 laptop, and I can pair with the HC-05 no problem.  Here’s a link to a tutorial on pairing with the HC-05, and here’s another.  As another poster mentioned, the pairing mechanism creates multiple ‘outgoing’ and ‘incoming’ COM ports, and it’s hard for me to figure out which to use.  In this last iteration, I found that I could remove the two ‘incoming’ COM ports and use just the ‘outgoing’ one. Don’t know if that is the right thing, but….

A serial link between the PC, the Arduino and the HC-05

This part is discussed and demoed in many tutorials, but the piece that is almost always missing is why you need to have this link in the first place. The reason is that several AT commands must be used in order to configure the HC-05 correctly for wireless Arduino program upload, and (as I understand it anyway), AT commands can only be communicated to the HC-05 via its hardware serial lines, and only when the HC-05 is in ‘Command’ or ‘AT’ mode.  The configuration step is a one-time deal; once the HC-05 is configured, it does not need to be done again unless the application requirements change.

A way of resetting the Arduino to accept firmware uploads

This is the tricky part.  As ‘gabinix’ said in this post:

Hi Paul… To be honest I couldn’t find any tutorials to explain how to program/upload sketches with the HC-05. In fact, the conclusion you came up with is in-line with all the information out there. But it’s actually an extremely simple solution.

The only thing that keeps the HC-05 from uploading a program to arduino is that it doesn’t have a DTR (Data Terminal Ready) pin which tells the arduino to reset and accept a new sketch.

The solution is to re-purpose the “state” pin (PI09)  on the breakout board. It’s purpose is to attach to an LED and indicate the connection status. It’s default setting is to send the pin HIGH when a connection is made, but you can simply enter into command mode of the HC-05 and use an AT COMMAND to tell it to send the pin LOW when a connection is made.

Voila! In about 1 minute of time you have successfully re-purposed the LED pin to a DTR pin which will reset your arduino to accept a new sketch when you hit the upload button.

A couple things to note… This will work for a pro-mini without additional hardware by connecting to the DTR pin. If you’re using an UNO or similar, you will need a capacitor in between our custom “state” pin and the reset pin on the uno. The reason is that the HC-05 will drive our custom pin LOW for the entire connection which would essentially be the same as holding the reset button the entire time. Having the cap in between solves that problem.

It a quick easy fix, takes about a minute to do. It’s just a lot harder to explain the steps to do it in a couple sentences.

Here’s a link to the AT COMMAND set —> http://robopoly.epfl.ch/files/content/sites/robopoly/files/Tutoriels/bluetooth/hc-05-at_command_set.pdf

and here’s a link to a tutorial, video, and sketch on how to enter the AT COMMANDS. —> http://www.techbitar.com/modify-the-hc-05-bluetooth-module-defaults-using-at-commands.html  <<< no longer available 🙁

So, the trick is to re-purpose the STATE output (PI09, AKA Pin 32, AKA LED2, see this link) via the AT+POLAR=x,0 command to go LOW when the connection to upload the program is first started.  This signal is then connected to the Arduino’s RESET pin via the capacitor noted above (to make this signal momentary).  The ‘Instructables’ tutorial on this subject at this link actually gets most of this right, except it doesn’t explain why the AT commands are being entered or what they do – so I found it a bit mysterious.  In addition, it recommends soldering a wire directly to pin 32 rather than re-purposing the STATE output pin (re-purposing the STATE pin allows a no-solder setup). Eventually I ran across this link which contains a very good explanation of the AT commands used by the HC-05.  The required AT commands are:
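The original screenshot of the command list isn’t reproduced here, but based on the configuration described here and in the Testing section below, the sequence is something along these lines (each command should get an ‘OK’ response):

AT                      (sanity check; the module should respond ‘OK’)
AT+VERSION?             (reports the firmware version)
AT+UART=115200,0,0      (sets the serial baud rate to 115200, 1 stop bit, no parity)
AT+POLAR=1,0            (makes the STATE/PI09 output go LOW when a connection is made)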

My module is the variety with a small pushbutton already installed on the ‘EN’ pin, so entering ‘Command’ mode is accomplished by holding the pushbutton depressed while cycling the power, and then releasing the button once power has been applied.

When this is done, the LED will change from fast-blink to a very slow (like 2 sec ON, 2 sec OFF) blink mode, as shown in the following short video:

This indicates the HC-05 is in ‘Command’ mode and will accept AT commands.  If you have the style without the pushbutton, you’ll have to figure out a way to short across the pads where the pushbutton should be, while cycling the power.

The screenshot below shows the result of executing these commands using the wired USB connection to the Arduino and the hard-wired serial connection between the Mega’s Tx1/Rx1 port and the HC-05 running in ‘Command’ mode.

HC-05 configuration using the wired serial port connection to the HC-05

NOTE:  The various posts and tutorials on the HC-05 describe separate AT ‘mini’ and ‘full’ command modes; the ‘mini’ mode only recognizes a small subset of all AT commands, while ‘full’ recognizes them all.  ‘Mini’ mode is entered by momentarily applying VCC to pin 34, and ‘full’ mode is entered by holding pin 34 at VCC for the entire session.  One poster described this as a flaw in the HC-05 version 2 firmware which might be corrected in later versions.  It appears this may have been the case, as the HC-05 module I used responded with VERSION:3.0-20170601 and recognized all the commands I gave it (not a comprehensive test, but enough to make me think this problem has gone away).

Wiring Layout for HC-05 Configuration via AT commands

I decided that this post was my chance to learn how to make ‘pictoral’ wiring diagrams using the Fritzing app.  I had seen other posts with this kind of layout, and initially thought it was kinda childish.  However, when I started working with Fritzing (in English, ‘Fritzing’ sounds like an adverb, not a proper noun – so a bit strange to my ears…), I realized it has a LOT of power, so now I’m a convert ;-).

HC-05 wired for initial configuration using AT commands

In the diagram above, I’m using the Rx1/Tx1 (pins 19/18) hardware serial port available on the Mega.  If you are using a Uno, you’ll need to use SoftwareSerial to configure a second port for connection to the HC-05.  A 2.2K/1.0K voltage divider is used to drop Arduino Tx output voltages to HC-05 Rx input levels, but no conversion is required in the other direction. The HC-05 can be powered directly from Arduino +5V, as the HC-05 has an onboard regulator.

Initial AT Configuration Arduino Sketch
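The original listing isn’t reproduced here; a minimal reconstruction for the Mega (Tx1/Rx1 wired to the HC-05, with the HC-05 in its 38400-baud ‘Command’ mode) looks like this (the PC-side baud rate is an assumption; match it to your Serial Monitor setting):

// Simple serial pass-through for HC-05 AT-command configuration on a Mega 2560.
// Serial = USB link to the PC (Serial Monitor); Serial1 = Tx1/Rx1 (pins 18/19) to the HC-05.
// Set the Serial Monitor to send CR+LF line endings so the AT commands terminate properly.
void setup()
{
    Serial.begin(115200);    // PC-side baud rate (assumed)
    Serial1.begin(38400);    // HC-05 'Command' mode default baud rate
}

void loop()
{
    if (Serial.available())  Serial1.write(Serial.read());    // PC keystrokes -> HC-05
    if (Serial1.available()) Serial.write(Serial1.read());    // HC-05 responses -> PC
}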

All the code above does is transfer keystrokes from the PC to the HC-05 (and vice versa) through the Arduino. This is all that is required to configure the HC-05 using AT commands.

Serial Connection between the HC-05 and Tx/Rx0 for Program Uploads

Most Arduino microcontrollers are shipped with a small program called a ‘bootloader’ already installed.  This small program is only active for a few seconds after a board reset occurs, and its job is to detect when a new program is being uploaded.  If the bootloader sees activity on whatever serial port it is watching, it writes the incoming data into program memory and then transfers control to the user program.  The stock Arduino bootloader only monitors Tx/Rx0 for this; activity on other ports (specifically Rx1 in my case) will be ignored and program uploads will fail.  After the HC-05 has been initially configured via AT commands over the PC-to-Arduino-to-HC-05 serial links, the connection from the HC-05 to the Arduino must be changed so that PC-to-HC-05 data transferred over the Bluetooth link arrives at the Arduino’s Rx0 port so the stock bootloader will see it and write it to the Arduino’s program memory.  This minor point wasn’t at all clear (at least not to me) in the various tutorials, so I wasted a LOT of time trying to figure out why I couldn’t get the last part of the puzzle to fit – ugh!

Shown below is my Fritzing diagram for the final configuration of my test setup, showing the Tx/Rx lines changed from Tx/Rx1 (pins 18/19) to Tx/Rx0 (pins 1/0). The HC-05 STATE output is connected to Arduino reset via a 0.22uF capacitor, with resistors to form a simple one-shot circuit.  The STATE line goes LOW (after reconfiguration via the AT+POLAR=1,0 command) which causes a momentary LOW on the Arduino reset line.  This is the magic required to upload programs to the Arduino wirelessly. When the Bluetooth connection is terminated, the STATE line goes HIGH again and the Arduino end of the now-charged capacitor jumps to well above 5V. The diode shown on the diagram clamps this signal to within a volt or so above +5V to avoid damage to the Arduino Reset line when this happens.  This diode isn’t shown on any of the other tutorials I found, so it is possible the Arduino Reset line is clamped internally (good).  It’s also possible it isn’t protected, in which case not having this diode will eventually kill the Arduino (bad).

HC-05 wired for remote program upload. Note that the Tx & Rx lines have been moved from Tx/Rx1 to Tx/Rx0

Testing

The first thing I did after configuring the HC-05 (using the above AT commands) was to see if I could still connect to and communicate with it over Bluetooth from my laptop.  I used RealTerm, although any terminal program (including the Arduino IDE serial monitor) should do.  The very first thing that happened is I had to re-pair the laptop with the HC-05, and the name given by the HC-05 was markedly different, as shown in the captured pairing dialog.

Pairing dialog on my Dell XPS15 9570 laptop

The next thing was to see if I could get characters from my BT serial connection through to my Arduino serial port.  After fiddling around with the baud rates for a while, I realized that now I had to change the BT serial terminal baud rate from 9600 to 115200, and the Arduino-to-HC-05 baud rate from 38400 (the default ‘Command’ mode rate) to 115200.  Once I did this, I could transmit characters back and forth between RealTerm (connected to the HC-05 via Bluetooth) and my Visual Studio/Visual Micro setup (connected to Arduino via the wired USB cable) – yay!

For the next step in the testing, I needed to remove the hard-wired USB connection and power the Arduino from an external power source.  When I did this by first removing the USB connector (thereby removing power from the HC-05) and then plugging in external power, I noticed that the HC-05 was no longer connected to my laptop (the HC-05 status LED was showing the ‘fast blink’ status, and my connection indicator LED was OFF).  I checked in my BT settings panel, and the HC-05 (now announcing itself as ‘H-C-2010-06-01’) was still paired with my laptop, but just transmitting some characters from my RealTerm BT serial monitor did not re-establish the connection.  However, when I changed the port number away from and then back to the BT COM port, this did re-establish the connection; the HC-05 status LED changed to the 2-blinks-pause-2-blinks cycle, and my connection LED illuminated.

So, now I connected the output of my STATE line one-shot circuit to the Arduino reset line and changed my VS2017/VM programming port from the wired USB port to the BT port (interestingly it was still shown as ‘HC-05’ in Visual Studio/Visual Micro).  After some initial problems, I got the ‘Connected’ status light, but the upload failed with the error message “avrdude: stk500v2_getsync(): timeout communicating with programmer” and the communication status changed back to ‘not connected’.

At this point I realized I was missing something critical, and yelled (more like ‘pleaded’) for help on the Arduino forum.  On the forum I got a lot of detailed feedback from very knowledgeable users, most notably ‘dmjlambert’.  Unfortunately dmjlambert was ultimately unsuccessful in solving the problem, but he was able to validate that the steps I had taken so far were correct as far as they went, and ‘it should just work’.  To paraphrase the Edison approach to innovation, “we didn’t know what worked, but we eliminated most potential failure modes”.  See this forum post for the details.

After this conversation (over several days), I decided to put the problem down for a few days and do other things, hoping that a fresh look at things with a clear head might provide some insight.  A few days later when I came back to the project, I ran some tests suggested by dmjlambert to verify that the connection to the Arduino RESET pin via the 0.22uF capacitor did indeed reset the Arduino when the STATE line transitioned from HIGH to LOW.  To do this I created a modified ‘Blink’ program that blinked 10 times rapidly and then transitioned to a steady slow blink.  Using this program I could see that the Arduino did indeed reset each time a Bluetooth connection to the HC-05 was established.

So, the problem had to be elsewhere, and about this time I realized I was assuming (aka ‘making an ass out of you and me’) that the program upload data being received over the Bluetooth link was somehow magically making it to the bootloader program.  This had been nagging at me the whole time, but I ‘assumed’ (there’s that word again) that since this problem had never been mentioned in any of the tutorials or even in the responses to my forum posts, it must not be a problem – oops!

Anyway, to make a long story short, I moved the HC-05 – to – Arduino connection from Rx/Tx1 to Rx/Tx0 and program uploads started working immediately – YAY!!

I went back through the tutorials I had been following to see if I had missed this magic step, and didn’t find any references to moving the serial connection at all.  So, if you are doing this with a UNO, you’ll need to move the serial connection from whatever pins you were using (via SoftwareSerial) to Rx/Tx0 as the last step.  If you are using an Arduino Mega or other Arduino controller that supports additional hardware serial ports as I did, you’ll have to move the connection from Rx/Tx-whatever to Rx/Tx0 as the last step.

This tutorial was put together in the hope that I could maybe help others who are interested in using the HC-05 Bluetooth module for remote program uploads to an Arduino-compatible microcontroller, and maybe save them from some of the frustration I experienced.  Please feel free to comment on this post, especially if you see something that I got wrong or missed.

13 Aug 2019 Update:

Here’s a short video showcasing the ability to program an Arduino Mega 2560 wirelessly from my Windows 10 PC using the HC-05 Bluetooth module

At the start of the video, the HC-05 status light is blinking rapidly, signalling the ‘No Connection’ state.  Then, at about 2 seconds, the light changes to the slow double-blink ‘Connected’ state, the yellow LED on the Mega blinks OFF & then ON again, signalling that the Mega has been reset and is now awaiting program upload, followed immediately by rapid blinking as the new program is uploaded to the Mega’s program memory.  During the upload, the HC-05 status LED continues to show the slow double-blink ‘Connected’ status.  Then, at about 18 seconds, the program upload terminates and the HC-05 returns to the ‘No Connection’ state.

The small white part on the green perf-board is the 220 nF capacitor.  The other two modules on the perf-board are a MPU6050 IMU and a high-side current sensor.

Stay tuned!

Frank

 

25 October 2021 Update:

I came back to this post to refresh my memory when trying to initialize and use a new HC-05 module for my new Wall-E3 project, and failing badly. I finally got something to work, but only after screwing around a lot. I realized I didn’t have a good handle on what mode the HC-05 was in – even though the onboard LED changes behavior to indicate the mode. So, here is a short video showing the LED behavior for the ‘disconnected’ and ‘connected’ modes.

HC-05 LED indications for ‘connected’ and ‘disconnected’ modes

In the above video, the HC-05 starts out in the normal power-on ‘disconnected’ state (rapidly flashing LED). Then after a few seconds a BT connection is established, and the LED behavior changes to ‘connected’ (two short blinks and a long pause). Then after a few more seconds the connection is dropped and the LED behavior changes back to ‘disconnected’ (rapidly flashing)

Back to the future with Wall-E2. Wall-following Part II

Posted 09 February 2019

A long time ago in a galaxy far, far away, I set up a control algorithm for my autonomous wall-following robot Wall-E2.  After a lot of tuning, I wound up with basically a bang-bang system using a motor speed step function of about 50, where the full range of motor speeds is 0-255.  This works, but as you can see in the following chart & Excel diagram, it’s pretty clunky.  The algorithm is shown below, along with an Excel chart of motor speeds taken during a hallway run, and a video of the run.

for left wall tracking

for right wall tracking
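(Those listings aren’t reproduced here; the sketch below captures the general flavor of the left-wall case, with the right-wall case mirroring it.  SetLeftMotorSpeed()/SetRightMotorSpeed() are hypothetical helper names, and the logic is illustrative only.)

// Bang-bang left-wall tracking sketch (illustrative).  If the robot is drifting toward the
// wall, turn away from it; if drifting away, turn back toward it.  MOTOR_SPEED_ADJ is the
// ~50-count step out of the 0-255 motor speed range.
const int MOTOR_SPEED_ADJ = 50;

void AdjustForLeftWall(int leftDistCm, int prevLeftDistCm, int baseSpeed)
{
    if (leftDistCm < prevLeftDistCm)          // closing on the wall: turn away (to the right)
    {
        SetLeftMotorSpeed(baseSpeed + MOTOR_SPEED_ADJ);
        SetRightMotorSpeed(baseSpeed - MOTOR_SPEED_ADJ);
    }
    else if (leftDistCm > prevLeftDistCm)     // opening from the wall: turn back toward it
    {
        SetLeftMotorSpeed(baseSpeed - MOTOR_SPEED_ADJ);
        SetRightMotorSpeed(baseSpeed + MOTOR_SPEED_ADJ);
    }
    else                                      // holding steady: run straight
    {
        SetLeftMotorSpeed(baseSpeed);
        SetRightMotorSpeed(baseSpeed);
    }
}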

 

Run 1, using homebrew algorithm

Note the row of LEDs on the rear. They display (very roughly) the turn direction and rate.

Since the time I set this up, I started using a PID algorithm for the code that homes the robot in on its charging station using a modulated IR beam, and it seems to work pretty well with a PID value of (Kp,Ki,Kd) = (200,0,0).  I’d like to use the knowledge gained from the IR homing subsystem to make Wall-E2 a bit more sophisticated and smooth during wall-following operations (which, after all, will be what Wall-E2 is doing most of the time).

In past work, I have not bothered to set a fixed distance from the wall being followed; I was just happy that Wall-E2 was following the wall at all, much less at a precise distance. Besides, I really didn’t know if having a preferred distance was a good idea.  However, with the experience gained so far, I now believe a 20-30 cm offset would probably work very well in our home.

So, my plan is to re-purpose the PID object used for IR homing whenever it isn’t actually in the IR homing mode, but with the PID values appropriate for wall-following rather than beam-riding.

PID Parameters:

For the beam-riding application I used a setpoint of zero, meaning the algorithm adjusts the control value (motor speed adjustment value) to drive the input value (offset from IR beam center) to zero.  This works very nicely as can be seen in the videos.  However, for the wall-following application I am going to use a setpoint of about 20-30cm, so that the algorithm will (hopefully) drive the motors to achieve this offset.  The Kp, Ki, & Kd values will be determined by experimentation.
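A sketch of the planned setup is shown below.  It assumes the common Arduino PID_v1 library (the library actually used for IR homing isn’t specified here), placeholder gains, and hypothetical helper functions for the ping sensor and motors:

#include <PID_v1.h>

double WallDistCm     = 0;     // input: measured ping-sensor distance to the wall
double SpeedAdj       = 0;     // output: +/- motor speed differential
double OffsetTargetCm = 25;    // setpoint: desired wall offset

// Kp, Ki, Kd to be determined experimentally (placeholders here)
PID WallFollowPID(&WallDistCm, &SpeedAdj, &OffsetTargetCm, 2.0, 0.0, 0.0, DIRECT);

void SetupWallFollowPID()
{
    WallFollowPID.SetOutputLimits(-50, 50);   // keep adjustments within a reasonable range
    WallFollowPID.SetMode(AUTOMATIC);
}

void UpdateWallFollow(int baseSpeed)
{
    WallDistCm = GetLeftPingDistanceCm();     // hypothetical distance-measurement helper
    WallFollowPID.Compute();                  // updates SpeedAdj from the current offset error
    SetLeftMotorSpeed(baseSpeed - (int)SpeedAdj);    // hypothetical motor helpers; the sign
    SetRightMotorSpeed(baseSpeed + (int)SpeedAdj);   // convention here is illustrative only
}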

 

 

13 February 2019 Update:

I got the PID controller working with a target offset of 25cm and Kp,Ki,Kd = 2,0,0 and ran some tests in my hallway.  In the first test I started with Wall-E2 approximately 25cm away from the wall. As can be seen in the following video, this worked quite well, and I thought “I’m a genius!”  Then I ran another test with Wall-E2 starting about 50cm away from the wall, and as can be seen in the second video, Wall-E2 promptly dived nose-first right into the wall, and I thought “I’m an idiot!”

 

 

The problem, of course, is the PID algorithm correctly turns Wall-E2 toward the wall to reduce the offset to the target value, but in doing so it changes the orientation of the ping sensor with respect to the wall, and the measured distance goes up instead of down.  The PID response is to turn Wall-E2 more, making the problem even worse, ending with Wall-E2 colliding nose-first with the wall it’s supposed to be following – oops!

So, it appears that I’ll need some sort of two-stage approach to the constant-offset wall following problem, with an ‘approach’ stage and a ‘capture’ stage.  If the measured distance is outside a predefined capture window (say +/- 2cm or so), then either the PID algorithm needs to be disabled entirely in favor of a constant-angle approach, or the PID parameters need to change to something more like Kp,Ki,Kd = 0,1,1 or something.  More experimentation required.

Stay tuned,

Frank

 

 

State memory for Wall-E2 – writing telemetry packets to FRAM

Posted 28 September 2018

In previous posts, I have described my effort to give time, memory and relative heading super-powers to Wall-E2, my autonomous wall-following robot.   This post describes a helper class I created to allow Wall-E2 to periodically write its current operating state to FRAM memory, for later readout by his human master(s), and a small test program to verify proper operation of the helper class.

My current conception of Wall-E2’s operational state consists of the current time/date, its tracking mode and submode, and the current left, right, and forward distances, and the current battery voltage. These parameters have been encapsulated in a CFramStatePacket class with methods for writing state packets to FRAM and reading them back out again.   The complete code for this class is shown below. Note that all the class code is contained in just one file – FramPacket.h.   There is no associated .cpp file, as I didn’t think that was necessary.
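The class itself isn’t reproduced here, but its shape is roughly as sketched below.  This sketch assumes the Adafruit I2C FRAM breakout and the 1.x Adafruit_FRAM_I2C library API (write8()/read8()), and the field sizes are illustrative:

#include <Adafruit_FRAM_I2C.h>

// Minimal sketch of the CFramStatePacket helper class (header-only, as in FramPacket.h).
class CFramStatePacket
{
public:
    struct StateData               // the telemetry fields, written byte-by-byte to FRAM
    {
        uint32_t unixTime;         // current time/date as a Unix timestamp
        uint8_t  opMode;           // tracking mode
        uint8_t  subMode;          // tracking submode
        int16_t  leftDistCm;       // left distance
        int16_t  rightDistCm;      // right distance
        int16_t  frontDistCm;      // forward distance
        float    battVolts;        // current battery voltage
    } data;

    // Write this packet to FRAM starting at startAddr; returns the next free address
    uint16_t WriteToFram(Adafruit_FRAM_I2C& fram, uint16_t startAddr)
    {
        const uint8_t* p = (const uint8_t*)&data;
        for (uint16_t i = 0; i < sizeof(data); i++) fram.write8(startAddr + i, p[i]);
        return startAddr + sizeof(data);
    }

    // Read a packet back from FRAM starting at startAddr; returns the next packet address
    uint16_t ReadFromFram(Adafruit_FRAM_I2C& fram, uint16_t startAddr)
    {
        uint8_t* p = (uint8_t*)&data;
        for (uint16_t i = 0; i < sizeof(data); i++) p[i] = fram.read8(startAddr + i);
        return startAddr + sizeof(data);
    }
};

In use, the test program described below would fill a packet with simulated values once per interval and call something like nextAddr = statePkt.WriteToFram(fram, nextAddr).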

To test my new CFRAMStatePacket class, I created a small test program that periodically writes simulated state packets to FRAM using the helper class methods, and optionally (if the user creates an interrupt by grounding the appropriate pin) reads them back out again.   This program is designed to run on an Arduino Mega 2560.   If a Uno is used, the interrupt pin number would have to be changed.

The test code also looks for a low on the  CLEAR_FRAM_PIN (Pin 3) on startup.   If it finds one, it will clear  NUM_FRAM_BYTES_TO_CLEAR (currently 2000) FRAM bytes and then read them back out again, byte-by-byte.   Otherwise, the program will continue storing state packets where it left off the last time it was powered up.   Here’s the test code:

And here’s some output from a typical run:

And here is an Excel plot showing the simulated values

Plot of the simulated values generated by the test program

 

So now I have a way for Wall-E2 to write a minute-by-minute diary of its operating state to non-volatile storage, but I don’t yet have a good way to read it all back out again.   That’s the next step – stay tuned!

01 October Update:

I created a small program to read back telemetry packets from FRAM.   When I want to see what  Wall-E2 has been up to, I will replace his normal operating firmware with this sketch, which will allow me to read out all or parts of FRAM contents.   The sketch is included below:

and a typical output run is shown below.   Note that this program decodes the stored 4-byte unix time value into human-readable date/time format.   And yes, I know it’s in that funny ‘American’ mm/dd/yyyy format, but I’m an American, so … ;-).

 

 

Frank

 

Integrating Time, Memory, and Heading Capability, Part VIII

Posted 13 September 2018

Now that I have worked out most of the problems associated with the MPU6050 6DOF IMU module, it was time to integrate the new heading-based turn algorithm into the main Wall-E2 operating system.   As I have done in many past projects over the last half-century or so, I started this process by documenting the entire OS, with particular emphasis on how Wall-E2 currently navigates around his world.   When I started doing this in the 1970’s, the medium I used was an MIT Engineering notebook, hand-written in ink.   Over the ensuing decades the medium has changed, but not the basic idea – the process of putting coherent sentences and paragraphs onto paper (or screen) forces me to think through what is – and is not – important/true.   I have solved many a seemingly intractable problem not with an oscilloscope or debugging tool, but by simply writing things down.   In the current iteration of this process, I use Microsoft Word (not for any particular reason, except that it is available and familiar)   initially, and then dump the results into a post like this one – see below ;-).

 

Description of FourWD_WallE2_V1 Navigation Algorithm

09/04/18

At the start of each pass through loop(), the software determines the current OPMODE given the current environment and the immediately previous OPMODE.   The existing OPMODEs are NONE, CHARGING, IRHOMING, WALLFOLLOW, and DEADBATTERY

  • NONE: Default OPMODE when no other mode can be found to apply to the situation.   As of this writing, the only use for this OPMODE is to initialize the PrevOpMode and CurrentOpMode loop variables in Settings()
  • CHARGING: set in GetOpMode() if the charger is physically connected (CHG_CONNECT_PIN goes HIGH) and the CHG_SIG_PIN is active (LOW). In the CurrentOpMode Switch the PrevOpMode is also set to CHARGING (so that both prev and current op modes are CHARGING), the motors are stopped, and MonitorChargeUntilDone() is called.
    • MonitorChargeUntilDone() blocks until charging is complete, or the BATT_CHG_TIMEOUT value is reached or the charger is physically disconnected (manually pulled out for some reason).
  • IRHOMING: Set in GetOpMode() when the call to IRBeamAvail() (checks IR beacon signal strength) returns TRUE. In the CurrentOpMode Switch the PrevOpMode is also set to IRHOMING (so that both prev and current op modes are IRHOMING). A blocking call is made to IRHomeToChgStn() with an 'Avoidance Distance' of 0 for 'hungry' or 30cm for 'full – no need to charge'. The idea here is that in the 'full' case, the robot will continue to home until near the charging station, and then break off.
    • IRHomeToChgStn(): sets up a PID and enters a loop, exited only when the charger connects, the robot gets stuck, or it gets too close to the charging station (this last can only happen in the 'full' case). 'Is Stuck' is determined in IsStuck() when the front distance variance gets too small (i.e. the front distance isn't changing).
  • WALLFOLLOW: This is the OpMode that is assigned by GetOpMode() when none of the other mode conditions apply; IOW, this is what the robot does when it isn't doing anything else. In the WALLFOLLOW case of the CurrentOpMode Switch, the wall-following operation is further broken down into a TrackingCase Switch, with TRACKING_LEFT, TRACKING_RIGHT, and TRACKING_NEITHER sub-modes, and with state variables maintained for both the current and previous tracking modes. Each time through loop(), the various tracking cases make one adjustment to the left/right motor speeds. There are no blocking calls at all in the entire WALLFOLLOW section, with the exception of the 'BackupAndTurn()' calls in the TRACKING_LEFT and TRACKING_RIGHT cases when an obstruction or the 'stuck' condition is detected.
    • BackupAndTurn( bool bIsLeft, int motor_speed): The idea here is for the robot to back up and do a course change to extract itself from some situation. Up until now, this has been accomplished by making a timed turn one way or the other, but this hasn’t worked well because the correct time for turning on carpet is wildly different than the correct time on hard flooring.   The new heading sensor is intended to solve this problem.
    • Now that Wall-E2 can make accurate turns, the question becomes "what's the best way to do obstruction-avoidance or stuck-recovery turns?" If Wall-E2 is following a wall when it gets stuck, maybe it should back up slightly and try to go around, or maybe it should just turn around and go back the way it came. Maybe a simple obstruction should be treated one way, but a 'stuck' condition treated another? The 'go back the way I came' model is simple enough, but might result in an uninteresting 'ping-pong' shuttle track where the robot stays until it runs out of battery. A more complex response might allow the robot to go around obstacles and continue its journey. Maybe it backs up slightly (wall-following in reverse, maybe?), and then makes an X-degree turn away from the wall, runs straight for a second, and then starts wall-following again.
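Here is a compilable (but purely illustrative) skeleton of the OPMODE dispatch described above. GetOpMode() and the handler functions below are empty stand-ins for the real Wall-E2 routines; only the control structure is meant to match the description:

// Skeleton of the OPMODE dispatch - illustrative only.
enum OpMode { MODE_NONE, MODE_CHARGING, MODE_IRHOMING, MODE_WALLFOLLOW, MODE_DEADBATTERY };
enum TrackCase { TRACKING_LEFT, TRACKING_RIGHT, TRACKING_NEITHER };

OpMode PrevOpMode = MODE_NONE;           // initialized to NONE, per the description
OpMode CurrentOpMode = MODE_NONE;
TrackCase TrackingCase = TRACKING_NEITHER;
bool bBattLow = false;                   // 'hungry' vs 'full' (stand-in flag)

OpMode GetOpMode(OpMode prevMode) { return MODE_WALLFOLLOW; }  // stand-in
void StopBothMotors() {}
void MonitorChargeUntilDone() {}         // blocks until done/timeout/unplugged
void IRHomeToChgStn(int avoidDistCm) {}
void AdjustForLeftWall() {}              // one motor-speed adjustment per pass
void AdjustForRightWall() {}
void DriveStraight() {}

void setup() {}

void loop()
{
  CurrentOpMode = GetOpMode(PrevOpMode); // current environment + previous mode

  switch (CurrentOpMode)
  {
  case MODE_CHARGING:
    PrevOpMode = MODE_CHARGING;          // prev and current both CHARGING
    StopBothMotors();
    MonitorChargeUntilDone();
    break;

  case MODE_IRHOMING:
    PrevOpMode = MODE_IRHOMING;
    IRHomeToChgStn(bBattLow ? 0 : 30);   // 0cm avoidance if 'hungry', 30cm if 'full'
    break;

  case MODE_WALLFOLLOW:                  // default: follow a wall
    switch (TrackingCase)
    {
    case TRACKING_LEFT:    AdjustForLeftWall();  break;
    case TRACKING_RIGHT:   AdjustForRightWall(); break;
    case TRACKING_NEITHER: DriveStraight();      break;
    }
    break;

  case MODE_DEADBATTERY:
    StopBothMotors();                    // battery below threshold - shut down
    break;

  default:
    break;
  }
}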

09/05/18

The current 'BackupAndTurn()' routine takes bIsLeft, a Boolean representing the current tracking direction (left or right), and a motor speed. All it does is call RollingTurnRev(bIsLeft, 1500), where 1500 is the time in milliseconds to run the motors.

RollingTurnRev() just calls RunBothMotorsMsec() with the motor speed on one side set to MAX and on the other to OFF (we know this won't work on Wall-E2, because the wheelbase is too wide – he just locks up).

RollingTurnRev() is called in two places: ExecDiscManeuver() and BackupAndTurn(). BackupAndTurn() is called from four places: TRACKING_NEITHER/RIGHT/LEFT, and IRHomeToChgStn(). In all these cases, the robot knows which (if any) wall is closer, so it can execute the proper rolling turn.

From what I see so far, it appears all these cases can be handled by a turn routine that does the following:

  1. Moves straight backward for just a second or so (or maybe even less)
  2. Makes a 45° forward rolling turn away from the nearest wall. If there is no nearest wall, go opposite the way it went last time (requires a global Boolean to save this value)
  3. Makes another 45° turn in the opposite direction to the first one. This will have the effect of a side-step maneuver, as shown in this post.

After this review, it was clear that all I had to do to integrate the new heading-based turn capability into Wall-E2’s OS was to replace the RollingTurnRev() function with a new ‘RollingTurn()’ function that takes flags for FWD/REV and for CCW/CW, and a parameter for the number of degrees to turn.   Since I had already demonstrated all the code blocks in stand-alone test programs, all I had to do was copy the appropriate code pieces into the appropriate spots in Wall-E2’s OS, and then spiff things up a bit here and there.   When I was done, I had a single function that could facilitate a range of maneuvers.
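As an example, the backup-and-sidestep sequence from the list above, expressed with the new function, might look something like the sketch below. This is illustrative only, not the actual OS code: the RollingTurn()/RunBothMotorsMsec() signatures and the speed constant are assumptions, with only the FWD/REV, CCW/CW, and degrees-to-turn arguments taken from the description:

// Illustrative sketch of a backup-and-sidestep maneuver using the new
// heading-based RollingTurn() function; all signatures are assumptions.
const bool FWD = true, REV = false;
const bool CCW = true, CW = false;
const int MOTOR_SPEED_LOW = 75;                  // assumed value

// stand-ins for the real Wall-E2 routines
void RollingTurn(bool bFwd, bool bCCW, int degrees) {}
void RunBothMotorsMsec(int msec, int leftSpeed, int rightSpeed) {}

void SideStepAwayFromWall(bool bNearestWallIsLeft)
{
  // 1) back straight up briefly to clear the obstacle
  RunBothMotorsMsec(1000, -MOTOR_SPEED_LOW, -MOTOR_SPEED_LOW);

  // 2) 45-deg forward rolling turn away from the nearest wall
  bool turnDir = bNearestWallIsLeft ? CW : CCW;  // left wall -> turn right (CW)
  RollingTurn(FWD, turnDir, 45);

  // 3) 45-deg turn back the other way - net effect is a side-step
  RollingTurn(FWD, !turnDir, 45);
}

void setup() { SideStepAwayFromWall(true); }     // usage: side-step away from a left wall
void loop() {}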

To test the newly integrated capability, I added some code to Wall-E2’s setup() function to perform a series of S-turns, each of which demonstrates a typical avoidance maneuver. For convenience, I told Wall-E2 to execute a ‘K-turn’ reversal and then S-turn his way back to me.   As can be seen in the following short video, this worked fairly well!

Now that I have the basic heading-based turn capability integrated into Wall-E2, the next step will be to demonstrate that Wall-E2 can use its new superpowers to avoid obstacles in ‘the real world’ (as real as it gets for Wall-E2, anyway).

Stay tuned!

Frank

 

Integrating Time, Memory, and Heading Capability, Part VII

Posted 29 August 2018

In my last post on this subject, I demonstrated Wall-E2’s new found ability to make heading-based turns instead of timing-based ones, making the turns much more terrain-independent.   Unfortunately, as I continued to test this capability, it became clear that Wall-E2’s heading superpower wasn’t quite ready for prime time.   Sometimes he turned 45 deg or even 180 deg when asked to do 90 – oops!

So, I went back to my small test setup – a spare Mega and a small solderless breadboard, as shown below – and started going through the problem slowly and methodically. Eventually I figured out that most of the problem was caused by the way I was retrieving yaw data from the Invensense MPU6050 (I have the DFRobots version). I had the module set up to produce interrupts at 20Hz, and the code was trying to keep up with that (unsuccessfully, as it turned out). Once I figured that out, I backed the code off to where it only checks for heading changes at a 10Hz rate, and things started working much better.

Arduino Mega and small plugboard test setup for robot turn management
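A simplified sketch of the 10Hz polling approach is shown below. This is illustrative only (the actual test code differs): it uses the I2CDevLib MPU6050 DMP calls, condenses the initialization from the MPU6050_DMP6 library example, and omits error handling:

#include <Wire.h>
#include "I2Cdev.h"
#include "MPU6050_6Axis_MotionApps20.h"

// Illustrative 10Hz polling of the MPU6050 DMP yaw value, instead of trying
// to service a 20Hz interrupt stream.
MPU6050 mpu;
uint16_t packetSize;
uint8_t fifoBuffer[64];
Quaternion q;
VectorFloat gravity;
float ypr[3];
uint32_t lastCheckMsec = 0;
const uint32_t HDG_CHECK_INTERVAL_MSEC = 100;   // 10Hz heading check

void setup()
{
  Wire.begin();
  Serial.begin(115200);
  mpu.initialize();
  mpu.dmpInitialize();          // return code not checked in this sketch
  mpu.setDMPEnabled(true);
  packetSize = mpu.dmpGetFIFOPacketSize();
}

void loop()
{
  if (millis() - lastCheckMsec >= HDG_CHECK_INTERVAL_MSEC)
  {
    lastCheckMsec = millis();
    uint16_t fifoCount = mpu.getFIFOCount();

    if (fifoCount >= 1024)      // FIFO overflow - start over
    {
      mpu.resetFIFO();
    }
    else if (fifoCount >= packetSize)
    {
      while (fifoCount >= packetSize)   // work down to the most recent packet
      {
        mpu.getFIFOBytes(fifoBuffer, packetSize);
        fifoCount -= packetSize;
      }
      mpu.dmpGetQuaternion(&q, fifoBuffer);
      mpu.dmpGetGravity(&gravity, &q);
      mpu.dmpGetYawPitchRoll(ypr, &q, &gravity);
      float yawDeg = ypr[0] * 180.0 / M_PI;     // relative heading, -180..+180 deg
      Serial.println(yawDeg);
    }
  }
}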

I also figured out that the algorithm I was using for detecting the desired heading was fatally flawed. I was trying to watch for the case where the current heading passed the target heading, but with all the special cases (both turn directions, the -180/+180 wraparound, etc.) I kept getting it wrong. I finally found this post, which describes a very simple formula for comparing two compass headings. The formula assumes both values are in degrees in the range 0-360. Mine are in degrees, but in the range -180 to +180; I took care of that by adding 360 to negative headings. After some experimentation I settled on a match ratio of about 0.90 for the 'slow down' threshold, and 0.98 for 'match'. The 0.98 threshold provides about a 6-degree error margin, which at 10 measurements/sec means the robot would have to rotate faster than 60 deg/sec to pass through those 6 degrees between measurements. Experiments show that a 90-deg turn takes about 3 sec, i.e. about 30 deg/sec, so there should always be at least 2 measurements (at 0.1 sec/measurement) in the 0.2 sec it takes the robot to rotate 6 degrees – a 2:1 safety margin.
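The heading bookkeeping that feeds that comparison looks something like the helpers below (illustrative only: the actual match-ratio formula comes from the linked post and isn't reproduced here; this just shows the -180/+180 to 0-360 shift and the shortest-way-around difference it operates on):

#include <math.h>

// Shift a -180..+180 deg heading into the 0..360 range expected by the
// match-ratio formula (negative headings get 360 added).
float To0To360(float hdgDeg)
{
  return (hdgDeg < 0.0f) ? hdgDeg + 360.0f : hdgDeg;
}

// Absolute angular difference between two headings, taking the shortest
// way around the circle (result is 0..180 deg).
// e.g. AngularDiffDeg(-170, 175) returns 15.
float AngularDiffDeg(float hdg1Deg, float hdg2Deg)
{
  float diff = fabs(To0To360(hdg1Deg) - To0To360(hdg2Deg));
  return (diff > 180.0f) ? 360.0f - diff : diff;
}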

The short video clip below shows the robot doing a series of 180 deg K-turns, simulating an avoidance maneuver.

 

So, at this point I think I’m pretty much done with adding turn management capability to Wall-E2’s superpower repertoire; however, I still have to update Wall-E2’s operating system software to replace the current timed-turn routines with the new heading-based turn routines.

08 September 2018 Update:

In my current Wall-E2 OS, when the robot gets stuck or runs into an obstacle, it backs straight up, and makes a timed turn away from the nearest wall if there is one; otherwise it turns the opposite way it did the last time it was in a similar situation. This is all fine and good, but now that Wall-E2 has heading-aware turn capability, he should be able to respond a little more intelligently. As indicated in the diagram below, the idea is that Wall-E2 could back straight up from an obstacle, and then go around it by making two linked 45-90° turns one way or the other (away from the nearest wall, if there is one).

New avoidance maneuver made possible by Wall-E2’s new heading superpowers

As an experiment, I programmed this maneuver into Wall-E2 using linked 45° turns, just to see how it would work out. As the following short video shows, it seems to work very well.

Stay tuned!

Frank

 

Mid-2018 Wall-E2 Project Status

Posted 26 August 2018

It’s been a year and a half since I last described the status and challenges in my ongoing campaign to create Wall-E2, an autonomous wall-following robot. The name ‘Wall-E’ was taken from the 2008 movie of the same name. In the movie, Wall-E was an autonomous trash-compactor robot that had all sorts of adventures, and my Wall-E2 autonomous wall-following robot certainly fills that bill!

From the previous system status report in early 2017, I described the following tasks:

It’s been a year and a half since I updated the status of my ongoing campaign to create an autonomous wall-following robot. The robot system consists of the following main subsystems:

  • Battery and charging subsystem
  • Drive subsystem (wheels, motors and motor drivers)
  • IR homing subsystem for charging station
  • LIDAR for front ranging and ultrasonic SONAR for left/right ranging
  • I2C Sensor subsystem (MPU6050 6DOF IMU, FRAM, RTC)
  • Operating system

Battery and charging subsystem:

Since the last update, the battery and charging system has been updated from dual 1-Amp single-cell Adafruit PB1000C chargers utilizing a 5V source to a TP-5100 2-amp dual-cell charger utilizing a 12V source.   This significantly simplified the entire system, as now the battery pack doesn’t have to be switched between series and parallel operation. Also, now the charging and supply leads are independent so the supply leads to the rest of the robot were upgraded to lower gauge wire to reduce the IR drops when supplying motor drive currents.   See this post for details.

Drive subsystem (wheels, motors and motor drivers):

The motors were upgraded to provide a better gear ratio, although this was done before I realized that most of the traction issues were caused by IR drops in the battery wiring.   The motor driver modules are unchanged, but I may later swap them out for more modern 3V-capable drivers so that I can swap in an Arduino Due microcontroller for the Mega (the Due has the same footprint/IO as the Mega, but has a much faster CPU and more memory)

 IR homing subsystem for charging station:

The IR homing subsystem utilizes a pulsed IR beacon on the charging station coupled with dual IR sensors in a flared sunshade housing, backed by a Teensy 3.5 CPU configured as a null pattern matched-filter.   The Teensy reports left/right homing error as a value between -1 and 1 over an I2C bus to the main microcontroller, which drives the motors to null out the signal.   As the system stands today, the operating system can successfully home in on the charging station and connect to the charger. The robot knows its current battery voltage (charge condition) and therefore can decide to connect to the charger or to avoid it.
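In code terms, the main controller's side of that interaction is simple. Here's a sketch of the idea (illustrative only: the Teensy's I2C address, the 4-byte float message format, and the gain constant are assumptions, and it uses simple proportional steering rather than the PID in the real homing routine):

#include <Wire.h>

// Illustrative Mega-side sketch of the IR homing loop: read the left/right
// error (-1..+1) from the Teensy over I2C and steer to null it out.
const uint8_t TEENSY_I2C_ADDR = 0x08;      // assumed address
const int HOMING_BASE_SPEED = 100;
const float HOMING_GAIN = 50.0f;

void setup()
{
  Wire.begin();
  Serial.begin(115200);
}

float GetHomingError()
{
  union { float f; uint8_t b[4]; } u = { 0.0f };
  Wire.requestFrom(TEENSY_I2C_ADDR, (uint8_t)4);   // assumed 4-byte float reply
  for (int i = 0; i < 4 && Wire.available(); i++) u.b[i] = Wire.read();
  return u.f;                                      // -1 (full left) .. +1 (full right)
}

void loop()
{
  float err = GetHomingError();
  int leftSpeed  = HOMING_BASE_SPEED + (int)(HOMING_GAIN * err);
  int rightSpeed = HOMING_BASE_SPEED - (int)(HOMING_GAIN * err);
  // SetMotorSpeeds(leftSpeed, rightSpeed);        // motor-driver call omitted here
  Serial.print(leftSpeed); Serial.print('\t'); Serial.println(rightSpeed);
  delay(100);
}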

LIDAR for front ranging and ultrasonic SONAR for left/right ranging:

The front/left/right ranging subsystem is one of the most mature subsystems on the robot. The subsystem can successfully follow walls, and detect/recover from ‘stuck’ conditions. The only thing this subsystem lacks is the ability to make consistent turns on different terrain, due to the lack of heading information (this will be supplied by the new tri-sensor module).

I2C Sensor subsystem (MPU6050 6DOF IMU, FRAM, RTC):

The I2C sensor subsystem is a new addition since the last update, and has yet to be fully integrated into the system. The subsystem consists of an Invensense MPU6050 6DOF IMU, plus Adafruit FRAM (Ferroelectric RAM) and RTC (Real-Time Clock) modules. The MPU6050 gives the robot the ability to sense relative heading changes, which makes it capable of executing consistent N-degree turns on both hard flooring like the kitchen and atrium areas and the carpet in the rest of the house. The FRAM and RTC units should allow the robot to remember its charge/discharge history, even through power ON/OFF cycles.

The relative heading capability has been tested off-line from the main operating system, but has not yet been integrated into the OS; same for the FRAM/RTC modules. Integration of this subsystem was stalled for quite a while due to problems with the Arduino I2C (Wire) library, but these problems were recently resolved by switching to a more robust I2C library (SBWire). See this post for details.

 

Operating system:

The operating system has evolved quite a bit over the course of this adventure, but its current state seems pretty stable.   The OS is implemented as a set of modes, as follows:

  • MODE_CHARGING: Occurs when the robot is physically connected to a charging station
  • MODE_IRHOMING: Occurs when a charging station beacon signal is detected
  • MODE_WALLFOLLOW: Occurs when the robot isn’t in any other mode.
  • MODE_DEADBATTERY: Occurs when the sensed battery voltage falls below DEAD_BATT_THRESH_VOLTS volts

 

 

Future Work Plans:

  • Complete the integration of the tri-sensor module: This entails adding the hardware and software required to sense loss of power so that the current date/time stamp can be written to the FRAM, along with the complementary ability to read out the last power cycle date/time stamp from the FRAM on power-up.   In addition, the current timed turn routines need to be replaced by the new heading-sensitive turn algorithms.
  • Investigate the idea of multiple charging stations with different IR beacon frequencies. The current matched-filter algorithm forms a very narrow-band filter, to discriminate the desired IR beacon signal from unwanted ‘flooding’ by overhead lighting sources and sunlight. The center frequency of the filter is set in software on the Teensy microcontroller, so it should be possible to have the Teensy routinely check for beacon signals at other frequencies, as long as the frequencies are far enough apart to prevent overlap. The current filter center frequency was more or less arbitrarily set to 520Hz – high enough to be well away from (and not a multiple of) 60Hz, but low enough for the Teensy processing rate. Something like 435Hz (60 x 7.25) would probably work just as well, and is far enough away from 520Hz to be well outside the filter bandwidth (about +/- 10Hz IIRC).

Complete the implementation of the fixed charging station.

This task has been completed, and along the way the charging voltage was changed from 5V to 12V, to accommodate the new 12V on-board battery charging system.   See this post for details

Integrate the IR homing software from the 3-wheel robot into Wall-E2’s code base:

This task has also been accomplished.   See this post for details.

Integrating Time, Memory, and Heading Capability, Part VI

Posted 25 August 2018

In my previous posts, I have been describing my efforts to give Wall-E2, my autonomous wall-following robot, relative heading sensing ability using the DFRobots MPU6050 6DOF module. As I went through this process, I discovered that the ‘standard’ Arduino Wire library was seriously defective, and the problem had been known, but not fixed, for almost a decade! Once I figured this out, I was able to fix my local copies of Wire.c/h and twi.c/h, and all my hangup problems went away. Subsequently I found another Wire library (SBWire, by Shuning (Steve) Bian) that also incorporates the necessary fixes, so I started using his library instead of my own local fixes.

Anyway, after all the I2C drama, I finally got the damned thing working, and ran some tests to demonstrate Wall-E2’s new-found ability to make reasonably precise and consistent turns.   In the first test I had Wall-E2 make a series of 90-deg (ish) turns, and in the second one I had him make some 180-deg (ish) K-turns to simulate what he might want to do after disconnecting from (or avoiding) a charging station.