
Wall Parallel Find PID Tuning

Posted 10 April 2021

In addition to using PID for homing to its charging station and for turn rate control, Wall-E2 also uses PID for finding the parallel orientation to a nearby wall. After successfully tuning the turn rate and IR Homing PID controllers using the Ziegler-Nichols method, I decided to see what I could do with the PID controller for parallel orientation finding.

Wall-E2 uses two 3-element arrays of VL53L0X Time-of-Flight distance sensors for parallel orientation finding. The idea is that when all three sensors on a side report the same distance, the robot must be oriented parallel to the wall. The Teensy 3.5 Array Controller MCU calculates a ‘steering value’ using the expression (shown for the left side array):
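In code form, the idea looks something like the sketch below; the /100 scaling is an assumption on my part, and the essential point is the front-to-rear difference going to zero at parallel:

```cpp
// Steering value sketch for the left-side 3-element VL53L0X array. When the
// robot is parallel to the wall, the front and rear sensors read the same
// distance and the steering value goes to zero. The /100 scaling (to keep
// the value in a PID-friendly range) is an assumption.
float GetLeftSteeringVal(uint16_t leftFrontMM, uint16_t leftRearMM)
{
  return ((int)leftFrontMM - (int)leftRearMM) / 100.0f;
}
```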

This value is fed to the PID engine, which drives the motors to zero it out – thus arriving at a parallel orientation. Originally I basically ‘winged it’ in choosing the PID Kp, Ki & Kd values, arriving empirically at Kp = 200, Ki = 50, Kd = 0. However, after going through the Z-N process with Wall-E2’s other two PID control setups, I decided to try it with this one as well.

The first step is to determine Kc, the Kp value for which the system oscillates in a reasonably stable fashion. To accomplish this I started with Kp = 20 and worked my way up in stages, plotting the ‘steering value’ each time. The last three trials (as shown in the following plots) were for Kp = 400, Kp = 500 and Kp = 600:

Looking at the above plots, it looks like Kp = 600 will work for Kc. Using the Z-N formula (Kp = 0.5*Kc, Ki = 0.45*Kc, Kd = 0.6*Kc), we get Kp = 300, Ki = 270, Kd = 360.

Using the above values for the Parallel Find PID, we get the following plot:

This is not exactly what I thought it would be – it looks like my guess for Kc must be off. Trying again with Kc = 400 -> PID = (200, 180, 240), we get:

which, to my eye at least, seems a bit better.

To test how this worked with ‘real’ parallel finding, I incorporated these parameter values into my ‘RotateToParallelOrientation()’ routine and ran a couple of tests. Here’s one where Wall-E2 starts in the ‘toed-out’ position:

And here’s the Excel plot from this same run

As can be seen, the robot takes less than two seconds to converge on a pretty decent parallel orientation, starting from a 30-40º angle to the near wall.

Here’s another run where the robot starts in the ‘toed-in’ orientation.

And here’s the Excel plot for the run

Again, the robot gets to a pretty decent parallel orientation within 2 seconds of the start of the run. The only concern I have with this run is that it winds up pretty close to the wall.

Turn Rate PID Tuning

Posted 05 April 2021

Wall-E2, my autonomous wall-following robot, does a lot of turns to follow walls. Originally Wall-E2 used a simple timing algorithm to make turns, but this wasn’t very accurate. On a hard surface a 5-second turn at half motor speed could result in a 360º turn, while the same 5 seconds on carpet may only cover 90º. After installing the MPU6050 IMU about 18 months ago, turns could be controlled much more accurately, but the turn rate still varied widely on carpet vs hard flooring. Some time ago, I revised Wall-E2’s program to use a PID engine for turn rate control, but this resulted in a low-frequency ‘motorboating’ movement as the robot ramped up motor current until the angle started changing, followed by ramping the motor current down again because the turn rate target had been exceeded.

After having some success with improving the robot’s performance in homing to the charging station by utilizing the Ziegler-Nichols tuning method, I decided to try using it to improve turn rate control.

The method starts with setting the PID variables to (Kp, 0, 0) and then varying the Kp value to determine Kc, the value at which the controlled variable (in this case, the turn rate in deg/sec) exhibits a steady oscillation. The value of Kc is then used to calculate all three PID parameters using Kp = 0.5*Kc, Ki = 0.45*Kc, Kd = 0.6*Kc.

The existing settings were Kp = 5, Ki = 0, Kd = 10, which resulted in the following plot for a single 270º turn.

To start the process of determining Kc, I first zeroed out the Kd factor, resulting in the following plot:

So the system is clearly exhibiting a ‘constant amplitude oscillation’, but Kc is the minimum Kp value that produces oscillation, so I started reducing Kp, looking for the point at which the oscillation stopped, producing the following plots:

Comparing the above plots, it seems the value of Kc is probably around 0.1. Using the Z-N formula above, I get Kp = 0.05, Ki = 0.045, Kd = 0.06.

These values are almost two orders of magnitude smaller than the values I had been using – ouch! Looking back on the original work, I had declared the Kp, Ki, & Kd variables as ‘const int’, which means the lowest positive value I could use was ‘1’, which might explain why I never tried anything smaller. Of course, I could have also done what I did with the home-to-charger PID engine and arranged the turn rate calculation so that instead of Kc = 0.1, it would have been more like 100, resulting in (Kp,Ki,Kd) values of (50,45,60).
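For what it’s worth, the fix is a one-liner; declaring the gains as floating point (hypothetical names) allows the fractional Z-N values:

```cpp
// 'const int' gains can't represent positive values below 1; doubles can
const double TURN_RATE_KP = 0.05;
const double TURN_RATE_KI = 0.045;
const double TURN_RATE_KD = 0.06;
```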

Anyway, using values of (0.05, 0.045, 0.06) results in the following plot

And here is a short video showing the robot executing the 270º turn plotted above

I was quite impressed by the difference between what I had before and the current performance after implementing the Z-N tuning method; the turn rate was an almost constant 40º/sec (an average of 42.9º/sec according to Excel). And, now that I have a semi-constant rate to look at, it appears that I should probably crank up the turn rate to something more like 90º/sec.

With the turn rate cranked up to 90º/sec and using the same values for Kp, Ki, Kd, I get the following plot and video.

The plot of turn rate vs time shows considerably more variation with a 90º/sec turn rate, as compared to 45º/sec; maybe Kc is different due to the different physical dynamics of the robot? The average turn rate from the Excel data is 80º/sec; not the 90º/sec I was looking for, but still not bad. Even so, this result is much better than I had before, with extreme motorboating between 0º/sec and something much higher. I may or may not try to re-determine Kc, but in the meantime I think I’ll run with this for a while.

10 April 2021 Update:

Some of the above data were collected at a pretty low charge level. When I had the opportunity to recharge Wall-E2 I re-took the 90º/sec run with the following results:

As the above plot shows, the turn rate was held almost constant (an average of 87.9º/sec) throughout the turn – very nice!

Here’s another run, this time on carpet with a fully charged battery. As expected, the robot has a harder time getting started on carpet due to the added sideways friction on the wheels. However, once the turn gets started, it stabilizes fairly well on the 90º/sec turn rate target.

Stay tuned,

Frank

Another Try at Charging Station Homing PID Tuning

Posted 04 April 2021

Lately I have been working on improving the performance of Wall-E2, my autonomous wall-following robot, when homing in on and connecting to its charging station. The robot uses the PID (Proportional-Integral-Derivative) library to drive the motors to home in on an IR beacon, and this ‘mostly’ works, but still occasionally hangs up on the lead-in rails. I have made several attempts to get this right (see this post and earlier work), but have never really gotten it zeroed in. After yet another web search for tuning help, I ran across this post dealing with the Ziegler-Nichols method for PID tuning. Basically the method starts by setting the proportional (Kp), integral (Ki) and derivative (Kd) terms to zero, and then slowly increasing Kp until a ‘stable oscillation condition’ is achieved (Kp = Kc). Then the PID terms can be calculated using the following relationships: Kp = 0.5*Kc, Ki = 0.45*Kc, Kd = 0.6*Kc.

Getting to the ‘Kc’ (Kp-critical) value for my setup is a bit more difficult than normal, as the PID engine only operates for a few seconds, from the time the IR homing beacon is detected, to the time the robot actually connects (or doesn’t) to the charging probe. Here’s a short video showing a typical run (Kp = 150 in this case), and an Excel plot of the steering value from the same run.

As can be seen from the above, there really isn’t much of an ‘oscillation’ to go on – there is basically only one full cycle from about 10.5 sec to around 12.0 sec.

Here’s another run, this time with Kp set to 200. As can be seen, this is much more like what I was expecting to see, with several full cycles of oscillation. The amplitude trails off a bit toward the end, but this may have been due to a low battery level – I’ll have to repeat this experiment after getting a full charge into the robot.

Kp = 200, Kd = Ki = 0. Period is approximately 1.3 sec

However, using the above data with Kc = 200, we get Kp = 100, Ki = 90, Kd = 120.

I revised my program to incorporate the Z-N numbers from the above calculations, and this resulted in the homing runs shown below. In the first one, Wall-E2 was oriented directly at the charging station beacon, and the robot’s track was pretty much direct, with no side-to-side oscillation at all. In the second one I oriented the robot a bit off-axis to excite a more active homing response. In both runs, the LEDs on the rear of the robot show the current relative wheel speed commands – LEDs to the left of center indicate higher wheel speed on the left, and vice versa. In the first run, the LEDs show some oscillation of the wheel speed commands, but it is relatively small, leading to an almost serene homing performance. In the second run, the initial orientation offset forces the robot to more actively manage the wheel speeds to stay ‘on the beam’.

Stay tuned,

Frank

Charging Station Initial Approach Algorithm Improvement

Posted 20 March 2021

In order to realize my long-term goal of a fully autonomous wall-following robot, Wall-E2 has to be able to reliably mate to its charging station when it gets low on go-juice. Unfortunately, Wall-E2 occasionally fails to mate properly, usually due to an initial misalignment with the center of the IR homing beam. I haven’t worried too much about this, as there have been more pressing problems, but as these have been solved, the mating problem has risen to the top of the to-do list.

The basic geometry for the charging station is shown below:

Tilted gate option. The tilt decreases the minimum required IR beam capture distance from about 1.7m to about 1.0m

As long as the robot starts its approach on or near the boresight of the IR beam, all goes swimmingly. However, if Wall-E2 detects the IR beam while tracking the wall at right angles to the one depicted above, it can easily start its approach before getting to the center of the beam, resulting in it getting stuck on the outside guide-in rail (upper rail in the above diagram). In addition, if Wall-E2 is tracking too close to the wall above, it can actually get stuck on the inside guide-in rail (lower rail in the above diagram).

So, what is needed here is a way to force the robot to line up on the IR beam centerline before committing to the mating approach. To investigate this, I created a part-task version of Wall-E2’s operating system that does just one thing: it detects the IR homing signal, and then takes action to position itself in the center of the IR homing beam, aligned with the beam’s boresight. In the aviation instrument (blind) flying world, this starting position is known as the ‘IAP’ (Initial Approach Point), so I needed an algorithm that lets Wall-E2 navigate to the Charging Station IAP and start its final approach from the same place every time.

In previous work I have gotten Wall-E2 smart enough to track walls at a constant offset, so this is where I started with the current effort. When Wall-E2 starts to track a wall, the first thing it does is use the near-side array of VL53L0X IR laser TOF sensors to orient parallel to the wall, without regard to the absolute offset. It then angles toward or away from the wall to achieve tracking at the desired offset.

The starting position for the current effort is with the robot placed close to the wall leading to the charging station, pointed generally toward the charging station. When the robot wakes up, it sees that there is an active IR homing beacon, and takes action to navigate to the IAP.

  • First it uses the parallel orientation algorithm to align itself parallel to the near wall, so it can measure its offset from the wall and also ensure that the front distance measurement accurately reflects the distance from the robot to the charging station.
  • Next, it compares the wall offset and front distance measurements to the known values for the IAP, i.e. a 50 cm offset at a distance of 180 cm. It then calculates how much additional offset it needs to place itself in the center of the beam.
  • If necessary, the robot turns 90º away from the wall and moves away to achieve the desired offset. If the robot is already far enough away from the wall, it skips this step.
  • After getting far enough away, the robot turns in place (a ‘Spin Turn’) until the signal strength of the received IR homing beacon rises above a set threshold. This gets the robot oriented generally in the direction of the charging station.
  • The last step is to fine-tune the robot’s orientation so that it is centered in the beam and also well aligned with the beam boresight. The whole sequence is sketched in code below.
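Here’s a rough code sketch of the sequence. SpinTurn() and RotateToParallelOrientation() show up elsewhere in these posts; the other helper names, the threshold, and the step size are assumptions:

```cpp
// Sketch of the IAP navigation sequence - not the actual implementation.
extern void RotateToParallelOrientation(bool bLeftSide); // from earlier posts
extern void SpinTurn(int degrees);                       // from earlier posts
extern float GetLeftCenterDistCm();    // assumed side-array center distance
extern float GetIRBeamStrength();      // assumed normalized beacon strength
extern void MoveAwayFromWall(float cm);
extern void FineTuneBoresightAlignment();

const float IR_DETECT_THRESHOLD = 0.5f; // assumed value
const float IAP_OFFSET_CM = 50.0f;      // IAP is 50 cm off the wall

void NavigateToIAP()
{
  RotateToParallelOrientation(true);                          // 1: parallel to wall
  float addlOffsetCm = IAP_OFFSET_CM - GetLeftCenterDistCm(); // 2: compute offset
  if (addlOffsetCm > 0)                                       // 3: move out if needed
  {
    SpinTurn(90);                                             // turn away from wall
    MoveAwayFromWall(addlOffsetCm);
  }
  while (GetIRBeamStrength() < IR_DETECT_THRESHOLD)           // 4: rotate until the
    SpinTurn(10);                                             //    beacon is acquired
  FineTuneBoresightAlignment();                               // 5: center on boresight
}
```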

The following photograph shows the robot at the IAP, ready to start the final approach to the charging station.

Wall-E2 at the Initial Approach Point, ready to start the final approach to the charging station.

And the following video shows the entire process, up to the point where the robot would actually start the final approach.

15 April 2021 Update:

One of the issues with the current initial approach algorithm is the lack of accuracy in achieving the desired wall offset, due to Wall-E2’s tendency to ‘coast’ past the desired distance. I could just lower the offset target by a fixed amount to account for the ‘coast’ effect, but since that effect changes significantly depending on whether Wall-E2 is on carpet or hard flooring, that doesn’t sound like a good idea.

Instead, I decided to use yet another PID object to manage offset distance acquisition, using the following algorithm:
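In outline, it looks something like the following sketch, using the Arduino PID library; the gains, tolerance, and helper names are placeholders, not the real values:

```cpp
#include <PID_v1.h>

// Offset-acquisition sketch: let a PID engine taper the motor speed as the
// rear distance approaches the target offset, so the robot doesn't coast
// past it on hard flooring.
extern double GetRearDistCm();           // assumed rear-sensor accessor
extern void SetBothMotors(double speed); // assumed motor driver call

double rearDistCm, motorSpeed, targetOffsetCm = 50.0;
PID offsetPID(&rearDistCm, &motorSpeed, &targetOffsetCm,
              2.0, 0.0, 0.5, DIRECT);    // placeholder gains

void AcquireWallOffset()
{
  offsetPID.SetOutputLimits(0, 127);     // cap at roughly half speed
  offsetPID.SetMode(AUTOMATIC);
  do
  {
    rearDistCm = GetRearDistCm();
    offsetPID.Compute();                 // error shrinks -> speed tapers
    SetBothMotors(motorSpeed);
  } while (targetOffsetCm - rearDistCm > 1.0);  // assumed 1 cm tolerance
  SetBothMotors(0);                      // stop at the target offset
}
```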

Using this code, I got the following output:

And a short video showing the offset acquisition process:

Here’s the same process, but starting from farther away than desired

21 April 2021 Update:

After some additional work on the initial approach algorithm, I arrived at a pretty nice spot: Wall-E2 will reliably detect the IR homing beacon, offset the proper amount from the wall using a 90º turn and a PID engine-driven, rear-distance-controlled movement, and then rotate to orient to the IR beam boresight. The ‘rotate-to-boresight’ operation takes place in two stages. In the first stage, the robot turns toward the beacon in 10º steps until the beacon is re-acquired (this is necessary because the robot loses the beam signal when it turns 90º to the wall); it then uses another PID-driven algorithm to center up on the beam boresight. Here’s a short video showing the process.

IR Homing with Initial Approach Phase added. 2-sec pauses are inserted to delineate sub-phases

As can be seen from the above video, the robot successfully navigates to the initial approach point (IAP), rotates to orient with the homing beacon boresight, and then homes to the charging station. This all works, but it is pretty clunky and inelegant. The initial 90º turn away from the wall is itself a bit problematic: it can easily overshoot, and the robot loses the beacon signal during the turn, which means that after the appropriate wall offset has been reached, the robot has to turn back toward the charging station to re-acquire the signal, and it has to do so ‘gently’ so as not to overshoot.

I think it would be much better if the initial turn away from the wall were just 45º, so the robot wouldn’t lose the beacon signal while navigating to the IAP, potentially eliminating the first part of the ‘rotate to boresight’ phase. Here is the relevant geometry:

Charging station initial approach and homing geometry

In the above figure, the robot currently makes a 90º turn and uses the line labelled ‘Offset =…’ to offset out to the IR beam boresight. I’m thinking that the line labelled ‘x = …’ would work better, as the robot only has to make a 45º turn initially, and then it might not lose the beam signal as it offsets out to the IR boresight line. Here’s the supporting math.

Initial Approach Point math

In the above figure, an example is worked out for d = 120cm, where the perpendicular offset is found to be 34.4cm and the 45º turn distance is found to be 1.09*Offset = 37.8cm.
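Reconstructing the trig behind those numbers: let θ be the angle the IR beam boresight makes with the wall, and d the front distance from the robot to the charging station. Moving out along the 45º line, the path length x must satisfy

$$x \sin 45° = (d - x \cos 45°)\tan\theta \quad\Longrightarrow\quad x = \frac{d\tan\theta}{\sin 45°\,(1 + \tan\theta)}$$

For d = 120 cm and tan θ ≈ 0.287 (θ ≈ 16º), this gives Offset = d·tan θ = 34.4 cm and x = 34.4/(0.707 × 1.287) ≈ 37.8 cm, i.e. x ≈ 1.09 × Offset, matching the figure.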

23 April 2021 Update:

The change from 90º to 45º IAP approach angle turned out to be pretty easy to do – really just a matter of changing ‘SpinTurn(90)’ to ‘SpinTurn(45)’ and the offset value to 1.09 x offset. Here’s a short video showing the result.

After a few more runs (with some failures due to the robot hanging up on the outside rail), I realized my basic beam geometry estimate was significantly off. Instead of a beam angle of about 16º, it was more like 11º, yielding a distance::offset ratio of about 0.18 instead of 0.27. Revising the program to use the more accurate ratio resulted in the following much nicer homing run.

Homing run using a distance::offset ratio of 0.18 vs 0.27

And here is the telemetry from the run:

Much nicer!

Stay tuned!

Frank

Wall Tracking Trials Using Office ‘Sandbox’ Part II

Posted 24 January 2021

Back in November of 2020, I posted about some wall-tracking exercises using my Office ‘sandbox’. Since then I have done some work on the charging station to make it more robust, and on Wall-E2’s ability to home in on and connect to the charging station. The following short video shows Wall-E2 making a complete circuit of the sandbox, ending with a homing run and connection to the charging station.

Wall-E2 makes a complete circuit of the ‘sandbox’, ending up connected to the charging station.

I plan to do quite a bit more work on the charging station homing algorithm, in particular how Wall-E2 reacts when it gets stuck trying to connect (which happens with somewhat disconcerting regularity).

Stay Tuned!

Frank

Solving the Teensy VL53L0X Array Controller Reset Problem

Posted 16 November 2020

Back in May of this year, I converted Wall-E2, my autonomous wall-following robot, from using HC-SR04 ultrasonic ‘ping’ sensors to VL53L0X infrared time-of-flight sensors for left & right (and now rear) distance measurement and obstacle detection, as described in this and follow-on posts. Since then, I have been successfully integrating the new sensing capability into Wall-E2’s wall-tracking and obstacle avoidance algorithms, as described in this post among others.

In recent ‘sandbox’ runs, however, I started to notice that the VL53L0X controller (a Teensy 3.5) wasn’t always providing proper distance measurements. Sometimes it would return ‘-1’ for some or all seven measurements. Eventually I figured out that the problem only occurred when I restarted the main controller via the wireless serial connection; when I restarted by cycling the power, VL53L0X measurements were always proper. After looking into this a bit, I realized that the problem occurred because the VL53L0X array controller wasn’t being restarted when the main controller was, except when everything was power cycled.

So, I needed a way to ensure that the VL53L0X array controller got restarted, even with a serial-port reset of the main controller. The Teensy 3.5 actually has a RESET function exposed on a pin pad (although internal to the PCB, not on the periphery), so I added a pin to this pad and connected it to the wire formerly used as the ‘left ping’ control line. Then I modified the setup code to pull this line LOW for a few tens of milliseconds and then back HIGH again, to restart the Teensy.
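The addition to the main controller amounts to something like this (the pin number and pulse width shown are assumptions):

```cpp
// Pulse the Teensy 3.5 RESET line LOW to force a restart; the line idles HIGH.
const int TEENSY_RESET_PIN = 49;  // former 'left ping' line (assumed pin number)

void ResetTeensyArrayController()
{
  digitalWrite(TEENSY_RESET_PIN, HIGH); // ensure the line starts HIGH
  pinMode(TEENSY_RESET_PIN, OUTPUT);
  digitalWrite(TEENSY_RESET_PIN, LOW);  // assert RESET...
  delay(30);                            // ...for a few tens of milliseconds
  digitalWrite(TEENSY_RESET_PIN, HIGH); // release; the Teensy reboots
}
```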

To test the modification, I modified the main controller code to send a HIGH to an unused digital pin as the first instruction in setup(), and to set that pin back LOW again as the last instruction. Immediately after setting this pin high, the RESET signal is sent to the Teensy. The Teensy program was modified to set an unused pin HIGH at the start of its setup() program, and LOW at the end. By monitoring these two pins with my wondrous Hanmatek DOS1102 DSO (see below), I was able to definitively confirm that the Teensy restarts every time the main controller does – yay!

Yellow trace is main controller setup() timing, blue is Teensy VL53L0X array controller setup() timing

In the above scope photo, the horizontal scale is 1 sec/div. The yellow trace shows the main controller setup() function timing, and the blue shows the Teensy VL53L0X array controller setup() function timing. The Teensy gets reset about 500 mSec after the main controller setup() function starts, and its setup() ends about 5.5 sec later, about 500 mSec before the main controller setup() function ends. The relative timing shown above is the same whether the main controller is restarted via a power switch cycle or a serial port re-open restart.

Stay Tuned,

Frank

Wall Tracking Trials Using Office ‘Sandbox’ Part I

Posted 12 November 2020

Back in October I added a TIMER5 timer interrupt to my autonomous wall-following robot (Wall-E2) code to manage sensor updates. Since then I have made the timer interrupt the sole timing source for all sensor and tracking updates, and upped the update rate from 5Hz to 10Hz. In addition, I’ve been making some improvements to Wall-E2’s obstacle detection/response abilities, and this post describes the results of these enhancements.
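For reference, a 10Hz TIMER5 compare-match interrupt on the Mega 2560 looks something like this sketch (the flag name is an assumption):

```cpp
// TIMER5 in CTC mode: 16MHz / 256 prescale = 62,500 ticks/sec, so a compare
// value of 6249 gives 10 interrupts/sec (100 mSec update interval).
volatile bool bTimeForUpdate = false;

void InitSensorUpdateTimer()
{
  noInterrupts();
  TCCR5A = 0;
  TCCR5B = (1 << WGM52) | (1 << CS52); // CTC mode, /256 prescaler
  OCR5A = 6249;                        // 10Hz compare-match rate
  TIMSK5 = (1 << OCIE5A);              // enable compare-match A interrupt
  interrupts();
}

ISR(TIMER5_COMPA_vect)
{
  bTimeForUpdate = true; // loop() polls this flag to run sensor/tracking updates
}
```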

Wall-E2’s job is to autonomously track walls forever. This implies the ability not only to track walls, but also to deal with obstacles as they occur, and to recharge its batteries at one or more provided charging stations as needed. Wall-tracking per se has been the subject of several previous posts, and is now reasonably well managed using the ‘find parallel’ technique described here. This post deals with the effort to detect and respond to obstacles as they occur. Here’s a recent run in my office ‘sandbox’:

In the above telemetry printout, the first obstacle encounter occurs at 7.65 sec, corresponding to about 4 sec into the video. The obstacle is recognized at 18 cm, well inside the desired offset distance of 30 cm. I believe this occurred because the robot had just started turning back toward the near wall with a target steering value of -WALL_OFFSET_TRACK_SETPOINT_LIMIT (-0.3, the maximum toward-wall steering value), which meant that the normal forward obstacle detection limit of WALL_OFFSET_TGTDIST_CM (30cm in this case) wasn’t in force, and the backup limit of MIN_FRONT_OBSTACLE_DIST_CM (20cm in this case) triggered instead. This causes the following code block to execute:
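Paraphrasing the block in a sketch (SpinTurn() appears elsewhere in these posts; the variable and helper names are assumptions):

```cpp
// Backup front-obstacle limit triggered: stop, turn away from the tracked
// wall, and resume left-side wall tracking.
if (frontDistCm <= MIN_FRONT_OBSTACLE_DIST_CM)
{
  StopBothMotors();  // assumed helper
  SpinTurn(90);      // 90º 'spin turn' to the right, away from the wall
  // ...then drop back into left-side wall-tracking mode
}
```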

As can be seen in the above code snippet, this causes the robot to make a 90º ‘spin turn’ to the right, and then restart wall tracking.

At about 12.6 sec (about 10 sec into the movie) we see it detect the upcoming wall at about 30 cm (due to a bug in the code, the printed values are incorrect). This causes the following code block to execute:

This code executes a 90º ‘step turn’ (identical to a ‘spin turn’) to the right, and drops back into wall tracking mode.

At about 16 sec into the movie and 20 sec after program start, the robot again detects an upcoming obstacle at about 30cm, and again executes a 90º ‘step turn’ to the right to follow the new wall.

About 1.5 sec later, the robot detects one of the chair legs (I think it was the one nearest the wall in the movie) and tries to get away using another 90º ‘spin turn’, but then exhibits some abnormal behavior. When it attempts to find the parallel orientation to the new (non-existent) wall, it exits RotateToParallelOrientation(Left) with SteeringVal = -79.86, a very strange result. I believe this is because Wall-E2 detected the ‘stuck’ condition while it was attempting to complete the parallel orientation procedure, in this ‘while’ loop
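For context, the loop in question looks something like this sketch (names approximate):

```cpp
// RotateToParallelOrientation() inner loop: rotate until the steering value
// zeroes out, i.e., the side array reads parallel. The 'stuck' check inside
// the loop is what forced the abnormal exit described below.
while (abs(steeringVal) > STEERING_VAL_TOLERANCE) // assumed tolerance constant
{
  UpdateVL53L0XValues();              // refresh side-array distances (assumed)
  steeringVal = GetLeftSteeringVal();
  parallelPID.Compute();              // drives the wheel-speed difference
  UpdateMotorSpeeds();
  if (IsStuck())                      // front-distance variance check
    break;                            // early exit -> the odd SteeringVal
}
```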

So it exited abnormally (hence the odd SteeringVal number), and then re-detected the ‘stuck’ condition in the main tracking loop, because the front distance history array isn’t re-initialized after the first detection. This, apparently, is a ‘feature’, not a bug – who knew! ;-).

After the second ‘stuck’ condition detection, the robot attempts to disengage using ExecuteStuckRecoveryManeuver(), which, in this case, tries to back up and then execute an ‘end-around’ maneuver to get past the chair leg. It finished the backup portion of the maneuver successfully with 23cm remaining rearward, and then executed a 90º ‘spin turn’. Then it went forward 21cm using the front distance sensor (not shown in the video), and halted when I took over manual control.

All in all, this was a very successful ‘sandbox’ run. Lots of good data with clear indications of where things are working well and where things need to be modified/fixed.

  • A bug in the telemetry display code for the ‘Wall Offset Limit’ detection printout (fixed).
  • In the situation at 7.65 sec where the obstacle detection occurred at 18cm vs 30cm, the robot should recognize that it needs to back up to the wall offset target before making the spin turn (done).
  • And, of course, porting all this new stuff to the right-side tracking sections.

16 November 2020 Update:

Tonight I got the first cut done at porting the TRACKING_LEFT algorithms over to the TRACKING_RIGHT case, and made what appears to be a successful right-side sandbox run, as shown in the following short video:

And here is the telemetry from the run:

Here’s an Excel plot of the Right side center distance and L/R motor speeds vs time.

Comparing the times from the video and the telemetry, it appears the video time is about 3-3.5 sec behind the telemetry values. In the video, Wall-E2 detects the first upcoming wall at about 9 sec, corresponding to the telemetry at 12.358 sec, where the wall is detected at a front distance of 19cm. The reason the wall didn’t get detected earlier is that the robot was tracking back toward the wall with a steering value of -0.3, which causes the wall detection distance to be reduced to suppress false positives.

The robot then takes 2 sec to back up to 33 cm and turn 90 deg CCW, and then it starts tracking the right-side wall again. It detects the next wall at 17.1 sec (14 sec in video) and 30 cm (the steering value at that point was 0.10, so no reduction in upcoming obstacle detection distance). Because the detection occurred at 30 cm, the robot doesn’t need to back up; it just makes another 90 deg spin turn CCW and starts tracking the right-side wall again. The last segment clearly shows that Wall-E2 is capable of tracking to and capturing the desired offset of 30 cm.

All in all, a very successful right-side sandbox run.

Stay tuned!

Frank

Adafruit DS3231 Module vs generic ZS-042 Module

Posted 30 October 2020

Back in May of 2018, well over 2 years ago, I posted about adding an Adafruit DS3231 RTC module to Wall-E2, my autonomous wall-following robot project. This addition went swimmingly until about 6 months later, in September of 2018, when I posted to the Adafruit support forum saying that I was having trouble with the ‘lostPower()’ function return values; it seemed to be returning FALSE (no power loss) even though I had removed the battery and turned off the power to the system. As described in the post, I eventually gave up on this in February of 2019, after discovering that I was getting radically different results when I used a different Arduino Mega and two different Adafruit DS3231 modules. Eventually I wound up in the situation where both DS3231 modules appeared to work correctly no matter what I did – strange!

Fast-forward to the present. In the process of adding a rear distance sensor to Wall-E2, I once again ran across the same anomalous behavior with the Adafruit DS3231 RTC module; the ‘lostPower()’ function stubbornly refused to declare a loss of power, even with the battery removed and the main power turned off. After a lot more investigation, including a dedicated test program and some more back-and-forth on the Adafruit forum, I (and the Adafruit support guys) was still unable to resolve the issue.

In desperation, I fished a generic ‘ZS-042’ DS3231 RTC module out of my parts bin and started working with it, thinking maybe I could use it to get a clue why the Adafruit modules were failing. As it turned out, the ZS-042 module worked perfectly from the get-go with the Adafruit RTC library, and the ‘lostPower()’ function correctly returned TRUE when main power was lost with the battery removed, and FALSE when power was lost but the battery was in place.

Here are some photos of the Adafruit and ZS-042 modules:

As can be readily seen, the ZS-042 module is considerably larger, due almost entirely to the decision to use the rechargeable LIR2032 Li-ion cell instead of the smaller non-rechargeable CR1220 type. Other differences:

  • The ZS-042 module includes a power LED. This LED illuminates when main power is available on the VCC pin, but not when the RTC module is running from the battery.
  • The Adafruit module exposes the RST (reset) line. If you need this, the ZS-042 won’t work for you.
  • When used with the supplied LIR2032, the battery is recharged and/or float-charged from VCC through a 1N4148 diode. This works fine if VCC is 5V, but doesn’t work at all if VCC is 3.3V.
  • The 32KHz output is open-drain on both modules, but only the ZS-042 module has a pullup (to VCC). What this means in practice is that you can’t easily monitor this output when operating off the battery, so it is hard to tell whether the RTC module is still running. My solution was to attach a completely separate power supply to the 32KHz output via a 10K pullup resistor. The Adafruit module needs this external pullup to see the 32KHz output on both battery and mains power; the ZS-042 module only needs it on battery power.
Adafruit module with temporary 10K pullup resistor installed. Note clock scope trace in background
ZS-042 module with main power applied to USB connector. 32KHz output is present even without an external pullup
Same setup but with USB connector removed. Now need a 10K external pullup to an external supply to monitor 32KHz clock

So, there you have it. The Adafruit module is smaller, has an additional output (RST), and uses a smaller, non-rechargeable CR1220 button cell. However, in my testing and use over a two-year period, I came to distrust its ability to reliably detect and report complete power-loss situations that would require a forced date/time update.

The ZS-042 module is significantly larger due to its use of the rechargeable Li-ion LIR2032 button cell, and doesn’t have the RST output. It is also considerably cheaper and widely available. Lastly, it appears to more reliably report complete power loss occurrences, allowing proper date/time updates.

For my money, I have replaced the Adafruit DS3231 module in my system with the ZS-042 module. In practice, complete RTC power failure events are very rare, so in all probability there would be no appreciable difference between the two choices. However, for those applications (like mine) where you really do want to know if the RTC loses its sense of time, I don’t feel comfortable with the Adafruit module.

If anyone has a better understanding of the Adafruit module, please feel free to comment.

30 October 2020 Update

I replaced the Adafruit DS3231 RTC module on my Wall-E2 autonomous wall-following robot with the ZS-042 DS3231 RTC module. As shown in the following photos, I had to re-arrange the I2C FRAM and I2C MPU6050 IMU modules in order to make room for the significantly larger ZS-042 module.

Original layout. Adafruit RTC module on left, MPU6050 IMU in center, FRAM on right
Straight replacement not going to work – oops!
After re-arrangement

Stay tuned,

Frank

Adding a VL53L0X Rear Distance Sensor to Wall-E2

Posted 24 October 2020

After documenting left-side wall-tracking success with Wall-E2, my autonomous wall-tracking robot (see this post and this post), I started thinking about improving Wall-E2’s obstacle avoidance performance.

Wall-E2 can encounter several distinct obstacle situations during wall tracking operations. In the simplest case, Wall-E2 approaches an upcoming corner while tracking a wall, and needs to know how to transition from tracking the current wall to tracking the upcoming wall. A more difficult situation arises when Wall-E2 is ‘stuck’ – prevented from moving forward by an obstacle that isn’t detected by its front LIDAR distance sensor: a shoe, say, or the curved foot of a coat rack. A third situation arises when Wall-E2 encounters an obstacle that just wasn’t there a second ago: a cat, a human foot, or a bag of groceries.

In the simple wall-to-wall transition case, all Wall-E2 has to do is make a right-angle turn away from the current wall and start following the next wall; this was successfully demonstrated several times in the previous posts. This maneuver utilizes a ‘spin-turn’ technique intended to minimize the backward movement of the robot while turning. This is done to prevent Wall-E2 from backing into the currently-tracked wall while attempting to turn toward and track the upcoming wall. Unfortunately, this maneuver is not always successful, whereupon Wall-E2 tries to climb backwards up the current wall, often with disastrous results.

In the ‘stuck’ case, Wall-E2 has to first recognize that it is no longer moving forward (or in any other direction for that matter), and then figure out what to do about it. Detection is accomplished by looking at the variance of front distance measurements over time; the ‘stuck’ condition is declared when the front-distance variance falls below a pre-determined value. A typical ‘stuck’ recovery maneuver is to back up slightly, and then make a right-angle turn away from the wall currently being tracked. This maneuver, while usually successful, has the same problem as the simple wall-to-wall transition; it sometimes results in the same backward-up-the-wall climb, with similar results.
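A minimal sketch of the variance-based detection, assuming a 10Hz update rate (so 50 samples covers 5 seconds) and an arbitrary threshold:

```cpp
// Declare 'stuck' when the variance of recent front-distance measurements
// falls below a threshold; no movement means the readings barely change.
const int FRONT_DIST_HIST_SIZE = 50;         // ~5 sec of history at 10Hz
const float STUCK_VARIANCE_THRESHOLD = 4.0f; // cm^2, assumed value
float frontDistHist[FRONT_DIST_HIST_SIZE];   // filled elsewhere, oldest-first

bool IsStuck()
{
  float mean = 0;
  for (int i = 0; i < FRONT_DIST_HIST_SIZE; i++) mean += frontDistHist[i];
  mean /= FRONT_DIST_HIST_SIZE;

  float var = 0;
  for (int i = 0; i < FRONT_DIST_HIST_SIZE; i++)
  {
    float d = frontDistHist[i] - mean;
    var += d * d;
  }
  var /= FRONT_DIST_HIST_SIZE;

  return var < STUCK_VARIANCE_THRESHOLD;
}
```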

The ‘suddenly appearing obstacle’ case can be handled in a manner similar to ‘stuck’ detection, but bypassing the variance measurement stage, and the resulting avoidance maneuver is similar to the ‘stuck’ case.

Wall-E2 currently handles all of the above cases fairly well, except when it backs into something while maneuvering to avoid the detected obstacle. So, my challenge was to find a way to avoid running into something while backing up from something else. The easy answer to this problem was to add a rear-distance sensor to Wall-E2, and then use that information to modify obstacle-avoidance behavior as necessary.

During the changeover from ‘ping’ style distance sensors to left and right 3-element arrays of VL53L0X time-of-flight sensors I learned quite a bit about the care and feeding of the VL53L0X, and also wound up with quite a few spares. So, I took one of the spares and installed it on the rear ‘bumper’ plate on Wall-E2, as shown in the following photo:

GY-530 VL53L0X mounted on rear ‘bumper’

Since the 2nd-deck Teensy 3.5 was already handling both 3-element VL53L0X arrays, I simply added the rear sensor to the left-hand array ‘Wire2’ daisy-chain, and connected its XSHUT pin to Teensy pin 8. Then I modified the Teensy’s program to initialize and poll the rear sensor in the same manner as all the others, and tested it to make sure it was responding properly to rear-aspect obstacles.
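The initialization is the same XSHUT dance used to give each daisy-chained sensor its own I2C address; here’s a sketch using the Adafruit_VL53L0X library (the address value is an assumption; XSHUT on pin 8 is per the text):

```cpp
#include <Wire.h>
#include "Adafruit_VL53L0X.h"

const int REAR_XSHUT_PIN = 8;          // rear sensor XSHUT on Teensy pin 8
const uint8_t REAR_SENSOR_ADDR = 0x30; // assumed unused address on Wire2
Adafruit_VL53L0X rearSensor;

void InitRearSensor()
{
  Wire2.begin();                      // rear sensor shares the Wire2 bus
  pinMode(REAR_XSHUT_PIN, OUTPUT);
  digitalWrite(REAR_XSHUT_PIN, LOW);  // hold the rear sensor in reset...
  delay(10);
  digitalWrite(REAR_XSHUT_PIN, HIGH); // ...then release it
  delay(10);
  rearSensor.begin(REAR_SENSOR_ADDR, false, &Wire2); // assign new address
}

uint16_t GetRearDistMM()
{
  VL53L0X_RangingMeasurementData_t measure;
  rearSensor.rangingTest(&measure, false);
  return (measure.RangeStatus != 4) ? measure.RangeMilliMeter : 0; // 4 = out of range
}
```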

The next step is to incorporate rear-aspect distance information into the various obstacle avoidance algorithms in the main program.

‘Stuck’ case:

The ‘stuck’ case by definition occurs when the mathematical variance of the last 3-5 seconds of forward distance measurements falls below a set value, indicating that the robot is no longer moving forward or backward. When this happens while wall tracking, the robot has to decide what to do. The current response is to back up for 1 second at half speed, execute a 90º ‘spin turn’ away from the nearest wall, and then go back to normal operations.

I think I would like to enhance this algorithm as follows (a code sketch follows the list):

  • If the measured front distance is less than MAX_FRONT_DISTANCE_CM (currently set at 400 cm) by at least STUCK_BACKUP_DISTANCE_CM (currently set at 25), then back up by STUCK_BACKUP_DISTANCE_CM using front distance measurements as the primary means of terminating the backup maneuver. If the front distance measurement cannot be used, but the rear distance measurement is valid (less than MAX_REAR_DISTANCE_CM, currently set at 100), then back up using the rear sensor measurement. If neither measurement is available, then revert back to a 1 second half-speed movement. In all cases, use the rear distance measurement to prevent ‘reverse wall climb’ by stopping the motors if the robot gets too close to an obstacle while backing up.
  • Execute a ‘spin turn’ away from the nearest wall – this is the same as the current algorithm.
  • Execute a ‘rolling turn’ back toward the original direction of travel. This should offset the robot further away from the nearest wall, and hopefully allow it to bypass the obstacle.
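Here’s the sketch: termination on front-distance growth where possible, rear-distance shrinkage as a fallback, a 1-sec timer as a last resort, with a rear safety stop in all cases (the helper names and the safety limit are assumptions):

```cpp
extern int GetFrontDistCm();                 // assumed sensor accessors
extern int GetRearDistCm();
extern void RunBothMotorsReverse(int speed); // assumed motor helpers
extern void StopBothMotors();
const int REAR_STOP_DIST_CM = 10;            // assumed 'wall climb' guard

void BackupForStuckRecovery()
{
  int startFront = GetFrontDistCm();
  int startRear = GetRearDistCm();
  bool useFront = (MAX_FRONT_DISTANCE_CM - startFront) >= STUCK_BACKUP_DISTANCE_CM;
  bool useRear = !useFront && (startRear < MAX_REAR_DISTANCE_CM);
  unsigned long startMsec = millis();

  RunBothMotorsReverse(127);                  // half speed
  while (GetRearDistCm() > REAR_STOP_DIST_CM) // always guard the rear
  {
    if (useFront && GetFrontDistCm() - startFront >= STUCK_BACKUP_DISTANCE_CM)
      break;                                  // moved back far enough (front)
    if (useRear && startRear - GetRearDistCm() >= STUCK_BACKUP_DISTANCE_CM)
      break;                                  // moved back far enough (rear)
    if (!useFront && !useRear && millis() - startMsec >= 1000UL)
      break;                                  // 1-sec half-speed fallback
  }
  StopBothMotors();
}
```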

Left Side Wall Tracking Success With VL53L0X Array, Part II

Posted 10 October 2020

After the left-wall tracking success described previously in this post, I made some more adjustments and also set up a ‘tracking sandbox’ in my lab to test Wall-E2’s ability to detect & respond to upcoming obstacles. Here’s a short video showing Wall-E2 in action:

Tracking run demonstrating obstacle avoidance maneuvers

Here’s the raw output from the run:

And here is an Excel plot of just the movement sections of the above, highlighting the avoidance maneuvers.

Left-side wall distances are shown in mm, while the front distance is shown in cm. Note the 1-2 sec gaps during turns.

Comparing the Excel plot to the video, the front distance plot shows a monotonically decreasing value and then a large jump after each obstacle avoidance turn. It appears that the robot acquires and tracks the 30cm offset target successfully on the first wall, but doesn’t do as well on the second one. It was much more successful on the third wall. The plot for the last wall is only about 2 seconds long.

All in all, this looks like a pretty successful run for Wall-E2. It tracked three different walls (the fourth wall was too short to track) and successfully avoided obstacles three times – woo hoo!

12 October 2020 Update:

On the above ‘sandbox’ run, I noticed that at the end of the third leg, about 14 seconds into the run, the turn at the white foam-core wall wasn’t a ‘step turn’, but a ‘backup and turn’ triggered by the front distance going below the front obstacle limit of 20 cm rather than the tracking obstacle clearance limit of 30 cm. Here are two output lines that illustrate the difference:

and

In the video, these events are at about 7 & 14 seconds respectively. From this I concluded that the front distance, at least, wasn’t getting updated often enough to keep the robot from getting too close to the obstacle before it realized there was a problem. At the time, the update rate for the system was set at 5Hz, i.e. 200 mSec between updates. If the robot is travelling at 50 cm/sec, it will travel 10 cm between distance updates – ouch!

So, I changed the timer interrupt timeout value for a 10Hz rate, and ran the ‘sandbox’ run again. This time when I looked at the output I could see that each leg terminated with something like

and it was clear that the updates were happening about every 100 mSec. Here’s the output:

and a short video:

And an Excel plot showing the left wall and forward distances progressing through the run.

Note that the front distance is shown in cm, while the left wall distances are shown in mm

At this point, I’m pretty happy with Wall-E2’s new-found wall tracking superpowers, at least for the left wall case. Now I need to port the V7 left-side-only code back into the main program and also port it to the right wall case.

Stay tuned!

Frank