Monthly Archives: December 2022

Move to a Specified Distance, Revisited

Posted 24 December, 2022

As part of the suite of tools associated with wall tracking and IR beam homing, I created a set of ‘move to specified distance’ routines using my home-grown PID algorithm as follows:

  • MoveToDesiredFrontDistCm()
  • MoveToDesiredRearDistCm()
  • MoveToDesiredLeftDistCm()
  • MoveToDesiredRightDistCm()

I saw some odd problems in my past wall-tracking exercises, so I thought it would be a good idea to go back and test these in isolation to work out any bugs. As usual, I constructed a limited part-task program for this purpose, and ran a number of tests on my desktop ‘range’. I started with the ‘MoveToDesiredFrontDistCm()’ function, as shown below:

MoveToDesiredFrontDistCm():

Here’s some output from a typical run:

The movement goes OK, and the robot telemetry says it stopped very close to the desired 60cm distance. However, when I measured the actual distance from the ‘wall’ to the robot, I got more like 67 or 68cm, probably indicating that the robot coasted some after the motors were turned off. When I instrumented the code to show the next 10 distance measurements after the exit from the subroutine, it became easy to see that this was the case – the robot coasted from 59 to 67cm. The robot should correct this by going back the opposite way, but it doesn’t because the last measurement (59cm) fits inside the 59-61cm ‘basket’ for subroutine termination (the ‘while’ loop termination criteria).
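
For context, the move routine is essentially a PID loop wrapped around the front distance measurement. Here's a minimal sketch of the idea (not the actual robot code – GetFrontDistCm(), SetWheelSpeeds() and StopMotors() are stand-ins for the real sensor/motor interface):

    #include <Arduino.h>

    float GetFrontDistCm();                   // front LIDAR distance, cm (stand-in)
    void  SetWheelSpeeds(float l, float r);   // signed speeds, + = forward (stand-in)
    void  StopMotors();                       // stand-in

    const float Kp = 1.5, Ki = 0.1, Kd = 0.2; // the triplet that tested well
    const float TOL_CM = 1.0;                 // the +/-1cm termination 'basket'
    const int   LOOP_MSEC = 200;              // distance measurement interval

    void MoveToDesiredFrontDistCm(float tgtCm)
    {
        float integral = 0, lastErr = 0;
        float err = GetFrontDistCm() - tgtCm; // + means still too far away
        while (fabs(err) > TOL_CM)            // exit once inside the basket
        {
            integral += err;
            float out = Kp * err + Ki * integral + Kd * (err - lastErr);
            lastErr = err;
            SetWheelSpeeds(out, out);         // straight toward/away from wall
            delay(LOOP_MSEC);
            err = GetFrontDistCm() - tgtCm;
        }
        StopMotors();                         // robot may still coast a bit
    }

Note that the last measurement before the 'while' loop exits is what lands in the 59-61cm basket – nothing in the loop accounts for coasting that happens after StopMotors().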

So, I thought what I could do is check the reported front distance after function exit, and just call it again if the robot had coasted too far from the target. The second run should get much closer, as the starting error term would be much smaller. So now the test code looks like this:
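
In sketch form (same stand-in names as above), the added retry logic is just:

    MoveToDesiredFrontDistCm(60);
    delay(2000);                               // let any coasting die out
    if (fabs(GetFrontDistCm() - 60) > TOL_CM)  // coasted out of the basket?
    {
        MoveToDesiredFrontDistCm(60);          // second pass starts much closer
    }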

This should have worked well, except when I tried it, the function exited with a ‘STUCK_AHEAD’ error – oops! Here’s a run going from 60 to 30cm:

The ‘stuck’ checks have to be there because the robot can’t ‘see’ obstacles that are too low to interrupt the front LIDAR beam, or that aren’t directly in the line of sight – but how to keep them from triggering falsely? The ‘stuck’ checks depend on a variance calculation over the last 50 measurements (held in a 50-element circular buffer), so I began to wonder why the front variance was decreasing so rapidly while the rear variance wasn’t (one would think they would behave more or less the same). So I went back and printed out the front & rear distance array contents after each run, and saw that the reason the front variance was decreasing so rapidly was that I was using a 50mSec time interval, meaning the 50-element array was getting filled in 2500mSec – just 2.5sec. So, in order to get more variance in a normal run, either the run has to be done at a faster speed (leading to more overshoot) or the timing interval has to be increased. In earlier work I had discovered that the front LIDAR system produces errors for long-distance measurements when using short measurement intervals, so I increased the measurement interval from 50 to 200mSec, and now the variance decreases at a much slower rate – yay!
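
For reference, the ‘stuck’ detection amounts to something like the following sketch (the function names and threshold value here are my stand-ins, not the actual firmware’s):

    const int   NUM_SAMPLES = 50;          // measurement history length
    const float STUCK_VAR_THRESHOLD = 1.0; // illustrative value only

    float frontDists[NUM_SAMPLES];         // circular buffer of front distances
    int   distIdx = 0;

    void AddFrontDist(float distCm)        // called once per measurement interval
    {
        frontDists[distIdx] = distCm;
        distIdx = (distIdx + 1) % NUM_SAMPLES;
    }

    float FrontDistVariance()
    {
        float mean = 0;
        for (int i = 0; i < NUM_SAMPLES; i++) mean += frontDists[i];
        mean /= NUM_SAMPLES;

        float var = 0;
        for (int i = 0; i < NUM_SAMPLES; i++)
        {
            float d = frontDists[i] - mean;
            var += d * d;
        }
        return var / NUM_SAMPLES;
    }

    // if the distance readings stop changing, the robot isn't actually moving
    bool IsStuckAhead() { return FrontDistVariance() < STUCK_VAR_THRESHOLD; }

At the original 50mSec interval this buffer spans only 2.5sec of history; at 200mSec it spans 10sec, so a slow, well-behaved approach no longer drives the variance down toward the ‘stuck’ threshold.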

With the longer 200mSec measurement interval, the movement routine now has time to adjust for overshoots, as shown below:

As the above run shows, the robot stopped very close (59cm) to the desired 60cm, without any significant overshoot, and the front variance actually increased significantly during the run – nice!

The following run was in the other direction – from 60 to 30cm:

The robot did a very good job of stopping right on the mark, but coasted 2cm further over the next 2sec. This caused the test program to run the movement routine a second time, but it almost immediately exited again. The front/rear variance numbers were quite high, well out of the danger zone (the error codes shown did not affect the routine – it only looks for ‘STUCK_AHEAD’ and ‘STUCK_BEHIND’, which trigger on front and rear variance thresholds respectively).

So, for at least the ‘MoveToDesiredFrontDistCm()’ function, it appears that PID = (1.5, 0.1, 0.2) works very well. On to the next one!

MoveToDesiredRearDistCm():

Using the same PID set as for MoveToDesiredFrontDistCm() produces excellent results for the ‘rear motion’ routine as well. Here’s a run going from 60 to 30cm based on the rear distance sensor:

MoveToDesiredLeftDistCm():

Next, I tackled the ‘MoveToDesiredLeftDistCm()’ function, which uses the left center distance measurement (corrected for orientation angle). This went fairly quickly, as the PID values for the front/back case seemed to work well for the ‘side’ case too. Here’s the output and a short video from a typical run:

MoveToDesiredRightDistCm():

For this side, I simply copied ‘MoveToDesiredLeftDistCm()’ and changed all occurrences of ‘Left’ to ‘Right’. Here’s the telemetry and a short video from a typical run:

Summary:

It looks like all four (front/back/left/right) motion features are now working fine, with the same PID = (1.5, 0.1, 0.2) configuration – yay!

Now the challenge is to integrate this all back into my mainline program and start testing wall tracking again.

Stay Tuned,

Frank

WallE3 Spin Turn, Revisited

Posted 23 December, 2022

Just over a year ago I made a radical change to WallE2’s form factor to improve its turning performance. Since then I have been making other changes and improvements to WallE3, including a recent successful effort to improve WallE3’s ‘rolling turn’ capabilities. Based on that, this post describes a similar effort to improve WallE3’s ‘spin turn’ performance.

To date, WallE3’s ‘spin turn’ performance has been lackluster at best. Rather than turning smoothly, it tends to ‘motorboat’ its way through the turn, starting and stopping several times along the way. After achieving some success with smoothing out the ‘rolling turn’ feature, I decided to take another crack at ‘spin turn’.

I started by creating yet another part-task program that does nothing but support spin turn operations. This program accepts a set of parameters, calls the existing ‘SpinTurn()’ subroutine to execute the specified spin turn, and then returns to the parameter entry state for another go.

The starting point for the effort was a 90º CCW turn at 45º/s, with PID = (1,0,0). As shown below, this produced a nice smooth turn, but way too slow (17º/s actual vs 45º/s commanded):
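
For reference, the turn itself is rate-controlled: a PID loop compares the commanded turn rate against the measured rate from the IMU and drives the wheels in opposition. A minimal sketch using the starting triplet (the IMU/motor function names are stand-ins, and heading wrap-around is ignored for clarity):

    #include <Arduino.h>

    float GetIMUHeadingDeg();                // current heading, deg (stand-in)
    float GetYawRateDps();                   // measured turn rate, deg/sec (stand-in)
    void  SetWheelSpeeds(float l, float r);  // + = forward (stand-in)
    void  StopMotors();                      // stand-in

    void SpinTurnSketch(float turnDeg, float rateDps)  // e.g. (90, 45) CCW
    {
        const float Kp = 1.0, Ki = 0.0, Kd = 0.0;      // starting triplet
        float startHdg = GetIMUHeadingDeg();
        float integral = 0, lastErr = 0;
        while (fabs(GetIMUHeadingDeg() - startHdg) < turnDeg)
        {
            float err = rateDps - GetYawRateDps();     // rate error, deg/sec
            integral += err;
            float out = Kp * err + Ki * integral + Kd * (err - lastErr);
            lastErr = err;
            SetWheelSpeeds(-out, +out);                // CCW: left back, right fwd
            delay(50);
        }
        StopMotors();
    }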

After a couple of dozen trials, I gradually worked my way up to a ‘final’ PID of (0.7,0.3,0). Here are plots and videos for various turn lengths and rates.

SpinTurn(CCW,30,45)PID(0.7,0.3,0)

SpinTurn(CCW,60,45)PID(0.7,0.3,0)

SpinTurn(CCW,90,45)PID(0.7,0.3,0)

After running these tests, I decided to see how WallE3 would perform on carpet with these same settings. As shown below, carpet performance was pretty much identical to the smooth surface runs.

SpinTurn(CCW,30,45)PID(0.7,0.3,0)
SpinTurn(CCW,60,45)PID(0.7,0.3,0)
SpinTurn(CCW,90,45)PID(0.7,0.3,0)

So, with much better performance now for both the ‘rolling turn’ and ‘spin turn’ features, it is now probably time to revisit the whole wall-tracking thing, and see if I can achieve better performance than in past efforts.

Stay tuned,

Frank

Using mklink to centralize Teensy OTA Files

Posted 20 December 2022

This post describes the actions I have taken to centralize the locations of the ‘board.txt’ and ‘TeensyOTA1.ttl’ (Tera Term macro) files that facilitate the Teensy OTA capability, so that all my various firmware projects for my wall-following robot use the same set of files. This is done by using the Windows ‘mklink’ command to create ‘hard’ links from the various program folders to a single folder called ‘Robot Common Files’.

For the past five or six years I’ve been working on a 4-wheel robot project to autonomously navigate around my house using a (by now) fairly sophisticated wall-following algorithm. As I have worked through the various problems, I’ve gone through at least three different physical form factors, starting with a 3-wheel design (two driven wheels, one caster wheel) and ending up (so far) with a custom 4-wheel ‘wide-track’ model.

Also along the way I have gone through any number of software versions. I work mainly within the Arduino ecosystem, but I don’t use the Arduino IDE directly; I use Microsoft Visual Studio 2022 Community Edition with the Visual Micro extension, and this has been a great platform.

My latest hardware upgrade was to replace my original Arduino Mega 2560 processor with a Teensy 3.5 main processor, with a firmware update to achieve over-the-air (OTA) updates using a Wixel RF pair.

I now have at least twenty different software/firmware projects that I run on the robot for different reasons. I have a ‘main line’ project that incorporates all features required to fully execute autonomous wall-following, and then I have lots of smaller projects aimed at exercising some small subset of the full feature set – things like distance calibration testing for the robot’s two 3-element VL53L0X time-of-flight sensor arrays, testing out different ways of managing ‘rolling’ and ‘spin’ turns, and trying different algorithms for wall-tracking. All of these projects use the same over-the-air (OTA) firmware/hardware configuration for firmware updates. Up until now I have simply been copying the two required files – ‘board.txt’ and ‘TeensyOTA1.ttl’ (a Tera Term macro) – from project folder to project folder, but a recent Visual Micro update broke my OTA routine, and I discovered that having these two files copied into multiple project folders didn’t work so well – oops!

So, I decided to create a ‘Robot Common Files’ folder in my Documents\Arduino folder tree, place these two files (along with the required FlashTxx.cpp/h files and my ‘enums.h’ file) into this folder, and then use the ‘mklink /H’ command (12/23/22 note: Admin privileges are not required) to create ‘hard’ links from each project’s folder to the single files in my Robot Common Files folder. The commands I used to accomplish this were:
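
Something like the following (my actual paths may differ slightly; ‘Robot Common Files’ lives in the same Documents\Arduino tree):

    mklink /H "%USERPROFILE%\Documents\Arduino\WallE3_AnomalyRecovery_V2\board.txt" "%USERPROFILE%\Documents\Arduino\Robot Common Files\board.txt"
    mklink /H "%USERPROFILE%\Documents\Arduino\WallE3_AnomalyRecovery_V2\TeensyOTA1.ttl" "%USERPROFILE%\Documents\Arduino\Robot Common Files\TeensyOTA1.ttl"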

where ‘WallE3_AnomalyRecovery_V2’ gets replaced each time with the actual project name.

Now, when I want to edit either the ‘board.txt’ or ‘TeensyOTA1.ttl’ file, I can simply right-click on the filename in any project folder and select ‘edit with NotePad++’, and the single file in Robot Common Files gets opened for edit, and the changes are immediately available in all projects that use the Teensy OTA feature.

31 January 2023 Update:

After trying this trick a few times, I realized I had left out a step, so this update fixes that. Below is the entire Cmd line session for my latest ‘new project’:
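
It went something like this (‘WallE3_NewProject_V1’ stands in for the actual project name):

    cd "%USERPROFILE%\Documents\Arduino\WallE3_NewProject_V1"
    mklink /H board.txt "..\Robot Common Files\board.txt"
    mklink /H TeensyOTA1.ttl "..\Robot Common Files\TeensyOTA1.ttl"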

The first line above shows the ‘cd’ step, which I had left out of the previous post. After that, I just copy/pasted the ‘mklink….’ text for each of the two files, and then everything was wonderful again.

Stay Tuned,

Frank

WallE3 Rolling Turn, Revisited

Posted 04 December 2022,

A few days ago I described my revisit to WallE3’s ‘parallel orientation search’ feature, a feature that had been more or less discarded due to poor performance. As described in this post and this post, I have been able to obtain much better accuracy and precision from the ST Micro VL53L0X sensor arrays.

With this in mind, I decided to revisit my old ‘Rolling Turn’ algorithm, to see if I could get it’s performance up to the point where it could be used instead of ‘Spin Turn’ for the ‘parallel find’ maneuver. The Spin Turn maneuver involves rotating one set of wheels backwards and the other forwards to effect a ‘spin in place’ action. The Rolling Turn maneuver involves rotating both sets of wheels in the same direction, but at different speeds. My going-in assumption was that a rolling turn should allow more accurate control of the turn rate, as there should be less acceleration involved.
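
The difference between the two maneuvers comes down to a single line of motor code – something like this, where ‘turnOut’ is the PID output and SetWheelSpeeds() is a stand-in for the actual motor interface (positive = forward):

    // Spin turn: the two wheel sets oppose each other --> rotate in place
    SetWheelSpeeds(-turnOut, +turnOut);                        // CCW spin

    // Rolling turn: both wheel sets forward, at different speeds --> arc
    SetWheelSpeeds(baseSpeed - turnOut, baseSpeed + turnOut);  // CCW arc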

In preparation for this project, I went back to the web and read some more about PID tuning. This time I found this well-written and informative post by ‘marco_c’ – Thanks marco!

I had dropped the original ‘RollingTurn’ function from the code some time ago, so I started this experiment by copying SpinTurn() and adapting it to move both sets of wheels in the same direction (forward, in this case) instead of in opposite directions. The conversion from SpinTurn() to RollingTurn() took surprisingly little effort, as everything but the motor direction code is pretty much the same.

I created a part-task program by creating a new ‘WallE3_RollingTurn_V1’ Arduino project in VS2022 Community and then copying over my WallE3_WallTrackTuning_V4 code. The WallE3_WallTrackTuning_V4 project is also a part-task test vehicle, and as such already had almost all the required parameter entry code in setup().

After working my way through the usual number of mistakes, I finally got the project working smoothly. I copied captured telemetry output to Excel and used its excellent plotting facilities to investigate possible PID triplet combinations. After a day or so of work, I finally homed in on a triplet of PID(1.1, 0.1, 0) as providing a nice, smooth turning action for a 180º turn at 30º/sec. Then I tested the same configuration using both a 60º/sec and a 90º/sec commanded turn rate. As shown in the plots below, the robot did well in all three of these conditions:

180º turn at 30º/sec
180º turn at 60º/sec
180º turn at 90º/sec

Here’s a short video showing the 180º turn at 90º/sec configuration:

180º turn at 90º/sec

My test program as described above only handles forward motion CCW & CW turns. If I decide to put this feature back into WallE3’s main codebase, I’ll have to expand it a bit to handle the backward motion case as well.

05 December 2022 Update:

I modified the RollingTurn() function to accommodate both forward and backward motion, and both CW & CCW turns – four cases in all.

Also, as I was doing this, it occurred to me that I might want to revisit my ‘SpinTurn()’ function in light of my better understanding of PID tuning. Maybe I can get SpinTurn to operate a little more smoothly.

Stay tuned,

Frank

Wall Parallel Find PID Tuning, Part II

Posted 01 December 2022

I thought I had the ‘parallel find’ problem solved about 18 months ago, back in April of 2021, but it seems this is a problem that refuses to die (well, up until now, anyway). This post describes the ‘new, new!’ solution, based on much improved distance measurement performance from the VL53L0X time-of-flight sensors and associated software. Basically, I discovered that I had been doing VL53L0X distance calibration all wrong, and a big part of the problem was my use of an integer instead of floating point data type to represent distance values. The VL53L0X sensor reports distances in integer millimeters, but when I converted the integer mm values to integer cm (without even rounding – but by truncation – yikes!) the resulting loss of precision caused significant problems with the parallel find algorithm.
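
The precision loss is easy to see in a two-line example:

    uint16_t distMm    = 598;              // raw VL53L0X reading, millimeters
    int      distCmInt = distMm / 10;      // = 59   – truncation loses 0.8cm
    float    distCmFlt = distMm / 10.0f;   // = 59.8 – keeps sub-cm precision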

So, after converting all my VL53L0X distance variables from integer to float (which turned out to be pretty easy, as I had practiced good modularity and low coupling – yay!), I started over again with a two-stage strategy. The first part addressed the calibration problem by redoing a series of tests to acquire compensation curves for both the right and left-side sensor arrays, which were then programmed into the Teensy 3.5 processor that handles the VL53L0X arrays. The second part was to revisit the ‘parallel find’ algorithm itself, and that is the subject of this post.

The parallel find algorithm implements a two-stage search for the parallel condition, defined as the robot (or, more accurately, the relevant VL53L0X array) orientation that produces identical distance readings from the front and rear sensors of the relevant (i.e. left or right) array. The first ‘coarse’ stage searches for the change in sign of the steering value, and the second ‘fine tune’ stage searches for a steering value < 0.01, meaning that the front and rear distance values are nearly identical.

Here is the code for the parallel find subroutine:
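
In outline it looks something like this (stand-in names throughout; the steering value is assumed to be proportional to the front-minus-rear distance difference on the tracked side, so it passes through zero at the parallel orientation):

    #include <Arduino.h>

    float GetLeftSteeringVal();         // ~ (front - rear) difference (stand-in)
    void  SpinStepDeg(float deg);       // small in-place turn, signed (stand-in)

    void RotateToParallelOrientation()
    {
        // Stage 1 ('coarse'): turn until the steering value changes sign
        float sign = (GetLeftSteeringVal() >= 0) ? 1.0f : -1.0f;
        while (sign * GetLeftSteeringVal() > 0)
        {
            SpinStepDeg(-sign * 10.0f); // turn toward parallel (sign assumed)
            delay(200);                 // let the distance sensors catch up
        }

        // Stage 2 ('fine'): small steps until front/rear nearly match
        while (fabs(GetLeftSteeringVal()) > 0.01f)
        {
            float s = (GetLeftSteeringVal() > 0) ? 1.0f : -1.0f;
            SpinStepDeg(-s * 2.0f);     // smaller corrective steps
            delay(200);
        }
    }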

And here is a short video and the telemetry output for a successful parallel find run:

So it appears that my original parallel find algorithm now performs much better, due mainly to the more accurate distance reporting obtained by changing from integer to float data type, and better distance compensation curves for each of the six sensors in the two VL53L0X arrays. Now I need to back-port this improved code into my main robot navigation code, probably by way of the Wall Track part-task program described in my earlier ‘More Wall Track PID Tuning‘ post.

29 December 2022 Update:

After going through some additional ‘part-task’ tune-up exercises, I started the process of integrating all the changes back into my Wall Tracking PID Tuning code. After the usual number of screw-ups I got the left-side offset capture feature working, up to the point of turning back parallel to the wall. The code at this point just used ‘SpinTurn()’ to turn 30º in the opposite direction from the first turn away from the wall, with a note that this was done because ‘Parallel Find is fatally flawed’. Well, since I had just gotten through testing this feature, I thought I could just drop it in. When I did, the robot promptly turned in the wrong direction and started spinning – yikes! Back to the ‘Parallel Find’ drawing board!

After making some code changes to make the part-task code a bit easier to use, I got everything working again, with basically the same PID values (100,0,0) as before. Here’s the output and a short video from a run:

30 December 2022 Update:

Not so fast, pardner! Well, it seems I was a bit premature about completing this effort – after a few more trials I realized this wasn’t working anywhere near as well as I thought – oops!

So, after lots more trials aimlessly wandering around PID space, I came up with new values that *seem* to work (for the left side case, at least). I also discovered that I really don’t need a two-stage process (‘coarse’ tune followed by ‘fine’ tune) to get a decent result, as long as the robot turns slowly enough to allow the distance measurement changes to keep up. Here are a couple of runs (telemetry output and short video) for the new setup.

05 January 2023 Update:

Well, even the above ‘slow as she goes’ idea doesn’t work all the time, or all that well. I wound up instrumenting what was going on by doing the following:

  • Turning the robot in 10º steps, followed by a 1500mSec pause to let the measurements catch up
  • At 100mSec intervals during the 1500mSec pause, computing and displaying the 5-pt running average of the left front and rear distances, along with the corresponding running-average steering value (see the sketch after this list)
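
A sketch of that instrumentation loop, using SpinTurn(CCW, deg, deg/sec) as in the earlier posts (the sensor-read and averaging functions are stand-ins, and the /100 steering-value scaling is my assumption):

    #include <Arduino.h>

    float GetLeftFrontDistCm();                             // stand-in
    float GetLeftRearDistCm();                              // stand-in
    float RunningAvg5(float *buf, int &idx, float newVal);  // 5-pt avg (stand-in)
    void  SpinTurn(bool ccw, float deg, float degPerSec);   // as used elsewhere
    const bool CCW = true;

    void StepAndSample()
    {
        float fBuf[5] = {0}, rBuf[5] = {0};
        int   fIdx = 0, rIdx = 0;
        for (int step = 0; step < 9; step++)   // nine 10-deg increments
        {
            SpinTurn(CCW, 10, 45);
            for (int msec = 0; msec < 1500; msec += 100)  // settle period
            {
                float fAvg = RunningAvg5(fBuf, fIdx, GetLeftFrontDistCm());
                float rAvg = RunningAvg5(rBuf, rIdx, GetLeftRearDistCm());
                float steerval = (fAvg - rAvg) / 100.0f;  // assumed scaling
                Serial.printf("%d\t%.2f\t%.2f\t%.3f\n", msec, fAvg, rAvg, steerval);
                delay(100);
            }
        }
    }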

When I did this, I got the following plot:

1500mSec plot of the steering value computed from a 5-pt running average of the left front and left rear distances. Each line is a different 10º orientation step, with the lowest (yellow) line representing a parallel or near-parallel orientation.

As can be seen from the above plots, there is significant variation over the 1500mSec interval, even though the robot is stationary. In particular, the last plot shows that at the start of the 1500mSec stationary period, the steering value goes from 0.15 (i.e. not parallel at all) to near zero (i.e. very parallel), even though the robot isn’t moving at all.

I have no idea why this is happening, but it sure screws up any thoughts of rapidly finding a parallel orientation, and even sheds significant doubt on the entire idea of using multiple VL53L0X sensors for wall tracking – UGH!!

This time I tried a ‘parallel find’ run using 5º SpinTurn() steps, with no averaging. Here’s the data, and a short video showing the result:

This actually looks pretty good, and the unaveraged distance sensor results, although noisy, behaved reasonably well.

07 January 2023 Update:

As I was drifting off to sleep after yesterday’s ‘successful’ trial runs, happily visualizing the robot using the ‘SpinTurn()’ facility to gradually turn to the (mostly) parallel position (I’m a visual person, so I often turn programming problems into visual ones), it suddenly struck me that I was taking the long way around the barn. Here I was sneaking up on the parallel orientation in small SpinTurn() increments, waiting for the steering value to change sign, when all I really had to do was use my ‘GetWallOrientDeg(float steerval)’ function. This function takes the current steering value as its sole parameter and returns the current orientation angle with respect to the nearest wall; with that information I could use SpinTurn() to rotate to the parallel orientation in one shot – cool!

So, I recoded my test program to do just that, with the ‘enhancement’ of taking a second or so at the start to acquire a 10-point average of the steering value, to reduce errors due to noisy distance sensor outputs. Here’s the output and a short video showing the result:

This seemed to work very well, and is a much simpler algorithm than trying to use an ‘on the fly’ steering value and a PID machine. I believe with this result I can get back to the problem I was originally trying to solve – that of reasonable wall following.

02 February 2023 Update:

Referring to my current quest to incorporate the results from all my previous ‘part-task’ programs into a ‘WallE3_Complete_V1’, I have been going through each ‘part-task’ program to verify results, and then incorporating the results (usually in the form of a single function) into the ‘complete’ program. When I got to WallE3_ParallelFind_V1, I discovered that, although I had made quite a bit of progress, including drastically simplifying the ‘parallel find’ algorithm, it still didn’t work very well.

After some additional work, I now think that ‘parallel find’ is ready for inclusion in the complete program. Here’s the functional code from WallE3_ParallelFind_V1:
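
In sketch form, the simplified algorithm is (the steering-value read is a stand-in; GetWallOrientDeg() and SpinTurn() are as described above, and the sign convention on the returned angle is assumed):

    // One-shot parallel find: average the steering value briefly to beat
    // down sensor noise, convert it to an orientation angle, then make a
    // single corrective spin turn.
    float steerSum = 0;
    for (int i = 0; i < 10; i++)            // ~1 sec of samples
    {
        steerSum += GetLeftSteeringVal();   // stand-in sensor read
        delay(100);
    }
    float steerAvg = steerSum / 10.0f;

    float offsetDeg = GetWallOrientDeg(steerAvg); // angle w.r.t. nearest wall
    SpinTurn(offsetDeg > 0 ? CCW : CW, fabs(offsetDeg), 45); // one shot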

The above code will replace the code in ‘RotateToParallelOrientation()’. See my post on consolidating everything into a ‘Complete’ program for more details.

Stay tuned,

Frank