
More Wall Track PID Tuning Work

Posted 15 October 2022

While working on my new ‘RunToDaylight’ algorithm for WallE3, my autonomous wall-following robot, I noticed that when WallE3 finds a wall to track after travelling in the direction of most front distance, it doesn’t do a very good job at all, oscillating crazily back and forth, and sometimes running head-first into the wall it is supposedly trying to track. This is somewhat disconcerting, as I thought I had long ago established a good wall tracking algorithm. So, I decided to once again plunge headlong into the swamps and jungles of my wall-tracking algorithm, hoping against hope that I won’t get eaten by mosquitos, snakes or crocodiles.

I went back to my latest part-task program, ‘WallE3_WallTrackTuning’. This program actually does OK when the robot starts out close to the wall, as the ‘CaptureWallOffset()’ routine is pretty effective. However, I noticed that when the robot starts out within +/- 4cm from the defined offset value, it isn’t terribly robust; the robot typically just goes straight with very few adjustments, even as it gets farther and farther away from the offset distance – oops!

So, I created yet another part-task program ‘WallE3_WallTrackTuning_V2’ to see if I could figure out why it isn’t working so well. With ‘WallE3_WallTrackTuning’ I simply called either TrackLeftWallOffset() or TrackRightWallOffset() and fed it the user-entered offset and PID values. However, this time I decided to pare down the code to the absolute minimum required to track a wall, as shown below (user input code not shown for clarity):
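
In outline, the pared-down tracking loop does something like the sketch below, assuming it is called at a fixed interval; the function and variable names (GetLeftCenterDistCm(), SetMotorSpeeds(), and so on) are illustrative placeholders, not the actual names in WallE3_WallTrackTuning_V2:

    // Minimal distance-only wall-tracking update (illustrative sketch).
    // Called at a fixed interval; timing is folded into the Kp/Ki/Kd gains as usual for hobby PID code.
    const float WALL_OFFSET_CM = 40.0f;            // desired offset (the PID setpoint)
    const int   MOTOR_SPEED_BASE = 64;             // nominal wheel speed (0-127 range)
    float Kp = 10.0f, Ki = 0.0f, Kd = 0.0f;        // user-entered PID values

    void TrackLeftWall()
    {
        static float lastErr = 0, errSum = 0;
        float measDistCm = GetLeftCenterDistCm();  // center VL53L0X of the left-side array
        float err = measDistCm - WALL_OFFSET_CM;   // PID input vs setpoint
        errSum += err;
        float output = Kp * err + Ki * errSum + Kd * (err - lastErr);
        lastErr = err;

        // err > 0 ==> too far from the left wall ==> slow the left wheel, speed up the right
        int leftSpeed  = constrain(MOTOR_SPEED_BASE - (int)output, 0, 127);
        int rightSpeed = constrain(MOTOR_SPEED_BASE + (int)output, 0, 127);
        SetMotorSpeeds(leftSpeed, rightSpeed);
    }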

The big change from previous versions was to go back to using the desired offset distance as the setpoint for the PID algorithm, and using the measured distance from the (left or right) center VL53L0X sensor as the input to be controlled. Now one might be excused for wondering why the heck I wasn’t doing this all along, as it does seem logical that if you want to control the robot’s distance from the near wall, you should use the desired distance as the setpoint and the measured distance as the input – duh!

Well, way back in the beginning of time, when I changed over from dual ultrasonic ‘Ping’ sensors to dual arrays of three VL53L0X LIDAR sensors well over 18 months ago, I wound up using a combination of the ‘steering value’ ( (front – rear)/100 ) and the reported center distance – desired offset as the input to the PID calc routine, as shown in the following code snippet:

This is the line that calculates the input value to the PID:
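
Based on the description above, it amounted to something like the line below; the variable names and the scale factor applied to the offset term are my assumptions, not the original code:

    // Old PID input: orientation ('steering') term plus a scaled offset-error term
    float input = glLeftSteeringVal                               // (front - rear)/100
                + (glLeftCenterCm - WALL_OFFSET_TGT_CM) / 100.0f; // offset error, scaled to 'match'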

This actually worked pretty well, but as I discovered recently, it is very difficult to integrate two very different physical mechanisms into the same calculation – an apples-and-oranges mix. When the offset error is small, the steering value term dominates and the robot simply continues more or less – but not quite – parallel to the wall, meaning that it slowly diverges from the desired offset – getting closer or farther away. When the divergence gets large enough, the offset term dominates and the robot turns toward the desired offset, but it is almost impossible to find PID terms that are large enough to actually make corrections without being so large as to cause wild oscillations.

The above problem didn’t really come to a head until just recently when I started having problems with tracking where the robot started off at or close to the desired offset value and generally parallel, meaning both terms in the above expression were near zero – for this case the behavior was a bit erratic to say the least.

So, back to the basics. The following plot and short video show the robot’s behavior with the new setup (offset = 40cm, PID = (10,0,0)):

Tracking left wall with desired offset = 40cm
Desired offset = 40cm, PID = (10,0,0)

With this setup, the robot tracks the desired 40cm offset very well (average of 41.77cm), with a very slow oscillation. I’m sure I can tweak this up a bit with a slightly higher ‘P’ value and then adding a small amount of ‘I’, but even with the most basic parameter set the system is obviously stable.

20 October 2022 Update:

I made another run with PID(10,0,0), but this time I started the run with the robot displaced about 7cm from the 40cm offset. As shown in the plot and short video, this caused quite a large oscillation; not quite disastrous, but probably would have been if my test wall had been any longer.

PID(10,0,0) with robot initial distance from wall = 33cm
PID(10,0,0) with initial position at 33cm

After looking at the data from this run, I decided to try lowering the P value from 10 to 5, thinking that the lower value would reduce the oscillation magnitude with a non-zero initial displacement from the desired setpoint. As the following plot and short video show, the result was much better.

221022 PID(5,0,0) init dist 33cm

So then I tried PID(3,0,0), again with an initial placement of 33cm from the wall, 7cm from the setpoint of 40cm.

PID(3,0,0) init dist 33cm, avg for all points ~41.3cm

As shown by the plot and video, PID(3,0,0) does a very good job of recovering from the large initial offset and then maintaining the desired 40cm offset. This result makes me start to wonder if my separate ‘Approach and Capture’ stage is required at all. However, a subsequent run with PID (3,0,0) but with an initial placement of 15cm (25cm error) disabused me of any thoughts like that!

ouch!

After talking this over with my PID-expert stepson, he recommended that I continue to decrease the ‘P’ term until the robot never quite gets to the desired setpoint before running out of room, and then adding some (possibly very small) amount of ‘D’ to hasten capture of the desired setpoint. So, I continued, starting with a ‘P’ value of 2, as shown below:

The ‘Err’ term is the actual PID error term, not multiplied by P as before

This result was a bit unexpected, as I thought such a ‘straight-line’ trajectory should have ended before going past the 40cm setpoint, indicating that I had achieved my ‘not quite controlling’ value of ‘P’. However, after thinking about it a bit and looking at the actual data (shown below), I think what this shows is that the robot case is fundamentally different from most PID control applications, in that reducing the error term (and thus the ‘drive’ signal) doesn’t necessarily change the robot’s trajectory, as the robot will tend to travel in a straight line in the absence of a contravening error term. In other words, removing the ‘drive’ due to the error term doesn’t cause the robot to slow down, as it would in a normal motor drive scenario.

23 October 2022 Update:

In previous work on this subject, I had already recognized that the ‘capture’ and ‘track’ phases of Wall-E’s behavior required separate treatment, and had implemented this with a ‘CaptureWallOffset()’ function to handle the ‘capture’ phase. This function calculates the amount of rotation needed to achieve an approximately 30 deg orientation angle with respect to the near wall, then moves forward to the desired wall offset value, and then turns back to parallel the near wall.

So, my next step is to re-integrate this ‘CaptureWallOffset()’ routine with my current distance-only based offset tracking algorithm. The idea is to essentially eliminate the problem space where the distance-only PID algorithm fails, so the PID only has to handle initial conditions very near the desired setpoint. When the ‘CaptureWallOffset()’ routine completes, the robot should be oriented parallel to the wall, and at a distance close to (but not necessarily the same as) the desired setpoint. Based on the above results, I think I will change the setpoint from the current constant value (40 cm at present) to match the measured distance from the wall at the point of ‘CaptureWallOffset()’ routine completion. This will guarantee that the PID starts out with the input matching the setpoint – i.e. zero error.
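
In code, the planned change boils down to something like the sketch below; TrackLeftWallOffset() and CaptureWallOffset() are the real function names, but the argument lists and the other names shown are assumptions:

    // Seed the tracking PID setpoint with whatever offset the capture phase actually achieved,
    // so tracking starts with essentially zero error (sketch - signatures are assumed)
    CaptureWallOffset(TRACKING_LEFT, 40.0f);        // rotate ~30 deg, drive to offset, turn back parallel
    float setpointCm = GetLeftCenterDistCm();       // measured offset at capture completion
    TrackLeftWallOffset(Kp, Ki, Kd, setpointCm);    // PID input starts out equal to the setpoint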

With this new setup, I made a run with the robot starting just 13cm from the wall. The CaptureWallOffset() routine moved the robot from the initial 13cm to about 37cm, with the robot ending up nearly parallel. The PID tracking algorithm started with an initial error term of +3.3, and tracked very well with a ‘P’ value of 10. See the plot and short video below. The video shows both the capture and track phases, but the plot only shows the track portion of the run.

PID tracking after completion of CaptureWallOffset()

Here’s a run with PID(3,0,0), starting at an offset of 22cm.

24 October 2022 Update:

While reading through some more PID theory/practice articles, I was once again reminded that smaller time intervals generally produce better results, and that struck a bit of a chord. Some time back I settled on a time interval of about 200mSec, but while I was working with my ‘WallTrackTuning_V2’ program I realized that this interval was due to the time required by the PulsedLight LIDAR to produce a front distance measurement. I discovered this when I tried to reduce the overall update time from 200 to 100mSec and got lots of errors from GetFrontDistCm(). After figuring this out, I modified the code to use a 200mSec time interval for the front LIDAR, and 100mSec for the VL53L0X side distance sensors.
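
In outline, the two intervals can coexist in the main loop as sketched below, using the Teensy/Arduino elapsedMillis type; GetFrontDistCm() is the real function name, but the overall structure and the other names are my assumptions rather than the actual code:

    elapsedMillis sinceFrontLidar;                  // PulsedLight LIDAR (front distance)
    elapsedMillis sinceSideUpdate;                  // VL53L0X side arrays + wall tracking PID
    float frontDistCm;

    const uint32_t FRONT_LIDAR_INTERVAL_MSEC = 200; // front LIDAR needs the full 200mSec
    const uint32_t SIDE_UPDATE_INTERVAL_MSEC = 100; // side sensors are happy at 100mSec

    void loop()
    {
        if (sinceFrontLidar >= FRONT_LIDAR_INTERVAL_MSEC)
        {
            sinceFrontLidar -= FRONT_LIDAR_INTERVAL_MSEC;
            frontDistCm = GetFrontDistCm();         // slow sensor on the slow schedule
        }

        if (sinceSideUpdate >= SIDE_UPDATE_INTERVAL_MSEC)
        {
            sinceSideUpdate -= SIDE_UPDATE_INTERVAL_MSEC;
            UpdateWallTracking();                   // read side distances, run PID, set motor speeds
        }
    }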

So, it occurred to me that I might be able to reduce the side measurement interval even further, so I instrumented the robot to toggle a digital output (I borrowed the output for the red laser pointer) at the start and end of the wall tracking adjustment cycle, as shown in the code snippet below:
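
In outline, the instrumentation just brackets the update with a pin toggle (RED_LASER_PIN and UpdateWallTracking() are placeholder names):

    // Raise the borrowed laser-pointer output at the start of the wall-tracking update and
    // drop it at the end, so the scope shows how much of each interval is spent working vs idling
    digitalWrite(RED_LASER_PIN, HIGH);   // start of wall-tracking adjustment cycle
    UpdateWallTracking();                // read side sensors, run PID, update motor speeds
    digitalWrite(RED_LASER_PIN, LOW);    // end of cycle - idle until the next interval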

Using my handy-dandy Hanmatek DSO, I was able to capture the pin activity, as shown in the following plot:

Wall track update cycle activity with 100mSec interval (20mSec/div)

As shown above, the update code takes a bit less than 20mSec to complete, and idles for the remaining 80mSec or so, waiting for the 100mSec time period to elapse. So, I should be able to reduce the time interval by at least a factor of two. I changed the update interval from 100mSec to 50mSec, and got the activity plot shown below:

Wall track update cycle activity with 50mSec interval (20mSec/div)

The above plot has the same 20mSec/div time scale as the previous one; as can be seen, there is still plenty of ‘idle’ time between wall tracking updates. Now to see if this actually changes the robot’s behavior.

As shown in the next plot and video, I ran another ‘sandbox’ test, this time with the update interval set to 50mSec vice 100mSec, and with an 11Cm initial offset.

PID(5,0,0), initial distance 11Cm
PID(5,0,0), initial distance 37Cm

Then I ran it again, except this time with a PID of (10,0,0):

PID(10,0,0), initial distance 11Cm
221024 PID(10,0,0) Init 36cm, LCorr

This wasn’t at all what I expected. I thought the larger ‘P’ value would cause the robot to more closely track the desired offset, but that isn’t what happened. Everything is fine for the first two seconds (140,000 to 142,000 mSec), but then the robot starts weaving dramatically – to the point where the motor values max out at 127 on one side and 0 on the other – bummer. Looks like I need another consulting session with my PID wizard stepson!

25 October 2022 Update:

My PID wiz stepson liked my idea of breaking the wall tracking problem into an offset capture phase, followed by a wall tracking phase, but wasn’t particularly impressed with my thinking about reducing the PID update interval while simultaneously increasing the P value to 10, so I’m back to taking more data. The following run is just the wall tracking phase, with a 50mSec update interval and a P value of 3.

As can be seen, the robot doesn’t really make any corrections – just goes in a straight line more or less. However, the left/right wheel speed data does show the correct trend (left wheel drive decreasing, right wheel drive increasing), so maybe a non-zero ‘I’ value would do the trick?

Here’s a run with PID(3,0.5,0):

PID(3,0.5,0) init dist 40cm

In the above plot, the I value does indeed cause the robot to track back toward the target distance, but the robot then goes well past the target before starting to turn back. Too much I?

Here’s another run with PID(3,0.1,0) – looking pretty good!

PID(3,0.1,0) init dist 40cm

This looks pretty good; with I = 0.1 the robot definitely adjusted back toward the target distance, but in a much smoother manner than with I = 0.5. OK, time to let the master view these results and see if I’m even in the right PID universe.

One thing to mention in these runs – they are performed in my office, and the total run length is a little over 2m (210Cm), so I’m only seeing just one correction maneuver. Maybe if I start out with a small offset from the target value? Nope – that won’t work – at least not tonight; my current code changes the setpoint from the entered value (40Cm in this case) to the actual offset value (36Cm on this run) before starting the run. Curses! Foiled again!

27 October 2022 Update:

Today I had some time to see how the PID handles wall-tracking runs starting with a small offset from the desired value. First I started with a run essentially identical to the last run from two days ago, just to make sure nothing significant had changed (shouldn’t, but who knows for sure), as shown below:

Then I tried a run with the same PID values, but with a small initial offset from the desired 40Cm:

As can be seen, the robot didn’t seem to handle this very well; there was an initial correction away from the wall (toward the desired offset), but the robot cruised well past the setpoint before correcting back toward the wall. This same behavior repeated when the robot went past the setpoint on the way back toward the wall.

To see which way I needed to move with the ‘I’ value, I made another run with the ‘I’ value changed from 0.1 to 0.25, as shown below:

Now the corrections were much more dramatic, and tracking was much less accurate. On the second correction (when the robot passed through the desired setpoint going away from the wall), the motor drive values maxed out (127 on the left, 0 on the right).

Next I tried an ‘I’ value of 0.05, as shown below:

This looks much nicer – deviations from the desired offset are much smaller, and everything is much smoother. However, I’m a little reluctant to declare victory here, as it may well be that the ‘I’ value is so small now that it may not be making any difference at all, and what I’m seeing is just the natural straight-line behavior of the robot. In addition, the robot may not be able to track the wall around some of the 45deg wall direction changes found in this house.

28 October 2022 Update:

I decided to rearrange my office ‘sandbox’ to provide additional running room for my wall-following robot. By setting up my sandbox ‘walls’ diagonally across my office, I was able to achieve a run length of almost 4 meters (3.94 to be exact). Here is a plot and short video from a run with PID(3,0.1,0):

First run on my new 4m wall

I was very encouraged by this run. The robot tracked very well for most of the run, deviating slightly at the very end. I’m particularly pleased by the 1.4sec period from about 129400 (129.4sec) to about 130800 (130.8sec); during this period the left & right wheel motor drive values were pretty constant, indicating that the PID was actively controlling motor speeds to obtain the desired wall offset. I’m not sure what caused the deviation at the end, but it might have something to do with the ‘wall’ material (black art board with white paper taped to the bottom half) in that section. However, after replacing that section with white foam core, the turn-out behavior persisted, so it wasn’t the wall properties causing the problem.

After looking at the data and the video for a while, I concluded that the divergence at the end of the run was real. During the first part of the run, the robot was close enough to the setpoint so that no significant correction was required. However, as soon as the natural straight-line behavior diverged enough from the set point to cause the PID to produce a non-small output, the tracking performance was seriously degraded. In other words, the PID was making things worse, not better – rats.

So, I tried another run, this time adding just a smidge of ‘D’, on the theory that this would still allow the PID to drive the robot back toward the setpoint, but not quite as wildly as before. With PID (3, 0.1, 0.1) I got the following plot:

Adding some ‘D’

As can be seen, things are quite a bit nicer, and the robot seemed to track fairly well for the entire 4m run.

Tried another run this morning with PID(3,0,0.1), i.e. removing the ‘I’ factor entirely, but leaving the ‘D’ parameter at 0.1 as in my last run yesterday. As can be seen in the following plot and short video, the results were very encouraging.

Made another run with ‘D’ bumped to 0.5 – looks even better.

Next, I investigated Wall-E3’s ability to handle wall angle changes. As the following plot and video show, it actually does quite well with PID(3,0,0.5).

transients at end of run are due to encountering another angled wall – not shown in video

30 October 2022 Update:

After a few more trials, I think I ended up with PID(3,0,1) as a reasonable compromise. With this setup, Wall-E3 can navigate both concave and convex wall angle changes, as shown in the following plot and short video.

As an aside, I also investigated several other PID triplets of the form (K*3,0,K*1) to see if other values of K besides 1 would produce the same behavior. At first I thought both K = 2 and K = 3 did well, but after a couple of trials I found myself back at K = 1. I’m not sure why there is anything magic about K = 1, but it’s hard to get around the fact that K = 2 and K = 3 did not work as well tracking my ‘sandbox’ walls.

At this point, I think it may be time to back-port the above results into my WallE3_AnomalyRecovery_V2.sln project, and then ultimately back into my main robot control project.

06 November 2022 Update:

Well, now I know why my past efforts at wall tracking didn’t rely exclusively on offset distance as measured by the 3 VL53L0X sensors on each side of the robot. The problem is that the reported distance is only accurate when the robot is parallel to the wall; any off-parallel orientation causes the reported distance to increase, even though the robot body is at the same distance. In the above work I thought I could beat this problem by compensating the distance measurement by the cosine of the off-parallel angle. This works (sort of) but causes the control loop to lag way behind what the robot is actually doing. Moreover, since there can be small variations in the distance reported by the VL53L0X array, the robot can be physically parallel to the wall while the sensors report an off-parallel orientation, or alternatively, the robot can be physically off-parallel to the wall (getting closer or farther away) while the sensors report that it is parallel and consequently no correction is required. This is why, in previous versions, I tried to incorporate an absolute distance measurement along with orientation information into a single PID loop (which didn’t work very well).

09 November 2022 Update:

After beating my head against the problem of tracking the nearby wall using a three-element array of VL53L0X distance sensors and a PID algorithm, I finally decided it just wasn’t working well enough to rely on for generalized wall tracking. It does work, but has a pretty horrendous failure mode. If the robot ever gets past about 45 deg orientation w/r/t the near wall, the distance values from the VL53L0X sensor become invalid and the robot goes crazy.

So, I have been spending my night-time sleep preparation time (where I do some of my best thinking) trying to think of different ways of skinning this particular cat, as follows:

  • The robot needs to be able to accurately track a given offset
  • Must have enough agility to accommodate abrupt wall direction changes (90 deg changes are easy – but there are several 45 deg changes in our house)
  • Must handle obstacles appropriately.

It’s that first item on the list that I can’t seem to handle with the typical PID algorithm. So, I started to think about alternative schemes, and the one I decided to experiment with was the idea of implementing a zig-zag tracking algorithm using my already-proven SpinTurn() function. SpinTurn() uses relative heading output from my MPU6050 IMU to implement CW/CCW turns, and has proven to be quite reliable (after Homer Creutz and I beat MPU6050 FIFO management into submission).
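
Here’s a sketch of the zig-zag idea as I would express it in code; this is a reconstruction of the scheme rather than the actual program, and everything except SpinTurn(isCCW, numdeg) is an assumed name or value:

    // Zig-zag tracking sketch: drive at a shallow cut angle, and each time the measured
    // offset crosses the 40cm target, flip the cut angle to the other side with SpinTurn()
    const float CUT_ANGLE_DEG = 10.0f;   // assumed zig-zag half-angle
    const float OFFSET_TGT_CM = 40.0f;

    bool headedTowardWall = true;        // left-wall case: start angled in toward the wall
    SpinTurn(true, CUT_ANGLE_DEG);       // CCW = toward the left wall

    while (trackingEnabled)              // placeholder loop condition
    {
        MoveAhead(MOTOR_SPEED_LOW);      // placeholder 'drive straight at current heading' call
        float distCm = GetLeftCenterDistCm();

        if (headedTowardWall && distCm <= OFFSET_TGT_CM)        // crossed the target going in
        {
            SpinTurn(false, 2 * CUT_ANGLE_DEG);                 // CW, angle away from the wall
            headedTowardWall = false;
        }
        else if (!headedTowardWall && distCm >= OFFSET_TGT_CM)  // crossed the target going out
        {
            SpinTurn(true, 2 * CUT_ANGLE_DEG);                  // CCW, angle back toward the wall
            headedTowardWall = true;
        }
    }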

I modified one of my Wall Track Tuning programs to implement the ‘zig-zag’ algorithm, and ran some tests in my office ‘sandbox’. As the following Excel plot and short video show, it actually did quite well, considering it was the product of a semi-dream-state thought process!

As can be seen from the above, the robot did a decent job of tracking the desired 40Cm offset (average distance for the run was 39.75Cm), especially for the first iteration. I should be able to tweak the algorithm to track the wall faster and with less of a ‘drunken sailor’ behavior.

Stay tuned,

Frank

FlashForge Creator PRO 2 IDEX GUI Issues

Posted 30 August 2022

Exactly one year ago I posted about receiving my then-new FlashForge Creator PRO 2 IDEX 3D printer. Since then I have made many successful prints and have been very happy with the machine. Then just a few weeks ago I started having serious problems with prints not sticking to my flexible build plate. Such problems occur regularly with 3D printers, and they are usually fairly easy to troubleshoot and fix, but this time I found the touch-screen GUI on the FFCP2 IDEX machine to be more of a hindrance than a help in working my way through the problem. This post describes the issues I encountered and some suggested changes to the GUI to resolve them. The firmware used is the latest version, V1.8.

Extruder Z-axis Offset Calibration:

One of the most common causes of non-sticking prints is the extruder Z-axis offset. If the offset is too large, the filament won’t contact the print bed with enough area to adhere the first layer, and this can cause the BOD (ball of death) as the filament balls up around the extruder. When I do the Z-axis calibration procedure on my Prusa MK3S+ single-extruder printer, the display shows the current offset so adjustments can be made from a known starting position. Unfortunately the FFCP2 GUI for some reason forces the user to calibrate both extruders every time, and starts by setting the offset on both extruders to +2mm, undoubtedly to make sure the nozzles are well away from the print bed. This also erases the existing offset value, so there is no way to slightly ‘tweak’ the offset value one way or the other – the offsets have to be set from the beginning every time. For first-time users, this might be OK, but for experienced users it is a royal PITA.

Imagine you have done some minor maintenance on the right extruder, but the left extruder is printing perfectly. Now you need to recalibrate the right extruder, but don’t want to mess with your finely-tuned left extruder. Nope – can’t do that. As soon as you select ‘Z axis Calibration’ you are doomed! The offset values for both extruders fly out the window and you are back to using the supplied plastic spacer to calibrate both extruders ‘by feel’. Doesn’t matter that you had that left extruder all dialed in – you are hosed!

Or, maybe you are happily running PETG on both extruders, but now you want to make a print that requires a dissolvable filament, like AquaSys120 or similar. You change out the filament on one extruder and make a test print using just the dissolvable filament and find it either isn’t extruding at all (offset too close to the bed) or more likely isn’t adhering to the bed due to subtle differences in the texture/stickiness of AquaSys120 filament vs PETG. You know the way to fix this is to slowly vary the z-offset of the AquaSys120 extruder, but you don’t want to mess with the nicely printing PETG extruder. Sorry – go directly to jail, do not pass GO, and do not collect $200.

The only way to handle either of these scenarios is to have already recorded the old Z offset values for both extruders (and who does that?) so you can go through the ‘you must calibrate both every time’ procedure, dial both extruder offsets to where you had them before, and then adjust slightly from there. Make sure you write the new values down, because if you want to make the next tweak or you make a filament change a month from now, you have to start all over again – ARGGHHHH!

As I mentioned above, this might be OK for first-time users, but becomes a royal PITA for more experienced users. The FFCP2 does have an ‘Advanced’ menu, but this menu only handles X and Y axis tweaks – not the Z axis. This menu should be revised to allow the Z-offset for either extruder to be set independently, and should offer options to start from the default (+2mm) location or from the current z-axis offset position.

Z-axis Calibration Delays Due to Extruder Cooling Requirement:

When troubleshooting a printing problem, I might go through several Z-axis calibration procedures. However, each time I start the procedure, the printer forces me to wait for the extruders to cool to room temperature, which takes several minutes – ARGHHH! This is stupid for two reasons; first, I would think I’d want to calibrate the Z-axis offset at the normal operational temperature to take any temperature-related mechanical changes into consideration, not at room temp. Second, if FlashForge decided that any such mechanical changes were inconsequential, then it shouldn’t matter at what temperature the procedure is conducted, and so there shouldn’t be any delay at all. This seems to be just one of those things where the programmer decided the process should always start with room temperature extruders and never gave any thought to the tradeoff between programming ease and customer frustration.

Print File Names:

The ‘Print’ menu shows a partial list of the available print files, and allows the user to scroll up and down the list as necessary. However, as shown in the following photos, print file names are truncated after N characters, so similarly named files (I use version numbers a lot) all look the same. Moreover, when a file is actually selected by tapping on the name, it still isn’t shown at full length. My Prusa MK3S+ printer has the same limited display width, but solves the problem by repeatedly scrolling the full file name across the screen.

Print file list. Notice how similar they are, because the differences are later in the name
Selected print file name is also truncated – you have to just hope you have picked the correct one!

Conclusion:

The FlashForge Creator PRO 2 IDEX printer is a great printer, and I have gotten many, many good prints from it over the last year. Even with the frustrations associated with the less-than-perfect GUI, I don’t regret trading down from my previous MakerGear M3-ID Independent Dual Extruder (IDEX) machine. However, I believe the GUI was not given the care and resources it needed to be a first-class example of a 3D printer user interface, and it should be updated. If it remains in this ‘sort-of-OK-sort-of-clunky’ state, I think it will sour a lot of 3D makers on the FlashForge brand.

Flashforge could actually kill two birds with one stone if they were to open-source the GUI code; then users like myself who are frustrated with the current performance could collaborate in making it better.

Stay tuned,

Frank

Transitioning from TinkerCad to Blender with CAD Sketcher

Posted 6 August 2022

I have been doing 3D printing (a ‘Maker’ in modern jargon) for almost a decade now, and almost all my designs started out life in TinkerCad – Autodesk’s wonderful online 3D design tool. As I mentioned in my 2014 post comparing AutoDesk’s TinkerCad and 123D Design offerings, TinkerCad is simple and easy to use, and powerful due to its large suite of primitive 3D objects and manipulation features, but it runs out of gas when dealing with rounded corners, internal fillets, arbitrary chamfers and other sophisticated mesh manipulation options.

Consequently, I have been keeping an eye out for more modern alternatives to TinkerCad – something with the horsepower to do more sophisticated mesh modeling, but still simple enough for an old broke-down engineer to learn in the finite amount of time I have left on earth. As I discovered eight years ago, AutoDesk’s 123D Design offering wasn’t the app I was looking for, but Blender, with the newly introduced CAD Sketcher and CAD Transforms add-ons, may well be. Blender seems to be aimed more at graphic artists, animators, and 3D world-builders than at the kind of dimension-driven precision design needed for 3D printing, but the CAD Sketcher and CAD Transforms add-ons go a long way toward providing explicit dimension-driven precision 3D design tools for us maker types.

I ran across the Blender app several months ago and started looking for online tutorials; the first one I found was the famous ‘Donut Tutorial’ by Blender Guru. After several tries and a large amount of frustration due to the radical GUI changes between Blender 2.x and 3.x, I was able to get most of the way through to making a donut. Unfortunately for me, the donut tutorial didn’t address dimension-driven 3D models at all, so while it was kinda fun, it didn’t really address my issue. Then I ran across Jonathan Kobylanski’s Maker Tales demo of the CAD Sketcher V0.24 Blender add-on, and I became convinced that Blender might well be a viable TinkerCad replacement.

So, I worked my way through Jonathan’s CAD Sketcher 0.24 tutorial, and as usual got in trouble several times due to my ignorance of basic Blender GUI techniques. After posting about my problems, Jonathan was kind enough to point me at his paid “How To Use Blender For 3D Printing” 10-lesson series for $124USD. I signed right up, and so far have worked (and I do mean worked!) my way through the first six lessons. I have to say this may be the best money I’ve ever spent on self-education (and at my advanced age, that is saying a LOT 🙂 ). In particular, Jonathan starts off with the assumption that the student knows absolutely NOTHING about Blender (which was certainly true in my case) and shows how to set the program up with precision 3D modeling in mind. All lessons are extensively documented, with video, audio, and all keypresses fully described. At first I was more than a little intimidated by the deluge of short-cut keys (and still am a little bit), but Jonathan’s lessons expose the viewer to slightly more bite-size chunks than the normal fire-hose method, so I was able to stay more or less on the same continent with him as he moved through the design steps. I also found it extremely helpful to go back through the first few lessons several times (very easy to do with the academy.makertales.com lesson layout), even to the point of playing and replaying particular steps until I was comfortable with whatever procedure was being taught. There is a MakerTales Discord server and a channel dedicated to helping academy students, and Jonathan seems to be pretty responsive to my (usually clueless) comments and pleas for help.

Jonathan encourages his students to go beyond the lessons and to modify or extend the particular focus of any lesson, so I decided to try and use Blender/CAD Sketcher for a small project I have been considering. My main PC is a Dell XPS15 laptop, connected to two 24″ monitors via a Dell WD19TBS Thunderbolt docking station. I have the monitors on 4″ risers, but found they still weren’t high enough for comfortable viewing and seating ergonomics, so I designed (in TCAD, several years ago) a set of ‘riser elevators’, as shown in the image below.

My two-display setup. Note the red ‘riser elevators’ under the metal display risers
Closeup showing the built-in shelf for my XPS 15 laptop

As shown above, the ‘riser elevator’ design incorporates a built-in shelf for my XPS15 laptop. This has worked well for years, but recently I have been looking for ways to simplify/neaten up my workspace. I found that I could move my junk tray from the side of my work area to the currently unused space underneath my laptop, but with the current arrangement there isn’t enough clearance above the tray to see/access the stuff in the back. I was originally thinking of simply replacing the current 3D printed risers with new ones 40mm higher, but in an ‘aha!’ moment I realized I didn’t have to replace the risers – I could simply add another riser on top. The new piece would mate with the current riser’s vertical tab that keeps the laptop from sliding sideways, and then replicate the same vertical tab, but 40mm higher.

Doing either the re-designed riser or the add-on would be trivial in TinkerCad, but I thought it would be a good project to try in Blender, now that I have some small inkling of what I’m doing there. So, after the normal number of screwups, I came up with a fully-defined sketch for a small test piece (I fully subscribe to Jonathan’s “When in doubt – test it out” philosophy), as shown:

CAD Sketcher sketch for the test piece. Same as the final piece, except for height

I then 3D printed it on my Prusa MK3S printer. Halfway through the print job I realized I didn’t need the full 20mm thickness to test the geometry, so I stopped it midway through and placed it on top of one of the original risers, as shown in the following photo:

Maybe not completely perfect, but still a pretty good fit

After convincing myself that the design was going to work, I modified the sketch for the full 40mm height I wanted, and printed 4ea out, as shown:

CAD Sketcher sketch for the full-height version
4ea full-size riser add-on pieces

After installation, I now have my laptop higher by 40mm, and better/easier access to my junk tray as shown – success!

Finished project. Laptop higher by 40mm, junk tray now much more accessible

And more than that, I have now developed enough confidence in Blender/CAD Sketcher to move my 3D print designs there rather than relying strictly on TinkerCad. Thanks Jonathan!

16 August 2022 Update:

Just finished Learning Project 7: Stackable Storage Crate, and my brain is bulging at the seams – whew! After finishing, I just had to try printing one (or two, if I want to see whether or not I really got the nesting geometry right), even though each print is something over 13 hours on my Prusa MK3S with a 0.6mm nozzle. Here’s the result:

Hot off the printer – after “only” 13 hours!
Underside showing stacking groove. Printed without supports, just using bridging

Frank

FlashForge Creator PRO 2 IDEX Filament/Color Designator Project

Posted 29 July 2022

I’ve had my Flashforge Creator PRO 2 IDEX 3D printer for a while now, and ever since Jaco Theron and I got the Prusa Slicer Configuration for this printer working, I have been enjoying trouble-free (as much as any 3D printer is ‘trouble-free’) dual-color printing.

However, there are some ‘gotchas’ that can make using this printer annoying.

  • The way that the FFCP2 filament spools are arranged on the back of the printer means that the filament from the left spool feeds the right extruder, and vice versa, which leads to confusion about which filament feeds what extruder
  • The printer configuration in the slicer refers to the left extruder as ‘Extruder 2’, the left extruder temperature as ‘T1’, the right extruder as ‘Extruder 1’, and the right extruder temperature as ‘T0’, so I’m never sure which physical extruder I’m dealing with when setting up for a print.
  • The filament spools are located at the rear of the printer, so it’s impossible to tell what filament type is loaded without physically rotating the whole printer, removing the spool from the holder, and looking at the label. And, since my short-term memory is about equal to that of an amoeba, I wind up doing this multiple times.

So, I decided to see what I could do to ameliorate this issue. The first thing I did was to use my handy-dandy Brother label maker to label the left and right extruders with their respective designations in the software, as shown in the photo below.

The next thing was to use my newly-acquired Blender super-powers to create and install removable filament color/type tags on both sides, so I would no longer have to rely on my crappy memory to know what filament type and color was loaded on each side, as shown in the following photo.

Filament type and color tags for each extruder

The type/color tags slide into slots in the plate holders, and the plate holders are mounted using the FFCP2’s 4mm hex-head front plate mounting screws. I printed up tags for all my normal colors and filament types and store them inside the printer (the red box seen inside the printer on the left-hand side). Then, when I change a spool, I change the tags to match the new filament type & color.

Additional Work on Wall Tracking Algorithm

Posted 15 July 2022,

After getting everything working (or so I thought) in my sandbox, I started running into problems again with wall tracking. It just wasn’t very smooth at all. So, I decided to create a part-task version of my Wall-E3 code (WallE3_WallTrackTuning.ino) to just tackle PID tuning for left/right wall tracking. This version allows PID and offset values to be entered interactively to facilitate faster tuning. After a number of runs, I wound up with a PID set of (200,20,0). This produced very nice, smooth tracking behavior.

In addition, I went back through all my code and re-educated myself on exactly how my current wall offset capture algorithm developed and whether or not it was, in fact, what I wanted. I started by diagramming all (I hope) relevant initial orientation and offset cases, as shown in the following Visio chart.

Wall Capture Algorithm Recap

In the above figure, four basic configurations are diagrammed; the first two are for left-side tracking, with the robot starting in three different orientations inside the desired 40cm tracking offset and three more outside it. The second two are the same as the first, but for right-side tracking.

The algorithm is based on knowing the robot’s orientation w/r/t the local wall, which is determined by the expression ‘Steer = (Front – Rear)/100’, implemented in the Teensy 3.5 MCU that manages the VL53L0X lidar array. This result is available to the main program as ‘glLeftSteeringVal’ and ‘glRightSteeringVal’. The steering values are proportional to the orientation angle in degrees, calculated as OrientDeg = steerval/0.0175.

Expressions for which way and how much to turn to achieve the desired capture approach angle of +/- 30 degrees were determined for each of the 12 starting configurations shown (numbered 1-12 in the above figure). An examination of the resulting expressions showed that they could be collapsed down into just two different calls to the ‘SpinTurn(isCCW, numdeg)’ subroutine – one for left-side tracking, and one for right-side tracking, as shown by the bold-face expressions above.
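
The flavor of the collapsed left-side calculation is sketched below; only SpinTurn(isCCW, numdeg), glLeftSteeringVal, and the 0.0175 conversion factor come from the description above, while the sign conventions and the inside/outside test are my assumptions rather than the actual bold-face expressions:

    // Left-side capture turn (sketch): convert the steering value to an orientation angle,
    // pick a +/-30 deg approach angle depending on whether the robot starts inside or
    // outside the 40cm offset, and spin by the difference.
    float orientDeg   = glLeftSteeringVal / 0.0175f;                     // current angle w/r/t wall
    float approachDeg = (glLeftCenterCm < WALL_OFFSET_TGT_CM) ? -30.0f   // inside: angle away from wall
                                                              : +30.0f;  // outside: angle toward wall
    float turnDeg = approachDeg - orientDeg;                             // rotation needed
    SpinTurn(turnDeg > 0.0f, fabs(turnDeg));                             // SpinTurn(isCCW, numdeg)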

Project to Create 3D Clay Cubes

Posted 26 June 2022,

Got an interesting request from a fellow Duplicate Bridge player the other day. I was asked if I could somehow create a way to produce precise cubes of modeling clay. Of course, since I am the proud owner of not just one, but two 3D printers (a FlashForge Creator Pro II IDEX and a Prusa MK3S), I said “sure, no sweat! How many would you like?”

As usual, once I got home and started playing around with the project, I realized that I had, once again, jumped into the pool without first checking to see if there was any water. Turns out that the garden variety modeling clay is pretty gooey stuff, and sticks to EVERYTHING, including 3D printed PLA/PETG forms. But hey – a project that just falls together without any drama isn’t much of a project, is it?

Anyway, I adopted my usual tactic of “fail quickly, fail often” (Space X has nothing on me!) to home in on something that might produce what I was after. My first thought was a simple rectangular cylinder with inner dimensions matching the desired clay cube dimensions, combined with a press that just barely slides into the cylinder, as shown below:

This worked OK, but had some drawbacks. The original press with just a simple cylindrical handle really didn’t allow for much pressure without hurting my hand, so I added a ‘squashed ball’ top to make that easier, and added some length to the extrusion die to produce a longer extruded clay blank, which I intended to cut to length with an Exacto knife. Again, this worked OK, but not spectacularly; as can be seen in the last of the above photos, the extruded blank exhibited some grooves created by some errant pieces of filament.

My next thought was to use a long rectangular cylinder not as an extrusion die, but just as a removable form; after pressing clay into the form with the press, the form would be cut away to access the clay blank, which would then be cut to length as before. However, this would require a new form for every blank, so that wasn’t great. Then I hit upon the idea of making the form out of nested pieces, as shown below:

Idea for nested pieces to create a removable form

While the print turned out very well, the result of having four removable sides was kind of a mess – very difficult to get together (and then somehow bind/tape the thing together).

Next I tried a similar ‘removable form’ idea, but with just two removable pieces rather than four, and an end-piece as well.

Two-piece form with end cap

This didn’t work very well either, because the two pieces could easily slide against each other, making it impossible to keep the desired shape. So, I tried again, but used a notch and cutout feature to keep the pieces aligned, as shown below:

This worked well enough that I decided to try forming a cube with my Sculpey clay and my press. To start, I wrapped the form with nylon filament strapping tape to withstand the press pressure, and started loading clay balls into the form. This actually worked fairly well, but when I pulled the form apart, the clay stuck to the walls and deformed badly.

Back to the web to research release agents for modeling clay, and found this site that specifically recommended water as a release agent for Sculpey clay – yay!

So, I tried again to form a cube, as shown below:

This time I got a pretty nice cube, with each side almost exactly 19mm or 3/4″ – yay!

So at this point it’s time to consult with my bridge club friend and find out if this is really what they want. Stay tuned!

29 July 2022 Update:

Got some feedback from my ‘customer’ today. She liked the split-form implementation, but would like to try a 1x1x1 inch cube instead of the 3/4×3/4×3/4 inch cube we started with. So, I went back to TCad and ‘whipped up’ (well, for me, ‘whipping up’ takes a while…) the new version shown below:

1x1x1″ Clay Cube Press
Cube press showing split halves with registration notch

Frank

Wall-E3 Charging Station Integration, Part II

Posted 28 May 2022

After getting the IR homing capability working with Wall-E3, I started working on the ability to recognize the charging beacon during normal wall-following operations, and then transitioning (or not, depending on battery level) to the charge station docking procedure.

The ‘transition-to-charge’ feature assumes that the robot is tracking the right-hand wall within a few meters of the charging station, and detects the beacon. To connect to the charger, the robot must first navigate away from the wall to a point on the IR beacon centerline, turn to line up with the beam, and then follow it to the charger.

As I was testing this feature, I noticed at one point the robot detected the IR beacon while tracking the wall opposite the charger, not the wall where the charger was installed. This is a real problem, because my current ‘transition-to-charger’ algorithm assumes the geometry associated with the common wall case. In the common wall case, the robot makes a 90 deg turn away from the wall and moves to a spot calculated to put it on the beam centerline, but this won’t work at all if the robot is starting from a point on the opposite wall. In that case, the robot needs to travel forward or backward parallel to the opposite wall by a distance calculated to put it on centerline. So, the first thing the robot needs to be able to do is to detect which case (common wall or opposite wall) it is dealing with.

The primary difference between the two cases is how much the robot has to turn away from, or toward, the wall it is currently tracking in order to point directly at the beacon generator; if it is on the common wall, then it may not need to turn at all, or may even have to turn a bit toward the wall to point directly at the generator. If it is currently tracking the opposite wall, however, it will have to turn 30-60 deg away from the wall. So, to determine which case is in play, the robot needs to measure the number of degrees of heading change required to face the charger; if the heading change is 30-60 degrees, then it is the opposite-wall case. If it is just a few degrees, then it is the common-wall case.
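
A sketch of that determination is shown below; all of the names are placeholders, and heading wrap-around handling is omitted for clarity:

    // Decide between the 'common wall' and 'opposite wall' cases by measuring how much
    // heading change it takes to point directly at the IR beacon generator
    float startHdgDeg = GetIMUHeadingDeg();        // heading before the turn
    TurnToCenterIRBeacon();                        // rotate until the beacon steering value is ~0
    float turnDeg = fabs(GetIMUHeadingDeg() - startHdgDeg);

    bool oppositeWallCase = (turnDeg >= 30.0f && turnDeg <= 60.0f); // big turn ==> charger on the far wall
    bool commonWallCase   = (turnDeg < 10.0f);                      // just a few degrees ==> same wall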

The following short video shows the two cases and the heading change required to point to the charger in each case.

Turn to beacon performance, for opposite and common wall cases

21 June 2022 Update:

Progress has been a bit slow over the last month, as I’ve been doing other things, including recovering from an eye/scalp injury sustained at the Senior Games in Florida (those old guys can still pack a wallop!). However, I now think I have the charging station homing operation working, at least for the right-wall-tracking (the most common and probably only) case.

As it turns out, I didn’t really need to worry about the problem of the robot picking up the IR homing beacon while tracking the far wall (i.e. the wall perpendicular to the one associated with the charging station). As I noted above, the robot can sometimes ‘see’ a beacon signal above the threshold, but when it does, the steering signal (goes from -1 to +1) is always off scale on one side or another. So, by adding the requirement that the steering signal be within the range of -0.8 to +0.8, this case is entirely eliminated, leaving only the ‘same-wall’ homing case – yay!
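
In code the added gate is just one extra condition on the detection test; the names below are placeholders for whatever the actual globals are called:

    // Only treat the beacon as 'detected' if the signal is above threshold AND the
    // steering value is not pegged near either rail (which is what happens when the
    // robot is tracking the far wall)
    bool chargerBeaconDetected = (irBeaconSignal > IR_BEACON_THRESHOLD)
                              && (irSteeringVal > -0.8f && irSteeringVal < 0.8f);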

I made a number of improvements to the IR homing code, in particular the ‘initial approach’ code that places the robot optimally with respect to the IR beacon beam, so that the subsequent ‘home-to-charging-station’ procedure is almost always successful.

In the following short video, the robot tracks the opposite wall, makes the turn to track the adjacent wall, and then detects the IR homing beacon at the conclusion of the offset capture phase for the adjacent wall. Then the robot transitions to ‘I’m hungry – feed me’ mode.

In ‘feed me’ mode, the robot first aligns itself with the beacon in a two-stage (coarse and fine) procedure. Once this is accomplished, the robot determines its distance from the beacon, and then uses this information to determine how far off the adjacent wall it should be to perfectly line up with the center of the IR beam, makes a turn to be perpendicular to the adjacent wall, and then moves forward or backward to achieve the desired offset measurement. Once this is accomplished the robot re-aligns itself with the beacon again, using the same two-stage procedure. Once all this is done, the robot then homes on the beacon to connect to the charger.
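
Laid out as pseudo-code, the ‘feed me’ sequence looks roughly like this (all step names are descriptive placeholders, not the actual function names):

    AlignToIRBeacon(COARSE);                                    // stage 1: rough alignment to the beam
    AlignToIRBeacon(FINE);                                      // stage 2: fine alignment
    float beaconDistCm    = GetBeaconDistanceCm();              // how far away is the charger?
    float desiredOffsetCm = OffsetForBeamCenter(beaconDistCm);  // wall offset that puts us on the beam centerline
    TurnPerpendicularToWall();                                  // face the adjacent wall squarely
    MoveToWallDistance(desiredOffsetCm);                        // move forward/backward to the computed offset
    AlignToIRBeacon(COARSE);                                    // re-align to the beam...
    AlignToIRBeacon(FINE);
    HomeToCharger();                                            // follow the beam in and connect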

The above algorithm is shown in action in the following short video. When viewing the video, keep the following in mind:

  • The homing beacon is only recognized at the conclusion of the wall offset capture maneuver for the adjacent wall, even though the IR beacon signal is above the threshold as soon as the robot make the turn to the adjacent wall. This is done to place the robot at a known location to start the process.
  • While aligning itself to the beacon, the robot turns ON its red laser pointer to allow us humans to follow the action. Although the ‘red’ laser looks mostly white in the video, it is indeed red.
  • The beacon alignment takes place in two stages – coarse, then fine. This can be seen by the speed at which the red laser dot moves back and forth.
  • In this particular video, the robot is already at the proper wall offset, so the ‘move to desired rear distance’ operation is truncated – the robot makes the turn to be perpendicular to the wall, determines it is at the proper offset, and so immediately turns back to align to the beacon signal.
Successful ‘track and then home to charger’ run

Wall-E3 Charging Station Integration

Posted 14 April 2022

Wall-E3 has a significantly different form-factor than Wall-E2, requiring modification of the lead-in rails on the charging station, as shown below.

Once the required mods were made, I uncommented ‘#define IR_HOMING_ONLY’ in my code to have Wall-E3 concentrate solely on homing to the charging station, and then worked my way into a set of PID values (100,0,30) that gave reasonable homing performance, as shown in the following short video:

And here’s an Excel plot showing typical homing performance.

24 April 2022 Update:

The above data and video were collected using the ‘NoPing’ version of the IR Homing algorithm, so I went back and did this again using the version that monitors the front distance, both for detecting a ‘stuck’ condition and for the ability to navigate around the charging station if a charge isn’t needed. Of course, this didn’t work right away for various reasons, but eventually I got it working and made another test run. Here’s a short slo-mo video, and the accompanying Excel chart showing homing performance.

IR Homing run in slo-mo with tracking status shown on rear panel LEDs
Steering value and front distance vs time

01 May 2022 Update:

Wall-E3 has made some significant progress in the last week, and is now capable of switching from right-wall tracking to IR Homing to the charging station, as shown in the following short video and telemetry output:

automatic switching from right-wall tracking to charging station homing

Stay tuned,

Frank

New Wall-Following Capability For Wall-E3

Posted 28 March, 2022

I’ve been working with Wall-E3, my new Teensy 3.5-powered autonomous wall-following robot. I’ve gotten left-wall and right-wall tracking working pretty well, but the transition from one wall to the next (typically right-angle) wall was pretty awkward. The robot basically ran right up to the next wall, stopped, backed up, and then made a right-angle turn to follow the next wall. So, I am trying to make that transition a bit smoother.

After trying a few different ideas, the one I settled on was to use my current very successful ‘SpinTurn()’ function to do the transition. I modified my ‘CheckForAnomalies()’ function to add a check for forward distance less than twice the desired offset distance. When this distance is detected, the robot stops and then makes a right-angle ‘spin’ turn (one side’s wheels go forward, the other side’s wheels go backward) in the direction away from the currently tracked wall, and then re-enters ‘Track’ mode, causing it to track the next wall normally. Here’s a short video showing the process:

robot tracking left wall, then making a ‘spin’ turn to follow the next wall

Now that I have it working for left-wall tracking, it should be easy to port it to the right-wall tracking condition.
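
For reference, here’s a sketch of the anomaly check described above; CheckForAnomalies() and SpinTurn() are the real function names, everything else is illustrative:

    // Inside CheckForAnomalies(): if the wall ahead is closer than twice the tracking
    // offset, stop, spin 90 deg away from the tracked wall, and resume tracking
    if (GetFrontDistCm() < 2.0f * WALL_OFFSET_TGT_CM)
    {
        StopBothMotors();
        bool turnCCW = trackingRightWall;        // away from the tracked wall: CCW if right, CW if left
        SpinTurn(turnCCW, 90.0f);
        opMode = MODE_TRACK;                     // re-enter 'Track' mode on the next wall
    }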

07 April 2022 Update:

Well, as usual, what I thought would be easy has turned out to be anything but. I was able to port the left wall tracking algorithm to the right side, but as I was testing the result, I noticed that Wall-E3 doesn’t really track the right (or left, for that matter) wall. After it ‘captures’ the desired wall offset and turns back to the parallel orientation, it basically goes straight ahead (the same speed applied to both sides’ motors). If the initial orientation is close to parallel, it looks like it is tracking, but it isn’t.

So, I tried a number of ideas to actually get it to track the desired offset, but they all resulted in poor-to-catastrophic tracking. After working the problem, I began to see that, as always, the issue is the errors associated with the VL53L0X sensor distance measurements. There are two distinct types of errors – an initial ‘calibration’ error associated with sensor-to-sensor variation, and the measurement error that occurs when the robot isn’t oriented parallel to the measured surface.

Calibration Errors:

Each individual VL53L0X sensor gets a slightly different value for the distance to the target, and sometimes ‘slightly’ can be pretty big – 2-3cm at 20cm, for instance. Up until now I had been ignoring these errors, but the time had come to do something about them. So, as I always do when troubleshooting an issue, I started taking data. I ran a bunch of trials for all seven VL53L0X sensors at various distances. After gathering the data, I used Excel’s curve-fitting capability to fit a linear equation to the points, as shown below:

The linear-fit equations gave me a starting point, but they still had to be tweaked a bit to provide the best possible match between what the VL53L0X sensor reported and the actual measurement. Again I used Excel to tweak the equations to give the best match as shown below:

Left-side ‘tweaked’ correction expression
Right-side ‘tweaked’ correction expression
Rear ‘tweaked’ correction expression

The expressions shown in red are the ones used to correct the VL53L0X-measured distances to be as close as possible to the actual distances (10cm, 20cm, 30cm).

The above corrections were coded into a set of seven ‘correction’ functions for the Teensy 3.5 program that manages the two VL53L0X arrays and the single VL53L0X rear distance sensor.

Correction functions in Teensy_7VL53L0X_I2C_Slave_V4.ino
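
Each of the seven functions is just the ‘tweaked’ linear expression for one sensor; a representative sketch (with made-up placeholder coefficients, not the actual fitted values) looks like this:

    // Representative per-sensor correction function (placeholder slope/intercept)
    uint16_t LF_Corr(uint16_t measCm)
    {
        return (uint16_t)(1.05f * measCm - 1.2f);   // corrected = slope * measured + intercept
    }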

While this did, indeed, solve a lot of problems – especially with the calculations for the wall offset capture initial approach angle – it still didn’t entirely address Wall-E3’s inconsistent offset tracking performance.

Orientation Angle Induced Errors:

Wall-E3 tracks the wall offset by comparing the center VL53L0X measurement to the desired offset, and adjusting the left/right motor speeds to turn the robot in the desired direction. Unfortunately, the turn also throws off the measurements, since the sensors are now pointing off-perpendicular and return a larger distance than the actual robot-to-wall perpendicular distance. I tried adjusting the PID controller algorithm to control the robot’s steering angle rather than the offset distance, and then calculating a new steering angle each time – this worked, but not very well.

So, the solution (I think) is to come up with a distance correction factor for off-perpendicular orientations. Going through the trigonometry, I came up with this expression:

corrdist=measdist*cos(steeringAngle) 

I programmed this into the following function:
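
In outline it looks like the sketch below; OrientCorr() is the function name used later in this post, but the argument list and the internal conversion are my assumptions:

    // Correct a measured side distance for off-perpendicular orientation:
    // corrdist = measdist * cos(steeringAngle)
    float OrientCorr(float measDistCm, float steerVal)
    {
        float corrAngleRad = (steerVal / 0.0175f) * DEG_TO_RAD; // steering value -> degrees -> radians
        return measDistCm * cos(corrAngleRad);
    }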

and then ran some tests to verify that the correction algorithm was having the desired effect. Here’s the setup:

and here are some Excel plots showing the results

Distance correction for off-perpendicular angles

As can be seen from the above plot, the corrected distance (gray curve) is pretty constant for angles of -30, 0 and +30 degrees.

09 April 2022 Update:

I have been thinking about the above orientation angle induced errors issue for a couple of days. I wasn’t really happy with that correction as shown in the above Excel plot, and it occurred to me that I didn’t really have to strictly abide by the above correction expression derived from the actual geometry. What I really wanted was a correction that would be accurate at low (or zero) offset angles, but would slightly over-correct for orientation angles in the +/- 30 deg range. In this way, when the PID engine adjusts the motor speeds to correct for an offset error, the system doesn’t try to run away. In fact, for a slight overcorrection algorithm, the center distance reported by the robot might actually go down rather than up for off-perpendicular angles. This would tend to make the PID think that it was over-correcting instead of under-correcting as it does with uncorrected distance reporting.

So, I went back to my test setup, and made some more measurements of corrected vs uncorrected center distances for -30, 0, and 30 degree orientations, for varying ‘tweak’ values in the correction expression, as shown in the Excel plots below:

Out of the above correction values, I like the “cos(1.1*corr_ang_rad)” configuration the best. The correction doesn’t modify the center distance at all for the parallel case, and produces a very slight over-correction at the +/- 30 degree orientation cases.

I added the ‘1.1*’ correction to the ‘OrientCorr()’ function and performed another right wall tracking test in my office ‘sandbox’ as shown in the short video below:

Right wall tracking with sensor calibration and orientation correction applied

Here is the telemetry output for this run:

Looking at the video and the telemetry, the first leg starts with the normal offset capture maneuver, which ends with the robot about 44cm from the wall. Then it makes a pretty distinct correction toward the wall, overshoots the desired offset, and winds up the leg at about 22cm from the wall.

The second leg again starts with a capture maneuver to about 43cm. Then it stabilizes at about 32cm from the wall – nice.

The third leg maneuvers to about 43cm, and then again stabilizes at about 30cm.

The fourth leg was a bit anomalous, as it appeared to way overcorrect after capturing the desired 40cm offset, but I couldn’t find anything in the telemetry to explain it. It’s a mystery!

It’s clear from the above that I no longer need to correct for orientation angle induced errors during the offset maneuver, as these are now handled by my recent ‘global’ correction code. This will probably help with subsequent offset tracking, as the initial offset should be closer to the offset target at the start of the tracking phase. We’ll see…

After a number of trial runs, I finally settled (as much as anything is ‘settled’ in the Wall-E world) on PID = (400,5,40). Here’s a short video showing performance in this configuration:

And, once again, I had to port this configuration and code back to the left-side wall tracking configuration. Here's a short video of left-side wall tracking. Interestingly, my 'random walk' PID tuning technique produced significantly different PID values for the left side (300,0,200) than for the right side (400,5,40). No clue why.

At this point, I believe I have gone about as far as I can at the moment for wall tracking. WallE3 now can consistently track the walls in my office ‘sandbox’ using either the left-side or the right-side wall for reference. My plan going forward is to ‘archive’ this version (WallE3_WallTrack_V5) by copying it to a new project. The new project will have the goal of integrating charging station homing/connection into the system.

In preparation, I recently modified the charging station lead-in structure to accommodate the wider wheelbase on WallE3, as shown in the following photo:

17 April 2022 Update:

Well, I spoke too soon when I said above that wall-tracking was "settled". I ran into a couple of significant problems. First, when the robot is already close to the proper offset, it is supposed to just turn parallel to the wall and then go into tracking mode, but on a number of occasions Wall-E3 ran out of control into the next wall. Second, wall tracking was anything but smooth, and I couldn't get it to reliably track the desired offset. So, back to the drawing board (again).

The ‘close enough’ failures were caused by a flaw in the ‘RotateToParallelOrientation()’ routine; as the robot approached the parallel orientation, the PID controller also slowed the rotation speed, to the point where the robot stopped rotating and just drove straight ahead. If the actual parallel orientation was never reached, the robot kept going straight indefinitely; oops! The fix was to abandon the RotateToParallelOrientation() subroutine entirely, and instead use WallOrientDeg() to get the current angular offset from parallel and SpinTurn() to turn through that angle back to parallel. RotateToParallelOrientation() is only called in two places (TrackRight/LeftWallOffset()), so the entire function can be removed as well.
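(A minimal sketch of the replacement logic; WallOrientDeg() and SpinTurn() are the project's existing functions, but the signatures and sign convention shown here are assumptions.)

#include <math.h>

// assumed signatures for the existing project functions
float WallOrientDeg();                    // current angular offset from parallel, in degrees
void  SpinTurn(bool bIsCCW, float deg);   // spin in place through 'deg' degrees

// replacement for RotateToParallelOrientation(): measure the angle off parallel,
// then spin through that angle in one shot instead of letting a PID dribble to zero
void AlignToParallel()
{
    float offDeg = WallOrientDeg();
    bool bIsCCW = (offDeg > 0);           // assumed sign convention
    SpinTurn(bIsCCW, fabs(offDeg));
}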

The issue with offset tracking continues to bedevil me. When the robot is turned to approach the offset, the measured distances go the wrong way, so the PID tends to ‘wind up’ and drive the robot toward or away from the wall, rather than smoothly approaching the offset. I thought I had the answer to this by ‘tweaking’ the distance corrections due to off-parallel angles, but sadly, this did not help.

So, I removed the off-angle distance correction and went back to just tracking the steering angle – a value proportional to the difference between the front and rear side distance measurements. Now tracking was much more stable, but the robot traveled in a straight line slightly toward the wall. After a few trials, I realized that the robot was doing exactly what I told it to do – drive the front/back measurement error to zero, but unfortunately ‘zero’ did not equate to ‘parallel’. After scratching my head for a while, I realized that rather than using zero as the setpoint, I should use the value that causes the robot to travel parallel to the wall – which turned out to be about 0.25. Using this value I could increase the Kp value back up to 400 or so, and this resulted in very good tracking of whatever offset resulted from the ‘offset approach’ phase of the tracking algorithm. Just this step was a huge improvement in tracking performance, but it wasn’t quite ‘offset tracking’ yet as it didn’t pay any attention to the actual offset – just the difference between the front and back wall offset measurements.

Once I had this working, I was able to re-incorporate my earlier idea of biasing the actual steering value with a term that is proportional to the actual offset, i.e.

WallTrackSteerVal = glRightSteeringVal + (float)(glRightCenterCm - offsetCm) / 50.f;

This, coupled with the empirically determined steering value setpoint of 0.25 resulted in a very stable, very precise tracking performance, as shown in the short video below and the associated telemetry and Excel plots.
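Putting the pieces together, each pass through the tracking loop now looks something like the sketch below. The steering expression and the +0.25 setpoint are as described above; the PID routine name and the surrounding scaffolding are stand-ins for the actual code.

// Hypothetical sketch of one pass through the right-wall tracking loop
float PIDCalcs(float input, float setpoint, float Kp, float Ki, float Kd);  // stand-in PID routine

const float STEER_SETPOINT = 0.25f;  // empirically determined 'travel parallel' setpoint
const float OFFSET_DIV = 50.f;       // scale factor for the offset-error bias term

float glRightSteeringVal;  // (front - rear)/100 from the right-side sensor array
float glRightCenterCm;     // corrected right-center distance, in cm

void UpdateRightWallTracking(float offsetCm)
{
    // steering value, biased by a term proportional to the offset error
    float WallTrackSteerVal = glRightSteeringVal + (glRightCenterCm - offsetCm) / OFFSET_DIV;

    // the PID drives WallTrackSteerVal toward the +0.25 setpoint; the output is then
    // used to adjust the left/right motor speeds (PID = (400,0,0) for the run below)
    float speedAdj = PIDCalcs(WallTrackSteerVal, STEER_SETPOINT, 400.f, 0.f, 0.f);
    (void)speedAdj;  // ... apply speedAdj to the left/right motor speeds ...
}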

PID (400,0,0), SetPoint = +0.25
All four wall sections – note straight lines are due to gaps between wall sections

So now I think I finally (I’ve only been working on this for the last three years!) have a wall tracking algorithm that actually makes sense and does what it is supposed to do – track the wall at a constant offset – yay!!

After getting the right side working, I ported everything back to the left side, with some differences: for the left-side approach phase, I wound up using a fudge factor of 10cm vice 5cm to get the approach to stop near the desired offset. Also, the base steering value setpoint was -0.35 instead of +0.25, and the input (WallTrackSteerVal) wound up being the left-side analog of the right-side expression shown above.

With these settings, Wall-E3 seemed pretty comfortable navigating around my office ‘sandbox’, as shown in the following short video:

Stay tuned,

Frank

Wall-E3 Right Wall Following Trial

Posted 23 March 2022

Earlier this month I was able to demonstrate a multi-lap left-side wall tracking run by Wall-E3 in my office ‘sandbox’. This post describes my efforts to extend this capability to right-side wall tracking.

Since I already had the left-side wall tracking algorithm “in the can”, I thought it would be a piece of cake to extend this capability to right-side tracking. Little did I know that this would turn into yet another adventure in Wonderland – but at least when I finally made it back out of the rabbit-hole, the result was a distinct improvement over the left-side algorithm I started with. Here’s the left-side code:

The above code works, in the sense that it allows Wall-E3 to successfully track the left-side wall of my ‘sandbox’. However, as I worked on porting the left-side tracking code to the right side, I kept thinking – this is awful code – surely there is a better way?

After letting this problem percolate for a few days, I decided to see if I could approach it a little more logically. I realized there were two major conditions associated with the problem: is the robot's initial position inside or outside the desired offset distance K? In addition, the robot can start out parallel to the wall, or pointed toward or away from the wall. Ignoring the ‘started out parallel’ degenerate case, this reminded me of a 3-parameter Karnaugh map configuration, so I started sketching it out in my notebook, and then later in a Word document, as shown below:

As shown above, I broke the 3-parameter map into two 2-parameter Karnaugh maps, with the output denoted by αT. After a few minutes it became obvious that the formula for αT is pretty simple: it's either αR – αA1 or αR – αA2, depending on whether the robot starts out outside or inside the desired offset distance. In code, this boils down to one line, as shown at the bottom of the Karnaugh map above, using the C++ ‘?’ ternary operator. Choosing CW vs CCW is easy too, as a negative result implies CCW and a positive one implies CW. The actual code block is shown below:
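(The actual block is an image in the original post; here's a minimal sketch of the same one-liner, with variable names that are stand-ins for whatever the real code uses.)

#include <math.h>

// Hypothetical sketch of the one-line total-turn computation from the Karnaugh map.
// alphaR is the rotation needed to face the wall; alphaA1/alphaA2 are the approach
// angles for the 'outside' and 'inside' starting cases, respectively.
float ComputeTotalTurnDeg(float alphaR, float alphaA1, float alphaA2, bool bIsOutside)
{
    // single expression using the C++ ternary operator
    return bIsOutside ? (alphaR - alphaA1) : (alphaR - alphaA2);
}

// usage: the sign of the result picks the turn direction
//   float alphaT = ComputeTotalTurnDeg(alphaR, alphaA1, alphaA2, bIsOutside);
//   bool bIsCCW = (alphaT < 0);   // negative -> CCW, positive -> CW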

Here’s a short video of Wall-E3 navigating the office ‘sandbox’ while tracking the right-side wall.

So, it looks like Wall-E3 now has tracking ability for both left-side and right-side walls, although I still have to clean things up and port the simpler right-side code into TrackLeftWallOffset().

25 March 2022 Update:

Well, that was easy! I just got through porting the new right-side wall tracking algorithm over to TrackLeftWallOffset(), and right out of the box was able to demonstrate successful left-wall tracking in my office ‘sandbox’.

At this point I believe I’m going to consider the ‘WallE3_WallTrack_V3’ project ‘finished’ (in the sense that most, if not all, my wall tracking goals have been met with this version), and move on to V4, thereby limiting the possible damage from my next inevitable descent through the rabbit hole into wonderland.

Stay tuned,

Frank