WallE3 Wall Track Tuning Review

Posted 22 January 2023

This post is intended to get me synched back up with the current state of play in my numerous wall track tuning exercises. I am using these posts as a memory aid, as my short-term memory sucks these days.

Difference between WallE3_WallTrackTuning_V5 & _V4:

  • V5 uses ‘enums.h’ to eliminate VS2022 intellisense errors
  • V5 RotateToParallelOrientation(): ported in from WallE3_ParallelFind_V1
  • V5 MoveToDesiredLeftDistCm uses Left distance corrected for orientation, but didn’t make the change from uint16_t to float (fixed 01/22/23)
  • V5 changed parameter input from Offset, Kp,Ki,Kd to Offset, RunMsec, LoopMsec
  • V5 modified to try ‘pulsed turn’ algorithm.

Difference between WallE3_WallTrackTuning_V4 & _V3:

  • V4 added #define NO_LIDAR
  • V4 changed distance sensor values from uint16_t to float
  • V4 experimented with ‘flip-flopping’ WallTrackSetPoint from -0.2 to +0.2 depending on how close the corrected center distance was to the desired offset (However, I believe this was done incorrectly – the PID engine compared the corrected center distance to WallTrackSetPoint – literally apples to oranges)

So WallE3_WallTrackTuning_V5 seems to be the latest ‘tuning’ implementation.

Comparison of WallE3_WallTrack_Vx files:

WallE3_WallTrack_V2 vs WallE3_WallTrack_V1 (Created: 2/19/2022)

  • V2 moved all inline tracking code into TrackLeft/RightWallOffset() functions (later ported back into V1 – don’t know why)
  • V2 changed all ‘double’ declarations to ‘float’ due to change from Mega2560 to T3.5

WallE3_WallTrack_V3 vs WallE3_WallTrack_V2 (Created: 2/22/2022)

  • V3 Changed left/right/rear distances from mm to cm
  • V3 Concentrated all environmental updates into UpdateAllEnvironmentParameters();
  • V3 No longer using GetOpMode()

WallE3_WallTrack_V4 vs WallE3_WallTrack_V3 (Created: 3/25/2022)

  • V4 Added ‘RollingForwardTurn()’ function

WallE3_WallTrack_V5 vs WallE3_WallTrack_V4 (Created: 3/25/2022)

  • No real changes between V5 & V4

24 January 2023 Update:

I returned WallE3_WallTrackTuning_V5 to its original configuration, using my custom PIDCalcs() function with the following modified inputs:

So this is the ‘other’ method – using the modified steering value as the input, and trying to drive the system to zero. Within just a few trials I rapidly homed in on one of the PID triplets I had used before, namely PID(3,0,1). Here’s the raw output, a plot of orientation-corrected distance vs setpoint, and a short video on my 4m straight wall section:
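
The snippet with those modified inputs didn’t survive the trip to this post, but the method boils down to something like the following sketch. This is my guess at the shape of the code; the variable names, sign conventions, and the way the result is handed to my PIDCalcs() function are all illustrative, not copied from the program:

```cpp
// 'Steering value' method: the PID input is the front/rear distance
// difference plus an offset-error tweak, driven toward a setpoint of zero.
float steerval = (LeftFrontCm - LeftRearCm) / 100.f;         // ~0 when parallel
float tweak    = (LeftCenterCorrCm - WALL_OFFSET_CM) / 10.f; // offset error / 10
float input    = steerval + tweak;  // handed to PIDCalcs() with setpoint = 0
```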

So, the ‘steering value tweaked by offset error’ method works on a straight wall with a PID of (3,0,1). This result is consistent with my 11 January 2023 Update of my ‘WallE3 Wall Tracking Revisited‘ post, which I think was done with WallE3_WallTrackTuning_V5 (too many changes for my poor brain to follow).

Unfortunately, as soon as I put breaks in the wall, the robot could no longer follow it. It runs into the same problem: the robot senses a significant change in distance and starts to turn to minimize it, but the distance continues to go in the wrong direction due to the change in the robot’s orientation. This feedback continues until the robot is completely orthogonal to the wall.

26 January 2023 Update:

I went back and reviewed the post that contained the successful 30 October 2022 ‘two 30deg wall breaks’ run, and found that the run was made with the following wall following code:

As can be seen from the above, the relevant line compares the orientation-corrected wall distance to the desired offset distance, as opposed to comparing the computed steering value to a desired steering value of zero with a ‘fudge factor’ of the distance error divided by 10.

So, I modified WallE3_WallTrackTuning_V5 to use the above algorithm to see if I could reproduce the successful ‘two breaks’ tracking run.

Well, the answer was “NO” – I couldn’t reproduce the successful ‘two breaks’ tracking run – not even close – grrr!!

So, as kind of a ‘Hail Mary’ move, I went back to WallE3_WallTrack_V2, which I vaguely remember as the source of the successful ‘two break’ run, and tried it without modification. Lo and Behold – IT WORKED! Whew, I was beginning to wonder if maybe (despite having a video) it was all a dream!

So, now I have a baseline – yes!!! Here’s the output and video from a successful ‘two break’ run:

Successful ‘two break’ run with WallE3_WallTrack_V2

And here is the wall tracking code for this run:

And here is just the wall tracking portion of the above function:

From the above, it appears that WallE3_WallTrack_V2 uses a ‘steering value’ setpoint of zero, and also ‘tweaks’ the input to the PID engine using the error between the measured wall distance and the desired wall offset, as shown in the following snippet:
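
The snippet itself didn’t make it into this post, but based on the description it amounts to something like this hedged reconstruction (V2 works in mm; all names and sign conventions here are illustrative):

```cpp
// WallE3_WallTrack_V2 approach: steering-value setpoint of zero, with the
// PID input 'tweaked' by the wall-offset error divided by 1000 (mm units).
float steerval      = (float)(LeftFrontMm - LeftRearMm) / 1000.f;
float offset_factor = (float)(LeftCenterCorrMm - WALL_OFFSET_MM) / 1000.f;
float input         = steerval + offset_factor; // PID drives this toward zero
```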

This is very similar to what I was trying to do in WallE3_WallTrackTuning_V5 initially (see the sketch in the ’24 January 2023 Update:’ above).

So, the original Tuning_V5 math appears to be identical to the WallTrack_V2 math; both use an offset factor as shown:

WallE3_WallTrackTuning_V5:

WallE3_WallTrack_V2:

Aha! In WallE3_WallTrack_V2 the offset error (in mm) is divided by 1000, but in WallE3_WallTrackTuning_V5 the offset error (in cm) is divided by 10, which still leaves a factor of 10 difference between the two algorithms! I should be dividing the cm offset by 100 – not 10!
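
To make the factor-of-10 mismatch concrete, here’s the same physical offset error (10cm = 100mm) run through all three divisors; this is just worked arithmetic, not code from either program:

```cpp
float tweak_v2   = 100.f / 1000.f; // WallTrack_V2, mm units:    0.10
float tweak_bad  = 10.f / 10.f;    // Tuning_V5 as written (cm): 1.00 (10x too big)
float tweak_good = 10.f / 100.f;   // Tuning_V5 corrected (cm):  0.10 (matches V2)
```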

28 January 2023 Update:

SUCCESS!!! So I went back to my WallE3_WallTrackTuning_V5 program, and modified it to be the same as WallE3_WallTrack_V2, except using distances in cm instead of mm, and dividing by 100 instead of 10. After the usual number of stupid errors, I got the following successful run on the ‘two break’ wall setup:

WallE3_WallTrackTuning_V5 using ‘tweaked’ steering value. Average LCCorr = 36.9 cm

After running this test a couple more times to assure myself that I wasn’t dreaming, I started to play around with the PID values to see if I could get a bit better performance. The first run (shown above) produced an average offset distance of about 36.9cm. A subsequent run showed an average of 26.4cm. This implies that the steering value ‘tweak’ isn’t really doing much. For instance, one telemetry line shows that for a corrected distance of 28.13cm, the calculated steerval is (21.4-19.3)/100 = 2.1/100 = 0.021 and the ‘offset_factor’ is (40-19.3)/100 = 20.7/100 = 0.207. So the ‘tweaked’ steerval should be 0.228 and should produce an error term of +0.228, but the program is only reporting an error of 0.00!

I added the steerval and the offset_factor to the output telemetry and redid the run with 300,0,0 as before. This time I got:

In this case, steerval = (37.7 – 37.1)/100 = +0.06 (pointing slightly away from the wall), tweak = (38.87-40)/100 = -1.13/100 = -0.013, for a total error of +0.0487, giving left/right motor speeds of 60/89, i.e. correcting slightly back toward the wall – oops! It looks like I need to use a slightly smaller divisor for the ‘tweak’ calculation.

I added the divisor for the offset_factor to the parameter list for ‘Tuning_V5’ and redid the run, using ‘100’ to make sure I got the same result as before.

I was able to confirm that using this method with the ‘tweak’ divisor as a parameter I got pretty much the same behavior. Then I started reducing the divisor to see what would happen as the ‘tweak’ became more of a factor in the PID calculation.

For a divisor of 75, we got the following output:

Looking at the line at time 124142, we see:

The steering value is very low (0.02) because the front and rear sensor distances are very close, but because the robot is well inside the intended offset distance of 40cm, the ‘tweak’ value of -0.11 is actually dominant; it drives the robot’s left motors harder than the right ones, which should correct the robot back toward the 40cm offset. When we look at the Excel plot, we see:

tweak divisor 75, two break wall.

The plot shows the ‘tweak’ value becoming more negative as the corrected distance becomes smaller relative to the desired offset distance of 40cm, and thus tends to correct the robot back toward the desired offset. At the 12.41sec mark (shown by the vertical line in the above plot) the ‘tweak’ value is -0.11, compared to the steering value of +0.02, so the ‘tweak’ input should dominate the output. With a P of 300, the output with just the steering value would be -300 x 0.02 = -6, resulting in left/right motor speeds of 69/81, steering the robot very slightly toward the wall. However, with the ‘tweak’ value of -0.11 the output is -300 x (0.02 - 0.11) = +27 (actually +28), resulting in motor speeds of 103/46, or moderately away from the wall, as desired.

To simplify the tuning problem, I changed the wall configuration back to a single straight wall, but started the robot with a 30cm offset (10cm closer than desired) but still parallel (steering value near zero). Here’s the output:

As can be seen from the above plot, the robot starts out at about 32cm, and slowly closes to about 28cm. Simultaneously the ‘tweak’ value goes from about -0.12 to about -0.18 (at 109749 mSec, the black line). The result is the robot starts moving away from the wall, getting to the desired 40cm offset a little over 2sec later (the red line). After this point it maintains about 40cm offset, as desired. The average offset distance from the red line to the end of the run is about 37.5cm – nice!

So it looks like P = 300 and tweak divisor = 75 is a nice starting point.

30 January 2023 Update:

Now that I have both the PID and divisor values as parameters, I plan to make some more ‘straight wall’ runs to see how the robot behaves. My belief is that a slightly lower divisor ratio, and possibly a slightly higher P value, will be beneficial – we’ll see.

Starting with P = 300 and divisor = 50:

note starting offset of approx 31cm.

This run started with the robot placed roughly parallel to and offset about 32cm from the start of the 4m straight wall section. For the first two seconds the offset increased monotonically to about 38cm, and after that the robot maintained an offset between 35 and 39cm. The average for the ‘maintenance’ portion of the run was approximately 38cm – nice!

PID(350,0,0), divisor = 50:

note starting offset of approx 25cm

As can be seen in the above plot, the robot started off at an offset of approx 27cm, and rapidly (about 1.5sec) moved to an offset of about 38cm. After that it maintained an offset of between 35 and 41cm for the rest of the run, for an average of 37.5cm. This is excellent performance, and now I have to wonder just a bit if a separate ‘offset capture’ phase is really required.

Making another run with the same parameters (350,0,0) div 50 but with the robot placed more or less parallel but about 10cm from the wall:

As can be seen from the above raw data and plot, the robot starts out at about 15cm and rapidly (within about 1sec) moves to about 37cm. After that the robot maintains a wall distance between 35 and 40cm, with an average distance of 38.4cm – very nice!

Here’s a short video showing the action:

After this run I decided to try a wall configuration with a single 30º break using the same PID(350,0,0) and div factor (50) as before. The robot was placed nearly parallel with an approx 13cm offset. Here’s the telemetry, the corresponding Excel plot, and a short video.

note starting offset of approx 12cm

After this run I decided to push my luck and try a ‘two break’ wall configuration, again with a very small initial offset. Here’s the raw telemetry output, the corresponding Excel plot, and a short video showing the action.

Black and red lines show approx location of first and second breaks, respectively

From the above it is pretty clear that PID(350,0,0) and ‘tweak’ divisor 50 does a very good job of tracking a straight, one-break or two-break wall at a defined offset distance. In addition it is pretty clear that this configuration does not require a separate ‘offset capture’ function – it does just fine all by itself. I guess I’m a little bummed out that I spent so much time ‘perfecting’ (to the degree that anything I do can be said to be ‘perfect’) the capture function.

23 July 2023 Update:

I have been trying to address the tendency of WallE3 to oscillate back and forth after navigating past the 45º break from the entrance hallway into the kitchen, and I kept getting the feeling I had already solved this problem at least once before. I searched through my older posts and found this one, which clearly shows much better performance than I was now seeing. After carefully perusing the above data, I finally figured out the difference; the above runs used a 50mSec interval, and I was currently using a 100mSec interval – oops!

I changed my current code to 50mSec, and ‘sure nuff’ WallE3’s wall tracking performance around breaks improved dramatically – yay!

This episode proves once again the value of copious documentation – you never know when you will need advice from your former self!

After changing the PID update interval to 50 vs 100 mSec, I made a few runs on my ’45º break’ wall configuration with PID = (350,0,0), (350,0,10), (350,0,20), and (350,0,30). As shown in the following short videos, both the (350,0,10) and (350,0,20) runs produced better tracking performance, but the (350,0,30) run was definitely inferior.

PID = (350,0,0)
PID = (350,0,10)
PID = (350,0,20)
PID = (350,0,30)

After these tests, I edited WallE3_Complete_V3 to use PID = (350,0,20) with a 50mSec interval.

Move to a Specified Distance, Revisited

Posted 24 December, 2022

As part of the suite of tools associated with wall tracking and IR beam homing, I created a set of ‘move to specified distance’ routines using my home-grown PID algorithm as follows:

  • MoveToDesiredFrontDistCm (MTFD)
  • MoveToDesiredRearDistCm(MTRD)
  • MoveToDesiredLeftDistCm(MTLD)
  • MoveToDesiredRightDistCm(MTRD)

I saw some odd problems in my past wall-tracking exercises, so I thought it would be a good idea to go back and test these in isolation to work out any bugs. As usual, I constructed a limited part-task program for this purpose, and ran a number of tests on my desktop ‘range’. I started with the ‘MoveToDesiredFrontDistCm()’ function, as shown below:

MoveToDesiredFrontDistCm():
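
The listing itself is missing here, but the function is basically a PID loop on the front LIDAR distance. Here’s a minimal sketch of the idea; the helper names (GetFrontDistCm(), SetBothMotorSpeeds(), StopBothMotors()) and the constants are stand-ins for the robot’s actual functions, not the real code:

```cpp
// Drive forward/backward until the front distance lands in a +/-1cm
// 'basket' around the target (59-61cm for a 60cm target).
bool MoveToDesiredFrontDistCm(float targetCm)
{
    const float Kp = 1.5f, Ki = 0.1f, Kd = 0.2f; // values arrived at below
    float lastErr = 0.f, errSum = 0.f;
    float frontCm = GetFrontDistCm();

    while (fabsf(frontCm - targetCm) > 1.f)
    {
        float err = frontCm - targetCm;   // + = too far away, move forward
        errSum += err;
        float spd = Kp * err + Ki * errSum + Kd * (err - lastErr);
        lastErr = err;
        spd = constrain(spd, -MOTOR_SPEED_MAX, MOTOR_SPEED_MAX);
        SetBothMotorSpeeds(spd);          // negative = back up
        delay(MSEC_PER_DIST_MEAS);        // measurement interval (see below)
        frontCm = GetFrontDistCm();
    }
    StopBothMotors();
    return true;
}
```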

Here’s some output from a typical run:

The movement goes OK, and the robot telemetry says it stopped very close to the desired 60cm distance. However, when I measured the actual distance from the ‘wall’ to the robot, I got more like 67 or 68cm, probably indicating that the robot coasted some after the motors were turned off. When I instrumented the code to show the next 10 distance measurements after the exit from the subroutine, it became easy to see that this was the case – the robot coasted from 59 to 67cm. The robot should correct this by going back the opposite way, but it doesn’t because the last measurement (59cm) fits inside the 59-61cm ‘basket’ for subroutine termination (the ‘while’ loop termination criteria).

So, I thought what I could do is check the reported front distance after function exit, and just call it again if the robot had coasted too far from the target. The second run should get much closer, as the starting error term would be much smaller. So now the test code looks like this:
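
In outline, the test code amounts to this hedged sketch (the 2-second wait and re-check mirror the coasting behavior described above):

```cpp
// Call the move routine, let the robot coast to a stop, then call it
// again if the settled distance drifted out of the +/-1cm basket.
MoveToDesiredFrontDistCm(60.f);
delay(2000); // let coasting die out (59 -> 67cm in the run above)
if (fabsf(GetFrontDistCm() - 60.f) > 1.f)
{
    MoveToDesiredFrontDistCm(60.f); // second pass starts from a small error
}
```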

This should have worked well, except when I tried it, the function exited with a ‘STUCK_AHEAD’ error – oops! Here’s a run going from 60 to 30cm:

The ‘stuck’ checks have to be there because the robot can’t ‘see’ obstacles that are too low to interrupt the front LIDAR beam, or aren’t directly in the line of sight, but how to manage? The ‘stuck’ checks depend on a variance calculation on the last 50 measurements (held in a bit-bucket array), so I began to wonder why the front variance was decreasing so rapidly while the rear variance wasn’t (one would think they would behave more or less the same).

So I went back and printed out the front & back distance array contents after each run, and saw that the reason the front variance was decreasing so rapidly was because I was using a 50mSec time interval, meaning the 50-element array was getting filled in 2500mSec – about 2.5sec. So, in order to get more variance in a normal run, either the run has to be done at a faster speed (leading to more overshoot) or the timing interval has to be increased. In earlier work I had discovered that the front LIDAR system produces errors for long distance measurements when using short measurement intervals, so I increased the measurement interval from 50 to 200mSec, and now the variance decreases at a much slower rate – yay!
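
For context, here’s a minimal sketch of the variance-based ‘stuck’ test described above; the array size matches the 50-measurement window, but the threshold value and all the names are mine, not the program’s:

```cpp
const int   N_DIST_SAMPLES   = 50;   // ~2.5sec at 50mSec, ~10sec at 200mSec
const float FRONT_VAR_THRESH = 2.0f; // illustrative threshold
float frontDists[N_DIST_SAMPLES];    // updated once per measurement interval

float Variance(const float* a, int n)
{
    float mean = 0.f, var = 0.f;
    for (int i = 0; i < n; i++) mean += a[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += (a[i] - mean) * (a[i] - mean);
    return var / n;
}

// in the movement loop:
// if (Variance(frontDists, N_DIST_SAMPLES) < FRONT_VAR_THRESH) -> STUCK_AHEAD
```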

With the longer 200mSec measurement interval, the movement routine now has time to adjust for overshoots, as shown below:

As the above run shows, the robot stopped very close (59cm) to the desired 60cm, without any significant overshoot, and the front variance actually increased significantly during the run – nice!

The following run was in the other direction – from 60 to 30cm:

The robot did a very good job of stopping right on the mark, but coasted 2cm further over the next 2sec. This caused the test program to run the movement routine a second time, but it almost immediately exited again. The front/rear variance numbers were quite high, well out of the danger zone (the error codes shown did not affect the routine – it only looks for ‘STUCK_AHEAD’ and ‘STUCK_BEHIND’, which trigger on front and rear variance thresholds respectively).

So, for at least the ‘MoveToDesiredFrontDistCm()’ function, it appears that PID = (1.5, 0.1, 0.2) works very well. On to the next one!

MoveToDesiredRearDistCm():

Using the same PID set as for MoveToDesiredFrontDistCm() produces excellent results for the ‘rear motion’ routine as well. Here’s a run going from 60 to 30cm based on the rear distance sensor:

MoveToDesiredLeftDistCm():

Next, I tackled the ‘MoveToDesiredLeftDistCm()’ function, which uses the left center distance measurement (corrected for orientation angle). This went fairly quickly, as the PID values for the front/back case seemed to work well for the ‘side’ case too. Here’s the output and a short video from a typical run:

MoveToDesiredRightDistCm():

For this side, I simply copied ‘MoveToDesiredLeftDistCm()’ and changed all occurrences of ‘Left’ to ‘Right’. Here’s the telemetry and a short video from a typical run:

Summary:

It looks like all four (front/back/left/right) motion features are now working fine, with the same PID = (1.5, 0.1, 0.2) configuration – yay!

Now the challenge is to integrate this all back into my mainline program and start testing wall tracking again.

Stay Tuned,

Frank

Boca Chica Space X Pilgrimage

Posted 26 January 2022.

I grew up on Florida’s east coast (Daytona Beach) in the 50’s and 60’s. We could watch the launches from our front yard, and once my dad and I travelled the 100 miles or so south to Cape Canaveral to witness one of the Apollo launches. Back in those days folks just parked alongside the roads and watched. I vividly remember watching the launch, and many seconds later actually feeling the sound vibrate through my chest – what an experience! Later, working as an engineer for the government, I watched the Challenger disaster from a Florida highway overpass. Since then I have watched NASA become a cringing shadow of itself and the U.S. become a nation without the capacity to launch humans into space. It took Elon Musk and Space X to show the world how to make space accessible again, and show us an even bigger dream – Starship and Mars.

I have been following developments at Boca Chica, Texas since shortly after StarHopper made its famous 500′ flight in August of 2019, and have watched through the eyes of Boca Chica Mary and all the other NSF crew as the launch site grew from nothing but a single concrete landing pad to the state it is in today, with sub-orbital and orbital flights visible on the horizon. Regardless of whatever else happens or where it happens, Boca Chica Texas will be forever known as the place where human interplanetary travel got its start.

I have wanted for some time now to see this magical place with my own eyes, and this week my wife and I finally got to do our Starbase pilgrimage. We flew from Columbus, OH to Harlingen, Tx on Southwest. We stayed overnight in Brownsville, and then made the drive out to Starbase this morning. All the pictures, videos, and commentary can’t do justice to the real thing. In this post I hope to add my ‘pilgrim’s point of view’ in the hope that others will be encouraged to do their own pilgrimage.

In preparation for the trip, I had been asking about Starbase do’s and don’ts since last November, but I hadn’t gotten much in the way of useful information. NSF Discord member xredbaron62x was the most helpful, passing along links to some other threads talking about Boca Chica trips, but the information itself wasn’t very useful. In particular, most posters mentioned that South Padre Island (SPI) was the best place to stay, but that option doubles the round-trip distance to the launch site. You have to go from SPI to Brownsville, and then Brownsville to BC. I couldn’t see the logic for this unless there was a specific reason, like a known launch date, so we booked a hotel in B’ville.

We arrived late Tuesday night, and sacked out at the hotel. The next morning we got up, had breakfast there, and headed out about 9 am. There were closures scheduled for both the days we were going to be there, but I had checked the night before and the Wednesday one had already been cancelled, so we didn’t have to worry about the 10am closure. As it turned out, leaving B’ville at 9am meant we were ‘off-shift-change’ and traffic along Hwy 4 was almost non-existent. As we drove south on a beautiful sunny day, I happened to notice what appeared to be brand-new high-voltage utility poles alongside the road (I’m an Electrical Engineer – we tend to notice these things).

brand-new high-voltage utility lines? There’s nothing out here, except for…..

With my background as an EE I could tell these poles were carrying some serious high voltage and high current, based on the number of conductors per insulator position, and the spacing between the insulators. Since there’s nothing on this road except Space X’s production and launch facilities, I jumped directly to the conclusion that these lines were intended to upgrade power at the Space X sites. As we travelled south, we could see that these power lines were just being installed, as we started passing construction crews and very large conductor rolls.

Here the power lines switched to the west side, and you can see the big rolls of conductors waiting for installation

Further on, we saw an odd roadside sign, complete with a yellow smiley-face. This is the entrance to the shooting range we’ve all heard about in conjunction with Starbase.

Sometime after that, I started to see the instantly recognizable skyline of the production site, and very faintly beyond that, the launch site itself. I like this photo because it gives the viewer some idea of the isolation of this place; there’s nothing around except miles and miles of miles and miles.

Now entering the Twilight Zone…..

Here’s a short video to emphasize the ‘miles and miles of miles and miles’

We stopped a little later on so I could take some additional photos, and I happened to notice a roadside historical memorial to ‘Camp Belknap’, where 7,000-8,000 volunteers for the 1840s war with Mexico were housed in terrible conditions over a Texas summer. Looking out over the terrain, I can’t imagine anyone lasting for long in such desolation, much less 7,000-8,000 soldiers. Then I noticed that the site is well-kept, with red, white, and blue flowers on one side, and an American flag on the other – maybe there’s still hope for our country after all!

very well kept roadside historical monument

As we got closer to the production site, details – like SN16, SN15, and (Booster 3??) – became visible

Here’s a short video taken as we passed the iconic ‘S T A R B A S E’ sign

We drove on past the production site, me catching just a glimpse of a woman with a big camera (BC Mary?). Between the production site and the launch site, we came to this – a big stop-sign in a road construction area.

Big-time road construction

The next mile or so between the production and launch sites was all torn up; I think they have been working on this section of the road forever.

And here we are at the launch site, watched over by the one and only Starhopper

Starhopper rules the launch site

As we wandered along the road-edge on the far side of the road, I noticed that there wasn’t much going on, and thought I might be able to get an ‘I was here’ photo from near the security booth. So, I trotted over to the security booth and asked the guard if it would be OK. He said “Sure – see that line in the pavement (pointing to a faint line that apparently marked the near edge of the public road)? – just as long as you are on the other side of that line you are OK”. Cool! So I stepped forward about 1 foot and my wife took this shot – Yay!

“Just over the line” at the entrance to the launch site

Next we went down to the beach, as I wanted to see if I could find one of the many robot cameras we all have been watching.

Nice beach, with some rain clouds coming in from offshore
My wife and chauffeur for our Boca Chica adventure!

I didn’t find any NSF cameras, but I did find a guy with a huge camera linked to a Starlink terminal. He said he was with a German outfit that was doing regular narrated Starbase updates (I forgot the name of the app), and he pointed further down the beach where he said some Lab Padre guys were set up. By this time the offshore rain clouds had gotten a lot closer, so we beat feet back to the car.

On our way back to the hotel, we ran across this sign, which we thought summed up our hopes and wishes for Starbase

Stay tuned

Frank

The Tao of Engineering

Posted 17 December 2021

I have a step-granddaughter of whom I am immensely proud, not least because she has “the knack” – i.e. been bitten by the engineering bug. Although we never talked much as she was growing up, we (or at least “I”) have begun to talk about engineering in general and her Materials Science studies in particular. This morning I was searching my emails for a missing link to her blog site when I ran across a forgotten email from three years ago where I described some of my approach to problem-solving. As I glanced through the material, it hit me that this could be something that others might benefit from, so I decided to post it on my blog site – maybe others will see it and benefit – maybe not. Anyhoo, here it is:

Claire,
I had a lot of fun geeking with you over the weekend with your Raman spectroscopy project, and it occurred to me that you and I look at problems like this from different ends of a long journey (end of mine and start of yours).
So, in an effort to fulfill my role as GOG, I thought I would pass along some thoughts on ‘the Tao of Engineering’:

  • It’s rarely obvious how to get from where you are to where you want to go, but it is usually fairly easy to see how to get from where you are to someplace that might give you some additional insight into the problem.
  • It is extremely easy to exhaust yourself going in circles.  This is probably the number one killer of prospective engineers/scientists.  If you can develop an effective technique for avoiding this one problem, you’ll be much more successful.  Everyone believes they can recognize this phenomenon and won’t fall victim to the trap, but humans aren’t evolved to handle long-term problems – their memory isn’t good enough.  There are probably an infinite number of effective ‘circle-breaking’ techniques, but the one that I chose/discovered is to write everything down, in excruciating detail, as if I were describing the situation to someone else.  This produces an intellectual ‘breadcrumb trail’ that allows you to recognize when you are covering the same ground again.  Over and over again I have experienced that the act of documenting a problem will often solve it entirely, or at least illuminate bad assumptions and/or fruitful lines of inquiry.
  • Make sure you actually understand the problem; I can’t tell you how many times I solved the problem I wanted to solve (easier, more fun, whatever) rather than the one I was supposed to solve.  Any problem, no matter how large or small, can be described in a single paragraph.  If you can’t write a single paragraph that completely describes the problem, then you don’t understand it.  I would tell my young engineers they had to be able to put the entire problem description on one side of a 3×5 card, in an easily readable type size.
  • Physics doesn’t care what you think!  The data is the data, and you have to adjust to it – as it will never adjust to you.  Too many people try to make the data tell the story they want to hear, rather than listening to the story the data wants to tell.
  • Until proven to be correct, always assume everything is wrong. Find a way to cross-check everything.  Many times this can be accomplished by ‘cheating’ – using a known-good information set as the input to a process or program, and comparing the answer from the program/process to the already-known answer.  If they agree, then it’s a good bet that the program/process can then be trusted to do the same for your data.  If they don’t, then you need to discard that program/process, or find out why it doesn’t work the way you think it should (referring back to ‘physics doesn’t care what you think’ as necessary).  Everyone wants ‘THE BIG RED BUTTON’ (like the Staples ‘That was easy’ button) that solves the problem in one fell swoop, but that almost never happens (if it were that easy, it wouldn’t be your homework/lab assignment).

I have attached a Word document showing how I used my ‘document everything’ technique on a recent problem I had with a solid-state accelerometer module I wanted to use on my autonomous robot.  As usual, I ran around in circles for a while chasing ghosts until I decided to get serious and start documenting things, and then (also as usual), I started making progress toward a real solution.
love,
Frank


G.Frank Paynter, PhD
OSU ESL Research Scientist (ret)
EM Workbench LLC
614 638-6749 (cell)


ILI9341-Based Digital Clock Project

A while ago, the clock/time display on our microwave started having problems; it’s a 7-segment vacuum-fluorescent display, and a couple of the segments are no longer lighting up. Instead of “End”, we now see “Erd”, and the clock is getting harder to read. The microwave itself is still running fine, but because this particular display is our primary time display in the house (aside from our phones), not having a good clock display is annoying.

I investigated getting the appropriate repair part for the display, but that module costs more than the entire microwave did originally, so that didn’t seem like a practical idea. And while I can get a cheap stick-on clock for just a few bucks, what’s the fun in that?

So, I’m in the middle of a design project to build a digital clock to replace the one on the microwave. At first I built up a clock using a Teensy 3.2 and one of my spare Nokia 5110 LCD displays, but that turned out not to be very practical, as the display is basically unreadable from more than just a few feet away, with or without backlighting.

So, I started looking around for better displays, and ran across the ILI9341 TFT display with a touch-sensitive screen. Even better, this was available with Teensy-optimized drivers, as shown here – what a deal! I started thinking that with a touch-sensitive screen, I might be able to implement some on-screen buttons for setting the time, which would be way cool!

So, I ordered two of these displays from PJRC, and started working on the design. Here’s the initial schematic

Here’s my initial breadboard setup:

Teensy 3.2 in foreground, Adafruit DS3231 RTC in background, ILI9341 TFT touch display

I originally used the Arial proportional font provided by the library, but I discovered that it produced bad artifacts after a few hours, as shown below. The only way to avoid these artifacts is to refresh the entire screen on every pass through the 1-second timing loop, which causes a very annoying ‘blink’. Eventually I figured this out, and changed to a non-proportional font, as shown below:

Non-proportional font looks worse, but doesn’t suffer from artifacts
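
For reference, the artifact-free fixed-width update boils down to an opaque text draw: because every character cell is the same size, printing with both a foreground and a background color overwrites the old digits in place, with no full-screen refresh. A minimal sketch using ILI9341_t3-style calls (coordinates and text size are illustrative):

```cpp
tft.setTextColor(ILI9341_WHITE, ILI9341_BLACK); // fg + bg = opaque characters
tft.setTextSize(4);
tft.setCursor(TIME_X, TIME_Y);  // TIME_X/TIME_Y are my placeholder constants
tft.print("07:42:13");          // new digits erase the old ones cell-by-cell
```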

Once I got everything working (well, except for the touch screen stuff that I plan to add later), I fabricated a nice box using OpenSCAD and Tinkercad, and made a more permanent version using a half-sized ‘perma-proto’ board, as shown below

Here’s the more-or-less final schematic

7 March 2021 Update:

After a long and somewhat agonizing journey, I finally got the touch-screen stuff working, so I can now adjust the time on my digital clock using on-screen touch-sensitive controls. Here’s the updated schematic:

The only real difference between this one and the previous schematic is the T_IRQ line is no longer connected; the Teensy-optimized XPT2046_Touchscreen.h/cpp library doesn’t use it.
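
Since T_IRQ is unused, the touch controller is simply polled. Here’s a minimal sketch with the XPT2046_Touchscreen library; the chip-select pin number is illustrative:

```cpp
#include <SPI.h>
#include <XPT2046_Touchscreen.h>

#define TS_CS_PIN 8                 // illustrative chip-select pin
XPT2046_Touchscreen ts(TS_CS_PIN);  // no T_IRQ pin passed -> polled operation

void setup()
{
    ts.begin();
}

void loop()
{
    if (ts.touched())               // poll for a touch
    {
        TS_Point p = ts.getPoint(); // raw x/y/z from the controller
        // map p.x/p.y to screen coordinates and hit-test the buttons here
    }
}
```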

Here’s a short video showing how the touch-screen enabled time/date adjustment feature works:

Although I hope to clean up this code considerably in the future, I include it here in its entirety in its current state.

10 March 2021 Update:

After getting all of the above to work, I decided to re-tackle the proportional fonts issue. In my first attempt, I had used the ‘ILI9341_t3’ version of the ILI9341 Teensy library, and there is a newer ‘ILI9341_t3n‘ version out now. So, I modified one of the example programs (unfortunately still written for the old library) to be compatible and got proportional fonts (well, just the Arial one) working on the display.

After several hours of running this example with no apparent artifacts or problems, I decided to update my complete clock program to use the Arial font. After making the modifications and running it overnight, the time & date displays were still rock-solid the next day, as shown in the following photo – YAY!!

So now all that is left to do is to upload the new Arial-based code (with the time background color switched back to ‘black’) to my ‘working’ clock module, sit back and enjoy the proportional font display.

14 July 2021 Update:

I noticed that my clock had some ‘issues’ with time/date adjustments, so I ‘put it back in the shop’ for some additional TLC. While I was at it, I noticed that the system schematic didn’t include the DS3231 (hard to have a clock without an RTC!), so I updated it as well. Here’s the updated schematic.

The updates made were to make the time/date adjustments more robust. The updated code is included in its entirety below. First the main program:

And then the ‘CustomBox’ class file (no .cpp file – everything is in the .h file)

Stay tuned,

Frank

Dewalt DWS713 Miter Saw Cut Line Shadow Mod

My wife got me a brand-new Dewalt DWS713 Miter Saw for Christmas, and boy is it nice! Sure beats the heck out of my old plastic miter box, that’s for sure. After playing with it for a while and making some gorgeous cuts, I decided I wanted to try my hand at adding a cut line shadow modification.

After doing some web research, I found this post, of a guy who added a cut line shadow system to a Bosch miter saw, and this Thingiverse design for a LED mounting shroud. So, I ordered an LED lamp similar to the one used in the video, and downloaded the Thingiverse shroud design so I could print one here on my 3D printer.

The LED lamp comes with a small cylindrical control module containing an ON/OFF switch and an AC/DC converter, as shown here

ON/OFF switch and AC/DC converter are housed in the magnetic base

As described in the video showing how to mount a cut line shadow LED to a Bosch miter saw, the gooseneck part of the LED lamp wasn’t long enough to allow full travel of the blade guard. So I cut the gooseneck away from the magnetic base, and fabricated a new housing for the AC/DC converter and (because I managed to screw up the original one) a new, larger ON/OFF switch.

3D printed enclosure for ON/OFF switch and AC/DC converter.

With the added length gained by replacing the original cylindrical enclosure, I was able to mount the LED at the front end of the blade housing and still allow full travel of the retractable blade guard. However, I discovered that I needed to reduce the height of the LED lamp housing slightly, to allow more clearance for the retractable blade guard travel, as shown

Here is a photo of the LED assembly temporarily attached to the miter saw, with the retractable blade guard removed

This temporary setup produced a nice cut line shadow as shown below

The photo below shows the completed assembly, with the gooseneck attached to the blade housing via a 3D-printed gooseneck retainer clip and two 3mm machine screws (the blade housing was drilled & tapped for 3mm), and the control housing mounted to the rear of the blade housing.

The final product seems to work fairly well, as demonstrated in the following photo

Here is a photo of the 3D-printed gooseneck retainer clip

I used hot glue to temporarily attach the clip to the blade housing, and then drilled through the clip and into the housing with a 3mm tap drill. Then I removed the clip, tapped the housing for a 3mm screw, re-drilled the clip for a 3mm clearance hole, and re-assembled using 2ea 3mm x 6mm machine screws.

Here is a screenshot showing the rounded-corner enclosure I designed to hold the ON/OFF switch and the AC/DC converter

Because I managed to screw up the original ON/OFF switch, I used this switch I had in my parts bin, from an earlier project

All the 3-D printable parts are on Thingiverse here.

27 October 2021 Update:

While changing blades the other day, I noticed that the front LED lamp holder had come loose from the top of the blade guard – the hot glue attachment of the holder had detached from the metal. So, I printed up a new lamp holder with longer sides and this time attached it with 3mm screws to 3mm tapped holes in the blade guard, as shown in the following photo:

New front LED lamp holder, this time attached with two 3mm screws

Replacing HC-SR04 Ultrasonic Sensors with VL53L0X Arrays Part III

In the previous two posts on this subject, I described my efforts to replace the ultrasonic ‘ping’ sensors on my autonomous wall-following robot Wall-E2 with ST Microelectronics VL53L0X ‘Time-of-Flight’ LIDAR proximity sensors, with the aim of using an array of the ToF sensors to solve the problem of wall tracking at a specified offset. Part II demonstrated just such a capability, but only on the right side, and not in a way that was fully integrated with the rest of the robot operating system.

This post describes the work to fully integrate the new ‘parallel-find’ and ‘OffsetCapture/Track’ algorithms into the rest of the robot’s autonomous wall-following operating system. The robot operating system is a loop that runs every 200 mSec.  Each time through the loop the current operating mode is determined in GetOpMode(), as follows:

  • Dead Battery:  Battery died before robot could find and connect to a charging station
  • IR Homing: Robot has detected a charging station and is homing on its IR signal
  • Charging: Robot is currently charging on one of the available charging stations
  • Wall Follow:  nothing else is going on, so continue tracking the nearest wall

In the above list, all but the last (Wall Follow) are ‘blocking’ in the sense that once they start, they run to completion without interruption.  The ‘Wall Follow’ mode processing path, however, typically makes a small adjustment to the left/right motor speeds depending on the situation and then exits back to the main loop, where GetOpMode() runs again.  So wall following involves multiple passes through GetOpMode(), so that higher-priority tasks (like dead battery recognition, IR homing, and battery charging) are executed in a timely manner.

The result of GetOpMode() is passed to a CASE statement where actions appropriate to the determined mode are executed as follows:

  • MODE_CHARGING:   This mode blocks until charging is complete
  • MODE_IRHOMING:  This mode blocks while homing to the charging station.  If the robot isn’t ‘hungry’, it attempts to go around the charging station rather than connecting to it.
  • MODE_DEADBATTERY:  This mode blocks forever
  • MODE_WALLFOLLOW: This is the default robot mode – whenever one of the above ‘special’ modes doesn’t apply. The MODE_WALLFOLLOW mode is further broken down into tracking cases as determined in GetTrackingDir(), which returns one of TRACKING_NONE, TRACKING_LEFT, TRACKING_RIGHT, or TRACKING_NEITHER.

With the replacement of the ‘ping’ sensors with the VL53L0X array, Wall-E2 can now accurately determine its orientation with respect to a nearby wall.  So I now believe I can make the offset capture operation a blocking call, significantly simplifying the MODE_WALLFOLLOW code.  Here is the code for the TRACKING_RIGHT case:
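
The code block didn’t survive the trip to this post, but from the description below its structure is roughly as follows (a hedged sketch; the helper and flag names are stand-ins for the actual code):

```cpp
case TRACKING_RIGHT:
    if (FrontDistCm < MIN_OBS_DIST_CM)
    {
        HandleObstacle();  // back up / turn away, as appropriate
    }
    else if (!bRightOffsetCaptured)
    {
        // blocking call, now practical thanks to the VL53L0X orientation info
        bRightOffsetCaptured = CaptureRightWallOffset(WALL_OFFSET_CM);
    }
    else
    {
        TrackRightWallOffset(WALL_OFFSET_CM); // small L/R speed tweak, then exit
    }
    break;
```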

In the above code the robot first checks for obstacles, and either maneuvers to capture the desired wall offset (first time through this case) or continues to track the previously captured offset.


Integrating Heading-based Wall Tracking into Wall-E2

Posted 09 September 2019

After successfully demonstrating heading-based wall tracking with my little 2-motor robot, I am now trying to add this capability to Wall-E2’s repertoire.

Current Algorithm:

Every MIN_PING_INTERVAL_MSEC millisec Wall-E2 measures left/right/forward distances, and then calls ‘GetOpMode()’ to determine its current operating mode.  The currently defined modes are:

  • MODE_NONE: (0) default case
  • MODE_CHARGING (1):  connected to the charger and not finished
  • MODE_IRHOMING (2): coded IR signal from a charger has been detected
  • MODE_WALLFOLLOW (3):  trying to run parallel to a wall
  • MODE_DEADBATTERY (4): battery exhausted

The MODE_WALLFOLLOW operational mode is returned from GetOpMode() when none of the other modes are active – in other words it is the real ‘default mode’.  The MODE_WALLFOLLOW mode has three distinct sub-modes:

    • TRACKING_LEFT (1): actively tracking the left wall
    • TRACKING_RIGHT (2): actively tracking the right wall
    • TRACKING_NEITHER (3): not tracking either wall (both distances > 200cm)

TRACKING_LEFT & TRACKING_RIGHT are identical for purposes of heading-based wall tracking, and the TRACKING_NEITHER case isn’t relevant, so we just have to come up with a way of integrating heading-based tracking into either the LEFT or RIGHT case.  The TRACKING_LEFT case block is shown below:
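
The original code block is missing here, but from the description that follows, the TRACKING_LEFT case has roughly this shape (hedged sketch; helper names are illustrative):

```cpp
case TRACKING_LEFT:
    if (FrontDistCm <= MIN_OBS_DIST_CM || IsStuck())
    {
        BackupAndRecover();  // obstacle too close, or robot is stuck
    }
    else if (FrontDistCm <= STEPTURN_DIST_CM)
    {
        DoStepTurn();        // gradual turn to negotiate an upcoming corner
    }
    else
    {
        // PID adjusts motor speeds to hold the left PING distance constant;
        // note it reacts only to the change since the last pass, not to
        // whether the absolute distance is appropriate
        UpdateMotorSpeedsFromLeftPing();
    }
    break;
```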

The first thing that happens is a check for an obstacle within MIN_OBS_DIST_CM or for the ‘stuck’ condition (determined via a call to ‘IsStuck()’).  If this test fails, a check is made for an obstacle that is farther away than MIN_OBS_DIST_CM but closer than STEPTURN_DIST_CM, so a gradual ‘step-turn’ can be made to negotiate an upcoming corner.  If that check too fails, then the robot is assumed to be tracking the left wall, safely far from any oncoming obstacles.  In this case, a new set of motor speeds is computed using a PID setup to keep the left PING distance constant – no consideration is given to whether or not the PING distance is appropriate – just that it is either greater or less than the one measured in the last pass.

So, the thing we want to change is the way the robot responds to PING distances to the left wall.  We want the robot to acquire and maintain a selected standoff distance from the wall being tracked (the LEFT wall in this case).  Pretty much by definition, this requires two distinct activities – the first to acquire the selected standoff distance, and the second to maintain that distance for the duration of the wall-tracking activity.  So now we have three state variables for wall tracking, and only the last one (ACQUIRE/MAINTAIN) is new:

  •  TRACKING_LEFT/RIGHT
    • NAV_WALLTRK (as opposed to NAV_STEPTURN)
      • PHASE_ACQUIREDIST/PHASE_MAINTAINDIST

In the PHASE_ACQUIREDIST phase, a heading change of up to 20 deg is made in the appropriate direction to move toward the selected offset distance.  The situation is then monitored in subsequent loop passes to determine if a larger or smaller heading cut is required.  When the robot gets close to the desired distance, all the added heading offset is removed and the robot enters the PHASE_MAINTAINDIST phase.  In this phase, small(er) ping distance variations cause large(er) heading changes, resulting in a sawtooth pattern about the selected distance.
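
Here’s a minimal sketch of that two-phase logic; the gains, capture band, and helper names are all illustrative:

```cpp
float distErrCm = pingDistCm - targetDistCm;    // + = too far from the wall

if (trackPhase == PHASE_ACQUIREDIST)
{
    // cut up to 20 deg toward the target offset
    float cutDeg = constrain(K_ACQ * distErrCm, -20.f, 20.f);
    SetTargetHeading(parallelHdg - cutDeg);
    if (fabsf(distErrCm) < CAPTURE_BAND_CM)
    {
        SetTargetHeading(parallelHdg);          // remove the added offset
        trackPhase = PHASE_MAINTAINDIST;
    }
}
else // PHASE_MAINTAINDIST: sawtooth about the selected distance
{
    // small(er) distance variations produce large(er) heading changes
    SetTargetHeading(parallelHdg - K_MAINT * distErrCm); // K_MAINT > K_ACQ
}
```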


Alzheimer’s Light Strobe Therapy Project

Posted 24 March, 2019

A friend told me about a recent medical study at MIT where lab mice (genetically engineered to form amyloid plaques in their brains to emulate a syndrome commonly associated with Alzheimer’s) were subjected to a 40Hz strobe light for several hours per day.  After repeated exposures, the mice showed significantly reduced plaque density in their brains, leading the researchers to speculate that ‘light strobe therapy’ might be an effective treatment for Alzheimer’s in humans.

The friend’s spouse has been diagnosed with Alzheimer’s, so naturally he was keen to try this with his spouse, and asked me if I knew anything about strobe lights and strobe timing, etc.  I told him I could probably come up with something fairly quickly, and so I started a project to design and fabricate a light strobe therapy box.

The project involves a 3D printed housing and 9V battery clip, along with a white LED and a Sparkfun Pro Micro 5V/16MHz microcontroller, as shown in the following schematic.

Strobe Therapy schematic

I had a reflector hanging around from another project, so I used it just as much for the aesthetics as for functionality, and I designed and printed up a 2-part cylindrical housing. I also downloaded and printed a 9V battery clip to hold the battery, as shown in the following photos

Finished Strobe Therapy Unit

Internal parts arrangement

Closeup showing Sparkfun Pro Micro microcontroller

The program to generate the 40Hz strobe pulses is simplicity itself.  I used the Arduino ‘elapsedMillis’ library for more accurate frequency tuning, but ‘delay()’ would probably be close enough as well.
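
A 40Hz strobe is just a square wave with a 12.5 mSec half-period. Here’s a minimal sketch of the idea using the elapsedMillis library’s elapsedMicros type; the pin number is illustrative, and this may differ in detail from the program actually delivered:

```cpp
#include <elapsedMillis.h>

const int LED_PIN = 9;                          // illustrative output pin
const unsigned long HALF_PERIOD_USEC = 12500UL; // 40Hz -> 25 mSec full period

elapsedMicros sinceToggle;
bool ledOn = false;

void setup()
{
    pinMode(LED_PIN, OUTPUT);
}

void loop()
{
    if (sinceToggle >= HALF_PERIOD_USEC)
    {
        sinceToggle -= HALF_PERIOD_USEC; // keeps the long-term rate accurate
        ledOn = !ledOn;
        digitalWrite(LED_PIN, ledOn ? HIGH : LOW);
    }
}
```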


I’m not sure if this will do any good, but I was happy to help someone whose loved one is suffering from this cruel disease.

Frank


WallE2 Robot in Arduino #include file hell

Posted 02 February 2019

After a vacation from my WallE2 autonomous wall-following robot code to recover from rotator cuff surgery (and creating/testing my new digital tensionometer), a week or so ago I decided it was time to get back into WallE2 mode. At the time I thought this would be a piece of cake, as I had left WallE2 in pretty good shape, code-wise, back in September 2018 (at least that’s what I thought!).

Instead, for the past couple of weeks I have been enduring what can only be described as “#include file hell”. The first time I tried to compile my main program, I saw a couple of warnings in an i2cDev-related library. The code still compiled, but I take all warnings very seriously, and these weren’t there the last time I compiled the code.

So, I started trying to figure out what, if anything, had changed, and how to go about fixing whatever problem had caused the warnings to pop up.   Unfortunately, everything I did made things worse – and worse – and worse.   Nothing made sense.   Jeff Rowberg, the creator of the fine i2cDev collection of i2c device drivers was mystified, as was Tim Leek, the main guy on the Visual Micro forum.   Arggghhhh!

So, Jeff Rowberg suggested I try compiling the project in the Arduino IDE rather than in VS2017/Visual Micro, to eliminate any issues caused by that environment.   Up until this point I had actually  never used the Arduino IDE, much preferring the more helpful and feature rich VS2017/Visual Micro IDE. But, what the heck – how hard could it be?

Well, the answer was –  DAMNED HARD!   Using the Arduino IDE after the VS/VM environment was like moving backwards in time from the 21st century to the stone age –   having to rub sticks together to make a compile happen!   Moreover, the Arduino IDE created even more (and different) problems than I had experienced so far, meaning that I not only wasn’t draining the swamp, but the alligators were getting even more numerous!   Some of the ‘features’ of the Arduino IDE:

  • When the IDE is first launched, it comes up with the last .INO file loaded.   If you want a different file, it launches a new IDE instance with file->open; soon your desktop is littered with IDE instances.
  • When it tries to find a library based on a ‘#include <libraryName.h>’ line, it can’t handle [library name]-master as is common with libraries downloaded from GitHub
  • It requires exact name matching, including capitalization.   So ‘#include <libraryName>’ will not match with the ‘Arduino\Libraries\Libraryname’ folder.
  • Editing is clunky, and there’s no such thing as Intellisense.

After running around in circles with my hair on fire for the last week or so, making my wife miserable with my griping and inundating Tim Leek and Jeff Rowberg with ever-more-desperate cries for help, I finally decided that I was simply going to have to start over from scratch with my robot program (some 3000+ lines of code in the main program and over a dozen custom libraries), and just build it up piece by piece until everything works again – groan.   It’s not like I don’t have backups and wasn’t using revision control – I do and I was; it’s just that the programs that compiled cleanly back in September are generating warnings and errors now, and everything I do makes the problem worse!

Since the original problem seemed to be related to the library that runs the DFRobots MPU6050 module, I decided to start there.   After struggling up the learning curve on the Arduino IDE, I also decided I would make sure that each program step would compile cleanly in  both the VS/VM and Arduino IDE’s before proceeding to the next step.   I reasoned that since the Arduino IDE is much pickier about library locations and names, I could use it as sort of an editorial check on VS/VM; if it works in the Arduino IDE, the VS/VM setup will have  no problem.

For the DFRobots MPU6050 6DOF IMU module, I started with Jeff Rowberg’s ‘MPU6050_DMP6.INO’ example program buried way down in ‘i2cdevlib-master\Arduino\MPU6050\examples\MPU6050_DMP6’. According to the i2cDev ReadMe, I could either put the entire i2cDev-master folder in Arduino\Libraries and let the linker figure it out, or just put the required files in the project folder (solution folder for VS/VM, ‘sketchbook folder’ for the Arduino IDE). I elected for the latter (local files) option, as I was at least a little suspicious that part or all of my original problem was caused by the compiler/linker loading from the wrong library folder in the i2cDev folder tree. In addition, I completely removed the i2cDev folder tree from my PC and re-downloaded it from GitHub, placing it in a completely unrelated folder so that neither environment could possibly find it. Then I copied the required header/.cpp files from the relocated i2cDev folder tree into the project folder. In VS/VM I created a project called ‘MPU6050_DMP6_Example’ and copied the Arduino versions of I2Cdev.cpp/.h, MPU6050.cpp/.h, MPU6050_6Axis_Motion.h, and helper_3dmath.h into it. Then I started working to get this project to compile in both the VS/VM & Arduino IDEs.

I’ve now gotten it to compile and link in the Arduino IDE (albeit with the same warnings I started with just before I went down the rabbit hole into header file hell). However, I can’t get it to compile in VS/VM – it blows a whole bunch of linker errors of the form ‘(.text.startup+0x1e4): undefined reference to MPU6050::initialize()’ – apparently one error for each MPU6050 function.

These errors proved impossible to correct, and nobody on either the Arduino or Visual Micro forums seemed to be able to help.   Finally in desperation I uninstalled and re-installed the Visual Micro extension to VS2017, and  that didn’t solve the problem either – exactly the same behavior.

So, last night I uninstalled VS2017 entirely from my system, and deleted the entire contents of the temp folder being used for temporary compile files.   On my system this was  C:\Users\Frank\AppData\Local\Temp.

03 February 2019

This morning I reinstalled VS2017CE and, using the Tools & Extensions menu, reinstalled Visual Micro. I left everything pretty much at the default settings (including the IDE selection and IDE location entries).   The only thing non-standard with the setup was the inclusion of ‘https://raw.githubusercontent.com/sparkfun/Arduino_Boards/master/IDE_Board_Manager/package_sparkfun_index.json’ in the ‘Optional additional boards manager urls’ field.   This was apparently left over from my previous installation.   I’m not worried about this particular setting, but it does indicate that not everything about the previous incarnation of Visual Micro was actually removed from my system.

After installing VS/VM, I ran through a few of my simpler projects, and so far they have all compiled w/o problems (or had understandable and easily fixable problems).   I also compiled each program in the Arduino IDE,  taking care to follow the Arduino IDE restrictions (no “-master” in library folder names, exact capitalization, etc).

  • BlinkTest.ino – very simple, no #includes
  • ClassTest.ino – Very simple class construction project – no #includes
  • DigitalScale.ino – Several #includes, including the HX-711 load cell library
  • StepperSpeedCtrl – uses   #include <Stepper.h>
  • AdaFruit_BTLE_UART_Test.ino – uses 7 different library #includes
  • Arduino_IMU6050_Test4.ino – uses i2cDev and MPU6050 libraries, along with SBWire, elapsedMillis, and PrintEx. This compiled OK, but with the same warnings (overrun & ‘one definition rule’) as when I first started this odyssey. Fortunately, that’s all that happened – I didn’t get the ‘(.text.startup+0x1e4): undefined reference to MPU6050::initialize()’ error – yay! This program also compiles in the Arduino IDE, with the same exact warnings. So, it appears I may be back where I started on this odyssey, with a program using the MPU6050 libraries that compiles OK but with one understandable/fixable warning (the overrun warning) and one mystery warning (the ‘one definition rule’ warning)
  • MPU6050_DMP6_Example: spoke too soon! This program blows the same ‘(.text.startup+0x1e4): undefined reference to MPU6050::initialize()’ errors as before in VS/VM, but compiles fine (albeit with the same two warnings as always) in the Arduino IDE.
  • Arduino_IMU6050_Test4: This is a program I created some time ago, and I found that it compiles/links fine (still with the overrun/ODR warnings) in both VS/VM and the Arduino IDE

In desperation, I decided to create a completely new Arduino project in VS/VM, copy the ‘known-good’ code from Arduino_IMU6050_Test4 into it, and then start hacking it down to the point where it fails. Surprise, surprise – when I did this, the new project (UnDefTest4) failed right away in VS/VM, blowing LOTS of linker errors! Moreover, it compiled/linked fine in the Arduino IDE – how could this be? There MUST be something different about the VS/VM environment between Arduino_IMU6050_Test4 and UnDefTest4 – but what? After putting the two projects up side-by-side (this is where a dual monitor setup comes in REAL handy), I finally twigged to the difference; in the ‘working’ version, the local header/cpp files had been ‘added’ to the project’s ‘Header Files’ and ‘Source Files’ folders via the Solution Explorer (right-click on the folder icon, select ‘Add Existing…’, select the desired files, click OK). As soon as I added the relevant files to the UnDefTest4 project, it compiled/linked fine – YAY!!

I could not believe what I was seeing!   For some reason, VS/VM refused to process header/cpp files in the same folder as the .INO file, even though I had carefully checked the ‘Local Files Override Library Files’ option in the ‘Vmicro->Compiler’ menu.   At the same time, the Arduino IDE  always searches the local folder before anything else, so simply placing the relevant files in the local folder does the trick.   The fact that Visual Micro requires an additional (and non-intuitive) step for this boggles the mind.

04 February 2019

OK, when I started all this foolishness I was trying to find out (and fix) whatever was causing the ‘One Definition Rule’ (ODR) violation warning I was getting on all my programs that used the MPU6050. I really, really hate warnings, and I was determined to get to the bottom of this – and I finally did!

The ‘one definition rule’ (ODR) warning is caused when the compiler/linker sees code that can produce two different definitions for the same object. If that can happen, EVER, then an ODR violation warning is issued. Believe it or not, that is exactly what happens when the compiler processes MPU6050.H – it sees that there are some conditions for which two different descriptions of the MPU6050 class could exist – and says “no no”. The relevant portion of the class definition is shown below:
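
Here’s the gist of that section of MPU6050.h, reconstructed from the description above (the exact guard macro and member order may differ from the library source):

```cpp
class MPU6050 {
    // ... public interface ...
    private:
        uint8_t devAddr;
        uint8_t buffer[14];
#ifdef MPU6050_INCLUDE_DMP_MOTIONAPPS20  // guard name approximate
        uint8_t *dmpPacketBuffer;        // only present in the DMP build,
        uint16_t dmpPacketSize;          // hence two possible class layouts
#endif
};
```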

When the compiler sees these lines, it says to itself; “Hmm, the way this is written, it is theoretically possible for there to exist  two different versions  of ‘Class MPU6050’, one with just two private member variables (devAddr & buffer) and one with four (with the addition of dmpPacketBuffer & dmpPacketSize), and this is a strict no-no; I’m going to whack that programmer across the head with an ODR violation!”

If the #if defined and #endif lines are commented out, the ODR warning goes away.

Now, I suspect nobody has ever had a problem with this issue, as it would be very unlikely to have a project where BOTH versions of MPU6050 are in play, but of course the compiler doesn’t see it this way.

On a slightly different, but related subject, the OTHER warning was due to a potential integer overrun in the dmpGetGravity() function, as shown below:
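
The function in question is the integer version of dmpGetGravity(); here it is, reconstructed approximately from the i2cdevlib source (exact formatting and casts may differ):

```cpp
uint8_t MPU6050::dmpGetGravity(int16_t *data, const uint8_t* packet) {
    /* +1g corresponds to +8192, sensitivity is 2g */
    int16_t qI[4];
    uint8_t status = dmpGetQuaternion(qI, packet);
    data[0] = ((int32_t)qI[1] * qI[3] - (int32_t)qI[0] * qI[2]) / 16384;
    data[1] = ((int32_t)qI[0] * qI[1] + (int32_t)qI[2] * qI[3]) / 16384;
    data[2] = ((int32_t)qI[0] * qI[0] - (int32_t)qI[1] * qI[1]
               - (int32_t)qI[2] * qI[2] + (int32_t)qI[3] * qI[3]) / (2 * 16384);
    return status;
}
```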

If the last line of the above calculation is changed to (note the addition of ‘UL’):

- (int32_t)qI[2] * qI[2] + (int32_t)qI[3] * qI[3]) / (2 * 16384UL);

then this warning goes away as well.

Mission accomplished!   I now have MPU6050 code that compiles without errors (or warnings!!) in both the VS/VM and Arduino IDE environments.   Along the way I learned more than I ever wanted to know about ‘One Definition Rule’ violations and the innards of both the VS/VM environment and the Arduino IDE.

To paraphrase a quote attributed to Abraham Lincoln:

I feel like the man who was tarred and feathered and ridden out of town on a rail. To the man who asked him how he liked it, he said: “If it wasn’t for the honor of the thing, I’d rather walk.”


Stay tuned,


Frank