Yearly Archives: 2019

Back to the future with Wall-E2. Wall-following Part III

Posted 24 March 2019,

In my previous post on this subject, I described some initial wall-following tests using a PID controller, and concluded that I would have to use some sort of two-stage approach/capture arrangement to make things work properly.  In addition, I needed some more ‘field’ (i.e., my hallway) testing to determine the best PID values for wall-following.

I started by causing the distance input to vary in a square wave fashion with an amplitude of about 20cm around the 50cm target value.  The resulting speed variations are shown in the following Excel plot.  This gave me some confidence that the basic PID mechanism was working properly.

Square wave response of PID = 1,0,0

After verifying basic operation, I started field testing with the most basic values possible – PID = 1,0,0. This resulted in a very slow oscillation, as shown in the following video and Excel plot:

PID = 1,0,0, initial offset = target distance = 50cm

Then I moved on to the PID value I had obtained from previous field testing, i.e. PID = 10,0,0. This resulted in the behavior shown in the following video and Excel plot.

PID = 10,0,0, initial offset = target distance = 50cm

For completeness, I also tested the PID = 10,2,0 case, but as can be seen in the following video and Excel plot, this did not appreciably change wall-tracking performance, at least not for the offset = target case.

PID = 10,2,0, initial offset = target distance = 50cm

A comparison of all three PID values is shown in the following Excel plot.

Next I tried the PID = 1,0,0 case with an initial offset of 25cm and a target of 50cm, to gauge how the PID algorithm performs with an initially large error term.

PID = 1,0,0, initial offset = 25cm, target distance = 50cm

Upgraded Variance Calculation for Wall-E2 ‘Stuck’ Detection

Posted 23 April 2019

A little over three years ago I developed an effective technique for detecting when Wall-E2, my autonomous wall-following robot, had gotten itself stuck and needed to back up and try again.  The technique, described in that post, was to continuously calculate the mathematical variance of the last N forward distance measurements from the onboard LIDAR module.  When Wall-E2 is moving, the forward distances increase or decrease, and the calculated variance is quite high (generally in the thousands).  However, when it gets stuck the distances remain steady, and then the variance decreases rapidly to near zero. Setting a reasonable threshold allows Wall-E2 to figure out it is stuck on something and take appropriate action to recover.
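In outline, the decision reduces to a simple threshold test on the running variance. Here’s a minimal sketch of that logic; the threshold value, names, and two-state framing are mine, not Wall-E2’s actual code:

```cpp
// Illustrative stuck-detection decision: while moving, the front-distance
// variance is in the thousands; when stuck it collapses toward zero, and
// Wall-E2 should back up and try again.
enum class WallE2State { TRACKING, BACKUP_AND_TURN };

// Threshold is a placeholder value, not the one used on the robot.
WallE2State checkStuck(float frontDistVariance, float threshold = 50.0f) {
    return (frontDistVariance < threshold) ? WallE2State::BACKUP_AND_TURN
                                           : WallE2State::TRACKING;
}
```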

The above arrangement has worked very well for the last three years, but recent developments and some nagging concerns prompted me to re-examine this feature.

Computational Load:

When I first added the variance calculation routine, there was very little else going on in Wall-E2’s brain; ping left, ping right, LIDAR forward, calc variance, adjust motor speeds, repeat.  Since then, however, I have added a number of different sensor packages, all of which add to the computational load in one way or another. I began to be concerned that I might be about to exceed the current 100 mSec time budget.

Performance Problems:

Although the LIDAR/Variance technique has performed very well, there were still occasions when Wall-E2’s behavior indicated something was wrong; either he would think he was stuck when he obviously wasn’t, or he would take too long to detect a stuck condition, or there would be other random glitches.  This didn’t happen very often, but just often enough to put me on notice that something wasn’t quite right.

Brute Force vs Incremental Variance Calculation:

I could only think of two ways to address the computational load: reduce the size of the array of LIDAR distances used, or reduce the time required for each computation.  The original selection of 50 measurements was loosely based on the idea that it would take about 5 seconds for a 50-entry array to be completely refreshed at the current 10Hz loop frequency.  This means that Wall-E2 should be able to detect a stuck condition about 5 seconds after it happens.  Decreasing the array size would decrease the detection time more or less linearly, but would probably also increase the chances of a false positive.  The other alternative was to find a way to speed up the variance calculation itself.

Variance is a measure of the variability of a dataset, and is defined as:
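The equation itself was an image that hasn’t survived here; for reference, the standard population-variance definition is:

```latex
\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2,
\qquad \mu = \frac{1}{N}\sum_{i=1}^{N} x_i
```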

The current algorithm for computing the variance is a ‘Brute Force’ method that loops through the entire array of LIDAR distance values twice – once for computing the mean, and then again to compute the squared-difference term, before subtracting the squared mean from the squared difference term to get the final variance value, as shown below:
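The code image is missing from the archive; the two-pass computation described above looks roughly like this (a sketch with my own names, not the original listing):

```cpp
#include <cstddef>
#include <cstdint>

// Two-pass ('brute force') population variance: the first pass computes the
// mean, the second the mean of the squares, and the squared mean is then
// subtracted from the mean square.
float bruteForceVariance(const uint16_t* dist, size_t n) {
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++) sum += dist[i];   // pass 1: mean
    float mean = sum / n;

    float sumSq = 0.0f;
    for (size_t i = 0; i < n; i++)                   // pass 2: mean of squares
        sumSq += (float)dist[i] * dist[i];

    return sumSq / n - mean * mean;
}
```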

This works fine, but is computationally clunky.  With a 50-element array, this computation takes around 1.5 mSec.  This is only about 1.5% of the 100 mSec time window for a 10Hz cycle time, but still…

Starting from a slightly different definition of the variance calculation

and then expanding, manipulating algebraically and then recombining, we arrive at an incremental version of the variance formula, as shown below:

Incremental version of the variance expression

And this is implemented in the code section shown below:
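That code image is also missing; here is a sketch of one incremental scheme along the same lines. Note that this version maintains running sums over a circular buffer rather than updating the previous variance value directly, so it differs in detail from the four-term update discussed in the May 2022 note below:

```cpp
#include <cstddef>
#include <cstdint>

// Incremental (loop-free) variance over a fixed-size circular buffer.
// Running sum and sum-of-squares are adjusted as each new sample replaces
// the oldest one; operands are widened to uint32_t BEFORE squaring to avoid
// the 16-bit overflow described later in this post.
const size_t WINDOW = 50;

struct RunningVariance {
    uint16_t buf[WINDOW] = {0};
    size_t   head = 0;
    uint32_t sum = 0;     // running sum of samples
    uint32_t sumSq = 0;   // running sum of squared samples

    // Accepts the newest distance, retires the oldest, returns the variance.
    float update(uint16_t newDist) {
        uint32_t oldDist = buf[head];
        buf[head] = newDist;
        head = (head + 1) % WINDOW;

        sum   += (uint32_t)newDist - oldDist;              // unsigned wrap is OK here
        sumSq += (uint32_t)newDist * newDist - oldDist * oldDist;

        float mean = (float)sum / WINDOW;
        return (float)sumSq / WINDOW - mean * mean;        // population variance
    }
};
```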

The incremental calculation doesn’t use any loops and is therefore quite a bit faster, with no loss in accuracy.  For a 50-element array as above, the incremental calculation takes only about 220 uSec – about 7-8 times faster than the ‘brute force’ technique.

As a side-benefit of changing from the ‘brute force’ to the incremental technique, I also discovered the reason Wall-E2 was displaying occasional odd behavior.  The sonar ping measurements have a maximum value of 200 cm, which fits nicely into an 8-bit ‘byte’ data type.  However, the LIDAR front distance sensor goes out to 400 cm, which doesn’t.  Unfortunately, I had defined the forward distance array as an array of ‘byte’ values, which meant that everything worked fine as long as the reported distance was less than 255 cm, but not so much for distances over that value.  Fixing this problem turned out not to be as simple as changing the array definition from ‘byte’ to uint16_t, as I then ran into numerical overflow problems in the incremental variance calculation because of the squaring operations involved.  Due to the way the compiler performs integer promotion, it isn’t sufficient to cast the result of uint16_t * uint16_t to uint32_t, as the overflow happens before the cast; the individual terms need to be uint32_t before the multiplication occurs.  Once this was done, all the results settled down to what they should be, and now correct results are obtained for the full range of possible distance values.  Shown below is an Excel plot of a test run performed using simulated input values from 100 to 400 cm (anything above 255 used to cause overflow problems).

As can be seen in the above plot, the ‘brute force’ and incremental methods both produce identical outputs, but the incremental method is 7-8 times faster.
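The promotion trap can be demonstrated directly: on the MEGA2560’s AVR compiler, ‘int’ is 16 bits, so a uint16_t * uint16_t product wraps before any cast of the result can help. On a PC (32-bit int) the wrap has to be forced explicitly, as in this sketch:

```cpp
#include <cstdint>

// Emulates the AVR behavior: the 16-bit product wraps modulo 65536 before
// the result is widened, so casting the *result* is too late.
uint32_t overflowedSquare(uint16_t d) {
    return (uint32_t)(uint16_t)(d * d);   // wrapped, e.g. 400*400 -> 28928
}

// Correct: widen an operand BEFORE the multiplication.
uint32_t safeSquare(uint16_t d) {
    return (uint32_t)d * d;               // 400*400 -> 160000
}
```

Below 256 cm the two agree, which is why the bug only showed up at longer distances.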

Shown below is the same setup, but this time the input data is actual LIDAR data from the robot.
03 May 2022 Update:

As part of my ongoing work to integrate charging station homing into my new Wall-E3 autonomous wall-following robot, I discovered that the ‘stuck detection’ routine wasn’t working anymore. Upon investigation I found that the result returned by the above incremental variance calculation wasn’t decreasing to near zero as expected when the robot was stuck; in fact, the calculated variance stayed pretty constant at about 10,000 – how could that be? After beating on this for a while, I realized that if the four terms on the right-hand side of the incremental variance expression sum to near zero – which they will if the array contents are constant – then the newly calculated variance will be the same as the old one. So, if the ‘previous variance’ number starts out wrong, it stays wrong, and the ‘stuck’ flag is never raised – oops!
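The failure mode is easy to reproduce with a toy version of the update (a stand-in for the real four-term expression, not the actual code): when the correction is built only from the samples, constant data makes the correction zero, and a wrong ‘previous variance’ never heals.

```cpp
// Toy illustration: an update of the form newVar = prevVar + correction,
// where 'correction' depends only on the incoming/outgoing samples.
// With constant data the correction is zero, so an erroneous prevVar persists.
float updateVariance(float prevVar, float newSample, float oldSample, float mean) {
    float correction = (newSample - oldSample) * (newSample + oldSample - 2.0f * mean);
    return prevVar + correction;
}
```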

I started thinking again about why I changed from the ‘brute force’ method to the incremental method in the first place. The reason at the time was my concern that the brute force method might eat up too much of the available loop cycle time, but now that I have changed the processor from an Arduino MEGA2560 to a Teensy 3.5, that might not be applicable anymore. So I replaced the incremental algorithm with the brute force one, and instrumented the block with a hardware pin to actually measure the time required to compute the variance. As shown below, this is approximately 8μSec, insignificant compared to the 100 mSec tracking loop duration.

‘Brute Force’ variance calculation duration

Stay tuned,

Frank

Alzheimer’s Light Strobe Therapy Project

Posted 24 March, 2019

A friend told me about a recent medical study at MIT in which lab mice (genetically engineered to form amyloid plaques in their brains, to emulate a syndrome commonly associated with Alzheimer’s) were subjected to a 40Hz strobe light for several hours per day.  After repeated exposures, the mice showed significantly reduced plaque density in their brains, leading the researchers to speculate that ‘light strobe therapy’ might be an effective treatment for Alzheimer’s in humans.

The friend’s spouse has been diagnosed with Alzheimer’s, so naturally he was keen to try this, and he asked me if I knew anything about strobe lights, strobe timing, etc.  I told him I could probably come up with something fairly quickly, and so I started a project to design and fabricate a light strobe therapy box.

The project involves a 3D printed housing and 9V battery clip, along with a white LED and a Sparkfun Pro Micro 5V/16MHz microcontroller, as shown in the following schematic.

Strobe Therapy schematic

I had a reflector hanging around from another project, so I used it as much for aesthetics as for functionality, and I designed and printed a 2-part cylindrical housing. I also downloaded and printed a 9V battery clip to hold the battery, as shown in the following photos.

Finished Strobe Therapy Unit

Internal parts arrangement

Closeup showing Sparkfun Pro Micro microcontroller

The program to generate the 40Hz strobe pulses is simplicity itself.  I used the Arduino ‘elapsedMillis’ library for more accurate frequency tuning, but ‘delay()’ would probably be close enough as well.
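For reference, a 40Hz square wave just means toggling the LED every half-period, i.e. every 1/(2 x 40Hz) = 12.5 mSec. Here is a minimal sketch of that timing logic, written as a pure function so it can be checked off-robot (the actual sketch drives digitalWrite() from the elapsedMillis timer):

```cpp
#include <cstdint>

const uint32_t HALF_PERIOD_US = 12500;  // 40 Hz -> 12.5 ms on, 12.5 ms off

// LED state as a function of elapsed time; on the robot this comparison
// would drive digitalWrite() inside loop().
bool ledState(uint32_t elapsedMicros) {
    return (elapsedMicros / HALF_PERIOD_US) % 2 == 1;
}
```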

 

I’m not sure if this will do any good, but I was happy to help someone whose loved one is suffering from this cruel disease.

Frank

 

Chess Piece Replacement Project

Posted 15 March 2019,

A week or so ago a family friend asked if I could print up a replacement part for a chess set.  I wasn’t sure I could, but what the heck – I told them to send it to me and I would do my best.  Some time later a package arrived with the piece (shown below) to be duplicated – a pawn I think.

Chess piece to be duplicated

Chess piece to be duplicated

The piece is about 43 x 20 x 20 mm, and as can be seen in the above photos, has a LOT of detail.  I didn’t know how close I could come, but I was determined to give it the old college try!

3D Scanning:

The first step was to create a 3D model of the piece.  I was semi-successful in doing something similar with an aircraft joystick about five years ago, but that piece was a lot bigger and had a much less detailed surface.  That previous effort was done using Autodesk 123D Catch, and it was a real PITA to get everything right.  Surely there were better options now?

My first thought was to utilize a professional 3D scanning service, but this turned out to be a lot harder than I expected. There is a LOT of 3D scanning hardware out there now, but most of it is oriented toward 3D scans of industrial plants, architectural installations, and big machinery.  There is very little to be found in the way of low-cost, high-resolution 3D scanning hardware or services.  I did find two services that would scan my piece, but both would charge several hundred dollars for the project, and of course both would require a round-trip mailing of the part itself – bummer.

Next, I started researching possibilities for creating a scan from photos – basically the same technique I used for the joystick project.  While doing this, I ran across the ‘Photogrammetry’ and ‘Photogrammetry 2’ video/articles produced by Prusa Research, the same folks who make the Prusa Mk3 printer I have in my lab – cool!  Reading through the article and watching the video convinced me that I had a shot at creating the 3D model using the Meshroom/AliceVision photogrammetry tool.

At first I tried to use my iPhone 4S camera, with the chess piece sitting on a cardboard box, for the input to Meshroom, but this turned out to be a disaster.  As the article mentioned, glossy objects, especially small black glossy objects, are not good candidates for 3D photogrammetry.  Predictably, the results were less than stellar.

Next I tried using my wife’s older but still quite capable Canon SX260 HS digital camera.  This worked quite a bit better, but the glossy reflectivity of the chess piece was still a problem. My wife suggested we try coating the model with baby powder, and this worked MUCH better, as shown in the following photos.  In addition, I placed the piece on a small end table covered with blue painter’s tape, so I would have a consistent, non-glossy background for the photos.  I placed the end table in our kitchen so I could roll my computer chair around the table, allowing me to take good close-up photos from all angles.

End table covered with blue painter’s tape

Chess piece dusted with baby powder

Chess piece dusted with baby powder

Chess piece dusted with baby powder

Next, I had to figure out how to use Meshroom, and this was both very easy and very hard.  The UI for Meshroom is very nice, but there is next to no documentation on how to use it.  Drag and drop a folder’s worth of photos, hit the START button, and pray.

Meshroom UI

As usual (at least for me), prayer was not an effective strategy, as the process crashed or hung multiple times in multiple places in the 11-step processing chain.  This was very frustrating, as although voluminous log files are produced for each step, they aren’t very understandable, and I wasn’t able to find much in the way of documentation to help me out.  Eventually I stumbled onto a hidden menu item in the UI that showed the ‘image ID’ for each of the images being processed, and this allowed me to figure out which photo caused the system to hang up.

Meshroom UI showing hidden ‘Display View ID’s’ menu item.

Once I figured out how to link the view ID shown in the log at the point of the crash/hangup with an actual photograph, I was able to see the problem – the image in question was blurred to the point where Meshroom/AliceVision couldn’t figure out how it fit in with the others, so it basically punted.

Photo that caused Meshroom/AliceVision to hang up

So, now that I had some idea what was going on, I went through all 100+ photos looking for blurring that might cause Meshroom to hang up.  I found and removed five more that were questionable, and after doing this, Meshroom completed the entire process successfully – yay!!

After stumbling around a bit more, I figured out how to double-click on the ‘Texturing’ block to display the solid and/or textured result in the right-hand model window, as shown in the following photo, with the final solid model oriented to mirror the photo in the left-hand window.

textured model in the right-hand window oriented to mirror the photo in the left-hand window

So, the next step (I thought) was to import the 3D .obj or .3MF file into TinkerCad, clean up the artifacts from the scanning process, and then print it on my Prusa Mk3.  Except, as it turns out, TinkerCad has a 25MB limit on imports due to its cloud-based nature, and these files are way bigger than 25MB – oops!

Back to the drawing board: first I looked around for an app I could use to down-size the .obj file to under 25MB so it would fit into TinkerCad, but I couldn’t figure out how to make anything work.  Then I stumbled across the free Microsoft suite of 3D apps – Print 3D, 3D Viewer, and 3D Builder.  Turns out the 3D Builder app is just what the doctor ordered – it will inhale the 88MB texturedMesh.obj file from Meshroom without even breaking a sweat, and it has the tools I needed to remove the scanning artifacts and produce a 3MF file, as shown in the following screenshots.

.OBJ file from Meshroom after drag/drop into Microsoft 3DBuilder. Note the convenient and effective ‘Repair’ operation to close off the bottom of the hollow chess piece

Side view showing all the scanning artifacts

View showing all the disconnected scanning artifacts selected – these can be deleted, but the other artifacts are all connected to the chess piece

The remaining artifacts and chess piece rotated so the base plane is parallel to the coordinate plane, so it can be sliced away

Slicing plane adjusted to slice away the base plane

After the slicing operation, the rest of the scanning artifacts can be selected and then deleted

After all the scanning artifacts have been cleared away

Chess piece reoriented to upright position

Finished object exported as a .3MF file that can be imported into Slic3r PE

Now that I had a 3D object file representing the chess piece, I simply dropped it into Slic3r Prusa Edition, and voila! I was (almost) ready to print!  In Slic3r, I made the normal printing adjustments and started printing trial copies of the chess piece.  As usual I got the initial scale wrong, so I had to go through the process of getting that right.  In the process, though, I gained some valuable information about how well (or poorly) the 3D scan-to-model process worked, and what I could maybe improve going forward.  As shown in the following photo, the first couple of trials, in orange ABS, were pretty far out of scale (original model in the middle).

I went through a bunch of trials, switching to gray and then black PLA, and narrowing the scale down to the correct-ish value in the process.

The next photo is a detail of the four right-most figures from the above photo; the original chess piece is second from the right.  As can be seen from the photo, I’m getting close!

All of the above trials were printed on my Prusa Mk3 using either orange ABS or gray (and then black) PLA, using Prusa’s preset for 0.1mm layer height – some with, and some without, support.

After the above trials, I went back through the whole process, starting with the original set of scan photos, through Meshroom and Microsoft 3D Builder, to see if I could improve the 3D object slightly, and then reprinted it using Prusa’s 0.05mm ‘High Detail’ settings.  The result, shown in the following photos, is better, but not a whole lot better than the 0.1mm regular ‘Detail’ setting.

Three of the best prints, with the original for comparison. The second from right print is the 0.05mm ‘super detail’ print

I noticed that the last model printed was missing part of the base – a side effect of the slicing process used to remove scanning artifacts.  I was able to restore some of the base in 3D Builder using the ‘extrude down’ feature, and then reprinted it. The result is shown in the photo below.

 

“Final” print using Prusa Mk3 with generic PLA, Slic3r PE with 0.1mm ‘Detail’ presets, with support

Just as an aside, it occurred to me at some point that the combination of practical 3D scanning using a common digital camera and practical 3D printing using common 3D printers is essentially the ‘replicator’ found in many sci-fi movies and stories.  I would never have thought that I would live to see the day that sci-fi replicators became reality, but at least in some sense they have!

Stay tuned!

Frank
Better Battery Charging for Wall-E2

Posted 08 February 2019,

After recovering from my bout with #include file hell, I’m back to working on Wall-E2, my autonomous wall-following robot.  In a previous post I described the integration of the TP5100 charger module into Wall-E2’s system, but I have lately discovered that the TP5100 end-of-charge (EOC) detection scheme isn’t very reliable in my application.  The TP5100 uses a current threshold to determine EOC, which works fine in a normal application where the battery pack isn’t simultaneously supplying current to the load, but in my application Wall-E2 stays active and alert while it’s docked at its feeding station; it has to, in order to be able to respond to the EOC signal and detach itself.  So, the current going through the TP5100 never goes below the idling current for Wall-E2, which is on the order of 300mA or so.  This is enough to keep the charging current above the TP5100 EOC threshold, so Wall-E2 hangs on to the charging station forever – not what I had in mind!

Life would be good if I could somehow measure both Wall-E2’s idling current while on charge and the total charging current.  Then I could subtract the two values to get the excess current, i.e. the current going into the battery but not coming out – the current actually going into increasing the battery charge level. When this current falls below an appropriate threshold, charging could be terminated. This scenario is complicated by the need to measure the current on the high side of both the charging circuit and the +Vbatt supply to the rest of the system.

Well, as it turns out, Adafruit (and I’m sure others) makes high-side current sensors just for this purpose, based on the INA219 and INA169 chips. The INA219 module reports current via an I2C connection, while the INA169 module provides an open-emitter current source proportional to the current through an onboard 0.1Ω resistor (see this data sheet for details).  My plan is to use two of these modules: one at the charging circuit input, and a second one at the 8.4V +VBatt supply from the battery to the rest of the system. Since Wall-E2 stays awake during charging, it should be simple to monitor both currents and decide when charging is complete (or complete enough, anyway).  As a bonus, I should be able to extend the life of Wall-E2’s battery pack by terminating the charge at less than 100% capacity. See this very informative post by François Boucher for the details.
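With those two measurements in hand, the end-of-charge decision reduces to a subtraction and a threshold test. A sketch of the idea (function names are mine; the 50mA cutoff is the value used in the test run described in the update below):

```cpp
// Net charging current = total current at the charging connector minus the
// current Wall-E2 draws to stay awake; charge terminates when the net
// current falls below a cutoff.
const float EOC_CUTOFF_MA = 50.0f;

bool chargeComplete(float totalCurrentMa, float runCurrentMa) {
    float chargingCurrentMa = totalCurrentMa - runCurrentMa;
    return chargingCurrentMa < EOC_CUTOFF_MA;
}
```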

03 March 2019 Update:

After the usual number of mistakes and setbacks, I think I have the dual current sensor feature working, and now WallE-2 charges by monitoring both the battery voltage and the actual charging current (total current measured at the charging connector minus the run current measured on WallE-2’s main power line).  As a final test, I discharged the main battery pack at about 1A for about 1 hour, and then charged it again using the two-current method. As shown in the Excel plots below, charging terminated when the actual battery charging current fell below 50mA.

Complete charge cycle, after discharging at approx 1A for approx 1 Hr

Last 20 minutes or so of charge operation, showing detail of end-of-charge behavior

Two INA169 high-side current sensors mounted in the battery/motor compartment.  Note the 3D-printed mounting plates.

Here is a photo showing the installation of the two INA169 sensor modules in WallE-2’s battery compartment.  The one on the left measures running current, and the one on the right measures total current (charging + running).

The following figure shows the system schematic for WallE-2, with the two new INA169 sensors highlighted.

System schematic with locations of new current sensors highlighted

Now that I have the current sensors and the new charge algorithm working, it’s time to go back and take another look at the charge/discharge characteristics of the Panasonic 18650B cells I’m using, to see if I can extend their life with a more intelligent charge/discharge scheme.  The following plot shows the charge characteristics for this cell.

Charge plot for the Panasonic 18650B Li-ion cell

As noted by François Boucher, the red line above is the total energy returned to the battery during charge.  As he notes, the battery acquires about 90% of its total capacity in the first 105 minutes of the charge period, when charged at 0.5C at 25°C.  My battery pack is a 2-cell parallel x 2-cell series stack, and currently I’m charging to a 50mA cutoff.  According to Boucher, this is way too low – I’m charging to almost 100% capacity and thereby limiting the cycle life of the battery pack.  Looking at the end-of-charge detail plot above (repeated below), I should probably use a charge-termination threshold of around 500mA charging current (250mA per cell in the parallel stack) for about a 90% capacity charge.

End-of-charge detail with approximate 90% charge current highlighted

On the discharge side, Panasonic’s Discharge Characteristics plot below shows a discharge down to 2.50V/cell.  WallE-2’s typical current drain is about 1A, or about 0.3 – 0.5C, and the cutoff I’m using is 3.0V/cell.  From the plot, this gives about 3150mAH of the approximately 3300mAH available at 0.5C, or about 95%.  So, it looks like I should raise the discharge cutoff voltage to about 3.2V, or about 3000mAH of the 3300mAH available – about 90%.

Conclusion:

So revisiting WallE-2’s battery management seems to have paid off; I now have much better visibility into and control over the charge/discharge of the 18650B battery pack, and at least some expectation that I can use WallE-2’s newfound battery super powers for good rather than evil ;-).

Stay tuned!

Frank

 

Back to the future with Wall-E2. Wall-following Part II

Posted 09 February 2019

A long time ago in a galaxy far, far away, I set up a control algorithm for my autonomous wall-following robot Wall-E2.  After a lot of tuning, I wound up with basically a bang-bang system using a motor speed step function of about 50, where the full range of motor speeds is 0-255.  This works, but as you can see in the following chart & Excel diagram, it’s pretty clunky.  The algorithm is shown below, along with an Excel chart of motor speeds taken during a hallway run, and a video of the run.

for left wall tracking

for right wall tracking
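The algorithm listings were images that haven’t survived; here is the gist of the bang-bang scheme, reconstructed from the description above. The base speed and sign conventions are my assumptions – only the ±50 step and the 0-255 speed range are from the post:

```cpp
#include <algorithm>
#include <cstdint>

const int16_t SPEED_STEP = 50;   // step size from the original tuning
const int16_t BASE_SPEED = 127;  // assumed cruise speed (placeholder)

struct MotorSpeeds { int16_t left, right; };

// Bang-bang left-wall tracking: drifting away from the wall -> steer toward
// it; drifting closer -> steer away. Speeds clamped to the 0-255 range.
MotorSpeeds leftWallBangBang(uint16_t prevDistCm, uint16_t currDistCm) {
    MotorSpeeds m = {BASE_SPEED, BASE_SPEED};
    if (currDistCm > prevDistCm) {         // moving away: turn left (toward wall)
        m.left  -= SPEED_STEP;
        m.right += SPEED_STEP;
    } else if (currDistCm < prevDistCm) {  // moving closer: turn right (away)
        m.left  += SPEED_STEP;
        m.right -= SPEED_STEP;
    }
    m.left  = std::clamp(m.left,  (int16_t)0, (int16_t)255);
    m.right = std::clamp(m.right, (int16_t)0, (int16_t)255);
    return m;
}
```

Right-wall tracking would be the mirror image. The fixed ±50 step is what makes the motion clunky: the correction is the same size whether the error is 1cm or 30cm.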

 

Run 1, using homebrew algorithm

Note the row of LEDs on the rear. They display (very roughly) the turn direction and rate.

Since the time I set this up, I started using a PID algorithm for the code that homes the robot in on its charging station using a modulated IR beam, and it seems to work pretty well with a PID value of (Kp,Ki,Kd) = (200,0,0).  I’d like to use the knowledge gained from the IR homing subsystem to make Wall-E2 a bit more sophisticated and smooth during wall-following operations (which, after all, is what Wall-E2 will be doing most of the time).

In past work, I have not bothered to set a fixed distance from the wall being followed; I was just happy that Wall-E2 was following the wall at all, much less at a precise distance. Besides, I really didn’t know if having a preferred distance was a good idea.  However, with the experience gained so far, I now believe a 20-30 cm offset would probably work very well in our home.

So, my plan is to re-purpose the PID object used for IR homing whenever it isn’t actually in the IR homing mode, but with the PID values appropriate for wall-following rather than beam-riding.

PID Parameters:

For the beam-riding application I used a setpoint of zero, meaning the algorithm adjusts the control value (motor speed adjustment value) to drive the input value (offset from IR beam center) to zero.  This works very nicely as can be seen in the videos.  However, for the wall-following application I am going to use a setpoint of about 20-30cm, so that the algorithm will (hopefully) drive the motors to achieve this offset.  The Kp, Ki, & Kd values will be determined by experimentation.
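As a concrete sketch of the arrangement, here is a bare-bones version of the controller with the wall distance as the input and the target offset as the setpoint. This is a from-scratch illustration, not the Arduino PID library object the robot actually re-purposes:

```cpp
// Minimal PID controller for wall-following: 'measuredCm' is the ping
// distance, 'setpointCm' the desired offset (20-30cm), and the return value
// is a motor speed adjustment. With Ki = Kd = 0 this reduces to P-only.
struct WallPid {
    float kp, ki, kd;
    float setpointCm;
    float integral, prevError;

    float compute(float measuredCm, float dtSec) {
        float error = setpointCm - measuredCm;
        integral += error * dtSec;
        float derivative = (error - prevError) / dtSec;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};
```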

13 February 2019 Update:

I got the PID controller working with a target offset of 25cm and Kp,Ki,Kd = 2,0,0 and ran some tests in my hallway.  In the first test I started with Wall-E2 approximately 25cm away from the wall. As can be seen in the following video, this worked quite well, and I thought “I’m a genius!”  Then I ran another test with Wall-E2 starting about 50cm away from the wall, and as can be seen in the second video, Wall-E2 promptly dived nose-first right into the wall, and I thought “I’m an idiot!”

The problem, of course, is that the PID algorithm correctly turns Wall-E2 toward the wall to reduce the offset to the target value, but in doing so it changes the orientation of the ping sensor with respect to the wall, and the measured distance goes up instead of down.  The PID response is to turn Wall-E2 even more, making the problem worse, and ending with Wall-E2 colliding nose-first with the wall it’s supposed to be following – oops!

So, it appears that I’ll need some sort of two-stage approach to the constant-offset wall-following problem, with an ‘approach’ stage and a ‘capture’ stage.  If the measured distance is outside a predefined capture window (say +/- 2cm or so), then either the PID algorithm needs to be disabled entirely in favor of a constant-angle approach, or the PID parameters need to change to something more like Kp,Ki,Kd = 0,1,1.  More experimentation is required.
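The mode-selection part of that two-stage idea is simple enough to sketch now (the window size and names are placeholders pending that experimentation):

```cpp
#include <cmath>

enum class TrackMode { APPROACH, CAPTURE };

// Outside a +/- 2cm window around the target offset, use a constant-angle
// approach; inside it, hand control to the PID loop.
TrackMode selectMode(float measuredCm, float targetCm, float windowCm = 2.0f) {
    return (std::fabs(measuredCm - targetCm) > windowCm) ? TrackMode::APPROACH
                                                         : TrackMode::CAPTURE;
}
```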

Stay tuned,

Frank

WallE2 Robot in Arduino #include file hell

Posted 02 February 2019

After a vacation from my WallE2 autonomous wall-following robot code (taken to recover from rotator cuff surgery, and to create/test my new digital tensionometer), a week or so ago I decided it was time to get back into WallE2 mode.  At the time I thought this would be a piece of cake, as I had left WallE2 in pretty good shape, code-wise, back in September 2018 (at least that’s what I thought!).

Instead, for the past couple of weeks I have been enduring what can only be described as “#include file hell”.  The first time I tried to compile my main program, I saw a couple of warnings in an i2cDev-related library.  The code still compiled, but I take all warnings very seriously, and these weren’t there the last time I compiled the code.

So, I started trying to figure out what, if anything, had changed, and how to go about fixing whatever problem had caused the warnings to pop up.  Unfortunately, everything I did made things worse – and worse – and worse.  Nothing made sense.  Jeff Rowberg, the creator of the fine i2cDev collection of I2C device drivers, was mystified, as was Tim Leek, the main guy on the Visual Micro forum.  Arggghhhh!

So, Jeff Rowberg suggested I try compiling the project in the Arduino IDE rather than in VS2017/Visual Micro, to eliminate any issues caused by that environment.  Up until this point I had actually never used the Arduino IDE, much preferring the more helpful and feature-rich VS2017/Visual Micro IDE.  But, what the heck – how hard could it be?

Well, the answer was – DAMNED HARD!  Using the Arduino IDE after the VS/VM environment was like moving backwards in time from the 21st century to the stone age – having to rub sticks together to make a compile happen!  Moreover, the Arduino IDE created even more (and different) problems than I had experienced so far, meaning that not only wasn’t I draining the swamp, but the alligators were getting even more numerous!  Some of the ‘features’ of the Arduino IDE:

  • When the IDE is first launched, it comes up with the last .INO file loaded.  If you want a different file, File->Open launches a new IDE instance; soon your desktop is littered with IDE instances.
  • When it tries to find a library based on a ‘#include <libraryName.h>’ line, it can’t handle ‘[library name]-master’, as is common with libraries downloaded from GitHub.
  • It requires exact name matching, including capitalization, so ‘#include <libraryName>’ will not match the ‘Arduino\Libraries\Libraryname’ folder.
  • Editing is clunky, and there’s no such thing as Intellisense.

After running around in circles with my hair on fire for the last week or so, making my wife miserable with my griping and inundating Tim Leek and Jeff Rowberg with ever-more-desperate cries for help, I finally decided that I was simply going to have to start over from scratch with my robot program (some 3000+ lines of code in the main program, plus over a dozen custom libraries) and build it up piece by piece until everything works again – groan. It’s not like I don’t have backups and wasn’t using revision control – I do and I was; it’s just that the programs that compiled cleanly back in September are generating warnings and errors now, and everything I do makes the problem worse!

Since the original problem seemed to be related to the library that runs the DFRobots MPU6050 module, I decided to start there. After struggling up the learning curve on the Arduino IDE, I also decided I would make sure that each program step compiled cleanly in both the VS/VM and Arduino IDEs before proceeding to the next step. I reasoned that since the Arduino IDE is much pickier about library locations and names, I could use it as a sort of editorial check on VS/VM; if it works in the Arduino IDE, the VS/VM setup will have no problem.

For the DFRobots MPU6050 6DOF IMU module, I started with Jeff Rowberg’s ‘MPU6050_DMP6.INO’ example program, buried way down in ‘i2cdevlib-master\Arduino\MPU6050\examples\MPU6050_DMP6’. According to the i2cDev ReadMe, I could either put the entire i2cDev-master folder in Arduino\Libraries and let the linker figure it out, or just put the required files in the project folder (the solution folder for VS/VM, the ‘sketchbook folder’ for the Arduino IDE). I elected for the latter (local files) option, as I was at least a little suspicious that part or all of my original problem was caused by the compiler/linker loading from the wrong library folder in the i2cDev folder tree. In addition, I completely removed the i2cDev folder tree from my PC and re-downloaded it from GitHub, placing it in a completely unrelated folder so that neither environment could possibly find it. Then I copied the required header/.cpp files from the hidden i2cDev folder tree into the project folder. In VS/VM I created a project called ‘MPU6050_DMP6_Example’ and copied the Arduino versions of I2Cdev.cpp/.h, MPU6050.cpp/.h, MPU6050_6Axis_MotionApps20.h, and helper_3dmath.h into it. Then I started working to get this project to compile in both the VS/VM and Arduino IDEs.

I’ve now gotten it to compile and link in the Arduino IDE (albeit with the same warnings I started with just before I went down the rabbit hole into header file hell). However, I can’t get it to compile in VS/VM – it blows a whole bunch of ‘undefined reference’ linker errors, apparently one error for each MPU6050 function.

These errors proved impossible to correct, and nobody on either the Arduino or Visual Micro forums seemed to be able to help. Finally, in desperation, I uninstalled and re-installed the Visual Micro extension to VS2017, but that didn’t solve the problem either – exactly the same behavior.

So, last night I uninstalled VS2017 entirely from my system, and deleted the entire contents of the temp folder used for temporary compile files. On my system this was C:\Users\Frank\AppData\Local\Temp.

03 February 2019

This morning I reinstalled VS2017CE and, using the Tools & Extensions menu, reinstalled Visual Micro. I left everything pretty much at the default settings (including the IDE selection and IDE location entries). The only thing non-standard about the setup was the inclusion of ‘https://raw.githubusercontent.com/sparkfun/Arduino_Boards/master/IDE_Board_Manager/package_sparkfun_index.json’ in the ‘Optional additional boards manager urls’ field. This was apparently left over from my previous installation. I’m not worried about this particular setting, but it does indicate that not everything about the previous incarnation of Visual Micro was actually removed from my system.

After installing VS/VM, I ran through a few of my simpler projects, and so far they have all compiled without problems (or with understandable and easily fixable problems). I also compiled each program in the Arduino IDE, taking care to follow the Arduino IDE restrictions (no “-master” in library folder names, exact capitalization, etc.).

  • BlinkTest.ino – very simple, no #includes
  • ClassTest.ino – very simple class-construction project, no #includes
  • DigitalScale.ino – several #includes, including the HX-711 load cell library
  • StepperSpeedCtrl – uses #include <Stepper.h>
  • AdaFruit_BTLE_UART_Test.ino – uses 7 different library #includes
  • Arduino_IMU6050_Test4.ino – uses the i2cDev and MPU6050 libraries, along with SBWire, elapsedMillis, and PrintEx. This compiled OK, but with the same warnings (overrun & ‘one definition rule’) as when I first started this odyssey. Fortunately, that’s all that happened – I didn’t get the ‘(.text.startup+0x1e4): undefined reference to MPU6050::initialize()’ error – yay! This program also compiles in the Arduino IDE, with the exact same warnings. So, it appears I may be back where I started on this odyssey, with a program using the MPU6050 libraries that compiles OK, but with one understandable/fixable warning (the overrun warning) and one mystery warning (the ‘one definition rule’ warning).
  • MPU6050_DMP6_Example: spoke too soon! This program blows the same ‘(.text.startup+0x1e4): undefined reference to MPU6050::initialize()’ errors as before in VS/VM, but compiles fine (albeit with the same two warnings as always) in the Arduino IDE.
  • Arduino_IMU6050_Test4: this is a program I created some time ago, and I found that it compiles/links fine (still with the overrun/ODR warnings) in both VS/VM and the Arduino IDE.

In desperation, I decided to create a completely new Arduino project in VS/VM, copy the ‘known-good’ code from Arduino_IMU6050_Test4 into it, and then start hacking it down to the point where it fails. Surprise, surprise – when I did this, the new project (UnDefTest2) failed right away in VS/VM, blowing LOTS of linker errors! Moreover, it compiled/linked fine in the Arduino IDE – how could this be? There MUST be something different about the VS/VM environment between Arduino_IMU6050_Test4 and UnDefTest2 – but what? After putting the two projects up side by side (this is where a dual-monitor setup comes in REAL handy), I finally twigged to the difference: in the ‘working’ version, the local header/.cpp files had been ‘added’ to the project’s ‘Header Files’ and ‘Source Files’ folders via the Solution Explorer (right-click on the folder icon, select ‘Add Existing…’, select the desired files, click OK). As soon as I added the relevant files to the UnDefTest2 project, it compiled/linked fine – YAY!!

I could not believe what I was seeing! For some reason, VS/VM refused to process header/.cpp files in the same folder as the .INO file, even though I had carefully checked the ‘Local Files Override Library Files’ option in the ‘Vmicro->Compiler’ menu. At the same time, the Arduino IDE always searches the local folder before anything else, so simply placing the relevant files in the local folder does the trick. The fact that Visual Micro requires an additional (and non-intuitive) step for this boggles the mind.

04 February 2019

OK, when I started all this foolishness I was trying to find out (and fix) whatever was causing the ‘One Definition Rule’ (ODR) violation warning I was getting on all my programs that used the MPU6050. I really, really hate warnings, and I was determined to get to the bottom of this – and I finally did!

The ‘One Definition Rule’ (ODR) warning is issued when the compiler/linker sees code that could produce two different definitions of the same object. If that can happen, EVER, the warning is issued. Believe it or not, that is exactly what happens when the compiler processes MPU6050.h – it sees that there are conditions under which two different descriptions of the MPU6050 class could exist, and says “no no”. The relevant portion of the class definition is shown below:
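(The original post showed this code as an image. In its place, here is a hedged reconstruction of the pattern in MPU6050.h – not the verbatim library source, just the shape of the problem, using the member names involved:)

```cpp
#include <cstdint>

// Hedged reconstruction of the troublesome pattern in MPU6050.h -- not the
// verbatim library source. The private member list changes depending on
// whether a MotionApps macro was defined before this header was included,
// so two translation units could, in principle, see two different layouts
// for the same class name: a classic One Definition Rule hazard.
class MPU6050 {
    public:
        void initialize();          // stand-in for the real interface
    private:
        uint8_t devAddr;
        uint8_t buffer[14];
#if defined(MPU6050_INCLUDE_DMP_MOTIONAPPS20) || defined(MPU6050_INCLUDE_DMP_MOTIONAPPS41)
        // These two members only exist in the MotionApps builds
        uint8_t *dmpPacketBuffer;
        uint16_t dmpPacketSize;
#endif
};
```

With the #if defined/#endif pair removed, the class has only one possible layout, which is why that change makes the warning go away.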

When the compiler sees these lines, it says to itself: “Hmm, the way this is written, it is theoretically possible for two different versions of ‘class MPU6050’ to exist – one with just two private member variables (devAddr & buffer) and one with four (adding dmpPacketBuffer & dmpPacketSize) – and that is a strict no-no; I’m going to whack that programmer across the head with an ODR violation!”

If the ‘#if defined’ and ‘#endif’ lines are commented out, the ODR warning goes away.

Now, I suspect nobody has ever had a problem with this issue, as it would be very unlikely to have a project where BOTH versions of MPU6050 are in play – but of course the compiler doesn’t see it that way.

On a slightly different but related subject, the OTHER warning was due to a potential integer overrun in the dmpGetGravity() function, as shown below:
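(The post showed the offending code as an image. Here is a hedged sketch of the integer math involved – reconstructed from the i2cdevlib source from memory, so treat the details as approximate. The point is that on an 8-bit AVR, ‘int’ is 16 bits, so the constant expression 2 * 16384 = 32768 overflows and draws the compiler warning:)

```cpp
#include <cstdint>

// Hedged sketch of the gravity-vector math in MPU6050::dmpGetGravity();
// qI[] holds the quaternion components in Q14 fixed-point format.
// On AVR, 'int' is 16 bits, so the literal product 2 * 16384 (= 32768)
// overflows a signed 16-bit int -- the source of the overrun warning.
void dmpGetGravitySketch(int16_t *data, const int16_t *qI) {
    data[0] = ((int32_t)qI[1] * qI[3] - (int32_t)qI[0] * qI[2]) / 16384;
    data[1] = ((int32_t)qI[0] * qI[1] + (int32_t)qI[2] * qI[3]) / 16384;
    data[2] = ((int32_t)qI[0] * qI[0] - (int32_t)qI[1] * qI[1]
            - (int32_t)qI[2] * qI[2] + (int32_t)qI[3] * qI[3]) / (2 * 16384);
}
```

Making the divisor unsigned long (‘UL’) forces that last division up to 32-bit arithmetic even on AVR, which silences the warning.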

If the last line of the above calculation is changed to the following (note the addition of ‘UL’)
– (int32_t)qI[2] * qI[2] + (int32_t)qI[3] * qI[3]) / (2 * 16384UL);
then this warning goes away as well.

Mission accomplished! I now have MPU6050 code that compiles without errors (or warnings!) in both the VS/VM and Arduino IDE environments. Along the way I learned more than I ever wanted to know about ‘One Definition Rule’ violations, and about the innards of both the VS/VM environment and the Arduino IDE.

To paraphrase a quote attributed to Abraham Lincoln:

I feel like the man who was tarred and feathered and ridden out of town on a rail. To the man who asked him how he liked it, he said: “If it wasn’t for the honor of the thing, I’d rather walk.”

Stay tuned,

Frank

Digital Tension Scale, Part V

Posted 09 January 2019,

In my previous post on this subject I described the components I planned to use for my Digital Tension Scale project, and also the design for a box that would mount directly on the S-shaped load cell assembly.

This post describes the ‘final’ (to the extent that anything I do can be considered final) assembly of the completed system into my 3D-printed housing, and the results of some initial battery-powered testing.

As shown in the following photos, the major components (Teensy 3.2 microcontroller, HC-05 Bluetooth module, HX-711 load cell amp/A-D, and Sparkfun ‘Basic’ LiPo charger) were mounted on perfboard, which in turn was attached to the box lid via a set of custom-printed standoffs. A short piece of ribbon cable connects the Teensy to the LCD display. The general idea behind this physical layout is to allow easy access to the electronics for troubleshooting, and to allow for battery charging and/or Teensy programming without having to open the box.

3D-printed housing. Note the glow from the Sparkfun charger LED

View of housing showing the access port for supplying USB power and/or programming the Teensy

View with the lid and electronics board removed. The LCD display is face down in its cutout

Exploded view showing all system components

Showing connections from load cell to HX-711

Top view showing how load cell attaches to the housing

Closeup showing load cell lead routing and power/programming port

End view showing charging port

 

Preliminary Testing Results:

At this point I have everything running on battery power alone inside the box, and I have been able to demonstrate remote data capture on my PC using the HC-05 BT link. The following image shows data taken from my rowing machine, followed by a short video demonstrating the setup.

Complete Code:

Here is the complete Teensy 3.2 program as it stands today. As you can see if you inspect the code, I have the Teensy low-power stuff turned OFF for the moment (that’s the purpose of the ‘#define NO_SNOOZE’ statement).
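The NO_SNOOZE mechanism itself is just conditional compilation. Here is a hedged, desktop-compilable sketch of how such a guard can switch the Snooze-based sleep in and out – the function and variable names are my own invention for illustration, not the actual scale code:

```cpp
#include <cstdint>

// Hedged sketch of a NO_SNOOZE compile-time switch -- names invented for
// illustration; this is not the actual scale program.
// Comment out the #define to re-enable the Snooze branch (Teensy only).
#define NO_SNOOZE

#ifndef NO_SNOOZE
#include <Snooze.h>                     // Teensy low-power library
SnoozeTimer snoozeTimer;                // wake-up source
SnoozeBlock snoozeConfig(snoozeTimer);  // what stays active during sleep
#endif

// Wait between load-cell readings; returns true if the low-power path ran.
bool idleWait(uint32_t ms) {
#ifndef NO_SNOOZE
    snoozeTimer.setTimer(ms);           // wake after 'ms' milliseconds
    Snooze.deepSleep(snoozeConfig);     // the Teensy sleeps here
    return true;
#else
    // Fully powered wait; on the Teensy this branch would just call delay(ms)
    (void)ms;
    return false;
#endif
}
```

Because the guard is resolved by the preprocessor, the Snooze-specific code disappears entirely from a NO_SNOOZE build, so the rest of the program compiles unchanged either way.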

 

Schematic:

Future Work:

  • Do some more work to reduce power consumption and extend battery life. I got the ‘Snooze’ feature to work on the Teensy, but that only reduces the Teensy’s power consumption; it does nothing to reduce the power consumption of the other components. I tried using a MOSFET to turn the HC-05 BT module on & off, but found this to be impractical, as the module then loses its connection to the remote data-collection device. I have also tried removing power from the LCD module, but that turned out to be problematic as well.

Stay tuned,

Frank