Yearly Archives: 2019

Speaker Amplifier Project, Part VI – Second Production Run

Posted 29 September 2019

I got an email from Dr. Betty Lise Anderson of the Electrical Engineering Department (I think it’s actually Electrical and Computer Engineering now) at The Ohio State University, asking me if I still had the documentation for the speaker amplifiers I created a couple of years ago for her STEM outreach program.  Dr. Anderson said these units were very well-liked by her STEM outreach students; so well liked, in fact, that they apparently walked away on their own!  She asked me if I would be willing to fabricate another couple of amps, and said she would happily pay for all the parts.

Since I never throw anything away, I did indeed have the documentation and even some remaining parts from the original project.  I still had a half-dozen or so of the custom audio level indicator PCBs and at least one Adafruit 20W Class D amplifier left over.  I figured I’d need a couple of wall-wart 12V power supplies and one more amplifier – everything else was already available in my parts bins.  The hundred bucks or so required to get all the parts wasn’t worth worrying about, and besides, I could probably write it off as an advertising expense for EM Workbench LLC.

The enclosure:

When I made the first set, I 3D printed an enclosure that was a modified version of the nice rounded-corner box design published by Adafruit for just the amplifier.  However, when I tried this trick again, I wound up not liking the result.  Instead, I decided I should be able to create my own rounded-corner box.  I searched around on Thingiverse and found a few parameterized rounded box designs, but they all seemed sort of half-baked.  So I broke out my copy of OpenSCAD and started figuring out how to do it myself.  I ran across a video that demonstrated the rounded-corner technique using OpenSCAD’s ‘minkowski’ function, and then I was off and running.  After just a few hours (OK, more than a few, but definitely less than infinity) I had coded a nice, compact OpenSCAD module to generate an arbitrarily shaped rounded-corner box with an optional companion nesting lid.  The code is available on Thingiverse here.  Using the OpenSCAD module, I generated an enclosure and companion lid and exported the result as an STL file, which I then sucked into Tinkercad to add the required cutouts and such for the amplifier project.
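The core of the ‘minkowski’ trick is tiny.  Here’s a minimal sketch (not my full parameterized module – dimensions and facet count are illustrative): shrink a cube by the corner radius on all sides, then minkowski it with a sphere to ‘inflate’ it back out with rounded corners.

// rounded-corner box: cube shrunk by r on all sides, then
// 'inflated' back to full size by a minkowski sum with a sphere of radius r
module rounded_box(size, r) {
  minkowski() {
    cube([size[0] - 2*r, size[1] - 2*r, size[2] - 2*r], center = true);
    sphere(r = r, $fn = 32);
  }
}
rounded_box([80, 50, 30], 4);  // 80 x 50 x 30 mm box with 4 mm corner radius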

Amplifier enclosure as generated in OpenSCAD

Amplifier enclosure after importing the STL file into Tinkercad

Amplifier enclosure after modification for the Adafruit amplifier and level indicator PCB

After getting the enclosure design all spiffed up, I started printing it on my trusty PowerSpec 3D Pro 3D printer, only to have it die on me – so much for ‘trusty’!  This was not an entirely unexpected event, as I had been noticing a ‘burnt insulation’ smell coming from it over the last few weeks and suspected that it might be on its last legs.  So, this batch of amplifier enclosures would have to be single-color (the last one was dual-color – red for the enclosure and gray for the text) – at least until my new MakerGear M3-ID 3D printer shows up :-)).  Here’s the result.

Amplifier and Activity Indicator:

In reviewing the documentation from the original project, I saw that the activity indicator schematic wasn’t entirely accurate, so I brought it up to date – mostly cosmetic/lettering, but…

View showing power indicator LED installation before installing power input terminal connector

View showing 2.2K current limiting resistor for power indicator LED

View showing connections between activity monitor PCB and amplifier board

The finished product:

Two complete amplifiers with companion power supplies

A large part of the motivation for this post was to thoroughly document all aspects of fabricating the second run of OSU/STEM speaker amplifiers, so that when I get that next call from Dr. Betty Lise Anderson… 😉

Frank

Sparkfun MPU9250 Test with Teensy 3.2

Posted 15 September 2019

After successfully demonstrating heading-based wall following with my little two-motor robot, I attempted to integrate this capability back into my newly re-engined (re-motored??) 4-wheel Wall-E2 robot.  Naturally the attempt was a dismal failure, for reasons I have yet to determine.  The MPU6050 IMU on the 4-wheel robot refused to produce valid heading data, and when I then attempted to redo the previously successful experiment on my 2-wheel robot, it too failed – in the same manner!  Clearly the two robots got together and decided to misbehave just to watch me tear what’s left of my hair out!

So, after trying in vain to figure out WTF with respect to the two robots and their respective IMU’s, I decided to just start all over with a different controller and a different IMU and see if I could just make something positive happen.  I found a Sparkfun MPU 9250 IMU breakout board in my parts bin, left over from an older post.  Because the Sparkfun board is set up for 3.3V only, I decided to use a Teensy controller instead of an Arduino Mega and see if I could just get something to work.

After the usual number of screwups and frustrations, I was finally able to get the Sparkfun MPU 9250 breakout board and the Teensy 3.2 talking to each other and to capture some valid heading (yaw) data from the MPU 9250.  The reason for this post is to document the setup and the code so when I have this same problem a year from now, I can come back here and have a working, documented baseline to start with.

The Hardware:

I used a Teensy 3.2 and a Sparkfun MPU9250 IMU breakout board, both mounted on a small ASP solderless breadboard, as shown in the following photo, along with a Fritzing view of the layout.

The Software:

I wrote a short program to display heading (yaw) values from the 9250, as shown below.  The program uses Brian (nox771)’s wonderful i2c_t3 I2C library for the Teensy 3.x & LC controllers, and a modified version of ‘Class_MPU9250BasicAHRS_t3.h/.cpp’ that incorporates an adaptation of Sebastian Madgwick’s “…efficient orientation filter” for 6DOF vs 9DOF.  The modification removes the magnetometer information from the calculations, as I already know that the magnetic field in my environment is corrupted by house wiring and is unreliable.

The modified Madgwick routine is included below.
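For reference, here’s a minimal sketch of the 6DOF (magnetometer-free) form, adapted from Madgwick’s published ‘updateIMU’ routine – essentially the calculation the modified class performs, though the exact listing in my class differs.  Gyro rates are in rad/sec; the ‘beta’ gain and ‘deltat’ interval values are illustrative:

float q0 = 1.0f, q1 = 0.0f, q2 = 0.0f, q3 = 0.0f; // orientation quaternion
float beta = 0.6f;    // filter gain (illustrative value)
float deltat = 0.02f; // integration interval in sec (50Hz assumed)

void MadgwickUpdateIMU(float gx, float gy, float gz, float ax, float ay, float az)
{
  // rate of change of quaternion from the gyroscope
  float qDot1 = 0.5f * (-q1 * gx - q2 * gy - q3 * gz);
  float qDot2 = 0.5f * ( q0 * gx + q2 * gz - q3 * gy);
  float qDot3 = 0.5f * ( q0 * gy - q1 * gz + q3 * gx);
  float qDot4 = 0.5f * ( q0 * gz + q1 * gy - q2 * gx);

  // gradient-descent correction from the accelerometer (skipped if accel is all zeros)
  if (!((ax == 0.0f) && (ay == 0.0f) && (az == 0.0f)))
  {
    float recipNorm = 1.0f / sqrtf(ax * ax + ay * ay + az * az);
    ax *= recipNorm; ay *= recipNorm; az *= recipNorm;

    float _2q0 = 2.0f * q0, _2q1 = 2.0f * q1, _2q2 = 2.0f * q2, _2q3 = 2.0f * q3;
    float _4q0 = 4.0f * q0, _4q1 = 4.0f * q1, _4q2 = 4.0f * q2;
    float _8q1 = 8.0f * q1, _8q2 = 8.0f * q2;
    float q0q0 = q0 * q0, q1q1 = q1 * q1, q2q2 = q2 * q2, q3q3 = q3 * q3;

    float s0 = _4q0 * q2q2 + _2q2 * ax + _4q0 * q1q1 - _2q1 * ay;
    float s1 = _4q1 * q3q3 - _2q3 * ax + 4.0f * q0q0 * q1 - _2q0 * ay - _4q1 + _8q1 * q1q1 + _8q1 * q2q2 + _4q1 * az;
    float s2 = 4.0f * q0q0 * q2 + _2q0 * ax + _4q2 * q3q3 - _2q3 * ay - _4q2 + _8q2 * q1q1 + _8q2 * q2q2 + _4q2 * az;
    float s3 = 4.0f * q1q1 * q3 - _2q1 * ax + 4.0f * q2q2 * q3 - _2q2 * ay;
    recipNorm = 1.0f / sqrtf(s0 * s0 + s1 * s1 + s2 * s2 + s3 * s3);
    qDot1 -= beta * s0 * recipNorm;
    qDot2 -= beta * s1 * recipNorm;
    qDot3 -= beta * s2 * recipNorm;
    qDot4 -= beta * s3 * recipNorm;
  }

  // integrate and re-normalize the quaternion
  q0 += qDot1 * deltat;  q1 += qDot2 * deltat;
  q2 += qDot3 * deltat;  q3 += qDot4 * deltat;
  float recipNorm = 1.0f / sqrtf(q0 * q0 + q1 * q1 + q2 * q2 + q3 * q3);
  q0 *= recipNorm; q1 *= recipNorm; q2 *= recipNorm; q3 *= recipNorm;
}

// heading (yaw) in degrees, extracted from the quaternion
float GetYawDeg()
{
  return atan2f(2.0f * (q1 * q2 + q0 * q3), q0 * q0 + q1 * q1 - q2 * q2 - q3 * q3) * 180.0f / PI;
}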

Note to self; after reviewing the extensive email thread with Kris Winer (tleracorp@gmail.com) I now believe I encapsulated all the required modifications to the AHRS code into a new class called “Class_MPU9250BasicAHRS_t3” with Class_MPU9250BasicAHRS_t3.h & .cpp files, and then referenced this new class in my MPU9250 work.

The results:

After getting everything working (and figuring out the history), I finally started getting reliable heading data from the MPU9250 as shown in the following Excel plot, where the breadboard was manually rotated back and forth.

Stay tuned,

Frank

Accessing the Internet with an ESP32 Dev Board

Posted 27 August 2019

During my recent investigation of problems associated with the MPU6050 IMU on my 2-motor robot (which I eventually narrowed down to I2C bus susceptibility to motor driver noise), one poster suggested that the Espressif ESP32 wifi & bluetooth enabled microcontroller might be a good alternative to Arduino boards because the ESP32 chip is ‘shielded’ (not sure what that means, but…).  In any case, I was intrigued by the possibility that I might be able to replace my current HC-05 bluetooth module (on the 2-motor robot) and/or the Wixel shield (on the 4-motor robot) with an integrated wifi link that would be able to send back telemetry from anywhere in my house via the existing wifi network.  So, I decided to buy a couple (I got the Espressif ESP32 Dev Board from Adafruit) and see if I could get the wifi capability to work.

As usual, this turned out to be a much bigger deal than I thought.  My first clue was the fact that Adafruit went to significant pains on their website to note that the ESP Dev Board was ‘for developers only” as shown below:

Please note: The ESP32 is still targeted to developers. Not all of the peripherals are fully documented with example code, and there are some bugs still being found and fixed. We got many sensors and displays working under Arduino IDE, so you can expect things like I2C and SPI and analog reads to work. But other elements are still under development.

Undaunted, I got two boards, and set about connecting my ESP32 dev board to the internet.  I found several examples on the internet, but none of them worked (or were even understandable, at least to me).  That’s when I realized that I was basically clueless about the entire IoT world in general, and the ESP32’s place in that world in particular – bummer!

So, after lots of screaming, yelling, and hair-pulling (well, not the last because I don’t have much left), I finally got my ESP32 to talk to the internet and actually retrieve part of a web page without crashing.  In order to consolidate my new-found knowledge (and maybe help other ESP32 newbies), I decided to create this post as a ‘how to’ for ESP32 internet connections.

General Strategy

Here’s the general strategy I followed in getting my ESP Dev Board connected to the internet and capable of downloading data from a website.

  1. Install ESP32 libraries and tools into either the Arduino IDE or the Visual Micro extension to Microsoft Visual Studio (I have the VS 2019 Community Edition).
  2. Install and run a localhost server.  This was a great troubleshooting tool, as with it I could monitor website requests to the server.
  3. Install ‘curl’, the wonderful open-source tool for internet protocol data transfers.  This was absolutely essential for verifying the proper http request syntax needed to elicit the proper response from the server.
  4. Use curl to figure out the proper HTTP ‘GET’ string syntax.
  5. Modify the WiFiClientBasic example program to successfully retrieve a document from my localhost server.

Install ESP32 libraries and tools

This step by itself was not entirely straightforward;  I wound up installing the libraries & tools using the Arduino IDE rather than in the VS2019/Visual Micro environment.  I’m sure it can be done either way, but it seemed much easier in the Arduino IDE.  Once this is done, then the ESP32 Dev Board can be selected (in either the Arduino IDE or the VS/VM environment) as a compile target.

Install and run a localhost server

This step is probably not absolutely necessary, as there are a number of ‘mock’ sites on the internet that purport to help with IoT app development.  However, I found having a ‘localhost’ web server on my laptop very convenient, as this gave me a self-contained test suite for working through the myriad problems I encountered.  I used the Node.js setup for Win10, as described in this post.  The cool thing about this approach is the console window used to start the server also shows all the request activity directed to the server, allowing me to directly monitor what the ESP32 is actually sending to the site. Here are two screenshots showing some recent activity.

The first log fragment above shows the server starting up, and the first set of http requests.  The first half dozen or so requests are from another PC; I did this to initially confirm I could actually reach my localhost server.  This first test failed miserably until I figured out I had to disable my PC’s firewall – oops!  The next set of lines are from my curl app showing what is actually received by the server when I send a ‘GET’ request from curl.

The screenshot above shows some more curl-generated requests, and then a bunch of requests from ‘undefined’.  These requests were generated by my ‘WiFiClientBasic’ program running on the ESP32 – success!!

Install ‘curl’

Curl is a wonderful command-line program to generate http (and any other web protocol you can imagine) requests.  You can get the executable from this site, and unzip and run it from a command window – no installation required.  Using curl, I was able to determine the exact syntax for an http ‘GET’ request to a website, as shown in the screenshot below

The screenshot above shows curl being used from the command line.  The first line C:\Users\Frank>curl -v http://192.168.1.90:1337/index.html generates a ‘GET’ request for the file ‘index.html’ to the site ‘192.168.1.90’ (my localhost server address on the local network), and the -v (verbose) option displays what is actually sent to the server, i.e.

> GET /index.html HTTP/1.1
> Host: 192.168.1.90:1337
> User-Agent: curl/7.55.1
> Accept: */*

This was actually a huge revelation to me, as I had no idea that a simple ‘GET’ request was a multi-line deal – wow! Up to this point, I had been trying to use the ‘client.send()’ command in the WiFiClientBasic example program to just send the ‘GET /index.html HTTP/1.1’ string, with a commensurate lack of success – oops!

Modify the WiFiClientBasic example program

Armed with the knowledge of the exact syntax required, I was now able to modify the ‘WiFiClientBasic’ example program to emit the proper ‘GET’ syntax so that the localhost server would respond appropriately.  The final program (minus my network login credentials) is shown below.
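Since the listing as posted may differ in detail, here’s a minimal sketch of the approach, using the stock Arduino-ESP32 ‘WiFi.h’ library; the credentials are placeholders, and the wait loop implements the fix discussed in the Conclusion below:

#include <WiFi.h>

const char* ssid = "mySSID";          // placeholder - real credentials omitted
const char* password = "myPassword";  // placeholder
const char* host = "192.168.1.90";    // localhost server on my LAN
const uint16_t port = 1337;

void setup()
{
  Serial.begin(115200);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.print("."); }
  Serial.println("\nWiFi connected");

  WiFiClient client;
  if (!client.connect(host, port))
  {
    Serial.println("connection failed"); // report and bail instead of crashing
    return;
  }

  // full multi-line GET request, terminated with a blank line
  client.print("GET /index.html HTTP/1.1\r\n");
  client.print("Host: 192.168.1.90:1337\r\n");
  client.print("User-Agent: ESP32\r\n");
  client.print("Accept: */*\r\n\r\n");

  // wait for the response instead of immediately reading an empty buffer
  // (the immediate read is what crashes the original example)
  int waitMsec = 0;
  while (client.available() == 0 && waitMsec < 5000) { delay(1); waitMsec++; }
  Serial.printf("response after %d mSec:\n", waitMsec);
  while (client.available()) Serial.write(client.read());
  client.stop();
}

void loop() {}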

This produced the following output:

Conclusion:

After all was said and done, most of the problems I had getting the ESP32 to connect to the internet and successfully retrieve some contents from a website were due to my almost complete ignorance of HTTP protocol syntax.  However, some of the blame must be laid at the feet of the WiFiClientBasic example program, as its complete lack of error checking caused multiple ‘Guru Meditation Errors’ (which I believe is Espressif-speak for ‘segmentation fault’) while I was trying to get everything to work.  In particular, the original example code assumes the website response will be available immediately after the request and tries to read an invalid buffer, crashing the ESP32.  My modified code waits in a 1 msec delay loop for client.available() to return a non-zero result.  As shown in the above output, this usually happens after 5-7 msec.

In addition, I found that either the full syntax:

GET /index.html HTTP/1.1
Host: 192.168.1.90:1337
User-Agent: ESP32
Accept: */* {newline}

or just

GET /index.html HTTP/1.1{newline}

worked fine to retrieve the contents of ‘index.html’ on the localhost server, because the ‘host’ information is already present in the connection, and the defaults for the remaining two lines are reasonable.  However, I believe the trailing {newline} is still required in both cases.

So, now that I can successfully use the ESP32 to connect to my local wireless network and perform internet functions, my plan is to try and use some of the IoT support facilities available on the internet (like Adafruit’s io.adafruit.com) to see if I can get the ESP32 to upload simulated robot telemetry data to a cloud-based data store. If I can pull that off, then I’ll be one step closer to replacing my current HC-05 bluetooth setup (on the 2-motor robot) and/or my Wixel setup (on the 4-motor robot).

Stay tuned!

Frank

Back to the future with Wall-E2. Wall-following Part VI

Posted 13 August 2019

In my last post on this subject, I discussed the idea of using orientation information to compensate raw wall offset distance values to account for the errors associated with robot orientation.  The idea was that if I could do that, then Wall-E2 would know how far he was away from the wall regardless of orientation, and would be able to make appropriate corrections to get to and stay at a predetermined offset from the wall.

Well, it didn’t really work out that way.  After getting through the geometry analysis and the math, it turned out that in order to use the compensation algorithm, I have to know the initial robot orientation with respect to the wall, and I don’t :-(.  Without knowing this, it is basically impossible to apply the correct compensation.  For example, if the robot is originally oriented 30º away from the wall, then a ‘toward-wall’ rotation will cause the measured distance to go down, and an upward compensation is required.  However, if the robot is initially oriented toward the wall, then that same ‘toward-wall’ rotation will cause the measured distance to go up and a downward compensation is required – bummer!

However, all is not lost;  the ability to perform relatively precise angular rotations means that I can use incremental rotations for acquiring and then tracking a predetermined offset distance.  In the acquisition phase, the robot orientation is changed in 10º increments in the appropriate direction, and an N-point slope calculation is performed to determine whether or not the current ‘cut angle’ will allow the robot to eventually reach the predetermined offset distance.   As the robot approaches the offset line, the cut angle is reduced until it is zero, in theory resulting in the robot travelling parallel to the wall at the offset distance.  At this point the robot transitions from ‘capture’ to ‘track’ mode, and the response to distance deviations becomes more robust.
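Here’s a rough sketch of the capture-phase logic just described, for the right-side wall.  The helper functions (GetRightPingCm(), RotateRelativeDeg()) are hypothetical stand-ins for the actual motor/sensor code, and the thresholds and sign conventions are illustrative:

const float OFFSET_CM = 30.0;  // desired wall offset
const int N = 5;               // number of points in the slope calculation

// simple N-point slope estimate, in cm per reading
float NPointSlope(const float* d, int n)
{
  return (d[n - 1] - d[0]) / (float)(n - 1);
}

// capture phase: adjust the 'cut angle' in 10-deg increments until the robot
// is closing on the offset line, then flatten out as the line is approached
void CaptureRightWallOffset()
{
  float hist[N];
  while (true)
  {
    for (int i = 0; i < N; i++) { hist[i] = GetRightPingCm(); delay(100); }
    float err = hist[N - 1] - OFFSET_CM;   // + means outside the offset line
    float slope = NPointSlope(hist, N);

    if (fabs(err) < 2.0 && fabs(slope) < 0.2) break; // on the line & parallel

    if (err > 0 && slope >= 0)      RotateRelativeDeg(+10); // cut toward the wall
    else if (err < 0 && slope <= 0) RotateRelativeDeg(-10); // cut away from the wall
    else if (fabs(err) < 10.0)      RotateRelativeDeg(err > 0 ? -10 : +10); // reduce cut angle near the line
  }
  // transition from 'capture' to 'track' mode happens here
}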

This strategy was implemented using my 2-motor robot, and seems to work well once the normal crop of bugs was eradicated.  The following Excel plots show the results of two short runs where the robot first captured and then tracked a 30cm offset setting.

Capture and track a 30cm wall offset starting from the outside

Capture and track a 30cm wall offset starting from the inside

So far I have only implemented this completely for the right side, but as the left side is identical, I anticipate no problems in this regard.

Future Work:

So far I have demonstrated the ability to capture and then track a predetermined wall offset distance, starting from either inside or outside the desired offset distance. This represents a quantum leap in performance, as Wall-E2 currently can only track whatever distance it first measures – it has no capability to capture a desired offset distance.  However, there are still some ‘edge’ cases that need to be dealt with one way or the other.  For instance, if the robot orientation is too far away from parallel, the current algorithm won’t be able to rotate it enough to capture the desired offset or the measured distance will exceed the max range gate of the ping sensors (currently set at 200cm).  These conditions may not be all that deleterious, as eventually Wall-E2 will get close enough to something to trigger an avoidance response, thereby resetting the entire orientation picture (hopefully to something a little more parallel).

In addition to the wall tracking problem, the new capability to make reasonably precise angular rotations should significantly improve Wall-E2’s performance in handling ‘open-corner’ and ‘closed-corner’ situations; currently these cases are handled with timed turns, which are only correct for one floor covering type (hard vs soft) and battery state.  With the heading measurement capability, a 90º corner turn will always be (approximately) 90º whether it is on carpet or hard flooring.  In addition, I can now program in obstacle-avoidance step-turns for approaching obstacles instead of relying entirely on the ‘backup-and-turn’ approach.

Stay tuned!

Frank

Back to the future with Wall-E2. Wall-following Part V

Posted 08 August 2019

In my last post on this subject, I described some ideas for improving Wall-E2’s wall following performance by compensating for distance-to-wall errors caused by Wall-E2 not being oriented perfectly parallel to the wall.  The situation is shown in the diagram below:

When the robot is parallel to the wall, as shown in light purple, the ping sensor measures distance d1 to the wall.  However, when it rotates to make a wall-following adjustment, the ping sensor now measures distance d2, even though the robot’s center of rotation (CR) hasn’t moved at all.  If the wall-following algorithm is based strictly on ping distance, the robot tends to wander back and forth, chasing ping measurements that don’t reflect (no pun intended) reality.  I need a way of relating the measured distance to the distance from the robot’s CR to the wall, so that wall-following adjustments can be made referenced to the CR, not to the ping sensor position on the robot.

Given the above geometry, an expression can be developed to relate the perpendicular distance d1 and the measured distance d2, as shown below:

Expression relating perpendicular distance to measured distance for any rotation angle
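In its simplest form – ignoring the ping sensor’s lever-arm offset from the CR – the relationship is just a cosine projection (this is a reconstruction from the geometry above, consistent with the ~15% error at 30º computed in the Part IV simulation below):

$d_1 = d_2 \cos\theta$

where θ is the robot’s off-parallel angle.  A more complete expression also accounts for the sensor’s distance from the CR, since the sensor itself sweeps through an arc as the robot rotates.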

I set up an experiment where the robot was placed on a platform about 16cm away from an obstacle.  I measured the ‘ping’ distance to the obstacle as the robot was manually rotated +/- 20 deg.  Then I plotted  the data in Excel as shown below:

In the above plot, the heading values (blue line) have been normalized to the initial heading, with any linear drift removed.  After correction, the robot changes heading almost exactly +/- 20 deg.  Similarly, the measured distance values (orange line) were normalized to the nominal distance of 16cm.  As can be seen, the measured distance varied from about +4 to -2 cm, even though the robot center of rotation (CR) remained fixed.  Then the distance compensation expression shown above was applied, resulting in the gray line.  This shows that the compensation expression is effective in reducing angle-induced distance changes.

Next, I set up a ‘live’ experiment with the 2-motor robot to more closely emulate the normal operating environment.  I set up a section of ‘wall’ and had the robot make a single 60 deg turn, starting with the robot angled about 30 deg toward the wall, and ending with the robot angled about 30 deg away from the wall.  Distance measurements were taken as rapidly as possible during the turn, but not before or after the turn started.

Here’s a short video of the 2-motor robot approaching a ‘wall’ at an angle of about 30º and making a single turn of about 60º.  The entire sequence is about 3 seconds long.  The robot runs straight for about 1 sec, then turns for about 1 sec, then goes straight again for about 1 sec.

The measured ‘ping’ distances for the 1-second turn portion of the run is shown in the Excel plot below

The above plot starts when the robot starts turning, at about 1.2 sec into the video (the approach to the wall is not shown).  When the turn starts, the measured distance to the wall is about 20 cm.  The measured distance decreases rapidly to about 16 cm at about 0.4 sec into the turn (about 1.6 sec into the video), stays there for about 0.4 sec, and then starts climbing rapidly to about 23 cm when the turn finishes.  However, the distance from the center of rotation (CR) of the robot to the wall changes hardly at all.  The blue painter’s tape in the background of the video has black markings every 5 cm, so it is possible to estimate the distance from the CR to the wall throughout the turn.  My estimate is that the robot’s CR starts at about 25 cm, decreases to about 22 cm at the apex of the turn, and then goes back to about 25 cm at the end of the turn.  The measured distance decreases 4 cm and then increases 8 cm, while the robot’s CR distance decreases 3 cm and increases 3 cm – quite a difference, due entirely to the angle change between the robot and the wall during the turn.  After normalizing the heading values so that they reflect the angle off parallel and applying the distance compensation expression above, I got the following plot:

In the above plot, the gray line shows the corrected distance from the robot CR to the wall.  As estimated from the video earlier, the CR varies only about 1cm during the turn.  This is pretty strong evidence that the proposed distance correction scheme is correct and effective in removing distance measurement errors due to robot heading changes.

With the technique demonstrated above, I am optimistic that I can now not only improve wall tracking, but also implement wall-following at a specific distance, say 25 cm.  The difficulty with trying to displace laterally to acquire and then lock to a specific distance was that the large changes in measured distance, caused by the angle change needed to move toward or away from the wall, made it impossible to determine where the robot’s CR actually was relative to the desired offset distance.  By (mostly) removing this orientation-induced error term, it should be feasible to determine the actions needed to approach and then track the desired offset distance.

Stay tuned!

Frank

08 February 2020 Update:

As I continued my campaign to integrate heading information into my wall-following robot algorithm, my efforts to compensate ‘ping’ distances for off-parallel robot orientations with respect to the nearest wall kept failing, and I didn’t know why.  I had gone through the math several times and was convinced it was valid, and as the plot above showed, it should work.

So, I made another run at it, completely redoing the math from the ground up – and running some more tests in my ‘local range’ (aka my office).  Still no joy – no matter what I did, the math seemed to be overcompensating, as shown in the plot below:

Ping Distance vs Calc Distance for two heading changes

This plot (and others like it) convinced me that I was still missing something fundamental.  As I often do, I was thinking about this in bed while drifting off to sleep, and I realized that I might be able to determine the culprit by cheating; I would place the robot at a set distance from the wall, and carefully rotate it manually over a compass rose.  At each heading I would manually measure the distance from the ping sensor to the wall, perpendicular to the plane of the sensor (i.e. I would physically measure the distance I would expect the ping sensor to report), and also record the ‘ping’ distance reported by the sensor.  With just a few measurements the problem became obvious; the ‘ping’ distance for slant angles to the wall does not even remotely resemble the actual physical distance – it is much less, as shown below.

As can be seen, the compensation algorithm actually works quite well when dealing with the physically measured slant range.  However, because the ‘ping’ distance loses accuracy very rapidly at off-parallel angles beyond about 20 degrees, the compensation algorithm is ineffective.  A classic case of ‘GIGO’.

After performing the above experiment, I was still left with the mystery of why the compensation algorithm appeared to work so well before – WTF?  So, I went back and very carefully examined the previous plot and the underlying data, and discovered I’d made another classic experimental error – the ‘Calculated Distance’ data was plotted on the wrong scale.  When plotted on the correct scale, the plot changes to the one shown below:

Previous plot with ‘Calc Distance’ plotted on the correct scale

Now it is clear that the calculated compensation using ‘ping’ distances is not at all useful.

So, the bottom line on all of this is that the effort to apply a heading-based ping distance compensation was doomed to failure from the start, because the distance reported by the ping sensor is wildly inaccurate for off-perpendicular geometries.  The good news is that now at least I know why the compensation effort was doomed to fail!

In the meantime, I independently developed a technique for determining the heading required to orient the robot parallel to the wall: it is the heading associated with the minimum ping distance found by swinging the robot back and forth.  This technique uses the ping sensors in the realm where they are most accurate, and does away entirely with the need for compensation.
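A rough sketch of the idea is shown below; the helper functions are hypothetical stand-ins for the actual motor/sensor code, and the sweep limits are illustrative.  Since the measured distance behaves like d/cos(θ) near parallel, the minimum over the swing occurs at the parallel heading.

// swing the robot through a +/-30 deg arc and return the heading at which
// the ping distance was minimum - that heading is (nearly) parallel to the wall
float FindParallelHeading()
{
  float minDist = 9999.0;
  float bestHdg = GetRelHeadingDeg();   // hypothetical heading accessor
  for (int rel = -30; rel <= 30; rel += 5)
  {
    RotateToRelHeadingDeg(rel);         // hypothetical precision-turn helper
    float d = GetPingCm();              // hypothetical ping accessor
    if (d < minDist) { minDist = d; bestHdg = GetRelHeadingDeg(); }
  }
  RotateToRelHeadingDeg(0);             // caller can then turn to bestHdg
  return bestHdg;
}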

Stay tuned!

Frank

Back to the future with Wall-E2. Wall-following Part IV

Posted 30 April 2019

In two previous posts (here & here) I described my efforts to upgrade Wall-E2’s wall following performance using a PID control algorithm.  The results of my efforts to date in this area have not been very spectacular – a better description might actually be ‘dreadful’ :-(.

After some additional analysis, I came to believe that the reason the PID approach doesn’t work very well is a fundamental feature of the way Wall-E2 measures distance to the nearest wall.  Wall-E2 has two acoustic sonar units fixed to its upper deck, and they measure the distance perpendicular to the robot’s longitudinal axis.  What this means, however, is that when the robot is angled with respect to the nearest wall, the distance measured isn’t the perpendicular distance, but rather the hypotenuse of the right triangle with the right angle at the wall.  So, when Wall-E2 turns toward or away from the wall, the measured distance increases even though the robot hasn’t actually moved toward or away.  Conversely, if the robot is angled in toward the wall and then turns to be parallel, the measured distance decreases even if the robot hasn’t moved at all relative to that wall. The situation is shown in the sketch below:

Using Excel, I ran a simulation of the ping distance versus the actual distance for a range of angle offsets from 0 to 30 degrees, as shown below:

As shown above, the ping distance for a constant 25 cm offset ranges from 25 (robot longitudinal axis parallel to the wall) to almost 29 cm for a 30 degree off-axis heading. These values translate to a percentage error of zero to approximately 15%, independent of the initial parallel distance.
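In equation form, with θ the off-parallel angle:

$d_{ping} = \dfrac{d_{perp}}{\cos\theta}, \qquad \text{error} = \dfrac{1}{\cos\theta} - 1$

At θ = 30º this gives 25 cm / cos(30º) ≈ 28.9 cm, an error of about 15%, matching the simulation.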

So, it becomes obvious why a standard PID algorithm has trouble; if the ping distance goes up slightly, the PID algorithm attempts to compensate by turning toward the wall.  However, this causes the ping distance to increase rather than decrease, causing the algorithm to command an even greater angle toward the wall, which in turn causes a further increase in ping distance – entirely backward.  The reverse happens for an initial decrease in the ping distance starting from a parallel orientation.  The algorithm commands a turn away from the wall, which causes the ping distance to increase immediately, even though the actual distance hasn’t changed.  This causes the algorithm to seriously overcorrect in one case, and seriously undercorrect in the other.   Not good.

What I need is a way to compensate for the changes in ping distance caused by Wall-E2’s angular orientation with respect to the wall being tracked.  If Wall-E2 is oriented parallel to the wall, then no correction is needed; if not, then a correction is required.  Fortunately for the good guys, Wall-E2 now has a way of providing the needed heading information, with the integration of the MPU6050-based 6DOF IMU module described in this post from last September.

To investigate this idea, I modified an old test program to have Wall-E2 perform a series of mild S-turns in my test hallway while capturing heading and ping distance data.  The S-turns were tweaked so that Wall-E2 stayed a fairly constant 50 cm from the right-hand wall, as shown in the following movie clip.

Start of test area showing tape measure for offset distance measurement

Using Excel, I plotted the reported ping distance, the commanded heading, and the actual heading versus time, as shown below:

In the above plot, the initial CCW turn (away from the wall) was a 10° change, and all the rest were approximately 20° to maintain a more-or-less straight line.  At the end of the second (the first CW turn) and subsequent heading changes, there is an approximately 0.5 sec straight period, during which no data was captured.  As can be seen, the ping distance (gray curve) goes up slightly as the first CCW turn starts, then levels off during the changeover from CCW to CW turns, and then precipitously declines as the CW turn sweeps the ping sensor toward the perpendicular point.  Part of this decline is actual distance change caused by the 0.5 sec straight period that moves the robot toward the wall.  After the next (CCW) heading change is commanded, the robot starts to turn away from the wall causing the ping distance to increase, but this is partially cancelled by the fact that the robot continues to travel toward the wall during the S-turn. As soon as the robot gets parallel to the wall, then the ping distance goes up quickly as the heading continues to change in a CCW direction.  This behavior repeats for each S-turn until the end of the run.

As an exercise, I added another column to the spreadsheet – “perpendicular distance”, and set up a formula to compute the adjusted distance from the robot to the wall, using the recorded angular offset.  This computation presumes that the robot started off parallel to the wall (confirmed via the video clip).  The result is shown on the yellow line in the plot below:

Ping distance and heading vs time, with calculated perpendicular distance added

As can be seen from the above plot and video, the compensated distance looks like it might be a good match with the perpendicular distance estimated from the video. For instance, at 17 sec into the video, the robot has just finished the first clockwise turn and straight run, and is just starting the second counter-clockwise turn.  At this point the robot is oriented parallel to the wall, and the ping distance and the perpendicular distance should match. The video shows that distance should be about 33-35 cm, and the recorded ping distance at this point is 36 cm.  However, the calculated distance went directly from 45 cm at point 11 to 34 cm at point 12 and basically stayed at that value before changing rapidly from 34 to 45 over points 19 & 20.  Again at 19 seconds into the video, the robot is approximately 42-44 cm from the wall and parallel to it; both the actual ping distance and the calculated perpendicular distance agree at this point at 45 cm – a close match to the estimate from the video.

So now the question is – can I use the calculated perpendicular wall distance to assist wall-following operations?  A significant issue may be knowing when the robot is actually parallel to the wall, to establish a heading baseline for compensation calcs.

When is the robot parallel to the wall?

A unique feature of the point or points where the robot is parallel to the wall is that the ping distance and the calculated distance are equal.  However, that’s a bit of ‘chicken and the egg’ as one has to know the robot is parallel in order to use an offset angle of 0 degrees for the compensation calc to work out.  Since the heading information available from the MPU6050 IMU is only relative, the heading value for the parallel condition can be anything, and can vary arbitrarily from run to run.  So, what to do?  One thought would be to have the robot make a short S-turn at the start of any tracking run to establish the heading for which the ping distance goes through a minimum or maximum – the heading for the max/min point would be the parallel heading. From there on, that heading should be reliably constant until the next time the robot’s power is cycled.  Of course, a new parallel heading value would be required each and every time Wall-E2’s tracking situation changes (obstacle recovery, step-turns and reversals at the end of a hallway, changing from the left wall to the right one, etc).  Maybe a hybrid mode would be feasible, whereby the robot uses uncompensated heading-based S-turns instead of the current ‘bang-bang’ system for initial wall tracking, shifting to a compensation algorithm after a suitable parallel heading is determined.

Looking at the above plots, it may not be all that useful to look for maxima and/or minima, as there are multiple headings for which the ping distance is constant, so which one is the parallel heading?  Thinking about ways to rapidly find the parallel heading, it occurred to me that my previous work on quickly finding the mathematical variance of a set of values might be useful here.  I plugged the above ping distance numbers into the Excel spreadsheet I used before, and got the following plot of ping distance and 3-element running variance vs time.

So, looking at the above plot, it is encouraging that a 3-point running variance calculation shows near-zero values when the robot is most probably parallel or nearly parallel to the wall.  Adding the heading information to the spreadsheet gives the plot shown below

and now it is clear that the large variance values are associated with the changes from one heading to another, and the low variance values are associated with the middle section of each linear heading change (S-turn) segment.  If I further enhance the plot by putting the variance plot on a secondary scale and zooming in on the low variance sections, we get the plot shown below:

Variance scale modified to show 0-5 range only

In the above plot, the variance line is zoomed in to the 0-5 range only, emphasizing the 0-0.5 unit variance range.  In this plot it can be seen that the variance actually has a distinct minimum very near zero at time points 7, 16, 22, 28-30, and 35-38.  These time values correspond to robot heading values of 64, 61, 63, 61-67, and 70-65.  Discarding the last set as bad data (this is where the robot literally ‘hit the wall’ at the end of the run), we can compute an approximate parallel heading value as the average of all these values: the average of 64, 61, 63, and 64 (the average of 61-67) is 63 degrees.  From the video we can see that the robot started out parallel to the wall, and the first heading reading was 62 degrees – a very good match to the calculated parallel heading value.
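For reference, the 3-element running variance used in the spreadsheet is cheap enough to compute on the robot itself; a minimal sketch:

// population variance of the three most recent ping distances; a near-zero
// result suggests the robot is (momentarily) parallel to the wall
float RunningVariance3(float d1, float d2, float d3)
{
  float mean = (d1 + d2 + d3) / 3.0f;
  return ((d1 - mean) * (d1 - mean)
        + (d2 - mean) * (d2 - mean)
        + (d3 - mean) * (d3 - mean)) / 3.0f;
}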

The next step, I think, is to run some more field tests against a wall, with wall-following and heading assist integrated into the code.

Frank

MPU6050 IMU Motor Noise Troubleshooting

Posted 24 July 2019

For a while now I’ve been investigating ways of improving the wall following performance of my autonomous wall-following robot Wall-E2.  At the heart of the plan is the use of a MPU6050 IMU to sense relative angle changes of the robot so that changes in the distance to the nearest wall due only to the angle change itself can be compensated out, leaving only the actual offset distance to be used for tracking.

As the test vehicle for this project, I am using my old 2-motor robot, fitted with new Pololu 125:1 metal-geared DC motors and Adafruit DRV8871 motor drivers, as shown in the photo below.

2-motor test vehicle on left, Wall-E2 on right

The DFRobots MPU6050 IMU module is mounted on the green perfboard assembly near the right wheel of the 2-motor test robot, along with an Adafruit INA169 high-side current sensor and an HC-05 Bluetooth module used for remote programming and telemetry.

This worked great at first, but then I started experiencing anomalous behavior where the robot would lose track of the relative heading and start turning in circles.  After some additional testing, I determined that this problem only occurred when the motors were running.  It would work fine as long as the motors weren’t running, but since the robot had to move to do its job, not having the ability to run the motors was a real ‘buzz-kill’.  I ran some experiments on the bench to demonstrate the problem, as shown in the Excel plots below:

Troubleshooting:

There were a number of possibilities for the observed behavior:

  1. The extra computing load required to run the motors was causing heading sensor readings to get missed (not likely, but…)
  2. Motor noise of some sort was feeding back into the power & ground lines
  3. RFI created by the motors was getting into the MPU6050 interrupt line to the Arduino Mega and causing interrupt processing to overwhelm the Mega
  4. RFI created by the motors was interfering with I2C communications between the Mega and the MPU6050
  5. Something else

Extra Computing Load:

This one was pretty easy to eliminate.  The main loop does nothing most of the time, and only updates system parameters every 200 mSec.  If the extra computing load was the problem, I would expect to see no ‘dead time’ between adjacent adjustment function blocks.  I had some debug printing code in the program that displayed the result of the ‘millis()’ function at various points in the program, and it was clear that there was still plenty of ‘dead time’ between each 200 mSec adjustment interval.

Motor noise feeding back into power/ground:

I poked around on the power lines with my O’scope with the motors running and not running, but didn’t find anything spectacular; there was definitely some noise, but IMHO not enough to cause the problems I was seeing.  So, in an effort to completely eliminate this possibility, I removed the perfboard sub-module from the robot entirely, and connected it to a separate Mega microcontroller. Since this setup used completely different power circuits (the onboard battery for the robot, PC USB cable for the second Mega), power line feedback could not possibly be a factor.  With this setup I was able to demonstrate that the MPU6050 output was accurate and reasonable until I placed the perfboard sub-module in close proximity to the robot; then it started acting up just as it did when mounted on the robot.

So it was clear that the interference was RFI, not anything conducted through the wiring.

RFI created by the motors was getting into the MPU6050 interrupt line to the Arduino Mega and causing interrupt processing to overwhelm the Mega

This one seemed very possible.  The MPU6050 generates interrupts at a 20Hz rate, but I only use measurements at a 5Hz (200mSec) rate.  Each interrupt causes the Interrupt Service Routine (ISR) to fire, but the actual heading measurement only occurs every 200 mSec. I reasoned that if motor-generated RFI was causing the issue, I should see many more activations of the ISR than could be explained by the 20Hz MPU6050 interrupt generation rate.  To test this theory, I placed code in the ISR that pulsed a digital output pin, and then monitored this pin with my O’scope.  When I did this, I saw many extra ISR activations, and was convinced I had found the problem.  In the following short video clip, the top trace is the normal interrupt line pulse frequency, and the bottom trace is the ISR-generated pulse train.  In normal operation, these two traces would be identical, but as can be seen, many extra ISR activations are occurring when the motors are running.
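The instrumentation itself was trivial; a minimal sketch (the pin number is illustrative, and SCOPE_PIN is whatever free digital pin the O’scope probe is attached to):

volatile bool mpuDataReady = false;
const byte SCOPE_PIN = 32;  // any free digital pin (illustrative)

void dmpDataReadyISR()
{
  digitalWrite(SCOPE_PIN, HIGH);  // short pulse, one per ISR activation
  mpuDataReady = true;
  digitalWrite(SCOPE_PIN, LOW);
}

// in setup():
//   pinMode(SCOPE_PIN, OUTPUT);
//   attachInterrupt(digitalPinToInterrupt(2), dmpDataReadyISR, RISING);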

So now I had to figure out what to do with this information.  After Googling around for a while, I ran across some posts that described using the MPU6050/DMP setup without using the interrupt output line from the module; instead, the MPU6050 was polled whenever a new reading was required.  As long as this polling takes place at a rate greater than the normal DMP measurement frequency, the DMP’s internal FIFO shouldn’t overflow.  If the polling rate is less than the normal rate, then FIFO management is required.  After thinking about this for a while, I realized I could easily poll the MPU/DMP at a higher rate than the configured 20Hz rate by simply polling it each time through the main loop – not waiting for the 200mSec/5Hz motor speed adjustment interval.  I would simply poll the MPU/DMP as fast as possible, and whenever new data was ready I would pull it off the FIFO and put it into a global variable.  The next time the motor adjustment function ran, it would use the latest relative heading value and everyone would be happy.
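Here’s a minimal sketch of the polling idea, assuming Jeff Rowberg’s i2cdevlib MPU6050/DMP library (the 42-byte packet size mentioned later matches that library’s default DMP output; setup/configuration code is omitted):

#include "MPU6050_6Axis_MotionApps20.h"  // Rowberg's DMP library (assumed)

MPU6050 mpu;
uint16_t packetSize;       // set from mpu.dmpGetFIFOPacketSize() in setup()
uint8_t fifoBuffer[64];
float latestYawDeg = 0;    // updated as fast as packets arrive
uint32_t lastAdjMsec = 0;

void loop()
{
  // poll the DMP FIFO every pass - no interrupt line required
  if (mpu.getFIFOCount() >= packetSize)
  {
    mpu.getFIFOBytes(fifoBuffer, packetSize);
    Quaternion q;
    VectorFloat gravity;
    float ypr[3];
    mpu.dmpGetQuaternion(&q, fifoBuffer);
    mpu.dmpGetGravity(&gravity, &q);
    mpu.dmpGetYawPitchRoll(ypr, &q, &gravity);
    latestYawDeg = ypr[0] * 180.0f / PI;
  }

  // the motor adjustment function just uses the latest heading value
  if (millis() - lastAdjMsec >= 200)
  {
    lastAdjMsec = millis();
    // AdjustMotorSpeeds(latestYawDeg);  // hypothetical
  }
}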

So, I implemented this change and tested it off the robot, and everything worked OK, as shown in the following Excel plot.

And then I put it on the robot and ran the motors

Crap!  I was back to the same problem!  So, although I had found evidence that the motor RFI was causing additional ISR activations, that clearly wasn’t the entire problem, as the polling method completely eliminates the ISR.

RFI created by the motors was interfering with I2C communications between the Mega and the MPU6050

I knew that the I2C control channel could experience corruption due to noise, especially with ‘weak’ pullup resistor values and long wire runs.  However, I was using short (15cm) runs and 2.2K pullups on the MPU6050 end of the run, so I didn’t think that was an issue.  However, since I now knew that the problem wasn’t related to wiring issues or ISR overload, this was the next item on the list.  So, I shortened the I2C runs from 15cm to about 3cm, and found that this did indeed suppress (but not eliminate) the interference.  However, even with this modification and with the MPU6050 module located as far away from the motors as possible, the interference was still present.

Something else

So, now I was down to the ‘something else’ item on my list, having run out of ideas for suppressing the interference.  After letting this sit for a few days, I realized that I didn’t have this problem (or at least didn’t notice it) on my 4-motor Wall-E2 robot, so I started wondering about the differences between the two robot configurations.

  1. Wall-E2 uses plastic-geared 120:1 ‘red cap’ motors, while the 2-motor robot uses pololu 125:1 metal-geared motors
  2. Wall-E2 uses L298N linear drivers while the 2-motor version uses the Adafruit DRV8871 switching drivers.

So, I decided to see if I could isolate these two factors and see if it was the motors, or the drivers (or both/neither?) responsible for the interference. To do this, I used my new DPS5005 power supply to generate a 6V DC source, and connected the power supply directly to the motors, bypassing the drivers entirely.  When I did this, all the interference went away!  The motors aren’t causing the interference – it’s the drivers!

In the first plot above, I used a short (3cm) I2C wire pair and the module was located near, but not on, the robot. As can be seen, no interference occurred when the motors were run.  In the second plot I used a long (15cm) I2C wire pair and mounted the module directly on the robot in its original position.  Again, no interference when the motors were run.

So, at this point it was pretty definite that the main culprit in the MPU6050 interference issue is the Adafruit DRV8871 switch-mode driver.  Switch-mode drivers are much more efficient than L298N linear-mode drivers, but the cost is high switching transients and debilitating interference to any I2C peripherals.

As an experiment, I tried reducing the cable length from the drivers to the motors, reasoning that the cables must be acting like antennas, and reducing their length should reduce the strength of the RFI.  I re-positioned the drivers from the top surface of the robot to the bottom, right next to the motors, thereby reducing the drive cable length from about 15cm to about 3cm (a 5:1 reduction).  Unfortunately, this did not significantly reduce the interference.

So, at this point I’m running out of ideas for eliminating the MPU6050 interference due to switch-mode driver use.

  • I read at least one post where the poster had eliminated motor interference by eliminating the I2C wiring entirely – he used a MPU6050 ‘shield’ where the I2C pins on the MPU6050 were connected directly to the I2C pins on the Arduino microcontroller.  The poster didn’t mention what type of motor driver (L298N linear-mode style or DRV8871 switch-mode style), but apparently a (near) zero I2C cable length worked for him.  Unfortunately this solution won’t work for me as Wall-E2 uses three different I2C-based sensors, all located well away from the microcontroller.
  • It’s also possible that the motors and drivers could be isolated from the rest of the robot by placing them in some sort of metal box that would shield the rest of the robot from the switching transients caused by the drivers.  That seems a bit impractical, as it would require metal fabricating unavailable to me.  OTOH, I might be able to print a plastic enclosure, and then cover it with metal foil of some sort.  If I go this route, I might want to consider the use of optical isolators on the motor control lines, in order to break any conduction path back to the microcontroller, and capacitive feed-throughs for the power lines.

27 July 19 Update:

I received a new batch of GY-521 MPU6050 breakout boards, so I decided to try a few more experiments.  With one of the GY-521 modules, I soldered the SCL/SDA header pins to the ‘bottom’ (non-label side) and the PWR/GND pins to the ‘top’.  With this setup I was able to plug the module directly into the Mega’s SCL/SDA pins, thereby reducing the I2C cable length to zero.  The idea was that if the I2C cable length was contributing significantly to RFI susceptibility, then a zero length cable should reduce this to the minimum  possible, as shown below:

MPU6050 directly on Mega pins, normal length power wiring

In the photo above, the Mega with the MPU6050 connected is sitting atop the Mega that is running the motors. The GND and +5V leads are normal 15cm jumper wires.  As shown in the plots below, this configuration did reduce the RFI susceptibility some, but not enough to allow normal operation when lying atop the robot’s Mega.

GY-521 MPU6050 module mounted directly onto Mega, normal length power leads

I was at least a little encouraged by this plot, as it showed that the MPU6050 (and/or the Mega) was recovering from the RFI ‘flooding’ more readily than before.  In previous experiments, once the MPU6050/Mega lost sync, it never recovered.

Next I tried looping the power wiring around an ‘RF choke’ magnetic core to see if raising the effective impedance of the power wiring to high-frequency transients had any effect, as shown in the following photo.

GND & +5V leads looped through an RF Choke.

Unfortunately, as far as I could tell this had very little positive effect on RFI susceptibility.

Next I tried shortening the GND & +5V leads as much as possible.  After looking at the Mega pinout diagram, I realized there was GND & +5V very close to the SCL/SDA pins, so I fabricated the shortest possible twisted-pair cable and installed it, as shown in the following photo.

MPU6050 directly on Mega pins, shortest possible length power wiring

With this configuration, I was actually able to get consistent readings from the MPU6050, whether or not the motors were running – yay!!

In the plot above, the vertical scale only runs from -17 deg to -17.8 deg, so all the variation is due to the MPU6050 itself, and there are no apparent deleterious effects due to motor RFI – yay!

So, at this point it’s pretty clear that a significant culprit in the MPU6050’s RFI susceptibility is the GND/+5V and I2C cabling acting as antennas and conducting the RFI into the MPU6050 module.  Reducing the effective length of the antennas was effective in reducing the amount of RFI present on the module.

With the above in mind, I also tried adding a 0.01uF ‘chip’ capacitor directly at the power input leads, thinking this might be just as effective (if not more so) than shortening the power cabling.  Unfortunately, this experiment was inconclusive. The normal length power cabling with the capacitor seemed to cause just as much trouble as the setup without the cap, as shown in the following plot.

Having determined that the best configuration so far was the zero-length I2C cable and the shortest possible GND/+5V cable, I decided to try moving the MPU6050 module from the separate test Mega to the robot’s Mega.  This required moving the motor drive lines to different pins, but this was easily accomplished.  Unfortunately, when I got everything together, it was apparent that the steps taken so far were not yet effective enough to prevent RFI problems due to the switch-mode motor drivers.

The good news, such as it is, is that the MPU6050/Mega seems to recover fairly quickly after each ‘bad data’ excursion, so maybe we are most of the way there!

As a next step, I plan to replace the current DRV8871 switch-mode motor drivers with a single L298N dual-motor linear driver, to test my theory that the RFI problem is mostly due to the high-frequency transients generated by the drivers and not the motors themselves.  If my theory holds water, replacing the drivers should eliminate (or at least significantly suppress) the RFI problems.

28 July 2019 Update:

So today I got the L298N driver version of the robot running, and I was happy (but not too surprised) to see that the MPU6050 can operate properly with the motors ON or OFF when mounted on the robot’s Mega controller, as shown in the following photo and Excel plots.

2-motor robot with L298N motor driver installed.

However, there does still seem to be one ‘fly in the ointment’ left to consider.  When I re-installed the wireless link to allow me to reprogram the 2-motor robot remotely and to receive wireless telemetry, I found that the MPU6050 exhibited an abnormally high yaw drift rate unless I allowed it to stabilize for about 10 sec after applying power and before the motors started running, as shown in the following plots.

2-motor robot with HC-05 wireless link re-installed.

I have no idea what is causing this behavior.

31 July 2019 Update

So, I found a couple of posts that refer to some sort of auto-calibration process that takes on the order of 10 seconds or so, and that sounds like what is happening with my project.  I constructed the following routine that waits for the IMU yaw output values to settle.
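The routine boils down to something like this sketch; GetYawDeg() is a hypothetical stand-in for whatever function returns the latest DMP yaw value, and the 0.5-deg threshold is illustrative:

// block until successive 1-sec yaw readings stop drifting, or until timeout
bool WaitForYawSettle(uint32_t timeoutMsec)
{
  float lastYaw = GetYawDeg();
  uint32_t startMsec = millis();
  while (millis() - startMsec < timeoutMsec)
  {
    delay(1000);
    float yaw = GetYawDeg();
    if (fabs(yaw - lastYaw) < 0.5f) return true;  // < 0.5 deg/sec drift = settled
    lastYaw = yaw;
  }
  return false;  // never settled within the timeout
}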

This was very effective in determining when the MPU6050 output had settled, but it turned out to be unneeded for my application.  I’m using the IMU output for relative yaw values only, and over a very short time frame (5-10 sec), so even high yaw drift rates aren’t deleterious.  In addition, this condition only lasts for a 10-15 sec from startup, so not a big deal in any case.

At this point, the MPU6050 IMU on my little two-motor robot seems to be stable and robust, with the following adjustments (in no particular order of significance)

  • Changed out the motor drivers from 2ea switched-mode DRV8871 motor drivers to a single dual-channel L298N linear mode motor driver.  This is probably the most significant change, without which none of the other changes would have been effective.  This is a shame, as the voltage drop across the L298N is significantly higher than with the switch-mode types.
  • Shortened the I2C cable to zero length by plugging the GY-521 breakout board directly into the I2C pins on the Mega.  This isn’t an issue on my 2-motor test bed, but will be on the bigger 4-motor robot
  • Shortened the IMU power cable from 12-15cm to about 3cm, and installed a 10V 1uF capacitor right at the PWR & GND pins on the IMU breakout board.  Again, this was practical on my test robot, but might not be on my 4-motor robot.
  • Changed from an interrupt driven architecture to a polling architecture.  This allowed me to remove the wire from the module to the Mega’s interrupt pin, thereby eliminating that possible RF path.  In addition, I revised the code to be much stricter about using only valid packets from the IMU.  Now the code first clears the FIFO, and then waits for a data ready signal from the IMU (available every 50 mSec at the rate I have it configured for).  Once this signal is received, the code immediately reads a packet from the FIFO if and only if it contains exactly one packet (42 bytes in this configuration).  The code shown below is the function that does all of this.
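The function looks something like the sketch below, using the same globals as the polling sketch earlier (and again assuming Rowberg’s DMP library; here the ‘data ready’ wait is approximated by waiting for the FIFO to fill to exactly one packet):

// clear the FIFO, wait for the next complete packet, and use it only if the
// FIFO contains exactly one 42-byte packet
bool GetValidYaw(float* yawDeg)
{
  mpu.resetFIFO();                           // discard stale/partial packets
  uint32_t startMsec = millis();
  while (mpu.getFIFOCount() < packetSize)    // next packet arrives in <= 50 mSec
  {
    if (millis() - startMsec > 100) return false;
  }
  if (mpu.getFIFOCount() != packetSize) return false;  // exactly one packet only

  mpu.getFIFOBytes(fifoBuffer, packetSize);
  Quaternion q;
  VectorFloat gravity;
  float ypr[3];
  mpu.dmpGetQuaternion(&q, fifoBuffer);
  mpu.dmpGetGravity(&gravity, &q);
  mpu.dmpGetYawPitchRoll(ypr, &q, &gravity);
  *yawDeg = ypr[0] * 180.0f / PI;
  return true;
}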

Here’s a short video of the robot making some planned turns using the MPU6050 for turn management.  In the video, the robot executes the following set of maneuvers:

  1. Straight for 2 seconds
  2. CW for 20 deg, starting an offset maneuver to the right
  3. CCW for 20 deg, finishing the maneuver
  4. CCW for 20 deg, starting an offset maneuver to the left
  5. CW for 20 deg, finishing the maneuver
  6. 180 deg turn CW
  7. Straight for 3 sec
  8. 20 deg turn CCW, finishing at the original start point

So, I think it’s pretty safe to say at this point that although both the DFRobots and GY-521 MPU6050 modules have some serious RFI/EMI problems, they can be made to be reasonably robust and reliable, at least with the L298N linear mode motor drivers.  Maybe now that I have killed off this particular ‘alligator’, I can go back to ‘draining the swamp’ – i.e. using relative heading information to make better decisions during wall-following operations.

Stay tuned!

Frank

Arduino Remote Programming Using A HC-05 Bluetooth Module

Posted 10 June 2019

As part of my recent Wall-E2 Motor Controller Study, I reincarnated my old 2-motor robot as a test platform for Pololu’s ’20D’ metal gear motors.  When I got the robot put together and started testing the motors, I realized I needed a way to remotely program the Arduino controller and remotely receive telemetry, just as I currently do with my 4-wheel Wall-E2 robot.

On my Wall-E2 robot, remote programming/telemetry is accomplished using the very nice Pololu Wixel Shield.  However, I have been playing around with the cheap and small HC-05 Bluetooth module,  and decided to see if there was maybe a way to use this module as a replacement for the Wixel.

As I usually do, I started with LOTS of web research.  I found some posts claiming success in remotely programming an Arduino using an HC-05 module, but the information was sketchy and incomplete, so I decided to try to pull all the various sources together into a (hopefully) more complete tutorial for folks like me who want to use an HC-05 module for this purpose.

Overall Approach:

In order to remotely program an Arduino using an HC-05, the following basic parts are required:

  • A wireless link (obviously) between the PC and the HC-05.
  • A serial link between the PC and the Arduino, and between the Arduino and the HC-05. This part is well established, and the Arduino-to-HC-05 link can be done with either a hardware port (as with the Mega 2560) or a SoftwareSerial port using the SoftwareSerial library.  My tutorial uses the Mega 2560, so I use Tx/Rx1 (pins 18/19) for the Arduino-to-HC-05 link.
  • A way of resetting the Arduino to put it back into programming mode, so the new firmware can be uploaded.
  • A serial connection between the HC-05 and Tx/Rx0 on the microcontroller – more about this later.

The Wireless Link

The HC-05 is a generic Bluetooth device, and as such is compatible with just about everybody’s Bluetooth setup – phones and PCs.  I plan to use this with my Dell XPS15 9570 laptop, and I can pair with the HC-05 with no problem.  Here’s a link to a tutorial on pairing with the HC-05, and here’s another.  As another poster mentioned, the pairing mechanism creates multiple ‘outgoing’ and ‘incoming’ COM ports, and it’s hard to figure out which one to use.  In this last iteration, I found that I could remove the two ‘incoming’ COM ports and use just the ‘outgoing’ one. I don’t know if that is the right thing to do, but….

A serial link between the PC, the Arduino and the HC-05

This part is discussed and demoed in many tutorials, but the piece that is almost always missing is why you need this link in the first place. The reason is that several AT commands must be used to configure the HC-05 correctly for wireless Arduino program upload, and (as I understand it anyway) AT commands can only be communicated to the HC-05 via its hardware serial lines, and only when the HC-05 is in ‘Command’ or ‘AT’ mode.  The configuration step is a one-time deal; once the HC-05 is configured, it does not need to be done again unless the application requirements change.

A way of resetting the Arduino to accept firmware uploads

This is the tricky part.  As ‘gabinix’ said in this post:

Hi Paul… To be honest I couldn’t find any tutorials to explain how to program/upload sketches with the HC-05. In fact, the conclusion you came up with is in-line with all the information out there. But it’s actually an extremely simple solution.

The only thing that keeps the HC-05 from uploading a program to arduino is that it doesn’t have a DTR (Data Terminal Ready) pin which tells the arduino to reset and accept a new sketch.

The solution is to re-purpose the “state” pin (PI09)  on the breakout board. It’s purpose is to attach to an LED and indicate the connection status. It’s default setting is to send the pin HIGH when a connection is made, but you can simply enter into command mode of the HC-05 and use an AT COMMAND to tell it to send the pin LOW when a connection is made.

Voila! In about 1 minute of time you have successfully re-purposed the LED pin to a DTR pin which will reset your arduino to accept a new sketch when you hit the upload button.

A couple things to note… This will work for a pro-mini without additional hardware by connecting to the DTR pin. If you’re using an UNO or similar, you will need a capacitor in between our custom “state” pin and the reset pin on the uno. The reason is that the HC-05 will drive our custom pin LOW for the entire connection which would essentially be the same as holding the reset button the entire time. Having the cap in between solves that problem.

It a quick easy fix, takes about a minute to do. It’s just a lot harder to explain the steps to do it in a couple sentences.

Here’s a link to the AT COMMAND set —> http://robopoly.epfl.ch/files/content/sites/robopoly/files/Tutoriels/bluetooth/hc-05-at_command_set.pdf

and here’s a link to a tutorial, video, and sketch on how to enter the AT COMMANDS. —> http://www.techbitar.com/modify-the-hc-05-bluetooth-module-defaults-using-at-commands.html  <<< no longer available 🙁

So, the trick is to re-purpose the STATE output (PIO9, AKA Pin 32, AKA LED2, see this link) via the AT+POLAR=X,0 command (the second parameter is the one that controls the STATE pin polarity) to go LOW when the connection to upload the program is first started.  This signal is then connected to the Arduino’s RESET pin via the capacitor noted above (to make this signal momentary).  The ‘Instructables’ tutorial on this subject at this link actually gets most of this right, except it doesn’t explain why the AT commands are being entered or what they do – so I found it a bit mysterious.  In addition, it recommends soldering a wire directly to pin 32 rather than re-purposing the STATE output pin (re-purposing the STATE pin allows a no-solder setup). Eventually I ran across this link which contains a very good explanation of the AT commands used by the HC-05.  The required AT commands are:
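
For my setup they boiled down to the three commands below (listed here from memory, so double-check against the AT command set document linked above before trusting them):

AT+VERSION?          check communications and report the firmware version
AT+POLAR=1,0         make the STATE pin (PIO9) go LOW on connection – our ersatz DTR signal
AT+UART=115200,0,0   set the data-mode baud rate to 115200 to match the Mega 2560 bootloader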

My module is the variety with a small pushbutton already installed on the ‘EN’ pin, so entering ‘Command’ mode is accomplished by holding the pushbutton depressed while cycling the power, and then releasing the button once power has been applied.

When this is done, the LED will change from fast-blink to a very slow (like 2 sec ON, 2 sec OFF) blink mode, as shown in the following short video:

This indicates the HC-05 is in ‘Command’ mode and will accept AT commands.  If you have the style without the pushbutton, you’ll have to figure out a way to short across the pads where the pushbutton should be, while cycling the power.

The screenshot below shows the result of executing these commands using the wired USB connection to the Arduino and the hard-wired serial connection between the Mega’s Tx1/Rx1 port and the HC-05 running in ‘Command’ mode.

HC-05 configuration using the wired serial port connection to the HC-05

NOTE:  The various posts and tutorials on the HC-05 describe separate AT ‘mini’ and ‘full’ command modes; the ‘mini’ mode only recognizes a small subset of all AT commands, while ‘full’ recognizes them all.  ‘Mini’ mode is entered by momentarily applying VCC to pin 34, and ‘full’ mode is entered by holding pin 34 at VCC for the entire session.  One poster described this as a flaw in the HC-05 version 2 firmware which might be corrected in later versions.  It appears this may have been the case, as the HC-05 module I used responded with VERSION:3.0-20170601 and recognized all the commands I gave it (not a comprehensive test, but enough to make me think this problem has gone away).

Wiring Layout for HC-05 Configuration via AT commands

I decided that this post was my chance to learn how to make ‘pictorial’ wiring diagrams using the Fritzing app.  I had seen other posts with this kind of layout, and initially thought it was kinda childish.  However, when I started working with Fritzing (to my English-speaking ears, ‘Fritzing’ sounds like a gerund, not a proper noun – so a bit strange), I realized it has a LOT of power, so now I’m a convert ;-).

HC-05 wired for initial configuration using AT commands

In the diagram above, I’m using the Rx1/Tx1 (pins 19/18) hardware serial port available on the Mega.  If you are using an Uno, you’ll need to use SoftwareSerial to configure a second port for connection to the HC-05.  A 2.2K/1.0K voltage divider is used to drop Arduino Tx output voltages to HC-05 Rx input levels, but no conversion is required in the other direction. The HC-05 can be powered directly from Arduino +5V, as the HC-05 has an onboard regulator.

Initial AT Configuration Arduino Sketch
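
The sketch is just a two-way serial pass-through; a minimal version for the Mega 2560 looks something like the following (remember the HC-05 expects 38400 baud in ‘Command’ mode, and AT commands must be terminated with CR+LF):

// Two-way serial pass-through for HC-05 AT configuration (Mega 2560).
// Serial  = USB link to the PC; Serial1 = Tx1/Rx1 (pins 18/19) to the HC-05.
// Set the PC-side terminal to 9600 baud with 'Both NL & CR' line endings.
void setup()
{
  Serial.begin(9600);    // PC side
  Serial1.begin(38400);  // HC-05 'Command' mode default baud rate
}

void loop()
{
  if (Serial1.available()) Serial.write(Serial1.read());  // HC-05 --> PC
  if (Serial.available())  Serial1.write(Serial.read());  // PC --> HC-05
}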

All the code above does is pass characters between the PC’s serial port and the HC-05, in both directions. This is all that is required to configure the HC-05 using AT commands.

Serial Connection between the HC-05 and Tx/Rx0 for Program Uploads

Most Arduino microcontrollers are shipped with a small program called a ‘bootloader’ already installed.  This small program is only active for a few seconds after a board reset occurs, and its job is to detect when a new program is being uploaded.  If the bootloader sees activity on whatever serial port it is watching, it writes the incoming data into program memory and then transfers control to the user program.  The stock Arduino bootloader only monitors Tx/Rx0 for this; activity on other ports (specifically Rx1 in my case) will be ignored and program uploads will fail.  After the HC-05 has been initially configured via AT commands over the PC-to-Arduino-to-HC-05 serial links, the connection from the HC-05 to the Arduino must be changed so that PC-to-HC-05 data transferred over the Bluetooth link arrives at the Arduino’s Rx0 port, where the stock bootloader will see it and write it to the Arduino’s program memory.  This minor point wasn’t at all clear (at least not to me) in the various tutorials, so I wasted a LOT of time trying to figure out why I couldn’t get the last part of the puzzle to fit – ugh!
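
For what it’s worth, the upload itself is just an avrdude invocation aimed at the Bluetooth COM port, something like the line below (the COM port number and sketch name are placeholders).  The ‘wiring’ programmer type speaks the STK500v2 protocol – the same ‘stk500v2’ that shows up in the timeout error message later in this post.

avrdude -c wiring -p atmega2560 -P COM5 -b 115200 -D -U flash:w:MySketch.hex:i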

Shown below is my Fritzing diagram for the final configuration of my test setup, showing the Tx/Rx lines changed from Tx/Rx1 (pins 18/19) to Tx/Rx0 (pins 1/0). The HC-05 STATE output is connected to Arduino reset via a 0.22uF capacitor, with resistors to form a simple one-shot circuit.  The STATE line goes LOW (after reconfiguration via the AT+POLAR=1,0 command) which causes a momentary LOW on the Arduino reset line.  This is the magic required to upload programs to the Arduino wirelessly. When the Bluetooth connection is terminated, the STATE line goes HIGH again and the Arduino end of the now-charged capacitor jumps to well above 5V. The diode shown on the diagram clamps this signal to within a volt or so above +5V to avoid damage to the Arduino Reset line when this happens.  This diode isn’t shown on any of the other tutorials I found, so it is possible the Arduino Reset line is clamped internally (good).  It’s also possible it isn’t protected, in which case not having this diode will eventually kill the Arduino (bad).

HC-05 wired for remote program upload. Note that the Tx & Rx lines have been moved from Tx/Rx1 to Tx/Rx0

Testing

The first thing I did after configuring the HC-05 (using the above AT commands) was to see if I could still connect to and communicate with it over Bluetooth from my laptop.  I used RealTerm, although any terminal program (including the Arduino IDE serial monitor) should do.  The very first thing that happened was that I had to re-pair the laptop with the HC-05, and the name given by the HC-05 was markedly different, as shown in the captured pairing dialog.

Pairing dialog on my Dell XPS15 9570 laptop

The next thing was to see if I could get characters from my BT serial connection through to my Arduino serial port.  After fiddling around with the baud rates for a while, I realized that now I had to change the BT serial terminal baud rate from 9600 to 115200, and the Arduino-to-HC-05 baud rate from 38400 (the default ‘Command’ mode rate) to 115200.  Once I did this, I could transmit characters back and forth between RealTerm (connected to the HC-05 via Bluetooth) and my Visual Studio/Visual Micro setup (connected to Arduino via the wired USB cable) – yay!

For the next step in the testing, I needed to remove the hard-wired USB connection and power the Arduino from an external power source.  When I did this by first removing the USB connector (thereby removing power from the HC-05) and then plugging in external power, I noticed that the HC-05 was no longer connected to my laptop (the HC-05 status LED was showing the ‘fast blink’ status, and my connection indicator LED was OFF).  I checked in my BT settings panel, and the HC-05 (now announcing itself as ‘H-C-2010-06-01’) was still paired with my laptop, but just transmitting some characters from my RealTerm BT serial monitor did not re-establish the connection.  However, when I changed the port number away from and then back to the BT COM port, this did re-establish the connection; the HC-05 status LED changed to the 2-blinks-pause-2-blinks cycle, and my connection LED illuminated.

So, now I connected the output of my STATE line one-shot circuit to the Arduino reset line and changed my VS2017/VM programming port from the wired USB port to the BT port (interestingly, it was still shown as ‘HC-05’ in Visual Studio/Visual Micro).  After some initial problems, I got the ‘Connected’ status light, but the upload failed with the error message “avrdude: stk500v2_getsync(): timeout communicating with programmer” and the communication status changed back to ‘not connected’.

At this point I realized I was missing something critical, and yelled (more like ‘pleaded’) for help on the Arduino forum.  On the forum I got a lot of detailed feedback from very knowledgeable users, most notably ‘dmjlambert’.  Unfortunately dmjlambert was ultimately unsuccessful in solving the problem, but he was able to validate that the steps I had taken so far were correct as far as they went, and ‘it should just work’.  To paraphrase the Edison approach to innovation, “we didn’t know what worked, but we eliminated most potential failure modes”.  See this forum post for the details.

After this conversation (over several days), I decided to put the problem down for a few days and do other things, hoping that a fresh look with a clear head might provide some insight.  A few days later when I came back to the project, I ran some tests suggested by dmjlambert to verify that the connection to the Arduino RESET pin via the 0.22uF capacitor did indeed reset the Arduino when the STATE line transitioned from HIGH to LOW.  To do this I created a modified ‘Blink’ program that blinked 10 times rapidly and then transitioned to a steady slow blink.  Using this program I could see that the Arduino did indeed reset each time a Bluetooth connection to the HC-05 was established.
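
The modified ‘Blink’ test program is trivial; mine amounted to something like this (reconstructed here from memory):

// Reset detector: 10 fast blinks right after reset, then a steady slow blink.
// Any reset pulse on the RESET line restarts the fast-blink burst.
const int LED = 13;  // built-in LED on the Mega 2560

void setup()
{
  pinMode(LED, OUTPUT);
  for (int i = 0; i < 10; i++)  // fast burst = "I just reset"
  {
    digitalWrite(LED, HIGH); delay(100);
    digitalWrite(LED, LOW);  delay(100);
  }
}

void loop()  // slow steady blink = normal running
{
  digitalWrite(LED, HIGH); delay(1000);
  digitalWrite(LED, LOW);  delay(1000);
}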

So, the problem had to be elsewhere, and about this time I realized I was assuming (aka ‘making an ass out of you and me’) that the program upload data being received over the Bluetooth link was somehow magically making it to the bootloader program.  This had been nagging at me the whole time, but I ‘assumed’ (there’s that word again) that since this problem had never been mentioned in any of the tutorials or even in the responses to my forum posts, it must not be a problem – oops!

Anyway, to make a long story short, I moved the HC-05 – to – Arduino connection from Rx/Tx1 to Rx/Tx0 and program uploads started working immediately – YAY!!

I went back through the tutorials I had been following to see if I had missed this magic step, and didn’t find any references to moving the serial connection at all.  So, if you are doing this with a UNO, you’ll need to move the serial connection from whatever pins you were using (via SoftwareSerial) to Rx/Tx0 as the last step.  Likewise, if you are using an Arduino Mega or another controller that supports additional hardware serial ports, as I did, you’ll have to move the connection from Rx/Tx-whatever to Rx/Tx0 as the last step.

This tutorial was put together in the hope that it might help others who are interested in using the HC-05 Bluetooth module for remote program uploads to an Arduino-compatible microcontroller, and maybe save them from some of the frustration I experienced.  Please feel free to comment on this post, especially if you see something that I got wrong or missed.

13 Aug 2019 Update:

Here’s a short video showcasing the ability to program an Arduino Mega 2560 wirelessly from my Windows 10 PC using the HC-05 Bluetooth module.

At the start of the video, the HC-05 status light is blinking rapidly, signalling the ‘No Connection’ state.  Then, at about 2 seconds, the light changes to the slow double-blink ‘Connected’ state, the yellow LED on the Mega blinks OFF & then ON again, signalling that the Mega has been reset and is now awaiting program upload, followed immediately by rapid blinking as the new program is uploaded to the Mega’s program memory.  During the upload, the HC-05 status LED continues to show the slow double-blink ‘Connected’ status.  Then, at about 18 seconds, the program upload terminates and the HC-05 returns to the ‘No Connection’ state.

The small white part on the green perf-board is the 220 nF capacitor.  The other two modules on the perf-board are a MPU6050 IMU and a high-side current sensor.

Stay tuned!

Frank

 

25 October 2021 Update:

I came back to this post to refresh my memory when trying to initialize and use a new HC-05 module for my new Wall-E3 project, and failing badly. I finally got something to work, but only after screwing around a lot. I realized I didn’t have a good handle on what mode the HC-05 was in – even though the onboard LED changes behavior to indicate the mode. So, here is a short video showing the LED behavior for the ‘disconnected’ and ‘connected’ modes.

HC-05 LED indications for ‘connected’ and ‘disconnected’ modes

In the above video, the HC-05 starts out in the normal power-on ‘disconnected’ state (rapidly flashing LED). Then after a few seconds a BT connection is established, and the LED behavior changes to ‘connected’ (two short blinks and a long pause). Then after a few more seconds the connection is dropped and the LED behavior changes back to ‘disconnected’ (rapidly flashing).

Wall-E2 Motor Controller Study

Posted 19 May 2019

Over the last few weeks I have noticed that Wall-E2, my wall-following robot, seems to be suffering from a general lack of energy.  I’ve been doing some testing involving a series of 45 degree S-turns, and Wall-E2 is having trouble moving at all, and when it does move, it does so very slowly.  At first I thought this might be due to a low battery condition, but it exhibits the same behavior even with a fully charged battery pack.  Then I thought it might be the battery pack itself dying, but now that I can monitor Wall-E2’s operating current and voltage ‘on the fly’ it is apparent that the battery pack is healthy and delivering rated power – it’s just that the power doesn’t seem to be getting to the motors.  About this same time I began noticing that the cheap L298N motor drivers I have been using were getting pretty hot; enough to burn my finger, and enough to cause a ‘burning insulation’ smell.

So, I decided to go back to the drawing board and see what else is out there in terms of ‘better’ (whatever that means) motor and motor driver technology. As usual I started with a broad internet search and then narrowed down to specific technologies and modules as I learned more.  What I learned right away is that the L298N technology is notoriously inefficient, as it uses a bipolar transistor H-bridge which pretty much guarantees a 1-2V voltage drop between the motor power supply and the motors themselves.  This technology has been superseded by MOSFET-based H-bridge modules with much lower voltage drops and commensurately higher efficiencies.  In fact, most of the modules I found no longer require heat sinks due to the much lower power dissipation.

VNH5019 Motor Driver Carrier

The Pololu VNH5019 Motor Driver Carrier is a single-channel motor driver based on the STMicroelectronics VNH5019 chip, with the following major features:

  • Relatively high cost compared to other products – about $25 ea.
  • 5.5 – 24V operating range. This matches well with Wall-E2’s battery output range of 7-8.4V.
  • Very low Rds(ON) – less than 100mΩ.  This means almost no voltage drop at the typical motor operating current of 100-200mA, and only about 0.2V at 2A stall current, or 0.4W power dissipation, worst case.
  • Peak operating current of 12A – way more than I’ll need.
  • There is also a current sensing output, but it’s only accurate during the active drive portion of the PWM waveform. Although I could probably deal with this by timing measurements to coincide with the drive cycle, I probably won’t bother, as I already have independent current measurement capability.
  • Very simple operation – essentially identical to the L298n scheme.

There are, however, two major drawbacks to this option; the first is that the modules are single-channel only, so I either need to use four (one for each motor) or run two motors in parallel. The second is that they are much more expensive (like an order of magnitude) than the L298n driver modules.

TB67H420FTG Dual/Single Motor Driver Carrier

Pololu’s TB67H420FTG Dual/Single Motor Driver Carrier uses the Toshiba TB67H420FTG part, with the following features:

  • Single or dual channel operation.  In dual motor mode, each channel is limited to 1.7A, which should be OK for Wall-E2’s motors.
  • Minimum motor drive voltage is specified as 10V.  This is too high for my 2-cell LiPo setup that tops out at 8.4V.  It’s still possible it will work OK down to 7V, but only experimentation will tell.

Well, this is a much cheaper part ($10 vs $25) and each part can potentially handle twice the number of motors. Nominally $20 for four motors vs $100.  However, the minimum motor voltage of 10V is probably a deal breaker.  Besides, if I parallel two motors on each VNH5019 module, the price differential drops to 2.5:1 vs 5:1 and I don’t have to worry about the minimum motor supply voltage.

TB9051FTG Single Brushed DC Motor Driver Carrier

The Pololu TB9051FTG brushed DC motor driver is based on Toshiba’s TB9051FTG part, and has the following features:

  • Low price – $8.49 (single channel)
  • Compatible motor supply voltage range (4.5 – 28V).
  • Can deliver 2.6A continuously, which should be much more than I’ll ever need
  • 1″ x 1″ form factor, so fitting four modules onto Wall-E2’s chassis shouldn’t be too much of a problem

According to the TB9051FTG’s datasheet, the Rds(ON) has a max of 0.45Ω, so very little IR drop & power loss.  This could make a very nice setup, even if I have to use four modules.  This will still cost much more than my current dual L298n setup, but well worth it for the much lower voltage drop.

Dual TB9051FTG Motor Driver Shield for Arduino

This board is a dual-channel version of the above single-channel part, sized and laid out as an Arduino compatible ‘shield’ module. Pertinent features:

  • $19.95 ea – essentially the same per-channel price as the single channel version
  • Same voltage (4.5 – 28V) & current (2.6A continuous) range per channel
  • Motor driver control pins broken out on one side so can be used without an Arduino
  • Size is 1.9″ x 2.02″ or about 4 times the area of the single-channel boards. This is undoubtedly due to the requirement to physically match the Arduino Uno/Mega pin layout.

This isn’t really a good option for my Wall-E2 project, as I can fit four of the single channel modules in the same footprint as one of these boards – effectively making a 4-channel version in the same footprint.

Dual MAX14870 Motor Driver for Raspberry Pi (Partial Kit)

This module uses the Maxim MAX14870 part on a board physically compatible with the later versions of the Raspberry Pi microcomputer.  Pertinent parameters:

  • Reasonable cost – $12.75 means the per-channel cost is around $6.50/channel.
  • Small size:  0.8″ x 1.7″, so all four channels would fit in a 1.6″ x 1.7″ space. This is significantly smaller than the 2 x 2″ space requirement for 4 ea TB9051FTG modules
  • Same 4.5 – 28V motor supply range, with a 1.7A continuous per-channel current rating.
  • Low Rds(ON) – less than 0.3Ω

This unit looks very promising; its small size and form factor, combined with dual motor control, could make it the winner in the ‘replace the L298N’ project.

Adafruit DRV8833 DC/Stepper Motor Driver Board

This is a dual H-bridge for higher current applications than the non-current-limiting Featherwing can handle.  It can handle one stepper or two DC brushed motors and can provide about 1.2A per motor, and expects a motor drive input of 2.7-10.8VDC.  From the documentation:

  • Low MOSFET ON resistance (approx 360 mΩ)
  • 1.5A RMS output current, 2A peak per bridge
  • Power supply range 2.7 – 10.8 V
  • PWM winding current regulation and Current Limiting
  • Overcurrent protection, short-circuit protection, undervoltage lockout, overtemp protection
  • Reasonable per-channel cost; $5 per module, but each module will handle two motors – nice!

Adafruit DRV8871 Single Channel Motor Driver:

This is a single H-bridge for even higher current applications.  From the documentation:

  • 6.5V to 45V motor power voltage
  • Up to 5.5V logic level on IN pins
  • 565mΩ Typical RDS(on) (high + low)
  • 3.6A peak current
  • PWM control
  • Current limiting/regulation without an inline sense resistor
  • Overcurrent protection, short-circuit protection, undervoltage lockout, overtemp protection
  • Higher per-channel cost; $8 per module, so $32 for all 4 motors

I was thinking that I might be able to use just two of these modules and parallel the left motors on one and the right motors on the other; this would give me 4 motor drives for $16, so comparable to the DRV8833.  However, even if I use one per channel, the 4 units would still occupy a smaller footprint than the current setup with two L298N driver modules (and even less if I stack them vertically)

Adafruit Featherwing Motor Driver:

The Adafruit Featherwing motor driver kit is intended to plug into Adafruit’s ‘Feather’ microcontroller product, so to integrate it into my Mega 2560 project I’ll need to make sure I interface to the proper pins, etc.  The module uses I2C for communications and motor control, so the low-level drivers for this module will be quite different than the ones currently in use for my Wall-E2 robot.  Looking over the Adafruit documentation, I get the following:

  • Motor Power and Motor outputs:  these are all on 2-pin terminal blocks, so no changes
  • Logic Power Pins:  The Featherwing requires 3.3V & GND on the 2nd & 4th pins on the ‘long’ header side, counting from the end with three terminal blocks.
  • I2C Data Pins: Last two pins on the ‘short’ header side
  • I2C Addressing:  The default FeatherWing address is 0x60, so it should be OK (the simple bus-scanner sketch shown after this list can confirm there are no conflicts). The Wall-E2 project currently has four devices connected to the I2C bus:
    • IR Homing Teensy uC slave at addr = 0x08
    • FRAM module at addr = 0x50
    • DS3231 RTC at addr = 0x68 (fixed)
    • MPU6050 IMU module at addr = 0x69
  • Smaller voltage range (4.5 – 13.5V), but OK for my 2-cell LiPo application with max V <= 8.5V
  • Lower max current rating – 1.2A/channel; this could be a problem, but I don’t think so; the motors draw less than 0.5A, and won’t ever need 1.2A except in stalled rotor configuration, where it doesn’t matter. However, there is no current limiting on the Featherwing, so it is entirely possible to burn out a channel if the limit is exceeded!
  • Reasonable cost on a per-channel basis. The board costs $20, but it will drive all four motors, for a per-motor cost of only $5 – nice!
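
Here’s the bus-scanner sketch mentioned above – the classic Wire-library I2C scanner, handy for verifying that the FeatherWing’s 0x60 doesn’t collide with anything already on the bus:

#include <Wire.h>

// Classic I2C bus scanner - prints the address of every responding device
void setup()
{
  Wire.begin();
  Serial.begin(115200);
  for (uint8_t addr = 1; addr < 127; addr++)
  {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0)  // 0 = device ACKed its address
    {
      Serial.print("Device found at 0x");
      Serial.println(addr, HEX);
    }
  }
}

void loop() {}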

So, I ordered several of each of the above parts, with the idea of running some simple tests of each and picking the one that best suits my application.

Adafruit Featherwing Motor Driver Testing:

The first unit I tested was the Adafruit FeatherWing.  I set up a spare Mega 2560 as the controller. This turned out to be quite simple, as there are only four connections; 3.3V, GND, SCL and SDA.  I used my DC lab power supply to provide 8.0V, and used two spare JGA25-370 geared DC motors connected to M1 and M3 for the test. As the following short video shows, this worked great.
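
The test code itself is minimal; something like the sketch below (using Adafruit’s Motor Shield V2 library, which also drives the FeatherWing) ramps a motor on terminal block M1 up and down continuously. The step size and delays are illustrative values, not necessarily the ones I used:

#include <Adafruit_MotorShield.h>

Adafruit_MotorShield AFMS = Adafruit_MotorShield();  // default I2C addr 0x60
Adafruit_DCMotor* motor1 = AFMS.getMotor(1);         // terminal block M1

void setup()
{
  AFMS.begin();          // default 1.6KHz PWM frequency
  motor1->run(FORWARD);
}

void loop()
{
  for (int spd = 0; spd <= 255; spd += 5)    // ramp up...
  {
    motor1->setSpeed(spd);
    delay(100);
  }
  for (int spd = 255; spd >= 0; spd -= 5)    // ...and back down
  {
    motor1->setSpeed(spd);
    delay(100);
  }
}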

As a side-effect of this test, I noticed that one of the JGA25-370 210RPM gear motors seemed to be behaving a little differently than the other.  After some experimentation, I discovered that one motor required a speed control input of almost half the full-speed value to even start turning, while the other one seemed to respond correctly to even low level speed drive values.  Some more investigation revealed that the ‘problem’ motor was definitely stiffer than the ‘good’ one, and when the problem motor was being driven from my lab power supply, the power supply output current went up to almost 1A until the motor started moving and then dropped down to under 100mA while the motor was actually running. The ‘good’ motor current stayed under 100mA for the entire range of speeds.

I continued by connecting all four JGA25 gear motors and noticed that only one was really functioning properly; the other three all had difficulty at low commanded speeds.  To investigate further, I added an Adafruit INA219 high-side current sense module to the circuit so I could record both the current and power for each step on each channel.  I ran all 4 motors in sequence, and then plotted the results in Excel as shown below (a sketch of the INA219 logging code appears after the plots).

Testing all 4 JGA25-370 motors

Current and Power vs time for 4 JGA25-370 motors.  Note the current/power spikes when the ‘sticky’ motors turn ON
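
The INA219 measurement code boils down to periodically reading the current and power registers and printing tab-separated values for Excel import; here’s a minimal sketch of the idea using Adafruit’s INA219 library (the sample interval is an illustrative value):

#include <Adafruit_INA219.h>

Adafruit_INA219 ina219;  // default I2C address 0x40

void setup()
{
  Serial.begin(115200);
  ina219.begin();
  Serial.println("mSec\tmA\tmW");  // column headers for the Excel import
}

void loop()
{
  Serial.print(millis());                Serial.print('\t');
  Serial.print(ina219.getCurrent_mA()); Serial.print('\t');
  Serial.println(ina219.getPower_mW());
  delay(100);  // ~10 samples/sec is plenty for these plots
}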

After some posts on the Adafruit forum, I was able to troubleshoot the motor problems to some extent. I was able to free up one of the troublesome motors by exercising the gear train, but on two other motors it became obvious that it was the motors themselves, not the gear train, that were sticking.  A 50% (or 75%, depending on one’s threshold) failure rate isn’t very good.  One of the posters on the Adafruit forum suggested I take a look at the metal gear motors offered by Pololu, and I ordered some of their ’20D’ motors for evaluation.

After testing the metal gear motors, I tried some of my stock of plastic motors, as shown in the following photo and Excel plot.

two ‘red cap’ and two ‘black cap’ motors

Current and Power vs time for two ‘red cap’ and two ‘black cap’ motors

All four motors behaved reasonably well, but as shown in the above plot, the ‘red cap’ motors drew a LOT more current than the ‘black cap’ ones. This was a mystery to me until I measured the coil resistance of the two types.  The ‘red cap’ motors exhibited coil resistances of about 1Ω, whereas the ‘black cap’ motors were more like 5.7Ω.  I surmise that the ‘red cap’ motors are for 3V applications and the ‘black cap’ ones are for 6-12V.

After this, I decided to test the motors currently installed in my Wall-E2 robot, so I simply placed the prototype board containing the Featherwing and the Mega 2560 controller on top of the robot and rerouted the motor wires from the L298N controllers to the Featherwing. The following photo and plot show the setup and the results.

proto board on top of Wall-E2

Current and Power vs time for the four ‘red cap’ motors currently installed in Wall-E2

Of the four ‘red cap’ motors currently installed in Wall-E2, only two (M1 & M4 above) exhibited reasonable performance – the other two were basically inoperative. Some closer inspection revealed that M3 exhibited essentially infinite coil resistance, i.e. an open circuit.  Apparently I had managed to burn out this coil entirely, probably because the ‘red cap’ motors were intended for lower voltage applications.

Adafruit DRV8871 Single Channel Motor Driver Testing:

I removed the Featherwing module and replaced it with Adafruit’s DRV8871 driver module. This module uses two PWM inputs for drive and direction control (vs I2C for the Featherwing) and is very simple to hook up and use (a minimal hookup sketch appears after the plots below).  After getting it hooked up I ran a Pololu ’20D’ metal gear motor and a ‘good’ JGA25-370 over one complete up/down speed cycle, with no mechanical load. This produced the plots shown below:

Pololu 20D metal gear motor with DRV8871 Motor Driver

JGA25-370 metal gear motor with DRV8871 Motor Driver
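
Driving the DRV8871 really is that simple: PWM one input while holding the other low, and swap the two to reverse. Here’s a minimal sketch (the pin assignments and speeds are illustrative, not necessarily the ones I used):

// DRV8871 two-pin PWM control
const int IN1 = 9;   // PWM-capable pins on the Mega 2560
const int IN2 = 10;

void setup()
{
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
}

// speed: 0-255; forward: direction flag
void setMotor(int speed, bool forward)
{
  if (forward)
  {
    analogWrite(IN2, 0);      // hold IN2 low...
    analogWrite(IN1, speed);  // ...and PWM IN1 for forward drive
  }
  else
  {
    analogWrite(IN1, 0);      // swap the pin roles for reverse
    analogWrite(IN2, speed);
  }
}

void loop()
{
  setMotor(128, true);   delay(2000);  // half speed forward
  setMotor(0, true);     delay(500);   // coast
  setMotor(128, false);  delay(2000);  // half speed reverse
  setMotor(0, false);    delay(500);
}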

Since I had already tested the same JGA25-370 motor with the Featherwing driver, I thought I would get similar results.  However, the earlier Featherwing test produced a significantly different plot, as shown below

JGA25-370 metal gear motor with Featherwing driver

Why the differences?  The coarseness of the DRV8871 plot versus Featherwing may well be because the DRV8871 uses the Arduino PWM outputs at the default frequency of around 980Hz, and I suspect the Featherwing PWM frequency is much higher.  However, this doesn’t explain the dip in both power & current at the peak of the speed curve.

 

Frank

 

Custom B-Ball Face Mask Project

Posted 14 May 2019

In March of this year, I suffered yet another broken nose while playing basketball.  Off to the emergency room where, following the normal interminable wait, I was told “yep – you have broken your nose – here’s a referral to an ENT guy – have a nice day!”  The next day I went to the ENT guy, who said “yep – you have a broken nose, and there’s nothing I can do for you; you need an ‘open reduction’ (aka ‘nose job’), and here’s the name of the plastic surgeon I recommend”.  At this next appointment Dr. Bapna (the plastic surgeon) said “yep – you have a broken nose, and you’re going to need an open reduction.  It’s not going to be a whole lot of fun, but I should be able to get you squared away (literally)” (or words to that effect, anyway).

So, in early April I endured a ‘functional rhinoplasty’ (aka nose job), and indeed it wasn’t much fun.  Fortunately I had learned from an earlier rotator cuff operation that I could rent a powered recliner on a short-term basis, and this at least made the convalescence a little less terrible.

In the subsequent post-op appointments with Dr. Bapna, he made it quite clear that while the operation was an unqualified success, another broken nose while playing basketball might not be repairable.  He strongly recommended that I either give up basketball (and what is a 70-year old man doing playing b-ball anyway?) or wear a protective face mask.  Since I wasn’t really interested in giving up round-ball, I started investigating face mask options.

Some research showed that a number of clear face masks are available on Amazon and other retail outlets, and there were a few firms advertising custom face masks.  When I mentioned this to Dr. Bapna, he told me that a local prosthetic business (Capital Prosthetic and Orthotic Center, Inc) also does custom face masks (who knew?).  Apparently the process involves making a plaster impression of the face, and then using the impression as the mold for a custom polycarbonate mask.  While I was researching the possibilities, it occurred to me that I might be able to use the 3D modelling knowledge I had gained from an earlier project (creating a duplicate of a chess piece) to build a 3D model of my face, and then print a full-size plastic face replica to use as the basis for a polycarbonate mask.  This would eliminate the need to make a plaster impression, and might open up a new technique for custom face mask fabrication.

So, I talked my lovely wife into helping me make a 3D representation of my head, using the same Canon SX260 HX digital camera I used for my chess piece replication project.  It took us a couple of iterations to get enough good shots, but soon I had sucked 185 photos into Meshroom and it was busily cranking away to create the 3D model.

Except when it crashed.  I had experienced this problem during the chess piece project, and had solved it by finding and removing the problem photos, usually a shot that was badly out of focus. So, I found and removed the photo pointed to by the crash log, and restarted Meshroom’s processing.

And it crashed again, and kept crashing even as I removed more and more photos.  Moreover, there was nothing apparently wrong with the photos that triggered the crashes.

After a LOT of research on the Meshroom GitHub site, I finally ran across a post where one responder noted that Meshroom-2019.1.0-win64, the version I was using, had ‘issues’ with photos that weren’t exactly perfect, and recommended downgrading to the 2018.1.0 version.

So, I downgraded to 2018.1.0, and voila – Meshroom processed all 185 photos without complaint and produced a startlingly accurate 3D model of my head, shown below

Screenshot of Meshroom 2018.1.0. From left to right; input photos, selected photo for comparison, textured 3D model

Leveraging my experience with the chess piece project, I immediately sucked the 57+ MByte texturedMesh.obj output from Meshroom into Microsoft 3D Builder, and set about removing all the background artifacts, resulting in the revised model shown in the screenshot below:

Model in Microsoft 3D Builder, after removal of background artifacts

If you are doing the sort of 3D modelling project that involves lots of photos and 50+ MByte object files, I highly recommend Microsoft 3D Builder; it seems to be one of those little-known, unappreciated gems in the Microsoft ecosystem.  Using 3D Builder was like expecting a tricycle and actually getting a 12,000HP supercar; it not only accommodated my 57+ MByte .OBJ file, it didn’t even seem to be breathing hard – more like “Yawn – is that all you’ve got?”

After removing all the background artifacts, I exported the model from 3D Builder as a .3MF file which, I was delighted to see, is compatible with Prusa’s Slic3r PE, as shown below:

The .3MF file from 3D Builder imported into Slic3r PE

I fired up my Prusa MK3 printer and printed out the model, and got the following off the printer:

Very small scale version of my 3D head model. 0.5mm mechanical pencil provided for scale

Then I scaled the model up a bit and reprinted it, getting the following model:

 

Once I was convinced the model was reasonably accurate, I set out to print a full-sized model.  To get the proper scale multiplier, I measured the distance between the outer rims of my eye sockets and compared this to the same measurement on my mid-scale model. This gave me a scale factor of almost exactly 3.5, so I used this to print the full-scale model. The full-scale model just barely fit on my Prusa MK3/S print bed, and took an astounding 24 hours to print!  Also, this is the only model I’ve ever printed that actually cost a non-trivial amount of money – about $8 in filament.

Full scale print setup. Note the print time of almost 24 hours, and the used filament – over 100 m/$8 in cost – wow!!

Partially finished model, showing the internal structure (5% fill)

Finished print

With the finished 3D model, it should be possible to create the desired custom face mask directly, without having to take a plaster cast impression of my face.  However,  to verify that the full scale model was in fact a faithful representation of my face/nose structure, I decided to make a plaster cast of the printed model, and then compare the plaster cast to my actual face.  This is sort of the backwards process used by a prosthetics house to create a custom face mask; they make a plaster cast using the patient’s face, and then use the plaster cast as the model for the final product.

Plaster cast impression using the 3D printed model instead of my face

Plaster cast separated from the 3D model

Side view of plaster cast on my face, showing that the 3D model is an accurate representation.

Summary:

All in all, this project was a blast; I was able to create an accurate 3D model of my face, which should be usable for the purpose of creating a custom face mask for me so I can go back to abusing my body playing basketball.  However, I have to say that if I added up all the time and effort required to take all the photos, deal with Meshroom’s idiosyncrasies, actually print the full-scale model (24 hours, $8), and still have to take the plaster cast impression to verify the model, I might have been better off to just get the plaster cast impression made by a professional.  OTOH, I learned a lot and had loads of fun, so…

Stay tuned!

Frank