Hearing Aid AGC Testing

Posted 11 March 2023

I have been wearing hearing aids for quite some time, compliments of a lifetime around airplanes, long before hearing protection became a thing. I recently got a set of Jabra ‘Enhance 200’ aids via direct order, and I like them very much, EXCEPT I have noticed that my perceived hearing acuity seems to vary quite distinctly over a period of a minute or two. I first noticed this when I would turn on a water tap while washing dishes or preparing to take a shower. I would turn on the tap, and then 20-30 sec later the perceived sound of the water coming out of the tap would increase significantly – even though the water flow rate had not changed. Later, in a social setting (bridge club), I would experience significantly lower and higher perceived speech volumes – very frustrating! I hypothesized that the aids employ an AGC (Automatic Gain Control) of some sort that gets triggered by an initial loud noise, reducing the gain (and my perceived noise/speech level), and then 10-50 sec later the gain would go back up again.

Just recently though, another thought popped into my head – what if the perceived volume changes aren’t due to some property of the hearing aids, but instead are a physiological feature of my current hearing/understanding processes? Hmm, I know I have some issues with my Eustachian tubes blocking and unblocking, and I also know that on occasion the volume changes are correlated with a ‘blocked Eustachian tube’ feeling, so this isn’t a completely crazy hypothesis.

So, how to distinguish ‘meat space’ audio response from hearing aid responses? I decided to set up an experiment in which I could expose one or both of my hearing aids to a volume ‘step function’ and monitor the output for AGC-like responses (an initial rapid drop in output, followed by an eventual return to normal). Something like the following block diagram:

The idea is that the Teensy would produce an audio output in the human audible range that can be controlled for amplitude and duration. The audio would be presented to the hearing aid, and the amplified output of the hearing aid would then be captured with an external microphone connected back to the Teensy. A plot of captured amplitude vs time, with a ‘step function’ input should indicate if the hearing aid is employing an AGC-like response function.

After some fumbling around and searching through the posts on the Teensy forum, I ran across this post describing how to create a simple 440Hz sinewave output from the DAC pin. The original poster was having problems, but after Paul Stoffregen added the ‘magic sauce’ (the ‘AudioMemory(10);’ line), everything worked fine. When I copy/pasted the poster’s code and added the line, I got the nice 440Hz waveform shown below – yay!!
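For reference, the whole thing boils down to just a few lines. Here’s a minimal sketch along those lines (reconstructed from memory rather than copied from the forum post, so the details may differ):

```cpp
#include <Audio.h>

AudioSynthWaveformSine sine1;                  // 440Hz test tone source
AudioOutputAnalog      dac1;                   // Teensy 3.2 DAC pin (A14)
AudioConnection        patchCord1(sine1, dac1);

void setup()
{
  AudioMemory(10);        // the 'magic sauce' - allocates buffers for the audio library
  sine1.frequency(440);
  sine1.amplitude(0.9);   // 0.0 - 1.0 of full scale
}

void loop() {}
```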

The next step is to hook the DAC output to a small speaker, so I can drive the hearing aid. When I tried hooking a speaker directly to the DAC output, it clipped badly – oops! Fortunately, I remembered that long ago I had purchased a ‘prop shield’ for the Teensy LC/3.1/3.2 controllers, and this contains a 2W audio amp whose input connects directly to the DAC output – nice! So I dug around and found the part, soldered on headers, and plugged it on top of the Teensy.

15 March 2024 Update:

So, yesterday SpaceX launched IFT3 (Integrated Flight Test #3), consisting of ‘Super-Heavy Booster 10’ and ‘Ship 28’. The combination was the largest, most powerful rocket ever launched, by a fair margin. The upper stage (Starship 28) flew from Boca Chica, Texas to the Indian Ocean near Australia in about 49 minutes – wow! At that point, the upper stage was the largest object ever launched into space – Wow Again!

OK, back to reality. I managed to get the prop shield working – after the usual number of false starts and errors. One ‘gotcha’ was that I hadn’t realized the 2W audio amp was a Class-D type, which means it switches on and off at a very high frequency (100KHz or so) – way above the human hearing range – with the audio modulated onto that switching signal. When it is connected to a speaker or other audio transducer, the transducer acts like a low-pass filter and all that comes out is the audio; this is a really neat trick, but it means that the audio amp output signal is basically impossible to look at directly with a scope – oops. So, I got a pair of MEMS microphone breakout boards from SparkFun and used the microphone to turn the speaker audio back into an electrical signal that I could view with the scope. Here’s the setup:

Teensy ‘propshield’ mounted on Teensy 3.2. Note the SparkFun MEMS microphone suspended over the speaker

This worked great, and I was able to verify that the speaker audio output was a reasonable replica of what the T3.2 DAC was putting out.

The next step was to feed the MEMS mic output back into a Teensy 3.2 analog input so I could measure the signal amplitude (A4 in the above block diagram). Then I would modify the Teensy program to deliver a 1-2 sec ‘pulse’ of high amplitude audio to the hearing aid, followed by a constant low-amplitude signal. The measured amplitude of the hearing aid output (as received by the MEMS mic) would be monitored to see if the hearing aid exhibited AGC-like behavior.

However, as I started setting this up, I realized I would have to solder yet another flying lead to the top of the prop shield, as the Teensy 3.2 pins were no longer accessible directly. So I decided to fix this problem by adding female headers to the top side of the prop shield to allow access to all Teensy pins. The result is shown in the ‘before/after’ photos below:

19 March 2024 Update:

I soon discovered that my plan for routing the MEMS mic back to a Teensy analog input and just averaging the results over time wasn’t going to work, as a glance at the MEMS output to the Teensy (shown below) would make quite obvious:

MEMS output with 1000Hz sinewave input to speaker. Average value is 3.3V/2

The average value for this signal is just the DC offset, which will always be the same. The only thing that varies is the amplitude – not the average value – oops!

OK, so the obvious work-around to this problem would be to put a half-wave or full-wave rectifier circuit between the MEMS output and the Teensy so the analog input could measure the half- or full-wave amplitude instead of the average. But, I really didn’t want to add any more circuitry, and besides I have this entire 72MHz computer at my beck and call – surely I can get it to emulate a half- or full-wave rectifier?

So, after the usual number of screwups, I got this working reasonably well – at least enough for a ‘proof-of-concept’. The basic idea is to take analog input readings as fast as possible, use the resulting values to compute the average value (in A/D units – not voltage), and then take the absolute value of the difference between each measurement and the average value – this essentially implements a full-wave rectifier circuit in software.
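Here’s a minimal sketch of the idea (the pin assignment, smoothing constant, and sample timing are illustrative assumptions, not necessarily the values in my actual program):

```cpp
const int   MIC_PIN = A4;      // assumed analog input pin (A4 in the block diagram)
const int   BLOCK   = 2000;    // 0.1 sec worth of samples at ~20KHz
float       avg     = 512.0f;  // running estimate of the DC offset (10-bit midscale)
const float ALPHA   = 0.001f;  // slow smoothing constant for the DC average

void setup()
{
  Serial.begin(115200);
}

void loop()
{
  float sum = 0.0f;
  for (int i = 0; i < BLOCK; i++)
  {
    int raw = analogRead(MIC_PIN);     // 0-1023
    avg += ALPHA * (raw - avg);        // slowly track the DC offset
    sum += fabsf(raw - avg);           // software 'full-wave rectifier'
    delayMicroseconds(50);             // ~20KHz sample rate
  }
  Serial.println(sum / BLOCK);         // one smoothed amplitude value per 0.1 sec
}
```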

The following data and Excel plot show the results for the waveform shown above:

The above data was collected by sampling the input at about 20KHz (50 µsec). As can be seen from the above, the average value is a constant 511.64 (out of a zero to 1023 scale), and the actual measured values varied from about 264 to about 755. Here are Excel plots of both the measured input and the calculated amplitude:

So it looks like this idea will work. For the intended application (determining if my hearing aids exhibit AGC-like behavior), I can perform a running average of the full-wave rectified signal using something like a 0.1 sec interval (2000 samples). That should accurately capture the onset and release of the 1-sec HIGH tone, and have plenty of resolution to capture any AGC-like sensitivity increase over a longer time – say 30 sec or so.

Here’s the code that produced the above outputs:

I made another run with the A/D resolution set to 12 bits to see if it made any appreciable difference. As can be seen in the following Excel plot – it didn’t:

Here’s another plot showing the microphone output, but this time with the DAC sinewave output amplitude reduced to the point where the microphone output isn’t large enough to clip.

Microphone Input to A/D Converter

In the reduced sinewave amplitude plot above, the ‘Meas’ plot is still centered about the halfway mark in the 12-bit range of values, while the average value of the ‘Amp’ plot has been reduced from about 1800 to about 200.

So, now that I know that the DAC-Speaker-Microphone-ADC loop works, I need to extend it to record amplitude values over an extended period – at least 30 sec, and more like a minute or more.
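The stimulus side of this is simple – just switch the tone amplitude once after a fixed burst time. Here’s a rough sketch (the amplitudes, frequency, and burst length are placeholders, not my actual program values):

```cpp
#include <Audio.h>

AudioSynthWaveformSine sine1;
AudioOutputAnalog      dac1;                   // Teensy 3.2 DAC pin
AudioConnection        patchCord1(sine1, dac1);

const float HIGH_AMPL = 0.9f;    // chosen to clip the microphone output
const float LOW_AMPL  = 0.05f;   // well above the noise floor, but small vs HIGH

void setup()
{
  AudioMemory(10);
  sine1.frequency(1000);
  sine1.amplitude(HIGH_AMPL);    // start the HIGH burst...
  delay(1000);                   // ...hold it for 1 sec...
  sine1.amplitude(LOW_AMPL);     // ...then LOW from here on out
}

void loop() {}
```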

I modified my control program to create a 1-second ‘burst’ of a HIGH amplitude sinewave, followed by an infinitely long period of a LOW amplitude sinewave. The HIGH amplitude was chosen to fully clip the microphone output, and the LOW amplitude was chosen to be well above the noise floor, but still very small compared to the HIGH amplitude signal. Here are O’scope photos of both the HIGH & LOW signals:

Here is the output from the program (the LOW amplitude output was manually terminated after a few seconds):

31 March 2024 Update:

After getting all of the above working, I then installed one of my Jabra ‘Enhance 200’ aids between the speaker and the microphone, as shown in the photos below:

With the aid installed, I got the following microphone output using my ‘burst + long-term low level audio’ setup.

Even though the Jabra aid did NOT exhibit anything like the AGC behavior I expected, there *was* a sort of cyclical response with the Jabra aid that wasn’t there without the aid in the middle. This cyclical behavior repeats about once every five seconds and *could* be some sort of AGC-like behavior – just not the one I was expecting.

Stay tuned,

Frank

Convert Condor Task Briefing Custom Waypoint Description Blocks to XCSoar-compatible .CUP Format ‘Additional Waypoints’ file

Posted 23 March 2024

After a multi-year hiatus, I recently started flying contests again in the Condor Soaring Simulator. As sort of a side project, I have also been working with the XCSoar glider navigation program, to see if I could use XCSoar to help navigate AAT/TAT tasks in Condor (Condor doesn’t support AAT/TAT tasks natively with the in-sim PDA).

After using XCSoar for a while, I became frustrated with XCSoar’s inability to define ‘custom’ turnpoints based on LAT/LON coordinates, which are used quite frequently in Condor contest tasks. After a long-fought and ultimately unsuccessful battle with XCSoar’s source code to see if I could modify the program to facilitate this, I admitted defeat and decided to try another way to skin this cat. XCSoar will accept an ‘Additional Waypoints’ file, so I decided to see if I could create a program to convert the ‘new TP’ blocks in the Condor ‘Task Briefing’ description to XCSoar-compatible .CUP file waypoint lines, which could then be loaded into XCSoar for selection as task waypoints.

Here is the .CUP file format definition page from the SeeYou (Naviter) program website:

The above description is NOT very easy to read, and it is full of errors, so some imagination is required to make sense of it. The ‘hardpoints’ in the description are as follows:

  • Latitude strings are exactly 9 characters long. Longitude strings are exactly 10 characters long.
  • In latitude strings, the decimal point is exactly the 5th character (Char4). In longitude strings, the decimal point is exactly the 6th character (Char5).
  • Both latitude and longitude strings apparently must be zero-padded as necessary to make the string character counts work out. For instance, in the longitude example the ‘degree’ value of ‘014’ must be exactly three characters.
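To make the padding rules concrete, here is the formatting logic in sketch form. My converter is actually a Python script, but the same logic is easy to show in C++ (the function names are mine, for illustration only):

```cpp
#include <cstdio>
#include <cmath>

// Format decimal degrees as a .CUP latitude string "DDMM.mmmH" - exactly 9
// characters, with the decimal point as the 5th character (Char4).
void cupLat(double deg, char out[10])
{
    char hemi = (deg >= 0.0) ? 'N' : 'S';
    deg = std::fabs(deg);
    int d = (int)deg;                   // whole degrees
    double m = (deg - d) * 60.0;        // decimal minutes
    // zero-padded fields; (the rounding edge case where minutes hit 60.000
    // is ignored in this sketch)
    std::snprintf(out, 10, "%02d%06.3f%c", d, m, hemi);   // e.g. "4650.505N"
}

// Format decimal degrees as a .CUP longitude string "DDDMM.mmmH" - exactly 10
// characters, with the decimal point as the 6th character (Char5). Note the
// zero-padded 3-digit degree field (e.g. "014").
void cupLon(double deg, char out[11])
{
    char hemi = (deg >= 0.0) ? 'E' : 'W';
    deg = std::fabs(deg);
    int d = (int)deg;
    double m = (deg - d) * 60.0;
    std::snprintf(out, 11, "%03d%06.3f%c", d, m, hemi);   // e.g. "01420.111E"
}
```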

Example Run:

Here’s a recent task briefing from Condor-Club:

As can be seen in the above screengrab, the turnpoints are all ‘custom’ turnpoints defined only by Lat/Lon coordinates. Manually adding these to a .CUP formatted file for use in XCSoar would be essentially impossible in the time available, given that the turnpoint coordinates only become visible 15 minutes before server start.

To start the process, the turnpoint blocks from the above briefing were copy/pasted one at a time into a text document (I use Notepad++), as follows. Note that I manually changed the name of the last turnpoint from ‘Finish’ to ‘TP 6’, as my Python script currently only looks for ‘Start’ and ‘TP’ starting strings.

My CondorTPX_to_CupWP.py Python script opens a ‘FileOpen’ dialog where the input file (in this case ‘NewTPs_IN.txt’) can be selected by the user, and a ‘FileSave’ dialog where the output file (in this case ‘NewTPS_OUT.CUP’) can be selected, and then parses through the blocks in the input file, converting them to equivalent .CUP-formatted lines compatible with XCSoar. Here is the console printout from the ‘verbose’ (-v) version of the script:

At the very end of the above printout, the newly-written contents of the output file are read back out again as verification that the conversion was successful. Here are the actual contents of the ‘NewTPS_OUT.CUP’ file:

This file now has to be transferred to the directory used by XCSoar for waypoints, and then selected in XCSoar (Config->System->Site Files) to load as the ‘More Waypoints’ selection. After this, all the above turnpoints will be available for task construction. Here’s a photo of my Android tablet with the above task turnpoints loaded:

XCSoar task map, using converted task briefing turnpoint blocks
Same task as above, from Condor-Club briefing

It is clear from the above images that the Condor-Club ‘custom’ task turnpoints have been converted properly from text blocks to SeeYou .CUP format waypoint strings, so now I can use XCSoar to navigate Condor tasks with ‘custom’ turnpoints – Yay!

Here’s the Python script I created to do the conversion:

Enjoy!

Frank

Bridgemate II Keypad Membrane Replacement

Posted 22 February 2024

COBA, the Central Ohio Bridge Association, owns a number of Bridgemate II table pads, and lately we have been hearing a number of complaints about having to press buttons harder than normal to get the expected response. After some research, one of our members discovered that we could purchase replacement membranes from Bridgemate for $15 each, so we decided to undertake a membrane replacement program.

Opening up a Bridgemate II

The keyboard cover/upper-half can be removed by disengaging a number of flexible plastic tabs, as shown in the following page from the Bridgemate site:

Opening the case on a Bridgemate II

While this wasn’t quite as trivial as the above description makes it sound, it wasn’t really all that hard. Most of the difficulty was in trying NOT to break things during the opening process on the first unit. Once that was accomplished (thankfully without breaking anything), I’m sure the rest will go much more easily. Here are some photos showing the disassembled Bridgemate II:

Visual inspection of this first unit showed everything to be very clean – basically indistinguishable from a brand-new unit. In particular, the key membrane seemed to be completely intact, and could be easily manipulated between the ‘pressed’ and ‘unpressed’ states with just fingertip pressure.

Stay tuned,

Frank

Using VS Code to Debug Linux Makefile Projects

Posted 20 February 2024

I recently got re-interested in soaring (glider racing) after a number of years away. As part of this new journey, I became interested in contributing to the development of the XCSoar glider racing navigation software. I had contributed to this program some years ago in a very minor way, and thought that now that I’m retired and have plenty of time to waste (NOT!!), I could maybe contribute in a more meaningful way.

In any case, XCSoar development is done in Linux (specifically in Debian Linux, but that’s another story), so I started thinking about creating a development environment on a Debian Linux box. I had an old Dell Precision M6700 ‘desktop replacement’ laptop that I hadn’t turned on for years, so I dug it out, installed Linux (Ubuntu first, then Debian), and with the help of Ronald Niederhagen (see this post), I was able to clone the XCSoar repo from GitHub and build both the Linux and Android versions.

However, I still needed to find a way to ‘break into’ the code, and to do that I was going to need a way of running the program under debug control. I have done this sort of thing for decades in the Windows world, but not so much in Linux, so I was sort of starting from scratch. After a LOT of research, I found that Microsoft has a free, cross-platform IDE called VS Code – sort of a lightweight version of Visual Studio that also runs in the Linux and macOS worlds.

So, I installed VS Code on my Linux box and started the ‘let’s learn yet another coding IDE’ dance, again starting with a lot of Google searches, running tutorials, etc. etc. After creating some basic ‘hello world’ programs using VS Code, I started thinking about how to use VS Code on the XCSoar project, which comprises thousands of source files and a Makefile almost three hundred lines long. I knew just enough about Makefiles to type ‘make’ or ‘make all’, so there was a steep learning curve ahead to use VSCode for debugging.

After yet another round of Google searches and forum posts, I found a relevant tutorial on the ‘HackerNoon’ website. The HackerNoon tutorial seems to be aimed at Windows users (Linux doesn’t use extensions to denote executable files), but I was able to work my way through and translate as appropriate. I suggest you open the HackerNoon tutorial in one window, and this ‘translation’ in another.

Original:

Translation for Linux:

Download & Install VS Code: See this link. Everything else should already be available as part of any modern Linux distro (I’m using Debian 12 – ‘bookworm’).

main.cpp: (same as original – no translation required)
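The tutorial’s example program is just a simple ‘hello world’ style file, so something like the following works as a stand-in (the tutorial’s actual code may differ):

```cpp
// mymain.cpp - minimal stand-in for the tutorial's example program
#include <iostream>

int main()
{
    std::cout << "Hello from mymain!" << std::endl;
    return 0;
}
```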

Makefile (Original):

Makefile (Translation for Linux):

Run the ‘main.exe’ make target, then run the executable to make sure everything works (Original):

Translation for Linux:

I placed ‘mymain.cpp’ and ‘Makefile’ in my home directory tree as shown below:

so to test the Makefile and the executable, I did the following:

This test confirms that the source code compiles correctly, that the Makefile is correct, and the output from the compiled program is as expected – yay!

Setting up VSCode debugger:

At this point the HackerNoon tutorial and my experience parted ways. I could get the ‘Run and Debug’ side panel open, and I could open ‘mymain.cpp’ in the VSCode editor window, but I couldn’t find anything that suggested it could “create a launch.json file”. I tried to send a comment to the author, but comments weren’t enabled for some reason (and yes, I did register and log in). I suspect that the disparity between the HackerNoon example and my VSCode reality is due to changes in VSCode after the HackerNoon tutorial was created, which is why I absolutely hate posts like this that aren’t dated in any way, shape, or form. Without a date, it is impossible to determine whether discrepancies like the ones I encountered are due to mistakes on the part of the author, or just changes in the underlying software being demonstrated.

So, I was left to randomly click on things until, by some miracle, the ‘launch.json’ file was created and appeared in the VSCode edit window. Later I went back and tried to re-create the miracle and I sort of figured it out; with the ‘mymain.cpp’ file open in the editor, click on the ‘gear’ icon (upper right as shown in the screenshot below):

And select ‘(gdb) Launch’ from the menu items presented. This will create a ‘launch.json’ file and open it in the editor, as shown below:

launch.json file created by VSCode

At this point we can reconnect to the ‘HackerNoon’ tutorial and copy/paste the launch.json contents from the tutorial into the newly-created launch.json file in VSCode, as shown below:

launch.json after copy/pasting example code from HackerNoon site

Here’s the ‘launch.json’ file from HackerNoon:

And here’s the same file, after translating it for Linux instead of Windows:

The next step is to create a ‘tasks.json’ file using ‘New File’ (Alt-Ctrl-N) in VSCode. This brings up a dialog forcing the user to specify the folder into which the new file will be placed, which seemed a bit strange to me, but what do I know. Put the new file in the same (hidden) .vscode folder as the companion ‘launch.json’. In my setup this was laid out as shown below:

Here’s the original ‘tasks.json’:

And here’s the same file after translation for Linux:

The above three files (mymain.cpp, launch.json and tasks.json) form a complete set (a ‘configuration’?) that enables VSCode to compile and then run – under debug control – a .cpp file. Here’s a short video showing two complete compile/run cycles:

Video showing VSCode compiling/running/debugging using a Linux Makefile

At this point it is clear that the combination of VSCode and appropriately constructed ‘Makefile’, ‘launch.json’ and ‘tasks.json’ files is capable of debugging a Linux-based C++ program. This means that – at least in theory – I should be able to aim VSCode at the humungous Makefile associated with the equally humungous XCSoar program and step through the source code under debug control – yeeehah!

23 February 2024 Update:

Well, the theory (about being able to run XCSoar under debug control) has now been validated. I was able to run XCSoar using the following launch.json & tasks.json file contents:

Launch.json:

Tasks.json:

And here is a short video showing the action:

So now I’m set; I can run XCSoar under debug control, and I should be able to poke around and hopefully make some changes (or at least understand some of the magic tricks). One thing I’ve already found is that (depending on where the breakpoints are) XCSoar can grab the mouse focus (not the keyboard, thankfully), so even though the mouse cursor can move around the screen outside the XCSoar window, all mouse clicks go to XCSoar. Fortunately I found that keyboard inputs still go to VSCode, so I can still (with some difficulty) get around OK.

Trying my Hand at XCSoar Program Development

Long ago and far away, back when I was doing a lot of real-life soaring, I was heavily involved in the development of the ClearNav flight computer/navigator and I’d like to think I made it better than when I first started working with it. Among other things, I helped develop the thermalling assistant in ClearNav, and also helped port that functionality to XCSoar.

Now, many years later, I am no longer doing real-life soaring, but I’m once again starting to fly in virtual soaring races using the Condor2 soaring simulator. As a part of that, I have been attempting to use the XCSoar flight computer/navigation assistant as an external adjunct to the in-sim Condor flight computer. In particular, Condor2’s flight computer has no capability to support AAT/TAT tasks, so using an external PNA (Personal Navigation Assistant) that does support AAT/TAT tasks makes sense. To facilitate this effort, Condor2 can output GPS NMEA sentences to an external device.

I have a nice dual-24″ monitor setup at home, so my initial thought was to run Condor2 on one monitor and the PC version of XCSoar on the other. This actually works, but with a big ‘gotcha’ – when Condor is running, it captures the mouse and keyboard input, and won’t let go unless you use ALT-TAB to switch to another running program. This means that if you wish to make an adjustment in XCSoar, you have to ALT-TAB out of Condor2 and into XCSoar, make whatever adjustments, and then ALT-TAB back to Condor2 – more than a little bit messy. In addition, while ‘away’ from Condor2, the sim continues to fly along without pilot input – maybe OK for a flatland task, but definitely bad for one’s (simulated) health in gnarly terrain.

So, my next idea was to buy a cheap Android device and run XCSoar on it, connected to Condor2 on my PC via Bluetooth. I found a really nice PRITOM M10 10 inch tablet with 2 GB RAM, 32 GB internal storage on Amazon for about $50USD, and figured out how to get NMEA data to it using Bluetooth – nice!

Then I constructed a small AAT/TAT task in Condor2’s default Slovenia scenery, loaded the same task into the M10, and flew it multiple times to see if I could figure out how to best use XCSoar to optimize navigation through both defined areas. This worked really well, but I ran into some problems with the XCSoar software. At least in the Android version, the ‘AAT Time’ and ‘AAT Delta Time’ readouts blanked out several times during the tests, rendering XCSoar pretty much useless for AAT/TAT tasks (see this post for all the gory details).

The above experience led me to think about looking through the source code to see if I could find out why these AAT-related values were going missing. Way back in the day I had done this once before when I ported the ClearNav thermalling assistant algorithm to XCSoar, but that time I didn’t know enough about Github repos and pull requests to do it the right way – I had to throw myself on the mercy of the other developers to make it happen. Since I’m now retired and not flying in real-life anymore, I have a bit more time, so I decided to give it a go.

I started by reading through the XCS Developer Manual, where I found that the primary development platform for XCSoar is Linux, and I haven’t played in Linux-land since my days as IT manager at The Ohio State University ElectroScience Lab a couple of decades ago. Nevertheless, I had an old Dell Precision M6700 15″ ‘desktop replacement’ laptop hanging around doing nothing, so I dug it out and installed Ubuntu 22.04 LTS on it. I chose Ubuntu because it was reportedly easier for beginners, and known to install OK on my laptop model. Installation was pretty easy, and using the information in the developer’s manual I was able to clone the GitHub repo, install the required packages, and actually get the default UNIX version of XCSoar compiled and running on my Ubuntu laptop – yay!

With XCSoar running on my laptop, I decided to try running the same AAT/TAT test task with Condor2 GPS NMEA sentences connected to XCSoar on my Linux laptop. I used VSPE on my Windows box to connect Condor2 to a TCP port, and ‘socat’ on the Linux box to create a bridge between the TCP and UDP ports; then in XCSoar I set ‘Device A’ to point to the UDP port created by socat. Amazingly, this all worked! (See this post for the details). With this setup I ran the test AAT, but the ‘AAT Time’ and ‘AAT Delta Time’ values stayed rock-solid through the entire task – yikes!

This result led me to believe that maybe I could abandon the Android M10 and just use XCSoar on my Linux box to fly AAT tasks. However, when I actually tried this on a Condor2 race, the Linux box version of XCSoar kept dropping GPS inputs – I don’t know why.

So, I decided to see if I could compile XCSoar for Android on the Linux box and ‘side-load’ it onto my M10 Android tablet. If this worked, then I could instrument the code on my Linux box, run it on my M10, and maybe figure out why the AAT Time/AAT Delta Time data was getting corrupted. After all, how hard could it be? I had already cloned the source repo and gotten the UNIX target to compile properly. As it turned out – “Pretty Darned Hard!”

I was stumbling around on the XCSoar developer’s forum, trying to figure out why compiling for Android wasn’t working, when Ronald Niederhagen took pity on me and sent me a direct email with some suggestions. The foremost of those was, ‘Install Debian, and your life will be easier’. Although I had noticed some references to Debian in some of the development steps, I hadn’t paid much attention to them – after all, all Linux installations are the same, right? Wrong.

So, I went off into a side project for replacing Ubuntu with Debian on my Dell laptop. Once I got that part accomplished, Ronald sent me the following instructions:

The first ‘make’ command above ran successfully, and I was able to launch XCSoar on my Linux Debian laptop with no problem. However, the second ‘make’ command to compile XCSoar for Android failed miserably – ouch!

When I reported this result to Ronald, his reply was:

Ronald’s first sentence – “You are not far from success” – was pretty heart-warming. I have done a LOT of programming over a half-century of engineering, and I was well aware how easily projects of this nature can spiral out of control, so ‘hearing’ such encouragement from such an obviously competent source was a life-saver.

Anyway, I ran the command to install the Java JRE, confirmed success with the ‘which’ command, and then re-ran the ‘install-android-tools.sh’ script. This time it completed OK, so then I ran the ‘make -j4 TARGET=ANDROID’ command. This time it seemed to complete OK, but no corresponding *.apk file was generated.

When I reported this to Ronald, he asked me to re-run the compile, but this time redirecting both the normal (i.e. stdout) and the error (i.e. stderr) outputs to files, and to send them to him. OK, so I resurrected (with the help of Google) my 20-year-old memories of how to redirect output to files in Linux, got the job done, and sent the files off to Ronald. In no time at all, Ronald sent back the following:

I had no idea what ‘javac’ was (other than a suspicion it was java-related). When I ran the ‘which’ command on my box it came up blank, so obviously Ronald was on the right track. After a couple more back-and-forths, Ronald said:

I ran the install, and FINALLY I was able to get the XCSoar Android version installed:

With more help from Google, I was finally able to get XCSoar ‘side-loaded’ to the M10. As it turned out, the trick was to copy the *.apk file to my Google Drive site, and then use ‘Drive’ on the M10 to ‘install’ the app. Basically the process is:

  • Copy the *.apk file from my Linux laptop to my Google Drive site
  • Uninstall XCSoar from the M10 by dragging the icon to the trashcan
  • Use ‘Files->Drive’ to access my Google Drive account from the M10
  • Double-click on the *.apk file in Google Drive
  • Select ‘install’ on the resulting dialog.

Here is a short video showing the process:

Installing a fresh Android version of XCSoar on my Android M10 tablet

So now, thanks to Ronald Niederhagen, I’m all set for XCSoar development. In the meantime, however, the original reason I wanted to get into XCSoar development (disappearing AAT infobox data) seems to have – disappeared. Not to worry though, as I have lately gotten some other ideas for XCSoar ‘improvements’ 😉

A last thought on this subject from a septuagenarian engineer/programmer: I appreciate how much time and effort Ronald put into helping a noob along. Ronald didn’t know me from Adam, and yet he took the time to help. He didn’t know that I’m a 75-year-old broken-down engineer, ex-pilot (ex-everything, for that matter!), and I don’t know anything about him, either. Heck, Ronald could be a 12-year-old kid with acne, programming in pajamas in his parents’ basement, but in this case he was clearly the teacher and I was clearly the student. It’s pretty cool when the internet and free discourse facilitate this kind of international (and maybe intergenerational) collaboration.

Stay Tuned,

Frank

XCSoar Soaring Computer AAT Task Study, Part II

Posted 14 January 2024

This is the second installment in my study of the XCSoar cross-country race navigation software with respect to its use in AAT/TAT tasks in Condor2. In my last post, I created a small AAT task in Condor and flew it with XCSoar on an Android M10 tablet, connected to my Condor PC via Bluetooth. In this installment, I fly the same task, but this time I videoed the entire task and afterwards picked out screenshots to highlight points of interest.

Before task start

This next shot illustrates a problem I had right at the start. I used my finger to swipe down (hoping to zoom in or out), but instead it froze the XCSoar app. I had to reboot the M10 tablet and go through some other gyrations to get going again. I sure would hate to have this happen just before task opening in an AAT race in Condor.

XCSoar crashed after the ‘swipe down’ gesture

After getting XCSoar back up and reconnected to Condor, I got going with the task again. Here’s a screenshot showing the situation just before exiting the start cylinder

Just before task start. All the data values look OK

Now just after the start

Just after exiting the start cylinder, with the ‘Task Start’ popup visible

Comments:

  • The ‘AAT Time’ value has decreased by 4 sec, which seems OK
  • ‘AAT delta time’ seems a bit odd, as it shows I’m going to arrive early by about 1 minute
  • The AAT Dmax/Dmin and AAT Vmax/Vmin values look consistent. IOW, to consume the AAT time covering the min distance of 24.9mi, I need an average speed of 29.9mph (24.9mi / 29.9mph = 0.833hr -> 50min), and for the max dist of 92.9mi I need 111mph (92.9mi / 111mph = 0.837hr -> 50min). This gives me a fair bit of confidence that I have the necessary data to optimize the task.
Just before entering the first turn circle

Comments:

  • All the numbers still look reasonable here. The ‘AAT Time’ has gone down by about 3.5min, and the arrival is still shown as 1:09min (according to the documentation, BLUE indicates arrival will be over by at least 5 minutes).

Now, just after entering the first turn circle,

Just after entering the first turn circle

Comments:

  • It was nice to see the ‘In sector, arm advance when ready’ popup appear, but I wasn’t entirely sure what it meant.
  • It was also very nice to see that the ‘target’ started following the glider symbol, meaning that I didn’t have to move it manually – yay!
  • I noted that the ‘AAT Dmin’ value changed from 24.9 to 27.4mi, so that sounds right.

Next, I brought up the Task Status page (after a LOT of fumbling), and got this:

Task Status page

Comments:

  • I was amazed by how little information on this page was useful. The only values that I found believable were the ‘Assigned Task Time’, the ‘Speed Average’, and ‘Achieved speed’ values.
  • The ‘Estimated task’ value of 2:47 and the ‘Remaining time’ value of 2:40 make no sense. Where did they come from?

At about the halfway point something happened (or I did something stupid – AGAIN) and my ‘AAT Time’ and ‘AAT delta time’ values – the very most critical information required for successfully completing an AAT – disappeared, only to return again when I made the turn toward JAVORJEV. I didn’t notice until well after the fact, so having the entire flight on video really paid dividends – yay! Here’s a short (~35 sec) video showing the point at which they disappeared.

watch as the ‘AAT Time’ and ‘AAT delta time’ values disappear, starting at about 22sec

Comments:

  • The two values that disappeared are the whole reason for using an external navigation device to fly AATs in Condor. After they disappeared, I was basically winging it from then on.
  • It is possible that the data disappearance is related in some way to the buttons I was pressing around the same time. The first screen tap happens at 13.35sec into the video clip and the ‘AAT delta time’ value starts going GAGA about 10sec later.

In the next screenshot I’m about 3/4 of the way through the first turn, and thinking about turning around. Since I lost my ‘AAT Time’ & ‘AAT delta time’ readouts I’m flying blind on timing:

About 3/4 through the first turn area

Comments:

  • The only information I have to work with is the Dmin/max and Vmin/max values, plus some notion of my average speed. I think it’s around 80-90mph, and as long as it is less than 111mph I’m OK to turn at this point.

The next shot shows the ‘Show Target’ page for this turnpoint

‘Target Show’ page for this turnpoint

Comments:

  • This page shows a value for ‘V ach’ of 70.8mph, which is almost identical to the value shown for ‘AAT Vmin’. Assuming I believe this number, and assuming that value will continue to increase because I’ll be ridge running the rest of the task, I should be OK turning here.
  • I have no idea what the ‘ETE’ and ‘Delta T’ values mean on this page – they don’t look consistent with the ‘AAT Vmin/max’ and ‘AAT Dmin/max’ numbers on the main navigation page. I don’t think there’s any way I can make it back 26 minutes and 39 seconds early, even if my glider suddenly acquired orbital velocity.

The next shot shows the same page, but for the JAVORJEV turn circle, just as I’m getting ready to turn in the KURJIV circle

Comments:

  • Apparently, moving the target position in the KURJIV circle also moves the corresponding one in the JAVORJEV circle – I didn’t expect that. I wonder what happens in a 3, 4, or 5 turn circle AAT – do ALL the targets move in unison?
  • What does the ‘Optimized’ checkbox do?

The next shot shows the situation just as I made the turn for JAVORJEV. Again I didn’t notice this at the time, but my ‘AAT Time’ and ‘AAT delta time’ datablocks returned from the dead. Here’s a short (20sec) video showing the action. Just from the video, it looks like tapping on the ‘Arm turn’ button also resurrected the AAT info boxes.

However, my joy over getting my AAT datablocks back was short-lived. A short time after making the turn, the datablock info disappeared again – for good. This time there were no button pushes to blame. The following short video shows the action

AAT datablock info disappears for the second- and final – time

The next shot shows me just before exiting the first circle on the way to JAVORJEV

Comments:

  • The AAT datablocks are still missing
  • The Target in the JAVORJEV circle is now toward the near side of the circle, so is there some optimization going on in the background?

The next shot shows my attempt to manually move the JAVORJEV target.

Comments:

  • The ETE & Delta T values look reasonable, and the implication is that I should be able to use up all the time by moving all the way to the back of the JAVORJEV circle
  • However, when I manually move the target to the front of the circle, there is almost NO change in the ETE/Dt values, and the small change that shows is in the wrong direction. Moving the target forward like this should make me way earlier, but the numbers show that I’ll be almost 1 minute LATER than I was before. How can this be?

As shown in the next video, I decided to try the ‘Optimized’ button to see what it did. This radically changed the target location and the ETE/Dt values. After a few iterations, it looked like the Optimize function was indeed working properly.

Comments:

  • It takes a while for the optimization to converge. At first, the values for ETE & Dt are WAY off, and then they oscillate back and forth several times before stabilizing on believable numbers.

The last video covers the finish (or NOT-finish, in this case)

Comments:

  • At the start of this clip, XCSoar is in ‘Final Glide’ mode, but switches back to ‘Cruise’ just before entering the finish circle.
  • I was expecting a ‘Task Finished’ notification when I crossed into the circle, but didn’t get one. In fact, AFAICT, XCSoar never finished this task at all. I’m sure this was an operator error on my part, but I don’t know what I screwed up – bummer!

Charging Station Connect/Disconnect Cycle

Posted 11 November 2023

After getting the wide-body IR homing PID values defined properly, the next challenge was to actually get the robot to connect to the charging station, and after getting charged, to disconnect from it successfully. After the usual number of errors and goofs, I believe I have it working now. Here is a short video and the telemetry for a complete wall tracking – IR home to charging station – disconnect from charging station – back to wall tracking cycle:

And here is the telemetry for the run:

WallE3 Wall Track Tuning Review

Posted 22 January 2023

This post is intended to get me synched back up with the current state of play in my numerous wall track tuning exercises. I am using these posts as a memory aid, as my short-term memory sucks these days.

Difference between WallE3_WallTrackTuning_V5 & _V4:

  • V5 uses ‘enums.h’ to eliminate VS2022 intellisense errors
  • V5 RotateToParallelOrientation(): ported in from WallE3_ParallelFind_V1
  • V5 MoveToDesiredLeftDistCm() uses the Left distance corrected for orientation, but didn’t make the change from uint16_t to float (fixed 01/22/23)
  • V5 changed parameter input from Offset, Kp,Ki,Kd to Offset, RunMsec, LoopMsec
  • V5 modified to try ‘pulsed turn’ algorithm.

Difference between WallE3_WallTrackTuning_V4 & _V3:

  • V4 added #define NO_LIDAR
  • V4 changed distance sensor values from uint16_t to float
  • V4 experimented with ‘flip-flopping’ WallTrackSetPoint from -0.2 to +0.2 dep on how close the corrected center distance was to the desired offset (However, I believe this was done incorrectly – the PID engine compared the corrected center distance to WallTrackSetPoint – literally apples to oranges)

So WallE3_WallTrackTuning_V5 seems to be the latest ‘tuning’ implementation.

Comparison of WallE3_WallTrack_Vx files:

WallE3_WallTrack_V2 vs WallE3_WallTrack_V1 (Created: 2/19/2022)

  • V2 moved all inline tracking code into TrackLeft/RightWallOffset() functions (later ported back into V1 – don’t know why)
  • V2 changed all ‘double’ declarations to ‘float’ due to change from Mega2560 to T3.5

WallE3_WallTrack_V3 vs WallE3_WallTrack_V2 (Created: 2/22/2022)

  • V3 Chg left/right/rear dists from mm to cm
  • V3 Concentrated all environmental updates into UpdateAllEnvironmentParameters();
  • V3 No longer using GetOpMode()

WallE3_WallTrack_V4 vs WallE3_WallTrack_V3 (Created: 3/25/2022)

  • V4 added the ‘RollingForwardTurn()’ function

WallE3_WallTrack_V5 vs WallE3_WallTrack_V4 (Created: 3/25/2022)

  • No real changes between V5 & V4

24 January 2023 Update:

I returned WallE3_WallTrackTuning_V5 to its original configuration, using my custom PIDCalcs() function, using the following modified inputs:

So this is the ‘other’ method – using the modified steering value as the input, and trying to drive the system to zero. Within just a few trials I rapidly homed in on one of the PID triplets I had used before, namely PID(3,0,1). Here’s the raw output, a plot of orientation-corrected distance vs setpoint, and a short video on my 4m straight wall section:

So, the ‘steering value tweaked by offset error’ method works on a straight wall with a PID of (3,0,1). This result is consistent with my 11 January 2023 Update of my ‘WallE3 Wall Tracking Revisited‘ post, which I think was done with WallE3_WallTrackTuning_V5 (too many changes for my poor brain to follow).

Unfortunately, as soon as I put breaks in the wall, the robot could no longer follow it. It runs into the same problems; the robot senses a significant change in distance, starts to turn to minimize the distance, but the distance continues to go in the wrong direction due to the change in the robot’s orientation. This feedback continues until the robot is completely orthogonal to the wall.

26 January 2023 Update:

I went back and reviewed the post that contained the successful 30 October 2022 ‘two 30deg wall breaks’ run, and found that the run was made with the following wall following code:

As can be seen from the above, the line

Compares the orientation-corrected wall distance to the desired offset distance, as opposed to comparing the computed steering value to a desired steering value of zero, with a ‘fudge factor’ of the distance error divided by 10 as shown below:

So, I modified WallE3_WallTrackTuning_V5 to use the above algorithm to see if I could reproduce the successful ‘two breaks’ tracking run.

Well, the answer was “NO” – I couldn’t reproduce the successful ‘two breaks’ tracking run – not even close – grrr!!

So, as kind of a ‘Hail Mary’ move, I went back to WallE3_WallTrack_V2, which I vaguely remember as the source of the successful ‘two break’ run, and tried it without modification. Lo and Behold – IT WORKED! Whew, I was beginning to wonder if maybe (despite having a video) it was all a dream!

So, now I have a baseline – yes!!! Here’s the output and video from a successful ‘two break’ run:

Successful ‘two break’ run with WallE3_WallTrack_V2

And here is the wall tracking code for this run:

And here is just the wall tracking portion of the above function:

From the above, it appears that WallE3_WallTrack_V2 uses a ‘steering value’ setpoint of zero, and also ‘tweaks’ the input to the PID engine using the error between the measured wall distance and the desired wall offset, as shown in the following snippet:

This is very similar to what I was trying to do initially in WallE3_WallTrackTuning_V5, as shown below (copied here from the ’24 January 2023 Update:’ above for convenience):

So, the original Tuning_V5 math appears to be identical to the WallTrack_V2 math; both use an offset factor as shown:

WallE3_WallTrackTuning_V5:

WallE3_WallTrack_V2:

Aha! In WallE3_WallTrack_V2 the offset error (in mm) is divided by 1000, but in WallE3_WallTrackTuning_V5 the offset error (in cm) is divided by 10, which still leaves a factor of 10 difference between the two algorithms! I should be dividing the cm offset by 100 – not 10!
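In other words, with distances in cm both terms should be scaled the same way the mm version scales them. Here’s a sketch of the corrected computation (variable names are illustrative – not the exact WallE3 code):

```cpp
// compute the PID input for left-wall tracking, with all distances in cm
float ComputePIDInput(float leftFrontCm, float leftRearCm,
                      float leftCtrCorrCm, float offsetTargetCm)
{
    float steerval      = (leftFrontCm - leftRearCm) / 100.0f;        // orientation term
    float offset_factor = (leftCtrCorrCm - offsetTargetCm) / 100.0f;  // offset 'tweak' - was /10, the bug
    return steerval + offset_factor;  // the PID engine drives this toward zero
}
```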

28 January 2023 Update:

SUCCESS!!! So I went back to my WallE3_WallTrackTuning_V5 program, and modified it to be the same as WallE3_WallTrack_V2, except using distances in cm instead of mm, and dividing by 100 instead of 10. After the usual number of stupid errors, I got the following successful run on the ‘two break’ wall setup:

WallE3_WallTrackTuning_V5 using ‘tweaked’ steering value. Average LCCorr = 36.9 cm

After running this test a couple more times to assure myself that I wasn’t dreaming, I started to play around with the PID values to see if I could get a bit better performance. The first run (shown above) produced an average offset distance of about 36.9cm. A subsequent run showed an average of 26.4cm. This implies that the steering value ‘tweak’ isn’t really doing much. For instance, this line:

shows that for a corrected distance of 28.13cm, the calculated steerval is (21.4-19.3)/100 = 2.1/100 = 0.021, and the ‘offset_factor’ is (40-19.3)/100 = 20.7/100 = 0.207. So, the ‘tweaked’ steerval should be 0.228 and should produce an error term of +0.228, but it is only reporting an error of 0.00!

I added the steerval and the offset_factor to the output telemetry and redid the run with 300,0,0 as before. This time I got

In this case, steerval = (37.7 – 37.1)/100 = +0.06 (pointing slightly away from the wall), tweak = (38.87-40)/100 = -1.13/100 = -0.013, so the total error of +0.0487, giving left/right motor speeds of 60/89, i.e. correcting slightly back toward the wall – oops! It looks like I need to use a slightly smaller divisor for the ‘tweak’ calculation.

I added the divisor for the offset_factor to the parameter list for ‘Tuning_V5’ and redid the run, using ‘100’ to make sure I got the same result as before.

I was able to confirm that using this method with the ‘tweak’ divisor as a parameter I got pretty much the same behavior. Then I started reducing the divisor to see what would happen as the ‘tweak’ became more of a factor in the PID calculation.

For a divisor of 75, we got the following output:

Looking at the line at time 124142, we see:

The steering value is very low (0.02) because the front and rear sensor distances are very close, but because the robot is well inside the intended offset distance of 40cm, the ‘tweak value’ of -0.11 is actually dominant, which drives the robot’s left motors harder than the right ones, which should correct the robot back toward 40cm offset. When we look at the Excel plot, we see:

tweak divisor 75, two break wall.

The plot shows the ‘tweak’ value becoming more negative as the corrected distance becomes smaller relative to the desired offset distance of 40cm, and thus tends to correct the robot back toward the desired offset. At the 12.41sec mark (shown by the vertical line in the above plot) the ‘tweak’ value is -0.11, compared to the steering value of +0.02, so the ‘tweak’ input should dominate the output. With a P of 300, the output with just the steering value would be -300 x 0.02 = -6, resulting in left/right motor speeds of 69/81, steering the robot very slightly toward the wall. However, with the ‘tweak’ value of -0.11 the output is -300 x (0.02 - 0.11) = +27 (actually +28), resulting in motor speeds of 103/46, or moderately away from the wall, as desired.
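Working backward from those numbers, the output-to-motor-speed mapping appears to be roughly symmetric about a base speed of 75. This is an inference from the telemetry – the actual WallE3 motor-drive code may differ:

```cpp
const int BASE_SPEED = 75;   // assumed cruise speed, inferred from the 69/81 and 103/46 pairs

struct MotorSpeeds { int left; int right; };

// positive output (e.g. +28) -> 103/47, away from the (left) wall;
// negative output (e.g. -6)  -> 69/81, slightly toward the wall
MotorSpeeds ComputeTrackingSpeeds(float output)
{
    MotorSpeeds ms;
    ms.left  = constrain(BASE_SPEED + (int)output, 0, 255);
    ms.right = constrain(BASE_SPEED - (int)output, 0, 255);
    return ms;
}
```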

To simplify the tuning problem, I changed the wall configuration back to a single straight wall, but started the robot with a 30cm offset (10cm closer than desired) but still parallel (steering value near zero). Here’s the output:

As can be seen from the above plot, the robot starts out at about 32cm, and slowly closes to about 28cm. Simultaneously the ‘tweak’ value goes from about -0.12 to about -0.18 (at 109749 mSec, the black line). The result is the robot starts moving away from the wall, getting to the desired 40cm offset a little over 2sec later (the red line). After this point it maintains about 40cm offset, as desired. The average offset distance from the red line to the end of the run is about 37.5cm – nice!

So it looks like P = 300 and tweak divisor = 75 is a nice starting point.

30 January 2023 Update:

Now that I have both the PID and divisor values as parameters, I plan to make some more ‘straight wall’ runs to see how the robot behaves. My belief is that a slightly lower divisor ratio, and possibly a slightly higher P value, will be beneficial – we’ll see.

Starting with P = 300 and divisor = 50:

note starting offset of approx 31cm.

This run started with the robot placed roughly parallel to and offset about 32cm from the start of the 4m straight wall section. For the first two seconds the offset increased monotonically to about 38cm, and after that the robot maintained an offset between 35 and 39cm. The average for the ‘maintenance’ portion of the run was approximately 38cm – nice!

PID(350,0,0), divisor = 50:

note starting offset of approx 25cm

As can be seen in the above plot, the robot started off at an offset of approx 27cm, and rapidly (in about 1.5sec) moved to an offset of about 38cm. After that it maintained an offset of between 35 and 41cm for the rest of the run, for an average of 37.5cm. This is excellent performance, and now I have to wonder just a bit if a separate ‘offset capture’ phase is really required.

Making another run with the same parameters (350,0,0) div 50 but with the robot placed more or less parallel but about 10cm from the wall:

As can be seen from the above raw data and plot, the robot starts out at about 15cm and rapidly (within about 1sec) moves to about 37cm. After that the robot maintains a wall distance between 35 and 40cm, with an average distance of 38.4cm – very nice!

Here’s a short video showing the action:

After this run I decided to try a wall configuration with a single 30º break using the same PID(350,0,0) and div factor (50) as before. The robot was placed nearly parallel with an approx 13cm offset. Here’s the telemetry, the corresponding Excel plot, and a short video.

note starting offset of approx 12cm

After this run I decided to push my luck and try a ‘two break’ wall configuration, again with a very small initial offset. Here’s the raw telemetry output, the corresponding Excel plot, and a short video showing the action.

Black and red lines show approx location of first and second breaks, respectively

From the above it is pretty clear that PID(350,0,0) and ‘tweak’ divisor 50 does a very good job of tracking a straight, one-break or two-break wall at a defined offset distance. In addition it is pretty clear that this configuration does not require a separate ‘offset capture’ function – it does just fine all by itself. I guess I’m a little bummed out that I spent so much time ‘perfecting’ (to the degree that anything I do can be said to be ‘perfect’) the capture function.

23 July 2023 Update:

I have been trying to address the tendency of WallE3 to oscillate back and forth after navigating past the 45º break from the entrance hallway into the kitchen, and I kept getting the feeling I had already solved this problem at least once before. I searched through my older posts and found this one, which clearly shows much better performance than I was now seeing. After carefully perusing the above data, I finally figured out the difference; the above runs used a 50mSec interval, and I was currently using a 100mSec interval – oops!

I changed my current code to 50mSec, and ‘sure nuff’ WallE3’s wall tracking performance around breaks improved dramatically – yay!

This episode proves once again the value of copious documentation – you never know when you will need advice from your former self!

After changing the PID update interval to 50 vs 100 mSec, I made a few runs on my ’45º break’ wall configuration with PID = (350,0,0), (350,0,10), (350,0,20), and (350,0,30). As shown in the following four short videos, both the (350,0,10) and (350,0,20) runs produced better tracking performance, but the (350,0,30) run was definitely inferior.

PID = (350,0,0)
PID = (350,0,10)
PID = (350,0,20)
PID = (350,0,30)

After these tests, I edited WallE3_Complete_V3 to use PID = (350,0,20) with a 50mSec interval.

Move to a Specified Distance, Revisited

Posted 24 December, 2022

As part of the suite of tools associated with wall tracking and IR beam homing, I created a set of ‘move to specified distance’ routines using my home-grown PID algorithm as follows:

  • MoveToDesiredFrontDistCm (MTFD)
  • MoveToDesiredRearDistCm(MTRD)
  • MoveToDesiredLeftDistCm(MTLD)
  • MoveToDesiredRightDistCm(MTRD)

I saw some odd problems in my past wall-tracking exercises, so I thought it would be a good idea to go back and test these in isolation to work out any bugs. As usual, I constructed a limited part-task program for this purpose, and ran a number of tests on my desktop ‘range’. I started with the ‘MoveToDesiredFrontDistCm()’ function, as shown below:

MoveToDesiredFrontDistCm ():

Here’s some output from a typical run:

The movement goes OK, and the robot telemetry says it stopped very close to the desired 60cm distance. However, when I measured the actual distance from the ‘wall’ to the robot, I got more like 67 or 68cm, probably indicating that the robot coasted some after the motors were turned off. When I instrumented the code to show the next 10 distance measurements after the exit from the subroutine, it became easy to see that this was the case – the robot coasted from 59 to 67cm. The robot should correct this by going back the opposite way, but it doesn’t because the last measurement (59cm) fits inside the 59-61cm ‘basket’ for subroutine termination (the ‘while’ loop termination criteria).

So, I thought what I could do is check the reported front distance after function exit, and just call it again if the robot had coasted too far from the target. The second run should get much closer, as the starting error term would be much smaller. So now the test code looks like this:

This should have worked well, except when I tried it, the function exited with a ‘STUCK_AHEAD’ error – oops! Here’s a run going from 60 to 30cm:

The ‘stuck’ checks have to be there because the robot can’t ‘see’ obstacles that are too low to interrupt the front LIDAR beam, or aren’t directly in the line of sight, but how to manage? The ‘stuck’ checks depend on a variance calculation on the last 50 measurements (held in a bit-bucket array), so I began to wonder why the front variance was decreasing so rapidly while the rear variance wasn’t (one would think they would behave more or less the same). So I went back and printed out the front & back distance array contents after each run, and saw that the reason the front variance was decreasing so rapidly was because I was using a 50mSec time interval, meaning the 50-element array was getting filled in 2500mSec – or about 2.5sec. So, in order to get more variance in a normal run, either the run has to be done at a faster speed (leading to more overshoot) or the timing interval has to be increased. In earlier work I had discovered that the front LIDAR system produces errors for long distance measurements when using short measurement intervals, so I increased the measurement interval from 50 to 200mSec, and now the variance decreases at a much slower rate – yay!
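For reference, here’s the general shape of a variance-based ‘stuck’ check over the last 50 measurements – a sketch with assumed names and thresholds, not the actual WallE3 code:

```cpp
const int   WINDOW = 50;             // number of measurements in the running window
const float STUCK_VAR_THRESH = 4.0f; // assumed variance threshold (cm^2)

float distBuf[WINDOW];   // circular buffer of recent distance readings
int   bufIdx = 0;

// add a new measurement, then return the variance of the window
float UpdateDistVariance(float newDistCm)
{
    distBuf[bufIdx] = newDistCm;
    bufIdx = (bufIdx + 1) % WINDOW;

    float mean = 0.0f;
    for (int i = 0; i < WINDOW; i++) mean += distBuf[i];
    mean /= WINDOW;

    float var = 0.0f;
    for (int i = 0; i < WINDOW; i++)
    {
        float d = distBuf[i] - mean;
        var += d * d;
    }
    return var / WINDOW;
}

// the robot is declared 'stuck' when the distance essentially stops changing;
// note that with a 200mSec measurement interval the 50-element window spans
// 10 sec instead of 2.5 sec, so the variance decays much more slowly
bool IsStuckAhead(float frontVariance)
{
    return frontVariance < STUCK_VAR_THRESH;
}
```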

With the longer 200mSec measurement interval, the movement routine now has time to adjust for overshoots, as shown below:

As the above run shows, the robot stopped very close (59cm) to the desired 60cm, without any significant overshoot, and the front variance actually increased significantly during the run – nice!

The following run was in the other direction – from 60 to 30cm:

The robot did a very good job of stopping right on the mark, but coasted 2cm further over the next 2sec. This caused the test program to run the movement routine a second time, but it almost immediately exited again. The front/rear variance numbers were quite high, well out of the danger zone (the error codes shown did not affect the routine – it only looks for ‘STUCK_AHEAD’ and ‘STUCK_BEHIND’, which trigger on the front and rear variance thresholds respectively).

So, for at least the ‘MoveToDesiredFrontDistCm()’ function, it appears that PID = (1.5, 0.1, 0.2) works very well. On to the next one!

MoveToDesiredRearDistCm():

Using the same PID set as for MoveToDesiredFrontDistCm() produces excellent results for the ‘rear motion’ routine as well. Here’s a run going from 60 to 30cm based on the rear distance sensor:

MoveToDesiredLeftDistCm():

Next, I tackled the ‘MoveToDesiredLeftDistCm()’ function, which uses the left center distance measurement (corrected for orientation angle). This went fairly quickly, as the PID values for the front/back case seemed to work well for the ‘side’ case too. Here’s the output and a short video from a typical run:
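
As an aside, the ‘corrected for orientation angle’ bit is just right-triangle geometry: a robot rotated with respect to the wall measures a longer slant distance than the true perpendicular distance, so the raw measurement gets multiplied by the cosine of the orientation angle. A one-function sketch (hypothetical names again):

  // hypothetical sketch of the orientation-angle correction: a robot
  // rotated by orientDeg relative to the wall measures a longer slant
  // distance, so multiply by cos(angle) to recover the perpendicular one
  float CorrectedSideDistCm(float rawCm, float orientDeg)
  {
    return rawCm * cos(radians(orientDeg));
  }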

MoveToDesiredRightDistCm():

For this side, I simply copied ‘MoveToDesiredLeftDistCm()’ and changed all occurrences of ‘Left’ to ‘Right’. Here’s the telemetry and a short video from a typical run:

Summary:

It looks like all four (front/back/left/right) motion features are now working fine, with the same PID = (1.5, 0.1, 0.2) configuration – yay!

Now the challenge is to integrate this all back into my mainline program and start testing wall tracking again.

Stay Tuned,

Frank

Boca Chica Space X Pilgrimage

Posted 26 January 2022.

I grew up on Florida’s east coast (Daytona Beach) in the 50’s and 60’s. We could watch the launches from our front yard, and once my dad and I travelled the 100 miles or so south to Cape Canaveral to witness one of the Apollo launches. Back in those days folks just parked alongside the roads and watched. I vividly remember watching the launch, and many seconds later actually feeling the sound vibrate through my chest – what an experience! Later, working as an engineer for the government, I watched the Challenger disaster from a Florida highway overpass. Since then I have watched NASA become a cringing shadow of itself and the U.S. become a nation without the capacity to launch humans into space. It took Elon Musk and Space X to show the world how to make space accessible again, and show us an even bigger dream – Starship and Mars.

I have been following developments at Boca Chica, Texas since shortly after StarHopper made its famous 500′ flight in August of 2019, and have watched through the eyes of Boca Chica Mary and all the other NSF crew as the launch site grew from nothing but a single concrete landing pad to the state it is in today, with sub-orbital and orbital flights visible on the horizon. Regardless of whatever else happens or where it happens, Boca Chica Texas will be forever known as the place where human interplanetary travel got its start.

I have wanted for some time now to see this magical place with my own eyes, and this week my wife and I finally got to do our Starbase pilgrimage. We flew from Columbus, OH to Harlingen, TX on Southwest. We stayed overnight in Brownsville, and then made the drive out to Starbase this morning. All the pictures, videos, and commentary can’t do justice to the real thing. In this post I hope to add my ‘pilgrim’s point of view’ in the hope that others will be encouraged to do their own pilgrimage.

In preparation for the trip, I had been asking around about Starbase do’s and don’ts since last November, but hadn’t gotten much in the way of useful information. NSF Discord member xredbaron62x was the most helpful, passing along links to some other threads about Boca Chica trips, but even those didn’t offer much. In particular, most posters mentioned that South Padre Island (SPI) was the best place to stay, but that option doubles the round-trip distance to the launch site – you have to go from SPI to Brownsville, and then from Brownsville to BC. I couldn’t see the logic for this unless there was a specific reason, like a known launch date, so we booked a hotel in B’ville.

We arrived late Tuesday night, and sacked out at the hotel. The next morning we got up, had breakfast there, and headed out about 9 am. There were closures scheduled for both the days we were going to be there, but I had checked the night before and the Wednesday one had already been cancelled, so we didn’t have to worry about the 10am closure. As it turned out, leaving B’ville at 9am meant we were ‘off-shift-change’ and traffic along Hwy 4 was almost non-existent. As we drove south on a beautiful sunny day, I happened to notice what appeared to be brand-new high-voltage utility poles alongside the road (I’m an Electrical Engineer – we tend to notice these things).

brand-new high-voltage utility lines? There’s nothing out here, except for…..

With my background as an EE I could tell these poles were carrying some serious high voltage and high current, based on the number of conductors per insulator position, and the spacing between the insulators. Since there’s nothing on this road except Space X’s production and launch facilities, I jumped directly to the conclusion that these lines were intended to upgrade power at the Space X sites. As we travelled south, we could see that these power lines were just being installed, as we started passing construction crews and very large conductor rolls.

Here the power lines switched to the west side, and you can see the big rolls of conductors waiting for installation

Further on, we saw an odd roadside sign, complete with a yellow smiley-face. This is the entrance to the shooting range we’ve all heard about in conjunction with Starbase.

Sometime after that, I started to see the instantly recognizable skyline of the production site, and very faintly beyond that, the launch site itself. I like this photo because it gives the viewer some idea of the isolation of this place; there’s nothing around except miles and miles of miles and miles.

Now entering the Twilight Zone…..

Here’s a short video to emphasize the ‘miles and miles of miles and miles’

We stopped a little later on so I could take some additional photos, and I happened to notice a roadside historical memorial to ‘Camp Belknap’, where 7,000-8,000 volunteers for the 1840s war with Mexico were housed in terrible conditions over a Texas summer. Looking out over the terrain, I can’t imagine anyone lasting long in such desolation, much less 7,000-8,000 soldiers. Then I noticed that the site is well-kept, with red, white, and blue flowers on one side and an American flag on the other – maybe there’s still hope for our country after all!

very well kept roadside historical monument

As we got closer to the production site, details – like SN16, SN15, and (Booster 3??) – became visible

Here’s a short video taken as we passed the iconic ‘S T A R B A S E’ sign

We drove on past the production site, with me catching just a glimpse of a woman with a big camera (BC Mary?). Between the production site and the launch site, we came to this – a big stop sign in a road construction area.

Big-time road construction

The next mile or so between the production and launch sites was all torn up; I think they have been working on this section of the road forever.

And here we are at the launch site, watched over by the one and only Starhopper

Starhopper rules the launch site

As we wandered along the road-edge on the far side of the road, I noticed that there wasn’t much going on, and thought I might be able to get an ‘I was here’ photo from near the security booth. So I trotted over to the security booth and asked the guard if it would be OK. He said “Sure – see that line in the pavement (pointing to a faint line that apparently marked the near edge of the public road)? As long as you are on the other side of that line, you’re OK”. Cool! So I stepped forward about 1 foot and my wife took this shot – Yay!

“Just over the line” at the entrance to the launch site

Next we went down to the beach, as I wanted to see if I could find one of the many robot cameras we all have been watching.

Nice beach, with some rain clouds coming in from offshore
My wife and chauffeur for our Boca Chica adventure!

I didn’t find any NSF cameras, but I did find a guy with a huge camera linked to a Starlink terminal. He said he was with a German outfit doing regular narrated Starbase updates (I forgot the name of the app), and he pointed further down the beach to where he said some Lab Padre guys were set up. By this time the offshore rain clouds had gotten a lot closer, so we beat feet back to the car.

On our way back to the hotel, we ran across this sign, which we thought summed up our hopes and wishes for Starbase

Stay tuned

Frank