Iterating on the first Arduino project.
The project was something that could definitely work well in theory, but it ran into a lot of issues and shortcomings that I didn’t initially expect. The original idea was to let someone type out words without the use of fingers or fine motor skills. It would be used like this:
- Once the ultrasonic sensor detects your hand, it keeps measuring the distance to it. It uses this distance to select a letter.
- Once the sensor detects a sudden change in distance (that is, you moved your hand out of the sensing area), it types the most recently selected letter.
But it had a lot of problems. It seemed like a good idea in theory, but most of its shortcomings can be summarized by this video:
(sorry for the dark video, the screen was just too bright in comparison)
Digging into why it was so inconsistent, I printed out the distances the sensor measured and got this:
Despite its errors, it did work well enough for a short demonstration and video for the assignment, but it was nowhere near what I had imagined. The whole thing needed a better frame to make the hardware more reliable, and an entire software rewrite to accommodate the small errors. Even then, I found some oversimplifications in my original description of the project.
So first of all, the distance measurements are unreliable. I saw three sources of this, and three solutions I would try to implement first:
- When you’re really far out (the letter Z is ~50cm away from the sensor), you can’t really tell if the sensor is facing in your hand’s direction.
  - Build a solid frame that holds the sensor perpendicular to some ruler/surface.
- “A sudden change in distance” is really vague and hard to quantify, especially with a sensor this unreliable.
  - Once I get the frame, put something at the far end so I can tell exactly when a hand is/isn’t in the sensor’s path.
- “Hands” are oddly shaped and aren’t always detected too well.
  - Use a 90-degree reflector instead.
First, making more reliable hardware
I knew that about 2cm per letter was a good balance: the whole alphabet doesn’t take too much space, but you also don’t have to be too precise. So I started preparing a ~55cm-long scaffold to hold the sensor and display. I created a cutting pattern for an open box using Boxes.py, and added a few more cutouts of my own for the sensor, screen, and wires.
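As a rough sketch (not the actual project code), the distance-to-letter mapping with 2cm per letter might look like this. The 4cm dead-zone offset is my own guess, picked so that ‘h’ lands in the 18-20cm band mentioned later in the post:

```cpp
// Hypothetical distance-to-letter mapping: ~2cm of travel per letter,
// with an assumed 4cm gap between the sensor and the letter 'a'.
constexpr float kOffsetCm    = 4.0f;  // assumed dead zone in front of the sensor
constexpr float kCmPerLetter = 2.0f;  // spacing from the post

char letterAt(float distCm) {
    int idx = static_cast<int>((distCm - kOffsetCm) / kCmPerLetter);
    if (idx < 0)  idx = 0;   // clamp readings that fall before 'a'
    if (idx > 25) idx = 25;  // ...or past 'z'
    return static_cast<char>('a' + idx);
}
```

With these numbers, ‘z’ ends up roughly 54-56cm out, which lines up with the ~50cm figure above.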
The pieces at the top will make up the open box, while the 4 bottom pieces in the cut pattern will become the 90-degree reflectors, which also mark exactly where the sensing path ends.
Once cut, I started putting together the pieces!
My proudest part of this, really.
The ultrasonic sensor was an amazing fit, with 10/10 positioning of the wiring hole. I was honestly way too happy about getting this one bit so perfect 🙂
Downhill from there
The display might look like an almost perfect fit, but the cut hole was just barely too small because of a tiny part of it I didn’t account for. I fix this later.
The 90-degree reflector! I had heard that angled pieces like this reflect light and sound back toward the source much more reliably, so I thought it would help a lot.
I made the mistake of cutting the piece a little too big because I didn’t account for tolerances when measuring, cutting, and Pythagorean-Theorem-ing.
And then finally, it’s all coming together! Some last things to fix though.
Measure once, cut twice
Now as I mentioned before, the screen was not fitting perfectly, and the reflector was cut too big. I quickly fixed those issues.
The oversized reflector was marked, cut, then put back:
And then for the screen, this tiiiny part of the screen/wires was getting caught in the wood and would not let it fit.
And so, after a bit of carefully carving out the wood, I managed to make some room. (There’s a pretty guilty-looking utility knife in the background.)
After some hot gluing, taping, and writing the letters on the board as a guide, it came together pretty well! From a hardware perspective, it had everything to make the ultrasonic sensor as reliable as possible, addressing all of the problems above.
Actually writing good code this time
It was now time for the software rework.
After toying with the old version of the code, I found several issues to address:
Issue #1: No matter how reliable everything is, I’ll get the occasional outlier from the ultrasonic sensor.
The first version of the code used the statistical mean of ~30 samples, but outliers dragged that around a lot. So instead of the mean, I now use the median. That way, huge outliers have no effect as long as they’re not the majority.
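The median filter could be sketched like this (the ~30-sample batch size is from the post; the helper name is mine):

```cpp
#include <algorithm>
#include <vector>

// Median of one batch of distance readings. Taking the vector by value lets
// nth_element reorder a copy without disturbing the caller's data.
float medianOf(std::vector<float> samples) {
    std::nth_element(samples.begin(),
                     samples.begin() + samples.size() / 2,
                     samples.end());
    return samples[samples.size() / 2];
}
```

A single 300cm glitch among ~30 honest readings barely moves the median, while it would shift the mean by almost 10cm.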
Issue #2: As I move my hand away to “select” a letter, I might accidentally select an adjacent letter instead.
For example, say the user wants to select ‘h’, which is between 18cm and 20cm. The sensor might return a series of numbers like this:
- 19.3cm (h)
- 18.8cm (h)
- 19.1cm (h)
- 21.2cm (i)
- 57.8cm (hand completely moved away)
Once the sensor realizes the hand completely moved away, it needs to select the letter that was most present in the last few seconds (which is ‘h’), not just the most recent measurement (which would be ‘i’).
I solved this by keeping track of the ~7 most recent measurements, and then taking the median value once it realizes the hand moved away.
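The select-on-release logic above can be sketched like this. The 7-reading history is from the post; the jump threshold, names, and return convention are my own assumptions:

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

// Remember the last ~7 (already median-filtered) readings. When the distance
// suddenly jumps far past the last reading, treat it as "hand moved away" and
// commit the median of the remembered readings, not the final one.
struct ReleaseSelector {
    static constexpr std::size_t kHistory = 7;
    static constexpr float kJumpCm = 15.0f;  // assumed "hand left" threshold
    std::deque<float> recent;

    // Feed one reading; returns the committed distance on release, else -1.
    float update(float distCm) {
        if (!recent.empty() && distCm - recent.back() > kJumpCm) {
            std::vector<float> v(recent.begin(), recent.end());
            std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
            recent.clear();
            return v[v.size() / 2];  // median of the pre-release readings
        }
        recent.push_back(distCm);
        if (recent.size() > kHistory) recent.pop_front();
        return -1.0f;
    }
};
```

Fed the example sequence above, this commits 19.3cm (in the ‘h’ band) when the 57.8cm reading arrives, rather than the stray 21.2cm that would have typed ‘i’.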
I set a delay at the end so that the user gets some moment to see that the letter they want has been selected.
Issue #3: There’s no good way to tell if you’re in the ‘border’ between two letters.
If you take a look at the earlier video or images, you can see it displays a “Distance”. This was meant to serve that purpose, but it only really works for me because I roughly know where the letters are supposed to land. And even then it’s not very useful, because of course I don’t have the exact thresholds memorized.
I had all this screen space to work with, so I thought I could put a pretty intuitive indicator of how “centered” you are on the letter! It would work like this:
If you’re currently selected on the letter ‘c’, and perfectly centered, the display would show something like this:
- `b  c  d`
Otherwise, if you’re selected on the letter ‘c’ but a little over to the left:
- `b c   d`
Or if you’re a little over to the right:
- `b   c d`
Hopefully that would indicate which way to move your hand to get a more precise selection of the letter ‘c’.
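One way the indicator could be computed, reusing the assumed 2cm-per-letter mapping from earlier. The three-slot layout and thresholds here are my own invention, not the project’s exact display code:

```cpp
#include <string>

// Render a one-line centering indicator: the previous and next letters stay
// put, while the current letter shifts within a 3-character slot depending
// on where the reading falls inside its 2cm band.
std::string centerIndicator(float distCm) {
    const float offsetCm = 4.0f, cmPerLetter = 2.0f;  // assumed mapping
    int idx = static_cast<int>((distCm - offsetCm) / cmPerLetter);
    float frac = (distCm - offsetCm) / cmPerLetter - idx;  // 0..1 in the band
    char prev = (idx > 0)  ? static_cast<char>('a' + idx - 1) : ' ';
    char cur  = static_cast<char>('a' + idx);
    char next = (idx < 25) ? static_cast<char>('a' + idx + 1) : ' ';
    std::string mid = "   ";  // three slots for the current letter
    int slot = (frac < 0.33f) ? 0 : (frac < 0.67f) ? 1 : 2;
    mid[slot] = cur;
    return std::string(1, prev) + " " + mid + " " + next;
}
```

A reading dead-center in the ‘h’ band (19cm under these assumptions) renders `g  h  i`, while readings near either edge of the band nudge the ‘h’ toward its neighbor.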
And with that, we’re done!
I’ll let the video speak for itself 🙂
Most of the trial-and-error in this project was really in the programming: trying things that seemed like they would intuitively or theoretically work, and not being sure whether my method was bad or my program was buggy. There was one hour where everything was working fine, but it would inexplicably type “z” no matter what. I was completely sure I was writing the correct letter to the LCD, and then I realized I was just overwriting the correct letter with “z” a couple of milliseconds later because of a bad if statement 🙁
There was also a lot of tinkering with how many samples I should take for a reliable median, how many readings I should “remember” for the selection, and how much space to give each letter. If these numbers were too big, the system felt sluggish; too small, and it made more mistakes.
Then there was reorganizing and commenting everything, because holy moly, the original code I wrote was awful.
When I saw how well it was working in the end, I was genuinely so happy with its consistency and reliability. The typing, the “Ready” notice, and the visual guide and feedback for each letter were all exactly how I imagined them. (Almost) everything fits together perfectly, and there’s so little about it that feels hacked together or improvised. I can’t express just how “complete” this project feels, having made no real compromises in quality, accuracy, or usability.
One thing I considered was laser cutting the letters onto the board, but I thought it would be better to hand-write them so that I could fiddle with the letter spacing and locations in the code. Next time though!