Storyboard for Autonomous Vehicle System

In Class:

In class, I made a Cookie Monster head. I really liked the idea of combining motors with fabrics to create automated stuffed animals, and this was an application of that. I was already familiar with Arduinos, so most of the material was review; however, I had never used motors for a use case like this and had a lot of fun doing so. It reminds me of what Disney's Imagineers do, making all the animatronic characters in the parks. They definitely have one of the coolest jobs.

Here's a video of the Cookie Monster head.

I'm really happy with how it turned out. The motion of the mouth looks great, and the pipe cleaners and fabric worked nicely to create the look of Cookie Monster.


The goal of this assignment was to build an Arduino mockup of a real self-driving car's obstacle detection and handling system. The overall idea is that an ultrasonic sensor works in tandem with a camera to determine whether an object is close by, and an LED, acting like a brake light, turns on when those conditions are met. The project was broken into the following sections:

  1. Computer vision system for real-time object detection: a lightweight CNN, run with Python and OpenCV, detects objects. I wasn't using a GPU, so this ran at roughly 1 FPS, which is obviously not ideal for real-world usage but suitable for this mockup.
  2. CAN-based microcontroller for the ultrasonic sensor and LED: an Arduino reads distance data from the ultrasonic sensor and sends it through the CAN module to the CAN-to-IP gateway described below.
  3. CAN-to-IP gateway to translate between CAN messages and IP packets: modern cars need the throughput of an Ethernet network to transmit sensor and camera data, as well as to receive software updates from the cloud. To do this I connected an Ethernet Shield to a CAN module.
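As a rough sketch of the math the Arduino performs for the ultrasonic sensor in step 2, here is the standard round-trip echo-time-to-distance conversion for an HC-SR04-style sensor (the function name and the assumption that this particular sensor was used are mine, not from the project):

```python
# Speed of sound in air at roughly room temperature, in cm per microsecond.
SPEED_OF_SOUND_CM_PER_US = 0.0343

def echo_to_distance_cm(echo_duration_us: float) -> float:
    """Convert a round-trip ultrasonic echo time (µs) to distance in cm."""
    # The pulse travels to the obstacle and back, so halve the total time.
    return (echo_duration_us * SPEED_OF_SOUND_CM_PER_US) / 2

# An echo of about 1166 µs corresponds to roughly 20 cm.
print(round(echo_to_distance_cm(1166), 1))
```

The same two-line computation is what an Arduino sketch would do with the pulse width returned by `pulseIn()`.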

Please note: CAN is used for communication between Arduinos, sending small packets of data with very low latency.
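A CAN frame carries at most 8 bytes of data, so the distance reading has to fit in a small fixed layout. The sketch below shows one hypothetical encoding (the arbitration ID, field layout, and function names are illustrative assumptions, not the project's actual wire format):

```python
import struct

# Assumed arbitration ID for distance frames (illustrative only).
CAN_ID_DISTANCE = 0x101

def pack_distance_frame(distance_mm: int, seq: int) -> bytes:
    """Pack a distance reading into an 8-byte CAN data field:
    16-bit big-endian distance in mm, 8-bit sequence counter, 5 pad bytes."""
    return struct.pack(">HB5x", distance_mm, seq & 0xFF)

def unpack_distance_frame(payload: bytes) -> tuple[int, int]:
    """Recover (distance_mm, seq) from an 8-byte payload."""
    distance_mm, seq = struct.unpack(">HB5x", payload)
    return distance_mm, seq

payload = pack_distance_frame(250, 7)
print(len(payload), unpack_distance_frame(payload))  # 8 (250, 7)
```

The gateway side would wrap this same 8-byte payload (plus the CAN ID) into a UDP or TCP packet for the computer/RPi.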

Below is the architecture/storyboard for this:

The flow of data: ultrasonic sensor readings are sent to the computer/RPi over CAN and Ethernet, where they are processed alongside the object detections, and a command to turn the LED on or off is sent back to the Arduino.
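The decision step on the computer/RPi side boils down to a simple AND of the two sensor inputs. A minimal sketch, assuming a 50 cm braking threshold (the threshold and names are illustrative, not the project's actual values):

```python
# Distance below which the "brake light" LED should turn on (assumed value).
BRAKE_DISTANCE_CM = 50.0

def should_brake(object_detected: bool, distance_cm: float) -> bool:
    """LED turns on only when the camera sees an object AND the
    ultrasonic sensor reports it is within the braking distance."""
    return object_detected and distance_cm < BRAKE_DISTANCE_CM

print(should_brake(True, 30.0))   # object close by -> LED on
print(should_brake(True, 80.0))   # object too far away -> LED off
print(should_brake(False, 30.0))  # nothing detected -> LED off
```

Requiring both inputs to agree filters out spurious ultrasonic readings (and camera false positives) before the command is sent back to the Arduino.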

This led to the fritzing diagram shown below:

The final result was a working system that could detect objects and measure distance. Because the frame rate of the object detection was so low, it would not be feasible for a real system; as a mockup of the real thing, though, it was successful.

Below is a photo taken of the object detection system, which was able to detect that a phone was close to the webcam, and draw a bounding box around it.


Below is a video demonstrating the ultrasonic sensor in action. Note that it is functioning in tandem with the object detection system, with data being sent to the computer via Ethernet.




Overall, I'm not satisfied with the result of this project. The ultrasonic sensor does combine well with the camera detection, giving a good estimate of a user's distance to the "car" on a frame-by-frame basis. However, the neural network takes so long to run that it only detects objects at 1 FPS, which is obviously not usable in a real-world scenario; the frame rate needs to be improved. I'll be looking into optimizations for this project.