I (Vijay Kethanaboyina) am excited to begin the internship! This week, we ran a preliminary version of the experiment using a store-bought RC car with a cell phone camera mounted on it to record data. This approach gave us a basic understanding of our project, but we still need to figure out how to mount a camera onto the actual, 3D-printed car and get the tracking software working in real time. For this preliminary run, we instead recorded a cell phone video, which I then manually fed into the tracking software; in the future, we intend to find a solution that allows real-time tracking. [Update: Later on in the project, we did find a method to track the balloon in real time. See the subsequent weeks for more info.] I spent some time drafting a plan for improving our car design so that the captured footage will work with the program, and I experimented with some alternate code for my tracking software. Our next step is to run another version of the car experiment with some modifications to the RC car so that I have an opportunity to test my software.
Please see the following image of the car:
When we run our next experiment, we will need to consider the following changes:
- The cork should be spray-painted or brightly colored to make it stand out from the background.
- We should put black, opaque tape on the back side of the see-through container (the side farthest from the camera) so that the camera cannot see past the cork.
- We can use a selfie stick to mount the phone onto the car instead of a complicated cardboard contraption. I have a selfie stick at home that I am willing to modify for the purposes of the internship. Mounting a camera onto the car was the hardest part of building the setup: we had to experiment with many jury-rigged solutions involving cardboard boxes, and we even contemplated putting duct tape directly onto a cell phone's screen, though that idea was ultimately scrapped due to the risk of damaging the device, even with a screen protector.
[First code example]
And now, the Python example:
[Python code example]
Observe how the two examples are very similar in structure.
I have been working on updating my tracking software to use object-based tracking instead of color-based tracking; this results in greater precision, but it also means that once a given object goes out of frame, the entire experiment has to be repeated. There is a way around this -- you could have someone manually re-calibrate the tracker each time the object leaves the frame -- but it's not a particularly effective solution. I have yet to find a way to deal with the fact that the tracker works at 10-20 FPS at most, which is often too slow to capture the fast movements in our project. However, instead of using my laptop's built-in camera (which has a capped framerate of 24 fps), we can use a cell phone camera that records at a higher framerate (e.g., 60 fps) and then run the video files through the software. Using pre-recorded videos that I took while we met in person this Friday, I'm going to try adapting my tracking software to track the location of a spring and generate a real-time graph of how the spring moves (a simple harmonic motion experiment, in other words). The spring experiment won't involve a lot of coding, because I can re-use much of what I already wrote for the original tracking software.
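For context, here is a minimal sketch of what object-based tracking over a pre-recorded video looks like with OpenCV. This is an illustration of the general approach rather than my actual project code; the filename and the choice of the CSRT tracker are assumptions.

```python
# Rough sketch (not the exact project code): run an OpenCV object tracker over
# a pre-recorded phone video instead of the live laptop camera.
# Requires opencv-contrib-python for the CSRT tracker.
import cv2

cap = cv2.VideoCapture("spring_video.mp4")   # hypothetical filename
ok, frame = cap.read()

# Manually select the object (e.g., the cork or spring) in the first frame.
roi = cv2.selectROI("Select object", frame, showCrosshair=True)
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, roi)

while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    ok, box = tracker.update(frame)
    if ok:
        x, y, w, h = (int(v) for v in box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    else:
        # Once the object leaves the frame the tracker is lost; this is the
        # limitation described above.
        cv2.putText(frame, "Tracking lost", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow("Tracker", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```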
Here are some pictures of the oscillatory motion sub-project:
I've been experimenting with a Python library called Mediapipe a bit more as well -- more on that and the spring project next week.
I met with my team once over Zoom and once in person. We managed to get a simple version of our experiment working, though it involved a lot of jury-rigging; we need to transition from our cardboard-and-duct-tape solution to something sleeker in the future, but in the meantime, the work we've done so far suffices as a proof of concept. We're already, I estimate, 60-70% finished with the circular motion experiment we intend to do. All that's left is to attach all of the electronics to our platform correctly, place the helium-balloon container on the platform, and record data using a Raspberry Pi with a camera attached to it (or a cell phone). I did some work on the oscillatory motion sub-project and managed to get some basic tracking code working, allowing me to track an extremely fast-moving spring without needing to slow down the footage with an external program. I switched from object-based tracking to background subtraction (see https://docs.opencv.org/4.x/d1/dc5/tutorial_background_subtraction.html). The way the OpenCV algorithm works is quite elegant; it relies on concepts familiar from physics, such as moments and centroids.
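As a rough illustration of how moments and centroids come into play, here is a minimal background-subtraction sketch in the spirit of the linked tutorial; the filename and parameter values are placeholders, not my exact code.

```python
# Minimal sketch of background-subtraction tracking with OpenCV.
import cv2

cap = cv2.VideoCapture("spring_video.mp4")   # hypothetical filename
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)            # white pixels = moving object
    mask = cv2.medianBlur(mask, 5)            # clean up salt-and-pepper noise

    # Use image moments to find the centroid of the largest moving blob.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        m = cv2.moments(biggest)
        if m["m00"] > 0:
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            cv2.circle(frame, (cx, cy), 5, (0, 255, 0), -1)

    cv2.imshow("Background subtraction", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```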
I met with my team over Zoom and in person. We recorded updated footage of the rotating platform experiment. The new videos work well with the software, but one problem is that the speed of the computer running the software may affect the results because of the way we ran our time calculations. I plan to fix this by gauging the number of elapsed seconds from the number of elapsed frames rather than from a built-in clock. The spring video side project will be put aside until I get this done. The platform experiment itself is close to done, but we have yet to purchase a container and put a balloon inside it.
I was able to get the program to run much faster using multithreading, but this created other problems, namely that the tracker's framerate became so high that it interfered with the program's ability to recognize objects: https://drive.google.com/drive/u/2/folders/1J1bo39o0pS-4FkoyiuubsSYXK_xavdqh
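For reference, the kind of threaded video reading I experimented with looks roughly like the sketch below, which uses imutils to decode frames on a background thread while the main thread does the processing; the filename is a placeholder and this is not the exact project code.

```python
# Rough sketch of threaded video reading: imutils.FileVideoStream decodes
# frames in a background thread while the main loop does the tracking work.
import cv2
from imutils.video import FileVideoStream, FPS

stream = FileVideoStream("platform_video.mp4").start()   # hypothetical file
fps = FPS().start()

while stream.more():
    frame = stream.read()
    if frame is None:
        break
    # ... tracking / drawing would happen here ...
    cv2.imshow("Frame", frame)
    fps.update()
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

fps.stop()
print("approx. processing FPS: {:.2f}".format(fps.fps()))
stream.stop()
cv2.destroyAllWindows()
```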
I will work on fixing the multithreading issue later (it is likely an issue with the imutils library). In the meantime, I have also been working on fixing the framerate issue. The problem is that the code outputs different data on different computers. I fixed this by calculating the time based on the number of elapsed frames, which remains consistent across all devices. See this video: https://drive.google.com/file/d/1wnfzL_WIaIEe_ph3aeNKDwDNFKqXIBYK/view?usp=sharing. Even though my laptop runs quite slowly, the data itself is still valid; I manually checked it to ensure that this was the case.
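The core of the frame-based timing fix looks something like the following sketch (the filename and variable names are illustrative, not the project's exact code):

```python
# Sketch of the frame-based timing fix: derive elapsed time from the frame
# index and the video's native frame rate instead of the wall clock, so the
# output no longer depends on how fast the computer processes each frame.
import cv2

cap = cv2.VideoCapture("platform_video.mp4")      # hypothetical filename
video_fps = cap.get(cv2.CAP_PROP_FPS)             # e.g. 60 for phone footage

frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    elapsed_seconds = frame_index / video_fps      # same on every computer
    # ... use elapsed_seconds in the angle/velocity calculations ...
    frame_index += 1

cap.release()
```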
We met three times and collected several sets of data this week. We updated the apparatus, the type of object being tracked, the object's color, the calculations the program makes, and the camera software (I installed an external camera app that automatically stabilizes the frames and prevents shaky footage). In other words, we now have enough data. During one of our meetings, we discussed the physics of the project in detail, and I made some notes on future changes to make to my code.
The next steps of our project will involve me making substantial updates to my program to account for potential noise and inaccuracies. We are planning to incorporate ML/AI into our program and use regression to fix some inconsistencies and noise in our data.
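As a rough idea of what the regression-based cleanup could look like, here is a sketch that fits a polynomial to a noisy angle-vs-time series; the sample data is synthetic and the polynomial degree is an arbitrary assumption.

```python
# Illustrative sketch of the regression idea: fit a low-order polynomial to a
# noisy angle-vs-time series and use the fitted curve in place of the raw,
# jittery values. The data below is made up purely for demonstration.
import numpy as np

t = np.linspace(0, 10, 200)                            # seconds
noisy_angle = 30 * np.sin(0.8 * t) + np.random.normal(0, 3, t.size)

coeffs = np.polyfit(t, noisy_angle, deg=6)             # least-squares fit
smoothed_angle = np.polyval(coeffs, t)                 # de-noised estimate
```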
Data, videos, and images will be uploaded to this site once Bryce and I can get the videos uploaded from our phones. Additional information to follow!
I have been working on getting my program to work with the data that we recorded last week. I made several changes to the program:
1) I implemented a few filters (simple if-else statements) that remove any data that is clearly bogus, e.g. angle values greater than 180 degrees or acceleration values in the tens of thousands.
2) I overhauled a large chunk of trigonometry calculation code, fixing several bugs and calculation errors. I completely changed the way the program draws the triangles needed for angle calculation.
3) I entered the data into Excel to create several graphs so that I could visualize what was going on.
4) I added a snippet of code that outputs the data to a CSV file as well as to the console (see the sketch after this list).
5) I modified the filter system to work with both phones in our experiment apparatus (you have to use a separate set of checks depending on the type of video being recorded).
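Here is a hedged sketch of the filtering and CSV-output ideas from items 1) and 4); the thresholds, field names, and file name are illustrative assumptions rather than the exact values used in the project.

```python
# Sketch: reject physically impossible readings with simple threshold checks,
# then write the surviving rows to a CSV file as well as the console.
import csv

MAX_ANGLE_DEG = 180.0      # angles beyond this are geometrically impossible
MAX_ACCEL = 10_000.0       # accelerations beyond this are treated as noise

def is_plausible(row):
    """Return True if a (time, angle, acceleration) sample passes the filters."""
    return abs(row["angle_deg"]) <= MAX_ANGLE_DEG and abs(row["accel"]) < MAX_ACCEL

def write_results(rows, path="tracking_output.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["time_s", "angle_deg", "accel"])
        writer.writeheader()
        for row in rows:
            if is_plausible(row):
                writer.writerow(row)
                print(row)          # mirror the CSV output to the console
```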
I also worked on some vector physics concepts with my team. The data that we have is good, but to make it more accurate, we will need to re-record several videos. The physics of the activity is complicated by the fact that we are working in three dimensions, not two, hence the need for several cameras and more involved calculations. But if we modify the experiment apparatus and change how the video is recorded, we can greatly simplify the physics.
As you can see in the image below, implementing a few basic filters and fixing the trigonometry calculation leads to much more accurate data. However, there is a catch to filtering: with lower-quality videos, you are often forced to either filter out the majority of the data (in which case you have little to work with) or leave the bad data alone (in which case you have to work with data of questionable accuracy). The only real solution here is to re-record any problematic videos.
As you can see in the below image, the code fixes implemented have improved the primary phone's data, but the secondary phone's data remains more erratic and hard to use. This is not because of a programming bug but rather an issue with how the video itself was recorded, meaning that the only way to fix the problem is to re-record the video.
We ran into several roadblocks while working out the physics of the project, including an odd fluid mechanics phenomenon that causes the object in our apparatus to behave unusually, so we had to filter out parts of the data and alter the scope of the project. I made several other changes to my tracking code; the re-filmed videos were much easier to work with because we simplified the algorithm and trigonometry necessary to get the data. I compiled data from several graphs and posted them on the Gav engineering sets.
There are a few steps we could take next: we could try doing the experiment with a more advanced apparatus (e.g. a 3D-printed car), we could try using machine learning and regression to smooth out the unusual oscillations present throughout the data, or we could go deeper into the physics to understand what exactly is causing the unusual phenomena present in the more recent videos.
The updated tracking software displays more information on the frame and draws a simpler triangle to make the calculations easier and more accurate:
For reference, here's what the old tracker looked like:
Notice how in the older version of the program, the anchor point isn't custom-selected. The newer tracker has other neat features as well, such as a beep sound that plays when the program is finished, better file output naming, the ability to adjust the tracker mid-video without resetting the timer or changing the output file, etc.
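To give a sense of what drawing a simpler triangle for the angle calculation might look like, here is an illustrative sketch (not the project's actual drawing code); the anchor point, centroid, and colors are placeholders.

```python
# Sketch: draw the right triangle between a user-chosen anchor point and the
# tracked centroid, and overlay the resulting angle on the frame.
import math
import cv2

def draw_angle_overlay(frame, anchor, centroid):
    """Draw the triangle between anchor and centroid and label the angle."""
    ax, ay = anchor
    cx, cy = centroid
    corner = (cx, ay)                                   # right-angle vertex

    cv2.line(frame, anchor, corner, (255, 0, 0), 2)     # horizontal leg
    cv2.line(frame, corner, centroid, (255, 0, 0), 2)   # vertical leg
    cv2.line(frame, anchor, centroid, (0, 255, 0), 2)   # hypotenuse

    # Image y-coordinates grow downward, so flip the sign of the vertical leg.
    angle_deg = math.degrees(math.atan2(ay - cy, cx - ax))
    cv2.putText(frame, f"{angle_deg:.1f} deg", (ax + 10, ay - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    return angle_deg
```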
Screenshots of all of the graphs I generated can be found below:
An updated version of the tracking code can be found in the table below:
[Table: updated tracking code]
The poster presentation took up a lot of our time this week as we worked on figuring out how to visually represent all of the work we've done this semester. We went through several iterations of the poster, focusing on reducing the number of words in each "cell" of our presentation. Our goal is not to create a large block of text, but rather to use a combination of short phrases and images that act as cues for our presentation on Monday.
In the meantime, I've been working on graph generation software that can automatically generate graphs of any dataset outputted by the tracking software. I've also been experimenting with rudimentary noise reduction code that can smooth out curves and remove sudden fluctuations in the data. See the below screenshot:
The problem with the current code is that although it can smooth out curves to make them look nicer, it can't predict future data; a more advanced ML model could provide better results, something I hope to look into in the coming weeks once GEAR club is in session. Time series forecasting (using libraries like TensorFlow) looks to be a particularly promising method.
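For the curious, the graph generation plus rudimentary smoothing can be sketched roughly as follows; the CSV path, column names, and window size are assumptions, not the project's actual values.

```python
# Sketch: read a CSV produced by the tracker, apply a simple moving average to
# suppress sudden fluctuations, and save a plot of raw vs. smoothed values.
import pandas as pd
import matplotlib.pyplot as plt

def plot_smoothed(csv_path="tracking_output.csv", column="angle_deg", window=9):
    data = pd.read_csv(csv_path)
    smoothed = data[column].rolling(window, center=True, min_periods=1).mean()

    plt.plot(data["time_s"], data[column], alpha=0.4, label="raw")
    plt.plot(data["time_s"], smoothed, label=f"moving average (window={window})")
    plt.xlabel("time (s)")
    plt.ylabel(column)
    plt.legend()
    plt.savefig("angle_vs_time.png", dpi=150)

if __name__ == "__main__":
    plot_smoothed()
```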
This is the most-up-to-date version of the tracking code:
[Table: most up-to-date tracking code]
And this is the most up-to-date version of the graph generation code:
[Table: most up-to-date graph generation code]
If you'd like to test the tracker on your own system, you will need the below sample video:
https://github.com/vkethana/balloon-tracking-software/blob/main/video.mov
A sample dataset, updated tracking code, and many other useful resources can be found here on the project's GitHub repository:
https://github.com/vkethana/balloon-tracking-software
Finally, if you would like to assemble the experiment apparatus yourself or simply want to see the equipment used during the internship, see the below Google Doc. The document also contains a quick tutorial explaining how to use the tracker software.
https://docs.google.com/document/d/1BlSEalDKJdNWVXWj4crKdMKH4CGX_jqdr1-A1DC-urU/edit
If you have further questions about the project or the tracking software, you can contact me at vijaykethanaboyina at gmail dot com or talk to me during the weekly GEAR meetings held on Wednesdays at 12:20 PM in PS 102.