Helium Balloon Car: OpenCV Accelerometer

May 11, 2022, 5:11 p.m.
vijayk

June 17th, 2022

We were able to complete a preliminary version of the experiment by using a store-bought RC car and mounting a cell phone camera onto it. This approach is sufficient to gain a basic understanding of how the project works, but we still need to figure out a sturdier way to mount a camera onto the car and get the tracking software working in real time. In this preliminary experiment, we instead recorded a cell phone video, which I then manually fed into the tracking software; in the future, we hope to find a solution that allows real-time tracking. I spent some time drafting a plan for how we are going to improve our car design so that the captured footage will work with the program, and I played around with some alternate code for my tracking software. Our next steps are to run another version of our car experiment with some added modifications to our RC car so that I have an opportunity to test my software.

Please see the following videos and images:

https://drive.google.com/file/d/1LMQC1n0WN2Ym85cUOMD60zcH4hTMPs8Q/view?usp=drivesdk 

When we run our next experiment, we will need to consider the following changes:

- The cork should be spray-painted or brightly colored to make it stand out from the background

- We should put black, opaque tape on the back side of the see-through container (the side farthest from the camera) so that the camera cannot see past the cork.

- We can use a selfie stick to mount the phone onto the car instead of using a complicated cardboard contraption. I have a selfie stick at home that I am willing to modify for the purposes of the internship. Mounting a camera onto the car was the hardest part of making the setup. We had to experiment with many jury-rigged solutions involving cardboard boxes; we even contemplated putting duct tape onto the screen of a cell phone, though that idea was ultimately scrapped due to the risk of damaging the device, even with a screen protector.

June 24th, 2022
 
Unfortunately, due to issues with Gavilan scheduling, we were unable to start 3-D printing our car this week. We have, however, been able to start assembling the electronics; I worked with Bryce and Jonathan in person on the assembly. In the meantime, I worked on programming as well. I've been tweaking my program, and I'm contemplating porting it from Python 3 to C++. I may also start using the Mediapipe library in addition to OpenCV in hopes of achieving a more effective form of object tracking, one that uses machine learning and doesn't rely on color-based tracking.
 
Python and C++ have fairly similar OpenCV APIs, so I predict switching from one language to the other wouldn't take too long. Compare the following C++ and Python examples (note: I didn't create these examples myself; they come from this link: https://riptutorial.com/opencv/example/21401/get-image-from-webcam).
 
First, the C++ example.
 
#include "opencv2/opencv.hpp" #include "iostream" int main(int, char**) { cv::VideoCapture camera(0); if (!camera.isOpened()) { std::cerr << "ERROR: Could not open camera" << std::endl; return 1; } cv::namedWindow("Webcam", CV_WINDOW_AUTOSIZE); cv::Mat frame; camera >> frame; // display the frame until you press a key while (1) { // show the image on the window cv::imshow("Webcam", frame); // wait (10ms) for a key to be pressed if (cv::waitKey(10) >= 0) break; } return 0; }
And now the Python code:
import numpy as np
import cv2

# Video source - can be camera index number given by 'ls /dev/video*'
# or can be a video file, e.g. '~/Video.avi'
cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

As you can see, the two examples are quite similar.

July 1, 2022

I have been working on updating my tracking software to use object-based tracking instead of color-based tracking; this results in greater precision, but it also means that once a given object goes out of frame, the entire experiment has to be repeated. There is a way around this -- you could have someone manually re-calibrate the tracker each time the object goes out of frame -- but it's not a particularly effective solution. I still have yet to find a way to deal with the fact that the tracker works at 10-20 FPS max, which is often too slow to capture the fast movements going on in our project. However, instead of using my laptop's built-in camera (which has a capped framerate of 24fps), we can use a cell phone video recorded at a higher framerate (e.g. 60fps) and import that into the project. Using pre-recorded videos that I took while we met in person this Friday, I'm going to try adapting my tracking software to track the location of a spring and generate a real-time graph of how the spring moves (a simple harmonic motion experiment, in other words). The spring experiment won't involve a lot of coding, because I can re-use a lot of what I already wrote for the original tracking software.
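To illustrate the manual re-calibration workaround, here is a minimal sketch (not my actual program): whenever the tracker reports failure, the video pauses and the operator re-selects the object, after which a fresh tracker is started from the new box. The video path is a placeholder.

import cv2

video = cv2.VideoCapture("vids/clip.mp4")  # placeholder path
ok, frame = video.read()
bbox = cv2.selectROI(frame, False)         # operator picks the object once
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, bbox)

while True:
    ok, frame = video.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if not found:
        # the object left the frame: have the operator re-select it,
        # then restart the tracker from the new box
        bbox = cv2.selectROI(frame, False)
        tracker = cv2.TrackerCSRT_create()
        tracker.init(frame, bbox)
    cv2.imshow("Tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video.release()
cv2.destroyAllWindows()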

Here are some pictures of the oscillatory motion sub-project:

I've been experimenting with a Python library called Mediapipe a bit more as well -- more on that and the spring project next week.

July 8th, 2022

I met with my team once over Zoom and once in person. We managed to get a primitive version of our experiment working, though it involved a lot of jury-rigging; we need to transition from our cardboard-and-duct-tape solution to something more sleek-looking in the future, but in the meantime, the work we've done so far suffices as a proof of concept. I estimate we're already 60-70% finished with the circular motion experiment we intend to do. All that's left is to attach all of the electronics to our platform correctly, place the helium-balloon container on the platform, and record data using a Raspberry Pi with a camera attached to it (or a cell phone). I did some work on the oscillatory motion sub-project, and I managed to get some very primitive tracker code working, allowing me to track an extremely fast-moving spring without needing to slow down the footage with an external program. I switched from object-based tracking to background subtraction (see this link: https://docs.opencv.org/4.x/d1/dc5/tutorial_background_subtraction.html). The way the OpenCV algorithm works is quite elegant; it uses a lot of physics concepts like moments, centroids, etc.

https://drive.google.com/file/d/1C-AO-oufjAjmmtXxu-qo0qkLYSWCy3G5/view?usp=sharing

https://cdn.discordapp.com/attachments/976950450429497382/995186037787742308/unknown.png
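As a rough illustration of the background subtraction idea (a sketch, not my full tracker), the core loop boils down to subtracting a learned background model from each frame and taking the centroid of the remaining foreground blob via image moments. The video path is a placeholder.

import cv2

video = cv2.VideoCapture("vids/spring.mp4")  # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = video.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # foreground = pixels that moved
    # drop the gray shadow pixels (value 127) that MOG2 marks
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    moments = cv2.moments(mask)
    if moments["m00"] > 0:
        # centroid of the foreground blob, i.e. the moving object
        cx = int(moments["m10"] / moments["m00"])
        cy = int(moments["m01"] / moments["m00"])
        cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)
    cv2.imshow("Background subtraction", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video.release()
cv2.destroyAllWindows()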

July 15th, 2022

I met with my team over Zoom and in person. We met again in person to take some new videos of the rotating platform experiment because, unfortunately, the original videos were taken from an angle that made the calculations too hard to do. The new videos work well with the software, but one problem is that the speed of the computer running the software may impact the results because of the way we ran our time calculations. I plan to fix this by gauging the number of elapsed seconds based on the number of elapsed frames rather than a built-in clock. The spring video side project will be put aside until I get this done. The platform experiment itself is close to done, but we have yet to purchase a container and put a balloon inside of it.

I was able to get the program to run much faster using multithreading, but this created other problems, namely that the tracker framerate was so high that it interfered with the program's ability to recognize objects: https://cdn.discordapp.com/attachments/978392692922921055/997931837085192292/2022-07-16_11-17-33.mp4
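For context, the threaded reading I experimented with is along these lines: imutils's FileVideoStream decodes frames on a background thread and hands them to the processing loop through a queue, so the loop never waits on disk I/O. This is just a sketch with a placeholder path, not my exact code.

from imutils.video import FileVideoStream
import cv2

fvs = FileVideoStream("vids/platform.mp4").start()  # placeholder path
while fvs.more():
    frame = fvs.read()  # pulled from the queue filled by the reader thread
    if frame is None:
        break
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
fvs.stop()
cv2.destroyAllWindows()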

I will work on fixing the multithreading issue later (it is likely an issue with the imutils library). In the meantime, I have also been working on fixing the framerate issue: the problem is that the code outputs different data on different computers. I fixed this by calculating the time based on the number of elapsed frames, which remains consistent across all devices. See this video: https://cdn.discordapp.com/attachments/978392692922921055/997961752354492486/2022-07-16_13-13-39.mp4. Even though my laptop runs quite slowly, the data itself is still valid; I manually checked it to ensure that this was the case.
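The gist of the frame-based timing fix is below (a simplified sketch; the path is a placeholder). Because the timestamp is derived from the frame counter and the video's native FPS rather than the wall clock, a slow computer changes only how long the analysis takes, not the data it produces.

import cv2

video = cv2.VideoCapture("vids/platform.mp4")  # placeholder path
fps = video.get(cv2.CAP_PROP_FPS)  # the video's native framerate
frame_count = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    frame_count += 1
    elapsed = frame_count / fps  # seconds of video time, device-independent
    print(f"frame {frame_count}: t = {elapsed:.3f} s")

video.release()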

July 22nd, 2022

We met three times and collected several sets of data this week. We updated the apparatus, the type of object being tracked, the object's color, the calculations that the program makes, and the camera software being used (I installed an external camera app that automatically stabilizes the frames and prevents shaky footage). In other words, we have enough data. During one of our meetings, we discussed the physics of the project in detail, and I made some notes for future changes to my code.

The next steps of our project will involve me making substantial updates to my program to account for potential noise and inaccuracies. We are planning to incorporate ML/AI into our program and use regression to fix some inconsistencies and noise in our data.
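As a toy example of the regression idea (using made-up data, not our measurements), you can fit a low-degree polynomial to noisy angle readings and use the fitted curve in place of the raw values; the polynomial degree here is an arbitrary choice.

import numpy as np

t = np.linspace(0, 10, 200)  # timestamps in seconds
noisy_angle = 30 + 5 * np.sin(t) + np.random.normal(0, 2, t.size)  # fake data

coeffs = np.polyfit(t, noisy_angle, deg=6)  # least-squares polynomial fit
smoothed = np.polyval(coeffs, t)            # evaluate the fitted curve
print("mean correction:", np.abs(noisy_angle - smoothed).mean())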

Data, videos, and images will be uploaded to this site once Bryce and I can get the videos uploaded from our phones. Additional information to follow!

July 29th, 2022

I have been working on getting my program to work with the data that we recorded last week. I made several changes to the program:
1) I implemented a few filters (simple if-else statements) that remove any data that is clearly bogus, e.g. angle values greater than 180 degrees or acceleration values in the tens of thousands.
2) I overhauled a large chunk of trigonometry calculation code, fixing several bugs and calculation errors. I completely changed the way the program draws the triangles needed for angle calculation.
3) I entered the data into Excel to create several graphs so that I could visualize what was going on.
4) I added a snippet of code that outputs the data to a CSV file as well as to the console.
5) I modified the filter system to work with both phones in our experiment apparatus (a separate set of checks is needed depending on the type of video being recorded).

I also worked on some vector physics concepts with my team. The data that we have is good, but to make it more accurate, we will need to re-record several videos. The physics of the activity is complicated by the fact that we are working in three dimensions, not two, hence the need for several cameras and more complicated calculations. But if we modify the experiment apparatus and change how the video is recorded, we can greatly simplify the physics.

 

As you can see in the below image, implementing a few basic filters and fixing the trigonometry calculations leads to data that is much more accurate. However, there is an issue with filtering out data: when working with lower-quality videos, you are often forced to either filter out the majority of the data (in which case you have little to work with) or leave the bad data alone (in which case you have to work with data that is corrupted and of questionable accuracy). The only real solution is to re-record any problematic videos.

As you can see in the below image, the code fixes I implemented have improved the primary phone's data, but the secondary phone's data remains erratic and hard to use. This is not because of a programming bug but rather an issue with how the video itself was recorded, meaning that the only way to fix the problem is to re-record the video.

The entirety of my tracking code can be found below. Some of the code comes from other sites -- not all of it was made by me.

# Code is from python tutorial site
# Not all code was made by me 

# libraries to be downloaded
import cv2
import numpy as np

# standard libraries that you don't need to download
import sys
import time
from math import dist
from math import acos
import math
import numpy
from datetime import datetime as dt

#filename = "vids/secondtest.mp4"
#filename = "vids/eightfoldwire.mov"
#filename = "vids/clip6.mp4"
#filename = "0"
# Set this to the name of an mp4 file
video = cv2.VideoCapture("vids/bmank/output2.mov")
#video = cv2.VideoCapture("vids/goodvideo.MOV")
TESTfps = video.get(cv2.CAP_PROP_FPS)
print ("This video runs at a framerate of : {0}".format(TESTfps))

#video = cv2.VideoCapture(0) # for using CAM
# You set it to 0 if you want the default computer camera
# Set it to 1 to use the external camera

#video.set(3, 1920)
#video.set(4, 1080)

# ALTERNATIVE FILENAMES
#filename = "BEST_VID_resizedtrimmed2.mp4"
#filename = "aadhav_rotated_2.mp4"
#filename = "newbestvid.mp4"
CONVFAC = 57.295779 # degrees per radian (180/pi), for converting angles
filename = ('data/filename' + str(time.time()) + '.csv')
def resize(img):
    # https://www.tutorialkart.com/opencv/python/opencv-python-resize-image/
    scale = 35
    new_w = int(img.shape[1] * scale / 100)
    new_h = int(img.shape[0] * scale / 100)
    dim = (new_w, new_h)
    return cv2.flip(cv2.resize(img, dim, interpolation = cv2.INTER_AREA), -1)

def distance(x1, y1, x2, y2):
    # Calculate distance between two points
    dist = math.sqrt(math.fabs(x2-x1)**2 + math.fabs(y2-y1)**2)
    return dist

(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')
start_time = time.time()

if __name__ == '__main__' :
    # Set up tracker.
    # Instead of CSRT, you can also use
    tracker_types = ['BOOSTING', 'MIL','KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']
    #                   0         1     2      3       4           5           6        7
    tracker_type = tracker_types[7]
    # it is recommended to use 7
    # the other algorithms do not work as well
    print("Using tracker type: ", tracker_type)
    if int(minor_ver) < 3:
        tracker = cv2.Tracker_create(tracker_type)
    else:
        if tracker_type == 'BOOSTING':
            tracker = cv2.TrackerBoosting_create()
        elif tracker_type == 'MIL':
            tracker = cv2.TrackerMIL_create()
        elif tracker_type == 'KCF':
            tracker = cv2.legacy.TrackerKCF_create()
        elif tracker_type == 'TLD':
            tracker = cv2.legacy.TrackerTLD_create()
        elif tracker_type == 'MEDIANFLOW':
            tracker = cv2.legacy.TrackerMedianFlow_create()
        elif tracker_type == 'GOTURN':
             tracker = cv2.legacy.TrackerGOTURN_create()
        elif tracker_type == 'MOSSE':
            tracker = cv2.legacy.TrackerMOSSE_create()
        elif tracker_type == "CSRT":
            tracker = cv2.TrackerCSRT_create()

# Read video
def mainLoop():
  cntr = 0.0
  # Exit if video not opened.
  if not video.isOpened():
    print("Could not open video")
    sys.exit()

  # Read first frame.
  ok, frame = video.read()
  if not ok:
    print ('Cannot read video file')
    sys.exit()
  frame = resize(frame)

  # Define an initial bounding box
  #bbox = (100, 400,         160, 410)
  #       x1 y1         x2  y2
  # Uncomment the line below to select a different bounding box
  bbox = cv2.selectROI(frame, False)
  p1 = (int(bbox[0]), int(bbox[1]))
  p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
  # Initialize tracker with first frame and bounding box
  height, width = frame.shape[:2]
  #anchor_x = int(width/2)
  orig_x = int((p1[0]+p2[0])/2.0)
  orig_y = int((p1[1]+p2[1])/2.0)
  anchor_x = orig_x
  anchor_y = int(height)

  ok = tracker.init(frame, bbox)
  prev_angle = -1.00

  while True:
       # Read a new frame
       ok, frame = video.read()
       if not ok:
           break
       frame = resize(frame)
       # Start timer
       timer = cv2.getTickCount()

       # Update tracker
       ok, bbox = tracker.update(frame)
       cur_time = time.time() - start_time

       # Calculate the tracker loop's frames per second (FPS)
       fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer)

       # Draw bounding box
       if ok:
           # Tracking success
           p1 = (int(bbox[0]), int(bbox[1]))
           p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))

           centroid_x = int((p1[0]+p2[0])/2.0)
           centroid_y = int((p1[1]+p2[1])/2.0)

           rt_x = orig_x
           rt_y = centroid_y

           hypotenuse = distance(centroid_x, centroid_y, anchor_x, anchor_y)
           horizontal = distance(centroid_x, centroid_y, orig_x, orig_y)
           vertical = distance(anchor_x, anchor_y, orig_x, orig_y)
           extension = distance(centroid_x, centroid_y, rt_x, rt_y)
           otherextension = distance(rt_x, rt_y, orig_x, orig_y)

           #print("difference between two legs of triangle (should be zero):", float(hypotenuse-vertical))

           #print("HORIZ, EXT:", horizontal, extension)
           #print((math.isnan(horizontal)),  (math.isnan(extension)),(abs(horizontal - 0.000) <= 0.01))
           #if ((int(hypotenuse) == 0) or (int(horizontal) == 0) or (int(vertical) == 0) or math.isnan(horizontal) or math.isnan(extension)):
           angle3 = 0.00
           if ((math.isnan(otherextension)) or (math.isnan(extension)) or (abs(extension - 0.000) <= 0.01) or (abs(hypotenuse - 0.000) <= 0.01)):
             angle = -1.00
             ACCEL = -1.00
           else:
             #angle = int((np.arcsin(extension/horizontal)) * (57.23))
             angle = (np.arctan(otherextension/extension)) # this is equal to beta
             ACCEL = (math.tan((2*angle)))*9.81
             #angle3 = ((np.arccos(extension/hypotenuse)) * (57.295779)) + (90.00000-angle)
             #angle = 180.00000-(angle+angle3)

             #print("result of arcsin calculation: ", angle)
             #print("the angle we are looking for: ", 180-(2*angle))
             #angle = int((np.arcsin(vertical/hypotenuse)) * (57.23))
             #angle2 = int((np.arccos(vertical/hypotenuse)) * (57.23))
             #angle = (180.00-(2.00*angle))
             #print("EXT, OTHEREXT, ANGLE:", extension, otherextension, angle)

           cv2.rectangle(frame, p1, p2, (255,0,0), 2, 1)
           #print(int(cur_x), int(cur_y))
           #cv2.circle(frame, (int(cur_x), int(cur_y)), 5, (0, 0, 255), 5)
           cv2.circle(frame, (centroid_x, centroid_y), 2, (0, 0, 255), 5)
           cv2.circle(frame, (anchor_x, anchor_y), 2, (255, 0, 0), 5)
           cv2.circle(frame, (orig_x, (orig_y)), 2, (255, 0, 255), 5)
           cv2.circle(frame, (rt_x, rt_y), 2, (0, 255, 0), 5)
           cv2.line(frame, (centroid_x, centroid_y), (anchor_x, anchor_y), (255, 0, 0), 1)
           cv2.line(frame, (anchor_x, anchor_y), (orig_x, orig_y), (255, 0, 0), 1)
           cv2.line(frame, (orig_x, orig_y), (centroid_x, centroid_y), (255, 0, 0), 1)
           cv2.line(frame, (rt_x, rt_y), (centroid_x, centroid_y), (255, 255, 0), 1)

           AcceptableAngleJump = True
           if ((prev_angle == -1.00) or (int(prev_angle) == int(-1))):
            AcceptableAngleJump = True
            prev_angle = angle
           else:
             #print("difference:", abs(angle-prev_angle))
             if (abs((angle*CONVFAC)-(prev_angle*CONVFAC)) > 45):
               AcceptableAngleJump = False
             prev_angle = angle

           maxThreshold = 45

           #isValid = ((centroid_y >= (orig_y)) and (AcceptableAngleJump) and (int(angle*CONVFAC) <= 89) and (angle > 0) and (int(angle) != -1) and (abs(ACCEL) <= maxThreshold) and (ACCEL >= 0))
           isValid = ((AcceptableAngleJump) and (int(angle*CONVFAC) <= 89) and (angle > 0) and (int(angle) != -1) and (abs(ACCEL) <= maxThreshold) and (ACCEL >= 0))
           print((AcceptableAngleJump) , (int(angle*CONVFAC) <= 89) , (angle > 0) , (int(angle) != -1), (abs(ACCEL) < maxThreshold), (ACCEL >= 0), ACCEL)
           #isValid = 1
           if (isValid):
             print((cntr+1.0)/TESTfps, cur_time, angle*CONVFAC, ACCEL, sep=',')
             with open(filename, "a") as text_file:
               print((cntr+1.0)/TESTfps, cur_time, centroid_x, centroid_y, angle*CONVFAC, ACCEL, sep=',', file=text_file)

       else:
           # Tracking failure
           cv2.putText(frame, "Tracking failure detected", (100,80), cv2.FONT_HERSHEY_SIMPLEX, 0.75,(0,0,255),2)

       # Display tracker type on frame
       cv2.putText(frame, tracker_type + " Tracker", (100,20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50),2);

       # Display FPS on frame
       cv2.putText(frame, "FPS : " + str(int(fps)), (100,50), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50), 2);
       # Display result
       cv2.imshow("Tracking", frame)

       cntr += 1.0

       # Exit if 'q' pressed
       if cv2.waitKey(1) & 0xFF == ord('q'):
         print("q has been pressed")
         return;

try:
  while True:
    mainLoop()

finally:
  video.release()
  cv2.destroyAllWindows()

 

August 5th, 2022

 

We ran into several roadblocks while working out the physics of the project, including an odd fluid mechanics phenomenon that causes the object in our apparatus to behave unusually, so we had to filter out parts of the data and narrow the scope of the project. I made several other changes to my tracking code; the re-filmed videos were much easier to work with because they let us simplify the algorithm and trigonometry needed to extract the data. I compiled data from several graphs and posted them on the Gav engineering site.
There are a few steps we could take next: we could run the experiment with a more advanced apparatus (e.g. a 3-D printed car), we could use machine learning and regression to smooth out the unusual oscillations present throughout the data, or we could go deeper into the physics to understand what exactly is causing the unusual phenomena in the more recent videos.

The updated tracking software displays more information on each frame and draws a simpler triangle to make the calculations easier and more accurate:

For reference, here's what the old tracker looked like:

Notice how in the older version of the program, the anchor point isn't custom-selected. The newer tracker has other neat features as well, such as a beep that plays when the program finishes, better output file naming, and the ability to adjust the tracker mid-video without resetting the timer or changing the output file.

Screenshots of all of the graphs I generated can be found below:

 

 

An updated version of the tracking code can be found below:

# Some of this code is from a python tutorial site

# libraries to be downloaded
import cv2 # make sure you install opencv contrib from pip, not ordinary opencv
import numpy as np

# standard libraries that you don't need to download
import winsound
import sys
import time
from math import dist
from math import acos
import math
import numpy
from datetime import datetime as dt

video_path =  "vids/final/"
video_id = "jt3"
extension = ".mov"
video = cv2.VideoCapture(video_path + video_id + extension)
#video = cv2.VideoCapture(0) # for using CAM

#video.set(3, 1920)
#video.set(4, 1080)

filename = ('data/filename' + str(time.time()) + video_id + '.csv')
TESTfps = video.get(cv2.CAP_PROP_FPS)
print ("This video runs at a framerate of : {0}".format(TESTfps))
print ("Outputting data to file:", filename)
CONVFAC = 57.295779 # degrees per radian (180/pi), for converting angles

def resize(img):
    # https://www.tutorialkart.com/opencv/python/opencv-python-resize-image/
    scale = 35
    new_w = int(img.shape[1] * scale / 100)
    new_h = int(img.shape[0] * scale / 100)
    dim = (new_w, new_h)
    return cv2.flip(cv2.resize(img, dim, interpolation = cv2.INTER_AREA), 1)
    # 0 means flipping around the x-axis and positive value (for example, 1) means flipping around y-axis. Negative value (for example, -1) means flipping around both axes.
    # ^^^ source: geeks4geeks

def distance(x1, y1, x2, y2):
    # Calculate distance between two points
    dist = math.sqrt(math.fabs(x2-x1)**2 + math.fabs(y2-y1)**2)
    return dist

(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')
start_time = time.time()

if __name__ == '__main__' :
    # Set up tracker.
    # Instead of CSRT, you can also use
    tracker_types = ['BOOSTING', 'MIL','KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']
    #                   0         1     2      3       4           5           6        7
    tracker_type = tracker_types[7]
    # it is recommended to use 7
    # the other algorithms do not work as well
    print("Using tracker type: ", tracker_type)
    if int(minor_ver) < 3:
        tracker = cv2.Tracker_create(tracker_type)
    else:
        if tracker_type == 'BOOSTING':
            tracker = cv2.TrackerBoosting_create()
        elif tracker_type == 'MIL':
            tracker = cv2.TrackerMIL_create()
        elif tracker_type == 'KCF':
            tracker = cv2.legacy.TrackerKCF_create()
        elif tracker_type == 'TLD':
            tracker = cv2.legacy.TrackerTLD_create()
        elif tracker_type == 'MEDIANFLOW':
            tracker = cv2.legacy.TrackerMedianFlow_create()
        elif tracker_type == 'GOTURN':
             tracker = cv2.legacy.TrackerGOTURN_create()
        elif tracker_type == 'MOSSE':
            tracker = cv2.legacy.TrackerMOSSE_create()
        elif tracker_type == "CSRT":
            tracker = cv2.TrackerCSRT_create()

# Read video
cntr = 0.0
def mainLoop(cntr):
  # Exit if video not opened.
  if not video.isOpened():
    print("Could not open video")
    sys.exit()

  # Read first frame.
  ok, frame = video.read()
  if not ok:
    print ('Cannot read video file')
    sys.exit()
  frame = resize(frame)

  # Define an initial bounding box
  #bbox = (100, 400,         160, 410)
  #       x1 y1         x2  y2
  # Uncomment the line below to select a different bounding box
  cv2.putText(frame, tracker_type + " Choose Balloon Point", (20,20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0,0,255),2);
  bbox = cv2.selectROI(frame, True)

  ok, frame = video.read()
  if not ok:
    print ('Cannot read video file')
    sys.exit()
  frame = resize(frame)

  cv2.putText(frame, tracker_type + " Choose Anchor Point", (20,20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0,0,255),2);
  bbox2 = cv2.selectROI(frame, True)
  p1 = (int(bbox[0]), int(bbox[1]))
  p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
  # Initialize tracker with first frame and bounding box
  height, width = frame.shape[:2]
  #anchor_x = int(width/2)
  orig_x = int((p1[0]+p2[0])/2.0)
  orig_y = int((p1[1]+p2[1])/2.0)

  p1b = (int(bbox2[0]), int(bbox2[1]))
  p2b = (int(bbox2[0] + bbox2[2]), int(bbox2[1] + bbox2[3]))

  anchor_x = int((p1b[0]+p2b[0])/2.0)
  anchor_y = int((p1b[1]+p2b[1])/2.0)

  ok = tracker.init(frame, bbox)
  prev_angle = -1.00

  cntr += 2.00
  while True:
       # Read a new frame
       ok, frame = video.read()
       if not ok:
           break
       frame = resize(frame)
       # Start timer
       timer = cv2.getTickCount()

       # Update tracker
       ok, bbox = tracker.update(frame)
       cur_time = time.time() - start_time

       # Calculate the tracker loop's frames per second (FPS)
       fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer)

       # Draw bounding box
       if ok:
           # Tracking success
           p1 = (int(bbox[0]), int(bbox[1]))
           p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))

           centroid_x = int((p1[0]+p2[0])/2.0)
           centroid_y = int((p1[1]+p2[1])/2.0)

           A = distance(centroid_x, centroid_y, anchor_x, anchor_y)
           B = distance(anchor_x, anchor_y, anchor_x, centroid_y)
           C = distance(anchor_x, centroid_y, centroid_x, centroid_y)

           angle = -1.00

           if ((math.isnan(C)) or (math.isnan(B)) or (abs(B - 0.000) <= 0.01)):
             angle = -1.00
             acceleration = -1.00
           else:
             angle = (np.arctan(C/B))
             acceleration = (math.tan((2*angle)))*9.81

           cv2.rectangle(frame, p1, p2, (255,0,0), 2, 1)
           #print(int(cur_x), int(cur_y))
           #cv2.circle(frame, (int(cur_x), int(cur_y)), 5, (0, 0, 255), 5)
           cv2.circle(frame, (centroid_x, centroid_y), 2, (0, 0, 255), 5)
           cv2.circle(frame, (anchor_x, anchor_y), 2, (255, 0, 0), 5)
           cv2.circle(frame, (orig_x, (orig_y)), 2, (255, 0, 255), 5)

           cv2.line(frame, (centroid_x, centroid_y), (anchor_x, anchor_y), (255, 0, 0), 1)
           cv2.line(frame, (anchor_x, anchor_y), (anchor_x, centroid_y), (255, 0, 0), 1)
           cv2.line(frame, (anchor_x, centroid_y), (centroid_x, centroid_y), (255, 0, 0), 1)

           AcceptableAngleJump = True
           if ((prev_angle == -1.00) or (int(prev_angle) == int(-1))):
            AcceptableAngleJump = True
            prev_angle = angle
           else:
             #print("difference:", abs(angle-prev_angle))
             if (abs((angle*CONVFAC)-(prev_angle*CONVFAC)) > 45):
               AcceptableAngleJump = False
             prev_angle = angle

           maxThreshold = 45

           #isValid = ((centroid_y >= (orig_y)) and (AcceptableAngleJump) and (int(angle*CONVFAC) <= 89) and (angle > 0) and (int(angle) != -1) and (abs(acceleration) <= maxThreshold) and (acceleration >= 0))
           isValid = ((AcceptableAngleJump) and (int(angle*CONVFAC) <= 89) and (angle > 0) and (int(angle) != -1) and (abs(acceleration) <= maxThreshold) and (acceleration >= 0))
           print((AcceptableAngleJump) , (int(angle*CONVFAC) <= 89) , (angle > 0) , (int(angle) != -1), (abs(acceleration) < maxThreshold), (acceleration >= 0), acceleration, angle*CONVFAC)
           #isValid = 1
           if (isValid):
             cv2.putText(frame, ("Angle: "+ str(round(angle*CONVFAC,3))), (abs(anchor_x-40),anchor_y), cv2.FONT_HERSHEY_SIMPLEX, 0.75,(0,0,255),2)
             cv2.putText(frame, ("Accel: "+ str(round(acceleration,3))), (abs(anchor_x-40),anchor_y+50), cv2.FONT_HERSHEY_SIMPLEX, 0.75,(0,0,255),2)
             print((cntr+1.0)/TESTfps, cur_time, angle*CONVFAC, acceleration, sep=',')
             with open(filename, "a") as text_file:
               print((cntr+1.0)/TESTfps, cur_time, centroid_x, centroid_y, angle*CONVFAC, acceleration, sep=',', file=text_file)

       else:
           # Tracking failure
           cv2.putText(frame, "Tracking failure detected", (100,80), cv2.FONT_HERSHEY_SIMPLEX, 0.75,(0,0,255),2)

       # Display tracker type on frame
       cv2.putText(frame, tracker_type + " Tracker", (100,20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50),2);
       cv2.putText(frame, "Frames Elapsed: " + str(cntr), (100,45), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50),2);

       # Display FPS on frame
       cv2.putText(frame, "FPS : " + str(int(fps)), (100,65), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50), 2);
       # Display result
       cv2.imshow("Tracking", frame)

       cntr += 1.0

       # Exit if 'q' pressed
       if cv2.waitKey(1) & 0xFF == ord('q'):
         print("q has been pressed")
         return cntr;

try:
  while True:
    cntr = mainLoop(cntr)

finally:
  video.release()
  cv2.destroyAllWindows()
  #  https://stackoverflow.com/questions/16573051/sound-alarm-when-code-finishes
  duration = 1000
  freq = 440

  winsound.Beep(freq, duration)
  # Play a beep once the program is done
  # winsound only works on Windows

August 12, 2022

The poster presentation took up a lot of our time this week as we worked on figuring out how to visually represent all of the work we've done this semester. We went through several iterations of the poster, focusing on reducing the number of words in each "cell" of our presentation. Our goal is not to create a large block of text, but rather to use a combination of short phrases and images that act as cues for our presentation on Monday.

In the meantime, I've been working on graph generation software that can automatically generate graphs from any dataset outputted by the tracking software. Furthermore, I've been experimenting with rudimentary noise reduction code that is able to smooth out curves and sudden jumps in the data. See the below screenshot:

The problem with the current code is that although it can smooth out curves to make them look nicer, it can't predict future data; a more advanced ML model could provide better results, something I hope to look into in the coming weeks once GEAR club is in session. Time series forecasting (using libraries like TensorFlow) looks to be a particularly promising approach.
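For illustration, one simple way to get this kind of smoothing is a centered rolling mean over the tracker's output. This is a sketch (not necessarily my exact method), assuming a CSV with "time" and "angle" columns; the filename and window size are placeholders.

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data/sample.csv")  # placeholder filename
# centered rolling mean; window size trades smoothness for responsiveness
df["angle_smooth"] = df["angle"].rolling(window=15, center=True).mean()

plt.plot(df["time"], df["angle"], alpha=0.4, label="raw")
plt.plot(df["time"], df["angle_smooth"], label="smoothed")
plt.xlabel("time (s)")
plt.ylabel("angle (deg)")
plt.legend()
plt.show()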

 

This is the most up-to-date version of the tracking code:

# Some of this code is from a python tutorial site

# libraries to be downloaded
import cv2 # make sure you install opencv contrib from pip, not ordinary opencv
import numpy as np

# standard libraries that you don't need to download
import winsound
import sys
import time
from math import dist
from math import acos
import math
import numpy
from datetime import datetime as dt

video_path =  "vids/final/"
video_id = "jt3"
extension = ".mov"
video = cv2.VideoCapture(video_path + video_id + extension)
#video = cv2.VideoCapture(0) # for using CAM

#video.set(3, 1920)
#video.set(4, 1080)

filename = ('data/filename' + str(time.time()) + video_id + '.csv')
TESTfps = video.get(cv2.CAP_PROP_FPS)
print ("This video runs at a framerate of : {0}".format(TESTfps))
print ("Outputting data to file:", filename)
CONVFAC = 57.295779 # degrees per radian (180/pi), for converting angles

def resize(img):
    # https://www.tutorialkart.com/opencv/python/opencv-python-resize-image/
    scale = 35
    new_w = int(img.shape[1] * scale / 100)
    new_h = int(img.shape[0] * scale / 100)
    dim = (new_w, new_h)
    return cv2.flip(cv2.resize(img, dim, interpolation = cv2.INTER_AREA), 1)
    # 0 means flipping around the x-axis and positive value (for example, 1) means flipping around y-axis. Negative value (for example, -1) means flipping around both axes.
    # ^^^ source: geeks4geeks

def distance(x1, y1, x2, y2):
    # Calculate distance between two points
    dist = math.sqrt(math.fabs(x2-x1)**2 + math.fabs(y2-y1)**2)
    return dist

(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')
start_time = time.time()

if __name__ == '__main__' :
    # Set up tracker.
    # Instead of CSRT, you can also use
    tracker_types = ['BOOSTING', 'MIL','KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']
    #                   0         1     2      3       4           5           6        7
    tracker_type = tracker_types[7]
    # it is recommended to use 7
    # the other algorithms do not work as well
    print("Using tracker type: ", tracker_type)
    if int(minor_ver) < 3:
        tracker = cv2.Tracker_create(tracker_type)
    else:
        if tracker_type == 'BOOSTING':
            tracker = cv2.TrackerBoosting_create()
        elif tracker_type == 'MIL':
            tracker = cv2.TrackerMIL_create()
        elif tracker_type == 'KCF':
            tracker = cv2.legacy.TrackerKCF_create()
        elif tracker_type == 'TLD':
            tracker = cv2.legacy.TrackerTLD_create()
        elif tracker_type == 'MEDIANFLOW':
            tracker = cv2.legacy.TrackerMedianFlow_create()
        elif tracker_type == 'GOTURN':
             tracker = cv2.legacy.TrackerGOTURN_create()
        elif tracker_type == 'MOSSE':
            tracker = cv2.legacy.TrackerMOSSE_create()
        elif tracker_type == "CSRT":
            tracker = cv2.TrackerCSRT_create()

# Read video
cntr = 0.0
def mainLoop(cntr):
  # Exit if video not opened.
  if not video.isOpened():
    print("Could not open video")
    sys.exit()

  # Read first frame.
  ok, frame = video.read()
  if not ok:
    print ('Cannot read video file')
    sys.exit()
  frame = resize(frame)

  # Define an initial bounding box
  #bbox = (100, 400,         160, 410)
  #       x1 y1         x2  y2
  # Uncomment the line below to select a different bounding box
  cv2.putText(frame, tracker_type + " Choose Balloon Point", (20,20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0,0,255),2);
  bbox = cv2.selectROI(frame, True)

  ok, frame = video.read()
  if not ok:
    print ('Cannot read video file')
    sys.exit()
  frame = resize(frame)

  cv2.putText(frame, tracker_type + " Choose Anchor Point", (20,20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0,0,255),2);
  bbox2 = cv2.selectROI(frame, True)
  p1 = (int(bbox[0]), int(bbox[1]))
  p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
  # Initialize tracker with first frame and bounding box
  height, width = frame.shape[:2]
  #anchor_x = int(width/2)
  orig_x = int((p1[0]+p2[0])/2.0)
  orig_y = int((p1[1]+p2[1])/2.0)

  p1b = (int(bbox2[0]), int(bbox2[1]))
  p2b = (int(bbox2[0] + bbox2[2]), int(bbox2[1] + bbox2[3]))

  anchor_x = int((p1b[0]+p2b[0])/2.0)
  anchor_y = int((p1b[1]+p2b[1])/2.0)

  ok = tracker.init(frame, bbox)
  prev_angle = -1.00

  cntr += 2.00
  while True:
       # Read a new frame
       ok, frame = video.read()
       if not ok:
           break
       frame = resize(frame)
       # Start timer
       timer = cv2.getTickCount()

       # Update tracker
       ok, bbox = tracker.update(frame)
       cur_time = time.time() - start_time

       # Calculate the tracker loop's frames per second (FPS)
       fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer)

       # Draw bounding box
       if ok:
           # Tracking success
           p1 = (int(bbox[0]), int(bbox[1]))
           p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))

           centroid_x = int((p1[0]+p2[0])/2.0)
           centroid_y = int((p1[1]+p2[1])/2.0)

           A = distance(centroid_x, centroid_y, anchor_x, anchor_y)
           B = distance(anchor_x, anchor_y, anchor_x, centroid_y)
           C = distance(anchor_x, centroid_y, centroid_x, centroid_y)

           angle = -1.00

           if ((math.isnan(C)) or (math.isnan(B)) or (abs(B - 0.000) <= 0.01)):
             angle = -1.00
             acceleration = -1.00
           else:
             angle = (np.arctan(C/B))
             acceleration = (math.tan((2*angle)))*9.81

           cv2.rectangle(frame, p1, p2, (255,0,0), 2, 1)
           #print(int(cur_x), int(cur_y))
           #cv2.circle(frame, (int(cur_x), int(cur_y)), 5, (0, 0, 255), 5)
           cv2.circle(frame, (centroid_x, centroid_y), 2, (0, 0, 255), 5)
           cv2.circle(frame, (anchor_x, anchor_y), 2, (255, 0, 0), 5)
           #cv2.circle(frame, (orig_x, (orig_y)), 2, (255, 0, 255), 5)

           cv2.line(frame, (centroid_x, centroid_y), (anchor_x, anchor_y), (255, 0, 0), 1)
           cv2.line(frame, (anchor_x, anchor_y), (anchor_x, centroid_y), (255, 0, 0), 1)
           cv2.line(frame, (anchor_x, centroid_y), (centroid_x, centroid_y), (255, 0, 0), 1)

           AcceptableAngleJump = True
           if ((prev_angle == -1.00) or (int(prev_angle) == int(-1))):
            AcceptableAngleJump = True
            prev_angle = angle
           else:
             #print("difference:", abs(angle-prev_angle))
             if (abs((angle*CONVFAC)-(prev_angle*CONVFAC)) > 45):
               AcceptableAngleJump = False
             prev_angle = angle

           maxThreshold = 45

           #isValid = ((centroid_y >= (orig_y)) and (AcceptableAngleJump) and (int(angle*CONVFAC) <= 89) and (angle > 0) and (int(angle) != -1) and (abs(acceleration) <= maxThreshold) and (acceleration >= 0))
           isValid = ((AcceptableAngleJump) and (int(angle*CONVFAC) <= 89) and (angle > 0) and (int(angle) != -1) and (abs(acceleration) <= maxThreshold) and (acceleration >= 0))
           print((AcceptableAngleJump) , (int(angle*CONVFAC) <= 89) , (angle > 0) , (int(angle) != -1), (abs(acceleration) < maxThreshold), (acceleration >= 0), acceleration, angle*CONVFAC)
           #isValid = 1
           if (isValid):
             cv2.putText(frame, ("Angle: "+ str(round(angle*CONVFAC,3)) + " deg."), (abs(anchor_x-40),anchor_y), cv2.FONT_HERSHEY_SIMPLEX, 0.75,(0,0,255),2)
             cv2.putText(frame, ("Accel: "+ str(round(acceleration,3)) + " m/s^2"), (abs(anchor_x-40),anchor_y+50), cv2.FONT_HERSHEY_SIMPLEX, 0.75,(0,0,255),2)
             print((cntr+1.0)/TESTfps, cur_time, angle*CONVFAC, acceleration, sep=',')
             with open(filename, "a") as text_file:
               print((cntr+1.0)/TESTfps, cur_time, centroid_x, centroid_y, angle*CONVFAC, acceleration, sep=',', file=text_file)

       else:
           # Tracking failure
           cv2.putText(frame, "Tracking failure detected", (100,80), cv2.FONT_HERSHEY_SIMPLEX, 0.75,(0,0,255),2)

       # Display tracker type on frame
       #cv2.putText(frame, tracker_type + " Tracker", (100,20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50),2);
       cv2.putText(frame, "Frames Elapsed: " + str(cntr), (100,20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50),2);

       # Display FPS on frame
       cv2.putText(frame, "FPS : " + str(int(fps)), (100,40), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50), 2);
       # Display result
       cv2.imshow("Tracking", frame)

       cntr += 1.0

       # Exit if 'q' pressed
       if cv2.waitKey(1) & 0xFF == ord('q'):
         print("q has been pressed")
         return cntr;

try:
  while True:
    cntr = mainLoop(cntr)

finally:
  video.release()
  cv2.destroyAllWindows()
  #  https://stackoverflow.com/questions/16573051/sound-alarm-when-code-finishes
  duration = 1000
  freq = 440

  winsound.Beep(freq, duration)
  # Play a beep once the program is done
  # winsound only works on Windows

And this is the most up-to-date version of the graph generation code, not all of which was made by me (esp. the decomposition code at the end): 

import warnings
import itertools
import numpy as np
import matplotlib.pyplot as plt
warnings.filterwarnings("ignore")
plt.style.use('fivethirtyeight')
import pandas as pd
import statsmodels.api as sm
import matplotlib
from pylab import rcParams
import datetime

def fixTime(x):    
    return datetime.datetime.fromtimestamp(x)

def fixTimeTwo(d):
    return datetime.datetime.strptime(np.datetime_as_string(d,unit='s'), '%Y-%m-%dT%H:%M:%S')

matplotlib.rcParams['axes.labelsize'] = 14
matplotlib.rcParams['xtick.labelsize'] = 12
matplotlib.rcParams['ytick.labelsize'] = 12
matplotlib.rcParams['text.color'] = 'k'

from tkinter import filedialog as fd
filename = fd.askopenfilename()

print("Processing this data file:", filename)


df = pd.read_csv(filename)
# if unsure which file to choose, use the latest filename*.csv produced by the tracker

#df['time'] = pd.to_datetime(df['time'])
#df['time'] = df['time'].apply(fixTimeTwo)

#df.to_csv("sampledata2.csv", index=False)

df = df.groupby('time')['angle'].sum().reset_index()

print(df.head())
df = df.sort_values('time')
print(df.head())

df = df.set_index('time')
print(df.index)
print("How many nulls: ", df.isnull().sum())
df = df.groupby('time')['angle'].sum().reset_index()
#y = df['angle'].resample('MS').mean()
y = df['angle']
#y = df['angle'].resample('1S').mean()
#y.plot(figsize=(15, 6))

rcParams['figure.figsize'] = 18, 8

decomp = sm.tsa.seasonal_decompose(y, model='additive', period=50) # decomposition 
print(decomp)
print(decomp.trend)

fig = decomp.plot()
plt.show()