Saturday, August 19, 2023

B.C. under state of emergency as fast-moving wildfire destroys homes near West Kelowna




Premier David Eby urged British Columbians to avoid non-essential travel to affected areas

Read the story here:

https://www.cbc.ca/news/canada/british-columbia/what-you-need-to-know-about-bc-wildfires-aug-18-2023-1.6940311

Friday, March 10, 2023

Today I found out, the hard way, that Google Drive Enterprise has a limit on the number of files you can store.

So I just got the following error while uploading files with rclone:


googleapi: Error 403: This account has exceeded the creation limit of 5 million items. To create more items, move items to the trash and delete them forever., activeItemCreationLimitExceeded


I was unaware there was a limit; I have "unlimited" storage with about 9.3 TB used, and I pay about $26 USD/month for it.


I've deleted a few million files and emptied the trash, but so far I still can't create new files.


We shall see what happens, and how long it takes to update and let me create files again.


I've added a comment on the following issue tracker:

https://issuetracker.google.com/issues/268606830

 

Thursday, March 9, 2023

Video editor using ffmpeg that supports HDR and any other type of video, no GPU required

I made a Python script that lets you edit video without re-encoding; it automatically cuts on the closest keyframe (I-frame).

This means you can edit video without a strong CPU, including HDR video!

It barely uses any CPU and runs really fast, as long as you're OK with jump cuts.
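
The core idea, if you want to roll your own, looks roughly like this (a minimal sketch, not the actual script; filenames and timestamps are placeholders): ffprobe lists the keyframe timestamps, you snap your cut point to the nearest one, and ffmpeg stream-copies the clip without re-encoding, so HDR metadata passes through untouched.

import json
import subprocess

def keyframe_times(path):
    # Ask ffprobe for the timestamps of all keyframes (I-frames) in the video stream
    # (this scans every frame, so it can take a while on long files)
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-skip_frame", "nokey", "-show_entries", "frame=pts_time",
         "-of", "json", path],
        capture_output=True, text=True, check=True).stdout
    return [float(f["pts_time"]) for f in json.loads(out)["frames"] if "pts_time" in f]

def cut_on_keyframe(path, start, end, outfile):
    # Snap the requested start time to the nearest keyframe so stream copy stays clean
    start = min(keyframe_times(path), key=lambda t: abs(t - start))
    # -c copy avoids re-encoding entirely; -ss before -i seeks straight to the keyframe
    subprocess.run(["ffmpeg", "-y", "-ss", str(start), "-i", path,
                    "-t", str(end - start), "-c", "copy", outfile], check=True)

cut_on_keyframe("input.mp4", 12.0, 45.0, "cut.mp4")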

Check it out on github!

https://github.com/igmrlm/KeyframeCutandConcat

Friday, February 24, 2023

How-To make Trunking Recorder Play Audio Faster

If you're running Trunking Recorder on a system with lots of calls, you may hit a problem where calls come in faster than they play back: a backlog piles up, and you either have to pick and choose which calls to hear or fall far behind real time.

I don't like either choice, so I spent a fair bit of time researching how the software works, reading through its JavaScript files, and found that adding a single line to TrunkingRecorder.js changes the default playback speed!

I'm not good enough at coding to add a dropdown menu for changing the speed at will, but in my case 1.5x turned out to be the perfect speed to keep up with the system I'm monitoring, Bell Fleetnet Zone 2 in Ottawa.

So, here's how to do it:

Open TrunkingRecorder.js in a text editor. By default, it's located at:

C:\Program Files\Trunking Recorder\Website\js\TrunkingRecorder.js

Go to line 1375, where it says:

1371            audio.addEventListener('timeupdate', AudioTimeUpdated);

1372            audio.addEventListener('ended', PlayerEnded);

1373            audio.addEventListener('pause', PlayerPaused);

1374            audio.addEventListener('play', PlayerPlay);

1375            audio.addEventListener('error', PlayerError);

And add a new line after it:

1376            audio.defaultPlaybackRate = 1.5;

Here 1.5 is the playback rate you want to use; it could be 1.25, 1.75, 3.20, or whatever playback speed you like.


Save the file, refresh the page, and VOILA!! It should now play faster!

Hope this helps, leave a comment if it did!

While you're at it, check out my scanner feed at https://www.youtube.com/nathanaelnewton

Thursday, February 9, 2023

Why Atheism is not a religion

 Atheism is referred to as a lack of religion, rather than a religion in itself. This distinction is important because atheism is defined as the disbelief or lack of belief in the existence of a god or gods. In contrast, a religion is typically defined as a set of beliefs concerning the cause, nature, and purpose of the universe, often involving devotional and ritual practices and a moral code.


Therefore, it can be argued that atheism cannot be considered a religion because it does not have a belief system, set of practices, or moral code that unites its followers. It is simply a rejection of the idea of a god or gods. For example, just as the lack of belief in ghosts is not considered a religion, the lack of belief in a deity is not considered a religion either.


Furthermore, the concept of atheism itself is not cohesive enough to be considered a religion. Atheism can take on many different forms, from strong atheism, which asserts that there is no god, to weak atheism, which merely states that there is not enough evidence to support the existence of a god. These differing beliefs and perspectives among atheists do not provide a common foundation for a religion.


Atheism can also be differentiated from religion in terms of its origins. Religions typically have a history, tradition, and cultural background that shape their beliefs and practices, whereas atheism is a relatively modern concept that emerged as a response to organized religion. Many atheists may have personal reasons for their lack of belief in a deity, but these reasons do not form the basis for a religion.


Additionally, atheism does not provide a framework for understanding the world or for making moral decisions. While some individuals who identify as atheist may have a personal moral code, there is no universally accepted ethical system among atheists. This contrasts with religious systems, which often include codes of conduct and moral guidance for followers.


Another important aspect to consider is the role of community. Religions often bring people together and provide a sense of belonging and social support. Atheism, on the other hand, does not typically provide the same level of community or social organization. While there are atheist organizations and communities, they are often centered around activism, education, or political advocacy, rather than around religious beliefs and practices.


Finally, it is worth noting that the idea of atheism being a religion has often been used as a criticism or as a means of marginalizing atheism. Some religious groups may argue that atheism is a religion in order to discredit it or to deny the validity of atheist beliefs. However, this argument is flawed, as atheism simply lacks the defining characteristics of a religion, such as a belief system, a moral code, and communal practices.


So, the lack of belief in a deity, combined with the absence of a belief system, moral code, communal practices, and historical tradition, distinguishes atheism from religion. While some may argue that atheism is a religion, this argument is not supported by the definition of religion, given the absence of key elements present in established religious systems.

Wednesday, January 11, 2023

The Incredible Tomato Suspension Bridge Project




This was shot during the summer of 2020 when I was staying at my friend's house on their small farm. We had over 85 tomato plants, and thanks to the drip irrigation I had set up, they had grown so large that they were about to fall over, cages and all, likely destroying the plants.


So, I decided to build a suspension bridge.


I had never built a suspension bridge before, but I knew the basic physics behind how they work and figured I had enough raw materials. Over the course of a few days I cut, dug, screwed, unscrewed, tore down what I had built, rebuilt it better, and eventually had what I'm calling the 'Incredible Tomato Suspension Bridge'.


I'm very proud of how it turned out. I didn't have a post hole digger, so I ended up using a Shop-Vac and a small garden shovel, which was surprisingly effective and produced an exceptional result!


Anyway, let me know what you think. I know the video is a bit long, sorry; I cut out a lot and sped up most of it 10x.


In some spots I wanted to go faster than 10x, but DaVinci Resolve kept glitching out, so I trimmed out a few sections instead.


#YouTuber #SmallBusiness #Gardening #TimeLapse #Farming #tomatoes #engineering #redneckengineering  #DIY #Canada #Ontario #SuspensionBridge

Saturday, January 7, 2023

(Guide with code) The colorama names are too long so I made shorter ones

So you're apparently supposed to use colorama like this:

import colorama

# Initialize the library
colorama.init()

# Foreground colors
print(colorama.Fore.BLACK + "Black text")
print(colorama.Fore.RED + "Red text")
print(colorama.Fore.GREEN + "Green text")
print(colorama.Fore.YELLOW + "Yellow text")
print(colorama.Fore.BLUE + "Blue text")
print(colorama.Fore.MAGENTA + "Magenta text")
print(colorama.Fore.CYAN + "Cyan text")
print(colorama.Fore.WHITE + "White text")

# Reset the foreground color
print(colorama.Fore.RESET + "Text with the foreground color reset to the default")

# Background colors
print(colorama.Back.BLACK + "Black background")
print(colorama.Back.RED + "Red background")
print(colorama.Back.GREEN + "Green background")
print(colorama.Back.YELLOW + "Yellow background")
print(colorama.Back.BLUE + "Blue background")
print(colorama.Back.MAGENTA + "Magenta background")
print(colorama.Back.CYAN + "Cyan background")
print(colorama.Back.WHITE + "White background")

# Reset the background color
print(colorama.Back.RESET + "Text with the background color reset to the default")

# Style options
print(colorama.Style.DIM + "Dim text")
print(colorama.Style.NORMAL + "Normal text")
print(colorama.Style.BRIGHT + "Bright text")
print(colorama.Style.RESET_ALL + "Text with all styles reset to the default")

print("\n**************************************************************")
print("Now, the same thing but using f-strings to print the statments")
print("**************************************************************")

# Foreground colors
print(f"{colorama.Fore.BLACK}Black text")
print(f"{colorama.Fore.RED}Red text")
print(f"{colorama.Fore.GREEN}Green text")
print(f"{colorama.Fore.YELLOW}Yellow text")
print(f"{colorama.Fore.BLUE}Blue text")
print(f"{colorama.Fore.MAGENTA}Magenta text")
print(f"{colorama.Fore.CYAN}Cyan text")
print(f"{colorama.Fore.WHITE}White text")

# Reset the foreground color
print(f"{colorama.Fore.RESET}Text with the foreground color reset to the default")

# Background colors
print(f"{colorama.Back.BLACK}Black background")
print(f"{colorama.Back.RED}Red background")
print(f"{colorama.Back.GREEN}Green background")
print(f"{colorama.Back.YELLOW}Yellow background")
print(f"{colorama.Back.BLUE}Blue background")
print(f"{colorama.Back.MAGENTA}Magenta background")
print(f"{colorama.Back.CYAN}Cyan background")
print(f"{colorama.Back.WHITE}White background")

# Reset the background color
print(f"{colorama.Back.RESET}Text with the background color reset to the default")

# Style options
print(f"{colorama.Style.DIM}Dim text")
print(f"{colorama.Style.NORMAL}Normal text")
print(f"{colorama.Style.BRIGHT}Bright text")
print(f"{colorama.Style.RESET_ALL}Text with all styles reset to the default")

I don't know what you think, but I think those names are too long and annoying, so I created shorter versions:

import colorama

# Initialize the library
colorama.init()

# Create a lookup table of shorter variable names for the Fore options
F = {
    "BLK": colorama.Fore.BLACK,
    "RED": colorama.Fore.RED,
    "GRN": colorama.Fore.GREEN,
    "YEL": colorama.Fore.YELLOW,
    "BLU": colorama.Fore.BLUE,
    "MAG": colorama.Fore.MAGENTA,
    "CYN": colorama.Fore.CYAN,
    "WHT": colorama.Fore.WHITE,
    "RST": colorama.Fore.RESET,
}

# Create a lookup table of shorter variable names for the Back options
B = {
    "BLK": colorama.Back.BLACK,
    "RED": colorama.Back.RED,
    "GRN": colorama.Back.GREEN,
    "YEL": colorama.Back.YELLOW,
    "BLU": colorama.Back.BLUE,
    "MAG": colorama.Back.MAGENTA,
    "CYN": colorama.Back.CYAN,
    "WHT": colorama.Back.WHITE,
    "RST": colorama.Back.RESET,
}

# Create a lookup table of shorter variable names for the Style options
S = {
    "DIM": colorama.Style.DIM,
    "NRM": colorama.Style.NORMAL,
    "BRT": colorama.Style.BRIGHT,
    "RST": colorama.Style.RESET_ALL,
}


# Print text with different colors and styles
print(f"{F['BLK']}{S['DIM']}Black text with dim style")
print(f"{F['RED']}{S['NRM']}Red text with normal style")
print(f"{F['GRN']}{S['BRT']}Green text with bright style")
print(f"{F['YEL']}{S['RST']}Yellow text with all styles reset")

# Print text with different backgrounds and styles
print(f"{B['BLK']}{S['DIM']}Text with black background and dim style")
print(f"{B['RED']}{S['NRM']}Text with red background and normal style")
print(f"{B['GRN']}{S['BRT']}Text with green background and bright style")
print(f"{B['YEL']}{S['RST']}Text with yellow background and all styles reset")

# Print text with different colors, backgrounds, and styles
print(f"{F['BLU']}{B['CYN']}{S['DIM']}Blue text on cyan background with dim style")
print(f"{F['MAG']}{B['WHT']}{S['NRM']}Magenta text on white background with normal style")
print(f"{F['CYN']}{B['BLK']}{S['BRT']}Cyan text on black background with bright style")
print(f"{F['WHT']}{B['RED']}{S['RST']}White text on red background with all styles reset")

# Print text with multiple styles on the same line
print(f"{S['DIM']}{F['BLK']}Dim {S['BRT']}Bright {S['RST']}Normal {F['RST']}text")
print(f"{S['DIM']}{F['RED']}Dim {S['BRT']}Bright {S['RST']}Normal {F['RST']}text")
print(f"{S['DIM']}{F['GRN']}Dim {S['BRT']}Bright {S['RST']}Normal {F['RST']}text")
print(f"{S['DIM']}{F['YEL']}Dim {S['BRT']}Bright {S['RST']}Normal {F['RST']}text")


# Print text with different colors, backgrounds, and styles using multiple styles on the same line
print(f"{F['BLU']}{B['CYN']}{S['DIM']}Dim {S['BRT']}Bright {S['RST']}Normal {F['RST']}Blue text on cyan background")
print(f"{F['MAG']}{B['WHT']}{S['DIM']}Dim {S['BRT']}Bright {S['RST']}Normal {F['RST']}Magenta text on white background")
print(f"{F['CYN']}{B['BLK']}{S['DIM']}Dim {S['BRT']}Bright {S['RST']}Normal {F['RST']}Cyan text on black background")
print(f"{F['WHT']}{B['RED']}{S['DIM']}Dim {S['BRT']}Bright {S['RST']}Normal {F['RST']}White text on red background")

# Print all of the possible combinations of colors, backgrounds, and styles using multiple styles on the same line
for fg in F:
    for bg in B:
        print(f"{F[fg]}{B[bg]}{S['DIM']}Dim {S['BRT']}Bright {S['RST']}Normal {F['RST']}Text with {fg} foreground on {bg} background")

That's better, right? Let me know what you think in the comments!
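
If the bracket-and-quote syntax still bugs you, one more possible tweak (just a sketch of the same idea) is to use types.SimpleNamespace from the standard library, so you can write F.RED instead of F['RED']:

import colorama
from types import SimpleNamespace

colorama.init()

# Attribute-style shortcuts: F.RED instead of F['RED']
F = SimpleNamespace(BLK=colorama.Fore.BLACK, RED=colorama.Fore.RED,
                    GRN=colorama.Fore.GREEN, YEL=colorama.Fore.YELLOW,
                    BLU=colorama.Fore.BLUE, MAG=colorama.Fore.MAGENTA,
                    CYN=colorama.Fore.CYAN, WHT=colorama.Fore.WHITE,
                    RST=colorama.Fore.RESET)

S = SimpleNamespace(DIM=colorama.Style.DIM, NRM=colorama.Style.NORMAL,
                    BRT=colorama.Style.BRIGHT, RST=colorama.Style.RESET_ALL)

print(f"{F.RED}{S.BRT}Bright red text{S.RST}{F.RST}")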


Friday, January 6, 2023

I've published my first (complicated) software!! https://github.com/igmrlm/SmartAutoTimelapse

Check it out here:

https://github.com/igmrlm/SmartAutoTimelapse

This is the script of a YouTube video I'll be uploading soon. I very rarely publish updates here, so follow on YouTube to keep up to date: https://youtube.com/NathanaelNewton

For now, here's a video edited with this software:



Hello everyone! I know this is a bit different from some of my usual videos. In today's video, I'll be showcasing and explaining a series of Python scripts that I've written recently, with the help of OpenAI's ChatGPT bot, that completely automate the process of creating timelapse videos.

Before we get started, THE LEGAL STUFF!!

This tutorial makes use of Python, FFmpeg, and OpenCV, and the code is subject to their respective licenses.

Python is licensed under the Python Software Foundation License, which can be found at: https://docs.python.org/3/license.html

FFmpeg is licensed under the GNU Lesser General Public License, which can be found at: https://www.gnu.org/licenses/old-licenses/lgpl-2.1.html

OpenCV is licensed under the BSD 3-Clause License, which can be found at: https://opencv.org/license/. By using these software tools in this tutorial, you agree to be bound by their respective licenses.

This code was originally generated with assistance from the OpenAI chatbot GPT-3, but it has since been heavily modified by me to better fit my use case, with significant troubleshooting and rewriting to make it actually work.

Please note that the use of OpenAI generated content is subject to the OpenAI API terms of use, which can be found at: https://beta.openai.com/docs/api-terms/terms. By using this script, you agree to these terms and the following:

Please be aware that this script is provided "as is" without warranty of any kind, express or implied. I make no representations or warranties of any kind with respect to this script or its use, and I shall not be liable for any damages of any kind arising from the use of this script.

By using this script, you agree to assume all responsibility for any damages or losses that may occur as a result of its use.

I hope you find this tutorial helpful. Please be sure to read and understand the legal notices outlined above before using the script. Let's get started!

 

Before we dive into the code, let's talk about what timelapse videos are and how they're created. A timelapse video is made from a series of photos taken at a slow interval and played back at a much faster frame rate, creating the illusion of time passing quickly. You can also create one by manually capturing still frames from a video at a set interval and then combining them into a single video in video editing software. This isn't as high quality as using individual raw frames, but it's much simpler from a production perspective.
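
As a concrete illustration of that manual approach (a sketch with placeholder filenames, separate from the scripts below), you could grab one still per second from a video and reassemble the stills at 30 fps, so each second of playback covers 30 real seconds:

import os
import subprocess

os.makedirs("frames", exist_ok=True)

# Extract one still per second from the source video
subprocess.run(["ffmpeg", "-i", "input.mp4", "-vf", "fps=1",
                "frames/img%05d.png"], check=True)

# Reassemble the stills at 30 frames per second
subprocess.run(["ffmpeg", "-framerate", "30", "-i", "frames/img%05d.png",
                "-pix_fmt", "yuv420p", "timelapse.mp4"], check=True)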

But what if you have a large number of videos that you want to turn into a timelapse? Doing it manually can be time-consuming and tedious, especially if there are long stretches where nothing happens, say you left the room or forgot to stop recording, and those sections have to be cut out of the timelapse. That's where these Python scripts come in: they completely automate the process!

Overview and Command line Arguments

The batch creation script drives two other scripts: ffac3_cuda.py, which detects motion in each frame and keeps only the frames with significant motion, and ffmpegAutoConcat.py, which concatenates the selected clips into a single video and then creates the final time-lapse from it.

First, let's take a look at the arguments we can pass to the batch creation script. 

  • -i or --inputfolder: the path to the input folder containing the video files that will be used to create the timelapse.

  • -t or --tvalue: the motion detection threshold for the timelapse, which should be a numeric value between 1 and 100.

  • -b or --box: the size of the box blur filter applied to the video files in the GPU. This should be an integer value.

  • -y or --yes: a flag that tells the script to execute the timelapse.bat file without prompting the user for confirmation.

  • -s or --speedup: a numerical multiplier for the speed at which the final timelapse video will play. For example, a value of 10 would result in a timelapse video that plays 10 times faster than normal.

Example Command Line Usage: 

python.exe .\create_Batch.py -y -t 2 -s 5 -b 10 -i "C:\FolderWithVideos"

create_Batch.py:

import os
import subprocess
import argparse

# Parse the command-line arguments
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--inputfolder", help="path to the input folder")
parser.add_argument("-t", "--tvalue", help="Motion detection threshold, Numeric value 1-100")
parser.add_argument("-b", "--box", type=int, help="Size of box blur filter applied in GPU")
parser.add_argument("-y", "--yes", action="store_true", help="execute timelapse.bat without prompting")
parser.add_argument("-s", "--speedup", help="Numerical speedup multiplier for the final timelapse video, E.G. 1=1x, 10=10x")


args = parser.parse_args()

# Get the input folder from the command-line argument or ask the user for it
if args.inputfolder:
  input_folder = args.inputfolder
else:
  input_folder = input("Enter the path to the input folder: ")

# Get the -t value from the command-line argument or ask the user for it
if args.tvalue:
  t_value = args.tvalue
else:
  t_value = input("Enter a numeric value for the motion detection threshold from 1-100: ")
  
if args.speedup:
    speedx = args.speedup
else:
    speedx = float(input("Enter the speed at which to speed up the final timelapse (e.g. 2 for 2x speed) : "))
    
if args.box:
    box = args.box
else:
    box = int(input("Enter size of the desired box blur filter, e.g. 10 for 10x10: "))

print("\nCreate Batch: speedx (Timelapse Speed Multiplier setting)")
print(speedx)

# Get a list of all the mp4 files in the input folder
mp4_files = [f for f in os.listdir(input_folder) if f.lower().endswith('.mp4')]

# Write the list of files  and the input folder to the console for debugging purposes
print("\n")
print(mp4_files)

print("\nCreate Batch: input_folder")
print(input_folder)

# Set the output path to the base directory of the input folder with spaces replaced with underscores

# Get the directory portion of the path
base_dir = os.path.basename(input_folder)

print("\nCreate Batch: base_dir:\n")
print(base_dir)

# Cast to str since t_value/speedx may be numbers when entered interactively
output_filename = base_dir.replace(" ", "_") + "_Timelapse_" + "Threshold_" + str(t_value) + "_Speedup_" + str(speedx) + "x.mp4"

print("\nCreate Batch: Default output timelapse filename:\n")
print(output_filename)
  
# Open a text file for writing
with open('./timelapse.bat', 'w') as f:
  # Write the commands to the text file
  for file in mp4_files:
    f.write(f'python .\\ffac3_cuda.py -i "{input_folder}\\{file}" -o .\\work\\{file}_AutoTimeLapse_t{t_value}.MP4 -s .\\edit.txt -hh dxva2 -v "hevc_nvenc -rc vbr -cq 10 -qmin 6 -qmax 14" -f C:\\ffmpeg\\bin\\ffmpeg.exe -t {t_value} -k -b {box}\n')

  # Write the additional command to the text file
  f.write('python.exe .\\ffmpegAutoConcat.py -i .\\work\\ -f ' + output_filename + ' -s ' + str(speedx) + '\n')

# Execute the file "timelapse.bat" if the user wants to
if args.yes or input("Execute timelapse.bat? [Y/n] ").lower() in ("", "y"):
  subprocess.run(['timelapse.bat'])
  print("Commands written to timelapse.bat and executed")
else:
  print("Commands written to timelapse.bat")

Now let's take a closer look at the create_Batch.py script. The first thing the script does is import several libraries that will be used later in the script. The argparse library is used to parse command-line arguments, which allows the user to specify certain options when running the script.

Next, we have the line parser = argparse.ArgumentParser(), which creates an ArgumentParser object that will be used to define the command-line arguments that the script accepts. The script accepts several arguments, including --inputfolder, which specifies the path to the input folder containing the input videos; --tvalue, which specifies the motion detection threshold to be used when creating the time lapse videos; --box, which specifies the size of the box blur filter to be applied to the videos; --yes, which tells the script to run the batch file without prompting the user for confirmation; and --speedup, which specifies the speedup multiplier for the final timelapse video.

After the arguments have been defined, the script uses the parse_args method of the ArgumentParser object to parse the command-line arguments and store them in a variable called args.

The script then checks if the --inputfolder argument was specified, and if it was, it sets the input_folder variable to the value of the argument. If the argument was not specified, the script prompts the user to enter the path to the input folder. The script does the same thing for the --tvalue, --box, and --speedup arguments, prompting the user to enter a value if the argument was not specified.

Next, the script gets a list of all the MP4 files in the input folder using the os library, and stores the list in the mp4_files variable.

The script then sets the base_dir variable to the base directory name of the input folder and uses it, with any spaces replaced by underscores, to build a default output filename for the final timelapse video; for example, C:\FolderWithVideos with -t 2 and -s 5 becomes FolderWithVideos_Timelapse_Threshold_2_Speedup_5x.mp4.

Finally, the script opens a batch file called timelapse.bat and writes a series of commands to it. The first set of commands runs the ffac3_cuda.py script on each input video, creating an intermediate motion-only video for each one. The last command runs the ffmpegAutoConcat.py script, which combines the intermediate videos into the final timelapse video.

That's a high-level overview of the create_Batch.py script. In summary, it writes a series of commands to timelapse.bat, which in turn processes the input videos and creates the final timelapse video.
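
To make that concrete: with the example command from above (a hypothetical C:\FolderWithVideos containing clip1.mp4 and clip2.mp4, -t 2, -s 5, -b 10), the generated timelapse.bat would look roughly like this:

python .\ffac3_cuda.py -i "C:\FolderWithVideos\clip1.mp4" -o .\work\clip1.mp4_AutoTimeLapse_t2.MP4 -s .\edit.txt -hh dxva2 -v "hevc_nvenc -rc vbr -cq 10 -qmin 6 -qmax 14" -f C:\ffmpeg\bin\ffmpeg.exe -t 2 -k -b 10
python .\ffac3_cuda.py -i "C:\FolderWithVideos\clip2.mp4" -o .\work\clip2.mp4_AutoTimeLapse_t2.MP4 -s .\edit.txt -hh dxva2 -v "hevc_nvenc -rc vbr -cq 10 -qmin 6 -qmax 14" -f C:\ffmpeg\bin\ffmpeg.exe -t 2 -k -b 10
python.exe .\ffmpegAutoConcat.py -i .\work\ -f FolderWithVideos_Timelapse_Threshold_2_Speedup_5x.mp4 -s 5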

In the next part of the tutorial, we'll take a detailed look at the code in the ffac3_cuda.py script, which the batch file executes once per input video, and explain how it works. We'll start with a quick overview of the code, then move on to a more detailed explanation of the motion detection section and the ffmpeg filter_complex file generation.

ffac3_cuda.py:

import os
os.environ['OPENCV_CUDA_BACKEND'] = 'cuda'
import cv2
import numpy as np
import argparse
import subprocess
import tqdm

# Ensure that the GPU is initialized
cv2.cuda.setDevice(0)

# Get the active CUDA device
device = cv2.cuda.getDevice()

# Get the device properties
device_info = cv2.cuda.DeviceInfo(device)

# Construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--input", required=True, help="path to input video file")
ap.add_argument("-k", "--show_thumbnail", action="store_true",
                help="show a thumbnail image of the frame currently being processed")
ap.add_argument("-vv", "--verbose", action="store_true", help="Print verbose output")
ap.add_argument("-o", "--output", required=True, help="path to output video file")
ap.add_argument("-s", "--edit", required=True, help="path to output edit script file")
ap.add_argument("-f", "--ffmpeg", type=str, default="c:\ffmpeg\bin\ffmpeg.exe", help="path to ffmpeg executable")
ap.add_argument("-hh", "--hwaccel", type=str, default="cuda", help="hardware encoder to use")
ap.add_argument("-v", "--vcodec", type=str, default="h264", help="video codec to use")
ap.add_argument("-t", "--threshold", type=float, default=15.0, help="relative motion threshold (0-100)")
ap.add_argument("-b", "--box", type=int, default=10, help="Size of box blur filter applied in GPU")


args = vars(ap.parse_args())

# Set the verbose flag
verbose = (args["verbose"])
filepath = (args["input"])
rawfilepath = (f"\"{filepath}\"")
filename = os.path.basename(filepath)
# Open the video file
video = cv2.VideoCapture(args["input"])

# Initialize the box filter size

if args["box"]:
    box_size = (args["box"])
else:
    box_size = 10

# Get the video properties

fps = video.get(cv2.CAP_PROP_FPS)
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
num_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))

# Calculate the absolute threshold based on the relative threshold specified by the user
relative_threshold = args["threshold"]
threshold = args["threshold"] / 10.0 * 570 * 320
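# Note: frames are analyzed at 570x320, so threshold/10 is effectively the average
# per-pixel grayscale difference that counts as motion (e.g. -t 15 -> 1.5 per pixel)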

print("\n\n************************************************************\n")
print(f"Starting nate's automagical motion detection editor program! https://YouTube.com/NathanaelNewton")
print("\n************************************************************\n")
# Print the GPU device properties
print("Active CUDA device:", device)
print("Total memory:", device_info.totalMemory() / (1024**2), "MB")
print("\n************************************************************\n")


print(f"Now processing {filepath}")
print(f"If the code worked this path has quotes on it! {rawfilepath}")
print(f"The input video is {fps} frames per second")
print(f"The resolution is {width}x{height}")
print(f"The total number of frames in the video is: {num_frames}")
print("\n************************************************************\n")
print(f"The relative threshold of {relative_threshold} is calculated to have an absolute value of {threshold} \n"
      f"Remember! A higher threshold number = less sensitivity and less total frames in the output video") 
print("\n************************************************************\n\n")
print("Box filter Size:")
print(f"{box_size} x {box_size}")
print("\n************************************************************\n\n")

# Initialize the edit script
edit_script = ""

# Initialize the previous frame
prev_frame = None

# Initialize the start and end times for the current section
start_time = 0
end_time = 0

# Initialize the flag for detecting motion
detect_motion = False

# Initialize the counter for the number of frames
frame_count = 0

# create a frame on GPU for images
gpu_frame = cv2.cuda_GpuMat()

# Create the progress bar
pbar = tqdm.tqdm(total=num_frames)

# Initialize the counters for motion and no-motion frames (starting at 1 instead of 0 avoids a divide-by-zero error; that's why the output says "Approx")

motion_count = 1
no_motion_count = 1

# Initialize variables to keep track of the average, minimum, and maximum values of calc_thresh
average_calc_thresh = 0
min_calc_thresh = float('inf')
max_calc_thresh = float('-inf')


# Loop through the frames of the video
while True:
    # Read the next frame
    ret, frame = video.read()

    # Check if we have reached the end of the video
    if not ret:
        break

    gpu_frame.upload(frame)

    # Shrink the frame, convert it to grayscale, and build a box blur filter
    gpu_frame_small = cv2.cuda.resize(gpu_frame, (570, 320))
    gray = cv2.cuda.cvtColor(gpu_frame_small, cv2.COLOR_BGR2GRAY)
    filter = cv2.cuda.createBoxFilter(gray.type(), -1, (box_size, box_size))

    # Apply the filter to the GpuMat
    gpu_blurred = filter.apply(gray)
    
    # Calculate the absolute difference between the current frame and the previous frame
    gpu_diff = cv2.cuda.absdiff(gpu_blurred, prev_frame) if prev_frame is not None else gpu_blurred
    
    if args["show_thumbnail"]:
            gputhumbnail = gpu_diff
            gthumb = gputhumbnail.download()
            cv2.imshow('Difference Calculation 2', gthumb)
            cv2.waitKey(1)
    
    diff = gpu_diff.download()
     
    # Calculate the sum of the absolute differences
    sum_diff = np.sum(diff)

    overage = round(sum_diff / threshold, 3)
    overage = (f'{overage:.3f}')
    calc_thresh = round(sum_diff * 10.0 / 570 / 320,4)
    
    # Update the running average, minimum, and maximum values of calc_thresh
    average_calc_thresh = (average_calc_thresh * 49 + calc_thresh) / 50

    # Check if the sum is above the threshold
    if sum_diff > threshold:
        # Motion has been detected
        motion_count += 1

        # Check if we were previously detecting motion

        if detect_motion:
            # Update the end time
            end_time = frame_count / fps
        else:
            # Start a new section
            start_time = frame_count / fps
            end_time = start_time
            detect_motion = True

        # Display a thumbnail image of the frame being currently processed if the user specifies the -k flag
        if args["show_thumbnail"]:
            thumbnail = cv2.resize(frame, (570, 320))
            cv2.imshow('Frames being included 2', thumbnail)
            cv2.waitKey(1)

        # Calculate the ratio and percentile of frames with motion to frames with no motion
        motion_ratio = round(motion_count / (motion_count + no_motion_count), 4)
        motion_percent = round(100 * motion_count / (frame_count + 1), 1)

        # Update the rounded version of the running average
        
        ave_round = round(average_calc_thresh,4)

        # Print a message indicating that motion has been detected
 
        print(
            f"\033[F\033[0K\u001b[42;1m***** Motion\u001b[0m in frame: {frame_count} \tFrames in {filename} at threshold of {relative_threshold} with motion: {motion_count}  \t "
            f"Without: {no_motion_count} \t Approx. \u001b[40;1m{motion_percent} %\u001b[0m of the frames have motion \t Detected: {sum_diff}\t\tRelative multiplier {overage}\t\tCalculated threshold is {calc_thresh}\t\tAverage of last 50 frames: {ave_round}") 
    else:
        # No motion has been detected
        no_motion_count += 1

        # Check if we were previously detecting motion

        if detect_motion:
            # Motion has stopped, so add the current section to the edit script
            edit_script += f"between(t,{start_time},{end_time})+"
            detect_motion = False
        else:

            if args["show_thumbnail"]:
                thumbnail2 = cv2.resize(frame, (570, 320))
                cv2.imshow('Frames being discarded 2', thumbnail2)
                cv2.waitKey(1)

        motion_ratio = round(motion_count / (motion_count + no_motion_count), 4)
        motion_percent = round(100 * motion_count / (frame_count + 1), 1)  # +1 avoids divide-by-zero on the first frame
        # Update the rounded version of the running average
        
        ave_round = round(average_calc_thresh,4)
        
        print(
            f"\033[F\033[0K\u001b[41;1m** No motion\u001b[0m in frame: {frame_count} \tFrames in {filename} at threshold of {relative_threshold} with motion: {motion_count} \t "
            f"Without: {no_motion_count} \t Approx. \u001b[40;1m{motion_percent} %\u001b[0m of the frames have motion \t Detected: {sum_diff}\t\tRelative multiplier {overage}\t\tCalculated threshold is {calc_thresh}\t\tAverage of last 50 frames: {ave_round}") 

    # Update the previous frame
    prev_frame = gpu_blurred

    # Increment the frame counter
    frame_count += 1

    # Update the progress bar
    pbar.update(1)

# Close the progress bar
pbar.close()

# Print the total number of frames with motion and no motion
print(f"Total number of frames with motion: {motion_count}")
print(f"Total number of frames with no motion: {no_motion_count}")

# Calculate the ratio of frames with motion to frames with no motion

motion_ratio = motion_count / (motion_count + no_motion_count)

print(f"Ratio of frames with motion to frames with no motion: {motion_ratio:.2f}")

# Check if we were still detecting motion at the end of the video
if detect_motion:
    # Add the final section to the edit script
    edit_script += f"between(t,{start_time},{end_time})"

# Add the trailing code to the edit command

edit_video = f"[0:v]select=\'{edit_script}"
edit_audio = f"[0:a]aselect=\'{edit_script}"

# Remove the extra '+' sign at the end of the edit script, if any
edit_video = edit_video.rstrip("+")
edit_audio = edit_audio.rstrip("+")

edit_video += f"\',setpts=N/FRAME_RATE/TB[v];\n"
edit_audio += f"\',asetpts=N/SR/TB[a]"

filter_complex = edit_video + edit_audio
if verbose:
    print("Printing the complex filter\n************\n")
    print(filter_complex)

# Close the video file
video.release()

# Print the Edit scripts for debugging purposes

if verbose:
    print("\n\n************\nVideo script \n")
    print(edit_video)
    print("\n\n************\nAudio script \n")
    print(edit_audio)

# Check if the edit script is not empty
if edit_video:
    # Create a temporary file to store the edit script
    with open("filter_complex.txt", "w") as f:
        f.write(filter_complex)

    # Get the path of the temporary file
    edit_video = f.name

    # Create the command for ffmpeg

    command = f"{args['ffmpeg']} -hwaccel {args['hwaccel']} -i {rawfilepath} -filter_complex_script .\\{edit_video} -vcodec {args['vcodec']} -map [v] -map [a] {args['output']} "

    print("\n\n************\nNOW Executing the following ffmpeg command:\n")
    print(command)
    print("\n\n************\n")
    # Execute the command
    subprocess.run(command)
    

So let's get started with a quick overview of the code. The ffac3_cuda.py script is written in Python and uses the OpenCV library to process the input video. The script accepts a number of command-line arguments, including the path to the input video, the path to the output video, and the path to the output edit script file. The script also allows the user to specify the hardware encoder to use, the video codec to use, and the relative motion threshold to use when creating the timelapse video.

  • -i or --input: the path to the input video file that will be processed. This argument is required.

  • -k or --show_thumbnail: a flag that tells the script to show a thumbnail image of the frame currently being processed.

  • -vv or --verbose: a flag that tells the script to print verbose output to the console.

  • -o or --output: the path to the output video file. This argument is required.

  • -s or --edit: the path to the output edit script file. This argument is required.

  • -f or --ffmpeg: the path to the ffmpeg executable. The default value is "c:\ffmpeg\bin\ffmpeg.exe".

  • -hh or --hwaccel: the hardware decode acceleration to use. The default value is "cuda".

  • -v or --vcodec: the video codec to use. The default value is "h264".

  • -t or --threshold: the relative motion threshold for the timelapse, which should be a numeric value between 0 and 100. The default value is 15.

  • -b or --box: the size of the box blur filter applied to the video file in the GPU. This should be an integer value. The default value is 10.

Example command line usage with custom parameters to use h.265:

python .\ffac3_cuda.py -i .\MVI_0248.MP4 -o .\MVI_0248_AutoTimeLapse_t12.MP4 -s .\edit.txt -hh dxva2 -v "hevc_nvenc -rc vbr -cq 10 -qmin 6 -qmax 14" -f C:\ffmpeg\bin\ffmpeg.exe -t 12

The script begins by importing several libraries and setting up the GPU for use with OpenCV. It then parses the command-line arguments and gets the properties of the input video, such as the frame rate, resolution, and number of frames.

The script then calculates the absolute motion threshold based on the relative motion threshold specified by the user, and opens the input video using the cv2.VideoCapture function.

Next, the script enters a loop that reads each frame of the input video and checks it for motion, recording the time ranges in which motion occurs.

The motion detection section of the code is responsible for analyzing each frame of the input video and determining whether it should be included in the timelapse video. The script uses the OpenCV cv2.absdiff function to compare each frame to the previous frame and calculate the absolute difference between them. This difference is then compared to the absolute motion threshold that was calculated earlier in the script. If the difference is greater than the threshold, it means that there was enough motion in the frame to be included in the timelapse video. If the difference is less than the threshold, it means that there was not enough motion in the frame and it should be discarded.

The script also uses the OpenCV cv2.cuda.GpuMat class to create a GPU memory buffer that stores the intermediate frames as they are processed. This lets the script take advantage of the GPU's parallel processing capabilities and significantly speeds up motion detection, although this feature is barebones and has significant room for improvement. Personally, I run two copies of the program at once from different working directories to maximize CPU and GPU usage on my PC. Your PC may be more powerful and able to run even more instances.

While this version uses CUDA, it's quite simple to convert the function calls to their CPU versions; in fact, the first version of this software used the CPU to process everything. Leave a comment if you'd like me to release that version as well in a follow-up video.
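
For reference, here is roughly what the CPU-only core of the motion detector looks like (a simplified sketch of the same logic, not the full script):

import cv2
import numpy as np

def motion_frames(path, relative_threshold=15.0, box_size=10):
    """Yield (frame_index, has_motion) for each frame, using plain CPU OpenCV calls."""
    video = cv2.VideoCapture(path)
    threshold = relative_threshold / 10.0 * 570 * 320  # same scaling as the CUDA version
    prev = None
    index = 0
    while True:
        ret, frame = video.read()
        if not ret:
            break
        # Same pipeline as the GPU path: shrink, grayscale, box blur
        small = cv2.resize(frame, (570, 320))
        gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
        blurred = cv2.blur(gray, (box_size, box_size))
        diff = cv2.absdiff(blurred, prev) if prev is not None else blurred
        yield index, np.sum(diff) > threshold
        prev = blurred
        index += 1
    video.release()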

Once the motion detection pass is complete, the script records the start and end times of each motion section in the edit script, which is then used to build the ffmpeg filter that cuts the final timelapse video.

[In the video: Show the section of the code that performs the motion detection, the code in action and explain all the console output]

That's a detailed explanation of the motion detection section of the ffac3_cuda.py script. As you can see, this script uses a combination of OpenCV functions and GPU acceleration to efficiently and accurately detect motion in the input video and create a timelapse video from the resulting frames.

In the next part of this tutorial, we'll be taking a closer look at the ffmpeg filter_complex file generation section of the code and explaining how it works.



[In the video: Some kind of time lapse intermission]


In this part of the tutorial, we'll be taking a detailed look at the ffmpeg filter_complex file generation section of the ffac3_cuda.py script.

The ffmpeg filter_complex file generation section of the code is responsible for creating a file that specifies the filters that will be applied to the intermediate timelapse videos to create the final timelapse video. This file is written in the ffmpeg filtergraph syntax and is used by the ffmpeg command-line tool to apply the filters to the videos.

The script builds this file during the motion detection loop: each time motion stops, it appends a between(t,start,end) term to the running select expression. Once the whole video has been processed, the script combines the video and audio select expressions, strips any trailing '+' sign, and writes the result to filter_complex.txt using Python's write function. That file is then passed to ffmpeg via the -filter_complex_script option.
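
For example, if motion were detected from 0 to 2.5 seconds and again from 7.1 to 9 seconds (hypothetical timestamps), the generated filter_complex.txt would contain:

[0:v]select='between(t,0.0,2.5)+between(t,7.1,9.0)',setpts=N/FRAME_RATE/TB[v];
[0:a]aselect='between(t,0.0,2.5)+between(t,7.1,9.0)',asetpts=N/SR/TB[a]

The select/aselect filters keep only the frames and audio samples whose timestamps fall inside those windows, and setpts/asetpts regenerate timestamps so the kept material plays back contiguously.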

[Show the section of the code that generates the ffmpeg filter_complex file]

ffmpegAutoConcat.py:


import os
import subprocess
import argparse

# Parse the command-line arguments
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--inputfolder", help="path to the input folder")
parser.add_argument("-f", "--filename", help="Output file name")
parser.add_argument("-s", "--speedup", help="amount to speed up the video")

# parser.add_argument("-y", "--yes", action="store_true", help="execute timelapse.bat without prompting")
args = parser.parse_args()

def concatenate_files(folder_path):
    # Create a list of all the mp4 file paths in the specified folder
    file_paths = [os.path.join(folder_path, file) for file in os.listdir(folder_path) if file.lower().endswith('.mp4')]

    # Write the list of file paths to the file mylist.txt, with each line prepended with "file " and the path wrapped in single quotes
    with open("mylist.txt", "w") as file:
        file.write("\n".join(['file \'' + path + '\'' for path in file_paths]))
  
    # Use ffmpeg to concatenate the files
    concat_command = (["c:\\ffmpeg\\bin\\ffmpeg.exe", "-y", "-f", "concat", "-safe", "0", "-i", "mylist.txt", "-c:v", "copy", "-an", "temp_output_delete_me.mp4"])
    print("\n\n************\n")
    print("Starting nate's automagical automatica timelapse generator! https://YouTube.com/NathanaelNewton")
    print("NOW Executing the following ffmpeg command:\n")
    print(concat_command)
    if args.inputfolder:
       print(f"Input folder: {args.inputfolder}")
    if args.speedup:
       print(f"Speedup multiplier: {args.speedup}")
    
    print("\n\n************\n")
    
    subprocess.run(concat_command)


def create_time_lapse(input_path, output_path, speed):
    # Use ffmpeg to create a time lapse of the input video
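    # setpts={1/speed}*PTS rescales frame timestamps (e.g. speed=10 gives setpts=0.1*PTS,
    # i.e. ten times faster), and -r 60 forces a 60 fps output, dropping surplus frames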
    time_lapse_command = (["c:\\ffmpeg\\bin\\ffmpeg.exe", "-y", "-hwaccel", "dxva2", "-i", input_path, "-vf", f"setpts={1 / speed}*PTS", "-vcodec", "hevc_nvenc", "-rc", "vbr", "-cq", "10", "-qmin", "6", "-qmax", "14", "-r", "60", output_path])
    
    print("\n\n************\nNOW Executing the following ffmpeg command:\n")
    print(time_lapse_command)
    print("\n\n************\n")
    subprocess.run(time_lapse_command)

if __name__ == "__main__":
    if args.inputfolder:
      folder_path_response = args.inputfolder
    else:
      folder_path_response = input("Enter the path to the input folder: ")

    # Concatenate the files
    folder_path = folder_path_response
    concatenate_files(folder_path)
    
    # Set the default input and output paths to the current directory
    input_path = "temp_output_delete_me.mp4"
    
    # Ask the user the speedup value if not in the commandline execution options
    
    if args.speedup:
      speed = float(args.speedup)
    else:
      speed = float(input("Enter the speed at which to speed up the final timelapse (e.g. 2 for 2x speed) : "))

    # Default the output name to the -f argument; fall back to a placeholder so
    # output_path is always defined (the original code left it unset in this case)
    output_path = args.filename if args.filename else "timelapse_output.mp4"
    if args.filename:
        print("Default Name:\n")
        print(args.filename)

    output_path_response = input("\n\n************\nEnter the path for the output video (press Enter for default): ")
    if output_path_response:
        output_path = output_path_response

    # Create the time lapse
    create_time_lapse(input_path, output_path, speed)


Overall, these scripts are a powerful tool that lets you create timelapse videos from a series of input videos with ease. They use a combination of Python, OpenCV, and GPU acceleration to efficiently and accurately detect motion in the input videos and create the intermediate timelapse videos.

One of the benefits of using open source software like this script is that it is freely available for anyone to use and modify. This means that you can use the script to create timelapse videos for your own personal use, or you can modify the script to fit your specific needs (Provided you follow the respective license agreements.)

Another benefit of using open source software is that it is typically developed and maintained by a community of volunteers who are passionate about the software and are dedicated to making it better. This means that open source software is often of high quality and is regularly updated to fix bugs and add new features.

One thing to note is that figuring out the filter_complex section was very difficult, and it seems to be insufficiently documented. However, with a little perseverance and some online research, it's possible to understand how it works and modify it to fit your specific needs.

Have you ever used ffmpeg or a similar tool to create timelapse videos? Do you have any ideas for how the scripts could be improved or modified? I know I can think of a few... feel free to share your suggestions!

Are you interested in learning more about Python, OpenCV, or other programming languages and technologies? Let me know what you think of my presentation style and what subjects you'd like to learn more about.