Friday, January 6, 2023

I've published my first (complicated) software!! https://github.com/igmrlm/SmartAutoTimelapse


This is the text script of a YouTube video I'll be uploading soon. I very rarely publish updates here, so follow me on YouTube to stay up to date: https://youtube.com/NathanaelNewton

For now, here's a video edited with this software:



Hello everyone! I know this is a bit different from some of my usual videos. In today's video, I'll be showcasing and explaining a series of Python scripts I've written recently, with the help of OpenAI's ChatGPT bot, that completely automate the process of creating timelapse videos.

Before we get started, THE LEGAL STUFF!!

The code in this tutorial makes use of Python, FFmpeg, and OpenCV, and is subject to their respective licenses.

Python is licensed under the Python Software Foundation License, which can be found at: https://docs.python.org/3/license.html

FFmpeg is licensed under the GNU Lesser General Public License, which can be found at: https://www.gnu.org/licenses/old-licenses/lgpl-2.1.html

OpenCV is licensed under the BSD 3-Clause License, which can be found at: https://opencv.org/license/. By using these software tools in this tutorial, you agree to be bound by their respective licenses.

This code was originally generated with assistance from the OpenAI chatbot GPT-3, but I have since heavily modified it to better fit my use case, with significant troubleshooting and rewriting to make it actually work.

Please note that the use of OpenAI generated content is subject to the OpenAI API terms of use, which can be found at: https://beta.openai.com/docs/api-terms/terms. By using this script, you agree to these terms and the following:

Please be aware that this script is provided "as is" without warranty of any kind, express or implied. I make no representations or warranties of any kind with respect to this script or its use, and I shall not be liable for any damages of any kind arising from the use of this script.

By using this script, you agree to assume all responsibility for any damages or losses that may occur as a result of its use.

I hope you find this tutorial helpful. Please be sure to read and understand the legal notices outlined above before using the script. Let's get started!

 

Before we dive into the code, let's talk about what timelapse videos are and how they're created. A timelapse video takes a series of photos captured at a slow interval and plays them back at a much faster frame rate, creating the illusion of time passing more quickly. Timelapse videos can also be created by capturing still frames from an ordinary video at a set interval and then combining them into a single clip with video editing software. This isn't as high quality as using individual raw frames, but it's much simpler from a production perspective.
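To make the relationship concrete, here's a small sketch of the timelapse arithmetic (the numbers below are illustrative, not from the scripts): capturing one frame every `interval` seconds and playing back at `fps` compresses time by a factor of `interval * fps`.

```python
def timelapse_speedup(capture_interval_s: float, playback_fps: float) -> float:
    """How many times faster than real time the playback appears."""
    return capture_interval_s * playback_fps

def playback_duration_s(real_duration_s: float, capture_interval_s: float,
                        playback_fps: float) -> float:
    """How long the finished timelapse runs."""
    frames = real_duration_s / capture_interval_s
    return frames / playback_fps

# One frame every 2 seconds played back at 30 fps -> 60x speedup;
# an hour of real time becomes a 60-second clip.
print(timelapse_speedup(2, 30))          # 60
print(playback_duration_s(3600, 2, 30))  # 60.0
```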

But what if you have a large number of videos that you want to turn into a timelapse video? Doing it manually can be time-consuming and tedious, especially if there are long stretches where nothing happens, say you left the room or forgot to stop recording, and those sections have to be cut out of the timelapse. That's where this Python script comes in. It completely automates this process!!

Overview and Command line Arguments

This setup uses two other scripts: ffac3_cuda.py, which detects motion in each frame and keeps only the sections with significant movement, and ffmpegAutoConcat.py, which concatenates the resulting clips into a single video and then speeds it up into the final time-lapse.

First, let's take a look at the arguments we can pass to the batch creation script. 

  • -i or --inputfolder: the path to the input folder containing the video files that will be used to create the timelapse.

  • -t or --tvalue: the motion detection threshold for the timelapse, which should be a numeric value between 1 and 100.

  • -b or --box: the size of the box blur filter applied to the video files in the GPU. This should be an integer value.

  • -y or --yes: a flag that tells the script to execute the timelapse.bat file without prompting the user for confirmation.

  • -s or --speedup: a numerical multiplier for the speed at which the final timelapse video will play. For example, a value of 10 would result in a timelapse video that plays 10 times faster than normal.

Example Command Line Usage: 

python.exe .\create_Batch.py -y -t 2 -s 5 -b 10 -i "C:\FolderWithVideos"

create_Batch.py:

import os
import subprocess
import argparse

# Parse the command-line arguments
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--inputfolder", help="path to the input folder")
parser.add_argument("-t", "--tvalue", help="Motion detection threshold, Numeric value 1-100")
parser.add_argument("-b", "--box", type=int, help="Size of box blur filter applied in GPU")
parser.add_argument("-y", "--yes", action="store_true", help="execute timelapse.bat without prompting")
parser.add_argument("-s", "--speedup", help="Numerical speedup multiplier for the final timelapse video, E.G. 1=1x, 10=10x")


args = parser.parse_args()

# Get the input folder from the command-line argument or ask the user for it
if args.inputfolder:
  input_folder = args.inputfolder
else:
  input_folder = input("Enter the path to the input folder: ")

# Get the -t value from the command-line argument or ask the user for it
if args.tvalue:
  t_value = args.tvalue
else:
  t_value = input("Enter a numeric value for the motion detection threshold from 1-100: ")
  
if args.speedup:
    speedx = args.speedup
else:
    speedx = float(input("Enter the speed at which to speed up the final timelapse (e.g. 2 for 2x speed) : "))
    
if args.box:
    box = args.box
else:
    box = int(input("Enter the size of the desired box blur filter, e.g. 10 for 10x10: "))

print("\nCreate Batch: speedx (Timelapse Speed Multiplier setting)")
print(speedx)

# Get a list of all the mp4 files in the input folder
mp4_files = [f for f in os.listdir(input_folder) if f.lower().endswith('.mp4')]

# Write the list of files  and the input folder to the console for debugging purposes
print("\n")
print(mp4_files)

print("\nCreate Batch: input_folder")
print(input_folder)

# Set the output path to the base directory of the input folder with spaces replaced with underscores

# Get the directory portion of the path
base_dir = os.path.basename(input_folder)

print("\nCreate Batch: base_dir:\n")
print(base_dir)

output_filename = base_dir.replace(" ", "_") + "_Timelapse_Threshold_" + str(t_value) + "_Speedup_" + str(speedx) + "x.mp4"

print("\nCreate Batch: Default output timelapse filename:\n")
print(output_filename)
  
# Open a text file for writing
with open('./timelapse.bat', 'w') as f:
  # Write the commands to the text file
  for file in mp4_files:
    f.write(f'python .\\ffac3_cuda.py -i "{input_folder}\\{file}" -o .\\work\\{file}_AutoTimeLapse_t{t_value}.MP4 -s .\\edit.txt -hh dxva2 -v "hevc_nvenc -rc vbr -cq 10 -qmin 6 -qmax 14" -f C:\\ffmpeg\\bin\\ffmpeg.exe -t {t_value} -k -b {box}\n')

  # Write the additional command to the text file
  f.write(f'python.exe .\\ffmpegAutoConcat.py -i .\\work\\ -f {output_filename} -s {speedx}\n')

# Execute the file "timelapse.bat" if the user wants to
if args.yes or input("Execute timelapse.bat? [Y/n] ").lower() == "y":
  subprocess.run('timelapse.bat', shell=True)  # .bat files need the shell on Windows
  print("Commands written to timelapse.bat and executed")
else:
  print("Commands written to timelapse.bat")

Now let's take a closer look at the create_Batch.py script. The first thing the script does is import several libraries that will be used later in the script. The argparse library is used to parse command-line arguments, which allows the user to specify certain options when running the script.

Next, we have the line parser = argparse.ArgumentParser(), which creates an ArgumentParser object that will be used to define the command-line arguments that the script accepts. The script accepts several arguments, including --inputfolder, which specifies the path to the input folder containing the input videos; --tvalue, which specifies the motion detection threshold to be used when creating the time lapse videos; --box, which specifies the size of the box blur filter to be applied to the videos; --yes, which tells the script to run the batch file without prompting the user for confirmation; and --speedup, which specifies the speedup multiplier for the final timelapse video.

After the arguments have been defined, the script uses the parse_args method of the ArgumentParser object to parse the command-line arguments and store them in a variable called args.

The script then checks whether the --inputfolder argument was specified; if it was, it sets the input_folder variable to the value of the argument, and if not, it prompts the user to enter the path to the input folder. The script does the same thing for the --tvalue, --box, and --speedup arguments, prompting the user for a value whenever the argument was not specified.
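That "use the flag if given, otherwise prompt" pattern repeats four times in the script, so it could be folded into a small helper. This is a sketch of my own, not code from the repository; the name arg_or_prompt is made up:

```python
def arg_or_prompt(value, prompt, cast=str):
    """Return the parsed command-line value if present, otherwise ask the user.

    `value` is the attribute from argparse (None when the flag was omitted);
    `cast` converts the raw string (e.g. int or float) in either case.
    """
    if value is not None:
        return cast(value)
    return cast(input(prompt))

# Hypothetical usage, mirroring the script's prompts:
# t_value = arg_or_prompt(args.tvalue,
#                         "Enter a numeric value for the motion detection threshold from 1-100: ",
#                         float)
```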

Next, the script gets a list of all the MP4 files in the input folder using the os library, and stores the list in the mp4_files variable.

The script then sets the base_dir variable to the base directory of the input folder, with any spaces replaced with underscores. The script uses this value to create a default output filename for the final timelapse video.
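Worked through with a made-up folder path (forward slashes used so the example is portable; the real script takes Windows paths), the default-name logic looks like this:

```python
import os

# Sample inputs -- the folder name and settings here are illustrative.
input_folder = "C:/Videos/Desk Build Day 1"
t_value, speedx = "2", "5"

base_dir = os.path.basename(input_folder)   # "Desk Build Day 1"
output_filename = (base_dir.replace(" ", "_")
                   + "_Timelapse_Threshold_" + t_value
                   + "_Speedup_" + speedx + "x.mp4")

print(output_filename)  # Desk_Build_Day_1_Timelapse_Threshold_2_Speedup_5x.mp4
# Note: a trailing slash on the folder path would make basename() return "",
# so pass the path without one.
```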

Finally, the script opens a file called timelapse.bat and writes a series of commands to it. The first set of commands runs the ffac3_cuda.py script on each input video, creating an intermediate motion-edited video for each one. The last command runs the ffmpegAutoConcat.py script, which combines the intermediate videos into the final timelapse video.
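For illustration, here is one line of timelapse.bat assembled the same way create_Batch.py assembles it (the folder, filename, and settings below are made up):

```python
# Building a single batch-file line the way create_Batch.py does.
input_folder = "C:/FolderWithVideos"   # illustrative values
file, t_value, box = "MVI_0001.mp4", "2", 10

line = (f'python .\\ffac3_cuda.py -i "{input_folder}\\{file}" '
        f'-o .\\work\\{file}_AutoTimeLapse_t{t_value}.MP4 '
        f'-s .\\edit.txt -hh dxva2 '
        f'-v "hevc_nvenc -rc vbr -cq 10 -qmin 6 -qmax 14" '
        f'-f C:\\ffmpeg\\bin\\ffmpeg.exe -t {t_value} -k -b {box}')

print(line)
```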

That's a high-level overview of the create_Batch.py script. In summary, it writes a series of commands to a batch file, timelapse.bat, which processes the input videos and creates the final timelapse video when executed.

In the next part of the tutorial, we'll take a detailed look at the code in the ffac3_cuda.py script, which the batch file executes once per input video, and explain how it works. We'll start with a quick overview of the code, and then move on to a more detailed explanation of the motion detection section and the ffmpeg filter_complex file generation.

ffac3_cuda.py:

import os
os.environ['OPENCV_CUDA_BACKEND'] = 'cuda'
import cv2
import numpy as np
import argparse
import subprocess
import tqdm

# Ensure that the GPU is initialized
cv2.cuda.setDevice(0)

# Get the active CUDA device
device = cv2.cuda.getDevice()

# Get the device properties
device_info = cv2.cuda.DeviceInfo(device)

# Construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--input", required=True, help="path to input video file")
ap.add_argument("-k", "--show_thumbnail", action="store_true",
                help="show a thumbnail image of the frame currently being processed")
ap.add_argument("-vv", "--verbose", action="store_true", help="Print verbose output")
ap.add_argument("-o", "--output", required=True, help="path to output video file")
ap.add_argument("-s", "--edit", required=True, help="path to output edit script file")
ap.add_argument("-f", "--ffmpeg", type=str, default=r"c:\ffmpeg\bin\ffmpeg.exe", help="path to ffmpeg executable")  # raw string: \f and \b would otherwise be escape characters
ap.add_argument("-hh", "--hwaccel", type=str, default="cuda", help="hardware encoder to use")
ap.add_argument("-v", "--vcodec", type=str, default="h264", help="video codec to use")
ap.add_argument("-t", "--threshold", type=float, default=15.0, help="relative motion threshold (0-100)")
ap.add_argument("-b", "--box", type=int, default=10, help="Size of box blur filter applied in GPU")


args = vars(ap.parse_args())

# Set the verbose flag
verbose = (args["verbose"])
filepath = (args["input"])
rawfilepath = (f"\"{filepath}\"")
filename = os.path.basename(filepath)
# Open the video file
video = cv2.VideoCapture(args["input"])

# Initialize the box filter size

if args["box"]:
    box_size = (args["box"])
else:
    box_size = 10

# Get the video properties

fps = video.get(cv2.CAP_PROP_FPS)
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
num_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))

# Calculate the absolute threshold based on the relative threshold specified by the user
relative_threshold = args["threshold"]
threshold = args["threshold"] / 10.0 * 570 * 320

print("\n\n************************************************************\n")
print(f"Starting nate's automagical motion detection editor program! https://YouTube.com/NathanaelNewton")
print("\n************************************************************\n")
# Print the GPU device properties
print("Active CUDA device:", device)
print("Total memory:", device_info.totalMemory() / (1024**2), "MB")
print("\n************************************************************\n")


print(f"Now processing {filepath}")
print(f"If the code worked this path has quotes on it! {rawfilepath}")
print(f"The input video is {fps} frames per second")
print(f"The resolution is {width}x{height}")
print(f"The total number of frames in the video is: {num_frames}")
print("\n************************************************************\n")
print(f"The relative threshold of {relative_threshold} is calculated to have an absolute value of {threshold} \n"
      f"Remember! A higher threshold number = less sensitivity and less total frames in the output video") 
print("\n************************************************************\n\n")
print("Box filter Size:")
print(f"{box_size} x {box_size}")
print("\n************************************************************\n\n")

# Initialize the edit script
edit_script = ""

# Initialize the previous frame
prev_frame = None

# Initialize the start and end times for the current section
start_time = 0
end_time = 0

# Initialize the flag for detecting motion
detect_motion = False

# Initialize the counter for the number of frames
frame_count = 0

# create a frame on GPU for images
gpu_frame = cv2.cuda_GpuMat()

# Create the progress bar
pbar = tqdm.tqdm(total=num_frames)

# Initialize the counters for motion and no-motion frames (starting these at 0 would cause a divide-by-zero, which is why I say "Approx." haha)

motion_count = 1
no_motion_count = 1

# Initialize variables to keep track of the average, minimum, and maximum values of calc_thresh
average_calc_thresh = 0
min_calc_thresh = float('inf')
max_calc_thresh = float('-inf')


# Loop through the frames of the video
while True:
    # Read the next frame
    ret, frame = video.read()

    # Check if we have reached the end of the video
    if not ret:
        break

    gpu_frame.upload(frame)

    # Convert the frame to grayscale
    gpu_frame_small = cv2.cuda.resize(gpu_frame, (570, 320))
    gray = cv2.cuda.cvtColor(gpu_frame_small, cv2.COLOR_BGR2GRAY)
    filter = cv2.cuda.createBoxFilter(gray.type(), -1, (box_size, box_size))

    # Apply the filter to the GpuMat
    gpu_blurred = filter.apply(gray)
    
    # Calculate the absolute difference between the current frame and the previous frame
    gpu_diff = cv2.cuda.absdiff(gpu_blurred, prev_frame) if prev_frame is not None else gpu_blurred
    
    if args["show_thumbnail"]:
            gputhumbnail = gpu_diff
            gthumb = gputhumbnail.download()
            cv2.imshow('Difference Calculation 2', gthumb)
            cv2.waitKey(1)
    
    diff = gpu_diff.download()
     
    # Calculate the sum of the absolute differences
    sum_diff = np.sum(diff)

    overage = round(sum_diff / threshold, 3)
    overage = (f'{overage:.3f}')
    calc_thresh = round(sum_diff * 10.0 / 570 / 320,4)
    
    # Update the running average, minimum, and maximum values of calc_thresh
    average_calc_thresh = (average_calc_thresh * 49 + calc_thresh) / 50

    # Check if the sum is above the threshold
    if sum_diff > threshold:
        # Motion has been detected
        motion_count += 1

        # Check if we were previously detecting motion

        if detect_motion:
            # Update the end time
            end_time = frame_count / fps
        else:
            # Start a new section
            start_time = frame_count / fps
            end_time = start_time
            detect_motion = True

        # Display a thumbnail image of the frame being currently processed if the user specifies the -k flag
        if args["show_thumbnail"]:
            thumbnail = cv2.resize(frame, (570, 320))
            cv2.imshow('Frames being included 2', thumbnail)
            cv2.waitKey(1)

        # Calculate the ratio and percentile of frames with motion to frames with no motion
        motion_ratio = round(motion_count / (motion_count + no_motion_count), 4)
        motion_percent = round(100 * motion_count / (frame_count + 1), 1)

        # Updated the rounded versions of the averages
        
        ave_round = round(average_calc_thresh,4)

        # Print a message indicating that motion has been detected
 
        print(
            f"\033[F\033[0K\u001b[42;1m***** Motion\u001b[0m in frame: {frame_count} \tFrames in {filename} at threshold of {relative_threshold} with motion: {motion_count}  \t "
            f"Without: {no_motion_count} \t Approx. \u001b[40;1m{motion_percent} %\u001b[0m of the frames have motion \t Detected: {sum_diff}\t\tRelative multiplier {overage}\t\tCalculated threshold is {calc_thresh}\t\tAverage of last 50 frames: {ave_round}")
    else:
        # No motion has been detected
        no_motion_count += 1

        # Check if we were previously detecting motion

        if detect_motion:
            # Motion has stopped, so add the current section to the edit script
            edit_script += f"between(t,{start_time},{end_time})+"
            detect_motion = False
        else:

            if args["show_thumbnail"]:
                thumbnail2 = cv2.resize(frame, (570, 320))
                cv2.imshow('Frames being discarded 2', thumbnail2)
                cv2.waitKey(1)

        motion_ratio = round(motion_count / (motion_count + no_motion_count), 4)
        motion_percent = round(100 * motion_count / (frame_count + 1), 1)  # +1 avoids a divide-by-zero on the first frame
        # Updated the rounded versions of the averages
        
        ave_round = round(average_calc_thresh,4)
        
        print(
            f"\033[F\033[0K\u001b[41;1m** No motion\u001b[0m in frame: {frame_count} \tFrames in {filename} at threshold of {relative_threshold} with motion: {motion_count} \t "
            f"Without: {no_motion_count} \t Approx. \u001b[40;1m{motion_percent} %\u001b[0m of the frames have motion \t Detected: {sum_diff}\t\tRelative multiplier {overage}\t\tCalculated threshold is {calc_thresh}\t\tAverage of last 50 frames: {ave_round}")

    # Update the previous frame
    prev_frame = gpu_blurred

    # Increment the frame counter
    frame_count += 1

    # Update the progress bar
    pbar.update(1)

# Close the progress bar
pbar.close()

# Print the total number of frames with motion and no motion
print(f"Total number of frames with motion: {motion_count}")
print(f"Total number of frames with no motion: {no_motion_count}")

# Calculate the ratio of frames with motion to frames with no motion

motion_ratio = motion_count / (motion_count + no_motion_count)

print(f"Ratio of frames with motion to frames with no motion: {motion_ratio:.2f}")

# Check if we were still detecting motion at the end of the video
if detect_motion:
    # Add the final section to the edit script
    edit_script += f"between(t,{start_time},{end_time})"

# Add the trailing code to the edit command

edit_video = f"[0:v]select=\'{edit_script}"
edit_audio = f"[0:a]aselect=\'{edit_script}"

# Remove the extra '+' sign at the end of the edit script, if any
edit_video = edit_video.rstrip("+")
edit_audio = edit_audio.rstrip("+")

edit_video += f"\',setpts=N/FRAME_RATE/TB[v];\n"
edit_audio += f"\',asetpts=N/SR/TB[a]"

filter_complex = edit_video + edit_audio
if verbose:
    print("Printing the complex filter\n************\n")
    print(filter_complex)

# Close the video file
video.release()

# Print the Edit scripts for debugging purposes

if verbose:
    print("\n\n************\nVideo script \n")
    print(edit_video)
    print("\n\n************\nAudio script \n")
    print(edit_audio)

# Check if the edit script is not empty
if edit_video:
    # Create a temporary file to store the edit script
    with open("filter_complex.txt", "w") as f:
        f.write(filter_complex)

    # Get the path of the temporary file
    edit_video = f.name

    # Create the command for ffmpeg

    command = f"{args['ffmpeg']} -hwaccel {args['hwaccel']} -i {rawfilepath} -filter_complex_script .\\{edit_video} -vcodec {args['vcodec']} -map [v] -map [a] {args['output']} "

    print("\n\n************\nNOW Executing the following ffmpeg command:\n")
    print(command)
    print("\n\n************\n")
    # Execute the command
    subprocess.run(command)
    

So let's get started with a quick overview of the code. The ffac3_cuda.py script is written in Python and uses the OpenCV library to process the input video. The script accepts a number of command-line arguments, including the path to the input video, the path to the output video, and the path to the output edit script file. The script also allows the user to specify the hardware encoder to use, the video codec to use, and the relative motion threshold to use when creating the timelapse video.

  • -i or --input: the path to the input video file that will be processed. This argument is required.

  • -k or --show_thumbnail: a flag that tells the script to show a thumbnail image of the frame currently being processed.

  • -vv or --verbose: a flag that tells the script to print verbose output to the console.

  • -o or --output: the path to the output video file. This argument is required.

  • -s or --edit: the path to the output edit script file. This argument is required.

  • -f or --ffmpeg: the path to the ffmpeg executable. The default value is "c:\ffmpeg\bin\ffmpeg.exe".

  • -hh or --hwaccel: the hardware encoder to use. The default value is "cuda".

  • -v or --vcodec: the video codec to use. The default value is "h264".

  • -t or --threshold: the relative motion threshold for the timelapse, which should be a numeric value between 0 and 100. The default value is 15.

  • -b or --box: the size of the box blur filter applied to the video file in the GPU. This should be an integer value. The default value is 10.

Example command line usage with custom parameters to use h.265:

python .\ffac3_cuda.py -i .\MVI_0248.MP4 -o .\MVI_0248_AutoTimeLapse_t12.MP4 -s .\edit.txt -hh dxva2 -v "hevc_nvenc -rc vbr -cq 10 -qmin 6 -qmax 14" -f C:\ffmpeg\bin\ffmpeg.exe -t 12

The script begins by importing several libraries and setting up the GPU for use with OpenCV. It then parses the command-line arguments and gets the properties of the input video, such as the frame rate, resolution, and number of frames.

The script then calculates the absolute motion threshold based on the relative motion threshold specified by the user, and opens the input video using the cv2.VideoCapture function.
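The relative-to-absolute conversion is a one-liner in the script: frames are compared at a downscaled 570x320 resolution, so the absolute threshold scales the user's 1-100 value by the downscaled pixel count. Worked through for the default threshold of 15:

```python
def absolute_threshold(relative: float, width: int = 570, height: int = 320) -> float:
    """The conversion ffac3_cuda.py applies: threshold / 10 * width * height."""
    return relative / 10.0 * width * height

# Default relative threshold of 15 on the 570x320 comparison frames:
print(absolute_threshold(15.0))  # 273600.0
```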

Next, the script enters a loop that reads each frame of the input video, processes it to detect motion, and records whether that moment of the video should be kept in the final edit.

The motion detection section of the code is responsible for analyzing each frame of the input video and determining whether it should be included in the timelapse video. The script uses the OpenCV cv2.absdiff function to compare each frame to the previous frame and calculate the absolute difference between them. This difference is then compared to the absolute motion threshold that was calculated earlier in the script. If the difference is greater than the threshold, it means that there was enough motion in the frame to be included in the timelapse video. If the difference is less than the threshold, it means that there was not enough motion in the frame and it should be discarded.
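The core comparison can be sketched on the CPU with NumPy alone, without the CUDA plumbing. This is a simplified stand-in for the script's logic (tiny synthetic frames, an illustrative threshold, and no blur step):

```python
import numpy as np

def has_motion(prev_gray: np.ndarray, gray: np.ndarray, threshold: float) -> bool:
    """Sum of absolute pixel differences between consecutive grayscale
    frames, compared against an absolute threshold -- the same test
    ffac3_cuda.py performs on the GPU."""
    diff = np.abs(gray.astype(np.int32) - prev_gray.astype(np.int32))
    return float(diff.sum()) > threshold

# Two tiny synthetic 4x4 "frames": the second has a bright moving blob.
prev_frame = np.zeros((4, 4), dtype=np.uint8)
next_frame = prev_frame.copy()
next_frame[1:3, 1:3] = 200          # 4 pixels change by 200 -> sum_diff = 800

print(has_motion(prev_frame, next_frame, threshold=500))  # True
print(has_motion(prev_frame, prev_frame, threshold=500))  # False
```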

The script also uses the OpenCV cv2.cuda.GpuMat class to create a GPU memory buffer that stores the intermediate frames as they are processed. This lets the script take advantage of the GPU's parallel processing capabilities and significantly speeds up the motion detection process, although this feature is barebones and has significant room for improvement. Personally, I run two copies of the program at once from different working directories to maximize CPU and GPU usage on my PC. Your PC may be more powerful and able to run even more instances.
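The multi-instance trick is just launching several processes and waiting on them. Here's the general pattern with subprocess; the commands below are harmless stand-ins, not the real ffac3_cuda.py invocations, and the per-instance working directories are hypothetical:

```python
import subprocess
import sys

# Stand-in commands -- in practice each would be a full ffac3_cuda.py
# invocation run from its own working directory via cwd=.
commands = [
    [sys.executable, "-c", "print('worker 1 done')"],
    [sys.executable, "-c", "print('worker 2 done')"],
]

# Popen starts them all concurrently; wait() collects them afterwards.
procs = [subprocess.Popen(cmd) for cmd in commands]  # add cwd="workdir_n" per instance
exit_codes = [p.wait() for p in procs]

print(exit_codes)  # [0, 0]
```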

While this version uses CUDA, it's quite simple to convert the function calls to their CPU versions; in fact, the first version of this software I created used the CPU to process everything. Leave a comment if you would like me to release that version as well in a follow-up video.

Once the motion detection pass is complete, the script doesn't write frames out itself. Instead, it collects the start and end times of each motion section into an ffmpeg select expression, writes that to the filter_complex file, and then calls ffmpeg to cut and re-encode the output video.

[In the video: Show the section of the code that performs the motion detection, the code in action and explain all the console output]

That's a detailed explanation of the motion detection section of the ffac3_cuda.py script. As you can see, this script uses a combination of OpenCV functions and GPU acceleration to efficiently and accurately detect motion in the input video and create a timelapse video from the resulting frames.

In the next part of this tutorial, we'll be taking a closer look at the ffmpeg filter_complex file generation section of the code and explaining how it works.



[In the video: Some kind of time lapse intermission]


In this part of the tutorial, we'll be taking a detailed look at the ffmpeg filter_complex file generation section of the ffac3_cuda.py script.

The ffmpeg filter_complex file generation section of the code is responsible for creating a file that specifies the filters that will be applied to the intermediate timelapse videos to create the final timelapse video. This file is written in the ffmpeg filtergraph syntax and is used by the ffmpeg command-line tool to apply the filters to the videos.

The script builds this filtergraph incrementally during the motion detection loop. Every time a section of motion ends, it appends a between(t,start,end) term to a running string called edit_script. After the loop, that string is wrapped in a select filter for the video stream and an aselect filter for the audio stream, any stray trailing '+' is stripped, and setpts/asetpts filters are appended so the kept frames receive contiguous timestamps.

Once the full filtergraph has been assembled, the script writes it to filter_complex.txt, which is then passed to ffmpeg via the -filter_complex_script option, and the script continues with the next step in the process.
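The assembly can be reproduced in a few lines. The segment times below are made up; the string format matches what ffac3_cuda.py writes to filter_complex.txt:

```python
def build_filter_complex(segments):
    """Turn (start, end) second pairs into the select/aselect filtergraph
    that the script hands to ffmpeg."""
    expr = "+".join(f"between(t,{s},{e})" for s, e in segments)
    video = f"[0:v]select='{expr}',setpts=N/FRAME_RATE/TB[v];\n"
    audio = f"[0:a]aselect='{expr}',asetpts=N/SR/TB[a]"
    return video + audio

# Two made-up motion sections: 0-2.5 s and 4.0-6.0 s.
fc = build_filter_complex([(0, 2.5), (4.0, 6.0)])
print(fc)
```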

[Show the section of the code that generates the ffmpeg filter_complex file]

ffmpegAutoConcat.py:


import os
import subprocess
import argparse

# Parse the command-line arguments
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--inputfolder", help="path to the input folder")
parser.add_argument("-f", "--filename", help="Output file name")
parser.add_argument("-s", "--speedup", help="amount to speed up the video")

# parser.add_argument("-y", "--yes", action="store_true", help="execute timelapse.bat without prompting")
args = parser.parse_args()

def concatenate_files(folder_path):
    # Create a list of all the file paths in the specified folder
    file_paths = [os.path.join(folder_path, file) for file in os.listdir(folder_path) if file.lower().endswith(".mp4")]  # skip any non-video files in the work folder

    # Write the list of file paths to the file mylist.txt, with each line prepended with "file " and the path wrapped in single quotes
    with open("mylist.txt", "w") as file:
        file.write("\n".join(['file \'' + path + '\'' for path in file_paths]))
  
    # Use ffmpeg to concatenate the files
    concat_command = (["c:\\ffmpeg\\bin\\ffmpeg.exe", "-y", "-f", "concat", "-safe", "0", "-i", "mylist.txt", "-c:v", "copy", "-an", "temp_output_delete_me.mp4"])
    print("\n\n************\n")
    print("Starting nate's automagical automatic timelapse generator! https://YouTube.com/NathanaelNewton")
    print("NOW Executing the following ffmpeg command:\n")
    print(concat_command)
    if args.inputfolder:
       print(f"Input folder: {args.inputfolder}")
    if args.speedup:
       print(f"Speedup multiplier: {args.speedup}")
    
    print("\n\n************\n")
    
    subprocess.run(concat_command)


def create_time_lapse(input_path, output_path, speed):
    # Use ffmpeg to create a time lapse of the input video
    time_lapse_command = (["c:\\ffmpeg\\bin\\ffmpeg.exe", "-y", "-hwaccel", "dxva2", "-i", input_path, "-vf", f"setpts={1 / speed}*PTS", "-vcodec", "hevc_nvenc", "-rc", "vbr", "-cq", "10", "-qmin", "6", "-qmax", "14", "-r", "60", output_path])
    
    print("\n\n************\nNOW Executing the following ffmpeg command:\n")
    print(time_lapse_command)
    print("\n\n************\n")
    subprocess.run(time_lapse_command)

if __name__ == "__main__":
    if args.inputfolder:
      folder_path_response = args.inputfolder
    else:
      folder_path_response = input("Enter the path to the input folder: ")

    # Concatenate the files
    folder_path = folder_path_response
    concatenate_files(folder_path)
    
    # Set the default input and output paths to the current directory
    input_path = "temp_output_delete_me.mp4"
    
    # Ask the user the speedup value if not in the commandline execution options
    
    if args.speedup:
      speed = float(args.speedup)
    else:
      speed = float(input("Enter the speed at which to speed up the final timelapse (e.g. 2 for 2x speed) : "))

    if args.filename:
        print("Default Name:\n")
        print(args.filename)
        output_path = args.filename
    else:
        output_path = "timelapse_output.mp4"  # fallback so output_path is always defined

    output_path_response = input("\n\n************\nEnter the path for the output video (press Enter for default): ")
    if output_path_response:
        output_path = output_path_response

    # Create the time lapse
    create_time_lapse(input_path, output_path, speed)

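The speedup itself comes from the setpts filter string that create_time_lapse() builds: compressing every presentation timestamp by 1/speed makes playback `speed` times faster.

```python
def setpts_filter(speed: float) -> str:
    """The -vf argument used for the final speedup step."""
    return f"setpts={1 / speed}*PTS"

print(setpts_filter(5))   # setpts=0.2*PTS
print(setpts_filter(10))  # setpts=0.1*PTS
```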

Overall, the script is a powerful tool that allows you to create timelapse videos from a series of input videos with ease. It uses a combination of Python, OpenCV, and GPU acceleration to efficiently and accurately detect motion in the input videos and create the intermediate time lapse videos.

One of the benefits of using open source software like this script is that it is freely available for anyone to use and modify. This means that you can use the script to create timelapse videos for your own personal use, or you can modify the script to fit your specific needs (Provided you follow the respective license agreements.)

Another benefit of using open source software is that it is typically developed and maintained by a community of volunteers who are passionate about the software and are dedicated to making it better. This means that open source software is often of high quality and is regularly updated to fix bugs and add new features.

One thing to note: figuring out the filter_complex section was very difficult, and it seems to be insufficiently documented. However, with a little perseverance and some online research, it's possible to understand how it works and modify it to fit your specific needs.

Have you ever used ffmpeg or a similar tool to create timelapse videos? Do you have any ideas for how the scripts could be improved or modified? I know I can think of a few... feel free to share your suggestions!

Are you interested in learning more about Python, OpenCV, or other programming languages and technologies? Let me know what you think of my presentation style and what subjects you'd like to learn more about.

