Machine Learning and Neural Networks

 

Real-Time Object Tracking using YOLOv8 and DeepSORT | Vehicles Counting (Vehicles Entering & Leaving)




The video demonstrates real-time object tracking using YOLOv8 and DeepSORT to count the number of vehicles entering and leaving a highway road. The presenter provides a step-by-step guide, starting with cloning the GitHub repository, installing the required packages, setting the working directory, and examining the tracking script. The tutorial covers the use of double-ended queues (deques), pre-processing, non-max suppression, and the DeepSORT update function that assigns unique IDs and determines class names. The presenter then adds a vehicle counting feature: a line is drawn on the screen, and each time a vehicle's trail intersects that line, the count increases. The bounding-box UI is drawn by a dedicated helper function. Finally, the presenter shows that the script's current output detects intersections with the line and counts the vehicles entering and leaving the area.

  • 00:00:00 This section of the video demonstrates the implementation of object tracking using DeepSORT with YOLOv8. The tutorial covers the creation of an application to count the number of vehicles entering and leaving a highway road. The presenter provides a step-by-step guide to follow along in the PyCharm IDE and explains each step in detail. The first step is to clone the GitHub repository and install the required packages. The video also includes a section on setting the working directory and examining the tracking script.

  • 00:05:00 In this section of the video, the speaker explains how to set up the tracking.py file and download and place the DeepSORT files from Google Drive into the YOLOv8 folder. The DeepSORT files are too large to be uploaded to GitHub, so they must be downloaded from the Google Drive link provided. The video demonstrates how to extract the downloaded files and place them in the appropriate folder. The speaker also shows how to download a sample video and run the script. The script may take some time to run since they are testing it on a CPU, but the speaker explains the code in more detail while the script runs.

  • 00:10:00 In this section, the speaker explains the use of a double-ended queue (deque) and why it is preferred over a list when insert and pop operations are needed at both ends. The speaker uses a deque to store the bottom-center coordinates of each bounding box and explains how a track's points are removed once the car disappears from the frame. The maximum length of the deque is set to 64; once it is full, appending a new point pushes the oldest one out. The speaker also defines the color palette and the UI that draws a stylish border around the detected objects. A minimal sketch of this trail buffer follows below.
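A minimal sketch of that trail buffer in Python; the names data_deque and update_trail are illustrative, not the video's exact identifiers:

```python
from collections import deque

MAX_TRAIL = 64
data_deque = {}  # one fixed-length buffer of trail points per track ID

def update_trail(track_id, x1, y1, x2, y2):
    """Append the bottom-center of a bounding box to that track's deque."""
    if track_id not in data_deque:
        # maxlen=64: once the deque is full, adding a new point on the left
        # silently drops the oldest point on the right.
        data_deque[track_id] = deque(maxlen=MAX_TRAIL)
    center = (int((x1 + x2) / 2), int(y2))  # bottom-center of the box
    data_deque[track_id].appendleft(center)
```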

  • 00:15:00 In this section, the video breaks down the code for creating a user interface (UI) with a rounded rectangle and text label, as well as how the program appends detected objects to a deque in order to generate trails (sketched below). The code also includes pre-processing and non-max suppression to resize frames and filter detections. The DeepSORT update function is called to assign unique IDs and determine class names, while self.model.names retrieves the class labels of the COCO dataset used to draw the bounding boxes.
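A sketch of how such a trail can be rendered from the deque with OpenCV; the tapering-thickness formula is a common choice and an assumption here, not necessarily the video's exact code:

```python
import cv2
import numpy as np

def draw_trail(frame, trail, color=(0, 255, 0)):
    """Connect consecutive deque points; older segments get thinner lines."""
    for i in range(1, len(trail)):
        if trail[i - 1] is None or trail[i] is None:
            continue
        thickness = max(1, int(np.sqrt(64 / float(i + 1)) * 1.5))
        cv2.line(frame, trail[i - 1], trail[i], color, thickness)
    return frame
```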

  • 00:20:00 In this section, the speaker is explaining their code for object detection using YOLOv8 and DeepSORT, providing an overview of the script and how it works. They demonstrate the current output of the script and note that they are using a nano model, so the object detection is not yet perfect. The speaker also mentions that they will be creating a series of computer vision lectures for Europa and encourages viewers to subscribe for more content. Finally, they show how their script can detect when a trail intersects with a line, indicating when an object has passed and allowing for the implementation of vehicle counting for entering and exiting.

  • 00:25:00 In this section, the presenter explains how the vehicle counting feature was added to the real-time object tracking system using YOLOv8 and DeepSORT. The system draws a line on the screen, and each time a vehicle's trail intersects this line, the count increases. Whether the "entering" or "leaving" count is incremented depends on the vehicle's direction of movement. The presenter also shows the function that lays out the count display in the UI. A sketch of the intersection test follows below.
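A hedged sketch of that trail-line intersection test; the counting-line coordinates and the up/down direction convention are assumptions:

```python
LINE = ((100, 500), (1050, 500))  # illustrative counting line, (x, y) endpoints

def ccw(a, b, c):
    """True if points a, b, c are in counter-clockwise order."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """Standard CCW test: segment p1-p2 crosses segment q1-q2."""
    return ccw(p1, q1, q2) != ccw(p2, q1, q2) and ccw(p1, p2, q1) != ccw(p1, p2, q2)

count_in = count_out = 0

def update_counts(prev_point, curr_point):
    """Bump a counter when a track's newest trail segment crosses the line."""
    global count_in, count_out
    if segments_intersect(prev_point, curr_point, *LINE):
        if curr_point[1] < prev_point[1]:  # moving up the frame -> entering
            count_in += 1
        else:                              # moving down the frame -> leaving
            count_out += 1
```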
  • 2023.01.11
  • www.youtube.com
 

Real-Time Object Segmentation and Tracking using YOLOv8 | Vehicles Counting (Entering and Leaving)




This video tutorial focuses on implementing real-time object segmentation and tracking using YOLOv8 and the DeepSORT algorithm. Specifically, it demonstrates how to count and distinguish between the different subtypes of vehicles entering and leaving a given area. The tutorial covers various aspects, including speed estimation, direction measurement, and accurate segmentation and tracking of each vehicle with its ID and trail. The presenter also provides the steps needed to implement this in various IDEs and offers the final code to their Patreon supporters.

  • 00:00:00 In this section of the video, the presenter explains how to implement object segmentation with tracking using YOLOv8 and the DeepSORT algorithm. The video demonstrates vehicle segmentation and tracking, including counting the number of vehicles that enter and leave, as well as the types of vehicles. The implementation also includes calculating each vehicle's speed, assigning it a unique ID, and drawing its trail. The presenter lists the steps to implement this in various IDEs, including Anaconda Navigator, PyCharm, Spyder, and Visual Studio. The required dependencies and libraries need to be installed to avoid errors while running the predict.py script.

  • 00:05:00 In this section, the video tutorial focuses on implementing segmentation and tracking using YOLOv8 and DeepSORT. The DeepSORT files are downloaded from Google Drive, and a sample video is used for testing. The script is modified to implement vehicle counting for entering and leaving, with dictionaries used to store the count of vehicles by subtype: one dictionary stores the number of vehicles leaving by subtype, while a second stores the number entering (see the sketch below). The modified script tracks and counts vehicles based on their subtype and whether they enter or leave the frame.
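A minimal sketch of those per-subtype counters; the dictionary and function names are illustrative:

```python
object_counter = {}   # vehicles entering, keyed by class name
object_counter1 = {}  # vehicles leaving, keyed by class name

def count_vehicle(class_name, entering):
    counter = object_counter if entering else object_counter1
    counter[class_name] = counter.get(class_name, 0) + 1

count_vehicle("car", entering=True)
count_vehicle("truck", entering=False)
print(object_counter, object_counter1)  # {'car': 1} {'truck': 1}
```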

  • 00:10:00 In this section, the video discusses how to create a vehicle counting system using YOLOv8, with an object counter that can distinguish between subtypes of vehicles such as cars, big trucks, and motorbikes entering and leaving a given area. The video explains the use of a speed estimation function, and how to create a line that increments the counter when a vehicle passes it. Additionally, the video defines the constant PPM (pixels per meter), which relates pixel distances in the image to real-world distances.

  • 00:15:00 In this section, the speaker discusses dynamic distance measurement based on the distance between an object and the camera. The distance in meters equals the distance in pixels divided by PPM (pixels per meter); the speed is then obtained by multiplying by the frame rate and by 3.6 to convert meters per second into kilometers per hour. The speaker proceeds to write a function that returns the speed based on the distance and time, creates two more helper functions, including a CCW (counter-clockwise) orientation test used for line-intersection checks, and writes a get_direction function that infers an object's direction from its movement along the y-axis. A sketch of the speed calculation follows below.
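A sketch of that speed calculation; the PPM and FPS values are placeholders, not the video's calibration:

```python
import math

PPM = 8   # pixels per meter -- assumed calibration constant
FPS = 15  # assumed frame rate of the processed video

def estimate_speed(p1, p2):
    """Speed in km/h from two consecutive trail points of one track.

    distance_m = distance_px / PPM, meters per frame -> m/s via FPS,
    then m/s -> km/h via the 3.6 factor mentioned in the video.
    """
    distance_px = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return (distance_px / PPM) * FPS * 3.6
```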

  • 00:20:00 In this section of the video, the presenter reviews their code for real-time object segmentation and tracking using YOLOv8. They modify the script, adding a direction check and walking through the remaining code. The presenter also discusses how to calculate each object's speed and append it to the ID list. They then copy and paste an intersection-test snippet and continue making adjustments to fix errors in the code.

  • 00:25:00 In this section of the video, the presenter adds counter code to count the number of vehicles entering and leaving. The counter displays the total count of vehicles and the sub-counts, i.e., the number of cars, trucks, and motorcycles entering. The presenter also suggests adding a display for the number of vehicles exiting. An error occurs in the draw_boxes function, and the presenter fixes it by declaring global variables for the up and down counts. The script then runs successfully, and the output demo video shows the speed estimation for each vehicle and the total vehicle count with sub-counts for each type of vehicle that entered.

  • 00:30:00 In this section, the speaker discusses the results of their real-time object segmentation and tracking using YOLOv8 for vehicle counting. They have achieved precise results and are even able to estimate the speed of each vehicle, along with its tracking ID and trail. Moreover, the detection and segmentation of each vehicle are performed accurately, and the total count of vehicles is also shown. They have made this code available exclusively to their Patreon supporters, who get access to all the code and projects they share. Finally, they invite viewers to test the demo videos, implement the code on their side, or join the Patreon membership.
  • 2023.01.29
  • www.youtube.com
 

Object Tracking with YOLOv8: Vehicles Tracking, Counting (Entering & Leaving) and Speed Estimation




The video tutorial describes how to implement object tracking, vehicle counting, and speed estimation using YOLOv8 and DeepSORT. The presenter shares a link to the GitHub repo containing the code and walks through cloning the repository, downloading the DeepSORT files, importing the relevant libraries, and defining a data deque to track objects. They also explain how to determine vehicle direction and increment the count accordingly. Additionally, the presenter shows how to estimate vehicle speed by applying the Euclidean distance formula to the X and Y coordinates of tracked objects, and how to set up the space for the count display. The final output of the script shows object counts and speeds, indicating that the implementation was successful.

  • 00:00:00 In this section of the tutorial, the presenter explains how to implement object detection, tracking, and speed estimation using YOLOv8. To begin, the presenter provides a link to the YOLOv8 object tracking with DeepSORT GitHub repo, which contains a Google Colab file as well as step-by-step instructions for running the code on Windows. The presenter then goes through the necessary steps to clone the repository, set the current directory, and install all necessary libraries. After that, the presenter navigates to the detect folder and downloads the DeepSORT files from the provided Google Drive link.

  • 00:05:00 In this section, the speaker explains the steps to download and implement object tracking with YOLOv8 using DeepSORT. They guide viewers to unzip the downloaded DeepSORT folder and place it in the project's main folder. They also explain the library imports used in the predict.py file and describe the double-ended queue (deque), keyed by unique ID, used to track objects entering and leaving the frame. The speaker also mentions downloading the sample video from Google Drive and waiting a few seconds for the output, as they are using a CPU rather than a GPU for processing.

  • 00:10:00 In this section, the speaker explains various helper functions used with the DeepSORT tracker. These include the deque's append method, used to add values at the right end, and appendleft, used to insert values at the left end. The speaker also discusses the compute_color_for_labels function, which assigns different colors to objects based on their class (sketched below). Additionally, the draw_border and draw_boxes functions are explained in detail: they create a UI with a rounded rectangle that displays the object's unique ID and name, plus a trail line drawn with cv2.line.
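An illustrative version of that color-assignment helper, following the palette-based approach common in DeepSORT tutorials; the constants are assumptions:

```python
PALETTE = (2 ** 11 - 1, 2 ** 15 - 1, 2 ** 20 - 1)

def compute_color_for_labels(label):
    """Map a class ID to a stable BGR color so each class keeps its color."""
    return tuple(int((p * (label ** 2 - label + 1)) % 255) for p in PALETTE)

print(compute_color_for_labels(2))  # e.g. the color used for the 'car' class
```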

  • 00:15:00 In this section, the speaker explains how they create the trail line and use the data deque to track objects. They initialize a deque per object and append the bottom-center coordinates of each detected object to it; when an object leaves the frame, its points are removed from the data deque. They also create a DetectionPredictor class, use the DeepSORT algorithm to assign unique IDs to each object, and retrieve their names. They call a draw_boxes function and run the predictions. Additionally, they implement vehicle counting by downloading and pasting a file and defining the direction of movement as north, south, east, or west (see the sketch below); if an object enters or leaves in a specific direction, they increment the corresponding count.
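A sketch of that direction check; image y grows downward, so a larger y means the object moved south. Thresholds and tie-breaking are omitted:

```python
def get_direction(prev_point, curr_point):
    """Classify movement as North/South plus East/West from two trail points."""
    direction = "South" if curr_point[1] > prev_point[1] else "North"
    direction += "East" if curr_point[0] > prev_point[0] else "West"
    return direction

# e.g. a car moving down and to the left across the frame:
print(get_direction((400, 200), (380, 260)))  # -> "SouthWest"
```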

  • 00:20:00 In this section, the speaker explains how they implemented object counting in their YOLOv8 object tracking script. They defined a direction for objects entering or leaving based on their movement across designated lines. They also showed the count of objects entering or leaving and set the space where the count needs to be displayed. The speaker then added speed estimation by implementing the Euclidean distance formula, using X and Y coordinates of tracked objects to calculate their speeds. They used a pixel-per-meter value to make the calculation, which can be made dynamic based on the camera's position. The output of the script showed object counts and speeds, indicating that the script was working correctly.

  • 00:25:00 In this section, the speaker explains the formula used to calculate vehicle speed in kilometers per hour, which involves dividing the distance in pixels by the pixels-per-meter value (PPM) and multiplying the result by a time constant. When a vehicle crosses the user-defined counting line, the script increments the vehicle count and determines whether the vehicle is entering or leaving the area based on its direction of travel. Ultimately, the speed and vehicle-count estimates are appended to the output, and viewers can implement the project by following the tutorial's steps.
  • 2023.02.09
  • www.youtube.com
 

Automatic License Plate Recognition using YOLOv8 and EasyOCR (Images & Videos)




In this YouTube video, the presenter explains how to implement automatic license plate recognition using YOLOV8 and EasyOCR. They guide viewers through the implementation process using a Google Colab notebook and a GitHub repository, providing step-by-step instructions and explaining how to install dependencies and download the necessary dataset. The presenter demonstrates the model's success rate and the validation process, and also explains how to use EasyOCR to read license plate numbers. They walk through the final steps of running the script and encounter some errors that they fix, resulting in impressive results. Although the license plate recognition script will only be provided on the presenter's GitHub repo for Patreon supporters, viewers can learn about the changes made to the predict.py file for similar results.

  • 00:00:00 In this section of the video, the presenter demonstrates how to implement license plate detection and recognition using YOLOV8 and EasyOCR. The presenter uses a Google Colab notebook and a GitHub repository to guide viewers through the implementation process. The notebook contains step-by-step instructions for running the code, and the GitHub repository provides access to multiple exclusive projects for Patreon supporters who contribute five dollars a month. The presenter also explains how to install the necessary dependencies and download the required dataset from Roboflow to train a custom model for license plate detection. Finally, the presenter shows the confusion matrix that reveals how well the model handled different classes, with an 84% success rate for detecting license plates.

  • 00:05:00 In this section, the speaker discusses the results of the model on the validation set, which show the loss decreasing and the mean average precision and recall increasing as the number of epochs grows. The speaker also demonstrates how to download the saved model from Google Drive and validate it on a custom video, achieving a mean average precision of 0.926 at IoU 0.5. However, the focus of this section is the implementation of EasyOCR for reading the license plate numbers from the detected plates, and the adjustments made to the predict.py file to include the OCR reader and image coordinates.

  • 00:10:00 In this section, we see an explanation of how to implement automatic license plate recognition using YOLOv8 and EasyOCR. The video discusses cropping the license plate from the frame and converting it to grayscale to make the text easier to read. They show how to use EasyOCR to read the text from the grayscale plate image by calling reader.readtext(gray), as sketched below. They finally test the script to see if any errors occur.
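A minimal sketch of that OCR step, assuming the EasyOCR API; the crop coordinates come from the YOLOv8 plate detection, and the confidence cutoff is an assumption:

```python
import cv2
import easyocr

reader = easyocr.Reader(['en'])  # initialize once; downloads models on first use

def read_plate(frame, x1, y1, x2, y2):
    """Crop the detected plate, convert to grayscale, and run EasyOCR."""
    crop = frame[int(y1):int(y2), int(x1):int(x2)]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    results = reader.readtext(gray)  # list of (bbox, text, confidence)
    return " ".join(text for _, text, conf in results if conf > 0.2)
```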

  • 00:15:00 In this section, the speaker walks through the final steps of running the license plate recognition script using the YOLOv8 and EasyOCR models. They encounter a couple of errors in the code but manage to fix them and successfully run the script. The output video demonstrates impressive results, with the model detecting license plates and reading their text using EasyOCR. The license plate recognition script will be provided on the speaker's GitHub repo for Patreon supporters, but viewers can still learn about the changes made to the predict.py file in the video to achieve similar results.
  • 2023.02.04
  • www.youtube.com
 

Real-Time Object Detection and Tracking using YOLOv8 on Custom Dataset: Complete Tutorial




In this video tutorial, the presenter introduces a custom dataset containing images of cars, trucks, motorcycles, pickups, planes, and camping cars, which is used to demonstrate the implementation of YOLOv8 with detection and tracking. They explain the importance of a balanced dataset and provide step-by-step instructions for navigating the GitHub repository, setting up the required environment, and implementing object tracking using the DeepSORT algorithm. The presenter also discusses the confusion matrix and the importance of training and validation losses while testing the accuracy of the model by running inference on a demo video downloaded from Google Drive. They conclude by sharing the Colab notebook file for those interested.

  • 00:00:00 In this section of the video tutorial, the presenter introduces a publicly available multi-class dataset that contains around 4,680 images of cars, trucks, motorcycles, pickups, planes, and camping cars. The dataset is not balanced: cars are overrepresented with around 8,389 annotations, while the other classes have far fewer. The dataset is still used for this tutorial, but when implementing any project with a publicly available dataset, it is important to make sure the dataset is balanced. The video also introduces the GitHub repository used for the project and provides step-by-step instructions for implementing YOLOv8 with detection and tracking on any custom dataset. The instructions include cloning the repository, installing dependencies, implementing object tracking using DeepSORT, and downloading a sample video for testing.

  • 00:05:00 In this section of the video, the presenter explains the libraries used for displaying images and training information, and demonstrates how to set up the required environment for YOLOv8 using Google Colab. The presenter clones the GitHub repository for YOLOv8, installs the necessary libraries, and navigates to the detection folder. The presenter then shows how to download the required custom dataset from Roboflow and unzip it to obtain the train, test, and validation folders. A sketch of the training step follows below.
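The summary doesn't show the exact training command; below is a minimal sketch using the documented Ultralytics Python API, with placeholder paths and hyperparameters:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from pretrained detection weights
model.train(
    data="/content/dataset/data.yaml",  # data.yaml from the Roboflow export
    epochs=100,
    imgsz=640,
)
metrics = model.val()  # evaluate the best weights on the validation split
```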

  • 00:10:00 In this section of the video, the presenter discusses implementing object tracking using the DeepSORT algorithm. They download the DeepSORT files and unzip them before showing the confusion matrix, a chart showing how well the model handles each class. They explain that the confusion matrix indicates the model detects camping cars correctly 67% of the time but sometimes classifies them as ordinary cars. They also discuss the importance of training and validation losses and check the model's predictions on validation batches. Finally, they download the weights for the custom model from Google Drive and validate it, showing that the mean average precision is high at IoU 0.5.

  • 00:15:00 In this section, the presenter tests the model's accuracy by running inference with the trained model on a demo video downloaded from their Google Drive. The video shows cars and trucks on a highway, and the results show that the model performs well, with a unique ID assigned to each object and tracking working as well. The Colab file used in this tutorial is available on their GitHub repo. Another video is also tested, and the results are similarly good, with some cases needing improvement. The presenter notes that they have implemented object detection and tracking using YOLOv8 on a custom dataset and shares the Colab notebook file for those interested.
  • 2023.01.22
  • www.youtube.com
 

Real Time Object Segmentation and Tracking using YOLOv8 on Custom Dataset: Complete Tutorial




This video tutorial is a comprehensive guide to using YOLOv8 for real-time object segmentation and tracking on custom datasets. The tutorial goes through the entire process, including importing datasets, training custom models using YOLOv8 and the DeepSORT algorithm, and testing the models on demo videos. The speaker provides the code and libraries required for the implementation and showcases the results of the model's predictions. They also explain the confusion matrix and provide links to access the output videos and Colab files on GitHub. Overall, this tutorial is a great resource for anyone looking to learn about object segmentation and tracking using YOLOv8.

  • 00:00:00 In this section, the video tutorial covers how to use YOLOv8 for segmentation and tracking on custom datasets. The tutorial goes through the implementation step by step and uses a repository to help with the implementation. The presenter explains how to import the required libraries and dependencies, as well as how to access and download a drone traffic dataset from Roboflow. The code required for importing the dataset is also provided, along with instructions on how to run the code successfully. It is emphasized that it is important to watch the complete video to fully understand the core concepts.

  • 00:05:00 In this section, the speaker explains how to train custom models for object tracking using YOLOv8 and the DeepSORT algorithm. They walk through downloading the dataset from Roboflow into a Google Colab notebook, downloading the DeepSORT files, and training the custom models to detect different classes of objects, including bicycle, bus, car, and lorry (a training sketch follows below). The speaker shares the results of the model's predictions on the validation batch, saves the model weights to Google Drive, and downloads a demo video to test the model's performance. They explain the confusion matrix and how it reveals the model's accuracy for different classes of objects. Overall, the video provides a complete tutorial on real-time object segmentation and tracking using YOLOv8 on custom datasets.
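A sketch of the segmentation training step under the same assumptions as earlier (Ultralytics Python API; placeholder dataset path, model size, and epoch count):

```python
from ultralytics import YOLO

# Fine-tune the pretrained segmentation checkpoint on the drone-traffic
# export, whose data.yaml lists the bicycle/bus/car/lorry classes.
model = YOLO("yolov8n-seg.pt")
model.train(data="/content/drone-traffic/data.yaml", epochs=50, imgsz=640)
```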

  • 00:10:00 In this section of the video, the presenter tests the model on two demo videos and showcases the results. The model is able to accurately detect and track objects, giving each object a unique ID and creating trails to follow its movement. The presenter provides a link to access the output video and Colab file on GitHub.

  • 00:15:00 In this section, the speaker presents the results of their object segmentation and tracking project using YOLOv8 on a custom dataset. The speaker shows their output videos and mentions that the detections are very good, and viewers can download the files to see the results themselves. The speaker also reminds viewers to subscribe to the channel for future videos on new topics.
  • 2023.01.25
  • www.youtube.com
 

Road Signs and Traffic Lights Detection and Color Recognition using YOLOv8




This YouTube tutorial showcases the use of YOLOv8 for road sign detection and color recognition. The presenter introduces the dataset, which contains 17 different classes of road signs with a balanced distribution of images. The YOLOv8 model is trained and fine-tuned over 100 epochs, resulting in good mean average precision scores at IoU 0.5 and IoU 0.5-0.95. The presenter demonstrates how to interpret the confusion matrix and validate the model on the validation dataset. The model is then tested on two demo videos, both showing accurate detection results. Overall, YOLOv8 performs well for detecting road signs and traffic lights.

  • 00:00:00 In this section of the video tutorial, the presenter introduces the road signs dataset that will be used for training the YOLOv8 model on road sign detection. The dataset contains 17 different classes of road signs, with a total of 2,093 images for training, validation, and testing purposes. The dataset is balanced, meaning that all classes have an almost equal distribution of images, except for green lights, which have a slightly higher number of images. The presenter also shows how to import the dataset from Roboflow into a Google Colab notebook and install the necessary libraries for the implementation of YOLOv8, including ultralytics, glob for managing file paths, and the Image and display utilities for showing input and output images. The presenter also provides options for installing YOLOv8 either by cloning the GitHub repository or by installing the ultralytics package via pip. A sketch of the dataset import follows below.
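A hedged sketch of that dataset import, assuming the Roboflow Python package; the API key, workspace, project, and version are placeholders:

```python
# pip install ultralytics roboflow
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("road-signs")
dataset = project.version(1).download("yolov8")  # writes train/valid/test + data.yaml
print(dataset.location)
```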

  • 00:05:00 In this section, the training metrics are presented as indicators of the trained YOLOv8 model's performance on the road sign detection and color recognition task. The model has been trained on a large dataset and fine-tuned over 100 epochs, resulting in a mean average precision of 95.8% at IoU 0.5 and 81.3% at IoU 0.5-0.95. The best weights, with the highest mean average precision, have been saved as "best.pt". Overall, the results are quite good for detecting 17 different classes of road signs and traffic lights.

  • 00:10:00 In this section of the video, the presenter explains the different files saved in the training folder after running the YOLOv8 model for road signs and traffic lights detection and color recognition. These files include the confusion matrix, F1 curve, precision curve, recall curve, and model performance for each epoch. They also demonstrate how to interpret the confusion matrix and how to validate the model on the validation dataset using the best weights of the model. Finally, they suggest that training the model for a longer duration could further improve the mean average precision of the results.

  • 00:15:00 In this section of the video, the presenter validates the model on the validation dataset images and gets an impressive mean average precision score. The model is then tested on two demo videos, where it successfully detects signs such as "do not turn left," "parking," "red light," and more. The demo videos are downloaded from the Pexels site and then uploaded to Google Drive. The output videos are displayed and shown to have accurate detection results. Overall, the model performs well in detecting road signs and traffic lights using YOLOv8.
  • 2023.03.02
  • www.youtube.com
 

Potholes Detection and Segmentation using YOLOv8 (Images & Videos) | Custom Dataset | Complete Guide




This video demonstrates how to create a custom dataset for pothole detection and segmentation using YOLOv8. The presenter shows the steps for cloning and annotating image data and recommends using Google Colab to train the model. The necessary dependencies for YOLOv8 are also discussed, as well as setting the dataset location and training the model. The model achieved a mean average precision of 0.532 for detection and 0.531 for segmentation, and performed well in detecting potholes in videos. The presenter concludes the video after validating the custom model and achieving good results.

  • 00:00:00 In this section of the video, the author demonstrates how to create a custom dataset for pothole detection and segmentation using YOLOv8. The author uses Roboflow to create their own workspace and project, and explains the importance of choosing a public plan for the workspace to be able to export the dataset to Google Colab. The author also shows how to import videos or images, as well as how to clone publicly available datasets to supplement their own. Finally, the author shows how to clone images and annotations, ultimately resulting in a dataset of 50 images for pothole detection and segmentation.

  • 00:05:00 In this section of the video, the presenter demonstrates how to clone and annotate image data for pothole detection and segmentation using YOLOv8. They start by cloning the remaining 46 images from the original dataset, bringing the total to 96 images. The annotation process involves connecting dots in a polygon shape to indicate the location of the pothole on each image. The presenter then demonstrates how to save and assign the annotations to each image. The process is time-consuming, but it can be done easily by following the steps outlined in the video.

  • 00:10:00 In this section of the video, the presenter discusses the process of annotating a dataset for pothole detection and segmentation. He demonstrates how to generate a dataset version using the annotation tool and then proceeds to train the model using the YOLOv8 algorithm on Google Colab. The presenter also mentions that one can train the model on Roboflow with a single click and without any coding, but he recommends using Google Colab to learn new things. The video shows how to import the required libraries and clone a GitHub repo before setting the working directory and installing dependencies.

  • 00:15:00 In this section of the video, the presenter provides instructions for installing the necessary dependencies for YOLOv8 segmentation, emphasizing that it's important to do so before training to avoid library issues later on. The presenter also walks through importing the dataset from Roboflow into the Colab environment and downloading it, as well as setting the dataset location and training the model. The presenter also acknowledges and fixes an error in the predict.py file that had been adjusted for tracking.

  • 00:20:00 In this section, the trained model is shown to detect potholes with a mean average precision of 0.532 at IoU 0.5 and 0.218 when IoU varies from 0.5 to 0.95. The model was tested on a sample video and performed well in detecting potholes. The confusion matrix displayed shows the distribution of the data among the different classes. Overall, the model works fine in detecting potholes.

  • 00:25:00 In this section of the video, the presenter attempts to validate the custom model for pothole detection and segmentation using YOLOv8. They run into an issue where the model fails to detect potholes about 50% of the time, resulting in a blank screen. The presenter then adds the best.pt weights path and is able to rectify the issue. They proceed to display the prediction on the validation batch and achieve a mean average precision of 0.538 for detection and 0.531 for segmentation (a validation sketch follows below). The model is tested on multiple demo videos and works well, and the presenter concludes the video.
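A sketch of that validation run with the saved best weights, assuming the Ultralytics Python API; both paths are placeholders:

```python
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")        # best checkpoint
metrics = model.val(data="/content/potholes/data.yaml")   # mAP on the val split
```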
  • 2023.02.07
  • www.youtube.com
 

YOLOv8 Custom Object Detection and Tracking | Ships Detection | Complete Tutorial




The YouTube tutorial covers the implementation of YOLOv8 with DeepSORT object tracking on a custom ship detection dataset. The video discusses how to download the dataset from Roboflow, set up a project in Expense ID, and train the model in Google Colab. The training script was run for 70 epochs and resulted in a mean average precision of 0.968 at IoU 0.5. The presenter analyzes the loss and average-precision graphs to show that training for more epochs would yield better results. They then demonstrate how to validate the model on a validation dataset and show the mean average precision on the validation images. Finally, they show some demo videos of the model in action, including an example of a false prediction.

  • 00:00:00 In this section, the video tutorial covers the dataset for the custom object detection and tracking project, which includes around 794 images of ships captured by drones, with only one class, named "boat" or "ship." The tutorial then shows how to download the dataset from Roboflow and export it into Google Colab or Expense ID, which offers 25 free credits and a Tesla T4 or V100 GPU for training machine learning models. Finally, the tutorial discusses setting up a project in Expense ID and opening a Jupyter notebook with the Expense server options for training the YOLOv8 object detection model on the dataset.

  • 00:05:00 In this section of the YouTube tutorial, the instructor explains the project settings and provides a GitHub repo link to implement YOLOv8 with DeepSORT object tracking on a custom ship detection dataset. The video shows how to clone the GitHub repo into a Google Colab notebook and install the necessary libraries using pip. The notebook script is explained in detail, including importing the image library, setting the current working directory, and downloading pre-trained models. The instructor emphasizes the importance of running the script cells in the correct order and installing all required libraries before running the training, testing, or validation scripts to avoid errors.

  • 00:10:00 In this section, the speaker explains how to download the dataset of ship images from Roboflow and implement object tracking using DeepSORT. They then train a YOLOv8 model on the ship dataset, setting the dataset location and default image size before running the training script for 70 epochs. The results show a good mean average precision of 0.968 at IoU 0.5, meaning the model correctly identified 96.8% of the ships in the images. The weights file is saved in the train folder, and the confusion matrix shows that the model detected the presence of a ship 96% of the time, with a 4% failure rate. The training loss continuously decreases, indicating that training for more epochs would yield better results.

  • 00:15:00 In this section of the video, the presenter analyzes the loss and average-precision graphs to show that the training loss is continually decreasing and the mean average precision is continuously increasing as the epochs increase. They then demonstrate how to validate the model on a validation dataset, showing a mean average precision of 96.4% at IoU 0.5 and 71.6% at IoU 0.5-0.95 on the validation images. The presenter then runs the predict.py script, passing the best model weights, to test the model on multiple videos, and demonstrates how the model can detect boats and assign unique IDs to each object (a sketch of the inference step follows below). Finally, they show an example of a false prediction made by the model.
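A sketch of the inference step, using the plain Ultralytics API rather than the repo's tracking-enabled predict.py; paths are placeholders:

```python
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")
model.predict(source="ships_demo.mp4", save=True, conf=0.25)  # saves an annotated video
```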
  • 2023.03.09
  • www.youtube.com
 

YOLOv8 and VGG16 for Face, Gender Detection, Face Counting, and People Tracking | Custom Dataset




The video tutorial explains the process of face detection, gender classification, face counting, and people tracking using YOLOv8 and VGG16 models. The tutorial covers various aspects of implementing and training these models, including data preparation, data augmentation, fine-tuning the pre-trained VGG16 model, using transfer learning, and training the YOLOv8 model for face detection. The presenter also explains how to mount a Google Drive in a Google Colab notebook, access and convert image datasets, download required libraries, and integrate object tracking using deepsort. The tutorial provides detailed code explanations for drawing bounding boxes around detected objects, integrating the gender classification model, counting the number of faces in a frame, and assigning each detected face a unique ID using deepsort.update.

  • 00:00:00 In this section of the video tutorial, the workflow for face detection with gender classification and face counting with tracking using YOLOv8 and VGG16 is explained. The first step is to prepare the dataset with images of men and women faces, to train the VGG16 model for gender detection, followed by training the YOLOv8 model for face detection. With the face detection from YOLOv8, gender classification is done using the trained VGG16 model. Object tracking is then implemented using Deepsort, assigning a unique ID to each detected face or person. The notebook is divided into nine steps, including importing required libraries, mounting Google Drive, loading the dataset, converting images and labels to arrays, applying data augmentation, fine-tuning the VGG16 model on gender classification data, plotting training and validation loss, and testing with a sample image.

  • 00:05:00 In this section, the speaker discusses various libraries and functions that can be used for converting images to NumPy arrays, and vice versa. They also explain the use of the to_categorical utility and the difference between the sequential and functional approaches to arranging layers in a neural network. The Flatten layer is used for converting multi-dimensional inputs into one dimension, while the Dense layer is used for defining the output layer size. Additionally, they discuss the use of transfer learning with the VGG16 model and the imports of train_test_split, NumPy, and the OS library. Finally, they mention the use of the glob library for accessing all files in a folder and the random library for shuffling image datasets.

  • 00:10:00 In this section, the video explains how to mount a Google Drive in a Google Colab notebook to access the dataset, which is uploaded in zip format. The dataset includes images of men's and women's faces, and the video shows how to unzip and access the folder containing these images. Using the glob library, the video collects all the image files in the dataset folders and converts them into array format, with labels indicating whether each image is a man's or a woman's face (a sketch follows below). The video shows a sample image and explains how the image-files variable contains all the image file paths of the men and women folders, which can be read using cv2.imread.
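A minimal sketch of the Drive mount and glob-based file listing; the folder layout is illustrative:

```python
from google.colab import drive
import glob

drive.mount('/content/drive')

# Collect every image path in each class folder (illustrative paths).
men_files = glob.glob('/content/dataset/men/*.jpg')
women_files = glob.glob('/content/dataset/women/*.jpg')
print(len(men_files), len(women_files))
```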

  • 00:15:00 In this section, the speaker explains how they prepared their dataset for face and gender detection. They created 'men' and 'women' folders, resized the images within them, and converted them into arrays stored in a data list. They appended the corresponding label values to a label list, with 1 for women and 0 for men. The data and label lists were then converted into NumPy arrays. The speaker also demonstrates data augmentation using an image data generator, producing multiple images from a single image by applying various transformations. They then fine-tuned a pre-trained VGG16 model on their gender classification dataset, using softmax activation on the output layer, whose size was set to 2 to classify men or women. A condensed sketch of this setup follows below.
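A condensed sketch of that transfer-learning setup, assuming Keras; the input size matches the 100x100 resize mentioned below, and the augmentation parameters are illustrative:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

base = VGG16(weights="imagenet", include_top=False, input_shape=(100, 100, 3))
for layer in base.layers:
    layer.trainable = False  # freeze the convolutional base

x = Flatten()(base.output)               # multi-dimensional features -> 1-D
out = Dense(2, activation="softmax")(x)  # 2 outputs: man / woman
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Data augmentation: many variants from each face image
aug = ImageDataGenerator(rotation_range=20, zoom_range=0.15,
                         horizontal_flip=True, fill_mode="nearest")
```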

  • 00:20:00 In this section of the video tutorial, the speaker demonstrates the VGG16 model for face and gender detection and shows how to train it on the gender classification dataset. The model is saved in .h5 format, and its accuracy, validation accuracy, and loss curves are plotted. Using cv2.resize, each image is resized to 100x100 and converted to an array, and the model predicts whether the image contains a man or a woman. In the next part of the tutorial, the YOLOv8 model will be trained on a face dataset to detect faces and assign a unique ID for tracking. The speaker also mentions that face counting will be implemented with a simple addition to the predict.py file. Overall, this part of the tutorial is divided into seven steps.

  • 00:25:00 In this section, the presenter introduces the GitHub repository they will use to implement object tracking using DeepSORT, as well as the YOLOv8 model for face detection. They discuss how they will detect faces and perform gender classification before integrating the DeepSORT object-tracking code to assign each person a unique ID. The presenter then sets their current directory to the cloned repository and installs all required libraries and dependencies needed for the script. They also try to download the dataset from Roboflow into their Google Colab notebook, but encounter issues due to having a private account.

  • 00:30:00 In this section, the presenter explains that they will be using the weights of the pre-trained YOLOv8 model for face detection. The dataset is downloaded and saved to their Google Drive account, and they have already trained a YOLOv8 model for 80 epochs for face detection. The trained model has been saved, and the weights have been downloaded into the Google Colab notebook. The DeepSORT files are also downloaded into the notebook, as object tracking will be implemented using DeepSORT. Additionally, a VGG16 model has been trained for gender detection, and the model's .h5 file has been saved and downloaded to the Google Drive account. Sample videos are downloaded from Google Drive to test the predict.py script, which includes the gender classifier code and face counting.

  • 00:35:00 In this section, the speaker explains the code that was added to implement object tracking using DeepSORT. The DeepSORT tracker is initialized, and a function is defined to convert the output received from the YOLOv8 model into a format compatible with DeepSORT (sketched below). The UI_box function creates bounding boxes around the detected objects, while the draw_boxes function calls the UI_box and draw_border functions to draw a rounded rectangle for the text. The DeepSORT tracking code is integrated along with the ability to draw trails. The speaker then explains the code for the gender classifier and the count function in the predict file. The count function is used to count each object in each frame. Overall, the code converts the x1, y1, x2, y2 output values from YOLOv8 into center coordinates, width, and height for DeepSORT tracking, implementing object detection and object tracking in a real-world application.
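A sketch of that coordinate conversion: DeepSORT expects (center x, center y, width, height), while YOLOv8 reports corner coordinates:

```python
def xyxy_to_xywh(x1, y1, x2, y2):
    """Convert YOLO's corner format to DeepSORT's center/size format."""
    w, h = x2 - x1, y2 - y1
    return x1 + w / 2, y1 + h / 2, w, h

print(xyxy_to_xywh(100, 50, 180, 210))  # -> (140.0, 130.0, 80, 160)
```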

  • 00:40:00 In this section, the speaker discusses converting the YOLOv8 output into the xc, yc center coordinates plus width and height of the bounding box to make it compatible with DeepSORT object tracking. They also explain how the compute_color_for_labels function assigns unique colors to the detected objects, and how the draw_border function creates a rectangle above the bounding box where the label and confidence score are written. The speaker also talks about the gender classifier class and how it is loaded and used on each frame of the video to classify the gender of the detected faces. Furthermore, they mention the UI_box and draw_boxes functions, which are used to create bounding boxes and call the gender classifier.

  • 00:45:00 In this section, the presenter explains how to use the gender classification model to detect whether a face belongs to a man or a woman. After a face is detected, only the bounding-box region is passed to the gender classification model. The model then predicts whether the face belongs to a man or a woman, and the label is added above the bounding box accordingly. The presenter then explains the count function, which uses a dictionary of found classes to store the number of faces detected in the frame. The count is displayed in the UI at the top of the video or image.

  • 00:50:00 In this section, the speaker explains that the count of faces detected in the current frame is stored in the found-classes dictionary. The dictionary holds a key, which contains "face," and a value, which contains how many faces were detected in the current frame. The speaker uses the count function to show how many faces were detected in each frame and assigns a unique ID to each detected face using deepsort.update. The speaker also creates the gender classification class. The model's detections are then tested on multiple demo videos, and the speaker shows the results for each frame.
  • 2023.03.05
  • www.youtube.com