Vehicle Distance Measurement System

1. Introduction

This project implements a real-time vehicle distance measurement system designed for in-vehicle dashcam applications. Using YOLOv12 object detection and monocular depth estimation through perspective geometry, the system calculates distances to surrounding vehicles with region-specific warning systems. The system monitors three ROI lanes (LEFT, MAIN, RIGHT) and automatically blurs license plates for GDPR compliance, providing an adaptable safety assistance system for autonomous vehicles and advanced driver monitoring.

Core Features:

  - Real-time vehicle detection (cars, motorcycles, buses, trucks) with YOLOv12
  - Monocular distance estimation via perspective projection
  - Three-zone ROI monitoring (LEFT, MAIN, RIGHT) with per-zone warning thresholds
  - Automatic license plate blurring for GDPR compliance
  - Color-coded distance labels (red for warnings, green for safe)

2. Methodology / Approach

The system employs a multi-component architecture combining object detection, perspective geometry, and region-based analysis:

Object Detection: YOLOv12 detects vehicles (cars, motorcycles, buses, trucks) and license plates in real time, keeping only detections above configurable confidence thresholds (0.7 for vehicles, 0.475 for plates).

Distance Estimation: Uses monocular depth estimation via perspective projection, calculating distance from bounding box height using calibrated focal length and known vehicle dimensions.

ROI Zone Analysis: Frame divided into three trapezoidal regions (LEFT, MAIN, RIGHT) with separate warning thresholds and display limits for context-aware alerting.

Privacy Protection: Automatic license plate detection and Gaussian blurring for regulatory compliance.

Adaptive Visualization: Color-coded distance labels (red for warnings, green for safe) with zone-specific thresholds.

2.1 System Architecture

[Dashcam Video Input]
    ↓
[YOLOv12 Vehicle Detection]
    ↓
[ROI Zone Classification]
    ↓
[Distance Calculation] → [Perspective Correction]
    ↓
[License Plate Detection] → [Blurring]
    ↓
[Warning Assessment] → [Color Coding]
    ↓
[Visualization & Output]
    ↓
[Annotated Video Output]

2.2 Processing Pipeline

  1. Capture frame from dashcam video
  2. Run YOLOv12 detection on full frame
  3. Filter detections to vehicle classes (COCO class IDs 2, 3, 5, 7)
  4. Classify vehicle center into ROI zone (LEFT/MAIN/RIGHT)
  5. Calculate distance using perspective projection formula
  6. Apply perspective distortion correction based on offset
  7. Detect license plates within vehicle regions
  8. Apply Gaussian blur to license plates
  9. Evaluate distance against zone-specific warning threshold
  10. Color-code distance label (RED/GREEN) accordingly
  11. Overlay distance text and warning indicators
  12. Record annotated frame to output video
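Steps 2–3 of the pipeline can be sketched without running the detector itself. Here each detection is represented as a hypothetical `(class_id, confidence, box)` tuple, a stand-in for the boxes the real YOLOv12 call returns; the filtering logic is what the document describes:

```python
# COCO class IDs used by the system: 2 = car, 3 = motorcycle,
# 5 = bus, 7 = truck (Section 8.1).
VEHICLE_CLASSES = {2, 3, 5, 7}
VEHICLE_CONFIDENCE = 0.7  # Section 8.4

def filter_vehicles(detections):
    """Keep only confident detections of the four vehicle classes.

    detections: iterable of (class_id, confidence, (x1, y1, x2, y2))
    tuples (a hypothetical format for illustration).
    """
    return [d for d in detections
            if d[0] in VEHICLE_CLASSES and d[1] >= VEHICLE_CONFIDENCE]

detections = [
    (2, 0.91, (400, 600, 700, 850)),   # car, confident -> kept
    (0, 0.88, (100, 500, 180, 700)),   # person -> dropped (not a vehicle)
    (7, 0.55, (900, 580, 1200, 820)),  # truck, below 0.7 -> dropped
]
kept = filter_vehicles(detections)
```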

3. Mathematical Framework

3.1 Perspective Projection Distance Calculation

The fundamental equation relating observed bounding box height to actual distance:

\[d = \frac{h_{\text{real}} \cdot f}{h_{\text{image}}}\]

where:

  - \(d\): distance to the vehicle (meters)
  - \(h_{\text{real}}\): actual vehicle height (meters, from the class table in Section 3.3)
  - \(f\): calibrated focal length (pixels)
  - \(h_{\text{image}}\): observed bounding box height in the image (pixels)
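A worked instance of the formula, using the document's calibration values; the bounding box height of 155 px is a hypothetical observation chosen for illustration:

```python
# d = h_real * f / h_image with the document's calibration.
FOCAL_LENGTH = 500.0  # pixels (Section 8.4)
h_real = 1.55         # meters, car reference height (Section 3.3)
h_image = 155.0       # pixels, observed bounding box height (hypothetical)

d = h_real * FOCAL_LENGTH / h_image  # 775 / 155 = 5.0 meters
```

A car whose bounding box spans 155 px is therefore estimated at 5 m; halving the box height to 77.5 px would double the estimate to 10 m, as the formula is inversely proportional to image height.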

3.2 ROI Zone Definition

Three trapezoidal regions for multi-lane monitoring (2042×1148 resolution):

\[\mathbf{ROI}_{\text{LEFT}} = \{(x, y) : (x, y) \in \text{Polygon}([[240, 600], [925, 550], [312, 1100], [100, 1100]])\}\]

\[\mathbf{ROI}_{\text{MAIN}} = \{(x, y) : (x, y) \in \text{Polygon}([[925, 550], [1025, 550], [1712, 1100], [312, 1100]])\}\]

\[\mathbf{ROI}_{\text{RIGHT}} = \{(x, y) : (x, y) \in \text{Polygon}([[1025, 550], [1802, 600], [1942, 1100], [1712, 1100]])\}\]

3.3 Vehicle Height Classification

Reference heights for distance calculation by vehicle type:

\[H_{\text{vehicle}} = \begin{cases} 1.55 \text{ m} & \text{if class ID} = 2 \text{ (Car)} \\ 1.2 \text{ m} & \text{if class ID} = 3 \text{ (Motorcycle)} \\ 3.0 \text{ m} & \text{if class ID} = 5 \text{ (Bus)} \\ 2.5 \text{ m} & \text{if class ID} = 7 \text{ (Truck)} \end{cases}\]
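The piecewise definition above maps directly to a lookup table. The fallback to the car height for unrecognized class IDs is an assumption for robustness, not necessarily what the original script does:

```python
# Reference heights (meters) keyed by COCO class ID (Section 3.3).
VEHICLE_HEIGHTS = {
    2: 1.55,  # car
    3: 1.2,   # motorcycle
    5: 3.0,   # bus
    7: 2.5,   # truck
}

def reference_height(class_id):
    # Assumed fallback: treat unknown classes as cars.
    return VEHICLE_HEIGHTS.get(class_id, 1.55)
```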

3.4 Perspective Distortion Correction

Correction factor for off-center vehicles due to perspective distortion:

\[d_{\text{corrected}} = d \cdot (1 + \alpha \cdot \delta)\]

where:

  - \(\alpha\): displacement coefficient (0.0001, see Section 8.4)
  - \(\delta\): pixel offset between the vehicle center and the zone's optical center,

\[\delta = \sqrt{(x_{\text{vehicle}} - x_{\text{center}})^2 + (y_{\text{vehicle}} - y_{\text{center}})^2}\]
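The correction is a one-line adjustment once \(\delta\) is computed. A sketch using the document's coefficient and the MAIN zone's optical center; the function name and the example offset are illustrative:

```python
import math

ALPHA = 0.0001  # displacement coefficient (Section 8.4)

def correct_distance(d, vehicle_center, optical_center, alpha=ALPHA):
    """Apply d_corrected = d * (1 + alpha * delta), where delta is the
    pixel offset between the vehicle center and the zone's optical center."""
    dx = vehicle_center[0] - optical_center[0]
    dy = vehicle_center[1] - optical_center[1]
    delta = math.hypot(dx, dy)
    return d * (1 + alpha * delta)

# A vehicle 500 px right of the MAIN zone's optical center (1025, 900),
# with a raw perspective-projection reading of 10 m:
corrected = correct_distance(10.0, (1525, 900), (1025, 900))  # -> 10.5 m
```

With \(\alpha = 0.0001\), the correction grows by 1% per 100 px of offset, so vehicles near the optical center are left almost untouched.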

4. Requirements

requirements.txt

# Requires Python >= 3.7 (the interpreter itself is not a pip package)
opencv-python>=4.5.0
numpy>=1.21.0
ultralytics>=8.0.0

5. Installation & Configuration

5.1 Environment Setup

# Clone the repository
git clone https://github.com/kemalkilicaslan/Vehicle-Distance-Measurement-System.git
cd Vehicle-Distance-Measurement-System

# Install required packages
pip install -r requirements.txt

5.2 Project Structure

Vehicle-Distance-Measurement-System/
├── Vehicle-Distance-Measurement-System.py
├── README.md
├── requirements.txt
└── LICENSE

5.3 Required Files

  - yolo12x.pt (YOLOv12 Extra Large weights, see Section 10.3)
  - license-plate.pt (license plate detection model)
  - Input dashcam video (e.g., dashcam_video.mov)

6. Usage / How to Run

6.1 Basic Execution

python Vehicle-Distance-Measurement-System.py

6.2 Configuration

Update the script parameters for your specific setup:

# Video input/output
video_capture = cv2.VideoCapture("dashcam_video.mov")
output_file = 'Vehicle-Distance-Measurement.mp4'

# ROI Zones (modify for different camera angles)
ROI_ZONES = {
    'LEFT': [[240, 600], [925, 550], [312, 1100], [100, 1100]],
    'MAIN': [[925, 550], [1025, 550], [1712, 1100], [312, 1100]],
    'RIGHT': [[1025, 550], [1802, 600], [1942, 1100], [1712, 1100]]
}

# Optical centers per zone (camera calibration)
OPTICAL_CENTERS = {
    'LEFT': (500, 800),
    'MAIN': (1025, 900),
    'RIGHT': (1550, 800)
}

# Warning distances per zone (meters)
WARNING_DISTANCES = {
    'LEFT': 1.0,
    'MAIN': 2.0,
    'RIGHT': 1.0
}

# Camera parameters
FOCAL_LENGTH = 500  # pixels
VEHICLE_CONFIDENCE = 0.7
LICENSE_PLATE_CONFIDENCE = 0.475

6.3 Controls

6.4 Output

The processed video is saved as:

Vehicle-Distance-Measurement.mp4

7. Application / Results

7.1 Input Video

Vehicle License Plate Blurring:

7.2 Vehicle Distance Measurement in Region of Interest

7.3 Output Video

Vehicle Distance Measurement:

8. System Configuration

8.1 Vehicle Classes

Class ID | Vehicle Type | Reference Height
---------|--------------|-----------------
2        | Car          | 1.55 m
3        | Motorcycle   | 1.2 m
5        | Bus          | 3.0 m
7        | Truck        | 2.5 m

8.2 Warning System Thresholds

Lane  | Warning Distance | Display Limit
------|------------------|--------------
LEFT  | 1.0 m            | 5.0 m
MAIN  | 2.0 m            | 15.0 m
RIGHT | 1.0 m            | 5.0 m
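The threshold logic above reduces to a small decision function. A sketch under the assumption (consistent with OpenCV's BGR channel order) that vehicles beyond a zone's display limit simply receive no label; the function name `label_color` is illustrative:

```python
# Per-zone thresholds from Section 8.2.
WARNING_DISTANCES = {'LEFT': 1.0, 'MAIN': 2.0, 'RIGHT': 1.0}  # meters
DISPLAY_LIMITS   = {'LEFT': 5.0, 'MAIN': 15.0, 'RIGHT': 5.0}  # meters

RED = (0, 0, 255)    # OpenCV uses BGR channel ordering
GREEN = (0, 255, 0)

def label_color(zone, distance):
    """Return None beyond the zone's display limit, otherwise RED below
    the warning threshold and GREEN at or above it."""
    if distance > DISPLAY_LIMITS[zone]:
        return None
    return RED if distance < WARNING_DISTANCES[zone] else GREEN

color = label_color('MAIN', 1.5)  # inside the 2.0 m warning distance -> RED
```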

8.3 Color Coding

  - RED: distance below the zone's warning threshold (collision risk)
  - GREEN: distance at or above the warning threshold (safe)

8.4 System Parameters

Parameter                | Value  | Unit   | Description
-------------------------|--------|--------|------------------------------------
Focal Length             | 500    | pixels | Camera focal length calibration
Vehicle Confidence       | 0.7    | -      | Detection threshold for vehicles
License Plate Confidence | 0.475  | -      | Detection threshold for plates
Max Display Distance     | 15     | meters | Maximum distance shown in MAIN lane
Displacement Coefficient | 0.0001 | -      | Perspective correction factor

9. How It Works

9.1 Detection and Distance Calculation Flow

                     ┌───────────────┐
                     │  Input Frame  │
                     └───────┬───────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────┐
│         YOLOv12 VEHICLE AND LICENSE PLATE DETECTION     │
│  Vehicles: Boxes with class IDs and confidence scores   │
│  Plates: Separate detections within vehicle regions     │
└────────────────────────────┬────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────┐
│  ROI ZONE       │  DISTANCE       │  LICENSE PLATE      │
│  CLASSIFICATION │  CALCULATION    │  BLURRING           │
│  (LEFT/MAIN     │  Via perspective│  (Gaussian blur)    │
│   /RIGHT)       │  projection     │                     │
└────────────────────────────┬────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────┐
│         PERSPECTIVE DISTORTION CORRECTION               │
│  Adjust distance based on optical center offset         │
│  d_corrected = d × (1 + α × δ)                          │
└────────────────────────────┬────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────┐
│         WARNING ASSESSMENT AND COLOR CODING             │
│  Compare distance to zone-specific warning threshold    │
│  RED: Below threshold | GREEN: Above threshold          │
└────────────────────────────┬────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────┐
│         VISUALIZATION AND ANNOTATION                    │
│  • Overlay distance labels with color coding            │
│  • Display ROI zones with transparency                  │
│  • Mark warning indicators                              │
│  • Draw license plate blur regions                      │
└────────────────────────────┬────────────────────────────┘
                             │
                             ▼
                     ┌──────────────┐
                     │ Output Frame │
                     └──────────────┘

9.2 Perspective Projection Model

The system calculates distance using the pinhole camera model. For a vehicle with known height \(h_{\text{real}}\) and observed bounding box height \(h_{\text{image}}\), the distance is inversely proportional to the image height. The focal length \(f\) (determined through camera calibration) acts as a scaling constant. Additional corrections account for vehicles off the optical axis due to perspective distortion.
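The similar-triangles argument behind the pinhole model can be made explicit: the ratio of image height to focal length equals the ratio of real height to distance,

\[\frac{h_{\text{image}}}{f} = \frac{h_{\text{real}}}{d} \quad\Longrightarrow\quad d = \frac{h_{\text{real}} \cdot f}{h_{\text{image}}}\]

which recovers the formula of Section 3.1 and makes the inverse proportionality between \(d\) and \(h_{\text{image}}\) immediate.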

10. Tech Stack

10.1 Core Technologies

10.2 Dependencies

Library       | Version | Purpose
--------------|---------|--------------------------------------------
opencv-python | 4.5+    | Video I/O, image processing, visualization
ultralytics   | 8.0+    | YOLOv12 vehicle and license plate detection
numpy         | 1.21+   | Array operations and geometric calculations

10.3 Pre-trained Models

YOLOv12 (Extra Large): yolo12x.pt

License Plate Detection Model: license-plate.pt

11. License

This project is open source and available under the Apache License 2.0.

12. References

  1. Ultralytics YOLOv12 Documentation.
  2. OpenCV Camera Calibration and 3D Reconstruction Documentation.

Acknowledgments

Special thanks to the Ultralytics team for developing and maintaining the YOLO framework and YOLOv12 models. This project benefits from the OpenCV community's excellent camera calibration and computer vision tools. The perspective projection methodology is based on established pinhole camera models in the computer vision literature. The sample dashcam footage is used for demonstration purposes only.


Note: This system is calibrated for specific dashcam configurations. Recalibrate focal length and ROI zones when using different camera equipment. Ensure compliance with local laws regarding vehicle data collection and dashcam recording. This project is intended for research, educational, and authorized commercial applications in vehicle safety systems. Always prioritize driver safety and avoid distraction when using in-vehicle monitoring systems.