The Client's Challenge
Every day, dozens of people fall victim to traffic accidents in our country, and preventing those accidents is one of the responsibilities of the Flemish Government's Department of Mobility and Public Works (MOW). The department recently introduced a new governance approach that focuses, among other things, on experimenting with the proactive detection of unsafe traffic situations in order to prevent accidents. MOW came to us for help with the smart image recognition.
The Solution
Since March 2021, the MOW has launched multiple pilot projects in Flanders in which fixed and drone cameras detect unsafe traffic situations. In West Flanders, projects have been set up in Kortrijk, Roeselare and Zonnebeke, where the MOW's data lab investigates how unsafe traffic situations involving cyclists can be detected before an accident actually occurs. Drones and cameras register traffic flows (mainly at busy intersections) and detect problems such as traffic light violations, inappropriate speeds, disregarded right-of-way rules, and the crossability or accessibility of public spaces. For example, they can detect how a cyclist on a roundabout in Roeselare only just avoided being hit.
Over a period of four months, we received thousands of images for analysis. We used these to detect pedestrians, cyclists, buses and cars, and more specifically their movements. To evaluate traffic safety effectively, we first measured calibration points on the road, which let us transform pixel coordinates into geographical coordinates. With this transformation, we project the trajectory of each road user onto a map.
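A pixel-to-map transformation like this is commonly modelled as a homography: a 3x3 matrix estimated from the measured calibration points (for instance with OpenCV's `cv2.findHomography`). The article does not spell out the method used, so the sketch below only shows the general idea: applying an already-estimated homography `H` (here a made-up example matrix) to a pixel coordinate.

```python
def apply_homography(H, px, py):
    """Map a pixel coordinate (px, py) to ground-plane coordinates using a
    3x3 homography matrix H, given as row-major nested lists.
    The result is divided by the homogeneous coordinate w."""
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return x / w, y / w

# Illustrative matrix only; in practice H is estimated from at least
# four measured calibration points on the road surface.
H = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 3, 4))  # → (6.0, 8.0)
```

With `H` calibrated against surveyed ground points, the same function maps every detected bounding-box position into geographical coordinates, so trajectories from different cameras live in one shared map frame.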
We then analyze the projected trajectories. The analysis is varied, ranging from measuring individual traffic violations to the frequency of violations over time. More important, however, is the proactive detection of dangerous intersections: a high rate of near misses is an indicator that an intersection is dangerous before an accident ever occurs.
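The article does not specify which surrogate safety measure is used (real studies often use metrics such as time-to-collision or post-encroachment time), so the following is only a minimal sketch of the idea: flag a near miss when two road users' projected trajectories come within some assumed distance threshold at the same moment.

```python
import math

def min_separation(traj_a, traj_b):
    """Each trajectory maps timestamp -> (x, y) position in metres on the map.
    Returns the smallest distance at any shared timestamp, or None if the
    two road users were never observed at the same time."""
    shared = traj_a.keys() & traj_b.keys()
    if not shared:
        return None
    return min(math.dist(traj_a[t], traj_b[t]) for t in shared)

def is_near_miss(traj_a, traj_b, threshold_m=1.0):
    """threshold_m is an illustrative assumption, not a value from the project."""
    d = min_separation(traj_a, traj_b)
    return d is not None and d < threshold_m

# A cyclist and a car passing within half a metre of each other:
cyclist = {0: (0.0, 0.0), 1: (5.0, 0.0)}
car = {0: (10.0, 0.0), 1: (5.5, 0.0)}
print(is_near_miss(cyclist, car))  # → True
```

Counting such events per intersection over weeks of footage gives the frequency signal used to rank intersections by risk.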
Through a secure connection, the images are transmitted to an Azure Storage Container. The application itself is written in Python and relies on a labeled dataset. Since we did not have access to such a dataset ourselves, we used a pre-trained YOLOv5 object detection model in PyTorch, trained on similar camera images in Montreal (Canada). To track the many moving objects in a video recording, we rely on ByteTrack, a state-of-the-art multi-object tracking algorithm that determines the identity, location and trajectory of each moving object, even when the object is partially or completely covered (occluded). All camera images are processed in Docker containers in the Azure Cloud.
The trajectories are stored in Citus, a distributed PostgreSQL database, and all infrastructure is managed with Terraform, an infrastructure-as-code tool.
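The article does not describe the schema, so the following is a hypothetical illustration of how trajectory points might be laid out as rows. For a self-contained example it uses Python's built-in `sqlite3` rather than Citus; in Citus the same table would be a distributed PostgreSQL table (sharded, for example, by recording or camera).

```python
import sqlite3

# Hypothetical schema; all column names and the sample row are assumptions,
# not the project's actual design.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE trajectory_points (
        track_id  INTEGER NOT NULL,  -- identity assigned by the tracker
        road_user TEXT    NOT NULL,  -- pedestrian, cyclist, bus, car, ...
        ts        REAL    NOT NULL,  -- timestamp within the recording (s)
        lon       REAL    NOT NULL,  -- geographical coordinates produced by
        lat       REAL    NOT NULL   -- the pixel-to-map transformation
    )""")
conn.execute("INSERT INTO trajectory_points VALUES (?, ?, ?, ?, ?)",
             (1, "cyclist", 0.0, 3.13, 50.95))  # illustrative values

# A trajectory is then simply the points of one track in time order:
rows = conn.execute(
    "SELECT ts, lon, lat FROM trajectory_points "
    "WHERE track_id = ? ORDER BY ts", (1,)).fetchall()
print(rows)  # → [(0.0, 3.13, 50.95)]
```

Storing one row per tracked position keeps the analyses (violation counts, near-miss queries per intersection) expressible as ordinary SQL over time-ordered points.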