Create an end-to-end video analytics pipeline to detect people and calculate the social distance between people from multiple input video feeds. Multi-Camera Detection of Social Distancing demonstrates how to use the Video Analytics Microservice in an application and store the data to InfluxDB*. This data can be visualized on a Grafana* dashboard.
Select Configure & Download to download the reference implementation and the software listed below.
Configure & Download
Time to Complete: 45 minutes
Refer to OpenVINO™ Toolkit System Requirements for supported GPU and VPU processors.
† Use Kernel 5.8 for 11th generation Intel® Core™ processors.
This is a reference implementation that demonstrates how to use the Video Analytics Microservice in an application to create a social distancing detection use case. The reference implementation consists of the pipeline and model configuration files, which are volume mounted into the Video Analytics Microservice Docker image, and the docker-compose.yml file used to start the containers. The results of the pipeline execution are routed to the MQTT broker and can be viewed from there. The inference results are also used by the vision algorithms in the mcss-eva Docker image for population density detection, social distance calculation, and so on.
The package uses Docker* and Docker Compose for automated container management.
A multi-camera surveillance solution demonstrates an end-to-end video analytics application that detects people and calculates the social distance between them from multiple input video feeds. It uses the video analytics microservice to ingest input videos and perform deep learning inference. Based on the inference output published on the MQTT topic, it then calculates social distance violations and serves the results to a webserver.
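The social distance check itself can be sketched as follows. This is a simplified illustration, not the actual mcss-eva algorithm: it assumes detections arrive as bounding boxes in pixel coordinates and compares centroid distances against a fixed pixel threshold (a real deployment would calibrate pixels to real-world distance per camera).

```python
from itertools import combinations
from math import hypot

def centroid(box):
    """Center point of a (x_min, y_min, x_max, y_max) bounding box."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2, (y_min + y_max) / 2)

def find_violations(boxes, min_distance):
    """Return index pairs of people closer than min_distance pixels.

    The fixed pixel threshold is an illustrative assumption; per-camera
    calibration would be needed to express the threshold in metres.
    """
    centers = [centroid(b) for b in boxes]
    violations = []
    for (i, a), (j, b) in combinations(enumerate(centers), 2):
        if hypot(a[0] - b[0], a[1] - b[1]) < min_distance:
            violations.append((i, j))
    return violations

# Two people close together, one far away
boxes = [(0, 0, 50, 100), (60, 0, 110, 100), (500, 0, 550, 100)]
print(find_violations(boxes, min_distance=100))  # [(0, 1)]
```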
The application performs the steps shown in the flow diagram below:
Figure 1: Solution Flow Diagram
Select Configure & Download to download the reference implementation and then follow the steps below to install it.
Configure & Download
unzip multi_camera_detection_of_social_distancing.zip
cd multi_camera_detection_of_social_distancing
chmod 755 edgesoftware
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
./edgesoftware install
NOTE: If you have issues with the image pull, try this command: sudo ./edgesoftware install
See Troubleshooting at the end of this document for details.
Figure 3: Install Success
cd multi_camera_detection_of_social_distancing/MultiCamera_Detection_of_Social_Distancing_/MCSD_Resources
NOTE: In the command above,
The application works best with input feeds from cameras placed at eye level.
Download a sample video at 1280x720 resolution, rename the file by replacing the spaces with the _ character (for example, Pexels_Videos_2670.mp4), and place it in the following directory:
multi_camera_detection_of_social_distancing/MultiCamera_Detection_of_Social_Distancing_/MCSD_Resources/resources
Where
(Data set subject to this license. The terms and conditions of the dataset license apply. Intel® does not grant any rights to the data files.)
To use multiple videos or any other video, download the video files and place them under the resources directory.
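As a convenience, the space-to-underscore renaming can be scripted. A minimal sketch, assuming the downloaded videos sit in the resources directory:

```python
from pathlib import Path

def normalized_name(filename):
    """Replace spaces with underscores,
    e.g. 'Pexels Videos 2670.mp4' -> 'Pexels_Videos_2670.mp4'."""
    return filename.replace(" ", "_")

def rename_videos(resources_dir):
    """Rename every .mp4 in the resources directory in place."""
    for path in Path(resources_dir).glob("*.mp4"):
        target = path.with_name(normalized_name(path.name))
        if target != path:
            path.rename(target)

print(normalized_name("Pexels Videos 2670.mp4"))  # Pexels_Videos_2670.mp4
```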
The model to download is defined in models.list.yml in the models_list folder.
Execute the commands below to download the required object detection model (person-detection-retail-0013) from the Open Model Zoo:
sudo chmod +x ../Edge_Video_Analytics_Resources/tools/model_downloader/model_downloader.sh
sudo ../Edge_Video_Analytics_Resources/tools/model_downloader/model_downloader.sh --model-list models_list/models.list.yml
You will see output similar to:
Figure 4: Download Object Detection Model
The pipeline for this RI is present in the /MCSD_Resources/pipelines/ folder. This pipeline uses the person-detection-retail-0013 model downloaded in the above step. The pipeline template is defined below:
"template": [
"{auto_source} ! decodebin",
" ! gvadetect model={models[object_detection][person_detection][network]} name=detection",
" ! gvametaconvert name=metaconvert ! gvametapublish name=destination",
" ! appsink name=appsink"
]
The pipeline uses standard GStreamer elements for the input source and for decoding the media files, gvadetect to detect objects, gvametaconvert to produce JSON from the detections, and gvametapublish to publish the results to the MQTT destination. The model identifier for the gvadetect element is updated to the new model. The downloaded model is located in the models/object_detection/person_detection directory.
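gvametaconvert emits one JSON document per frame, which gvametapublish forwards to the broker. The fragment below shows how a subscriber might pull person boxes out of such a message. The message shape here is an assumption modeled on DL Streamer's default JSON output (objects with detection/bounding_box fields), not a guaranteed contract; inspect a real message from the MQTT broker to confirm the exact schema.

```python
import json

# Example payload resembling gvametaconvert output (field names are
# an assumption; verify against a real message from the broker).
message = json.dumps({
    "timestamp": 1000000,
    "objects": [
        {"detection": {
            "bounding_box": {"x_min": 0.1, "y_min": 0.2,
                             "x_max": 0.3, "y_max": 0.9},
            "confidence": 0.92,
            "label": "person"}},
    ],
})

def person_boxes(payload, min_confidence=0.5):
    """Extract normalized (x_min, y_min, x_max, y_max) person boxes."""
    frame = json.loads(payload)
    boxes = []
    for obj in frame.get("objects", []):
        det = obj.get("detection", {})
        if det.get("label") == "person" and det.get("confidence", 0) >= min_confidence:
            bb = det["bounding_box"]
            boxes.append((bb["x_min"], bb["y_min"], bb["x_max"], bb["y_max"]))
    return boxes

print(person_boxes(message))  # [(0.1, 0.2, 0.3, 0.9)]
```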
Refer to Defining Media Analytics Pipelines for understanding the pipeline template and defining your own pipeline.
cd multi_camera_detection_of_social_distancing/MultiCamera_Detection_of_Social_Distancing_/MCSD_Resources
export HOST_IP=$(hostname -I | cut -d' ' -f1)
NOTE: The command above sets the HOST_IP environment variable to the first IP address reported by hostname -I.
sudo -E docker-compose up -d
This will volume mount the pipelines and models folders into the edge video analytics microservice. Check the status of the running containers:
sudo docker-compose ps
Figure 5: Check Status of Running Containers
sudo docker logs -f mcss-eva
You will see output similar to:
Figure 6: Logs of Running Containers
To view the output streams, open a browser and go to the URL
In some cases, the video execution concludes before you get to view this dashboard. To see the output streams, you may need to restart the mcss-eva container by executing this command:
sudo docker restart mcss-eva
Figure 7: View Output Stream
NOTE: If the sudo docker-compose up command fails on a proxy-enabled network, refer to the Troubleshooting section of this document.
hostname -I | cut -d' ' -f1
Figure 8: Grafana Home Screen
Figure 9: Select MCSS Main Dashboard
Figure 10: MCSS Main Grafana Dashboard
Figure 11: MCSS Channel Grafana Dashboard
This application demonstrates how Intel® Distribution of OpenVINO™ toolkit plugins can be leveraged through the Video Analytics Microservice to detect people, measure the distance between them, and store the data in InfluxDB. It can be extended to support feeds from network streams (RTSP cameras), and the algorithm can be optimized for better performance.
To continue learning, see the following guides and software resources:
In a proxy environment, if the proxy is set only for a single user (i.e., in the .bashrc file), then some component installations may fail or the installation may hang. Make sure you have set the proxy in /etc/environment.
If your system is in a proxy network, add the proxy details in the environment section in the docker-compose.yml file.
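For example, the environment section of the microservice in docker-compose.yml might look like the sketch below. The proxy host and port are placeholders you must replace with your own values:

```yaml
services:
  edge_video_analytics_microservice:
    environment:
      - HTTP_PROXY=http://<proxy-host>:<proxy-port>
      - HTTPS_PROXY=http://<proxy-host>:<proxy-port>
      - NO_PROXY=localhost,127.0.0.1
```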
HTTP_PROXY=http://<proxy-host>:<proxy-port>
HTTPS_PROXY=http://<proxy-host>:<proxy-port>
NO_PROXY=localhost,127.0.0.1
If proxy details are missing, then it fails to get the required source video file for running the pipelines and installing the required packages inside the container.
Run the command: sudo -E docker-compose up
Additionally, if your system is in a proxy network, add the proxy details for Docker in a proxy.conf file under /etc/systemd/system/docker.service.d:
HTTP_PROXY=http://<proxy-host>:<proxy-port>
HTTPS_PROXY=http://<proxy-host>:<proxy-port>
NO_PROXY=localhost,127.0.0.1
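Note that systemd drop-in files use the [Service]/Environment= syntax, so the proxy.conf contents would look like the sketch below (proxy host and port are placeholders you must replace with your own values):

```ini
[Service]
Environment="HTTP_PROXY=http://<proxy-host>:<proxy-port>"
Environment="HTTPS_PROXY=http://<proxy-host>:<proxy-port>"
Environment="NO_PROXY=localhost,127.0.0.1"
```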
Once the proxy has been updated, execute the following commands:
sudo systemctl daemon-reload
sudo systemctl restart docker
Stop and remove the containers if you see conflict errors.
To remove the edge_video_analytics_microservice and eiv_mqtt, follow the steps below:
First, stop and then remove the containers:
sudo docker stop edge_video_analytics_microservice eiv_mqtt
sudo docker rm edge_video_analytics_microservice eiv_mqtt
Next, run sudo docker-compose down from the path specified below:
cd multi_camera_detection_of_social_distancing/MultiCamera_Detection_of_Social_Distancing_/Edge_Video_Analytics_Resources
sudo docker-compose down
NOTE: In the command above,
If you're unable to resolve your issues, contact the Support Forum.