We are a group of three passionate engineers who love to learn new skills. We teamed up to develop an IoT Edge application that uses computer vision to make the workplace safer for everyone. This is a brief description of our solution, outlining the different modules that made the application possible.
Our challenge was to use the Edge IoT platform to create a video analysis application for use in the workplace in different scenarios. Video analysis applications can be used to detect leaks, H&S violations, combustion patterns, etc.
For our solution we applied computer vision concepts along with IoT techniques to develop different modules that run on an Edge IoT device. These modules included an object detection module that monitors people, cars and animals in a video feed. They also included an MQTT publisher module that uses the Agora SDK to collect data from the other modules and send it over an MQTT channel. Lastly, we trained a helmet detection model to monitor H&S in the workplace, deployed it to the device and linked it to a live camera feed.
These modules were set up to run on the Edge device inside a virtual machine running Ubuntu 20.04, as required by the code. The VM was hosted on a Windows machine, and all the prerequisites needed to run the modules and the device were installed on it.
All the collected data was sent through MQTT and Node-RED to InfluxDB Cloud and then ingested into Grafana for visualisation. A demo of all the modules being visualised can be seen below:
Below is a block diagram showing the complete solution with the different modules deployed on the IoT device.
This was the first and main requirement of the challenge. Implementing it required learning about Docker containers in order to adjust the provided Dockerfile to work correctly. As a team we had no prior experience with Docker, so this took a significant amount of time. The video below shows the setup and operation of the object detection application, which detects people, cars and animals in a given video.
This application was further developed to detect person attributes (e.g. has_hat, has_backpack, etc.) and was deployed as a dockerized Edge application to our IoT device. The data from the module was sent to the MQTT publisher module through the EdgeHub using the code below.
import json

from azure.iot.device import IoTHubModuleClient

# Create a module client from the IoT Edge environment
module_client = IoTHubModuleClient.create_from_edge_environment()

# currentPersons, currentCar and currentAnimal hold the latest detection counts
payload = {'data': {
    'start': 0,
    'persons': currentPersons,
    'cars': currentCar,
    'animals': currentAnimal
}}

# Send the payload to the "extData" output route on the EdgeHub
module_client.send_message_to_output(json.dumps(payload), "extData")
In order to deploy the modules on the Azure Edge platform we had to create a device on the provided DGIoTHub. Below you can see the configuration of the running device, which had all three modules running on it; the device itself was simulated on the development machine.
The device used the MQTT standard to send the data to external sources. The "receiver" module used the Agora SDK to collect the data from the other modules through the EdgeHub, and then sent it on different topics to the MQTT broker running on the local machine. This data was processed using Node-RED and sent to a cloud bucket on InfluxDB.
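The topic routing described above can be sketched as a small pure function (an illustrative sketch only; the helper name `topic_for` is ours, not part of the module's actual code):

```python
def topic_for(src_module):
    """Map the source module of an EdgeHub message to the MQTT topic
    it is forwarded on. Returns None for modules we do not forward."""
    routes = {
        "videoAnalysisModule": "det/object",  # people/cars/animals counts
        "helmetDetection": "det/helmet",      # helmet detection results
    }
    return routes.get(src_module)
```

Keeping the mapping in one place makes it easy to add a new detection module later by adding a single entry.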
The video below shows the setup of the Agora SDK environment and the development and running of the module, including the MQTT broker, Node-RED and InfluxDB.
Below is a code snippet for the main logic inside the module:
import logging

import paho.mqtt.client as mqtt

# Callback fired once the connection to the broker is established
def on_connect(client, userdata, flags, rc):
    logging.debug("Connected: " + str(rc))
    client.subscribe("$SYS/#")

# Callback fired for every message received from the broker
def on_message(client, userdata, msg):
    logging.debug(msg.topic + " " + str(msg.payload))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message

# Connect to the MQTT broker running on the Docker host
client.connect("172.17.0.1", 1883, 60)
client.loop_start()

# SrcModule and payload come from the EdgeHub message being forwarded:
# each message is published on a topic matching its source module
if SrcModule == "videoAnalysisModule":
    client.publish("det/object", payload)
elif SrcModule == "helmetDetection":
    client.publish("det/helmet", payload)
The development of this module included collecting a dataset, training the custom model, dockerizing the application, deploying it to the Edge device and linking it to a live camera feed.
The training is based on the YOLOv5 repository by Ultralytics, and it is inspired by PeterH0323
The dataset was downloaded from nuvisionpower and transformed into the required format using Roboflow
Follow this link to view the training in the Code Byters Colab notebook
- Install YOLOv5 dependencies
- Download custom YOLOv5 object detection data
- Write YOLOv5 Training configuration
- Run YOLOv5 training
- Evaluate YOLOv5 performance
- Visualize YOLOv5 training data
- Run YOLOv5 inference on test images
- Export saved YOLOv5 weights for future inference
- Run the detection code on a local device using the custom weights
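The last step above boils down to invoking the YOLOv5 repository's detect.py script with the exported weights. As a rough sketch (the weights filename best.pt and webcam source "0" are assumptions for illustration, not necessarily the exact names we used):

```python
def build_detect_cmd(weights="best.pt", source="0", conf=0.4):
    """Build the command line for YOLOv5's detect.py.

    weights: path to the exported .pt weights file
    source:  a video file, image directory, or "0" for the default webcam
    conf:    confidence threshold below which detections are discarded
    """
    return [
        "python", "detect.py",
        "--weights", weights,
        "--source", source,
        "--conf-thres", str(conf),
    ]

# e.g. subprocess.run(build_detect_cmd(), cwd="yolov5")
```

Passing "0" as the source is what switches inference from a video file to the live webcam feed.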
The code could use the model to detect whether a person in a video file is wearing a helmet. It was first modified to use the laptop webcam as a live camera feed; the program analysed this feed and displayed a live output showing the result. The program was then dockerized so that it could run on the Edge device: a PyTorch Docker base image was used, the requirements were installed, and the code was run inside the container. The Docker image was pushed to the local registry and run as an Edge module.
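The container build can be sketched roughly as follows (a hedged sketch: the exact base image tag, file names and entry point in our actual Dockerfile may differ):

```dockerfile
# Assumed PyTorch base image; pin a specific tag in practice
FROM pytorch/pytorch:latest

WORKDIR /app

# Install the YOLOv5/OpenCV requirements inside the container
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the detection code and the custom weights
COPY . .

# Run helmet detection against the webcam passed through from the host
CMD ["python", "detect.py", "--weights", "best.pt", "--source", "0"]
```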
In order to give the Docker container access to the webcam, the Container Create Options had to be set up correctly on the Edge device when configuring the module, as seen below.
{
    "HostConfig": {
        "Devices": [
            {
                "PathOnHost": "/dev/video0",
                "PathInContainer": "/dev/video0",
                "CgroupPermissions": "mrw"
            }
        ]
    }
}
The data was collected from the different modules through the EdgeHub and sent to InfluxDB on the cloud. InfluxDB is a real-time time-series database that is widely used in IoT applications. Using the cloud allowed us to visualise the data on any device, not just the development machine.
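Each detection record lands in InfluxDB as a time-series point. As an illustrative sketch (the measurement and tag names below are assumptions, not our exact schema), a point in InfluxDB's line protocol can be built like this:

```python
def to_line_protocol(measurement, module, fields, ts_ns):
    """Format a detection record as an InfluxDB line-protocol string:
    measurement,tag_key=tag_value field=value,... timestamp

    Integer fields get the "i" suffix required by line protocol;
    ts_ns is a nanosecond-precision Unix timestamp.
    """
    field_str = ",".join(f"{k}={v}i" for k, v in sorted(fields.items()))
    return f"{measurement},module={module} {field_str} {ts_ns}"

line = to_line_protocol("detections", "videoAnalysisModule",
                        {"persons": 3, "cars": 1}, 1623772800000000000)
# "detections,module=videoAnalysisModule cars=1i,persons=3i 1623772800000000000"
```

Tagging each point with its source module makes it easy to filter per-module series when building the Grafana panels.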
We created a dashboard on Grafana for data visualisation and monitoring. The data source is configured to import real-time data from InfluxDB. This dashboard has three main sections that are used to monitor different health and safety conditions:
- Helmet Detection: used to monitor whether the people in the workplace are wearing proper PPE; an alert is created and sent to the manager when the HSE rule is violated.
- Workplace Monitoring: monitored the number of people and cars in the workplace, mainly to limit the number of people in an area and alert when this limit is exceeded.
- Vehicle Monitoring: the same model was used to restrict an area and allow no vehicles in it. Again, an alert was triggered when a vehicle entered the area.
Below you can see the different alerts on the Grafana alerts panel. Whenever a violation occurs, an email is sent to notify the admins.