The client had a request: is it possible to determine how energetically people are dancing?
The idea was to measure people's energy levels. As you may have already guessed, it was an ad campaign for an energy drink brand.
The campaign wasn't designed to be an online one. The plan was that people would dance on a stage built for the campaign, and our software would measure the energy level of each dance in real time.
This is how I designed the system:
The goal was that every dancer should see their result in real time on the big screen, so they would know how well they were doing. That way, a performer would know whether to speed up, or whether they could slow down a bit to catch their breath.
To achieve this, we had to show a progress bar that changed according to the dancer's performance. And at the end of the performance, we had to send a recording of the dance, along with the progress bar's fluctuations, to the server for storage.
And last but not least, we had to be able to control the software remotely, because we weren't supposed to touch the notebook that mirrored the big screen.
Once the requirements were established, it was time to build the actual system.
To rate the performance, I set a minimum threshold: if the number of changed pixels exceeded the threshold, the person on camera was dancing energetically enough, and I added 1% to their progress; if the pixel changes fell below the threshold, I penalized the dancer by reducing their progress by 1%. That meant if a dancer decided to stop for a while to take a breath, their progress would keep declining. The game continued until the dancer hit 100%, which meant they won.
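The scoring rule above can be sketched as a small pure function. This is a minimal illustration, not the project's actual code; the names (`updateProgress`, `CHANGE_THRESHOLD`) and the threshold value are my own assumptions.

```javascript
// Assumed threshold of changed pixels per frame; in practice this would be
// tuned to the camera, lighting, and stage setup.
const CHANGE_THRESHOLD = 1500;

function updateProgress(progress, changedPixels, threshold = CHANGE_THRESHOLD) {
  // Energetic enough: +1%; too still: -1%. Clamp to the [0, 100] range.
  const delta = changedPixels > threshold ? 1 : -1;
  return Math.min(100, Math.max(0, progress + delta));
}

// Run once per analyzed frame; the dancer wins when progress reaches 100.
```

Clamping at both ends keeps a resting dancer from going negative and stops the bar at the winning mark.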
If you want to dig a little deeper into the technical background of motion detection, I strongly recommend checking out the articles mentioned at the end of this note. Once you understand how things work under the hood, using a library won't feel like shooting in the dark. For example, I used "diff-cam-engine" for motion detection.
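To give a flavor of what happens under the hood, here is a simplified, pure-JavaScript illustration of the frame-differencing idea such libraries are built on. This is not diff-cam-engine's actual code; the function name and the per-pixel threshold are my own.

```javascript
// Count how many pixels changed noticeably between two frames. The frames are
// flat RGBA byte arrays, like those returned by ctx.getImageData(...).data.
function countChangedPixels(prevFrame, currFrame, pixelThreshold = 32) {
  let changed = 0;
  // Walk the arrays 4 bytes at a time (R, G, B, A per pixel).
  for (let i = 0; i < currFrame.length; i += 4) {
    // Compare the perceived brightness (luma) of each pixel in both frames.
    const prevLuma =
      0.299 * prevFrame[i] + 0.587 * prevFrame[i + 1] + 0.114 * prevFrame[i + 2];
    const currLuma =
      0.299 * currFrame[i] + 0.587 * currFrame[i + 1] + 0.114 * currFrame[i + 2];
    if (Math.abs(currLuma - prevLuma) > pixelThreshold) changed++;
  }
  return changed;
}
```

The resulting count is exactly the kind of number you would compare against the progress threshold described above.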
When I was done with the game mechanics of the app, I had to work on the part where I recorded a video of the performance, merged it with the progress bar animation, and added the background music that played while the dancer performed.
To do this, I decided to use a hidden canvas, where I drew the video and the progress bar animation side by side in real time. When the performer finished the act, I added the music to the background using WebRTC and sent the final product to the server. This was the only time I sent a large chunk of data to the backend. (I've listed WebRTC tutorials and code examples at the end of this note.)
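A hedged sketch of the hidden-canvas approach follows. The function names, layout proportions, and capture frame rate are my assumptions, not the project's actual code. One side note: the `MediaRecorder` used to capture the canvas technically belongs to the MediaStream Recording spec, though it is usually covered in WebRTC tutorials.

```javascript
// Draw one composite frame: camera feed on the left, progress bar on the right.
function drawCompositeFrame(ctx, video, progress, width, height) {
  ctx.drawImage(video, 0, 0, width * 0.8, height);
  const barHeight = (progress / 100) * height;
  ctx.fillStyle = '#e63946';
  ctx.fillRect(width * 0.85, height - barHeight, width * 0.1, barHeight);
}

// Browser-only part: record the hidden canvas plus a music track.
function startRecording(canvas, musicStream, onDone) {
  const stream = canvas.captureStream(30); // 30 fps from the hidden canvas
  musicStream.getAudioTracks().forEach((t) => stream.addTrack(t));
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => onDone(new Blob(chunks, { type: 'video/webm' }));
  recorder.start();
  return recorder; // call recorder.stop() when the act ends
}
```

The resulting Blob is what gets uploaded to the server in one shot at the end of the performance.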
Example of the canvas recording:
After dealing with the video-editing part of the app, it was time to handle the remote control part of the system. If you remember, one of the requirements was that we shouldn't interact with the monitor screen in any way. To integrate seamlessly with the backend, sockets came to the rescue. The front end of the app listened on a socket to receive each new player's data, as well as the game's start and stop commands. These were sent from the app's control panel, which in turn listened on the socket for updates about what was happening on the front end.
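The front-end side of that control channel can be sketched as a message dispatcher plus WebSocket wiring. The message shapes (`{type: 'start' | 'stop' | 'player'}`) and all names here are hypothetical; the project's actual wire format may differ.

```javascript
// Apply one control-panel message to the game on the big screen.
function handleControlMessage(raw, game) {
  const msg = JSON.parse(raw);
  switch (msg.type) {
    case 'player':
      game.setPlayer(msg.player); // a new dancer registered from the control panel
      return 'player';
    case 'start':
      game.start();
      return 'start';
    case 'stop':
      game.stop();
      return 'stop';
    default:
      return 'ignored'; // unknown messages are silently dropped
  }
}

// Browser-only wiring: the big-screen page listens on a WebSocket and feeds
// every incoming message into the dispatcher above.
function connect(url, game) {
  const socket = new WebSocket(url);
  socket.onmessage = (event) => handleControlMessage(event.data, game);
  return socket;
}
```

Keeping the dispatcher pure makes it easy to test the protocol without a live socket.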
When it comes to the backend, Python is my go-to language. Because of Django's solid structure and built-in features, I decided to use Django this time as well. Handling the socket connections was fairly easy too, thanks to Django Channels. A simple Django setup inside a Docker container did the trick. Here's the setup example that I used.
In conclusion, the project was completed successfully.
Below is a list of the articles and libraries I looked into before starting the project; they may be helpful in your situation as well: