AIHunters helps University of Melbourne researchers track auction attendance
Lithuania — February 14, 2023 — AIHunters, a company specializing in intelligent video content analysis and media automation, has partnered with researchers from the University of Melbourne. The AI team built a pipeline around its proprietary CognitiveMill™ platform to help the researchers track auction attendance using footage of the auctions.
The issue the team tried to solve with video analysis
The researchers set out to study the state of the real estate market by tracking auction attendance. To do that, they obtained recordings of around 1,000 auctions held in the Australian state of Victoria over two years.
Which means there are a lot of people to count: too many for humans, but a piece of cake for video analytics tech, if it is applied correctly.
That’s where AIHunters stepped in, bringing its expertise in video analysis in the form of CognitiveMill™.
The challenge AIHunters came across
Upon reviewing the auction recordings, the AIHunters team ran into a problem with their quality. All 1,000 videos were filmed on a smartphone with no other equipment whatsoever, so the footage lacks both sharpness and stabilization.
That would make analyzing the footage ineffective, since neural networks have a hard time picking up details from grainy, shaky footage.
In addition, the sheer volume of footage would demand a large amount of both computing power and time dedicated to analysis.
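For illustration, here is a minimal sketch of the kind of per-frame quality gate that could weed out the blurriest frames before any heavy analysis runs, saving both accuracy and compute. It assumes OpenCV and a hypothetical blur threshold; CognitiveMill™'s actual preprocessing is proprietary and not shown here.

```python
# Illustrative sketch only: a simple quality gate that skips blurry frames.
# The threshold value is hypothetical and would need tuning on real footage.
import cv2


def sharpness_score(frame) -> float:
    """Variance of the Laplacian, a common proxy for image sharpness."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()


def usable_frames(video_path: str, blur_threshold: float = 100.0):
    """Yield only the frames sharp enough to be worth analyzing."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if sharpness_score(frame) >= blur_threshold:
            yield frame
    cap.release()
```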
So the team had to come up with a more efficient solution using their expertise in cognitive computing. They chose the CognitiveMill™ cloud platform, which has proven very effective in video analysis, as the framework for the task.
How the engineers applied video analytics
The team had to find a way around that “less-than-perfect” image quality, which might hinder the effectiveness of computer vision tech.
So the AI engineers started by figuring out how a human would solve such a task.
Here’s what they found out:
- A human goes through the footage and tries to pinpoint the major unique environments in it;
- Then they rewind the footage and count the groups of people appearing in each of those scenes separately, recognizing unique appearances so no one is counted twice.
Following that human logic, there is no point tracking each and every attendee in detail to get consistent results, which means that a computer vision algorithm tracking the individual features of every person attending the auction would be overkill.
There’s a much more subtle way to approach the problem.
Adopting human logic spared the hardware extra work, sped up the process by a considerable margin, and kept the results accurate enough for this specific case.
Which is why the team went with it:
- They started off by applying the platform’s motion perception technology to segment the footage into non-overlapping scenes and choose the most representative frames from each scene;
- Then the team used deep learning networks to detect the bodies and faces of attendees;
- Finally, the networks counted the number of visible bodies and faces in each scene (see the sketch after this list).
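To make the pipeline concrete, here is a simplified end-to-end sketch under stated assumptions: CognitiveMill™’s motion perception and DL detectors are proprietary, so a frame-differencing cut detector and OpenCV’s stock HOG pedestrian detector stand in for them below, and every threshold is hypothetical.

```python
# Simplified stand-in for the described pipeline: segment by motion change,
# pick one representative frame per scene, and count people in it.
import cv2
import numpy as np


def split_into_scenes(video_path: str, cut_threshold: float = 30.0, step: int = 5):
    """Group sampled frames into scenes; a large mean frame-to-frame
    difference is treated as a cut that starts a new scene."""
    cap = cv2.VideoCapture(video_path)
    scenes, current, prev_gray, idx = [], [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # sample every `step`-th frame to keep compute down
            small = cv2.resize(frame, (320, 180))
            gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
            if prev_gray is not None and np.mean(cv2.absdiff(gray, prev_gray)) > cut_threshold:
                scenes.append(current)  # scene boundary: close the current scene
                current = []
            prev_gray = gray
            current.append(frame)
        idx += 1
    cap.release()
    if current:
        scenes.append(current)
    return scenes


def representative_frame(scene):
    """Pick the sharpest frame of a scene (variance of the Laplacian)."""
    def sharpness(f):
        return cv2.Laplacian(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), cv2.CV_64F).var()
    return max(scene, key=sharpness)


def count_people(frame) -> int:
    """Count visible bodies with OpenCV's built-in HOG pedestrian detector,
    a stand-in for the proprietary DL body/face detectors."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return len(boxes)


def estimate_attendance(video_path: str) -> int:
    """Count people in each scene's representative frame; the maximum
    per-scene count serves as a crude attendance estimate."""
    counts = [count_people(representative_frame(scene))
              for scene in split_into_scenes(video_path) if scene]
    return max(counts, default=0)
```

Taking the maximum per-scene count as the attendance estimate is one plausible aggregation; how CognitiveMill™ actually combines the per-scene counts is not disclosed.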
The outcome of AIHunters’ video content analysis approach
By adapting the technology to the human way of thinking, the AI engineers were able to tackle the challenge, providing an accurate auction attendance evaluation in the most efficient way possible.
The researchers got their auction attendees counted nice and fast, and the team has one more case under its belt. Everybody is happy.