Attendees of Taylor Swift's 2018 concert at the Rose Bowl were in for a shock. Only a year later, reports began to surface that one of the event's organizers had used facial recognition technology to surveil the crowd.
Wow. Can they really do that without informing the attendees? When journalists finally tracked down the company behind the undertaking – ISM Connect, an event marketing and security firm – they received the following answer.
According to a statement from ISM Connect's head of marketing, the company placed large screens, fitted with smart cameras running facial recognition software, at each entrance of the venues hosting Swift's concerts. The sole purpose of the project, the company said, was to improve the safety and security of the event: the cameras identified people who posed a security risk based on information the company already held.
Unsurprisingly, Swift herself was shocked, too, and with good reason: besides providing security, ISM Connect is also known for collecting audience metrics to inform future projects.
But is ISM Connect alone in this? Surely it can't be, as many tech giants have reportedly been experimenting with facial recognition technology. Should we worry? Are they already tracking us the way ISM did at Swift's concerts? Here's what we know.
The $20 Billion Industry
Facial recognition technology has been attracting a lot of interest from leading tech companies around the world. Indeed, it holds great promise for many industries; for example, some claim that an AI-enabled system can detect a potential terrorist by reading their facial expressions and even behavior.
Such a system would give law enforcement a new class of tools against terrorists, helping to prevent mass shootings and suicide bombings and potentially saving hundreds or even thousands of lives every year. That's why police and security forces are already testing automated facial recognition systems built on AI algorithms.
However, improving public security isn't the only purpose behind the interest in AI-enabled facial recognition systems. Many tech giants, including Microsoft and IBM, are actively involved in their development, and their projects have nothing to do with identifying terrorists. Among the main purposes of emotion detection research and development are gauging how people respond to new products and judging whether a candidate is right for a job based on the impression they make during an interview.
Clearly, using AI to determine people's emotional states is no longer the stuff of science fiction, and some sources suggest it's already a $20 billion industry. Facial recognition in particular accounts for about 21 percent of the biometric technology market, according to Statista, which, together with fingerprint identification, hand geometry recognition, and vein recognition, makes a strong business case for many large tech companies.
So, what exactly are companies like Microsoft and IBM doing with facial recognition technology? And is anyone holding them accountable for how they collect, use, and store people's biometric data?
So, who’s involved in tracking people’s emotions?
Amazon, IBM, Microsoft, and Google have all been reported to be developing facial recognition algorithms and systems. Let's talk a little about what they have been doing and the results of their projects.
Microsoft has pursued its facial recognition initiatives fairly quietly. Recently, the company surprised the world when reputable news sources reported that it had taken down a large facial recognition database containing 10 million images of about 100,000 people. According to the reports, the database was built from images the company collected via search engines; these images were later used to train facial recognition systems around the world.
Despite Microsoft's claims that the database was intended for academic purposes only, there have been reports linking it to an AI project in China aimed at cracking down on ethnic minorities in the country. The news raised concerns about the potential misuse of the technology, especially against minorities and immigrants. Not surprisingly, many were relieved when the company hit the brakes and took the database down.
However, that didn't mean the end of the company's efforts. It currently sells Face API, its facial recognition software, to businesses, and customer reviews suggest that many organizations have already tested and benefited from the solution. So, it's safe to assume that Microsoft's algorithms are being used as you read this article.
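To get a sense of what such a service looks like from a developer's point of view, here is a minimal sketch of a call to the Face API's detection endpoint in Python. The endpoint, key, and image URL below are placeholders, and the exact attributes you can request depend on your subscription and API version.

```python
# A minimal sketch of calling Microsoft's Face API detect endpoint.
# The endpoint, key, and image URL are placeholders, not real values.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-face-api-key>"                                    # placeholder

def detect_faces(image_url: str):
    """Ask the Face API to detect faces in an image and return their attributes."""
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={"returnFaceAttributes": "age,emotion"},
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()  # a list of detected faces with the requested attributes

if __name__ == "__main__":
    for face in detect_faces("https://example.com/crowd.jpg"):
        print(face["faceAttributes"])
```

A single call like this returns an entry per detected face, which is why a company can analyze an entire crowd from a handful of camera frames.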
Related read: “Enterprises derive full value of conversational AI only through end-to-end chatbot platforms”— Sairam Vedam, CMO, Kore.ai
One thing should be said in Microsoft's defense, though. Brad Smith, the company's president, has repeatedly urged the government to pass legislation requiring facial recognition systems to be tested independently. He also wrote a powerful article last year warning that the technology can be used "in ways that exacerbate societal issues," supporting the need for better control over how facial recognition is developed and used.
Other companies are also catching up.
For example, Amazon is another big player in the facial recognition game. Its product, Amazon Rekognition, claims to deliver "intelligent image and video analysis to applications" and can identify objects, activities, scenes, text, and, of course, people. In fact, the company boasts that the system delivers highly accurate facial recognition, analysis, and even comparison.
On top of that, Amazon Rekognition ships with facial data for celebrities, so users can also identify well-known individuals in images and videos.
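For illustration, here is a minimal sketch of what that celebrity recognition call looks like through the boto3 SDK. The bucket and file names are placeholders, and real use requires AWS credentials and the appropriate permissions.

```python
# A minimal sketch of Amazon Rekognition's celebrity recognition via boto3.
# Bucket and object names are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

image = {"S3Object": {"Bucket": "example-bucket", "Name": "red-carpet.jpg"}}

response = rekognition.recognize_celebrities(Image=image)

# Faces matched against Rekognition's built-in celebrity database.
for celeb in response["CelebrityFaces"]:
    print(f"{celeb['Name']}: {celeb['MatchConfidence']:.1f}% match confidence")

# Faces detected in the image that did not match any known celebrity.
for face in response["UnrecognizedFaces"]:
    print("Unrecognized face at", face["BoundingBox"])
```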
Many news outlets have asked Amazon to reveal the names of the businesses using the system. Beyond mentioning several non-profit organizations that use Rekognition to find missing children, however, the company hasn't disclosed any for-profit customers. According to its statement, it won't release information about Rekognition customers without their consent.
However, after obtaining access to Amazon's documentation, the American Civil Liberties Union claimed that a number of local law enforcement agencies were using the system. Soon after, news emerged that Orlando's police had been tracking people with Amazon's technology in several places around the city. According to reports, the program began in 2017, but the city recently decided to end it because of technical difficulties as well as doubts about its effectiveness at identifying suspects in real time.
“This led to numerous Amazon employees writing to the management, asking to cancel sales of the facial recognition technology to law enforcement,” says Andy Grier, a news researcher from Studicus. “The company responded by saying that it’s ready to participate in the national discussion of the technology to make sure that its utilization is legal and transparent.”
So, both Microsoft and Amazon have expressed a desire to contribute to an appropriate national legislative framework that protects civil rights. Sales of their facial recognition software continue, however; in fact, some of Amazon's shareholders reportedly tried to limit sales of the software, but their effort failed.
Instead, the company is actively adding new features. One of the latest Rekognition updates reported by the media improved the facial analysis feature with better age estimation and even the ability to detect fear. Rekognition can now track and analyze the emotional state of hundreds of people, matching faces against databases containing several million records of individuals.
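As a rough sketch of what that analysis looks like through the same boto3 SDK: the detect_faces call returns an estimated age range and a list of emotions (FEAR among them) with confidence scores for every face it finds. The bucket and file names are again placeholders.

```python
# A minimal sketch of Rekognition's facial analysis: age estimation and
# emotion detection. Bucket and object names are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

image = {"S3Object": {"Bucket": "example-bucket", "Name": "crowd.jpg"}}

response = rekognition.detect_faces(Image=image, Attributes=["ALL"])
for face in response["FaceDetails"]:
    age = face["AgeRange"]
    # Pick the emotion Rekognition is most confident about for this face.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(f"Age {age['Low']}-{age['High']}, "
          f"dominant emotion: {top_emotion['Type']} ({top_emotion['Confidence']:.1f}%)")
```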
At this point, the Washington County Sheriff's Office in Oregon is the only law enforcement agency officially confirmed as a Rekognition customer, but predicting where the technology goes next is difficult.
Another notable player responsible for stirring controversy is IBM. The company reportedly used photos quietly scraped from Flickr to train and test its own facial recognition project, a common way to make recognition systems more accurate and reliable. Even though the company issued a statement explaining its careful approach to protecting private data, opting out of the project was effectively impossible because the people in the photos never knew their images had been used.
Naturally, the news caused a strong public reaction. Things got worse when BuzzFeed released an extensive report describing IBM's involvement in selling biometric surveillance systems based on facial recognition to the UAE's police. The report leveled a number of serious accusations against the local government, which only strengthens the case for more international attention to, and regulation of, the use of facial recognition.
Unclear Future
Taylor Swift was reportedly shocked when she learned that fans had been scanned with facial recognition software during her concert at the Rose Bowl. Many people share that reaction, especially after hearing about tech giants selling the technology while keeping their customers' names secret.
Indeed, this technology can be a huge threat to privacy, so an appropriate international legal framework should be in place to protect the public. At this point, however, things are clearly messy: facial recognition is becoming a multi-billion-dollar industry with little to no regulation.
Guest Author- Estelle Liotard, Kore.ai