Can AI detect emotion? The claim is met with deep concern

Perhaps the best news to come out of the interviews Microsoft researcher Kate Crawford has given about her new book Atlas of AI is that she kept her job after sharing her thoughts on the subject.

Crawford is one of a woefully small (and all-female) group of industry insiders from leading companies speaking out against the mistakes and shortcomings of AI development and marketing.

Two Google executives tasked with ensuring that ethics are built into their employer’s AI development were not given the chance to keep their jobs after doing precisely what they were hired to do.

Crawford is a Senior Principal Investigator at Microsoft Research as well as a Senior Lecturer in Communication and Science and Technology Studies at the University of Southern California. Atlas of AI examines the costs and benefits of life augmented by algorithms.

The term AI is too abstract for most people (including many CEOs), and that’s a problem considering how much it increasingly affects daily life, from content recommendation to biometric monitoring.

It’s marketed – by Microsoft and others – as a green, unbiased, democratic approach that is ready for wide deployment on life-and-death issues, but the industry is wrong on all counts, according to Crawford.

Interviewed in The Guardian, Crawford said that in writing her book she went to extremes – visiting, for example, a mine that produces raw materials for the technology industry – to understand for herself the costs, the impacts and the intellectual effort behind algorithmic decision-making.

It’s her position that most people think AI is just electronics chewing up innocent bits of data, when in fact a great deal of underrated, invisible human labor goes into the end product.

AI is truly a human endeavor that makes digital systems appear to be self-sufficient, according to Crawford.

And like anything that is created by people, it is biased.

Industry marketing is trying to convince potential buyers and regulators that any bias in the code is insignificant, easily removed by training algorithms on increasingly large databases.

The message seems to be that accidental and malicious bias was a problem until, perhaps, 2020; new, perfectly balanced databases will dilute the substandard material. End of problem.

Crawford appears to lean toward some sort of mass filtering of public and private databases, rigid standards to minimize bias, and a recognition by all involved that bias can never be fully eliminated.
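
To see why simply enlarging a training database does not remove bias, consider a toy audit in Python. The records, group names and labels below are invented purely for illustration; the point is that per-group label rates, and whatever skew they encode, survive at any scale if the new data is drawn from the same sources.

```python
from collections import Counter

# Toy (group, label) pairs standing in for a labeled training set.
# The group names and labels here are hypothetical, for illustration only.
records = [
    ("group_a", "happy"), ("group_a", "happy"), ("group_a", "neutral"),
    ("group_b", "angry"), ("group_b", "angry"), ("group_b", "happy"),
]

def label_rates(records):
    """Per-group label frequencies: a first-pass skew audit."""
    by_group = {}
    for group, label in records:
        by_group.setdefault(group, Counter())[label] += 1
    return {
        group: {label: n / sum(counts.values()) for label, n in counts.items()}
        for group, counts in by_group.items()
    }

print(label_rates(records))
# Doubling or tripling the data from the same sources reproduces the
# same per-group rates; only deliberately rebalanced data changes them.
```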

She’s much less optimistic about emotion recognition, or affective computing – an area Microsoft is playing in. And Crawford has company in her doubts.

In the Guardian interview, she says the claim that thoughts, intentions, urges and plans can be read from facial expressions is deeply flawed. As unreliable as the technology may be, the market is expected to grow to $37.1 billion.

A Vice article revealed that four software companies – Cerence, Eyeris, Affectiva and Xperi – were either selling or preparing to sell emotion detection algorithms to automakers.

They are relatively small fry, however. Microsoft has been researching and developing emotion recognition since at least 2015, and it is a feature of the Face API, part of Azure Cognitive Services.
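
For a sense of how accessible the feature is, here is a minimal sketch of calling it, assuming a hypothetical Azure resource endpoint and key (the placeholder values below are not real). The detect route and the emotion attribute follow the Face API v1.0 REST interface.

```python
import requests

# Placeholder endpoint and key; substitute your own Azure Face resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-face-api-key>"

def detect_emotions(image_url):
    """Ask the Face API v1.0 detect endpoint for emotion attributes."""
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={"returnFaceAttributes": "emotion"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
    )
    resp.raise_for_status()
    # Each detected face carries confidence scores for eight emotion
    # labels (anger, contempt, disgust, fear, happiness, neutral,
    # sadness, surprise).
    return [face["faceAttributes"]["emotion"] for face in resp.json()]

for scores in detect_emotions("https://example.com/portrait.jpg"):
    print(max(scores, key=scores.get), scores)
```

A handful of lines returns a confidence score per emotion label – exactly the kind of output Crawford argues rests on a deeply flawed premise.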

So it’s worrying that AI development, especially in emotion recognition, continues to grapple with concerns about bias, performance, and perhaps even legitimacy. But Microsoft, at least, is allowing an employee to sound the alarm bells.

It’s progress, isn’t it?

Article topics

accuracy | affective biometrics | AI | biometrics | biometric research | emotion detection | emotion recognition | Microsoft | standards

