Timnit Gebru helped expose how artificial intelligence replicates prejudice. She’s not waiting for Big Tech to fix it


Gebru is quick to detect bias in others, but there is no sign in the long article below that she detects any bias in herself. Yet as a woman with some African ancestry she can be expected to show bias in favour of her own group. And that can mean bias against other groups, whites in particular. And, as she is a woman, bias against men can also be suspected.

But those are not mere suspicions. She repeatedly reveals an animus against white men. They are the demons in her theogony, the constant offenders against what is right: her enemies. She could not be clearer about her attitude to white men.

And that leads to a serious blindness, albeit a common one. She appears to have no idea of how biases such as hers work. She may be aware that her biases are conventional Leftism but, if so, she offers no critique of them, and she shows no awareness of how intergroup beliefs in general, "stereotypes", arise. Allport's old insight that there is a "kernel of truth" in stereotypes seems unknown to her. Or, if it is known, she allows it no influence on her thinking or advocacy.

She can perhaps be forgiven for that great gap in her thinking, that lack of insight. There is no hint that she has ever studied psychology. But, as a much-published psychologist, I am well aware of the studies of belief formation. And I can put my conclusions from that research quite starkly: Racial prejudice to a major degree reflects racial reality. To be even more explicit, negative beliefs about blacks mostly arise from bad behavior by many blacks. What she detects as bias is in fact largely realism.

So her finding that discourse about minorities presents them negatively is no fault of the data she has gathered. It is a feature, not a bug. Her whole enterprise is therefore misconceived. She is tilting at windmills. The windmills are there, but they are there for a good reason.


Google hired Gebru in 2018 to help ensure that its AI products did not perpetuate racism or other societal inequalities. In her role, Gebru hired prominent researchers of color, published several papers that highlighted biases and ethical risks, and spoke at conferences. She also began raising her voice internally about her experiences of racism and sexism at work. But it was one of her research papers that led to her departure. “I had so many issues at Google,” Gebru tells TIME over a Zoom call. “But the censorship of my paper was the worst instance.”

In that fateful paper, Gebru and her co-authors questioned the ethics of large language AI models, which seek to understand and reproduce human language. Google is a world leader in AI research, an industry forecast to contribute $15.7 trillion to the global economy by 2030, according to accounting firm PwC. But Gebru’s paper suggested that, in their rush to build bigger, more powerful language models, companies including Google weren’t stopping to think about the kinds of biases being built into them—biases that could entrench existing inequalities, rather than help solve them. It also raised concerns about the environmental impact of the AIs, which use huge amounts of energy. In the battle for AI dominance, Big Tech companies were seemingly prioritizing profits over safety, the authors suggested, calling for the industry to slow down. “It was like, You built this thing, but mine is even bigger,” Gebru recalls of the atmosphere at the time. “When you have that attitude, you’re obviously not thinking about ethics.”

Gebru’s departure from Google set off a firestorm in the AI world. The company appeared to have forced out one of the world’s most respected ethical AI researchers after she criticized some of its most lucrative work. The backlash was fierce.

The dispute didn’t just raise concerns about whether corporate behemoths like Google’s parent Alphabet could be trusted to ensure this technology benefited humanity and not just their bottom lines. It also brought attention to important questions: If artificial intelligence is trained on data from the real world, who loses out when that data reflects systemic injustices? Were the companies at the forefront of AI really listening to the people they had hired to mitigate those harms? And, in the quest for AI dominance, who gets to decide what kind of collateral damage is acceptable?

For the past decade, AI has been quietly seeping into daily life, from facial recognition to digital assistants like Siri or Alexa. These largely unregulated uses of AI are highly lucrative for those who control them, but are already causing real-world harms to those who are subjected to them: false arrests; health care discrimination; and a rise in pervasive surveillance that, in the case of policing, can disproportionately affect Black people and disadvantaged socioeconomic groups.

https://time.com/6132399/timnit-gebru-ai-google/
