Neeley, Tsedal; Ruper, Stefani
Dr. Timnit Gebru, a leading artificial intelligence (AI) computer scientist and co-lead of Google's Ethical AI team, was messaging with one of her colleagues when she saw the words: "Did you resign?? Megan sent an email saying that she accepted your resignation." Heart rate spiking, Gebru was shocked to find that her company account had been cut off. Scrolling through her personal inbox, she found an email stating that the company could not agree to the conditions she had stipulated about a research paper critiquing large language models, and expressing disapproval of a message she had sent to an internal listserv about halting diversity, equity, and inclusion (DEI) efforts without accountability. Therefore, Google was accepting Gebru's "resignation," effective immediately. Gebru, who had not submitted a formal resignation, realized she had been fired.

Gebru had been concerned that large language models were racing ahead with little appraisal of their potential risks and debiasing strategies. Her ousting sent shockwaves through the AI and tech community. Thousands of people signed a petition against what they characterized as unprecedented research censorship, and nine members of Congress wrote to the company's CEO, Sundar Pichai, questioning his commitment to ethical AI.

The outspoken Gebru's experience raises fundamental questions about countering AI bias. Could tech companies lead the way with in-house AI ethics research? Should that type of work reside with more objective actors outside of companies? On the other hand, shouldn't those who best understand the technology at play be the ones to investigate the bias and ethical challenges that might creep in? The answers to these questions remain central to the exponentially growing AI domain that companies must consider.