I can tell Buolamwini finds the cover amusing. She takes a picture of it. Times have changed a lot since 1961. In her new memoir, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Buolamwini shares her life story. In many ways, her story embodies how far tech has come since then, and how much further it still needs to go.

Buolamwini is best known for “Gender Shades,” a pioneering 2018 paper she co-wrote with AI researcher Timnit Gebru, which exposed how commercial facial recognition systems often failed to recognize the faces of Black and brown people, especially Black women. Her research and advocacy led companies such as Google, IBM, and Microsoft to make their software less biased and to back away from selling their technology to law enforcement.

Now, Buolamwini has a new target in sight. She is calling for a radical rethink of how AI systems are built. Buolamwini tells MIT Technology Review that, amid the current AI hype cycle, she sees a very real risk of letting technology companies pen the rules that apply to them—repeating the very mistake, she argues, that has previously allowed biased and oppressive technology to thrive.

“What concerns me is we’re giving so many companies a free pass, or we’re applauding the innovation while turning our head [away from the harms],” Buolamwini says. 

A particular concern, says Buolamwini, is the basis upon which we are building today’s sparkliest AI toys, so-called foundation models. Technologists envision these multifunctional models serving as a springboard for many other AI applications, from chatbots to automated movie-making. They are built by scraping masses of data from the internet, inevitably including copyrighted content and personal information. Many AI companies are now being sued by artists, music companies, and writers, who claim their intellectual property was taken without consent.

The modus operandi of today’s AI companies is unethical—a form of “data colonialism,” Buolamwini says, with a “full disregard for consent.”

“What’s out there for the taking, if there aren’t laws—it’s just pillaged,” she says. As an author, Buolamwini says, she fully expects her book, her poems, her voice, and her op-eds—even her PhD dissertation—to be scraped into AI models. 

“Should I find that any of my work has been used in these systems, I will definitely speak up. That’s what we do,” she says.  
