Booker, experts highlight civil rights concerns in artificial intelligence

Sen. Cory Booker (D-N.J.) gathered a host of experts and advocates Thursday to discuss the implications of artificial intelligence (AI) for civil rights.

“I love technology and innovation,” Booker said Thursday at a panel discussion at the Capitol. “But I also know that technology can elevate the power of discrimination.”

The panel discussed how mortgage lending algorithms have been more likely to deny home loans to people of color than to white people; how AI-enabled recruiting and hiring tools have been known to discriminate against women and minority candidates; and how automatic systems used in hospitals have repeatedly understated the needs of Black patients, exacerbating health care disparities.

Panelists Thursday also pointed to instances in which AI has altered people’s appearance, including lightening some skin tones, and to products like ChatGPT that have spewed stereotypes or even slurs.

“Equal opportunity and civil rights and racial justice are inextricably linked and impacted by today’s and tomorrow’s technology,” said Damon Hewitt, president and executive director of the Lawyers’ Committee for Civil Rights Under Law.

“Algorithms are used to make decisions about all aspects of our lives: Determining who gets bailed, who can rent a house and where we can go to school. Although these systems are so widely used, we know that they pose a high risk of discrimination, disproportionately harming the communities that we focus on at the Lawyers’ Committee and other civil rights organizations. Because algorithmic technologies are built using data that reflects generations of redlining, segregation and such, they often build on bad data, discriminatory data that is going to be likely to harm people.”

But the panel also discussed how AI can be used to counter these issues. 

Suresh Venkatasubramanian, director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University, highlighted that it is humans who choose how to build AI systems, and therefore humans who can choose how to eliminate racial biases in their construction.

“AI systems are very good at doing what we tell them to do,” Venkatasubramanian said. “This means that once you’ve trained a system to do what we’ve told it to do, we have to test to make sure it’s doing what we wanted. The task of testing gets harder and harder as the system is getting more and more complicated or deep.

“We have to make sure the system’s not doing anything harmful or discriminatory. We have to examine the data used to train the system. We have to examine the design choices used when building the system. And we have to make sure the system is deployed under the same conditions it was tested on. They’re intelligent and conscious. They can learn things about the world that we cannot even comprehend.”

Others, like Fabian Rogers, a community advocate in New York, called for legislative protections over things like facial recognition software, which Rogers said can be “deadly.”

The American Civil Liberties Union, which has also spoken out about such technology, previously highlighted that facial recognition could give anyone the power to track faces at protests, political rallies and places of worship.

Advocates Thursday called for the passage of an AI Bill of Rights. The White House has already released a blueprint identifying five principles that should guide the design, use and deployment of AI systems to protect Americans.

The principles call for systems to be developed with consultation from diverse communities; for AI systems to be designed in an equitable way so as to limit algorithmic bias; for protections of data privacy; for ensuring users are properly informed when an AI system is being used and how it affects them; and for the right to opt out of AI systems in favor of accessing a live person.

Congress is now working to pass legislation on the matter.

Earlier this month, Booker, Sen. Ron Wyden (D-Ore.) and Rep. Yvette Clarke (D-N.Y.) introduced the Algorithmic Accountability Act of 2023. The bill would create new protections for people affected by AI systems that are already impacting areas like housing, credit and education.

“If you know there is implicit racial bias in systems, AI could actually be a tool to design a way to find them, to level the playing field, to expand the opportunity,” Booker said Thursday.
