Biden Signs Executive Order on Artificial Intelligence, Drawing Mixed Reviews

President Biden has signed what he called a “landmark” executive order (EO) on artificial intelligence, drawing mixed reviews from experts in the rapidly developing field.

The executive order focuses on several key areas, including the provision of “testing data” for review by the federal government. This provision could allow the government to examine the “black box” algorithms that can produce biased results. Experts like Christopher Alexander, chief analytics officer of Pioneer Development Group, believe this could provide useful oversight and commercial protections, as core algorithms are typically proprietary.

However, Alexander also emphasizes that effectively mitigating the threats posed by AI will require a bipartisan, technocratic effort free from political ideology. Biden’s executive order contains new regulations for AI, which he claims are the “most sweeping actions ever taken to protect Americans from the potential risks of AI systems.”

The executive order requires AI developers to share safety test results with the government, establish standards to monitor and ensure the safety of AI, and implement guardrails to protect Americans’ privacy as AI technology continues to advance.

While some experts, like Jon Schweppe, policy director of American Principles Project, agree that concerns about AI warrant government oversight, they argue that the order focuses on the wrong priorities. Schweppe believes that direct government oversight should primarily extend to areas such as scientific research and homeland security, without bureaucratic interference in all facets of AI development.

Schweppe suggests that private oversight should also play a role, and AI developers should be held liable for the actions of their AI systems. He advocates for Congress to create a private right of action for citizens to seek legal recourse when harmed by AI. According to Schweppe, this fear of liability would incentivize AI companies to self-correct and prioritize consumer protection.

The executive order builds on voluntary commitments made by major technology companies to share data about AI safety with the government. Ziven Havens, policy director of the Bull Moose Project, views Biden’s order as a decent first attempt at AI policy, particularly in addressing crucial topics such as watermarks, workforce impact, and national security.

However, Havens raises concerns about the timeline for developing the necessary guidance and warns against falling behind in the global AI race due to bureaucratic inefficiencies. Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation, commends the thoroughness of the executive order but questions whether it attempts to take on too much.

Siegel identifies four pillars for AI regulation: protecting vulnerable populations, developing comprehensive laws, ensuring algorithm fairness, and establishing trust and safety. He believes that the executive order excels in the last two pillars but falls short in addressing the first two. Siegel emphasizes the need for collaboration between Congress and the White House to transform some of the executive order’s provisions into law.

The White House has not yet responded to requests for comment on the executive order.
