On Monday, Biden Will Sign an Executive Order Regulating AI

On Monday, the Biden administration is set to publish an executive order that marks its first major attempt at regulating artificial intelligence (AI).

AI is advancing so quickly that some experts think we are already behind in our attempts to control this technology, which promises much but may also have the potential to destroy us.

If nothing else, regulating AI will make the government significantly bigger.

According to Politico, the executive order “will pave the way for more AI to be used in almost every aspect of federal life, from education to health care, housing to trade.”

The draft order, dated Oct. 23, however, also calls for extensive new checks on the technology. It directs agencies to set standards that ensure data privacy and cybersecurity, prevent discrimination, and enforce fairness. Multiple people who were consulted on or have seen drafts of the document verified its contents.

The White House didn’t respond to a request for confirmation of the draft.

The order is not a law, and previous White House AI initiatives have been criticized for lacking enforcement teeth. The new guidelines, however, are expected to give federal agencies more influence over the US market through their purchasing power and enforcement tools. For example, the order directs the Federal Trade Commission to focus on anticompetitive behavior and consumer harms in the AI industry, a mission Chair Lina Khan has publicly embraced.

Biden pledged to “lead the way” in “responsible AI innovations.” But what exactly does “responsible AI innovations” mean?

Fox News Digital quoted Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation: “We should applaud this first step, but we need a framework to guide the next steps that will truly protect our freedoms.”

Siegel said that doing so would require the government to put its weight behind what he called four pillars of regulation, which would address concerns about AI safety. The first pillar is protecting vulnerable groups, such as children, from “scams” and “other harms.” The second would introduce new rules into the criminal justice code to prevent AI from being used to cover for criminals. The third would ensure fairness by preventing existing biases from becoming rooted in AI data and models. The fourth would focus on the “trust” and “safety” of AI systems, which “includes an agreement on how they are used and not utilized.”

It’s not exactly Isaac Asimov’s “Three Laws of Robotics,” but it’s a great start.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Regulating AI is a complex project with many moving pieces. The risks are immense, and the range of industries that could be affected is unprecedented. Mistakes are inevitable; some areas will be overregulated and others underregulated. The stakes of this regulatory process are higher than those of any undertaking that has come before.

We should do our best to keep the AI regulatory process out of the partisan food fights that typically rage in Washington, and instead offer constructive criticism on how best to protect ourselves and our country.