
Biden’s Executive Order on AI Safeguards to Ensure USA “Leads the Way”

The White House has unveiled a broad set of rules and principles meant to ensure America “leads the way” in regulating artificial intelligence (AI).

President Joe Biden signed an executive order on Monday, October 30, 2023, outlining a broad set of principles and rules for regulating artificial intelligence (AI) in the United States. The order is intended to ensure that the US “leads the way” in AI regulation, amid fierce international competition.

Among other things, the executive order requires AI developers to submit the results of their security tests to the federal government, if their projects pose a “serious risk” to national security, economic security, or public health. This requirement is based on the Defense Production Act, a Cold War-era law that gives the federal government the power to constrain companies when national security is at stake.

The safety test criteria will be set at the federal level and made public.

The order also addresses consumer protection. The Commerce Department is to issue guidance on labeling and watermarking AI-generated content, helping users distinguish authentic interactions from AI-generated material. This move builds on voluntary commitments already made by technology companies.

The Biden administration is particularly concerned about the risks that AI development may pose in the fields of biotechnology and infrastructure. The executive order also promises recommendations on discrimination, given that AI systems can reflect and amplify the biases present in the data they are trained on.

It also directs the federal government to monitor the impact of AI on employment, as AI is expected to automate many jobs in the coming years.

Despite the ambition of the executive order, President Biden’s room for maneuver is limited. Any truly binding and ambitious legislation on artificial intelligence would need to pass through the US Congress. However, with Congress currently divided between Democrats and Republicans, the adoption of a large-scale law seems unlikely.

President Biden has nevertheless called on lawmakers to legislate in order to “protect the privacy” of Americans. This comes at a time when artificial intelligence not only makes it easier to extract, identify, and exploit personal data but also encourages doing so, as companies use this data to train algorithms.

The US has already seen initial regulations emerge, and further, more general AI compliance obligations are expected to follow.

White House Deputy Chief of Staff Bruce Reed hailed the measures as “the strongest set of actions any government in the world has ever taken on AI safety, security and trust.”

What’s missing?

The executive order does not address a number of important issues related to AI regulation, including:

  • Liability: Who is liable if an AI system causes harm?
  • Intellectual property: Who owns the intellectual property rights to AI systems and the data they are trained on?
  • Transparency: How can AI systems be made more transparent and accountable?
  • International cooperation: How can the US cooperate with other countries to develop and implement international AI standards?

One area where the executive order could be stronger is in its enforcement provisions. The order does not specify how the federal government will enforce its requirements on AI developers. It is also unclear how the federal government will assess the risks posed by different AI projects.

Who will benefit from Biden’s AI safeguards executive order?

President Biden’s executive order on AI safeguards is a welcome step, but it is important to ask who will benefit the most from these new rules.

One group that is likely to benefit is large technology companies. These companies have the resources to comply with the executive order’s requirements, and they are likely to be able to use their expertise to develop new AI products and services that meet the government’s safety and security standards.

Another group that could benefit from the executive order is the US government itself. The government is increasingly using AI for a variety of purposes, including national security, law enforcement, and economic development. The executive order’s requirements could help the government to ensure that its own AI systems are safe and secure.

However, small businesses and startups may find it difficult to comply with the order’s requirements, which could give larger companies an unfair advantage. The order’s requirements could also stifle innovation in the AI field.

It is also important to consider the potential impact of the executive order on international competition.

The United States is not the only country that is developing AI safeguards.

Other jurisdictions, such as China and the European Union, are also developing AI regulations. If the US executive order is too burdensome, it could make it difficult for US companies to compete internationally.

Several countries have issued regulations on AI in the last three years, each with a unique approach:

  1. Brazil: In early October 2021, Brazilian lawmakers passed a bill outlining a legal framework for AI. The bill emphasizes transparency in the public sector and requires that people be informed when they are interacting with an AI agent.
  2. European Union (EU): In April 2021, the European Commission proposed new rules aiming to turn Europe into a global hub for trustworthy AI. The regulations become directly applicable across the EU once adopted by the European Parliament and the Member States. The proposed Artificial Intelligence Act classifies AI systems by risk and mandates corresponding development and use requirements. European lawmakers agreed to more stringent amendments in June 2023.
  3. United Kingdom (UK): The UK has not introduced blanket AI-specific regulation so far, preferring a sector-led approach. The UK government set out this pro-innovation approach in its white paper on governing and regulating AI, published in March 2023.
  4. India: At the B20 Summit India 2023, Prime Minister Narendra Modi highlighted India’s role in shaping the global supply chain and turned the spotlight on two technology frontiers: artificial intelligence (AI) and cryptocurrency. He called for a global framework for expanding “ethical” AI, signaling India’s shift from a passive stance on AI regulation to an active role in shaping rules based on a “risk-based, user-harm” approach.
  5. Latin American Countries: The Latin American Artificial Intelligence Index (ILIA) examines the regulatory situation regarding AI technology in 12 Latin American countries.
  6. Other Countries: According to Stanford University’s 2023 AI Index, 37 bills related to AI were passed into law around the world in 2022. The AI Index has also broadened its tracking of global AI legislation from 25 countries in 2022 to 127 in 2023.

For more detailed information on each country’s AI policies, you can refer to the OECD’s live repository of over 800 AI policy initiatives from 69 countries, territories, and the EU.



Author

editorialteam