New rules on artificial intelligence in the US government

President Biden is expected to announce new rules requiring government agencies to more thoroughly evaluate artificial intelligence tools to ensure they are secure and do not expose sensitive information.

The government is also expected to relax immigration policies for tech workers.

[Image: President Biden is expected to announce new rules]

After previous efforts to rein in generative artificial intelligence (genAI) were criticized as too vague and ineffective, the Biden administration is expected to announce new, more restrictive rules for the use of this technology by federal employees.

Vice President Kamala Harris also met with CEOs of Google, Microsoft and OpenAI - the creator of the popular chatbot ChatGPT - to discuss potential problems with genAI, which include security, privacy and control issues.

[Image: The CEOs of Google, Microsoft and OpenAI]

The new executive order is expected to raise the bar for national cybersecurity defenses by requiring that large language models (LLMs) - the foundation of generative AI - undergo evaluations before they can be used by US government agencies.

These agencies include the US Department of Defense, Department of Energy and intelligence agencies, according to The Washington Post.

The executive order, which is expected to be unveiled Monday, would also change immigration standards to allow for a larger influx of technology workers to help accelerate US development efforts.

Generative AI, which has been advancing at breakneck speed and setting off alarm bells among industry experts, prompted Biden to issue “guidance” last May.

The new rules would strengthen what have so far been voluntary commitments by 15 AI development companies to ensure that genAI systems are evaluated in a way consistent with responsible use.

[Image: genAI systems are evaluated in a way that is consistent with responsible use]

Among the risks drawing scrutiny is genAI’s tendency to “hallucinate,” or confidently produce false information.

“Hallucinations happen because LLMs, in their most vanilla form, don’t have a representation of the internal state of the world,” said Jonathan Siddharth, CEO of Turing, a California company that uses AI to find and hire remote software engineers.

At the most basic level, the tools can gather and analyze massive amounts of data from the internet as well as corporate and even government sources to provide more accurate and insightful content to users.

The downside is that the information collected by AI is not necessarily stored securely.

[Image: Information collected by AI is not necessarily stored securely]

AI apps and networks can make this sensitive information vulnerable to data exploitation by third parties.

While such data collection is meant to help the technology understand user habits and serve people more effectively, it also sweeps personal information into the large data sets used to train AI models.
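To make that exposure risk concrete, the following is a minimal, purely illustrative Python sketch (the patterns, names and placeholder text are hypothetical, not drawn from any agency guidance or vendor API) of scrubbing obvious identifiers from text before it is sent to a third-party genAI service:

```python
import re

# Illustrative patterns only; production pipelines rely on far more
# robust detection (e.g., named-entity recognition) than simple regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching an identifier pattern with a labeled
    placeholder before the text leaves the organization."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Email jane.doe@agency.gov or call 202-555-0143 about the case."
print(redact(prompt))
# -> Email [EMAIL REDACTED] or call [PHONE REDACTED] about the case.
```

Filtering like this reduces, but does not eliminate, the risk: once data reaches an external model provider, how it is stored and reused is outside the sender’s control.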

For companies developing AI, the executive order could require an overhaul of their practices, according to Adnan Masood, chief AI architect at digital transformation services company UST.

“However, aligning with national standards could also streamline federal procurement processes for their products and encourage trust among private consumers,” Masood said.

[Image: New AI rules: Challenges and Opportunities]

“If we tip the balance too far towards restrictive oversight, especially of research, development and open source initiatives, we risk stifling innovation and ceding ground to more permissive jurisdictions globally,” Masood continued.

Masood called the upcoming White House regulations “long overdue” and “a good step at a critical time in the US government’s approach to harnessing and limiting AI technology.”

“I have reservations about expanding the scope of regulation in the area of research and development,” Masood said. “The nature of AI research requires a level of openness and collective scrutiny that can be stifled by excessive regulation.

“In particular, I am opposed to any constraints that would impede open-source AI initiatives, which have been a driving force behind most innovations in this field.”

Italy temporarily banned ChatGPT over privacy concerns after the chatbot suffered a data breach involving user conversations and payment information.

[Image: Italy temporarily banned ChatGPT]

States and municipalities are considering their own restrictions on the use of AI-based bots to find, screen, interview and hire job candidates due to privacy and bias concerns.

Some states have already passed laws to this effect.

The White House is also expected to ask the National Institute of Standards and Technology to tighten industry guidelines for testing and evaluating AI systems, provisions that would build on the voluntary commitments on safety, security and trust that the Biden administration obtained this year from 15 major AI technology companies.

Biden’s move is particularly critical at a time when genAI is booming, bringing unprecedented capabilities in content creation and deepfakes, and potentially enabling new forms of cyber threats, Masood said.

[Image: Biden’s move is particularly critical]

“This landscape makes it obvious that the government’s role is not only as a regulator, but also as a facilitator and consumer of AI technology,” he added.

Masood said he is a strong proponent of a nuanced approach to AI regulation, as oversight of AI product deployments is essential to ensure they meet safety and ethical standards.
