Opinion editor's note: Editorials represent the opinions of the Star Tribune Editorial Board, which operates independently from the newsroom.
A breakthrough on AI safeguards
Tech giants agree with Biden on need for guardrails. Legislation should come next.
•••
Some of the largest and most powerful artificial intelligence companies in the country — Google, Meta (Facebook), Amazon, Microsoft and others — have, at the behest of the White House, agreed to abide by voluntary safety and security standards, a move needed to protect the public.
In a recent White House meeting with President Joe Biden, those companies — along with Anthropic, Inflection and, notably, OpenAI, the creator of ChatGPT — committed to protective guardrails.
Remarkably, the agreement includes a pledge to allow independent security experts to test the companies' systems before public release and to share safety data with government officials and academics.
The companies also have committed to developing tools, known as "watermarking," that will alert the public whenever an image, video or text has been created by artificial intelligence. That is another badly needed move in the face of a growing inability to distinguish human-generated text and images from those produced by AI.
Nick Clegg of Meta, the parent company of Facebook, said in a statement that the safeguards "are an important first step in ensuring responsible guardrails are established for AI, and they create a model for other governments to follow."
In announcing the agreement, Biden rightly noted that emerging AI technologies can pose a threat "to our democracy and our values." Taking the proper precautions, he said, could avoid that scenario.
Biden deserves praise for pulling together yet another compromise under difficult circumstances and getting ahead of potential problems.
Previous congressional attempts to impose regulation on the industry have fallen victim to powerful lobbying efforts as well as infighting among legislators and competing priorities. The agreement, though voluntary and without the force of law, is a testament to Biden's dogged belief in the value of bringing different sides together in search of common ground. Some of his biggest victories — on infrastructure, gun reforms and other issues — have come about in just that way.
But Biden is not content with just voluntary guidelines. He has said he will work with all sides, executives and lawmakers alike, to develop reasonable and appropriate AI legislation that can guide emerging technology while protecting consumers. "This is a serious responsibility," he said, as the top executives from those companies stood by. "We have to get it right." Democrats and Republicans alike have expressed concerns about tech and social media giants for years, and should be able to find a bipartisan way to safeguard the public from potential excesses.
Development of AI, in particular, has been moving rapidly and has only accelerated since OpenAI launched ChatGPT, the conversational "chatbot" that has been called the fastest-growing app of all time. Considered an industry game-changer, ChatGPT uses natural language to create humanlike, conversational dialogue. It can respond to questions and even compose written content. It grows more adept and sophisticated — smarter, if you will — with every version and is already being used to write articles, social media posts, essays and emails.
One analysis estimated that ChatGPT had 100 million active users within two months of its release. By comparison, TikTok took nine months after its launch to hit that number. Other tech companies have been scrambling to come up with their own versions.
All of that has resulted in mounting pressure to find a way to rein in AI tech giants. In June, a bipartisan group of lawmakers introduced legislation, the National AI Commission Act, that would create a 20-member commission on artificial intelligence, with the goal of developing a framework for regulating fast-emerging AI technology, including an examination of federal agencies' capacity to address regulation and enforcement.
Voluntary commitments are a fine start. But it is reassuring to know that the White House and lawmakers intend to develop enforceable AI legislation that can hold the industry accountable.
Biden Chief of Staff Jeff Zients, a former Facebook board member, recently noted in an interview with Axios that the two danger zones the White House is particularly concerned about involve national security, including the potential for biosecurity and cyberattacks, and consumer scams that invade privacy. "One of the lessons we've learned," he said, "is that we've got to move fast — we cannot chase this technology."