article_text | topic
---|---
In early 2011, Ken Jennings looked like humanity’s last hope. Watson, an artificial intelligence created by the tech giant IBM, had picked off lesser Jeopardy players before the show’s all-time champ entered a three-day exhibition match. At the end of the first game, Watson—a machine the size of 10 refrigerators—had Jennings on the ropes, leading $35,734 to $4,800. On day three, Watson finished the job. “I for one welcome our new computer overlords,” Jennings wrote on his video screen during Final Jeopardy.
Watson was better than any previous AI at addressing a problem that had long stumped researchers: How do you get a computer to precisely understand a clue posed in idiomatic English and then spit out the correct answer (or, as in Jeopardy, the right question)? “Not a hit list of documents where the answer may be,” which is what search engines returned, “but the very specific answer,” David Ferrucci, Watson’s lead developer, told me. His team fed Watson more than 200 million pages of documents—from dictionaries, encyclopedias, novels, plays, the Bible—creating something that sure seemed like a synthetic brain. And America lost its mind over it: “Could Watson be coming next for our jobs in radiology or the law?” NPR asked in a story called “The Dark Side of Watson.” Four months after its Jeopardy win, the computer was named Person of the Year at the Webby Awards. (Watson’s acceptance speech: “Person of the Year: ironic.”)
But now that people are once again facing questions about seemingly omnipotent AI, Watson is conspicuously absent. When I asked the longtime tech analyst Benedict Evans about Watson, he quoted Obi-Wan Kenobi: “That’s a name I’ve not heard in a long time.” ChatGPT and other new generative-AI tools can furnish pastiche poetry and popes wearing Balenciaga, capabilities that far exceed what Watson could do a decade ago, though ones still based in the ideas of natural-language processing that helped dethrone Jennings. Watson should be bragging in its stilted voice, not fading into irrelevance. But its trajectory is happening all over again; part of what doomed the technology is now poised to chip away at the potential of popular AI products today.
The first thing to know about Watson is that it isn’t dead. The machine’s models and algorithms have been nipped and tucked into a body of B2B software. Today IBM sells Watson by subscription, folding the code into applications like Watson Assistant, Watson Orchestrate, and Watson Discovery, which help automate back-end processes within customer service, human resources, and document entry and analysis. Companies like Honda, Siemens, and CVS Health hit up “Big Blue” for AI assistance on a number of automation projects, and an IBM spokesperson told me that the company’s Watson tools are used by more than 100 million people. If you ask IBM to build you an app that uses machine learning to optimize something in your business, “they’ll be very happy to build that, and it will probably be perfectly good,” Evans said.
From the very beginning, IBM wanted to turn Watson into a business tool. After all, this is IBM—the International Business Machines Corporation—a company that long ago carved out a niche catering to big firms that need IT help. But what Watson has become is much more modest than IBM’s initial sales pitch, which included unleashing the machine’s fact-finding prowess on topics as varied as stock tips and personalized cancer treatments. And to remind everyone just how revolutionary Watson was, IBM put out TV commercials in which Watson cheerfully bantered with celebrities like Ridley Scott and Serena Williams. The company soon struck AI-centric deals with hospitals such as Memorial Sloan Kettering and the MD Anderson Cancer Center; they slowly foundered. Watson the machine could play Jeopardy at a very high level; Watson the digital assistant, essentially a swole Clippy fed on enterprise data and techno-optimism, could barely read doctors’ handwriting, let alone disrupt oncology.
The tech just didn’t measure up. “There was no intelligence there,” Evans said. Watson’s machine-learning models were very advanced for 2011, but not compared with bots like ChatGPT, which have ingested much of what has been published online. Watson was trained on far less information and excelled only at answering fact-based questions like the kind you find on Jeopardy. That talent contained obvious commercial potential—at least in certain areas, like search. “I think that what Watson was good at at the time kind of morphed into what you see Google doing,” Ferrucci said: surfacing precise answers to colloquial questions.
But the suits in charge went after the bigger and more technically challenging game of feeding the machine entirely different types of material. They viewed Watson as a generational meal ticket. “There was a lot of hyperbole around it, and a lot of lack of appreciation for what it really can do and what it can’t do, and ultimately what is needed to effectively solve business problems,” Ferrucci said. He left IBM in 2012 and later founded an AI start-up called Elemental Cognition.
When asked about what went wrong, an IBM spokesperson pointed me to a recent statement from CEO Arvind Krishna: “I think the mistake we made in 2011 is that we concluded something correctly, but drew the wrong conclusions from the conclusions.” Watson was “a concept car,” Kareem Yusuf, the head of product management for IBM’s software portfolio, told me—a proof of technology meant to prod further innovation.
And yet to others, IBM may have seemed more concerned with building a showroom for its flashy convertible than figuring out how to design next year’s model. Part of IBM’s problem was structural. Richer, nimbler companies like Google, Facebook, and even Uber were driving the most relevant AI research, developing their own algorithms and threading them through everyday software. “If you were a cutting-edge machine-learning academic,” Evans said, “and Google comes to you and Meta comes to you and IBM comes to you, why would you go to IBM? It’s a company from the ’70s.” By the mid-2010s, he told me, Google and Facebook were leading the pack on machine-learning research and development, making big bets on AI start-ups such as DeepMind. Meanwhile, IBM was producing a 90-second Academy Awards spot starring Watson, Carrie Fisher, and the voice of Steve Buscemi.
In a sense, IBM’s vision for a suite of business tools built around machine learning and natural-language processing has come true—just not thanks to IBM. Today, AI powers your search results, assembles your news feed, and alerts your bank to possible fraud activity. It hums in the background of “everything you deal with every day,” Rosanne Liu, a senior research scientist at Google and the co-founder of ML Collective, a research nonprofit, told me. This AI moment is creating even more of a corporate clamor for automation as every company wants a bot of its own.
Although Watson has been reduced to a historical footnote, IBM is still getting in on the action. The most advanced AI work is not happening in IBM’s Westchester, New York, headquarters, but much of it is open-source and has a short shelf life. Tailoring Silicon Valley’s hand-me-downs can be a profitable business. Yusuf invoked platoons of knowledge workers armed with the tools of the 20th century. “You’ve got people with PDFs, highlighters,” he said. IBM can offer them programs that help them do better—that bump their productivity a few points, or decrease their error rates, or spot problems faster, such as faults on a manufacturing line or cracks in a bridge.
Whatever IBM makes next won’t fulfill the promise implied by Watson’s early run, but that promise was misunderstood—in many ways by IBM most of all. Watson was a demo model capable of drumming up enormous popular interest, but its potential sputtered as soon as the C-suite attempted to turn on the money spigot. The same thing seems to be true of the new crop of AI tools. High schoolers can generate A Separate Peace essays in the voice of Mitch Hedberg, sure, but that’s not where the money is. Instead, ChatGPT is quickly being sanded down into a million product-market fits. The banal consumer and enterprise software that results—features to help you find photos of your dog or sell you a slightly better kibble—could become as invisible to us as all the other data we passively consume. In March, Salesforce introduced Einstein GPT, a product that uses OpenAI’s technology to draft sales emails, part of a trend that Evans recently described as the “boring automation of boring processes in the boring back-offices of boring companies.” Watson’s legacy—a big name attached to a humble purpose—is playing out yet again.
The future of AI may still prove to be truly world-changing in the way that Watson once suggested. But the only business that IBM has managed to disrupt is its own. On Monday, International Workers’ Day, it announced that it would pause hiring for roughly 7,800 jobs that it believes AI could perform in the coming years. Vacating thousands of roles in the name of cost-saving measures has rarely sounded so upbeat, but after years of positive spin, why back down now? Yusuf swore that IBM’s future is just around the corner, and this time would be different. “Watch this space,” he said. | AI Research |
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
In the last week there has been a lot of talk about whether journalists or copywriters could or should be replaced by AI. Personally, I’m not worried. Here’s why.
So far, newsrooms have pursued two very different approaches to integrating the buzziest new AI tool, ChatGPT, into their work. Tech news site CNET secretly started using ChatGPT to write entire articles, only for the experiment to go up in flames. It ultimately had to issue corrections amid accusations of plagiarism. BuzzFeed, on the other hand, has taken a more careful, measured approach. Its leaders want to use ChatGPT to generate quiz answers, guided by journalists who create the topics and questions.
You can boil these stories down to a fundamental question many industries now face: How much control should we give to an AI system? CNET gave too much and ended up in an embarrassing mess, whereas BuzzFeed’s more cautious (and transparent) approach of using ChatGPT as a productivity tool has been generally well received, and led its stock price to surge.
But here’s the dirty secret of journalism: a surprisingly large amount of it could be automated, says Charlie Beckett, a professor at the London School of Economics who runs a program called JournalismAI. Journalists routinely reuse text from news agencies and steal ideas for stories and sources from competitors. It makes perfect sense for newsrooms to explore how new technologies could help them make these processes more efficient.
“The idea that journalism is this blossoming flower bed of originality and creativity is absolute rubbish,” Beckett says. (Ouch!)
It’s not necessarily a bad thing if we can outsource some of the boring and repetitive parts of journalism to AI. In fact, it could free journalists up to do more creative and important work.
One good example I’ve seen of this is using ChatGPT to repackage newswire text into the “smart brevity” format used by Axios. The chatbot seems to do a good enough job of it, and I can imagine that any journalist in charge of imposing that format will be happy to have time to do something more fun.
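For readers who want to see what that kind of repackaging looks like in practice, here is a minimal sketch in Python. It assumes the openai package (v1+) and an API key in the environment; the model name, prompt wording, and input file name are illustrative choices, not anything Axios or any newsroom actually uses, and the output would need the same editing and fact-checking as any other AI draft.

```python
# Minimal sketch: ask a chat model to repackage newswire copy into a
# "smart brevity"-style summary. Assumes the openai Python package (v1+)
# and an OPENAI_API_KEY environment variable; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def smart_brevity(wire_text: str) -> str:
    """Return a short, bulleted rewrite of a newswire story."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any capable chat model would do
        messages=[
            {"role": "system",
             "content": ("Rewrite the news copy you are given as a one-line "
                         "headline, a one-sentence 'Why it matters', and three "
                         "short bullets. Do not add facts that are not in the "
                         "source text.")},
            {"role": "user", "content": wire_text},
        ],
        temperature=0.2,  # keep the rewrite close to the source
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("wire_story.txt") as f:  # hypothetical input file
        print(smart_brevity(f.read()))
```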
That’s just one example of how newsrooms might successfully use AI. AI can also help journalists summarize long pieces of text, comb through data sets, or come up with ideas for headlines. In the process of writing this newsletter, I’ve used several AI tools myself, such as autocomplete in word processing and automated transcription of audio interviews.
But there are some major concerns with using AI in newsrooms. A major one is privacy, especially around sensitive stories where it’s vital to protect your source’s identity. This is a problem journalists at MIT Technology Review have bumped into with audio transcription services, and sadly the only way around it is to transcribe sensitive interviews manually.
Journalists should also exercise caution around inputting sensitive material into ChatGPT. We have no idea how its creator, OpenAI, handles data fed to the bot, and it is likely our inputs are being plowed right back into training the model, which means they could potentially be regurgitated to people using it in the future. Companies are already wising up to this: a lawyer for Amazon has reportedly warned employees against using ChatGPT on internal company documents.
ChatGPT is also a notorious bullshitter, as CNET found out the hard way. AI language models work by predicting the next word, but they have no knowledge of meaning or context. They spew falsehoods all the time. That means everything they generate has to be carefully double-checked. After a while, it feels less time-consuming to just write that article yourself.
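To make the “predicting the next word” point concrete, here is a toy Python sketch: a made-up table of word-to-word probabilities and a sampling loop. A real language model replaces the table with a neural network conditioned on thousands of tokens of context, but the generation loop has the same shape, and nothing in it checks whether the output is true.

```python
# Toy next-word generator: sample each word from a probability table keyed on
# the previous word. The probabilities below are invented for illustration.
import random

next_word_probs = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"court": 0.5, "study": 0.5},
    "a": {"court": 0.5, "study": 0.5},
    "court": {"ruled": 1.0},
    "study": {"found": 1.0},
    "ruled": {"today": 1.0},
    "found": {"today": 1.0},
    "today": {"<end>": 1.0},
}

def generate(max_len: int = 10) -> str:
    word, out = "<start>", []
    for _ in range(max_len):
        choices = next_word_probs[word]
        # Pick the next word in proportion to its probability -- fluency,
        # not truth, is the only thing being optimized here.
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "the study found today" -- plausible-sounding, unverified
```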
New report: Generative AI in industrial design and engineering
Generative AI—the hottest technology this year—is transforming entire sectors, from journalism and drug design to industrial design and engineering. It’ll be more important than ever for leaders in those industries to stay ahead. We’ve got you covered. A new research report from MIT Technology Review highlights the opportunities—and potential pitfalls—of this new technology for industrial design and engineering.
The report includes two case studies from leading industrial and engineering companies that are already applying generative AI to their work—and a ton of takeaways and best practices from industry leaders. It is available now for $195.
Deeper Learning
People are already using ChatGPT to create workout plans
Some exercise nuts have started using ChatGPT as a proxy personal trainer. My colleague Rhiannon Williams asked the chatbot to come up with a marathon training program for her as part of a piece delving into whether AI might change the way we work out. You can read how it went for her here.
Sweat it out: This story is not only a fun read, but a reminder that we trust AI models at our peril. As Rhiannon points out, the AI has no idea what it is like to actually exercise, and it often offers up routines that are efficient but boring. She concluded that ChatGPT might best be treated as a fun way of spicing up a workout regime that’s started to feel a bit stale, or as a way to find exercises you might not have thought of yourself.
Bits and Bytes
A watermark for chatbots can expose text written by an AI
Hidden patterns buried in AI-generated texts could help us tell whether the words we’re reading were written by a human or by an AI. Among other things, this could help teachers trying to spot students who’ve outsourced writing their essays to AI. (MIT Technology Review) A toy code sketch of how such a watermark can be detected follows at the end of this roundup.
OpenAI is dependent on Microsoft to keep ChatGPT running
The creator of ChatGPT needs billions of dollars’ worth of computing power to keep it running. That’s the problem with these huge models—this kind of computing power is accessible only to companies with the deepest pockets. (Bloomberg)
Meta is embracing AI to help drive advertising engagement
Meta is betting on integrating AI technology deeper into its products to drive advertising revenue and engagement. The company has one of the AI industry’s biggest labs, and news like this makes me wonder what this shift toward money-making AI is going to do to AI development. Is AI research really destined to be just a vehicle to bring in advertising money? (The Wall Street Journal)
How will Google solve its AI conundrum?
Google has cutting-edge AI language models but is reluctant to use them because of the massive reputational risk that comes with integrating the tech into online search. Amid growing pressure from OpenAI and Microsoft, it is faced with a conundrum: Does it release a competing product and risk a backlash over harmful search results, or risk losing out on the latest wave of development? (The Financial Times)
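Following up on the watermark item above, here is a rough, toy Python sketch of the general idea behind one proposed scheme: generation is nudged toward a pseudorandom “green list” of words keyed on the previous word, and a detector checks whether a text lands on that list far more often than chance. The hashing, word-level split, and threshold below are illustrative simplifications; real systems operate on model tokens and use proper statistical tests.

```python
# Toy watermark detector: if a generator preferentially picks "green" words,
# ordinary text should score near 0.5 while watermarked text scores well above it.
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign roughly half of all words to the green list,
    reshuffled for every previous word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / (len(words) - 1)

def looks_watermarked(text: str, threshold: float = 0.7) -> bool:
    # The threshold is illustrative; a real detector would compute a z-score
    # over many tokens rather than use a fixed cutoff.
    return green_fraction(text) > threshold

print(looks_watermarked("the quick brown fox jumps over the lazy dog"))
```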
| AI Research |
Zerodha CEO Nithin Kamath Announces Internal AI Policy To Prevent Job Loss Anxiety
With AI gaining pace, there has been a lot of anxiety among employees about their jobs becoming redundant
Nithin Kamath, the Founder & CEO of Indian broking firm Zerodha, has announced the company's new AI policy on his official Twitter handle. The policy aims to alleviate job loss anxiety among the team and prevent the disruption of society caused by the rapid adoption of AI.
We’ve just created an internal AI policy @zerodhaonline to give clarity to the team, given the AI/job loss anxiety. This is our stance:— Nithin Kamath (@Nithin0dha) May 12, 2023
"We will not fire anyone on the team just because we have implemented a new piece of technology that makes an earlier job redundant." 1/8
Under the new policy, Zerodha will not fire any team members just because a new piece of technology has made their earlier job redundant. This decision was made in light of recent breakthroughs in AI and the recognition that AI has the potential to take away jobs and accelerate inequality.
Dr K, a member of the Zerodha team, has warned that while AI may not wake up and kill us all, the current capitalist and economic systems will rapidly adopt AI, accelerating inequality and loss of human agency. This is the immediate risk, and Zerodha's new policy seeks to address this concern by prioritizing the welfare of its team members over profits.
Nithin Kamath has expressed concern that many companies will blame job losses on AI to earn more profits and make their shareholders wealthier, worsening wealth inequality. This outcome is not beneficial for humanity, and he hopes that other businesses will follow Zerodha's lead in providing their teams with time to adapt to new technologies.
Kamath acknowledges that it may take a few years for the real impact of AI on humanity to be seen. However, businesses with financial freedom should prioritize their teams' welfare and give them time to adapt.
Overall, Zerodha's new AI policy is a welcome development that puts the welfare of its team members ahead of profits. As AI continues to advance, it is essential that other businesses follow suit to prevent the disruption of society and the acceleration of wealth inequality.
OpenAI’s recent release of the DALL-E 2 text-to-image generator and Meta’s subsequent announcement of its “Make a Video” tool could erode barriers to creating “deep fakes,” synthetic images or video content created through artificial intelligence. These new products, along with similar tools being developed by other companies, may provide significant advances in promoting the next wave of digital creators. At the same time, they could be exploited to create deep fakes that could have unintended economic, social and geopolitical consequences.
Images and videos have been falsified since the first photographs were taken. After the Civil War, “spirit photographers” claimed to be able to capture photos of deceased loved ones (including President Abraham Lincoln), ultimately to be revealed as frauds by P.T. Barnum. But deep fakes are fundamentally different. Their realism, and the scale and ease with which they can be produced, make them incredibly potent disinformation tools.
Americans may not be ready for this tsunami wave of deep fakes. In our recent research, subjects struggled to distinguish between deep fakes and authentic videos. When we randomly assigned a set of deep fake and authentic videos to more than 2,000 individuals and asked them to pick the deep fake, our test subjects were wrong over one-third of the time. Perhaps unsurprisingly given the social media savviness of American youth, middle school students outperformed adults, including the educators who might be responsible for helping them learn key skills to avoid online misinformation. Even computer science students at a top U.S. engineering university were susceptible: They were unable to sort out deep fakes from authentic videos more than 20 percent of the time.
This could be a huge vulnerability that could be exploited to spread disinformation. During the initial stages of the Russian invasion of Ukraine, a deep fake of Ukrainian President Volodymyr Zelensky urging Ukrainian forces to surrender was circulated. While this video was easily debunked, it’s easy to see how this could have had geopolitical consequences. And other deep fakes have had real-world consequences, like the AI-generated voice deep fake that recently scammed a UK-based CEO out of nearly $250,000 of his company’s money.
Regulating deep fakes could be a dilemma for U.S. policymakers. Several states, including Texas, California and Virginia, have passed statutes prohibiting the release of deep fakes related to elections or child pornography. But regulations concerning other applications of deep fakes are not being actively considered. There have been discussions in some legal journals about whether First Amendment protections should cover deep fakes, but the question has not been resolved.
And it’s too late for policymakers in the U.S. or the EU to stomp the brakes on the development and commercialization of deep fake technologies. Even if firms were somehow banned from advancing their work in this space, deep fake technology is already out in the wild. Novice programmers can readily use existing technologies to create convincing deep fakes, and the West’s geopolitical foes have been making steady advancements on their own. Technology could provide some assistance in detecting deep fakes, but a technological arms race is already ongoing between deep fake generators and automatic detectors — and the deep fake generators are winning.
Our research showed that individuals might be better able to identify deep fakes from authentic videos if they understand the social context of a video. Still, America might need a concerted effort to advance scalable policy, social and technical solutions. Otherwise, the public could find itself drowning in a flood of deep fakes, with potentially disastrous consequences.
Jared Mondschein is a physical scientist at the nonprofit, nonpartisan RAND Corporation who researches AI policy and misinformation. Christopher Doss is a policy researcher at RAND who specializes in fielding causal and descriptive studies at the intersection of early childhood education and education technologies. Conrad Tucker is the Arthur Hamerschlag Career Development Professor of Mechanical Engineering and holds courtesy faculty appointments in machine learning, robotics, and biomedical engineering at Carnegie Mellon University. His research focuses on the design and optimization of systems through the acquisition, integration, and mining of large scale, disparate data. Valerie Fitton-Kane is the vice president, Development, Partnerships, and Strategy, at Challenger Center, a not-for-profit leader in science, technology, engineering, and math (STEM) education, providing more than 250,000 students annually with experiential education programs that engage students in hands-on learning opportunities. Lance Bush is president & CEO at Challenger Center, which was founded in 1986 by the STS-51L Challenger shuttle crew.
The opinions expressed in this publication are those of the authors. They do not purport to reflect the opinions or views of Carnegie Mellon University, RAND Corporation or Challenger Center.
More than 100 civil society organisations from across the UK and world have today (Monday) branded the government’s AI Summit as “a missed opportunity”.
In an open letter to Prime Minister Rishi Sunak the groups warn that the “communities and workers most affected by AI have been marginalised by the Summit” while a select few corporations seek to shape the rules.
The letter has been coordinated by the TUC, Connected by Data and Open Rights Group and is released ahead of the official AI Summit at Bletchley Park on 1 and 2 November. Signatories to the letter include:
Highlighting the exclusion of civil society from the Summit, the letter says:
“Your ‘Global Summit on AI Safety’ seeks to tackle the transformational risks and benefits of AI, acknowledging that AI “will fundamentally alter the way we live, work, and relate to one another”.
“Yet the communities and workers most affected by AI have been marginalised by the Summit.
“The involvement of civil society organisations that bring a diversity of expertise and perspectives has been selective and limited.
“This is a missed opportunity.”
Highlighting the Summit’s lack of focus on immediate threats of AI and dominance of Big Tech, the letter says:
“As it stands, the Summit is a closed door event, overly focused on speculation about the remote ‘existential risks’ of ‘frontier' AI systems – systems built by the very same corporations who now seek to shape the rules.
“For many millions of people in the UK and across the world, the risks and harms of AI are not distant – they are felt in the here and now.
“This is about being fired from your job by algorithm, or unfairly profiled for a loan based on your identity or postcode.
“People are being subject to authoritarian biometric surveillance, or to discredited predictive policing.
“Small businesses and artists are being squeezed out, and innovation smothered as a handful of big tech companies capture even more power and influence.
“To make AI truly safe we must tackle these and many other issues of huge individual and societal significance. Successfully doing so will lay the foundations for managing future risks.”
Calling for a more inclusive approach to managing the risks of AI, the letter concludes:
“For the Summit itself and the work that has to follow, a wide range of expertise and the voices of communities most exposed to AI harms must have a powerful say and equal seat at the table. The inclusion of these voices will ensure that the public and policy makers get the full picture.
“In this way we can work towards ensuring the future of AI is as safe and beneficial as possible for communities in the UK and across the world.”
Senior Campaigns and Policy Officer for Connected by Data Adam Cantwell-Corn said:
“AI must be shaped in the interests of the wider public. This means ensuring that a range of expertise, perspectives and communities have an equal seat at the table. The Summit demonstrates a failure to do this.”
“The open letter is a powerful, diverse and international challenge to the unacceptable domination of AI policy by narrow interests”.
“Beyond the Summit, AI policy making needs a re-think - domestically and internationally - to steer these transformative technologies in a democratic and socially useful direction.”
TUC Assistant General Secretary Kate Bell said:
“It is hugely disappointing that unions and wider civil society have been denied proper representation at this Summit.
“AI is already making life-changing decisions – like how we work, how we’re hired and who gets fired.
“But working people have yet to be given a seat at the table.
“This event was an opportunity to bring together a wide range of voices to discuss how we deal with immediate threats and make sure AI benefits all.
“It shouldn’t just be tech bros and politicians who get to shape the future of AI.”
Open Rights Group Policy Manager for Data Rights and Privacy Abby Burke said:
“The government has bungled what could have been an opportunity for real global AI leadership due to the Summit’s limited scope and invitees.
“The agenda’s focus on future, apocalyptic risks belies the fact that government bodies and institutions in the UK are already deploying AI and automated decision-making in ways that are exposing citizens to error and bias on a massive scale.
“It’s extremely concerning that the government has excluded those who are experiencing harms and other critical expert and activist voices from its Summit, allowing businesses who create and profit from AI systems to set the UK’s agenda."
The letter has been coordinated by the TUC, Connected by Data and Open Rights Group. The list of signatories and the open letter can be found here: https://ai-summit-open-letter.info/
The full letter reads:
An open letter to the Prime Minister on the ‘Global Summit on AI Safety’
Dear Prime Minister,
Your ‘Global Summit on AI Safety’ seeks to tackle the transformational risks and benefits of AI, acknowledging that AI “will fundamentally alter the way we live, work, and relate to one another”.
Yet the communities and workers most affected by AI have been marginalised by the Summit.
The involvement of civil society organisations that bring a diversity of expertise and perspectives has been selective and limited.
This is a missed opportunity.
As it stands, the Summit is a closed door event, overly focused on speculation about the remote ‘existential risks’ of ‘frontier' AI systems – systems built by the very same corporations who now seek to shape the rules.
For many millions of people in the UK and across the world, the risks and harms of AI are not distant – they are felt in the here and now.
This is about being fired from your job by algorithm, or unfairly profiled for a loan based on your identity or postcode.
People are being subject to authoritarian biometric surveillance, or to discredited predictive policing.
Small businesses and artists are being squeezed out, and innovation smothered as a handful of big tech companies capture even more power and influence.
To make AI truly safe we must tackle these and many other issues of huge individual and societal significance. Successfully doing so will lay the foundations for managing future risks.
For the Summit itself and the work that has to follow, a wide range of expertise and the voices of communities most exposed to AI harms must have a powerful say and equal seat at the table. The inclusion of these voices will ensure that the public and policy makers get the full picture.
In this way we can work towards ensuring the future of AI is as safe and beneficial as possible for communities in the UK and across the world.
AI Safety Newsletter #3
AI policy proposals and a new challenger approaches
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
Policy Proposals for AI Safety
Critical industries rely on the government to protect consumer safety. The FAA approves new airplane designs, the FDA tests new drugs, and the SEC and CFPB regulate risky financial instruments. Currently, there is no analogous set of regulations for AI safety.
From guiding principles to enforceable laws. Previous work on AI policy such as the White House Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework has articulated guiding principles like interpretability, robustness, and privacy. But these recommendations are not enforceable – AI developers can simply choose to ignore them.
A solution with more teeth could be on its way. Axios reports that Senator Chuck Schumer has been circulating a draft framework for AI governance among experts over the last several weeks. To help inform policy making efforts, the Department of Commerce has issued a request for comments on how to effectively regulate AI.
The European Union debates narrow vs. general AI regulation. In Europe, policy conversations are centering around the EU AI Act. The Act focuses on eight “high-risk” applications of AI, including hiring, biometrics, and criminal justice. But the rise of general purpose AI systems like ChatGPT calls into question the wisdom of regulating only a handful of specific applications.
An open letter signed by over 50 AI experts, including CAIS’s director, argues that the Act should also govern general purpose AI systems, holding AI developers liable for harm caused by their systems. Several members from all political blocs of the EU parliament have publicly agreed that rules are necessary for “powerful General Purpose AI systems that can be easily adapted to a multitude of purposes.”
Specific policy proposals for AI safety. With politicians promising that AI regulation is coming, the key question is which proposals they will choose to carry forward into law. Here is a brief compilation of several recent sets of policy proposals:
Create an AI regulatory body. A national agency focused on AI could set and enforce standards, monitor the development of powerful new models, investigate AI failures, and publish information about how to develop AI safely.
Clarify legal liability for AI harm. When ChatGPT falsely accused a law professor of sexual harassment, legal scholars argued that OpenAI should face legal liability for libel and defamatory statements made by its models. Others propose AI developers should be strictly liable for harm caused by AI, but questions remain about where to draw the line between an unsafe product versus deliberate misuse.
Compute governance. AI regulations could be automatically enforced by software built into the cutting edge computer chips used to train AI systems.
Nuclear command and control. Despite persistent problems with the security and reliability of AI systems, some military analysts advocate using AI in the process of launching nuclear weapons. A simple proposal: Don’t give AI influence over nuclear command and control.
China proposes many AI regulations. Last week, China released its own set of AI regulations that go much further than current Western efforts. Under these regulations, AI developers would be required to conduct security assessments, protect user data privacy, prevent impersonation via generative AI, and take legal responsibility for harm caused by their models. While some have opposed safety measures on the grounds that they would slow progress and allow countries like China to catch up, these regulations provide an opportunity for cooperation and taking precautions without fears of competitive loss.
Competitive Pressures in AI Development
The AI developer landscape is changing quickly—one organization is shifting its strategy, two organizations are merging, and one organization is emerging, all in order to adapt to competitive pressures.
Anthropic shifts its focus to products. Anthropic was originally founded by former OpenAI employees who were concerned about OpenAI’s product-focused direction coming at the expense of safety. More recently, however, Anthropic has been influenced by competitive pressures and has shifted their focus toward products. TechCrunch obtained a pitch deck for Anthropic’s Series C fundraising round, which includes plans to build a model that is “10 times more capable than today’s most powerful AI” that will require “a billion dollars in spending over the next 18 months.”
Elon Musk will likely launch a new AI company. Elon Musk is apparently launching a new artificial intelligence start-up to compete with OpenAI. Musk has already “secured thousands of high-powered GPU processors from Nvidia” and begun recruiting engineers from top AI labs.
While Musk now seeks to compete with OpenAI, he was originally one of OpenAI’s co-founders. He was allegedly inspired to start the company due to concerns that Google co-founder Larry Page was not taking AI safety seriously enough. Many years ago, when Musk brought up AI safety concerns to Page, the Google co-founder allegedly responded by calling Musk a “speciesist.” The emergence of a new major AI developer will likely increase competitive pressures.
Google Brain and DeepMind merge into Google DeepMind. Google announced a merger between Google Brain and DeepMind, two major AI developers. This restructuring was likely spurred by products from Google’s competitors, OpenAI and Microsoft. Google’s announcement stated this was to make them move “faster” and to “accelerate” AI development. The new organization, Google DeepMind, will be run by Demis Hassabis (former CEO of DeepMind).
From an AI safety perspective, the effect of this decision will largely be determined by whether the new organization will have a safety culture more similar to DeepMind or Google Brain. DeepMind leadership has a much stronger history and track record of being concerned about AI safety, whereas Google Brain has never had a safety team. If DeepMind leadership has more influence over the new organization, this might represent a win for AI safety. If not, then society has essentially lost one of the (relatively) responsible leading AI developers.
Competitive pressures shape the AI landscape. These news updates relate to a larger trend in the AI development landscape: the role of competitive pressures. Building safer AI systems can incur a substantial cost. Competitive pressures can make it difficult for AI developers—even ones that care about safety—to act in ways that put safety first. The pressures to race ahead can cause actors to make sacrifices, especially when there are tradeoffs between safety and competitiveness. | AI Policy and Regulations |
On Monday, President Joe Biden issued an executive order on AI that outlines the federal government's first comprehensive regulations on generative AI systems. The order includes testing mandates for advanced AI models to ensure they can't be used for creating weapons, suggestions for watermarking AI-generated media, and provisions addressing privacy and job displacement.
In the United States, an executive order allows the president to manage and operate the federal government. Using his authority to set terms for government contracts, Biden aims to influence AI standards by stipulating that federal agencies must only enter into contracts with companies that comply with the government's newly outlined AI regulations. This approach utilizes the federal government's purchasing power to drive compliance with the newly set standards.
As of press time Monday, the White House had not yet released the full text of the executive order, but from the Fact Sheet authored by the administration and through reporting on drafts of the order by Politico and The New York Times, we can relay a picture of its content. Some parts of the order reflect positions first specified in Biden's 2022 "AI Bill of Rights" guidelines, which we covered last October.
Amid fears of existential AI harms that made big news earlier this year, the executive order includes a notable focus on AI safety and security. For the first time, developers of powerful AI systems that pose risks to national security, economic stability, or public health will be required to notify the federal government when training a model. They will also have to share safety test results and other critical information with the US government in accordance with the Defense Production Act before making them public.
Moreover, the National Institute of Standards and Technology (NIST) and the Department of Homeland Security will develop and implement standards for "red team" testing, aimed at ensuring that AI systems are safe and secure before public release. Implementing those efforts is likely easier said than done because what constitutes a "foundation model" or a "risk" could be subject to vague interpretation.
The order also suggests, but doesn't mandate, the watermarking of photos, videos, and audio produced by AI. This reflects growing concerns about the potential for AI-generated deepfakes and disinformation, particularly in the context of the upcoming 2024 presidential campaign. To ensure accurate communications that are free of AI meddling, the Fact Sheet says federal agencies will develop and use tools to "make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world."
Under the order, several agencies are directed to establish clear safety standards for the use of AI. For instance, the Department of Health and Human Services is tasked with creating safety standards, while the Department of Labor and the National Economic Council are to study AI's impact on the job market and potential job displacement. While the order itself can't prevent job losses due to AI advancements, the administration appears to be taking initial steps to understand and possibly mitigate the socioeconomic impact of AI adoption. According to the Fact Sheet, these studies aim to inform future policy decisions that could offer a safety net for workers in industries most likely to be affected by AI. | AI Policy and Regulations |
The European Union is absolutely set on regulating AI, and now the biggest online platforms in the world need to help people tell whether the growing flood of fake images, video, and audio was created with artificial intelligence. Major tech companies including Google, Facebook, and TikTok have until Aug. 25 to start identifying which images, videos, or audio are deep fakes, or they could potentially face multimillion-dollar fines from the EU.
In talks, European Commission Vice President for Values and Transparency Věra Jourová said that dozens of tech companies need to start coming up with ways to label “AI generated disinformation.” The official said during a press conference that companies would need to “put in place technology to recognize such content and clearly label this to users.”
Jourová said AI-generated content needs to have “prominent markings” denoting that it is a deep fake or was manipulated to some degree. This regulation is being promoted under the European body’s Digital Services Act, a law meant to mandate transparency of online content moderation.
According to information sent to Gizmodo from Jourová’s office, the new guidelines follow from an early May meeting with the task force of the Code of Practice on disinformation which includes representatives from both the companies and regulators. In addition, those platforms that make use of AI chatbots, including for customer service, must let users know they’re interacting with an AI instead of a real flesh and blood human.
Microsoft and Google are locked in a race to develop AI chatbots, and the EU has taken notice of how far both seem to be going without any roadblocks or safeguards. According to The Guardian, Jourová met with Google CEO Sundar Pichai last week, who told her the company was working on developing means to detect fake AI-generated text. Despite how fast these companies have moved to proliferate AI chatbots, few have devoted comparable resources to dealing with the mass AI content farms pumping out disinformation.
The DSA is already in force, but the EU still has to designate which online platforms fall under its specific restrictions. Late last month, Elon Musk’s Twitter decided to leave the EU’s voluntary Code of Practice against disinformation. EU Commissioner for Internal Markets Thierry Breton announced Twitter’s departure through a tweet, adding the DSA’s disinformation requirements would be applied to all by Aug. 25.
The Commission is working on some of the world’s first hardline AI regulations under the Artificial Intelligence Act. In part, that law would mandate AI developers disclose all the copyrighted materials used to train their AI models. Jourová said that the European Parliament could apply rules mandating platforms detect and label AI-generated text content. Current methods for detecting AI-generated text are rather unreliable, so the onus would be on major tech companies to develop new models for determining deep fakes, whether that’s watermarks or some other method of ingraining an immutable AI signifier.
About four years ago, former Google CEO Eric Schmidt was appointed to the National Security Commission on Artificial Intelligence by the chairman of the House Armed Services Committee.
It was a powerful perch. Congress tasked the new group with a broad mandate: to advise the U.S. government on how to advance the development of artificial intelligence, machine learning and other technologies to enhance the national security of the United States.
The mandate was simple: Congress directed the new body to advise on how to enhance American competitiveness on AI against its adversaries, build the AI workforce of the future, and develop data and ethical procedures.
In short, the commission, which Schmidt soon took charge of as chairman, was tasked with coming up with recommendations for almost every aspect of a vital and emerging industry. The panel did far more under his leadership. It wrote proposed legislation that later became law and steered billions of dollars of taxpayer funds to an industry he helped build — and that he was actively investing in while running the group.
His credentials, however, were impeccable given his deep experience in Silicon Valley, his experience advising the Defense Department, and a vast personal fortune estimated at about $20 billion.
Five months after his appointment, Schmidt made a little-noticed private investment in an initial seed round of financing for a startup company called Beacon, which uses AI in the company’s supply chain products for shippers who manage freight logistics, according to CNBC’s review of investment information in the database Crunchbase.
There is no indication that Schmidt broke any ethics rules or did anything unlawful while chairing the commission. The commission was, by design, an outside advisory group of industry participants, and its other members included well-known tech executives including Oracle CEO Safra Catz, Amazon Web Services CEO Andy Jassy and Microsoft Chief Scientific Officer Dr. Eric Horvitz, among others.
'Conflict of interest'
Schmidt's investment was just the first of a handful of direct investments he would make in AI start-up companies during his tenure as chairman of the AI commission.
Venture capital firms financed, in part, by Schmidt and his private family foundation also made dozens of additional investments in AI companies during Schmidt's tenure, giving Schmidt an economic stake in the industry even as he developed new regulations and encouraged taxpayer financing for it. Altogether, Schmidt and entities connected to him made more than 50 investments in AI companies while he was chairman of the federal commission on AI.
Information on his investments isn't publicly available.
All that activity meant that, at the same time Schmidt was wielding enormous influence over the future of federal AI policy, he was also potentially positioning himself to profit personally from the most promising young AI companies.
Institutional issues
Schmidt's conflict of interest is not unusual. The investments are an example of a broader issue identified by ethics reformers in Washington, DC: Outside advisory committees that are given significant sway over industries without enough public disclosure of potential conflicts of interest.
"The ethics enforcement process in the executive branch is broken, it does not work," said Craig Holman, a lobbyist on ethics, lobbying and campaign finance for Public Citizen, the consumer advocacy organization. "And so the process itself is partly to blame here."
The federal government counts a total of 57 active federal advisory commissions, with members offering input on everything from nuclear reactor safeguards to environmental rules and global commodities markets.
For years, reformers have tried to impose tougher ethics rules on Washington's sprawling network of outside advisory commissions. In 2010, then-President Barack Obama used an executive order to block federally registered lobbyists from serving on federal boards and commissions. But a group of Washington lobbyists fought back with a lawsuit arguing the new rule was unfair to them, and the ban was scaled back.
'Fifth arm of government'
The nonprofit Project on Government Oversight has called federal advisory committees the "fifth arm of government" and has pushed for changes including additional requirements for posting conflict-of-interest waivers and recusal statements, as well as giving the public more input in nominating committee members. Also in 2010, the House passed a bill that would prohibit the appointment of commission members with conflicts of interest, but the bill died in the Senate.
"It's always been this way," Holman said. "When Congress created the Office of Government Ethics to oversee the executive branch, you know, they didn't really want a strong ethics cop, they just wanted an advisory commission." Holman said each federal agency selects its own ethics officer, creating a vast system of more than 4,000 officials. But those officers aren't under the control of the Office of Government Ethics – there's "no one person in charge," he said.
Eric Schmidt during a news conference at the main office of Google Korea in Seoul on November 8, 2011. (Jung Yeon-je | AFP | Getty Images)
People close to Schmidt say his investments were disclosed in a private filing to the U.S. government at the time. But the public and the news media had no access to that document, which was considered confidential. The investments were not revealed to the public by Schmidt or the commission. His biography on the commission's website detailed his experiences at Google, his efforts on climate change and his philanthropy, among other details.
But it did not mention his active investments in artificial intelligence.
A spokesperson for Schmidt told CNBC that he followed all rules and procedures in his tenure on the commission: "Eric has given full compliance on everything," the spokesperson said.
But ethics experts say Schmidt simply should not have made private investments while leading a public policy effort on artificial intelligence.
"If you're going to be leading a commission that is steering the direction of government AI and making recommendations for how we should promote this sector and scientific exploration in this area, you really shouldn't also be dipping your hand in the pot and helping yourself to AI investments," said Shaub of the Project on Government Oversight.
He said there were several ways Schmidt could have minimized this conflict of interest: He could have made the public aware of his AI investments, he could have released his entire financial disclosure report, or he could have made the decision not to invest in AI while he was chair of the AI commission.
Public interest
"It's extremely important to have experts in the government," Shaub said. "But it's, I think, even more important to make sure that you have experts who are putting the public's interests first."
The AI commission, which Schmidt chaired until it expired in the fall of 2021, was far from a stereotypical Washington blue-ribbon commission issuing white papers that few people actually read.
Instead, the commission delivered reports which contained actual legislative language for Congress to pass into law to finance and develop the artificial intelligence industry. And much of that recommended language was written into vast defense authorization bills. Sections of legislative language passed, word for word, from the commission into federal law.
The commission's efforts also sent millions of taxpayer dollars to priorities it identified. In just one case, the fiscal year 2023 National Defense Authorization Act included $75 million "for implementing the National Security Commission on Artificial Intelligence recommendations."
At a commission event in September 2021, Schmidt touted the success of his team's approach. He said the commission staff "had this interesting idea that not only should we write down what we thought, which we did, but we would have a hundred pages of legislation that they could just pass." That, Schmidt said, was "an idea that had never occurred to me before but is actually working."
$200 billion modification
Schmidt said one piece of legislation moving on Capitol Hill was "modified by $200 billion dollars." That, he said, was "essentially enabled by the work of the staff" of the commission.
At that same event, Schmidt suggested that his staff had wielded similar influence over the classified annexes to national security related bills emanating from Congress. Those documents provide financing and direction to America's most sensitive intelligence agencies. To protect national security, the details of such annexes are not available to the American public.
"We don't talk much about our secret work," Schmidt said at the event. "But there's an analogous team that worked on the secret stuff that went through the secret process that has had similar impact."
Asked whether classified language in the annex proposed by the commission was adopted in legislation that passed into law, a person close to Schmidt responded, "due to the classified nature of the NSCAI annex, it is not possible to answer this question publicly. NSCAI provided its analysis and recommendations to Congress, to which members of Congress and their staff reviewed and determined what, if anything, could/should be included in a particular piece of legislation."
Beyond influencing classified language on Capitol Hill, Schmidt suggested that the key to success in Washington was being able to push the White House to take certain actions. "We said we need leadership from the White House," Schmidt said at the 2021 event. "If I've learned anything from my years of dealing with the government, is the government is not run like a tech company. It's run top down. So, whether you like it or not, you have to start at the top, you have to get the right words, either they say it, or you write it for them, and you make it happen. Right? And that's how it really, really works."
Industry friendly
The commission produced a final report with topline conclusions and recommendations that were friendly to the industry, calling for vastly increased federal spending on AI research and a close working relationship between government and industry.
The final report waived away concerns about too much government intervention in the private sector or too much federal spending.
"This is not a time for abstract criticism of industrial policy or fears of deficit spending to stand in the way of progress," the commission concluded in its 2021 report. "In 1956, President Dwight Eisenhower, a fiscally conservative Republican, worked with a Democratic Congress to commit $10 billion to build the Interstate Highway System. That is $96 billion in today's world."
The commission didn't go quite that big, though. In the end, it recommended $40 billion in federal spending on AI, and suggested it should be done hand in hand with tech companies.
"The federal government must partner with U.S. companies to preserve American leadership and to support development of diverse AI applications that advance the national interest in the broadest sense," the commission wrote. "If anything, this report underplays the investments America will need to make."
The urgency driving all of this, the commission said, is Chinese development of AI technology that rivals the software coming out of American labs: "China's plans, resources, and progress should concern all Americans."
China, the commission said, is an AI peer in many areas and a leader in others. "We take seriously China's ambition to surpass the United States as the world's AI leader within a decade," it wrote.
But Schmidt's critics see another ambition behind the commission's findings: Steering more federal dollars toward research that can benefit the AI industry.
"If you put a tech billionaire in charge, any framing that you present them, the solution will be, 'give my investments more money,' and that's indeed what we see," said Jack Poulson, executive director of the nonprofit group Tech Inquiry. Poulson formerly worked as a research scientist at Google, but he resigned in 2018 in protest of what he said was Google bending to the censorship demands of the Chinese government.
Too much power?
To Poulson, Schmidt was simply given too much power over federal AI policy. "I think he had too much influence," Poulson said. "If we believe in a democracy, we should not have a couple of tech billionaires, or, in his case, one tech billionaire, that is essentially determining US government allocation of hundreds of billions of dollars."
The federal commission wound down its work on Oct. 1, 2021.
Four days later, on Oct. 5, Schmidt announced a new initiative called the Special Competitive Studies Project. The new entity would continue the work of the congressionally created federal commission, with many of the same goals and much of the same staff. But this would be an independent nonprofit and operate under the financing and control of Schmidt himself, not Congress or the taxpayer. The new project, he said, will "make recommendations to strengthen America's long-term global competitiveness for a future where artificial intelligence and other emerging technologies reshape our national security, economy, and society."
The CEO of Schmidt's latest initiative would be the same person who had served as the executive director of the National Security Commission. More than a dozen staffers from the federal commission followed Schmidt to the new private sector project. Other people from the federal commission came over to Schmidt's private effort, too: Vice Chair Robert Work, a former deputy secretary of defense, would serve on Schmidt's board of advisors. Mac Thornberry, the congressman who appointed Schmidt to the federal commission in the first place, was now out of office and would also join Schmidt's board of advisors.
They set up new office space just down the road from the federal commission's headquarters in Crystal City, VA, and began to build on their work at the federal commission.
The new Special Competitive Studies Project issued its first report on Sept. 12. The authors wrote, "Our new project is privately funded, but it remains publicly minded and staunchly nonpartisan in believing technology, rivalry, competition and organization remain enduring themes for national focus."
The report calls for the creation of a new government entity that would be responsible for organizing the government-private sector nexus. That new organization, the report says, could be based on the roles played by the National Economic Council or the National Security Council inside the White House.
It is not clear if the Project will disclose Schmidt's personal holdings in AI companies. So far, it has not.
Asked if Schmidt's AI investments will be disclosed by the Project in the future, a person close to Schmidt said, "SCSP is organized as a charitable entity, and has no relationship to any personal investment activities of Dr. Schmidt." The person also said the project is a not-for-profit research entity that will provide public reports and recommendations. "It openly discloses that it is solely funded by the Eric and Wendy Schmidt Fund for Strategic Innovation."
In a way, Schmidt's approach to Washington is the culmination of a decade or more as a power player in Washington. Early on, he professed shock at the degree to which industry influenced policy and legislation in Washington. But since then, his work on AI suggests he has embraced that fact of life in the capital.
Obama donor
Schmidt first came to prominence on the Potomac as an early advisor and donor to the first presidential campaign of Barack Obama. Following the 2008 election, he served on Obama's presidential transition and as a presidential advisor on science and technology issues. Schmidt had risen to the heights of power and wealth in Silicon Valley, but what he saw in the nation's capital surprised him.
In a 2010 conversation with The Atlantic's then-editor-in-chief James Bennet, Schmidt told a conference audience what he had learned in his first years in the nation's capital.
"The average American doesn't realize how much the laws are written by lobbyists," Schmidt said. "It's shocking now, having spent a fair amount of time inside the system, how the system actually works. It is obvious that if the system is organized around incumbencies writing the laws, the incumbencies will benefit from the laws that are written."Bennet, pushing back, suggested that Google was already one of the greatest incumbent corporations in America."Well, perhaps," Schmidt replied in 2010. "But we don't write the laws." — CNBC's Paige Tortorelli, Bria Cousins, Scott Zamost and Margaret Fleming contributed to this report. | AI Policy and Regulations |
WASHINGTON -- Amazon, Google, Meta, Microsoft and other companies that are leading the development of artificial intelligence technology have agreed to meet a set of AI safeguards brokered by President Joe Biden's administration.
The White House said Friday that it has secured voluntary commitments from seven U.S. companies meant to ensure their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of commercial AI systems, though they don't detail who will audit the technology or hold the companies accountable.
A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.
The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.
The companies have also committed to methods for reporting vulnerabilities to their systems and to using digital watermarking to help distinguish between real and AI-generated images known as deepfakes.
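One way to get a feel for what a watermarking commitment could entail is the toy sketch below, which hides a short bit pattern in an image's least-significant bits. This is only a minimal illustration of the general concept, with an invented tag value and a stand-in image; it is not the scheme any of these companies has committed to, and production provenance systems (cryptographically signed metadata, or statistical watermarks baked into the generation process) are designed to survive re-encoding, which this sketch would not.

```python
import numpy as np

# Toy illustration of one (very naive) watermarking idea: hide a fixed bit
# pattern in the least-significant bits of an image's pixels. The tag value
# and image are invented for the example, and the mark is trivially destroyed
# by re-encoding or cropping.

MARK = np.array([1, 0, 1, 1, 0, 1, 0, 1], dtype=np.uint8)  # hypothetical 8-bit tag

def embed(pixels: np.ndarray) -> np.ndarray:
    """Write the tag into the low bit of the first eight pixel values."""
    out = pixels.copy()
    flat = out.reshape(-1)
    flat[:8] = (flat[:8] & 0xFE) | MARK  # clear the low bit, then set it to the tag bit
    return out

def detect(pixels: np.ndarray) -> bool:
    """Report whether the tag appears in the low bits."""
    return bool(np.array_equal(pixels.reshape(-1)[:8] & 1, MARK))

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in "photo"
marked = embed(image)
print(detect(image), detect(marked))  # almost certainly: False True
```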
They will also publicly report flaws and risks in their technology, including effects on fairness and bias, the White House said.
The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology. Company executives plan to gather with Biden at the White House on Friday as they pledge to follow the standards.
Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.
“History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” said a statement from James Steyer, founder and CEO of the nonprofit Common Sense Media.
Senate Majority Leader Chuck Schumer, D-N.Y., has said he will introduce legislation to regulate AI. He said in a statement that he will continue “working closely with the Biden administration and our bipartisan colleagues” to build upon the pledges made Friday.
A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.
Microsoft President Brad Smith said in a blog post Friday that his company is making some commitments that go beyond the White House pledge, including support for regulation that would create a “licensing regime for highly capable models.”
But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft as smaller players are elbowed out by the high cost of making their AI systems known as large language models adhere to regulatory strictures.
A number of countries have been looking at ways to regulate AI, including European Union lawmakers who have been negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to have the highest risks.
U.N. Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.
Guterres also said he welcomed calls from some countries for the creation of a new U.N. body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
The White House said Friday that it has already consulted on the voluntary commitments with a number of countries.
——
O'Brien reported from Providence, Rhode Island. | AI Policy and Regulations |
Welcome to AI This Week, Gizmodo’s weekly roundup where we do a deep dive on what’s been happening in artificial intelligence.
As governments fumble for a regulatory approach to AI, everybody in the tech world seems to have an opinion about what that approach should be and most of those opinions do not resemble one another. Suffice it to say, this week presented plenty of opportunities for tech nerds to yell at each other online, as two major developments in the space of AI regulations took place, immediately spurring debate.
The first of those big developments was the United Kingdom’s much-hyped artificial intelligence summit, which saw the UK’s prime minister, Rishi Sunak, invite some of the world’s top tech CEOs and leaders to Bletchley Park, home of the UK’s WWII codebreakers, in an effort to suss out the promise and peril of the new technology. The event was marked by a lot of big claims about the dangers of the emergent technology and ended with an agreement surrounding security testing of new software models. The second (arguably bigger) event to happen this week was the unveiling of the Biden administration’s AI executive order, which laid out some modest regulatory initiatives surrounding the new technology in the U.S. Among many other things, the EO also involved a corporate commitment to security testing of software models.
However, some prominent critics have argued that the US and UK’s efforts to wrangle artificial intelligence have been too heavily influenced by a certain strain of corporately-backed doomerism which critics see as a calculated ploy on the part of the tech industry’s most powerful companies. According to this theory, companies like Google, Microsoft, and OpenAI are using AI scaremongering in an effort to squelch open-source research into the tech as well as make it too onerous for smaller startups to operate while keeping its development firmly within the confines of their own corporate laboratories. The allegation that keeps coming up is “regulatory capture.”
This conversation exploded out into the open on Monday with the publication of an interview with Andrew Ng, a professor at Stanford University and the founder of Google Brain. “There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction,” Ng told the news outlet. Ng also said that two equally bad ideas had been joined together via doomerist discourse: that “AI could make us go extinct” and that, consequently, “a good way to make AI safer is to impose burdensome licensing requirements” on AI producers.
More criticism swiftly came down the pipe from Yann LeCun, Meta’s top AI scientist and a big proponent of open-source AI research, who got into a fight with another techie on X about how Meta’s competitors were attempting to commandeer the field for themselves. “Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment,” LeCun said, in reference to OpenAI, Google, and Anthropic’s top AI executives. “They are the ones who are attempting to perform a regulatory capture of the AI industry. You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D,” he said.
After Ng and LeCun’s comments circulated, Google Deepmind’s current CEO, Demis Hassabis, was forced to respond. In an interview with CNBC, he said that Google wasn’t trying to achieve “regulatory capture” and said: “I pretty much disagree with most of those comments from Yann.”
Predictably, Sam Altman eventually decided to jump into the fray to let everybody know that no, actually, he’s a great guy and this whole scaring-people-into-submitting-to-his-business-interests thing is really not his style. On Thursday, the OpenAI CEO tweeted:
there are some great parts about the AI EO, but as the govt implements it, it will be important not to slow down innovation by smaller companies/research teams. i am pro-regulation on frontier systems, which is what openai has been calling for, and against regulatory capture.
“So, capture it is then,” one person commented, beneath Altman’s tweet.
Of course, no squabble about AI would be complete without a healthy mouthful from the world’s most opinion-filled internet troll and AI funder, Elon Musk. Musk gave himself the opportunity to provide that mouthful this week by somehow forcing the UK’s Sunak to conduct an interview with him (Musk), which was later streamed to Musk’s own website, X. During the conversation, which amounted to Sunak looking like he wanted to take a nap and sleepily asking the billionaire a roster of questions, Musk managed to get in some classic Musk-isms. Musk’s comments weren’t so much thought-provoking or rooted in any sort of serious policy discussion as they were dumb and entertaining—which is more the style of rhetoric he excels at.
Included in Musk’s roster of comments was that AI will eventually create what he called “a future of abundance where there is no scarcity of goods and services” and where the average job is basically redundant. However, the billionaire also warned that we should still be worried about some sort of rogue AI-driven “superintelligence” and that “humanoid robots” that can “chase you into a building or up a tree” were also a potential thing to be worried about.
When the conversation rolled around to regulations, Musk claimed that he “agreed with most” regulations but said, of AI: “I generally think it’s good for government to play a role when public safety is at risk. Really, for the vast majority of software, public safety is not at risk. If an app crashes on your phone or laptop it’s not a massive catastrophe. But when we talk about digital superintelligence—which does pose a risk to the public—then there is a role for government to play.” In other words, whenever software starts resembling that thing from the most recent Mission Impossible movie then Musk will probably be comfortable with the government getting involved. Until then...ehhh.
Musk may want regulators to hold off on any sort of serious policies since his own AI company is apparently debuting its technology soon. In a tweet on X on Friday, Musk announced that his startup, xAI, planned to “release its first AI to a select group” on Saturday and that this tech was in some “important respects,” the “best that currently exists.” That’s about as clear as mud, though it’d probably be safe to assume that Musk’s promises are somewhere in the same neighborhood of hyperbole as his original comments about the Tesla bot.
This week we spoke with Samir Jain, vice president of policy at the Center for Democracy and Technology, to get his thoughts on the much anticipated executive order from the White House on artificial intelligence. The Biden administration’s EO is being looked at as the first step in a regulatory process that could take years to unfold. Some onlookers praised the Biden administration’s efforts; others weren’t so thrilled. Jain spoke with us about his thoughts on the order as well as his hopes for future regulation. This interview has been edited for brevity and clarity.
I just wanted to get your initial response to Biden’s executive order. Are you pleased with it? Hopeful? Or do you feel like it leaves some stuff out?
Overall we are pleased with the executive order. We think it identifies a lot of key issues, in particular current harms that are happening, and that it really tries to bring together different agencies across the government to address those issues. There’s a lot of work to be done to implement the order and its directives. So, ultimately, I think the judgment as to whether it’s an effective EO or not will turn to a significant degree on how that implementation goes. The question is whether those agencies and other parts of government will carry out those tasks effectively. In terms of setting a direction, in terms of identifying issues and recognizing that the administration can only act within the scope of the authority that it currently has...we were quite pleased with the comprehensive nature of the EO.
One of the things the EO seems like it’s trying to tackle is this idea of long-term harms around AI and some of the more catastrophic potentialities of the way in which it could be wielded. It seems like the executive order focuses more on the long-term harms rather than the short-term ones. Would you say that’s true?
I’m not sure that’s true. I think you’re characterizing the discussion correctly, in that there’s this idea out there that there’s a dichotomy between “long-term” and “short-term” harms. But I actually think that, in many respects, that’s a false dichotomy. It’s a false dichotomy both in the sense that we should have to choose one or the other—and in fact, we shouldn’t; and, also, a lot of the infrastructure and steps that you would take to deal with current harms are also going to help in dealing with whatever long-term harms there may be. So, if for example, we do a good job with promoting and entrenching transparency—in terms of the use and capability of AI systems—that’s going to also help us when we turn to addressing longer-term harms.
With respect to the EO, although there certainly are provisions that deal with long-term harms...there’s actually a lot in the EO—I would go so far as to say the bulk of the EO—deals with current and existing harms. It’s directing the Secretary of Labor to mitigate potential harms from AI-based tracking of workers; it’s calling on the Housing and Urban Development and Consumer Financial Protection bureaus to develop guidance around algorithmic tenant screening; it’s directing the Department of Education to figure out some resources and guidance about the safe and non-discriminatory use of AI in education; it’s telling the Health and Human Services Department to look at benefits administration and to make sure that AI doesn’t undermine equitable administration of benefits. I’ll stop there, but that’s all to say that I think it does a lot with respect to protecting against current harms.
- The race to replace your smartphone is being led by Humane’s weird AI pin. Tech companies want to cash in on the AI gold rush and a lot of them are busy trying to launch algorithm-fueled wearables that will make your smartphone obsolete. At the head of the pack is Humane, a startup founded by two former Apple employees, that is scheduled to unveil its much anticipated AI pin next week. Humane’s pin is actually a tiny projector that you attach to the front of your shirt; the device is equipped with a proprietary large language model powered by GPT-4 and can supposedly answer and make calls for you, read back your emails for you, and generally act as a communication device and virtual assistant.
- News groups release research pointing to how much news content is used to train AI algorithms. The New York Times reports that the News Media Alliance, a trade group that represents numerous large media outlets (including the Times), has published new research alleging that many large language models are built using copyrighted material from news sites. This is potentially big news, as there’s currently a fight brewing over whether AI companies may have legally infringed on the rights of news organizations when they built their algorithms.
- AI-fueled facial recognition is now being used against geese for some reason. In what feels like a weird harbinger of the end times, NPR reports that the surveillance state has come for the waterfowl of the world. That is to say, academics in Vienna recently admitted to writing an AI-fueled facial recognition program designed for geese; the program trolls through databases of known goose faces and seeks to identify individual birds by distinct beak characteristics. Why exactly this is necessary I’m not sure but I can’t stop laughing about it. | AI Policy and Regulations |
Key Takeaways
- Nvidia shares tumbled last week after U.S. officials imposed more restrictions on chip exports to China.
- Nvidia's decline dragged down many ETFs that are heavily exposed to the chipmaker.
- Some of the biggest tech-oriented funds like the Invesco QQQ Trust (QQQ), which tracks the Nasdaq 100 index, also slumped.
- An AI Policy Institute survey found the majority of respondents disapproved of Nvidia selling high-performance chips to China and would back antitrust efforts against the company.
Nvidia (NVDA) shares tumbled almost 9% last week after U.S. government officials announced more stringent curbs on exports of advanced AI chips to China, dragging down many technology-centric exchange-traded funds (ETFs) that are heavily exposed to the chipmaker.
The VanEck Semiconductor ETF (SMH), a $9.4 billion fund with almost 20% of its capital invested in Nvidia, fell more than 4% last week. The iShares Semiconductor ETF (SOXX), an $8.7 billion fund with a 7.8% weight in Nvidia, lost a similar amount.
The Invesco QQQ Trust (QQQ), a broad-based tech-centered ETF tracking the return of the Nasdaq 100 Index, fell almost 3% last week. Nvidia is the fund's fourth-biggest holding, comprising just over 4% of its portfolio. The Vanguard Information Technology ETF (VGT), which holds a similar share of funds in Nvidia, fell over 3%.
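As a rough back-of-the-envelope illustration of how one holding's slide flows through to a fund, the sketch below multiplies each fund's approximate Nvidia weight cited above by Nvidia's roughly 9% weekly decline to estimate its contribution in percentage points. The weights, the return figure, and the helper function are illustrative assumptions, not precise fund accounting; actual fund returns also reflect every other holding.

```python
# Back-of-the-envelope: estimate how much of each fund's weekly move came
# from Nvidia alone, using the approximate weights and the ~9% drop cited
# above. Illustrative only; real fund returns also reflect every other holding.

def contribution_pct_points(weight: float, constituent_return: float) -> float:
    """Percentage-point contribution of one holding to a fund's return."""
    return weight * constituent_return * 100

nvda_weekly_return = -0.09  # Nvidia fell roughly 9% on the week

nvda_weights = {
    "SMH (VanEck Semiconductor ETF)": 0.20,    # ~20% of the fund
    "SOXX (iShares Semiconductor ETF)": 0.078, # ~7.8%
    "QQQ (Invesco QQQ Trust)": 0.04,           # just over 4%
}

for fund, weight in nvda_weights.items():
    points = contribution_pct_points(weight, nvda_weekly_return)
    print(f"{fund}: about {points:.1f} percentage points from Nvidia alone")
# SMH: about -1.8 points of its ~4% weekly decline is attributable to Nvidia;
# the rest came from the fund's other chip holdings, which also fell.
```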
While shares of Nvidia have fared poorly in recent days, their price has nearly tripled so far in 2023, making it the S&P 500's best-performing stock this year.
Public Opinion Favors Export Restrictions, Antitrust Action
Nvidia's A800 and H800 chips, which it designed specifically to sell to China under AI chip restrictions announced last year, are subject to the new rules laid out last week.
An AI Policy Institute (AIPI) survey of the U.S. public found 71% of respondents disapprove of Nvidia selling high-performance computer chips to China, versus just 18% who approve.
Another 63% of respondents said they would back antitrust legislation or related action against Nvidia, to prevent the company from attaining a disproportionate share of the semiconductor market. | AI Policy and Regulations |
Kelvin Chan, Associated Press
LONDON (AP) — The breathtaking development of artificial intelligence has dazzled users by composing music, creating images and writing essays, while also raising fears about its implications. Even European Union officials working on groundbreaking rules to govern the emerging technology were caught off guard by AI’s rapid rise.
The 27-nation bloc proposed the Western world’s first AI rules two years ago, focusing on reining in risky but narrowly focused applications. General purpose AI systems like chatbots were barely mentioned. Lawmakers working on the AI Act considered whether to include them but weren’t sure how, or even if it was necessary.
READ MORE: From data privacy to AI, here are new rules Congress is considering for tech companies
“Then ChatGPT kind of boom, exploded,” said Dragos Tudorache, a Romanian member of the European Parliament co-leading the measure. “If there was still some that doubted as to whether we need something at all, I think the doubt was quickly vanished.”
The release of ChatGPT last year captured the world’s attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online materials. With concerns emerging, European lawmakers moved swiftly in recent weeks to add language on general AI systems as they put the finishing touches on the legislation.
The EU’s AI Act could become the de facto global standard for artificial intelligence, with companies and organizations potentially deciding that the sheer size of the bloc’s single market would make it easier to comply than develop different products for different regions.
“Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term ‘AI’ can cover,” said Sarah Chander, senior policy adviser at digital rights group EDRi.
Authorities worldwide are scrambling to figure out how to control the rapidly evolving technology to ensure that it improves people’s lives without threatening their rights or safety. Regulators are concerned about new ethical and societal risks posed by ChatGPT and other general purpose AI systems, which could transform daily life, from jobs and education to copyright and privacy.
WATCH: ‘Godfather of AI’ discusses dangers the developing technologies pose to society
The White House recently brought in the heads of tech companies working on AI including Microsoft, Google and ChatGPT creator OpenAI to discuss the risks, while the Federal Trade Commission has warned that it wouldn’t hesitate to crack down.
China has issued draft regulations mandating security assessments for any products using generative AI systems like ChatGPT. Britain’s competition watchdog has opened a review of the AI market, while Italy briefly banned ChatGPT over a privacy breach.
The EU’s sweeping regulations — covering any provider of AI services or products — are expected to be approved by a European Parliament committee Thursday, then head into negotiations between the 27 member countries, Parliament and the EU’s executive Commission.
European rules influencing the rest of the world — the so-called Brussels effect — previously played out after the EU tightened data privacy and mandated common phone-charging cables, though such efforts have been criticized for stifling innovation.
Attitudes could be different this time. Tech leaders including Elon Musk and Apple co-founder Steve Wozniak have called for a six-month pause to consider the risks.
Geoffrey Hinton, a computer scientist known as the “Godfather of AI,” and fellow AI pioneer Yoshua Bengio voiced their concerns last week about unchecked AI development.
Tudorache said such warnings show the EU’s move to start drawing up AI rules in 2021 was “the right call.”
Google, which responded to ChatGPT with its own Bard chatbot and is rolling out AI tools, declined to comment. The company has told the EU that “AI is too important not to regulate.”
Microsoft, a backer of OpenAI, did not respond to a request for comment. It has welcomed the EU effort as an important step “toward making trustworthy AI the norm in Europe and around the world.”
Mira Murati, chief technology officer at OpenAI, said in an interview last month that she believed governments should be involved in regulating AI technology.
But asked if some of OpenAI’s tools should be classified as posing a higher risk, in the context of proposed European rules, she said it’s “very nuanced.”
“It kind of depends where you apply the technology,” she said, citing as an example a “very high-risk medical use case or legal use case” versus an accounting or advertising application.
WATCH: The potential dangers as artificial intelligence grows more sophisticated and popular
OpenAI CEO Sam Altman plans stops in Brussels and other European cities this month in a world tour to talk about the technology with users and developers.
Recently added provisions to the EU’s AI Act would require “foundation” AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.
Foundation models, also known as large language models, are a subcategory of general purpose AI that includes systems like ChatGPT. Their algorithms are trained on vast pools of online information, like blog posts, digital books, scientific articles and pop songs.
“You have to make a significant effort to document the copyrighted material that you use in the training of the algorithm,” paving the way for artists, writers and other content creators to seek redress, Tudorache said.
Officials drawing up AI regulations have to balance risks that the technology poses with the transformative benefits that it promises.
Big tech companies developing AI systems and European national ministries looking to deploy them “are seeking to limit the reach of regulators,” while civil society groups are pushing for more accountability, said EDRi’s Chander.
“We want more information as to how these systems are developed — the levels of environmental and economic resources put into them — but also how and where these systems are used so we can effectively challenge them,” she said.
Under the EU’s risk-based approach, AI uses that threaten people’s safety or rights face strict controls.
Remote facial recognition is expected to be banned. So are government “social scoring” systems that judge people based on their behavior. Indiscriminate “scraping” of photos from the internet used for biometric matching and facial recognition is also a no-no.
Predictive policing and emotion recognition technology, aside from therapeutic or medical uses, are also out.
Violations could result in fines of up to 6 percent of a company’s global annual revenue.
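To make the risk-based structure concrete, here is an illustrative sketch that maps example applications to the draft law's four broad tiers (unacceptable, high, limited, minimal). The tier assignments paraphrase the bans described above plus commonly cited draft categories; they are assumptions for illustration, and the actual Act defines categories and obligations in far more detailed legal text.

```python
# Illustrative sketch of the risk-based approach described above. The tier
# assignments paraphrase the bans reported here plus commonly cited draft
# categories; the real legislation spells these out in legal text, not code.

RISK_TIERS = {
    "unacceptable": [  # banned outright under the draft
        "government social scoring",
        "untargeted scraping of facial images for biometric databases",
        "emotion recognition outside therapeutic or medical uses",
        "predictive policing",
    ],
    "high": [  # permitted only under strict controls (assumed examples)
        "AI used in hiring and worker management",
        "credit scoring",
        "biometric identification",
    ],
    "limited": [  # transparency obligations, e.g. disclose that it is AI
        "chatbots",
        "AI-generated media that must be labelled",
    ],
    "minimal": [  # largely left alone
        "spam filters",
        "AI in video games",
    ],
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case, defaulting to 'minimal'."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal"

print(classify("government social scoring"))  # -> "unacceptable"
print(classify("chatbots"))                   # -> "limited"
```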
Even after getting final approval, expected by the end of the year or early 2024 at the latest, the AI Act won’t take immediate effect. There will be a grace period for companies and organizations to figure out how to adopt the new rules.
It’s possible that industry will push for more time by arguing that the AI Act’s final version goes farther than the original proposal, said Frederico Oliveira Da Silva, senior legal officer at European consumer group BEUC.
They could argue that “instead of one and a half to two years, we need two to three,” he said.
He noted that ChatGPT only launched six months ago, and it has already thrown up a host of problems and benefits in that time.
If the AI Act doesn’t fully take effect for years, “what will happen in these four years?” Da Silva said. “That’s really our concern, and that’s why we’re asking authorities to be on top of it, just to really focus on this technology.”
AP Technology Writer Matt O’Brien in Providence, Rhode Island, contributed.
| AI Policy and Regulations |
American technology leaders including Tesla CEO Elon Musk, Meta Platforms CEO Mark Zuckerberg and Alphabet CEO Sundar Pichai met with lawmakers at Capitol Hill on Wednesday for a closed-door forum that focused on regulating artificial intelligence.
“It’s important for us to have a referee,” Musk told reporters, adding that a regulator was needed “to ensure that companies take actions that are safe and in the general interest of the public.”
New Jersey Sen. Cory Booker praised the discussion, saying all the participants agreed “the government has a regulatory role” but crafting legislation would be a challenge.
Lawmakers want safeguards against potentially dangerous deepfakes such as bogus videos, election interference and attacks on critical infrastructure.
“Today, we begin an enormous and complex and vital undertaking: building a foundation for bipartisan AI policy that Congress can pass,” Senate Majority Leader Chuck Schumer, a Democrat, said in opening remarks. “Congress must play a role, because without Congress we will neither maximize AI’s benefits, nor minimize its risks.”
Other attendees include Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, IBM CEO Arvind Krishna, former Microsoft CEO Bill Gates, AFL-CIO labor federation President Liz Shuler and Senators Mike Rounds, Martin Heinrich and Todd Young.
Schumer, who talked AI with Musk in April, wants attendees to talk “about why Congress must act, what questions to ask, and how to build a consensus for safe innovation.” Sessions began at 10 a.m. ET and are to last until 5 p.m. ET.
In March, Musk and a group of AI experts and executives called for a six-month pause in developing systems more powerful than OpenAI’s GPT-4, citing potential risks to society.
This week, Congress is holding three separate hearings on AI. Microsoft President Brad Smith told a Senate Judiciary subcommittee on Tuesday that Congress should “require safety brakes for AI that controls or manages critical infrastructure.”
Smith compared AI safeguards to requiring circuit breakers in buildings, school buses having emergency brakes and airplanes having collision avoidance systems.
Republican Sen. Josh Hawley questioned Wednesday’s closed-door session, saying Congress has failed to pass any meaningful tech legislation. “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money,” Hawley said.
Regulators globally have been scrambling to draw up rules governing the use of generative AI, which can create text and generate images whose artificial origins are virtually undetectable.
Adobe, IBM, Nvidia and five other companies on Tuesday said they had signed President Joe Biden’s voluntary AI commitments, which require steps such as watermarking AI-generated content.
The commitments, which were announced in July, were aimed at ensuring AI’s power was not used for destructive purposes. Google, OpenAI and Microsoft signed on in July. The White House has also been working on an AI executive order. | AI Policy and Regulations |
Sept 19 (Reuters) - Rapid advances in artificial intelligence (AI) such as Microsoft-backed OpenAI's ChatGPT are complicating governments' efforts to agree laws governing the use of the technology.
Here are the latest steps national and international governing bodies are taking to regulate AI tools:
AUSTRALIA
* Planning regulations
Australia will make search engines draft new codes to prevent the sharing of child sexual abuse material created by AI and the production of deepfake versions of the same material, the country's internet regulator said on Sept. 8.
BRITAIN
* Planning regulations
Britain's Competition and Markets Authority (CMA) set out seven principles on Sept. 18 designed to make developers accountable, prevent Big Tech tying up the tech in their walled platforms, and stop anti-competitive conduct like bundling.
The proposed principles, which come six weeks before Britain hosts a global AI safety summit, will underpin its approach to AI when it assumes new powers in the coming months to oversee digital markets.
Britain's competition regulator said in May it would start examining the impact of AI on consumers, businesses and the economy and whether new controls were needed.
CHINA
* Implemented temporary regulations
China has issued a set of temporary measures effective from Aug. 15, requiring service providers to submit security assessments and receive clearance before releasing mass-market AI products.
EUROPEAN UNION
* Planning regulations
European Commission President Ursula von der Leyen on Sept. 13 called for a global panel to assess the risks and benefits of AI, similarly to the global IPCC panel which informs policy makers about the climate.
EU lawmakers agreed in June to changes in a draft of the bloc's AI Act. The lawmakers will now have to thrash out details with EU countries before the draft rules become legislation.
The biggest issue is expected to be facial recognition and biometric surveillance where some lawmakers want a total ban while EU countries want an exception for national security, defence and military purposes.
FRANCE
* Investigating possible breaches
France's privacy watchdog CNIL said in April it was investigating several complaints about ChatGPT after the chatbot was temporarily banned in Italy over a suspected breach of privacy rules.
France's National Assembly approved in March the use of AI video surveillance during the 2024 Paris Olympics, overlooking warnings from civil rights groups.
G7
* Seeking input on regulations
Group of Seven (G7) leaders meeting in Hiroshima, Japan, acknowledged in May the need for governance of AI and immersive technologies and agreed to have ministers discuss the technology as the "Hiroshima AI process" and report results by the end of 2023.
G7 nations should adopt "risk-based" regulation on AI, G7 digital ministers said after a meeting in April.
IRELAND
* Seeking input on regulations
Generative AI needs to be regulated, but governing bodies must work out how to do so properly before rushing into prohibitions that "really aren't going to stand up", Ireland's data protection chief said in April.
ISRAEL
* Seeking input on regulations
Israel has been working on AI regulations to achieve the right balance between innovation and the preservation of human rights, Ziv Katzir, director of national AI planning at the Israel Innovation Authority, said in June.
Israel published a 115-page draft AI policy in October 2022 and is collating public feedback ahead of a final decision.
ITALY
* Investigating possible breaches
Italy's data protection authority plans to review artificial intelligence platforms and hire AI experts, a top official said in May.
JAPAN
* Investigating possible breaches
Japan expects to introduce by the end of 2023 regulations that are likely closer to the U.S. attitude than the stringent ones planned in the EU, an official close to deliberations said in July.
The country's privacy watchdog said in June it had warned OpenAI not to collect sensitive data without people's permission.
SPAIN
* Investigating possible breaches
Spain's data protection agency said in April it was launching a preliminary investigation into potential data breaches by ChatGPT. It has also asked the EU's privacy watchdog to evaluate privacy concerns surrounding ChatGPT.
UNITED NATIONS
* Planning regulations
The U.N. Security Council held its first formal discussion on AI in New York in July. The council addressed both military and non-military applications of AI, which "could have very serious consequences for global peace and security", U.N. Secretary-General Antonio Guterres said.
Guterres in June backed a proposal by some AI executives for the creation of an AI watchdog like the International Atomic Energy Agency, but noted that "only member states can create it, not the Secretariat of the United Nations".
The U.N. Secretary-General has also announced plans to start work by the end of the year on a high-level AI advisory body to review AI governance arrangements.
U.S.
* Seeking input on regulations
More than 60 senators took part in closed-door talks with tech industry leaders on Capitol Hill on Sept. 13, during which Musk called for a U.S. "referee" for AI. Lawmakers said there was universal agreement about the need for government regulation of the technology.
On Sept. 12, the White House said Adobe (ADBE.O), IBM (IBM.N), Nvidia (NVDA.O) and five other firms had signed President Joe Biden's voluntary commitments governing AI, which require steps such as watermarking AI-generated content.
Washington D.C. district Judge Beryl Howell ruled on Aug. 21 that a work of art created by AI without any human input cannot be copyrighted under U.S. law.
The U.S. Federal Trade Commission (FTC) opened in July an expansive investigation into OpenAI on claims that it has run afoul of consumer protection laws.
Compiled by Alessandro Parodi and Amir Orusov in Gdansk; Editing by Kirsten Donovan, Mark Potter, Christina Fincher and Milla Nissi
Our Standards: The Thomson Reuters Trust Principles. | AI Policy and Regulations |
Competition between the U.S. and China in artificial intelligence has expanded into a race to design and implement comprehensive AI regulations.
The efforts to come up with rules to ensure AI's trustworthiness, safety and transparency come at a time when governments around the world are exploring the impact of the technology on national security and education.
ChatGPT, a chatbot that mimics human conversation, has received massive attention since its debut in November. Its ability to give sophisticated answers to complex questions with a language fluency comparable to that of humans has caught the world by surprise. Yet its many flaws, including its ostensibly coherent responses laden with misleading information and apparent bias, have prompted tech leaders in the U.S. to sound the alarm.
"What happens when something vastly smarter than the smartest person comes along in silicon form? It's very difficult to predict what will happen in that circumstance," said Tesla Chief Executive Officer Elon Musk in an interview with Fox News. He warned that artificial intelligence could lead to "civilization destruction" without regulations in place.
Google CEO Sundar Pichai echoed that sentiment. "Over time there has to be regulation. There have to be consequences for creating deep fake videos which cause harm to society," Pichai said in an interview with CBS's "60 Minutes" program.
Jessica Brandt, policy director for the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, told VOA Mandarin, "Business leaders understand that regulators will be watching this space closely, and they have an interest in shaping the approaches regulators will take."
US grapples with regulations
AI regulation is still nascent in the U.S. Last year, the White House released voluntary guidance through a Blueprint for an AI Bill of Rights to help ensure users' rights are protected as technology companies design and develop AI systems.
At a meeting of the President's Council of Advisors on Science and Technology this month, President Joe Biden expressed concern about the potential dangers associated with AI and underscored that companies had a responsibility to ensure their products were safe before making them public.
On April 11, the National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, began to seek comment and public input with the aim of crafting a report on AI accountability.
The U.S. government is trying to find the right balance to regulate the industry without stifling innovation "in part because the U.S. having innovative leadership globally is a selling point for the United States' hard and soft power," said Johanna Costigan, a junior fellow at the Asia Society Policy Institute's Center for China Analysis.
Brandt, with Brookings, said, "The challenge for liberal democracies is to ensure that AI is developed and deployed responsibly, while also supporting a vibrant innovation ecosystem that can attract talent and investment."
Meanwhile, other Western countries have also started to work on regulating the emerging technology.
The U.K. government published its AI regulatory framework in March. Also last month, Italy temporarily blocked ChatGPT in the wake of a data breach, and the German commissioner for data protection said his country could follow suit.
The European Union stated it's pushing for an AI strategy aimed at making Europe a world-class hub for AI that ensures AI is human-centric and trustworthy, and it hopes to lead the world in AI standards.
Cyber regulations in China
In contrast to the U.S., the Chinese government has already implemented regulations aimed at tech sectors related to AI. In the past few years, Beijing has introduced several major data protection laws to limit the power of tech companies and to protect consumers.
The Cybersecurity Law enacted in 2017 requires that data must be stored within China and operators must submit to government-conducted security checks. The Data Security Law enacted in 2021 sets out a framework for classifying, storing and protecting data handled when doing business in China. The Personal Information Protection Law established in the same year gives Chinese consumers the right to access, correct and delete their personal data gathered by businesses. Costigan, with the Asia Society, said these laws have laid the groundwork for future tech regulations.
In March 2022, China began to implement a regulation that governs the way technology companies can use recommendation algorithms. The Cyberspace Administration of China (CAC) now supervises the process of using big data to analyze user preferences and companies' ability to push information to users.
On April 11, the CAC unveiled a draft for managing generative artificial intelligence services similar to ChatGPT, in an effort to mitigate the dangers of the new technology.
Costigan said the goal of the proposed generative AI regulation could be seen in Article 4 of the draft, which states that content generated by future AI products must reflect the country's "core socialist values" and not encourage subversion of state power.
"Maintaining social stability is a key consideration," she said. "The new draft regulation does some good and is unambiguously in line with [President] Xi Jinping's desire to ensure that individuals, companies or organizations cannot use emerging AI applications to challenge his rule."
Michael Caster, the Asia digital program manager at Article 19, a London-based rights organization, told VOA, "The language, especially at Article 4, is clearly about maintaining the state's power of censorship and surveillance.
"All global policymakers should be clearly aware that while China may be attempting to set standards on emerging technology, their approach to legislation and regulation has always been to preserve the power of the party."
The future of cyber regulations
As strategies for cyber and AI regulations evolve, how they develop may largely depend on each country's way of governance and reasons for creating standards. Analysts say there will also be intrinsic hurdles linked to coming up with consensus.
"Ethical principles can be hard to implement consistently, since context matters and there are countless potential scenarios at play," Brandt told VOA. "They can be hard to enforce, too. Who would take on that role? How? And of course, before you can implement or enforce a set of principles, you need broad agreement on what they are."
Observers said the international community would face challenges as it creates standards aimed at making AI technology ethical and safe. | AI Policy and Regulations |
Volunteer moderators at Stack Overflow, a popular forum for software developers to ask and answer questions run by Stack Exchange, have issued a general strike over the company’s new AI content policy, which says that all GPT-generated content is now allowed on the site, and suspensions over AI content must stop immediately. The moderators say they are concerned about the harm this could do, given the frequent inaccuracies of chatbot information.
“Stack Overflow, Inc. has decreed a near-total prohibition on moderating AI-generated content… tacitly allowing the proliferation of incorrect information (“hallucinations”) and unfettered plagiarism on the Stack Exchange network,” reads an open letter written by the moderators, who are all volunteers elected by the community.
“This poses a major threat to the integrity and trustworthiness of the platform and its content. Effective immediately, we are enacting a general moderation strike on Stack Overflow and the Stack Exchange network, in protest of this and other recent and upcoming changes to policy and the platform that are being forced upon us by Stack Overflow, Inc.”
The new policy, enacted in late May, requires moderators to stop moderating AI-generated content simply for being AI-generated. Without proper moderation of AI-generated content, though, moderators say the quality and accuracy of Stack Exchange’s information will quickly decline.
“AI chatbots are like parrots,” reads a post by moderators on Meta Stack Exchange further explaining their demands. “ChatGPT, for example, doesn’t understand the responses it gives you; it simply associates a given prompt with information it has access to and regurgitates plausible-sounding sentences. It has no way to verify that the responses it’s providing you with are accurate. ChatGPT is not a writer, a programmer, a scientist, a physicist, or any other kind of expert our network of sites is dependent upon for high-value content. When prompted, it’s just stringing together words based upon the information it was trained with. It does not understand what it’s saying.”
This, it continues, allows users to regurgitate ChatGPT’s answers without understanding them themselves, which goes against the very purpose of the site: “To be a repository of high-quality question and answer content.”
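The moderators' parrot analogy can be made concrete with a toy bigram model, which strings words together purely from co-occurrence statistics in whatever text it was fed, with no notion of whether the output is true. To be clear, this is only a minimal sketch of the general predict-the-next-word idea, not how ChatGPT actually works, and the tiny corpus below is invented for the example.

```python
import random
from collections import defaultdict

# A minimal "stochastic parrot": a bigram model that picks each next word
# only from words that followed the current word in its (tiny, invented)
# training text. It has no model of truth, just of what tends to come next.

corpus = (
    "the answer is cached in memory the answer is computed at runtime "
    "the cache is cleared at runtime the answer depends on the cache"
).split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 10) -> str:
    """String together plausible-looking words with no understanding of them."""
    word, output = start, [start]
    for _ in range(length):
        options = next_words.get(word)
        if not options:
            break
        word = random.choice(options)  # plausible continuation, not verified fact
        output.append(word)
    return " ".join(output)

print(generate("the"))  # fluent-sounding, but nothing here "understands" caching
```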
The content of the new AI policy is one big problem. The other, moderators say, is the lack of transparency surrounding it. They write in the post that in December, the site had enacted a temporary policy banning all ChatGPT use due to its “general inaccuracy” and violations of the site’s referencing requirements. This policy was supported both by volunteer moderators and Stack Exchange staff, and resulted in many post removals and user suspensions.
However, on May 29, the moderators write that a new policy was implemented in private, requiring “an immediate cessation of issuing suspensions for AI-generated content and to stop moderating AI-generated content on that basis alone.” The following day, a slightly different version of the policy was released to the public, without the language requiring moderators to stop restricting all AI content.
“The new policy overrode established community consensus and previous CM support, was not discussed with any community members, was presented misleadingly to moderators and then even more misleadingly in public, and is based on unsubstantiated claims derived from unreviewed and unreviewable data analysis,” the Meta Stack Exchange post reads.
It continues, “The fact that you have made one point in private, and one in public, which differ so significantly has put the moderators in an impossible situation, and made them targets for being accused of being unreasonable, and exaggerating the effect of the new policy.”
Moderators are also demanding non-AI-related improvements to the site. “The strike is also in large part about a pattern of behavior recently exhibited by Stack Exchange, Inc,” the post reads. “The company has once again ignored the needs and established consensus of its community, instead focusing on business pivots at the expense of its own Community Managers.” One example they list is the chat function, which they say is desperately out-of-date but has been ignored for years.
The strike is the first major action against ChatGPT content flooding online sites. But moderators on other forums are similarly concerned. Moderators on Reddit have braced for a slew of AI-generated posts with inaccurate information. One Reddit moderator, who wished to remain anonymous, told Motherboard that Reddit’s ChatGPT-powered bot problem is “pretty bad,” and that several hundred accounts had already been manually removed from the site, since Reddit’s automated systems struggle with AI-created content.
The moderators want the AI policy to be retracted and revised, the inconsistency between the public and private versions of the policy to be resolved and apologized for, and the company to “stop being dishonest” about its relationship with the community.
Stack Overflow’s Vice President of Community, Philippe Beaudette, told Motherboard in a statement that, “A small number of moderators (11%) across the Stack Overflow network have stopped engaging in several activities, including moderating content. The primary reason for this action is dissatisfaction with our position on detection tools regarding AI-generated content. Stack Overflow ran an analysis and the ChatGPT detection tools that moderators were previously using have an alarmingly high rate of false positives.” The moderators write in their post that they were aware of the problems with the detection tool.
“We stand by our decision to require that moderators stop using the tools previously used,” Beaudette continued. “We are confident that we will find a path forward. We regret that actions have progressed to this point, and the Community Management team is evaluating the current situation as we work hard to stabilize things in the short term.” | AI Policy and Regulations |
Civil society groups urge White House to make AI guidelines into binding policy
The White House is facing a call from a coalition of civil society groups to make its proposed guidelines for artificial intelligence (AI) regulation into binding policy as part of a forthcoming executive order, according to a letter sent Thursday.
The coalition of civil, technology and human rights organizations sent a letter to the White House urging the Biden administration to make the AI Bill of Rights, which the administration released a blueprint for in October, into binding government policy on use of AI by federal agencies, contractors and federal grant recipients.
“Simply put, the federal government should not use an AI system unless it is shown to be effective, safe, and nondiscriminatory. AI should work, and work for everyone,” the letter stated.
The letter follows the White House last month saying it is developing an executive order related to responsible AI innovation.
The blueprint for an AI Bill of Rights lays out guidelines for the booming industry. It is one of several steps the administration has taken to set in motion regulation for AI.
Last month the administration said it secured voluntary commitments aimed at managing risks posed by AI from seven top companies, including Google, Microsoft and ChatGPT creator OpenAI.
But the lack of binding commitments leaves guardrails largely up to the tech industry to set for itself.
The letter is signed by nine groups, including the Center for American Progress, the Center for Democracy & Technology, the NAACP and the Leadership Conference on Civil and Human Rights.
The groups said the new executive order should direct the executive branch to “immediately implement the AI Bill of Rights for federal agencies, contractors and grantees.” As the largest employer in the country, the groups wrote, the federal government has “enormous ability to shape the emerging AI policy and business landscape.”
The requirement for federal agencies should extend toward law enforcement and the national security community, the groups wrote.
“The forthcoming AI EO presents a clear opportunity to implement the White House’s own AI Bill of Rights. We urge you not to miss this critical chance to operationalize the values your administration has uplifted,” they wrote.
At the same time as the White House weighs action, Congress is also considering how to regulate AI. Senate Majority Leader Chuck Schumer (D-N.Y.) revealed a framework for AI policy, and organized briefings for senators on risks and opportunities from AI, but no clear regulatory package has yet emerged.
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed. | AI Policy and Regulations |
GitHub CEO Thomas Dohmke says that open source developers should be made exempt from the European Union’s (EU) proposed new artificial intelligence (AI) regulations, saying that the opportunity is still there for Europe to lead on AI.
“Open source is forming the foundation of AI in Europe,” Dohmke said onstage at the EU Open Source Policy Summit in Brussels. “The U.S. and China don’t have to win it all.”
The regulations in question come via The Artificial Intelligence Act (AI Act), first proposed back in April 2021 to address the growing reach of AI into our every day lives. The rules would govern AI applications based on their perceived risks, and would effectively be the first AI-centric laws introduced by any major regulatory body.
The European Parliament is set to vote on a draft version of the AI Act in the coming months, and depending on what discussions and debates follow, it could be adopted by the end of 2023.
Open source + AI
As many will know, open source and AI are intrinsically linked, given that collaboration and shared data are pivotal to developing AI systems. As well-meaning as the AI Act might be, critics argue that it could have significant unintended consequences for the open source community, which in turn could hamper the progress of AI. The crux of the problem is that the Act would likely create legal liability for general purpose AI systems (GPAI), and bestow more power and control on the big tech firms, given that independent open source developers don’t have the resources to contend with legal wrangles.
So, why would GitHub — a $7.5 billion U.S. company owned by Microsoft — be concerned about regulations on the other side of the pond? There are multiple reasons. Open source software by its very nature is distributed, and GitHub — which recently passed 100 million users — relies on developers globally. Indeed, a report from VC firm Runa Capital this week indicated that 58% of the fastest-growing open source startups are based outside the U.S., with Germany, France and the U.K. (though it isn’t governed by EU regulations) in particular central to this.
Perhaps more important is the fact that Europe has emerged as a driving force behind tech regulations, as evidenced by its GDPR data privacy and protection rules. Put simply, what happens in Europe can ripple into other countries and quickly become a global standard.
“The AI act is so crucial,” Dohmke said onstage. “This policy could well set the precedent for how the world regulates AI. It is foundationally important. It is important for European technological leadership, and for the future of the European economy itself. It must be fair and balanced to the open source community.”
Big bucks
Microsoft and GitHub stand to benefit from a fertile open source landscape, evidenced by their potentially lucrative Copilot tool that helps developers code using technology trained on the work of open source developers. Microsoft, GitHub and AI research lab OpenAI, in which Microsoft is heavily invested, are facing a class action lawsuit for their endeavors.
Elsewhere, OpenAI's much-hyped text-generating AI phenomenon ChatGPT is also in the spotlight, with the EU's Internal Market Commissioner Thierry Breton noting in an interview with Reuters today that ChatGPT's transformative and wide-reaching applications underscore the need for robust regulation.
“As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks,” Breton told Reuters. “This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data.”
Pretty much the entire world of AI as we know it today has been built on an open source foundation, and anyone with an interest in commercializing AI needs the open source status quo to continue. The big tech firms, including Microsoft, recognize that they might have more legal battles on their hands as a result of impending AI regulations, but at the very least they don’t want open source developers deterred from their work.
Dohmke said that the AI Act can bring “the benefits of AI according to the European values and fundamental rights,” adding that lawmakers have a big part to play in achieving this.
“This is why I believe that the open source developers should be exempt from the AI act,” he said. “Because ultimately this comes down to people. The open source community is not a community of entities. It’s a community of people and the compliance burden should fall on entities, it should fall on companies that are shipping products. OSS developers are often just volunteers, many of them are working two jobs. They are hobbyists and scientists, academics and doctors, professors and university students all alike, and they don’t usually stand to profit from their contributions. They certainly don’t have big budgets, or their own compliance department.” | AI Policy and Regulations |
US senators are proving slow studies when it comes to the generative artificial intelligence tools that are poised to upend life as we know it. But they'll be tested soon—and the rest of us through them—if their new private tutors are to be trusted.

In a historic first, yesterday upwards of 60 senators sat like school children—not allowed to speak or even raise their hands—in a private briefing where some 20 Silicon Valley CEOs, ethicists, academics, and consumer advocates prophesied about AI's potential to upend, heal, or even erase life as we knew it.

"It's important for us to have a referee," Elon Musk, the CEO of Tesla, SpaceX, and X (formerly Twitter), told a throng of paparazzi-like press corps waiting on the sidewalk outside the briefing. "[It] may go down in history as very important to the future of civilization."

The weight of the moment is lost on no one, especially after Musk warned senators inside the room of the "civilizational risks" of generative AI. As many senators grapple with AI basics, there's still time to influence the Senate's collective thinking before lawmakers try to do what they've failed to do in recent years: regulate the emerging disruptive tech.

Inside the briefing room, there was consensus on the dais that the federal government's regulatory might is needed. At one point, Senate Majority Leader Chuck Schumer, the New York Democrat who organized the briefing, asked his assembled guests, "Does the government need to play a role in regulating AI?"

"Every single person raised their hand, even though they had diverse views," Schumer continued. "So that gives us a message here: We have to try to act, as difficult as the process may be."

The raising of diverse hands felt revelatory to many. "I think people all agreed that this is something that we need the government's leadership on," said Sam Altman, CEO of OpenAI, the maker of ChatGPT. "Some disagreement about how it should happen, but unanimity [that] this is important and urgent."

The devilish details are haunting, though. Because generative AI is so all-encompassing, a debate over regulating it can quickly expand to include every divisive issue under the sun, which was on display in the briefing right alongside the show of unity, according to attendees who spoke to WIRED.

To the surprise of many, the session was replete with specifics. Some attendees brought up their need for more high-skilled workers, while Bill Gates focused on feeding the globe's hungry. Some envision a sweeping new AI agency, while others argue existing entities—like the National Institute of Standards and Technology, or NIST, which was mentioned by name—are better suited to regulate in real-time (well, AI-time).

"It was a very good pairing. Better than I expected," says senator Cynthia Lummis, a Wyoming Republican who attended the briefing. "I kind of expected it to be a nothing burger, and I learned a lot. I thought it was extremely helpful, so I'm really glad I went. Really glad."

Like many in the room, Lummis' ears perked when a speaker called out Section 230 of the 1996 Communications Decency Act—Silicon Valley firms' favored legislative shield from liability for what users publish on their social media platforms. "One of the speakers said, 'Make users and creators of the technology accountable, not immune from liability,'" Lummis says, reading from her exhaustive hand-scribbled notes. "In other words, he specifically said, 'Do not create a Section 230 for AI.'"

Lummis adds that the speaker who proposed this—she didn't identify him—"was sitting next to [Meta CEO Mark] Zuckerberg and he said it—one or two seats away, which I thought was fascinating."

Beyond the opinions of lawmakers, there were also disagreements among the experts invited to speak at the private briefing. The forum's attendees and other tech leaders are talking about building and expanding on gains from AI, but many Latinos still lack broadband internet access, says Janet Murguía, president of the Hispanic civil rights organization UnidosUS, who attended. That reality underscores how "existing infrastructure gaps keep us from being at the front door of AI," she says.

Murguía wants lawmakers to think about the needs of the Hispanic community: to prioritize job training, fight job displacement, and guard against "surveillance that gets away from the values of our democracy." In particular, she mentioned AI-driven tools like geolocation tracking and face recognition, pointing to a report released Tuesday that found that federal law enforcement agencies using face recognition lack safeguards to protect privacy and civil rights.

The resounding message she heard from tech CEOs was a desire for US leadership in AI policy. "Whether it was Mark Zuckerberg or Elon Musk or Bill Gates or [Alphabet CEO] Sundar Pichai, there was a clear resonance that the US must take the lead in AI policy and regulation," she says.

Murguía was glad to see women like Maya Wiley from the Leadership Conference on Civil and Human Rights and union leaders at the forum, which she called impressive and historic. But she wants to see more of society in the room at the next forum, saying, "We can't have the same small circle of folks that aren't diverse making these decisions."

In her remarks during Wednesday's briefing, American Federation of Teachers president Randi Weingarten highlighted WIRED reporting that $400 can bankroll a disinformation campaign. Later, Tristan Harris from the Center for Humane Technology talked about how $800 and a few hours of work stripped Meta's Llama 2 language model of safety controls and made it share instructions on how to make a biological weapon. "It's like we were having a debate about how little it costs to ruin the world," Weingarten says, pointing to Musk's comment about how AI could spell the end of civilization.

Weingarten credits Schumer for bringing together people at a critical moment in history when there's both tremendous potential for AI to do good for humanity and potential to undermine democracy and human decision-making. Teachers and students deserve protections from inequality, identity theft, disinformation, and other harms that AI can fuel, she says, and meaningful federal legislation should protect privacy and look to resolve issues like job displacement. "We want the responsibility to keep up with the innovation and think that that is what makes the innovation sustainable, like commercial air and passenger airlines. The innovation would not have been sustainable without a real commitment to safety," says Weingarten.

Ahead of the forum, Inioluwa Deb Raji, a UC Berkeley researcher, argued that the most reliable experts on real-world harms wrought by AI come from outside corporations.
She told WIRED she was thankful she was in the room to reiterate her opinion. A few times, she heard some people argue that the voluntary commitments to assess AI systems before deployment, made between major AI companies and the Biden administration, are led by corporations because they built the technology and therefore understand it best.

In response, she said that perhaps that's true, but hearing from people impacted by AI systems and examining how they're impacted is a valid and important form of expertise. Their experience informs how to regulate AI and develop standards. She knows from years of experience auditing AI systems that they don't always work very well, and can fail in unexpected ways and endanger human lives. The work of independent auditors, she argued during the briefing, opens things up for more investigation by civil society. "I'm glad I could be there to bring up some non-corporate talking points, but I wish I had more backup," Raji says.

Some commonly known tensions came up, such as whether open or closed source AI is best, and the importance of addressing the ways AI models that exist today harm people versus hypothetical existential risks that don't exist yet. While Musk, who signed a letter in favor of a pause in AI development earlier this year, talked about the possibility of AI wiping out civilization, Raji criticized Tesla's Autopilot AI, which has faced criticism following passenger deaths.

"Maybe I should have cared a little more about the independent wealth of people sitting two steps away from me, but I feel like it wasn't that intimidating because I knew that they were repeating points that I've heard before from corporate representatives at these companies about these exact same topics, so I had a sense of what to expect," she says.

Despite some disagreements, Raji says, some of the strongest and most surprising moments of the meeting occurred when she heard a consensus emerge that government regulation of AI is necessary. Those moments made it seem as if there may be a path to bipartisan legislation. "That was actually pretty educational for me and probably for the senators," she says.

There's still an aversion to new regulations among many Republicans, which is why Senate Commerce chair Maria Cantwell, a Democrat from Washington state, was struck by how Microsoft CEO Satya Nadella framed the challenge. "When it comes to AI, we shouldn't be thinking about autopilot—like, you need to have copilots. So who's going to be watching, you know, this activity and making sure that it's done correctly?" Cantwell says.

While all the CEOs, union bosses, and civil rights advocates were asked to raise their hands, one flaw with muzzling senators, according to critics on both sides of the proverbial aisle, is that lawmakers weren't easily able to game out where their allies are in the Senate. And coalitions are key to compromise.

"There's no feeling in the room," says senator Elizabeth Warren, a Massachusetts Democrat. "Closed-door for tech giants to come in and talk to senators and answer no tough questions is a terrible precedent for trying to develop any kind of legislation."

While Warren sat in the front row—close enough so the assembled saw the whites of her fiery, consumer-focused eyes—other critics boycotted the affair, even as they sought out the throngs of reporters huddled in the halls.

"My concern is that his legislation is leading to nowhere. I mean, I haven't seen any indication [Schumer's] actually going to put real legislation on the floor. It's a little bit like with antitrust the last two years, he talks about it constantly and does nothing about it," says senator Josh Hawley, a Missouri Republican. "Part of what this is is a lot of song and dance that covers the fact that actually nothing is advancing. The whole fact that it's not public, it's just absurd."

Absurd or not, some inside were placated, in part, because senators were reminded AI isn't just our future; it's been in our lives for years—from social media, to Google searches, to self-driving cars and video doorbells—without destroying the world.

"I learned that we're in good shape, that I'm not overly concerned about it," says senator Roger Marshall, a Kansas Republican. "I think artificial intelligence has been around for decades, most of it machine learning."

Marshall stands out as an outlier, though his laissez-faire thinking is becoming en vogue in the GOP, which critics say is due to all the lobbying done by the firms whose leaders were in Wednesday's briefing. "The good news is, the United States is leading the way on this issue. I think as long as we stay on the front lines, like we have the military weapons advancement, like we have in satellite investments, we're gonna be just fine," Marshall says. "I'm very confident we're moving in the right direction."

Still, studious attendees left with a renewed sense of urgency, even if, initially, that means more studying of a technology few truly understand, including those on the dais. It seems the more one learns about the sweeping scope of generative AI, the more they recognize there's no end in sight to the Senate's new regulatory role.

"Are we ready to go out and write legislation? Absolutely not," says senator Mike Rounds, a South Dakota Republican who helped Schumer run the bipartisan AI forums, the next of which will focus on innovation. "We're not there."

In what was once heralded as the "world's greatest deliberative body," even the timeline is debatable. "Everyone's nodding their head saying, 'Yeah, this is something we need to act on,' so now the question is, how long does it take to get to a consensus?" says senator John Hickenlooper, a Colorado Democrat. "But in broad strokes, I think, that it's not unreasonable to expect to get something done next year." | AI Policy and Regulations |
WASHINGTON -- Senate Majority Leader Chuck Schumer has been talking for months about accomplishing a potentially impossible task: Passing bipartisan legislation within the next year that both encourages the rapid development of artificial intelligence and also mitigates its biggest risks. On Wednesday, he is convening a meeting of some of the country’s most prominent technology executives, among others, to ask them how Congress should do it.
The closed-door forum on Capitol Hill will include almost two dozen tech leaders and advocates, and some of the industry’s biggest names: Meta’s Mark Zuckerberg and Elon Musk, the CEO of X and Tesla, as well as former Microsoft CEO Bill Gates. All 100 senators are invited, but the public is not.
Schumer, D-N.Y., who is leading the forum with Republican Sen. Mike Rounds of South Dakota, won’t necessarily take the tech executives’ advice as he works with Republicans and fellow Democrats to try and ensure some oversight of the burgeoning sector. But he’s hoping that they will give senators some realistic direction as he tries to do what Congress hasn't done for many years — pass meaningful regulation of the tech industry.
“It’s going to be a fascinating group because they have different points of view,” Schumer said in an interview with The Associated Press ahead of the forum. “Hopefully we can weave it into a little bit of some broad consensus.”
Rounds, who spoke to AP with Schumer on Tuesday, said Congress needs to get ahead of fast-moving AI by making sure it continues to develop “on the positive side” while also taking care of potential issues surrounding data transparency and privacy.
“AI is not going away, and it can do some really good things or it can be a real challenge,” Rounds said.
Schumer says regulation of artificial intelligence will be “one of the most difficult issues we can ever take on,” and ticks off the reasons why: It’s technically complicated, it keeps changing and it “has such a wide, broad effect across the whole world,” he said.
Congress has a lackluster track record when it comes to regulating technology. Lawmakers have lots of proposals — many of them bipartisan — but have mostly failed to agree on major legislation to regulate the industry as powerful tech companies have resisted.
Many lawmakers point to the failure to pass any legislation surrounding social media — bills have stalled in both chambers that would better protect children, regulate activity around elections and mandate stricter privacy standards, among other measures.
“We don’t want to do what we did with social media, which is let the techies figure it out, and we’ll fix it later,” says Senate Intelligence Committee Chairman Mark Warner, D-Va., on the AI push.
Schumer’s bipartisan working group — comprised of Rounds, Democratic Sen. Martin Heinrich of New Mexico and Republican Sen. Todd Young of Indiana — is hoping that the rapid growth of artificial intelligence will create more urgency. Sparked by the release of ChatGPT less than a year ago, businesses across many sectors have been clamoring to apply new generative AI tools that can compose human-like passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over its potential societal harms and prompted calls for more transparency in how the data behind the new products is collected and used.
“You have to have some government involvement for guardrails,” Schumer said. “If there are no guardrails, who knows what could happen.”
Schumer says Wednesday’s forum will focus on big ideas like whether the government should be involved at all, and what questions Congress should be asking. Each participant will have three minutes to speak on a topic of their choosing, and Schumer and Rounds will moderate open discussions among the group in the morning and afternoon.
Some of Schumer’s most influential guests, including Musk and Sam Altman, CEO of ChatGPT-maker OpenAI, have signaled more dire concerns evoking popular science fiction about the possibility of humanity losing control to advanced AI systems if the right safeguards are not in place.
But for many lawmakers and the people they represent, AI's effects on employment and the challenge of navigating a flood of AI-generated misinformation are more immediate concerns.
A recent report from the market research group Forrester projected that generative AI technology could replace 2.4 million jobs in the U.S. by 2030, many of them white-collar roles not affected by previous waves of automation. This year alone the number of lost jobs could total 90,000, the report said, though far more jobs will be reshaped than eliminated.
AI experts have also warned of the growing potential of AI-generated online disinformation to influence elections, including the upcoming 2024 presidential race.
On the more positive side, Rounds says he would like to see the empowerment of new medical technologies that could save lives and allow medical professionals to access more data. That topic is “very personal to me,” Rounds says, after his wife died of cancer two years ago.
Many members of Congress agree that legislation will probably be needed in response to the quick escalation of artificial intelligence tools in government, business and daily life. But there is little consensus on what that should be, or what might be needed. There is also some division — some members worry more about overregulation, and others worry more about the potential risks of an unchecked industry.
“I am involved in this process in large measure to ensure that we act, but we don’t act more boldly or over-broadly than the circumstances require,” says Sen. Young, one of the members of Schumer’s working group. “We should be skeptical of government, which is why I think it’s important that you got Republicans at the table.”
Young says that Schumer has reassured him that he will be “hypersensitive to overshooting as we address some of the potential harms of AI.”
Some Republicans have been wary of following the path of the European Union, which signed off in June on the world’s first set of comprehensive rules for artificial intelligence. The EU’s AI Act will govern any product or service that uses an AI system and classify them according to four levels of risk, from minimal to unacceptable.
A group of corporations has called on EU leaders to rethink the rules, arguing that it could make it harder for companies in the 27-nation bloc to compete with rivals overseas in the use of generative AI.
In the United States, most major tech companies have expressed support for AI regulations, though they don’t necessarily agree on what that means.
“We’ve always said that we think that AI should get regulated,” said Dana Rao, general counsel and chief trust officer for software company Adobe. “We’ve talked to Europe about this for the last four years, helping them think through the AI Act they’re about to pass. There are high-risk use cases for AI that we think the government has a role to play in order to make sure they’re safe for the public and the consumer.”
Adobe, which makes Photoshop and the new AI image-generator Firefly, is proposing its own federal legislation: an “anti-impersonation” bill to protect artists as well as AI developers from the misuse of generative AI tools to produce derivative works without a creator’s consent.
Senators say they will figure out a way to regulate the industry, despite the odds.
“Make no mistake. There will be regulation. The only question is how soon, and what,” said Sen. Richard Blumenthal, D-Conn., at a Tuesday hearing on legislation he wrote with Republican Sen. Josh Hawley of Missouri.
Blumenthal’s framework calls for a new “licensing regime” that would require tech companies to seek licenses for high-risk AI systems. It would also create an independent oversight body led by experts and hold companies liable when their products breach privacy or civil rights or endanger the public.
“Risk-based rules, managing the risks, is what we need to do here,” Blumenthal said.
___
O'Brien reported from Providence, Rhode Island. Associated Press writers Ali Swenson in New York and Kelvin Chan in London contributed to this report. | AI Policy and Regulations |
It's the first big step to hold AI to account.

The White House wants Americans to know: The age of AI accountability is coming. President Joe Biden has today unveiled a new AI Bill of Rights, which outlines five protections Americans should have in the AI age.

Biden has previously called for stronger privacy protections and for tech companies to stop collecting data. But the US—home to some of the world's biggest tech and AI companies—has so far been one of the only Western nations without clear guidance on how to protect its citizens against AI harms.

Today's announcement is the White House's vision of how the US government, technology companies, and citizens should work together to hold AI accountable. However, critics say the plan lacks teeth and the US needs even tougher regulation around AI.

In September, the administration announced core principles for tech accountability and reform, such as stopping discriminatory algorithmic decision-making, promoting competition in the technology sector, and providing federal protections for privacy. The AI Bill of Rights, the vision for which was first introduced a year ago by the Office of Science and Technology Policy (OSTP), a US government department that advises the president on science and technology, is a blueprint for how to achieve those goals. It provides practical guidance to government agencies and a call to action for technology companies, researchers, and civil society to build these protections.

"These technologies are causing real harms in the lives of Americans—harms that run counter to our core democratic values, including the fundamental right to privacy, freedom from discrimination, and our basic dignity," a senior administration official told reporters at a press conference.

AI is a powerful technology that is transforming our societies. It also has the potential to cause serious harm, which often disproportionately affects minorities. Facial recognition technologies used in policing and algorithms that allocate benefits are not as accurate for ethnic minorities, for example.

The new blueprint aims to redress that balance. It says that Americans should be protected from unsafe or ineffective systems; that algorithms should not be discriminatory and systems should be used as designed in an equitable way; and that citizens should have agency over their data and should be protected from abusive data practices through built-in safeguards. Citizens should also know whenever an automated system is being used on them and understand how it contributes to outcomes. Finally, people should always be able to opt out of AI systems in favor of a human alternative and have access to remedies when there are problems.

"We want to make sure that we are protecting people from the worst harms of this technology, no matter the specific underlying technological process used," a second senior administration official said.

The OSTP's AI Bill of Rights is "impressive," says Marc Rotenberg, who heads the Center for AI and Digital Policy, a nonprofit that tracks AI policy. "This is clearly a starting point. That doesn't end the discussion over how the US implements human-centric and trustworthy AI," he says. "But it is a very good starting point to move the US to a place where it can carry forward on that commitment."

Willmary Escoto, US policy analyst for the digital rights group Access Now, says the guidelines skillfully highlight the "importance of data minimization" while "naming and addressing the diverse harms people experience from other AI-enabled technologies, like emotion recognition." "The AI Bill of Rights could have a monumental impact on fundamental civil liberties for Black and Latino people across the nation," says Escoto.

The tech sector welcomed the White House's acknowledgment that AI can also be used for good. Matt Schruers, president of the tech lobby CCIA, which counts companies including Google, Amazon, and Uber as its members, says he appreciates the administration's "direction that government agencies should lead by example in developing AI ethics principles, avoiding discrimination, and developing a risk management framework for government technologists."

Shaundra Watson, policy director for AI at the tech lobby BSA, whose members include Microsoft and IBM, says she welcomes the document's focus on risks and impact assessments. "It will be important to ensure that these principles are applied in a manner that increases protections and reliability in practice," Watson says.

While the EU is pressing ahead with regulations that aim to prevent AI harms and hold companies accountable for harmful AI tech, and has a strict data protection regime, the US has been loath to introduce new regulations. The newly outlined protections echo those introduced in the EU, but the document is nonbinding and does not constitute US government policy, because the OSTP cannot enact law. It will be up to lawmakers to propose new bills.

Russell Wald, director of policy for the Stanford Institute for Human-Centered AI, says the document lacks details or mechanisms for enforcement. "It is disheartening to see the lack of coherent federal policy to tackle desperately needed challenges posed by AI, such as federally coordinated monitoring, auditing, and reviewing actions to mitigate the risks and harm brought by deployed or open-source foundation models," he says.

Rotenberg says he'd prefer for the US to implement regulations like the EU's AI Act, an upcoming law that aims to add extra checks and balances to AI uses that have the most potential to cause harm to humans. "We'd like to see some clear prohibitions on AI deployments that have been most controversial, which include, for example, the use of facial recognition for mass surveillance," he says.

The AI Bill of Rights may set the stage for future legislation, such as the passage of the Algorithmic Accountability Act or the establishment of an agency to regulate AI, says Sneha Revanur, who leads Encode Justice, an organization that focuses on young people and AI. "Though it is limited in its ability to address the harms of the private sector, the AI Bill of Rights can live up to its promise if it is enforced meaningfully, and we hope that regulation with real teeth will follow suit," she says.

This story has been updated to include Sneha Revanur's quote. | AI Policy and Regulations |
- A committee of lawmakers in the European Parliament on Thursday approved the EU’s AI Act, moving it closer to becoming law.
- The regulation takes a risk-based approach to regulating artificial intelligence.
- The AI Act specifies requirements for developers of "foundation models" such as ChatGPT, including provisions to ensure that their training data doesn't violate copyright law.
A key committee of lawmakers in the European Parliament has approved a first-of-its-kind artificial intelligence regulation — moving it closer to becoming law.
The approval marks a landmark development in the race among authorities to get a handle on AI, which is evolving at breakneck speed. The law, known as the European AI Act, is the first law for AI systems in the West. China has already developed draft rules designed to manage how companies develop generative AI products like ChatGPT.
The law takes a risk-based approach to regulating AI, where the obligations for a system are proportionate to the level of risk that it poses.
The rules also specify requirements for providers of so-called "foundation models" such as ChatGPT, which have become a key concern for regulators, given how advanced they're becoming and fears that even skilled workers will be displaced.
The AI Act categorizes applications of AI into four levels of risk: unacceptable risk, high risk, limited risk and minimal or no risk.
Unacceptable risk applications are banned by default and cannot be deployed in the bloc.
They include:
- AI systems using subliminal techniques, or manipulative or deceptive techniques to distort behavior
- AI systems exploiting vulnerabilities of individuals or specific groups
- Biometric categorization systems based on sensitive attributes or characteristics
- AI systems used for social scoring or evaluating trustworthiness
- AI systems used for risk assessments predicting criminal or administrative offenses
- AI systems creating or expanding facial recognition databases through untargeted scraping
- AI systems inferring emotions in law enforcement, border management, the workplace, and education
Several lawmakers had called for making the measures more expansive to ensure they cover ChatGPT.
To that end, requirements have been imposed on "foundation models," such as large language models and generative AI.
Developers of foundation models will be required to apply safety checks, data governance measures and risk mitigations before making their models public.
They will also be required to ensure that the training data used to inform their systems do not violate copyright law.
"The providers of such AI models would be required to take measures to assess and mitigate risks to fundamental rights, health and safety and the environment, democracy and rule of law," Ceyhun Pehlivan, counsel at Linklaters and co-lead of the law firm's telecommunications, media and technology and IP practice group in Madrid, told CNBC.
"They would also be subject to data governance requirements, such as examining the suitability of the data sources and possible biases."
It's important to stress that, while the law has been passed by lawmakers in the European Parliament, it's a ways away from becoming law.
Google on Wednesday announced a slew of new AI updates, including an advanced language model called PaLM 2, which the company says outperforms other leading systems on some tasks.
Novel AI chatbots like ChatGPT have enthralled many technologists and academics with their ability to produce humanlike responses to user prompts powered by large language models trained on massive amounts of data.
But AI technology has been around for years and is integrated into more applications and systems than you might think. It determines what viral videos or food pictures you see on your TikTok or Instagram feed, for example.
The aim of the EU proposals is to provide some rules of the road for AI companies and organizations using AI.
The rules have raised concerns in the tech industry.
The Computer and Communications Industry Association said it was concerned that the scope of the AI Act had been broadened too much and that it may catch forms of AI that are harmless.
"It is worrying to see that broad categories of useful AI applications – which pose very limited risks, or none at all – would now face stringent requirements, or might even be banned in Europe," Boniface de Champris, policy manager at CCIA Europe, told CNBC via email.
"The European Commission's original proposal for the AI Act takes a risk-based approach, regulating specific AI systems that pose a clear risk," de Champris added.
"MEPs have now introduced all kinds of amendments that change the very nature of the AI Act, which now assumes that very broad categories of AI are inherently dangerous."
Dessi Savova, head of continental Europe for the tech group at law firm Clifford Chance, said that the EU rules would set a "global standard" for AI regulation. However, she added that other jurisdictions including China, the U.S. and U.K. are quickly developing their own responses.
"The long-arm reach of the proposed AI rules inherently means that AI players in all corners of the world need to care," Savova told CNBC via email.
"The right question is whether the AI Act will set the only standard for AI. China, the U.S., and the U.K. to name a few are defining their own AI policy and regulatory approaches. Undeniably they will all closely watch the AI Act negotiations in tailoring their own approaches."
Savova added that the latest AI Act draft from Parliament would put into law many of the ethical AI principles organizations have been pushing for.
Sarah Chander, senior policy adviser at European Digital Rights, a Brussels-based digital rights campaign group, said the laws would require foundation models like ChatGPT to "undergo testing, documentation and transparency requirements."
"Whilst these transparency requirements will not eradicate infrastructural and economic concerns with the development of these vast AI systems, it does require technology companies to disclose the amounts of computing power required to develop them," Chander told CNBC.
"There are currently several initiatives to regulate generative AI across the globe, such as China and the US," Pehlivan said.
"However, the EU's AI Act is likely to play a pivotal role in the development of such legislative initiatives around the world and lead the EU to again become a standards-setter on the international scene, similarly to what happened in relation to the General Data Protection Regulation." | AI Policy and Regulations |
AI regulation is hot. Ever since the success of OpenAI’s chatbot ChatGPT, the public’s attention has been grabbed by wonder and worry about what these powerful AI tools can do. Generative AI has been touted as a potential game-changer for productivity tools and creative assistants, but these tools are already showing the ways they can cause harm. Generative models have been used to generate misinformation, and they could be weaponized as spamming and scamming tools.
Everyone from tech company CEOs to US senators and leaders at the G7 meeting has in recent weeks called for international standards and stronger guardrails for AI technology. The good news? Policymakers don’t have to start from scratch.
We’ve analyzed six different international attempts to regulate artificial intelligence, set out the pros and cons of each, and given them a rough score indicating how influential we think they are.
A legally binding AI treaty
The Council of Europe, a human rights organization that counts 46 countries as its members, is finalizing a legally binding treaty for artificial intelligence. The treaty requires signatories to take steps to ensure that AI is designed, developed, and applied in a way that protects human rights, democracy, and the rule of law. The treaty could potentially include moratoriums on technologies that pose a risk to human rights, such as facial recognition.
If all goes according to plan, the organization could finish drafting the text by November, says Nathalie Smuha, a legal scholar and philosopher at the KU Leuven Faculty of Law who advises the council.
Pros: The Council of Europe includes many non-European countries, including the UK and Ukraine, and has invited others such as the US, Canada, Israel, Mexico, and Japan to the negotiating table. “It’s a strong signal,” says Smuha.
Cons: Each country has to individually ratify the treaty and then implement it in national law, which could take years. There’s also a possibility that countries will be able to opt out of certain elements that they don’t like, such as stringent rules or moratoriums. The negotiating team is trying to find a balance between strengthening protection and getting as many countries as possible to sign, says Smuha.
Influence rating: 3/5
The OECD AI principles
In 2019, countries that belong to the Organisation for Economic Co-operation and Development (OECD) agreed to adopt a set of nonbinding principles laying out some values that should underpin AI development. Under these principles, AI systems should be transparent and explainable; should function in a robust, secure, and safe way; should have accountability mechanisms; and should be designed in a way that respects the rule of law, human rights, democratic values, and diversity. The principles also state that AI should contribute to economic growth.
Pros: These principles, which form a sort of constitution for Western AI policy, have shaped AI policy initiatives around the world since. The OECD’s legal definition of AI will likely be adopted in the EU’s AI Act, for example. The OECD also tracks and monitors national AI regulations and does research on AI’s economic impact. It has an active network of global AI experts doing research and sharing best practices.
Cons: The OECD’s mandate as an international organization is not to come up with regulation but to stimulate economic growth, says Smuha. And translating the high-level principles into workable policies requires a lot of work on the part of individual countries, says Phil Dawson, head of policy at the responsible AI platform Armilla.
Influence rating: 4/5
The Global Partnership on AI
The brainchild of Canadian prime minister Justin Trudeau and French president Emmanuel Macron, the Global Partnership on AI (GPAI) was founded in 2020 as an international body that could share research and information on AI, foster international research collaboration around responsible AI, and inform AI policies around the world. The organization includes 29 countries, some in Africa, South America, and Asia.
Pros: The value of GPAI is its potential to encourage international research and cooperation, says Smuha.
Cons: Some AI experts have called for an international body similar to the UN’s Intergovernmental Panel on Climate Change to share knowledge and research about AI, and GPAI had potential to fit the bill. But after launching with pomp and circumstance, the organization has been keeping a low profile, and it hasn’t published any work in 2023.
Influence rating: 1/5
The EU’s AI Act
The European Union is finalizing the AI Act, a sweeping regulation that aims to regulate the most “high-risk” usages of AI systems. First proposed in 2021, the bill would regulate AI in sectors such as health care and education.
Pros: The bill could hold bad actors accountable and prevent the worst excesses of harmful AI by issuing huge fines and preventing the sale and use of noncomplying AI technology in the EU. The bill will also regulate generative AI and impose some restrictions on AI systems that are deemed to create “unacceptable” risk, such as facial recognition. Since it’s the only comprehensive AI regulation out there, the EU has a first-mover advantage. There is a high chance the EU’s regime will end up being the world’s de facto AI regulation, because companies in non-EU countries that want to do business in the powerful trading bloc will have to adjust their practices to comply with the law.
Cons: Many elements of the bill, such as facial recognition bans and approaches to regulating generative AI, are highly controversial, and the EU will face intense lobbying from tech companies to water them down. It will take at least a couple of years before it snakes its way through the EU legislative system and enters into force.
Influence rating: 5/5
Technical industry standards
Technical standards from standard-setting bodies will play an increasingly crucial role in translating regulations into straightforward rules companies can follow, says Dawson. For example, once the EU’s AI Act passes, companies that meet certain technical standards will automatically be in compliance with the law. Many AI standards exist already, and more are on their way. The International Organization for Standardization (ISO) has already developed standards for how companies should go about risk management and impact assessments and manage the development of AI.
Pros: These standards help companies translate complicated regulations into practical measures. And as countries start writing their own individual laws for AI, standards will help companies build products that work across multiple jurisdictions, Dawson says.
Cons: Most standards are general and apply across different industries. So companies will have to do a fair bit of translation to make them usable in their specific sector. This could be a big burden for small businesses, says Dawson. One bone of contention is whether technical experts and engineers should be drafting rules around ethical risks. “A lot of people have concerns that policymakers … will simply punt a lot of the difficult questions about best practice to industry standards development,” says Dawson.
Influence rating: 4/5
The United Nations
The United Nations, which counts 193 countries as its members, wants to be the sort of international organization that could support and facilitate global coordination on AI. In order to do that, the UN set up a new technology envoy in 2021. That year, the UN agency UNESCO and member countries also adopted a voluntary AI ethics framework, in which member countries pledge to, for example, introduce ethical impact assessments for AI, assess the environmental impact of AI, and ensure that AI promotes gender equality and is not used for mass surveillance.
Pros: The UN is the only meaningful place on the international stage where countries in the Global South have been able to influence AI policy. While the West has committed to OECD principles, the UNESCO AI ethics framework has been hugely influential in developing countries, which are newer to AI ethics. Notably, China and Russia, which have largely been excluded from Western AI ethics debates, have also signed the principles.
Cons: That raises the question of how sincere countries are in following the voluntary ethical guidelines, as many countries, including China and Russia, have used AI to surveil people. The UN also has a patchy track record when it comes to tech. The organization’s first attempt at global tech coordination was a fiasco: the diplomat chosen as technology envoy was suspended after just five days following a harassment scandal. And the UN’s attempts to come up with rules for lethal autonomous drones (also known as killer robots) haven’t made any progress for years.
Influence rating: 2/5
| AI Policy and Regulations |
A new battle is brewing between states and the federal government. This time the fight isn’t over taxes or immigration but rather the limits of regulating advanced artificial intelligence systems. Political disagreements around AI’s role in healthcare, in particular, could be the tip of the spear in that emerging skirmish.
Those were some of the concerns voiced by North Carolina Republican representative Greg Murphy, speaking this week at the Connected Health Initiative’s AI and the Future of Digital Healthcare event. Murphy, the only actively practicing surgeon in Congress and co-chair of the GOP Doctors Caucus, believes, like many, that the technology could transform healthcare, but warned against broadly applying the same rules and standards nationwide.
“The federal government does not know the difference between Montana and New Jersey, but the folks in Montana do,” Murphy said at the event according to Politico. “It should be up to the folks who understand it to control that.”
Doctors and technologists alike say predictive AI tools could radically improve healthcare by deeply scanning X-rays, CT scans, and MRIs for early signs of disease in ways unavailable to human doctors in the past. Generative AI chatbots trained specifically on a corpus of medical journals, on the other hand, can potentially assist doctors with quick medical suggestions, perform administrative tasks, or (in some cases already) help communicate with patients with more compassion. The American Medical Association estimates one in five doctors in the US already use some form of AI in their practice.
But even as its use proliferates, the rules governing what AI can and can’t be used for remain murky from state to state or are just flat-out nonexistent. That’s an issue, especially if future doctors choose to rely more on ChatGPT-style chatbots, which regularly spit fabricated facts out of thin air. Those AI “hallucinations” have already led to libel lawsuits in the legal field. Murphy worries doctors could one day face another conundrum in the age of advanced AI: What happens when a human doctor wants to overrule an AI’s medical suggestion?
“The challenge is: Do we lose our humanity in this?” Murphy asked at the event. “Do we let the machines control us or do we control them?”
Doctors probably aren’t anywhere near at risk of being overruled by an AI chatbot anytime soon. Still, states are drafting legislation to rein in more mundane, but more common, ways misused AI could harm patients. California’s proposed AB 1502, for example, would ban health insurers or healthcare service plans from using AI to discriminate against patients based on their race, gender, or other protected categories. Another proposed bill in Illinois would regulate the use of AI algorithms in diagnosing patients. Georgia has already enacted a law regulating AI use in conducting eye exams.
Those laws risk coming into conflict with far more widely covered federal AI regulations. In the past month, Senate Majority Leader Chuck Schumer has convened around half a dozen hearings specifically on AI legislation, with some of the biggest names and personalities in tech passing their way through his chambers to weigh in on the topic. Top AI firms like OpenAI, Microsoft, and Google, have already agreed to voluntary safety commitments proposed by the White House. Federal health agencies like the FDA, meanwhile, have issued their own recommendations on the issue.
It’s unlikely these quickly evolving federal rules governing AI will mesh perfectly with what individual states want. If the battle over AI regulation looks anything like the disagreements over digital privacy before it, rules governing the technology’s use could vary widely from state to state. A lack of strong federal regulations explicitly barring doctors from making operational decisions based on an AI chatbot, for example, could encourage lawmakers to push for their own stricter requirements at the state level.
At least for now, US adults have made it clear they largely aren’t interested in AI dictating their next doctor’s office visit. More than half (60%) of adults recently surveyed by Pew Research said they would feel uncomfortable if their healthcare provider used AI to diagnose a disease or recommend treatment. Only a third of respondents thought using AI in those scenarios would lead to better outcomes for patients. At the same time, new polling shows Americans overwhelmingly want more government intervention when it comes to AI. More than eight in ten respondents (82%) in a recent survey conducted by the AI Policy Institute said they did not trust tech companies to regulate themselves on AI.
UNITED NATIONS -- Britain pitched itself to the world Friday as a ready leader in shaping an international response to the rise of artificial intelligence, with Deputy Prime Minister Oliver Dowden telling the U.N. General Assembly his country was “determined to be in the vanguard.”
Touting the United Kingdom's tech companies, its universities and even Industrial Revolution-era innovations, he said the nation has “the grounding to make AI a success and make it safe.” He went on to suggest that a British AI task force, which is working on methods for assessing AI systems' vulnerability, could develop expertise to offer internationally.
His remarks at the assembly's annual meeting of world leaders previewed an AI safety summit that British Prime Minister Rishi Sunak is convening in November. Dowden's speech also came as other countries and multinational groups — including the European Union, the bloc that Britain left in 2020 — are making moves on artificial intelligence.
The EU this year passed pioneering regulations that set requirements and controls based on the level of risk that any given AI system poses, from low (such as spam filters) to unacceptable (for example, an interactive children's toy that talks up dangerous activities).
The U.N., meanwhile, is pulling together an advisory board to make recommendations on structuring international rules for artificial intelligence. Members will be appointed this month, Secretary-General António Guterres told the General Assembly on Tuesday; the group's first take on a report is due by the end of the year.
Major U.S. tech companies have acknowledged a need for AI regulations, though their ideas on the particulars vary. And in Europe, a roster of big companies ranging from French jetmaker Airbus to Dutch beer giant Heineken signed an open letter urging the EU to reconsider its rules, saying they would put European companies at a disadvantage.
“The starting gun has been fired on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible," Dowden said. He argued that “the most important actions we will take will be international.”
Listing hoped-for benefits — such as improving disease detection and productivity — alongside artificial intelligence's potential to wreak havoc with deepfakes, cyberattacks and more, Dowden urged leaders not to get "trapped in debates about whether AI is a tool for good or a tool for ill."
"It will be a tool for both,” he said.
It's “exciting. Daunting. Inexorable,” Dowden said, and the technology will test the international community “to show that it can work together on a question that will help to define the fate of humanity.” | AI Policy and Regulations |
A new poll of more than 1,200 registered voters provides some of the clearest data yet illustrating the public’s desire to rein in AI.
54% of registered US voters surveyed in a new poll conducted by The Tech Oversight Project agreed Congress should take “swift action to regulate AI” in order to promote privacy and safety and ensure the tech provides the “maximum benefit to society.” Republicans and Democrats expressed nearly identical support for reining in AI, a rare sign of bipartisanship hinting at a growing consensus about the rapidly evolving technology. 41% of the voters said they’d rather see that regulation come from government intervention as opposed to just 20% who thought tech companies should regulate themselves. The polled voters also didn’t seem to buy arguments from tech executives who warn new AI regulation could set the US economy back. Just 15% of the respondents said regulating AI would stifle innovation.
“While the new technology of artificial intelligence—and the public’s understanding of it—is evolving rapidly, it is deeply telling that a majority of Americans do not trust Big Tech to prioritize safety and regulate it, and by a two-to-one margin want Congress to act,” Tech Oversight Project Deputy Executive Director Kyle Morris told Gizmodo.
The poll drops at what could turn out to be an inflection point for government AI policy. Hours prior to the poll’s release, the Biden administration met with the leaders of four leading AI companies to discuss AI risks. The administration also revealed the National Science Foundation would provide $140 million in funding to launch seven new National AI Research Institutes.
Even without polling, there are some clear signs the national conversation surrounding AI has shifted away from mild amusement and excitement around AI generators and chatbots toward potential harms. What exactly those harms are, however, varies widely depending on who you ask. Last month, more than 500 tech experts and business leaders signed an open letter calling on AI labs to immediately pause development on all new large language models more powerful than OpenAI’s GPT-4 over concerns they could pose “profound risks to society and humanity.” The signatories, which included OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak, said they’d support a government-mandated moratorium on the tech if companies refused to willingly play ball.
Other leading researchers in the field, like University of Washington Professor of Linguistics Emily M. Bender and AI Now Institute Managing Director Sarah Myers West, agree AI needs more regulation but balk at the increasingly common trend of ascribing human-like characteristics to machines essentially playing a highly advanced game of word association. AI systems, the researchers previously told Gizmodo, aren’t sentient or human, but that doesn’t matter. They fear the technology’s tendency to make up facts and present them as truth could lead to a flood of misinformation, making it even more difficult to determine what’s true. The tech’s baked-in biases from discriminatory datasets, they say, mean negative impacts could be even worse for marginalized groups. Conservatives, fearful of “woke” biases in chatbot results, meanwhile, have applauded the idea of Musk creating his own politically incorrect “BasedAI.”
“Unless we have policy intervention, we’re facing a world where the trajectory for AI will be unaccountable to the public, and determined by the handful of companies that have the resources to develop these tools and experiment with them in the wild,” West told Gizmodo.
Congress, a legislative body not known for keeping up with new tech, is scrambling to pick up the pace when it comes to AI tech policy. Last week, Colorado Sen. Michael Bennet introduced a bill calling for the formation of an “AI Task Force” to identify potential civil liberty issues posed by AI and provide recommendations. Days before that, Massachusetts Sen. Ed Markey and California Rep. Ted Lieu introduced their own bill attempting to prevent AI from having control over nuclear weapons launches, a scenario they worry could lead to a Hollywood-style nuclear holocaust. Senate Majority Leader Chuck Schumer similarly released his own AI framework attempting to increase transparency and accountability around the tech.
“The Age of AI is here, and here to stay,” Schumer said in a statement. “Now is the time to develop, harness, and advance its potential to benefit our country for generations.”
This week, the Biden administration signaled its own interest in the area by meeting with the heads of four leading AI companies to discuss AI safety. FTC Chair Lina Khan, one of the country’s top regulatory enforcers, recently published her own New York Times editorial with a clear, direct message: “We must regulate AI.”
Much of that sudden movement, according to lawmakers speaking in a recent Politico article, comes from a strong public response to ChatGPT and other popular emerging chatbots. The mass popularity of the apps, and general confusion around their ability to create convincing and sometimes disturbing responses, has reportedly struck a nerve in ways few other tech issues have.
“AI is one of those things that kind of moved along at ten miles an hour, and suddenly now is 100, going on 500 miles an hour,” House Science Committee Chair Frank Lucas told Politico. “It’s got everybody’s attention, and we’re all trying to focus,” said Lucas.
| AI Policy and Regulations |
Mary Clare Jalonick, Associated Press
Matt O'Brien, Associated Press
WASHINGTON (AP) — Senate Majority Leader Chuck Schumer has been talking for months about accomplishing a potentially impossible task: passing bipartisan legislation within the next year that encourages the rapid development of artificial intelligence and mitigates its biggest risks.
On Wednesday, he convened a meeting of some of the country’s most prominent technology executives, among others, to ask them how Congress should do it.
The closed-door forum on Capitol Hill included almost two dozen tech executives, tech advocates, civil rights groups and labor leaders. The guest list featured some of the industry’s biggest names: Meta’s Mark Zuckerberg, Elon Musk of X and Tesla, and former Microsoft CEO Bill Gates. All 100 senators were invited; the public was not.
“Today, we begin an enormous and complex and vital undertaking: building a foundation for bipartisan AI policy that Congress can pass,” Schumer said as he opened the meeting. His office released his introductory remarks.
Schumer, who was leading the forum with Sen. Mike Rounds, R-S.D., will not necessarily take the tech executives’ advice as he works with colleagues to try and ensure some oversight of the burgeoning sector. But he is hoping they will give senators some realistic direction for meaningful regulation of the tech industry.
“It’s going to be a fascinating group because they have different points of view,” Schumer said in an interview with The Associated Press before the event. “Hopefully we can weave it into a little bit of some broad consensus.”
Tech leaders outlined their views, with each participant getting three minutes to speak on a topic of their choosing.
Musk and former Google CEO Eric Schmidt raised existential risks posed by AI, Zuckerberg brought up the question of closed vs. “open source” AI models and IBM CEO Arvind Krishna expressed opposition to the licensing approach favored by other companies, according to a person in attendance.
There appeared to be broad support for some kind of independent assessments of AI systems, according to this person, who spoke on condition of anonymity due to the rules of the closed-door forum.
“It was a very civilized discussion among some of the smartest people in the world,” Musk said after leaving the meeting. He said there is clearly some strong consensus, noting that nearly everyone raised their hands after Schumer asked if they believed some regulation is needed.
Some senators were critical of the private meeting, arguing that tech executives should testify in public.
Sen. Josh Hawley, R-Mo., said he would not attend what he said was a “giant cocktail party for big tech.” Hawley has introduced legislation with Sen. Richard Blumenthal, D-Conn., to require tech companies to seek licenses for high-risk AI systems.
“I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it to the public,” Hawley said.
Congress has a lackluster track record when it comes to regulating technology, and the industry has grown mostly unchecked by government in the past several decades.
Many lawmakers point to the failure to pass any legislation surrounding social media. Bills have stalled in the House and Senate that would better protect children, regulate activity around elections and mandate stricter privacy standards, for example.
“We don’t want to do what we did with social media, which is let the techies figure it out, and we’ll fix it later,” Senate Intelligence Committee Chairman Mark Warner, D-Va., said about the AI push.
Schumer said regulation of artificial intelligence will be “one of the most difficult issues we can ever take on,” and ticked off the reasons why: It’s technically complicated, it keeps changing and it “has such a wide, broad effect across the whole world,” he said.
But his bipartisan working group — Rounds and Sens. Martin Heinrich, D-N.M., and Todd Young, R-Ind. — is hoping the rapid growth of artificial intelligence will create more urgency.
Rounds said ahead of the forum that Congress needs to get ahead of fast-moving AI by making sure it continues to develop “on the positive side” while also taking care of potential issues surrounding data transparency and privacy.
“AI is not going away, and it can do some really good things or it can be a real challenge,” Rounds said.
Sparked by the release of ChatGPT less than a year ago, businesses across many sectors have been clamoring to apply new generative AI tools that can compose human-like passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over their potential societal harms and prompted calls for more transparency in how the data behind the new products is collected and used.
“You have to have some government involvement for guardrails,” Schumer said. “If there are no guardrails, who knows what could happen.”
Some concrete proposals have already been introduced, including legislation by Sen. Amy Klobuchar, D-Minn., that would require disclaimers for AI-generated election ads with deceptive imagery and sounds. Hawley and Blumenthal’s broader approach would create a government oversight authority with the power to audit certain AI systems for harms before granting a license.
In the United States, major tech companies have expressed support for AI regulations, though they don’t necessarily agree on what that means. Microsoft has endorsed the licensing approach, for instance, while IBM prefers rules that govern the deployment of specific risky uses of AI rather than the technology itself.
Similarly, many members of Congress agree that legislation is needed, but there is little consensus on what it should look like. Some members worry more about overregulation, while others are more concerned about the potential risks. Those differences often fall along party lines.
“I am involved in this process in large measure to ensure that we act, but we don’t act more boldly or over-broadly than the circumstances require,” Young said. “We should be skeptical of government, which is why I think it’s important that you got Republicans at the table.”
Some of Schumer’s most influential guests, including Musk and Sam Altman, CEO of ChatGPT-maker OpenAI, have signaled more dire concerns evoking popular science fiction about the possibility of humanity losing control to advanced AI systems if the right safeguards are not in place.
But for many lawmakers and the people they represent, AI’s effects on employment and navigating a flood of AI-generated misinformation are more immediate worries.
Rounds said he would like to see the empowerment of new medical technologies that could save lives and allow medical professionals to access more data. That topic is “very personal to me,” Rounds said, after his wife died of cancer two years ago.
Some Republicans have been wary of following the path of the European Union, which signed off in June on the world’s first set of comprehensive rules for artificial intelligence. The EU’s AI Act will govern any product or service that uses an AI system and classify them according to four levels of risk, from minimal to unacceptable.
A group of European corporations has called on EU leaders to rethink the rules, arguing that it could make it harder for companies in the 27-nation bloc to compete with rivals overseas in the use of generative AI.
“We’ve always said that we think that AI should get regulated,” said Dana Rao, general counsel and chief trust officer for software company Adobe. “We’ve talked to Europe about this for the last four years, helping them think through the AI Act they’re about to pass. There are high-risk use cases for AI that we think the government has a role to play in order to make sure they’re safe for the public and the consumer.”
O’Brien reported from Providence, Rhode Island. Associated Press writers Ali Swenson in New York and Kelvin Chan in London contributed to this report.
| AI Policy and Regulations |
World leaders are addressing their concerns about AI at the G7 Summit in Hiroshima on Friday, suggesting that “guardrails” should be put in place to monitor the evolving technology. The summit, which will tackle other issues including the Ukraine War, China relations, and clean energy, is bringing the AI discussion to the table for the first time at the request of President Joe Biden’s national security adviser, Jake Sullivan.
Leaders from the UK, France, Germany, Italy, Japan, the U.S., and Canada are attending the G7 summit, along with European Commission President Ursula von der Leyen, and UK Prime Minister Rishi Sunak said the UK will lead the movement toward better AI regulations. Sunak said that while AI could benefit society, it’s important to introduce it “safely and securely with guard rails in place.”
He told The Guardian, “I think that the UK has a track record of being in a leadership position and bringing people together, particularly in regard to technological regulation in the online safety bill … And again, the companies themselves, in that instance as well, have worked with us and looked to us to provide those guard rails as they will do and have done on AI.”
The conversation comes as experts warn against the harmful effects of AI, and while much of the dialogue has revolved around ChatGPT, experts have expressed concern over AI’s involvement in the health sector as well. Professionals from the UK, US, Australia, Costa Rica, and Malaysia wrote in the BMJ Global Health journal that the risks “include the potential for AI errors to cause patient harm, issues with data privacy and security and the use of AI in ways that will worsen social and health inequalities,” The Guardian reported.
Sunak has long been an advocate for AI development, previously saying it does have the potential to benefit economic growth and transform public services, but at the G7 summit, he struck a more cautious tone, advising other leaders that they need to instead focus on regulatory measures.
Von der Leyen said in the summit’s opening remarks that “artificial intelligence’s potential benefits for citizens and the economy are great,” but added the caveat: “At the same time, we need to agree to guardrails to develop AI in the EU, reflecting our democratic values.” She said in a statement to The Financial Times, “We want AI systems to be accurate, reliable, safe, and non-discriminatory, regardless of their origin.” | AI Policy and Regulations |
Elon Musk, the billionaire CEO of electric vehicle maker Tesla and social media platform Twitter, discussed artificial intelligence issues with Senate Majority Leader Chuck Schumer on Wednesday.
“We talked about the future,” Musk told reporters after exiting the meeting that lasted about an hour. “We talked about AI and the economy.”
Schumer’s office confirmed the meeting.
Earlier this month, Schumer said he had launched an effort to establish rules on artificial intelligence to address national security and education concerns, as use of programs like ChatGPT becomes widespread.
Schumer said he had drafted and circulated a “framework that outlines a new regulatory regime that would prevent potentially catastrophic damage to our country while simultaneously making sure the US advances and leads in this transformative technology.”
In March, Musk and a group of artificial intelligence experts and industry executives called for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” said the letter issued by the Future of Life Institute.
There is a growing push in Washington for AI regulations. Senate Intelligence Committee chair Mark Warner sent major AI CEOs a letter Wednesday asking them to take steps to address concerns.
Commerce Secretary Gina Raimondo told reporters Wednesday the Biden administration is working “as aggressively as possible to figure out our approach” to AI.
“The challenge is you don’t want to stifle innovation in a brand new area with massive potential,” Raimondo said. “The risks related to misinformation and deep fakes etcetera are massive.”
In January, Musk met two top White House officials in Washington to discuss how Tesla and the administration of President Joe Biden could work together to advance electric vehicle production. He also visited with House Speaker Kevin McCarthy in a meeting earlier this year.
Separately, South Korean President Yoon Suk Yeol met with Musk on Wednesday in Washington to call for investment in his country, news agency Yonhap reported, citing a presidential aide.
The two met at Musk’s request as Yoon is in the US for a six-day state visit, Yonhap said.
Yoon touted South Korea as an ideal country for Tesla to build a gigafactory, citing the country’s cutting edge industrial robots and high-skilled workers, the report said.
He also offered to provide support including tax benefits to attract the EV maker’s manufacturing plant.
Musk told Yoon that South Korea remains one of the top candidates for Tesla’s Gigafactories and that he would have an opportunity to visit the Asian country, according to Yonhap. | AI Policy and Regulations |
Since OpenAI introduced ChatGPT, privacy advocates have warned consumers about the potential threat to privacy posed by generative AI apps. The arrival of a ChatGPT app in the Apple App Store has ignited a fresh round of caution.
“[B]efore you jump headfirst into the app, beware of getting too personal with the bot and putting your privacy at risk,” warned Muskaan Saxena in Tech Radar.
The iOS app comes with an explicit tradeoff that users should be aware of, she explained, including this admonition: “Anonymized chats may be reviewed by our AI trainer to improve our systems.”
Anonymization, though, is no ticket to privacy. Anonymized chats are stripped of information that can link them to particular users. “However, anonymization may not be an adequate measure to protect consumer privacy because anonymized data can still be re-identified by combining it with other sources of information,” Joey Stanford, vice president of privacy and security at Platform.sh, a maker of a cloud-based services platform for developers based in Paris, told TechNewsWorld.
“It’s been found that it’s relatively easy to de-anonymize information, especially if location information is used,” explained Jen Caltrider, lead researcher for Mozilla’s Privacy Not Included project.
“Publicly, OpenAI says it isn’t collecting location data, but its privacy policy for ChatGPT says they could collect that data,” she told TechNewsWorld.
Nevertheless, OpenAI does warn users of the ChatGPT app that their information will be used to train its large language model. “They’re honest about that. They’re not hiding anything,” Caltrider said.
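To make the re-identification point above concrete, here is a minimal sketch in Python. All of the data, field names, and the matching rule are hypothetical (this is not OpenAI's data format or any researcher's actual method), but it shows how records stripped of names can still be linked to individuals when quasi-identifiers such as coarse location and time survive anonymization:

```python
# Hypothetical illustration of re-identification. "Anonymized" chat records that
# keep quasi-identifiers (city, hour of activity) can be linked back to named
# users by joining them against a second dataset that shares those fields.

anonymized_chats = [
    {"chat_id": "a1", "city": "Lyon",  "hour": "2023-05-20T14", "topic": "tax advice"},
    {"chat_id": "a2", "city": "Porto", "hour": "2023-05-20T09", "topic": "medical symptoms"},
]

# A separate, identified dataset (e.g., leaked app-usage logs or public check-ins).
identified_activity = [
    {"name": "Alice Martin", "city": "Lyon",  "hour": "2023-05-20T14"},
    {"name": "Bruno Costa",  "city": "Porto", "hour": "2023-05-20T09"},
]

def reidentify(chats, activity):
    """Match anonymized chats to named users on shared quasi-identifiers."""
    index = {(a["city"], a["hour"]): a["name"] for a in activity}
    return [
        {**chat, "probable_user": index[(chat["city"], chat["hour"])]}
        for chat in chats
        if (chat["city"], chat["hour"]) in index
    ]

for match in reidentify(anonymized_chats, identified_activity):
    print(match["probable_user"], "->", match["topic"])
```

Real-world attacks use fuzzier matching and more attributes, but the underlying point is the one the experts quoted above make: removing names is not the same as removing identity, especially once location data enters the picture.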
Taking Privacy Seriously
Caleb Withers, a research assistant at the Center for a New American Security, a national security and defense think tank in Washington, D.C., explained that if a user types their name, place of work, and other personal information into a ChatGPT query, that data will not be anonymized.
“You have to ask yourself, ‘Is this something I would say to an OpenAI employee?'” he told TechNewsWorld.
OpenAI has stated that it takes privacy seriously and implements measures to safeguard user data, noted Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif.
“However, it’s always a good idea to review the specific privacy policies and practices of any service you use to understand how your data is handled and what protections are in place,” he told TechNewsWorld.
As dedicated to data security as an organization might be, vulnerabilities might exist that could be exploited by malicious actors, added James McQuiggan, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.
“It’s always important to be cautious and consider the necessity of sharing sensitive information to ensure that your data is as secure as possible,” he told TechNewsWorld.
“Protecting your privacy is a shared responsibility between users and the companies that collect and use their data, which is documented in those long and often unread End User License Agreements,” he added.
Built-In Protections
McQuiggan noted that users of generative AI apps have been known to insert sensitive information such as birthdays, phone numbers, and postal and email addresses into their queries. “If the AI system is not adequately secured, it can be accessed by third parties and used for malicious purposes such as identity theft or targeted advertising,” he said.
He added that generative AI applications could also inadvertently reveal sensitive information about users through their generated content. “Therefore,” he continued, “users must know the potential privacy risks of using generative AI applications and take the necessary steps to protect their personal information.”
Unlike desktops and laptops, mobile phones have some built-in security features that can curb privacy incursions by apps running on them.
However, as McQuiggan points out, “While some measures, such as application permissions and privacy settings, can provide some level of protection, they may not thoroughly safeguard your personal information from all types of privacy threats as with any application loaded on the smartphone.”
Vena agreed that built-in measures like app permissions, privacy settings, and app store regulations offer some level of protection. “But they may not be sufficient to mitigate all privacy threats,” he said. “App developers and smartphone manufacturers have different approaches to privacy, and not all apps adhere to best practices.”
Even OpenAI’s practices vary from desktop to mobile phone. “If you’re using ChatGPT on the website, you have the ability to go into the data controls and opt out of your chat being used to improve ChatGPT. That setting doesn’t exist on the iOS app,” Caltrider noted.
Beware App Store Privacy Info
Caltrider also found the permissions used by OpenAI’s iOS app a bit fuzzy, noting that “In the Google Play Store, you can check and see what permissions are being used. You can’t do that through the Apple App Store.”
She warned users about depending on privacy information found in app stores. “The research that we’ve done into the Google Play Store safety information shows that it’s really unreliable,” she observed.
“Research by others into the Apple App Store shows it’s unreliable, too,” she continued. “Users shouldn’t trust the data safety information they find on app pages. They should do their own research, which is hard and tricky.”
“The companies need to be better at being honest about what they’re collecting and sharing,” she added. “OpenAI is honest about how they’re going to use the data they collect to train ChatGPT, but then they say that once they anonymize the data, they can use it in lots of ways that go beyond the standards in the privacy policy.”
Stanford noted that Apple has some policies in place that can address some of the privacy threats posed by generative AI apps. They include:
- Requiring user consent for data collection and sharing by apps that use generative AI technologies;
- Providing transparency and control over how data is used and by whom through the App Tracking Transparency feature that allows users to opt out of cross-app tracking;
- Enforcing privacy standards and regulations for app developers through the App Store review process and rejecting apps that violate them.
However, he acknowledged, “These measures may not be enough to prevent generative AI apps from creating inappropriate, harmful, or misleading content that could affect users’ privacy and security.”
Call for Federal AI Privacy Law
“OpenAI is just one company. There are several creating large language models, and many more are likely to crop up in the near future,” added Hodan Omaar, a senior AI policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy, in Washington, D.C.
“We need to have a federal data privacy law to ensure all companies adhere to a set of clear standards,” she told TechNewsWorld.
“With the rapid growth and expansion of artificial intelligence,” added Caltrider, “there definitely needs to be solid, strong watchdogs and regulations to keep an eye out for the rest of us as this grows and becomes more prevalent.” | AI Policy and Regulations |
When Senate Majority Leader Chuck Schumer, D-N.Y., announced a "major effort" in April to put the Senate’s imprint on artificial intelligence policy, he talked about having an "urgency to act" and said a legislative plan would start taking shape in a matter of weeks.
"In the coming weeks, Leader Schumer plans to refine the proposal in conjunction with stakeholders from academia, advocacy organizations, industry, and the government," he said in an April 13 statement.
But this week, more than two months later, Schumer indicated that legislation may not be ready until 2024. In remarks Wednesday to the Center for Strategic and International Studies, Schumer said the process of getting input for the plan is still months away.
"Later this fall, I will convene the top minds in artificial intelligence here in Congress for a series of AI Insight Forums to lay down a new foundation for AI policy," he said. Schumer said developers, scientists, CEOs, national security experts and others must do "years of work in a matter of months," a sign the effort could go well into next year.
He said once this input is collected, it will be up to legislators to listen and translate their ideas into legislation. He outlined a broad range of issues to cover, including how to protect innovation, intellectual property rights, risk management, national security, guarding against "doomsday scenarios," transparency, "explainability" and privacy.
Making the job even more difficult, Schumer said bipartisan support is critical to the effort and said several committees will be expected to chip in.
Schumer this week announced what he called the SAFE Innovation Framework for AI, which is aimed at protecting U.S. innovation in this emerging field while putting guardrails in place to provide security, promote accountability, support human liberty, civil rights and justice, and guarantee that AI outputs can be explained to users.
But Schumer laid out similar goals in April when he talked about the need to inform users, reduce the potential harm caused by AI outputs and make sure AI systems line up with "American values."
Jake Denton, a technology policy research associate at the Heritage Foundation, said he sees Schumer’s recent announcement mostly as a sign that the process hasn’t gotten very far yet.
"The goalpost seems to keep moving," Denton told Fox News Digital. "We never really get the bill text. We never get the details."
Schumer’s office declined to comment for this story.
Denton said the basic principles that Schumer has outlined twice now are broadly accepted ideas, but he said the trick will be turning them into legislation. Several ideas have bounced around Capitol Hill – including a commission to guide AI policy or even a new agency that could license AI technology and ensure it produces outputs that are free of bias or discrimination.
Denton said it’s possible that Congress is still months or even years away from passing significant legislation to regulate AI based on its current pace. He said the precedent is there, as Congress has allowed other technology to flourish before stepping in.
"Our lawmakers are still in the process of trying to figure out how to handle social media," he noted.
While the effort could easily drift into next year, Schumer said this proposal is the most efficient path forward to get a congressional regulatory plan in place.
"If we take the typical path – holding congressional hearings with opening statements and each member asking questions five minutes at a time on different issues – we simply won’t be able to come up with the right policies," he said.
"By the time we act, AI will have evolved into something new," he added. "This will not do. A new approach is required." | AI Policy and Regulations |
OpenAI has been lobbying the European Union to water down incoming AI legislation. According to documents from the European Commission obtained by Time, the ChatGPT creator requested that lawmakers make several amendments to a draft version of the EU AI Act — an upcoming law designed to better regulate the use of artificial intelligence — before it was approved by the European Parliament on June 14th. Some changes suggested by OpenAI were eventually incorporated into the legislation.
Prior to its approval, lawmakers debated expanding terms within the AI Act to designate all general-purpose AI systems (GPAIs) such as OpenAI’s ChatGPT and DALL-E as “high risk” under the act’s risk categorizations. Doing so would hold them to the most stringent safety and transparency obligations. According to Time, OpenAI repeatedly fought against the company’s generative AI systems falling under this designation in 2022, arguing that only companies explicitly applying AI to high-risk use cases should be made to comply with the regulations. This argument has also been pushed by Google and Microsoft, which have similarly lobbied the EU to reduce the AI Act’s impact on companies building GPAIs.
“GPT-3 is not a high-risk system, but possesses capabilities that can potentially be employed in high risk use cases”
“OpenAI primarily deploys general purpose AI systems - for example, our GPT-3 language model can be used for a wide variety of use cases involving language, such as summarization, classification, questions and answers, and translation,” said OpenAI in an unpublished white paper sent to EU Commission and Council officials in September 2022. “By itself, GPT-3 is not a high-risk system, but possesses capabilities that can potentially be employed in high risk use cases.”
Three representatives for OpenAI met with European Commission officials in June 2022 to clarify the risk categorizations proposed within the AI Act. “They were concerned that general purpose AI systems would be included as high-risk systems and worried that more systems, by default, would be categorized as high-risk,” said an official record of the meeting obtained by Time. An anonymous European Commission source also informed Time that, within that meeting, OpenAI expressed concern that this perceived overregulation could impact AI innovation, claiming it was aware of the risks regarding AI and was doing all it could to mitigate them. OpenAI reportedly did not suggest regulations that it believes should be in place.
“At the request of policymakers in the EU, in September 2022 we provided an overview of our approach to deploying systems like GPT-3 safely, and commented on the then-draft of the [AI Act] based on that experience,” said an OpenAI spokesperson in a statement to Time. “Since then, the [AI Act] has evolved substantially and we’ve spoken publicly about the technology’s advancing capabilities and adoption. We continue to engage with policymakers and support the EU’s goal of ensuring AI tools are built, deployed, and used safely now and in the future.”
OpenAI has not previously disclosed its lobbying efforts in the EU, and they appear to be largely successful — GPAIs aren’t automatically classified as high risk in the final draft of the EU AI Act approved on June 14th. It does, however, impose greater transparency requirements on “foundation models” — powerful AI systems like ChatGPT that can be used for different tasks — which will require companies to carry out risk assessments and disclose if copyrighted material has been used to train their AI models.
Changes suggested by OpenAI, including not enforcing tighter regulations on all GPAIs, were incorporated into the EU’s approved AI Act
An OpenAI spokesperson informed Time that OpenAI supported the inclusion of “foundation models” as a separate category within the AI Act, despite OpenAI’s secrecy regarding where it sources the data to train its AI models. It’s widely believed that these systems are being trained on pools of data that have been scraped from the internet, including intellectual property and copyrighted materials. The company insists it’s remained tight-lipped about data sources to prevent its work from being copied by rivals, but if forced to disclose such information, OpenAI and other large tech companies could become the subject of copyright lawsuits.
OpenAI CEO Sam Altman’s stance on regulating AI has been fairly erratic so far. The CEO has visibly pushed for regulation — having discussed plans with US Congress — and highlighted the potential dangers of AI in an open letter he signed alongside other notable tech leaders like Elon Musk and Steve Wozniak earlier this year. But his focus has mainly been on the future harms of these systems. At the same time, he’s warned that OpenAI might cease its operations in the EU market if the company is unable to comply with the region’s incoming AI regulations (though he later walked back those comments).
OpenAI argued that its approach to mitigating the risks that occur from GPAIs is “industry-leading” in its white paper sent to the EU Commission. “What they’re saying is basically: trust us to self-regulate,” Daniel Leufer, a senior policy analyst at Access Now, told Time. “It’s very confusing because they’re talking to politicians saying, ‘Please regulate us,’ they’re boasting about all the [safety] stuff that they do, but as soon as you say, ‘Well, let’s take you at your word and set that as a regulatory floor,’ they say no.”
The EU’s AI Act still has a way to go before it comes into effect. The legislation will now be discussed among the European Council in a final “trilogue” stage, which aims to finalize details within the law, including how and where it can be applied. Final approval is expected by the end of this year and may take around two years to come into effect. | AI Policy and Regulations |
20 March 2023
Dear Friends
In 2019, many countries around the world, including the United States, committed to the development of human-centric and trustworthy AI. Yet only a few years on, we appear to be approaching a tipping point with the release of Generative AI techniques, which are neither human-centric nor trustworthy.
These systems produce results that cannot be replicated or proven. They fabricate and hallucinate. They describe how to commit terrorist acts, how to assassinate political leaders, and how to conceal child abuse. GPT-4 has the ability to undertake mass surveillance at scale, combining the ability to ingest images, link to identities, and develop comprehensive profiles.
As this industry has rapidly evolved so too has the secrecy surrounding the products. The latest technical paper on GPT-4 provides little information about the training data, the number of parameters, or the assessment methods. A fundamental requirement in all emerging AI policy frameworks – an independent impact assessment prior to deployment – was never undertaken.
Many leading AI experts, including many companies themselves, have called for regulation. Yet there is little effort in the United States today to develop regulatory responses even as countries around the world race to establish legal safeguards.
The present course cannot be sustained. The public needs more information about the impact of artificial intelligence. Independent experts need the opportunity to interrogate these models. Laws should be enacted to promote algorithmic transparency and counter algorithmic bias. There should be a national commission established to assess the impact of AI on American Society, to better understand the benefits as well as the risks.
This week the Center for AI and Digital Policy, joined by others, will file a complaint with the Federal Trade Commission, calling for an investigation of OpenAI and the product ChatGPT. We believe the FTC has the authority to act in this matter and is uniquely positioned as the lead consumer protection agency in the United States to address this emerging challenge. We will ask the FTC to establish a moratorium on the release of further commercial versions of GPT until appropriate safeguards are established. We will simultaneously petition the FTC to undertake a rulemaking for the regulation of the generative AI industry.
We favor growth and innovation. We recognize a wide range of opportunities and benefits that AI may provide. But unless we are able to maintain control of these systems, we will be unable to manage the risk that will result or the catastrophic outcomes that may emerge. We are asking the FTC to “hit the pause button” so that there is an opportunity for our institutions, our laws, and our society to catch up. We need to assert agency over the technologies we create before we lose control.
Merve Hickock and Marc Rotenberg
For the CAIDP
As we move forward with the FTC Complaint and Petition, we welcome your suggestions for points to make and issues to raise with the FTC. Most helpful for us are (1) accurate, authoritative descriptions of risks arising from the use of GPT, and (2) expert opinions of risks arising from the use of GPT, and (3) examples of how GPT violates the specific guidelines that the Federal Trade Commission has established for the marketing and advertising of AI products and services.
Among the topics we have identified so far:
Please note that we make evidence-based arguments and cite to published work. We will not be able to include general policy arguments, unsupported claims, or rhetorical statements.
Thank you for your assistance!
Over the last several years, the FTC has issued several reports and policy guidelines concerning marketing and advertising of AI-related products and services. We believe that OpenAI should be required by the FTC to comply with these guidelines.
When you talk about AI in your advertising, the FTC may be wondering, among other things:
In 2021, the FTC warned that the advance of Artificial Intelligence "has highlighted how apparently “neutral” technology can produce troubling outcomes – including discrimination by race or other legally protected classes." The FTC explained it has decades of experience enforcing three laws important to developers and users of AI:
The FTC said its recent work on AI – coupled with FTC enforcement actions – offers important lessons on using AI truthfully, fairly, and equitably.
" . . . we at the FTC have long experience dealing with the challenges presented by the use of data and algorithms to make decisions about consumers. Over the years, the FTC has brought many cases alleging violations of the laws we enforce involving AI and automated decision-making, and have investigated numerous companies in this space.
"The FTC’s law enforcement actions, studies, and guidance emphasize that the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability. We believe that our experience, as well as existing laws, can offer important lessons about how companies can manage the consumer protection risks of AI and algorithms."
FTC Report Warns About Using Artificial Intelligence to Combat Online Problems
Agency Concerned with AI Harms Such As Inaccuracy, Bias, Discrimination, and Commercial Surveillance Creep (June 16, 2022)
Today the Federal Trade Commission issued a report to Congress warning about using artificial intelligence (AI) to combat online problems and urging policymakers to exercise “great caution” about relying on it as a policy solution. The use of AI, particularly by big tech platforms and other companies, comes with limitations and problems of its own. The report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance.
CAIDP Presentation, Hitting the Pause Button: A Moratorium for Generative AI (March 19, 2023)
Testimony and statement for the Record, Merve Hickok, CAIDP Chair and Research Director
House Committee on Oversight and Accountability, March 6, 2023
OpenAI's system card with a long list of possible risks (ranging from disinformation to nuclear proliferation and terrorism):
Twitter thread by Sam Altman: https://twitter.com/sama/status/1627110893388693504?s=20 “we also need enough time for our institutions to figure out what to do. regulation will be critical and will take time to figure out; although current-generation AI tools aren’t very scary, i think we are potentially not that far away from potentially scary ones.”
Marc Rotenberg and Merve Hickok, Regulating A.I.: The U.S. Needs to Act
New York Times, March 6, 2023
Marc Rotenberg and Merve Hickok, Artificial Intelligence and Democratic Values: Next Steps for the United States
Council on Foreign Relations, August 22, 2023
Merve Hickok and Marc Rotenberg, The State of AI Policy: The Democratic Values Perspective
Turkish Policy Quarterly, March 4, 2022
Marc Rotenberg and Sunny Seon Kang, The Use of Algorithmic Decision Tools, Artificial Intelligence, and Predictive Analytics
Federal Trade Commission, August 20, 2018
Federal Trade Commission, November 6, 2019
Federal Trade Commission, May 17, 2017
FTC Opens Rulemaking Petition Process, Promoting Public Participation and Accountability
Changes to FTC Rules of Practice reflect commitment to public access to vital agency processes
September 15, 2021
At an open Commission meeting today, the Federal Trade Commission voted to make significant changes to enhance public participation in the agency’s rulemaking, a significant step to increase public participation and accountability around the work of the FTC.
The Commission approved a series of changes to the FTC’s Rules of Practice designed to make it easier for members of the public to petition the agency for new rules or changes to existing rules that are administered by the FTC. The changes are a key part of the work of opening the FTC’s regulatory processes to public input and scrutiny. This is a departure from the previous practice, under which the Commission had no obligation to respond to or otherwise address petitions for agency action. | AI Policy and Regulations |
On Thursday, Microsoft President Brad Smith announced that his biggest apprehension about AI revolves around the growing concern for deepfakes and synthetic media designed to deceive, Reuters reports.
Smith made his remarks while revealing his "blueprint for public governance of AI" in a speech at Planet World, a language arts museum in Washington, DC. His concerns come at a time when talk of AI regulation is increasingly common, sparked largely by the popularity of OpenAI's ChatGPT and a political tour by OpenAI CEO Sam Altman.
Smith expressed his desire for urgency in formulating ways to differentiate between genuine photos or videos and those created by AI when they might be used for illicit purposes, especially in enabling society-destabilizing disinformation.
"We're going have to address the issues around deepfakes. We're going to have to address in particular what we worry about most foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians," Smith said, according to Reuters. "We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI."
Smith also pushed for the introduction of licensing for critical forms of AI, arguing that these licenses should carry obligations to protect against threats to security, whether physical, cybersecurity, or national. "We will need a new generation of export controls, at least the evolution of the export controls we have, to ensure that these models are not stolen or not used in ways that would violate the country's export control requirements," he said.
Last week, Altman appeared at the US Senate and voiced his concerns about AI, saying that the nascent industry needs to be regulated. Altman, whose company OpenAI is backed by Microsoft, argued for global cooperation on AI and incentives for safety compliance.
In his speech Thursday, Smith echoed these sentiments and insisted that people must be held accountable for the problems caused by AI. He urged for safety measures to be put on AI systems controlling critical infrastructure, like the electric grid and water supply, to ensure human oversight.
In an effort to maintain transparency around AI technologies, Smith urged developers to adopt a "know your customer"-style system to keep a close eye on how AI technologies are used and to inform the public about content created by AI, making it easier to identify fabricated content. Along these lines, companies such as Adobe, Google, and Microsoft are all working on ways to watermark or otherwise label AI-generated content.
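The specific watermarking and labeling schemes those companies are building are not spelled out here, but the general shape of a content label can be sketched. The example below is a hypothetical Python illustration, not Adobe's, Google's, or Microsoft's actual implementation: it attaches a provenance record to a piece of generated content so that downstream software can disclose its origin and detect tampering.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_label(content: bytes, generator: str, model: str) -> dict:
    """Build a minimal provenance record declaring a piece of content AI-generated."""
    return {
        "ai_generated": True,
        "generator": generator,  # hypothetical service name
        "model": model,          # hypothetical model identifier
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

# Placeholder bytes standing in for a real generated image or text file.
image_bytes = b"...rendered image bytes..."
label = make_provenance_label(image_bytes, generator="ExampleImageService", model="example-model-v1")

# The label would travel with the content (for instance, embedded in its metadata)
# so viewers can show an "AI-generated" disclosure and re-hash the bytes to spot edits.
print(json.dumps(label, indent=2))
```

Production schemes go further (the label itself is typically cryptographically signed so it cannot be forged or silently stripped), but the sketch captures the disclosure idea Smith describes.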
Deepfakes have been a subject of research at Microsoft for years now. In September, Microsoft's Chief Scientific Officer Eric Horvitz penned a research paper about the dangers of both interactive deepfakes and the creation of synthetic histories, subjects also covered in a 2020 article in FastCompany by this author, which also mentioned earlier efforts from Microsoft at detecting deepfakes.
Meanwhile, Microsoft is simultaneously pushing to include text- and image-based generative AI technology into its products, including Office and Windows. Its rough launch of an unconditioned and undertested Bing chatbot (based on a version of GPT-4) in February spurred deeply emotional reactions from its users. It also reignited latent fears that world-dominating superintelligence may be just around the corner, a reaction that some critics claim is part of a conscious marketing campaign from AI vendors.
So the question remains: What does it mean when companies like Microsoft are selling the very product that they are warning us about? | AI Policy and Regulations |
The European Union is set to impose some of the world’s most sweeping safety and transparency restrictions on artificial intelligence. A draft of the EU Artificial Intelligence Act (AIA or AI Act) — new legislation that restricts high-risk uses of AI — was passed by the European Parliament on June 14th. Now, after two years and an explosion of interest in AI, only a few hurdles remain before it comes into effect.
The AI Act was proposed by European lawmakers in April 2021. In their proposal, lawmakers warned the technology could provide a host of “economic and societal benefits” but also “new risks or negative consequences for individuals or the society.” Those warnings may seem fairly obvious these days, but they predate the mayhem of generative AI tools like ChatGPT or Stable Diffusion. And as this new variety of AI has evolved, a once (relatively) simple-sounding regulation has struggled to encompass a huge range of fast-changing technologies. As Daniel Leufer, senior policy analyst at Access Now, said to The Verge, “The AI Act has been a bit of a flawed tool from the get-go.”
In order to regulate AI, you first need to define what it even is
The AI Act was created for two main reasons: to synchronize the rules for regulating AI technology across EU member states and to provide a clearer definition of what AI actually is. The framework categorizes a wide range of applications by different levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk. “Unacceptable” risk models, which include social “credit scores” and real-time biometric identification (like facial recognition) in public spaces, are outright prohibited. “Minimal” risk ones, including spam filters and inventory management systems, won’t face any additional rules. Services that fall in between will be subject to transparency and safety restrictions if they want to stay in the EU market.
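As a reading aid, the tiering can be thought of as a lookup from use case to obligations. The sketch below is a simplified, hypothetical Python rendering based only on the examples mentioned in this article; it is not a restatement of the legal text, and real classification under the act depends on detailed criteria:

```python
# Simplified, illustrative mapping of the AI Act's four risk tiers to example
# use cases and the broad consequence for each tier, drawn from the examples
# cited in this article. A reading aid, not the legal text.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["social credit scoring", "real-time public facial recognition"],
        "consequence": "prohibited in the EU market",
    },
    "high": {
        "examples": ["hiring tools", "predictive policing", "medical device software"],
        "consequence": "conformity assessment, logging, data-governance duties",
    },
    "limited": {
        "examples": ["customer service chatbots", "deepfake generators"],
        "consequence": "transparency duties (tell users they face an AI system)",
    },
    "minimal": {
        "examples": ["spam filters", "inventory management"],
        "consequence": "no additional obligations",
    },
}

def obligations_for(use_case: str) -> str:
    """Look up which tier an example use case falls into in this sketch."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return f"{tier}: {info['consequence']}"
    return "unclassified in this sketch"

print(obligations_for("predictive policing"))
```

The practical point of the structure is that obligations scale with risk: the same legislation that bans one application outright can leave another entirely untouched.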
The early AI Act proposals focused on a range of relatively concrete tools that were sometimes already being deployed in fields like job recruitment, education, and policing. What lawmakers didn’t realize, however, was that defining “AI” was about to get a lot more complicated.
The EU wants rules of the road for high-risk AI
The current approved legal framework of the AI Act covers a wide range of applications, from software in self-driving cars to “predictive policing” systems used by law enforcement. And on top of the prohibition on “unacceptable” systems, its strictest regulations are reserved for “high risk” tech. If you provide a “limited risk” system like customer service chatbots on websites that can interact with a user, you just need to inform consumers that they’re using an AI system. This category also covers the use of facial recognition technology (though law enforcement is exempt from this restriction in certain circumstances) and AI systems that can produce “deepfakes” — defined within the act as AI-generated content based on real people, places, objects, and events that could otherwise appear authentic.
For anything the EU considers riskier, the restrictions are much more onerous. These systems are subject to “conformity assessments” before entering the EU market to determine whether they meet all necessary AI Act requirements. That includes keeping a log of the company’s activity, preventing unauthorized third parties from altering or exploiting the product, and ensuring the data being used to train these systems is compliant with relevant data protection laws (such as GDPR). That training data is also expected to be of a high standard — meaning it should be complete, unbiased, and free of any false information.
The scope for “high risk” systems is so large that it’s broadly divided into two sub-categories: tangible products and software. The first applies to AI systems incorporated in products that fall under the EU’s product safety legislation, such as toys, aviation, cars, medical devices, and elevators — companies that provide them must report to independent third parties designated by the EU in their conformity assessment procedure. The second includes more software-based products that could impact law enforcement, education, employment, migration, critical infrastructure, and access to essential private and public services, such as AI systems that could influence voters in political campaigns. Companies providing these AI services can self-assess their products to ensure they meet the AI Act’s requirements, and there’s no requirement to report to a third-party regulatory body.
Now that the AI Act has been greenlit, it’ll enter the final phase of inter-institutional negotiations. That involves communication between Member States (represented by the EU Council of Ministers), the Parliament, and the Commission to develop the approved draft into the finalized legislation. “In theory, it should end this year and come into force in two to five years,” said Sarah Chander, senior policy advisor for the European Digital Rights Association, to The Verge.
These negotiations present an opportunity for some regulations within the current version of the AI Act to be adjusted if they’re found to be particularly contentious. Leufer said that while some provisions within the legislation may be watered down, those regarding generative AI could potentially be strengthened. “The council hasn’t had their say on generative AI yet, and there may be things that they’re actually quite worried about, such as its role in political disinformation,” he says. “So we could see new potentially quite strong measures pop up in the next phase of negotiations.”
Generative AI has thrown a wrench in the AI Act
When generative AI models started appearing on the market, the first draft of the AI Act was already being shaped. Blindsided by the explosive development of these AI systems, European lawmakers had to figure out how they could be regulated under their proposed legislation — fast.
The seemingly limitless ways in which LLMs can be adapted presented an issue for the EU’s regulatory plans
“The issue with the AI Act was that it was very much focused on the application layer,” said Leufer. It focused on relatively complete products and systems with defined uses, which could be evaluated for risk-based largely on their purpose. Then, companies began releasing powerful models that were much broader in scope. OpenAI’s GPT-3.5 and GPT-4 large language models (LLMs) appeared on the market after the EU had already begun negotiating the terms of the new legislation. Lawmakers refer to these as “foundation” models: a term coined by Stanford University for models that are “trained on broad data at scale, designed for the generality of output, and can be adapted to a wide range of distinctive tasks.”
Things like GPT-4 are often shorthanded as generative AI tools, and their best-known applications include producing reports or essays, generating lines of code, and answering user inquiries on endless subjects. But Leufer emphasizes that they’re broader than that. “People can build apps on GPT-4, but they don’t have to be generative per se,” he says. Similarly, a company like Microsoft could build a facial recognition or object detection API, then let developers build downstream apps with unpredictable results. They can do it much faster than the EU can usher in specific regulations covering each app. And if the underlying models aren’t covered, individual developers could be the ones held responsible for not complying with the AI Act — even if the issue stems from the foundation model itself.
“These so-called General Purpose AI Systems that work as a kind of foundation layer or a base layer for more concrete applications were what really got the conversation started about whether and how that kind of layer of the pipeline should be included in the regulation,” says Leufer. As a result, lawmakers have proposed numerous amendments to ensure that these emerging technologies — and their yet-unknown applications — will be covered by the AI Act.
The capabilities and legal pitfalls of these models have swiftly raised alarm bells for policymakers across the world. Services like ChatGPT and Microsoft’s Bard were found to spit out inaccurate and sometimes dangerous information. Questions surrounding the intellectual property and private data used to train these systems have sparked several lawsuits. While European lawmakers raced to ensure these issues could be addressed within the upcoming AI Act, regulators across its member states have relied on alternative solutions to try and keep AI companies in check.
“In the interim, regulators are focused on the enforcement of existing laws,” said Sarah Myers West, managing director at the AI Now Institute, to The Verge. Italy’s Data Protection Authority, for instance, temporarily banned ChatGPT for violating the GDPR. Amsterdam’s Court of Appeals also issued a ruling against Uber and Lyft for violating drivers’ rights through algorithmic wage management and automated firing and hiring.
Other countries have introduced their own rules in a bid to keep AI companies in check. China published draft guidelines signaling how generative AI should be regulated within the country back in April. Various states in the US, like California, Illinois, and Texas, have also passed laws that focus on protecting consumers against the potential dangers of AI. Certain legal cases in which the FTC applied “algorithmic disgorgement” — which requires companies to destroy the algorithms or AI models they built using ill-gotten data — could lay a path for future regulations on a nationwide level.
The rules impacting foundation model providers are anticlimactic
The AI Act legislation that was approved on June 14th includes specific distinctions for foundation models. Providers must assess their product for a huge range of potential risks, from those that can impact health and safety to risks regarding the democratic rights of those residing in EU member states. They must register their models to an EU database before they can be released to the EU market. Generative AI systems using these foundation models, including OpenAI’s ChatGPT chatbot, will need to comply with transparency requirements (such as disclosing when content is AI-generated) and ensure safeguards are in place to prevent users from generating illegal content. And perhaps most significantly, the companies behind foundation models will need to disclose any copyrighted data used to train them to the public.
The mystery of “Schrödinger’s copyrighted content” in AI training data may soon be more apparent
This last measure could have seismic effects on AI companies. Popular text and image generators are trained to produce content by replicating patterns in code, text, music, art, and other data created by real humans — so much data that it almost certainly includes copyrighted materials. This training sits in a legal gray area, with arguments for and against the idea that it can be conducted without permission from the rightsholders. Individual creators and large companies have sued over the issue, and making it easier to identify copyrighted material in a dataset will likely draw even more suits.
But overall, experts say the AI Act’s regulations could have gone much further. Legislators rejected an amendment that could have slapped an onerous “high risk” label on all General Purpose AI Systems (GPAIs) — a vague classification defined as “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.” When this amendment was proposed, the AI Act did not explicitly distinguish between GPAIs and foundation AI models and therefore had the potential to impact a sizable chunk of AI developers. According to one study conducted by appliedAI in December 2022, 45 percent of all surveyed startup companies considered their AI system to be a GPAI.
GPAIs are still defined within the approved draft of the act, though these are now judged based on their individual applications. Instead, legislators added a separate category for foundation models, and while they’re still subject to plenty of regulatory rules, they’re not automatically categorized as being high risk. “‘Foundational models’ is a broad terminology encouraged by Stanford, [which] also has a vested interest in such systems,” said Chander. “As such, the Parliament’s position only covers such systems to a limited extent and is much less broad than the previous work on general-purpose systems.”
AI providers like OpenAI lobbied against the EU including such an amendment, and their influence in the process is an open question. “We’re seeing this problematic thing where generative AI CEOs are being consulted on how their products should be regulated,” said Leufer. “And it’s not that they shouldn’t be consulted. But they’re not the only ones, and their voices shouldn’t be the loudest because they’re extremely self-interested.”
Potholes litter the EU’s road to AI regulations
As it stands, some experts believe the current rules for foundation models don’t go far enough. Chander tells The Verge that while the transparency requirements for training data would provide “more information than ever before,” disclosing that data doesn’t ensure users won’t be harmed when these systems are used. “We have been calling for details about the use of such a system to be displayed on the EU AI database and for impact assessments on fundamental rights to be made public,” added Chander. “We need public oversight over the use of AI systems.”
“The AI Act will only mandate these companies to do things they should already be doing”
Several experts tell The Verge that far from solving the legal concerns around generative AI, the AI Act might actually be less effective than existing rules. “In many respects, the GDPR offers a stronger framework in that it is rights-based, not risk-based,” said Myers West. Leufer also claims that GDPR has a more significant legal impact on generative AI systems. “The AI Act will only mandate these companies to do things they should already be doing,” he says.
OpenAI has drawn particular criticism for being secretive about the training data for its GPT-4 model. Speaking to The Verge in an interview, Ilya Sutskever, OpenAI’s chief scientist and co-founder, said that the company’s previous transparency pledge was “a bad idea.”
“These models are very potent, and they’re becoming more and more potent. At some point, it will be quite easy, if one wanted, to cause a great deal of harm with those models,” said Sutskever. “And as the capabilities get higher, it makes sense that you don’t want to disclose them.”
As other companies scramble to release their own generative AI models, providers of these systems may be similarly motivated to conceal how their product is developed — both through fear of competitors and potential legal ramifications. Therefore, the AI Act’s biggest impact, according to Leufer, may be on transparency — in a field where companies are “becoming gradually more and more closed.”
The AI Act falls short on protecting migrants and refugees against AI systems
Outside of the narrow focus on foundation models, other areas in the AI Act have been criticized for failing to protect marginalized groups that could be impacted by the technology. “It contains significant gaps such as overlooking how AI is used in the context of migration, harms that affect communities of color most,” said Myers West. “These are the kinds of harms where regulatory intervention is most pressing: AI is already being used widely in ways that affect people’s access to resources and life chances, and that ramp up widespread patterns of inequality.”
If the AI Act proves to be less effective than existing laws protecting individuals’ rights, it might not bode well for the EU’s AI plans, particularly if it’s not strictly enforced. After all, Italy’s attempt to use GDPR against ChatGPT started as tough-looking enforcement, including near-impossible-seeming requests like ensuring the chatbot didn’t provide inaccurate information. But OpenAI was able to satisfy Italian regulators’ demands seemingly by adding fresh disclaimers to its terms and policy documents. Europe has spent years crafting its AI framework — but regulators will have to decide whether to take advantage of its teeth. | AI Policy and Regulations |
UK government embarks on bargain bin hunt for AI policy wonk
Con: You won't get a Menlo Park salary. Pro: You won't have to meet Zuck
The UK government is trying to hire a "Deputy Director for AI International," a policy leadership role for someone willing to work for a relative pittance compared to research scientists in the field.
For many of us, the £75,000-a-year ($92,000) role with a gilt-edged civil service pension sounds like a valid career move, but against roles being offered by Meta – $1.175 million to work in the labs at Menlo Park – it pales into insignificance.
Clearly unperturbed by market economics for developers and AI boffins, the British administration is forging ahead.
"AI is bringing about huge changes to society, and it is our job as a team to work out how Government should respond, including at the Prime Minister's forthcoming AI Safety Summit. It's a once-in-a-generation moment, and an incredibly fast-paced and exciting environment," states the job ad.
"The UK is committed to finding solutions on AI together with its international partners, and your role will sit at the heart of the wider AI team. The postholder will play a critical role in maintaining the momentum created by the Summit, continuing to strengthen these pivotal relationships for international collaboration and actively shape the international discussions on AI to ensure the UK remains at the forefront of developments and emerging risks and opportunities."
Britain is hosting an AI Safety Summit on November 1-2, and the application process for the role closes on October 22.
So what does the job involve? Maintaining a leadership role in AI after next month's event; "thought leadership" on the global developments in AI; building a network of relationships and influencing international discussions at the G7, G20, OECD, and UN; and making sure local AI policy aligns with global ambitions.
The ideal candidate will have a track record of influencing senior decision makers (ideally at Secretary of State level or equivalent), have built effective networks to "advance an agenda"; possess strong analytical and prioritization skills; and be able to inspire the leadership of diverse teams.
Desirable attributes include experience of AI or tech policy development, and experience working on national security issues.
For that, the right candidate will get a £75,000 annual salary, and the Department for Science, Innovation and Technology will contribute £20,250 toward being a member of the Civil Service Defined Benefit Pension scheme.
You will be unlikely, however, to meet Mark Zuckerberg, which is one perk of not working in Menlo Park. ® | AI Policy and Regulations |
How Obama helped President Biden draft the AI executive order
Obama -- who has been interested in AI -- assisted the Biden team with the plan.
Former President Barack Obama helped draft the White House's new artificial intelligence policy that President Joe Biden rolled out earlier this week, according to aides familiar with the situation.
On Monday, Biden rolled out the administration's AI policy, culminating with an executive order he signed. The plan also includes a task force run by the Department of Commerce to study and research emerging AI trends. Biden's executive order aims to safeguard against threats posed by artificial intelligence, ensuring that bad actors do not use the technology to develop devastating weapons or mount supercharged cyberattacks.
Ahead of its debut, Obama -- who has long had interest in AI -- assisted the Biden team with the plan, an Obama aide told ABC News. Obama held meetings with AI industry leaders, where he pressed them and made them aware of not just the national security concerns AI poses, but also other issues with AI such as bias and discrimination, the aide said.
Obama also met with congressional leaders, including Majority Leader Chuck Schumer, to figure out the best ways to regulate AI, according to an official.
In 2016, the Obama White House released a report on how artificial intelligence could shape the world going forward, and since then the former president has been engaged in ways AI and the government would coexist.
At a Cabinet meeting earlier this month, Biden urged his cabinet to work together to draft policy recommendations regarding AI -- and two departments, Commerce and Homeland Security, have taken the lead in developing national security policy recommendations. Biden instructed his advisers to take the time they needed in putting together the AI policy given its importance, a White House official said.
"President Biden has taken the most significant actions on AI than any government anywhere in the world. He directed his team to move fast and pull every lever," White House Chief of Staff Jeff Zients told ABC News in a statement. "And one of the best levers we have is listening to the experts: from civil society to tech innovators to scientists -- and even former presidents. Former President Obama's advice has been critical to our aggressive strategy to harness the benefits of AI while minimizing the risks."
For his part, Obama reached out to industry leaders at AI companies and leaders in the advocacy and civil society space who are concerned about AI. Also, he spoke with leading academics and researchers to inform the Biden administration's approach, according to an Obama aide.
The topic of AI came up when President Biden and Obama had lunch in June, according to the White House official, and the two talk "regularly." The official also told ABC News that the president was involved at every turn in the drafting of the executive order.
Obama engaged with the Biden administration to build on the work they had already begun as they coordinated a White House response to develop a "framework to address the real threats and harms posed by AI," according to an Obama aide.
As part of an April lesson in AI, President Biden received a demonstration of how ChatGPT worked and saw an AI-generated image of himself, as well as his dog, Commander. He was shown deepfake videos of himself, too, according to a White House official. | AI Policy and Regulations |
Bipartisan US lawmakers from both chambers of Congress introduced legislation this week that would formally prohibit AI from launching nuclear weapons. Although Department of Defense policy already states that a human must be “in the loop” for such critical decisions, the new bill — the Block Nuclear Launch by Autonomous Artificial Intelligence Act — would codify that policy, preventing the use of federal funds for an automated nuclear launch without “meaningful human control.”
Aiming to protect “future generations from potentially devastating consequences,” the bill was introduced by Senator Ed Markey (D-MA) and Representatives Ted Lieu (D-CA), Don Beyer (D-VA) and Ken Buck (R-CO). Senate co-sponsors include Jeff Merkley (D-OR), Bernie Sanders (I-VT), and Elizabeth Warren (D-MA). “As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons – not robots,” said Markey. “That is why I am proud to introduce the Block Nuclear Launch by Autonomous Artificial Intelligence Act. We need to keep humans in the loop on making life or death decisions to use deadly force, especially for our most dangerous weapons.”
Artificial intelligence chatbots (like the ever-popular ChatGPT, the more advanced GPT-4 and Google Bard), image generators and voice cloners have taken the world by storm in recent months. (Republicans are already using AI-generated images in political attack ads.) Various experts have voiced concerns that, if left unregulated, humanity could face grave consequences. “Lawmakers are often too slow to adapt to the rapidly changing technological environment,” Cason Schmit, Assistant Professor of Public Health at Texas A&M University, told The Conversation earlier this month. Although the federal government hasn’t passed any AI-based legislation since the proliferation of AI chatbots, a group of tech leaders and AI experts signed a letter in March requesting an “immediate” six-month pause on developing AI systems beyond GPT-4. Additionally, the Biden administration recently opened comments seeking public feedback about possible AI regulations.
“While we all try to grapple with the pace at which AI is accelerating, the future of AI and its role in society remains unclear,” said Rep. Lieu. “It is our job as Members of Congress to have responsible foresight when it comes to protecting future generations from potentially devastating consequences. That’s why I’m pleased to introduce the bipartisan, bicameral Block Nuclear Launch by Autonomous AI Act, which will ensure that no matter what happens in the future, a human being has control over the employment of a nuclear weapon – not a robot. AI can never be a substitute for human judgment when it comes to launching nuclear weapons.”
Given the current political climate in Washington, passing even the most common-sense of bills isn’t guaranteed. Nevertheless, perhaps a proposal as fundamental as “don’t let computers decide to obliterate humanity” will serve as a litmus test for how prepared the US government is to deal with this quickly evolving technology. | AI Policy and Regulations |
China plans new AI regulations after Alibaba, Baidu, Huawei launch tech
The Cyberspace Administration of China (CAC), China's internet regulator, proposed rules to govern artificial intelligence (AI) tools like OpenAI's ChatGPT on Tuesday.
“China supports the independent innovation, popularization and application and international cooperation of basic technologies such as AI algorithms and frameworks,” CAC said in the draft regulation published on its website.
“It also encourages the priority use of safe and reliable software, tools, computing, and data resources.”
The CAC would hold businesses accountable for the material produced by the services and compel them to undergo a government security evaluation before offering AI services.
According to the rules, content produced by these services cannot include any components that could undermine the state's authority, encourage secession, or disturb the social order.
The proposed regulations are more specific than the generic guidelines being debated in other jurisdictions, noted a Wall Street Journal (WSJ) article on Tuesday.
AI is a challenge for global governance
The proposed regulations state that businesses would be in charge of safeguarding user data and that data used by AI product developers to train their systems must adhere to Chinese legal requirements.
However, despite US restrictions on the purchase of the advanced semiconductors needed to train AI models, and China's own strict censorship regulations, Chinese internet giants like Alibaba, SenseTime, Baidu, and Huawei are moving forward with plans to integrate AI into their services.
Alibaba released Tongyi Qianwen, a large language model it plans to integrate into products such as its search engine, voice assistant, entertainment, and e-commerce.
A ChatGPT-like service called SenseChat and a collection of apps built on SenseNova, a sizable AI model system, were both released by SenseTime Group Inc.
Services based on Pangu, a group of sizable AI models that Huawei created in 2019, are available to enterprise clients in various sectors, including finance, pharmaceuticals, and meteorology.
Last month, Baidu, often described as China's Google, announced ERNIE Bot, a ChatGPT-like chatbot, with few complications.
Globally, governments are debating whether and how to regulate the emerging generation of generative AI tools.
China's proposed regulations may be more specific, but it is not alone in acting: Italy temporarily banned ChatGPT after discovering that the AI chatbot had inadvertently collected and saved user information, and the Biden administration in the US has started looking at whether restrictions on the tools are necessary.
"AI is a challenge for global governance," You Chuanman from the Chinese University of Hong Kong, a tech regulation specialist, told WSJ. "Governments from different countries should work together to deliver a global standard."
Previously, according to a report by Chinese state-affiliated media, Global Times, certain professionals in the sector have begun using ChatGPT and other AI-generated chatbot tools in their work. However, the Payment & Clearing Association of China on Monday urged practitioners to use these tools "cautiously." | AI Policy and Regulations |
Since the tech industry began its love affair with machine learning about a decade ago, US lawmakers have chattered about the potential need for regulation to rein in the technology. No proposal to regulate corporate AI projects has come close to becoming law—but OpenAI’s release of ChatGPT in November has convinced some senators there is now an urgent need to do something to protect people’s rights against the potential harms of AI technology.
At a hearing held by a Senate Judiciary subcommittee yesterday, attendees heard a terrifying laundry list of ways artificial intelligence can harm people and democracy. Senators from both parties spoke in support of the idea of creating a new arm of the US government dedicated to regulating AI. The idea even got the backing of Sam Altman, CEO of OpenAI.
“My worst fear is that we—the field, the technology, the industry—cause significant harm to the world,” Altman said. He also endorsed the idea of AI companies submitting their AI models to testing by outsiders and said a US AI regulator should have the power to grant or revoke licenses for creating AI above a certain threshold of capability.
A number of US federal agencies, including the Federal Trade Commission and the Food and Drug Administration, already regulate how companies use AI today. But Senator Peter Welch said his time in Congress has convinced him that Congress can’t keep up with the pace of technological change.
“Unless we have an agency that is going to address these questions from social media and AI, we really don't have much of a defense against the bad stuff, and the bad stuff will come,” says Welch, a Democrat. “We absolutely have to have an agency.”
Richard Blumenthal, a fellow Democrat who chaired the hearing, said that a new AI regulator may be necessary because Congress has shown it often fails to keep pace with new technology. US lawmakers’ spotty track record on digital privacy and social media was mentioned frequently during the hearing.
But Blumenthal also expressed concern that a new federal AI agency could struggle to match the tech industry’s speed and power. “Without proper funding you’ll run circles around those regulators,” he told Altman and his fellow witness from the industry, Christina Montgomery, IBM’s chief privacy and trust officer. Altman and Montgomery were joined by psychology professor turned AI commentator Gary Marcus, who advocated for the creation of an international body to monitor AI progress and encourage safe development of the technology.
Blumenthal opened the hearing with an AI voice clone of himself reciting text written by ChatGPT to highlight that AI can produce convincing results. The senators did not suggest a name for the prospective agency or map out its possible functions in detail. They also discussed less radical regulatory responses to recent progress in AI.
Those included endorsing the idea of requiring public documentation of AI systems’ limitations or the datasets used to create them, akin to an AI nutrition label, ideas introduced years ago by researchers like former Google Ethical AI team lead Timnit Gebru, who was ousted from the company after a dispute about a prescient research paper warning about the limitations and dangers of large language models.
Another change urged by lawmakers and industry witnesses alike was requiring disclosure to inform people when they’re conversing with a language model and not a human, or when AI technology makes important decisions with life-changing consequences. One effect of a disclosure requirement could be to reveal when a facial recognition match is the basis of an arrest or criminal accusation.
The Senate hearing follows growing interest from US and European governments, and even some tech insiders, in putting new guardrails on AI to prevent it from harming people. In March a group letter signed by major names in tech and AI called for a six-month pause on AI development; this month the White House called in executives from OpenAI, Microsoft and other companies and announced it is backing a public hacking contest to probe generative AI systems; and the European Union is currently finalizing a sweeping law called the AI Act.
IBM’s Montgomery yesterday urged Congress to take inspiration from the AI Act, which categorizes AI systems by the risks they pose to people or society and sets rules for—or even bans—them accordingly. She also endorsed the idea of encouraging self-regulation, highlighting her position on IBM’s AI ethics board, although at Google and Axon those structures have become mired in controversy.
Tech think tank the Center for Data Innovation said in a letter released after yesterday’s hearing that the US doesn’t need a new regulator for AI. “Just as it would be ill-advised to have one government agency regulate all human decision-making, it would be equally ill-advised to have one agency regulate all AI,” the letter said.
“I don’t think it’s pragmatic, and it’s not what they should be thinking about right now,” says Hodan Omaar, a senior analyst at the CDI.
Omaar says the idea of booting up a whole new agency for AI is improbable given that Congress has yet to follow through on other necessary tech reforms like the need for overarching data privacy protections. She believes it is better to update existing laws and allow federal agencies to add AI oversight to their existing regulatory work.
Guidance issued last summer by the Equal Employment Opportunity Commission and the Department of Justice, on how businesses that use hiring algorithms which may expect people to look or behave a certain way can stay in compliance with the Americans with Disabilities Act, shows how AI policy can overlap with existing law and involve many different communities and use cases.
Alex Engler, a fellow at the Brookings Institution, says he’s concerned that the US could repeat problems that sank federal privacy regulation last fall. The historic bill was scuppered by California lawmakers withholding their votes because the law would override the state’s own privacy legislation. “That’s a good enough concern,” Engler says. “Now is that a good enough concern that you're gonna say we're just not going to have civil society protections for AI? I don't know about that.”
Though the hearing touched on potential harms of AI ranging from election disinformation to conceptual dangers that don’t exist yet, like self-aware AI, the generative AI systems like ChatGPT that inspired the hearing got the most attention. Multiple senators argued they could increase inequality and monopolization. The only way to guard against that, said Cory Booker, a Democratic senator who has cosponsored AI regulation in the past and supported a federal ban on face recognition, is if Congress creates rules of the road. | AI Policy and Regulations |
Aug 31 (Reuters) - Rapid advances in artificial intelligence (AI) such as Microsoft-backed OpenAI's ChatGPT are complicating governments' efforts to agree laws governing the use of the technology.
Here are the latest steps national and international governing bodies are taking to regulate AI tools:
AUSTRALIA
* Seeking input on regulations
The government is consulting Australia's main science advisory body and considering next steps, a spokesperson for the industry and science minister said in April.
BRITAIN
* Planning regulations
The Financial Conduct Authority, one of several state regulators that has been tasked with drawing up new guidelines covering AI, is consulting with the Alan Turing Institute and other legal and academic institutions to improve its understanding of the technology, a spokesperson told Reuters.
Britain's competition regulator said in May it would start examining the impact of AI on consumers, businesses and the economy and whether new controls were needed.
CHINA
* Implemented temporary regulations
China has issued a set of temporary measures effective from Aug. 15 to manage the generative AI industry, requiring service providers to submit security assessments and receive clearance before releasing mass-market AI products.
EUROPEAN UNION
* Planning regulations
EU lawmakers agreed in June to changes in a draft of the bloc's AI Act. The lawmakers will now have to thrash out details with EU countries before the draft rules become legislation.
The biggest issue is expected to be facial recognition and biometric surveillance where some lawmakers want a total ban while EU countries want an exception for national security, defence and military purposes.
FRANCE
* Investigating possible breaches
France's privacy watchdog CNIL said in April it was investigating several complaints about ChatGPT after the chatbot was temporarily banned in Italy over a suspected breach of privacy rules.
France's National Assembly approved in March the use of AI video surveillance during the 2024 Paris Olympics, overlooking warnings from civil rights groups.
G7
* Seeking input on regulations
Group of Seven (G7) leaders meeting in Hiroshima, Japan, acknowledged in May the need for governance of AI and immersive technologies and agreed to have ministers discuss the technology as the "Hiroshima AI process" and report results by the end of 2023.
G7 nations should adopt "risk-based" regulation on AI, G7 digital ministers said after a meeting in April.
IRELAND
* Seeking input on regulations
Generative AI needs to be regulated, but governing bodies must work out how to do so properly before rushing into prohibitions that "really aren't going to stand up", Ireland's data protection chief said in April.
ISRAEL
* Seeking input on regulations
Israel has been working on AI regulations "for the last 18 months or so" to achieve the right balance between innovation and the preservation of human rights and civic safeguards, Ziv Katzir, director of national AI planning at the Israel Innovation Authority, said in June.
Israel published a 115-page draft AI policy in October and is collating public feedback ahead of a final decision.
ITALY
* Investigating possible breaches
Italy's data protection authority plans to review other artificial intelligence platforms and hire AI experts, a top official said in May.
JAPAN
* Investigating possible breaches
Japan expects to introduce by the end of 2023 regulations that are likely closer to the U.S. attitude than the stringent ones planned in the EU, an official close to deliberations said in July, as it looks to the technology to boost economic growth and make it a leader in advanced chips.
The country's privacy watchdog said in June it had warned OpenAI not to collect sensitive data without people's permission and to minimise the sensitive data it collects.
SPAIN
* Investigating possible breaches
Spain's data protection agency said in April it was launching a preliminary investigation into potential data breaches by ChatGPT. It has also asked the EU's privacy watchdog to evaluate privacy concerns surrounding ChatGPT.
UNITED NATIONS
* Planning regulations
The U.N. Security Council held its first formal discussion on AI in New York in July. The council addressed both military and non-military applications of AI, which "could have very serious consequences for global peace and security", U.N. Secretary-General Antonio Guterres said.
Guterres in June backed a proposal by some AI executives for the creation of an AI watchdog like the International Atomic Energy Agency, but noted that "only member states can create it, not the Secretariat of the United Nations".
The U.N. Secretary-General has also announced plans to start work by the end of the year on a high-level AI advisory body to regularly review AI governance arrangements and offer recommendations.
U.S.
* Seeking input on regulations
Washington D.C. district Judge Beryl Howell ruled on Aug. 21 that a work of art created by AI without any human input cannot be copyrighted under U.S. law, affirming the Copyright Office's rejection of an application filed by computer scientist Stephen Thaler on behalf of his DABUS system.
The U.S. Federal Trade Commission (FTC) opened in July an expansive investigation into OpenAI on claims that it has run afoul of consumer protection laws by putting personal reputations and data at risk.
Generative AI raises competition concerns and is a focus of the FTC's Bureau of Technology along with its Office of Technology, the agency said in a blog post in June.
Senator Michael Bennet wrote to leading tech firms in June to urge them to label AI-generated content and limit the spread of material aimed at misleading users. He had introduced a bill in April to create a task force to look at U.S. policies on AI.
Compiled by Alessandro Parodi and Amir Orusov in Gdansk; Editing by Kirsten Donovan, Mark Potter and Milla Nissi
| AI Policy and Regulations |
A tale of two AI futures
Two key U.S. Senate committees held contemporaneous artificial intelligence (AI) hearings on May 16 that covered different angles with varying degrees of partisanship. One focused on regulating private sector use of AI, while the other examined the improvement challenges of federal government AI use. A contextual takeaway from both hearings is clear: if Congress wants to regulate AI, it must comprehend the complexities and potential pitfalls that come with oversight of the transformative technology.
The Judiciary Subcommittee on Privacy, Technology, and the Law heard testimony from OpenAI CEO Sam Altman and others in a hearing titled “Oversight of A.I.: Rules for Artificial Intelligence.” The witnesses and senators discussed the promise and perils of AI with remarkable clarity. In a breath of bipartisan fresh air, senators from both sides of the aisle appeared to agree on the potential need for a federal regulatory response to mass-deployed AIs, such as the forthcoming ChatGPT 5 and the like.
Each senator who spoke recognized that the tech industry and Congress have an opportunity to collaboratively avoid the regulatory failures surrounding social media when it comes to AI. They warned that a laissez-faire approach to AI would risk an unprecedented invasion of privacy, manipulation of personal behavior and opinion, and even a destabilization of American democracy.
There was a consensus among the senators that many members of Congress do not know enough about AI, and that the subject matter would be difficult to regulate outside of a delegated context. Thus, perhaps a new independent agency with the authority to regulate AI via licensing and enforcement, or a newly empowered Federal Trade Commission (FTC) or Federal Communication Commission (FCC) could be the answer.
At the same time, a very different conversation was taking place in the Homeland Security & Government Affairs Committee. In a more partisan tone, there was a robust discussion on government data collection practices, politicized pressure on private industry, and excesses in the adjudication of what constitutes misinformation. One key takeaway from the hearing was the testimony of Stanford Law School Professor Daniel Ho, whose research team concluded that the federal government is severely lagging behind private industry in its AI expertise and has a long way to go to achieve best practices in its own use of AI.
These crucial Senate committee discussions give rise to a tremendously important question: How can an executive branch agency be expected to regulate AI if the federal government itself insufficiently understands the responsible use of AI?
To answer this, let’s first unpack what a hypothetical AI oversight agency might look like. A degree of federal AI regulation is needed because the marketplace, alone, will not solve the societal problems that will emerge from unguided AI development. Any agency that prospectively regulates AI will need to be given express delegations of authority, in light of the Supreme Court’s possible abrogation or elimination of the doctrine of Chevron deference to federal agencies’ gap-filling interpretations of their authorizing statutes.
Congress tends to regulate infrequently on the same subject. This is due to the challenge associated with fostering majorities on contentious issues and the adverse consequences that flow from ineffective policy choices.
The concept that appears to have initial bipartisan support is to establish a powerful federal agency with general authorities inside a branch of government that has very limited subject-matter expertise and an objectively poor record on both process legitimacy and its own use of the very technology it will be overseeing. This could be a recipe for severe politicization of AI.
Adding any additional power to the FTC, and to a lesser degree the FCC, will inject unnecessary partisan concerns into the discussion. The establishment of a new agency will be more likely to secure legislative consensus.
The best course is for Congress to keep its options open — to resist the impulse to delegate permanent authority to executive branch experts who simply do not exist right now. Instead, it should focus on maintaining structural constraints with a biannual reauthorization requirement for the new agency that regulates AI, requiring robust reporting and congressional oversight. Congress must employ its political will to set clear guardrails for how such an agency will oversee, enforce, and report on the executive branch’s use of AI, in addition to the use of AI by the private sector.
Congress can build on this moment of bipartisan AI policy, allowing innovation to flourish and America’s strategic advantage over global competitors to remain unhindered. If Congress chooses to continuously regulate AI through soft-touch and narrow legislation instead of passing an expansive statute and washing its hands of the details, we will all be better off for it.
Aram A. Gavoor is associate dean for academic affairs and professorial lecturer in law at the George Washington University Law School. He previously served for more than a decade in the U.S. Department of Justice and is an internationally recognized U.S. public law expert.
| AI Policy and Regulations |
OpenAI CEO Sam Altman, whose company has become one of the most lucrative ventures for the rollout of artificial intelligence, has also worked to become one of the new figureheads for AI regulation. It’s a hard line to walk, and while he managed to make a number of U.S. congresspeople smile and nod along, he hasn’t found the same success in Europe. He’s now been forced to clarify what his company’s plans are for keeping on outside the U.S.
During a stop in London, UK, on Wednesday, Altman told a crowd that if the EU keeps on the same tack with its planned AI regulations, it will cause his company some serious headaches. He said, “If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible.”
Altman rolled back that statement to some degree on Friday after returning home from his week-long world tour. He said that “we are excited to continue to operate here and of course have no plans to leave.”
While the White House has issued some guidance on combating the risks of AI, the U.S. is still miles behind on any real AI legislation. There is some movement within Congress, like the year-old Algorithmic Accountability Act and, more recently, a proposed “AI Task Force,” but in reality there’s nothing on the books that can deal with the rapidly expanding world of AI implementation.
The EU, on the other hand, modified its proposed AI Act to take into account modern generative AI like ChatGPT. Specifically, that bill could have huge implications for how large language models like OpenAI’s GPT-4 are trained on terabyte upon terabyte of user data scraped from the internet. The bloc’s proposed law could label AI systems as “high risk” if they could be used to influence elections.
Of course, OpenAI isn’t the only big tech company wanting to at least seem like it’s trying to get in front of the AI ethics debate. On Thursday, Microsoft execs did a media blitz to explain their own hopes for regulation. Microsoft President Brad Smith said during a LinkedIn livestream that the U.S. could use a new agency to handle AI. It’s a line that echoes Altman’s own proposal to Congress, though he also called for laws that would increase transparency and create “safety breaks” for AI used in critical infrastructure.
Even with a five-point blueprint for dealing with AI, Smith’s speech was heavy on hopes but feather light on details. Microsoft has been the most-ready to proliferate AI compared to its rivals, all in an effort to get ahead of big tech companies like Google and Apple. Not to mention, Microsoft is in an ongoing multi-billion dollar partnership with OpenAI.
On Thursday, OpenAI revealed it was creating a grant program to fund groups that could decide rules around AI. The fund would give out ten $100,000 grants to groups willing to do the legwork and create “proof-of-concepts for a democratic process that could answer questions about what rules AI systems should follow.” The company said the deadline for this program was just a month away, on June 24.
OpenAI offered some examples of what questions grant seekers should look to answer. One example was whether AI should offer “emotional support” to people. Another question was if vision-language AI models should be allowed to identify people’s gender, race, or identity based on their images. That last question could easily be applied to any number of AI-based facial recognition systems, in which case the only acceptable answer is “no, never.”
And there’s quite a few ethical questions that a company like OpenAI is incentivized to leave out of the conversation, particularly in how it decides to release the training data for its AI models.
Which goes back to the everlasting problem of letting companies dictate how their own industry can be regulated. Even if OpenAI’s intentions are, for the most part, driven by a conscious desire to reduce the harm of AI, tech companies are financially incentivized to help themselves before they help anybody else.
| AI Policy and Regulations |
A mobile billboard is seen near the U.S. Capitol on Tuesday. Tasos Katopodis/Getty Images for Accountable Tec
More than 20 tech industry leaders will meet Wednesday behind closed doors with U.S. senators as part of a closer look into how Congress can regulate artificial intelligence.
Tesla and X CEO Elon Musk, Meta CEO Mark Zuckerberg and Microsoft founder Bill Gates are among those attending. The leaders of several AI companies, including OpenAI CEO Sam Altman, will also join the discussion.
The gathering is part of a series being led by Senate Majority Leader Chuck Schumer and a bipartisan group of senators in a larger effort to craft groundbreaking AI law. Ahead of the first of his so-called "AI Insight forums," Schumer argued lawmakers must balance AI innovation in medicine, education and national security against the technology's risks.
"The only way we'll achieve this goal is by bringing a diverse group of perspectives together, from those who work every day on these systems, to those openly critical of many parts of AI and who worry about its effects on workers, on racial and gender bias, and more," Schumer said Tuesday from the Senate floor.
This would be one of the biggest gatherings of top U.S. tech leaders in recent memory, and it follows a series of all-senators AI meetings earlier this year that provided a baseline of information, including a classified briefing. The forums will be broader in subject matter, with more forward-looking discussions on possible legislative paths.
Wednesday's forums will take place in a private Senate meeting room over two sessions, one in the morning and one in the afternoon, that could span two to three hours each. A source familiar with the plans said the more than 20 tech experts are expected to address senators in attendance.
Senators will hear from the leaders of entertainment, labor and civil rights groups, including the head of the Motion Picture Association, the Writers Guild of America West, the American Federation of Teachers and the AFL-CIO.
Other tech leaders who will attend include Google CEO Sundar Pichai and the company's ex-CEO Eric Schmidt, Microsoft CEO Satya Nadella and IBM CEO Arvind Krishna.
An IBM spokesperson shared a preview of Krishna's remarks to the senators, which included a push for regulating AI risk but not AI algorithms, making AI creators and deployers accountable, and supporting open AI innovation.
"We should not create a licensing regime for AI," Krishna is expected to say. "A licensing agreement would inevitably favor large, well-funded incumbents and limit competition."
Ahead of Wednesday's meeting, AFL-CIO President Liz Shuler argued that workers must be central to AI policy.
"Public support for unions is at near record highs because workers are tired of being guinea pigs in an AI live experiment," Shuler said in a statement. "The labor movement knows AI can empower workers and increase prosperity – but only if workers are centered in its creation and the rules that govern it.
"Workers understand how to do our jobs better than any boardroom or algorithm. Bring us in as full partners in this transformation."
Despite the momentum, Congress faces an uphill battle crafting AI legislation.
Historically, lawmakers have struggled to regulate emerging technologies, from the internet to social media. AI is moving quickly, and Congress has a deficit of experts on AI, leaving many members to learn more about the technology as they simultaneously look to regulate it.
However, Schumer has argued they're doing the necessary work to catch up. New Mexico Democratic Sen. Martin Heinrich and Republican Sens. Mike Rounds, R-S.D., and Todd Young, R-Ind. are helping lead that charge.
"Congress must recognize two things: that this effort must be bipartisan, and we need outside help if we want to write effective AI policies," Schumer said Tuesday.
That outside help, Schumer argued, needs to include industry developers, experts, critics and ethicists, and members from the world of academia, defense and more.
"All of these groups, together in one room, talking about why Congress must act, what questions to ask, and how to build a consensus for safe innovation," Schumer said.
Schumer also faces obstacles from within Congress, with members on both sides of the aisle trying to tackle their own proposals to regulate AI. Multiple congressional committees hold jurisdiction on the issue, and Congress has easily hosted more than a dozen AI hearings with many more to come.
This, as House Republican Speaker Kevin McCarthy has already argued against over-regulation. McCarthy has said there's no need to create an agency to regulate AI, a popular idea among some Senate Democrats. | AI Policy and Regulations |
Half of Americans say Congress should take ‘swift action’ to regulate AI: poll
About half of Americans said Congress should be taking action to regulate artificial intelligence (AI) technology, according to a poll released Thursday.
Fifty-four percent of polled registered voters said Congress should take “swift action” to regulate the technology in a way that promotes privacy, fairness and safety and ensures “maximum benefit to society with minimal risks,” according to the poll conducted for the Omidyar Network-funded group the Tech Oversight Project.
Only 15 percent of respondents said that regulating AI will stifle innovation and put the U.S. at a competitive disadvantage, according to the poll shared exclusively with The Hill.
The tech industry has raised concerns around regulations stifling innovation and harming the US in global competition over AI. At the same time, national security and tech experts have been warning Congress to take action as generative AI tools, like the popular ChatGPT chatbot, enter the public market.
The poll also found that 41 percent of voters said Congress should be the driving force behind AI regulation, and just 20 percent said that tech companies, such as Google, Apple, Meta, Amazon and Microsoft, should be leading the way.
An additional 39 percent said they are not sure who should lead on AI regulations.
The poll was conducted by Change Research and surveyed 1,208 registered voters nationwide between April 28 and May 2. The margin of error is 3 percentage points.
As AI technology ramps up, both through AI powering automated systems and through generative AI tools, the government is grappling with a range of risks from inherent bias leading to discrimination to increased threats of the spread of misinformation.
As of now, guidance from the government, both through the White House’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, largely lays out voluntary guidelines for companies to follow. The administration also said Thursday it will invest $140 million into research and development of AI.
Senate Majority Leader Charles Schumer (D-N.Y.) last month also unveiled a proposal to create a framework for AI regulation in a way that aims to increase transparency and accountability.
As Congress mulls action, the Federal Trade Commission (FTC), the Civil Rights Division of the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB) and the Equal Employment Opportunity Commission (EEOC) put out a joint statement last month pledging to enforce existing laws that aim to uphold fairness and justice in response to issues posed by AI.
| AI Policy and Regulations |
(Bloomberg) -- An internal policy memo drafted by OpenAI shows the company supports the idea of requiring government licenses from anyone who wants to develop advanced artificial intelligence systems. The document also suggests the company is willing to pull back the curtain on the data it uses to train image generators.
The creator of ChatGPT and DALL-E laid out a series of AI policy commitments in the internal document following a May 4 meeting between White House officials and tech executives including OpenAI Chief Executive Officer Sam Altman. “We commit to working with the US government and policy makers around the world to support development of licensing requirements for future generations of the most highly capable foundation models,” the San Francisco-based company said in the draft.
The idea of a government licensing system co-developed by AI heavyweights such as OpenAI sets the stage for a potential clash with startups and open-source developers who may see it as an attempt to make it more difficult for others to break into the space. It’s not the first time OpenAI has raised the idea: During a US Senate hearing in May, Altman backed the creation of an agency that, he said, could issue licenses for AI products and yank them should anyone violate set rules.
The policy document comes just as Microsoft Corp., Alphabet Inc.’s Google and OpenAI are expected to publicly commit Friday to safeguards for developing the technology — heeding a call from the White House. According to people familiar with the plans, the companies will pledge to responsible development and deployment of AI.
OpenAI cautioned that the ideas laid out in the internal policy document will be different from the ones that will soon be announced by the White House, alongside tech companies. Anna Makanju, the company’s vice president of global affairs, said in an interview that the company isn’t “pushing” for licenses as much as it believes such permitting is a “realistic” way for governments to track emerging systems.
“It’s important for governments to be aware if super powerful systems that might have potential harmful impacts are coming into existence,” she said, and there are “very few ways that you can ensure that governments are aware of these systems if someone is not willing to self-report the way we do.”
Makanju said OpenAI supports licensing regimes only for AI models more powerful than its current GPT-4 model and wants to ensure smaller startups are free from too much regulatory burden. “We don’t want to stifle the ecosystem,” she said.
OpenAI also signaled in the internal policy document that it’s willing to be more open about the data it uses to train image generators such as DALL-E, saying it was committed to “incorporating a provenance approach” by the end of the year. Data provenance — a practice used to hold developers accountable for transparency in their work and where it came from — has been raised by policy makers as critical to keeping AI tools from spreading misinformation and bias.
The commitments laid out in OpenAI’s memo track closely with some of Microsoft’s policy proposals announced in May. OpenAI has noted that, despite receiving a $10 billion investment from Microsoft, it remains an independent company.
The firm disclosed in the document that it’s conducting a survey on watermarking — a method of tracking the authenticity of and copyrights on AI-generated images — as well as detection and disclosure in AI-made content. It plans to publish results.
The company also said in the document that it was open to external red teaming — in other words, allowing people to come in and test vulnerabilities in its system on multiple fronts including offensive content, the risk of manipulation and misinformation and bias. The firm said in the memo that it supports the creation of an information-sharing center to collaborate on cybersecurity.
In the memo, OpenAI appears to acknowledge the potential risk that AI systems pose to job markets and inequality. The company said in the draft that it would conduct research and make recommendations to policy makers to protect the economy against potential “disruption.”
--With assistance from Anna Edgerton and Courtney Rozen.
| AI Policy and Regulations |
The world may not be “that far from potentially scary” artificial intelligence (AI) tools, the CEO of OpenAI said on the weekend.
Sam Altman, whose company created the wildly popular ChatGPT, was giving his thoughts on the current and future state of AI in a Twitter thread, following the explosion in public interest in generative AI tools.
Some experts, however, have told Euronews Next that rather than “potentially scary” AI applications being around the corner, we are currently living in a “dystopic present” thanks to the use of AI in sensitive settings that have a real impact on people’s opportunities.
Altman was speaking out following the integration of ChatGPT in Microsoft’s Bing search engine, which a number of tech experts and journalists put to the test - with some terrifying results.
During a two-hour chat with a New York Times tech columnist, Bing professed its love for him, tried to break up his marriage, and told him "I want to be alive".
Others have reported threats of violence and blackmail emanating from the chatbot, which is still in its testing phase.
Altman said in his Twitter thread “the adaptation to a world deeply integrated with AI tools is probably going to happen pretty quickly,” while admitting the tools were “still somewhat broken”.
"Regulation will be critical and will take time to figure out," he said, adding that “although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones".
So, what do AI ethics experts - the people who are thinking ahead and trying to shape the future integration of AI into our everyday lives - think about this?
'The dystopic present'
While Altman claims "current-generation AI tools aren’t very scary," some experts disagree.
Sarah Myers West, Managing Director of the AI Now Institute, told Euronews Next that "in many senses, that’s already where we are," with AI systems already being used to exacerbate "longstanding patterns of inequality".
AI Now is an American research institute studying the social implications of artificial intelligence, putting them at the forefront of thinking around the challenges that AI poses to society.
"They're used in very sensitive decision-making processes, often without very little oversight or accountability. So I think that we're already seeing that unfold around us. And that's exactly what's animating the drive to look at policy approaches to shape the direction that it takes," Myers West said.
These sensitive decision-making processes include hiring processes and education.
"One area, just as one example of many, is the use of emotion or affect recognition. Which is essentially the claim that you can infer people's inner emotional states or mental states from their facial features, and that there are particular AI systems that can read people's emotional states and even their personality traits," Amba Kak, AI Now’s Executive Director, said.
These AI systems are based on scientific foundations that are “shaky at best,” and they are “actually shaping people’s access to opportunity in real-time," she added.
"So, there's an urgent need to restrict these systems".
Kak and Myers West both push back on the idea of a dystopian future, as for them, in some ways, we are living in a “dystopic present”.
"Yesterday is the right time to introduce friction into that process to redistribute that power," argues Myers West.
"Let's say we accept that these technologies are a kind of inevitable future," said Kak.
"I think we're also then ceding ground to the fact that a handful of tech companies, would essentially have tremendous and unjustifiable control and power over societies and over people's lives, and over how eventually - and I don't think this is hyperbolic to say - even the autonomy we have to think, given just how much algorithms are shaping our information flows and so many aspects of our lives".
To say that AI is not currently regulated would, however, be a misconception, Kak explained.
While the EU and the US are drawing up their AI regulatory frameworks, there are at least indirect regulations already in place.
The data and computational infrastructures that make up the components of current AI technologies are "already regulated at many different levels," for example with data protection laws in the EU.
Other kinds of AI systems are already regulated in many countries, especially regarding facial recognition and biometrics, she added.
Regulation means the ability to shape the direction these technologies take us in, Kak says, acting "less as a kind of constraining force and more as a shaping force in terms of how technologies develop".
What’s coming in terms of regulation, and why?
According to the Organisation for Economic Co-operation and Development’s (OECD) AI Policy Observatory, there are already 69 countries and territories with active AI policy initiatives. Most significantly, the EU is currently drafting its own AI Act, which would be the first law on AI put in place by a major regulator.
Currently, the act divides AI into four risk-based categories, with those posing minimal or no risk to citizens - such as spam filters - being exempt from new rules.
Limited-risk applications include things like chatbots and will require transparency to ensure users know they are interacting with an AI.
High risk could include using AI for facial recognition, legal matters, or sorting CVs during employment processes. These could cause harm or limit opportunities, so they will face higher regulatory standards.
AI deemed an unacceptable risk - in other words, systems that are a clear threat to people - "will be banned," according to the European Commission.
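To make the tiered structure concrete, here is a minimal, purely illustrative sketch in Python of how the four risk categories and the obligations described above could be expressed as a simple lookup. The example applications and obligation labels are simplified paraphrases of the descriptions in this article and the draft act, not legal text.

# Illustrative only: simplified mapping of the draft AI Act's risk tiers
# to example applications and obligations, as summarised above.
RISK_TIERS = {
    "minimal": {"examples": ["spam filter"],
                "obligation": "exempt from the new rules"},
    "limited": {"examples": ["chatbot"],
                "obligation": "transparency: users must know they are interacting with an AI"},
    "high": {"examples": ["facial recognition", "CV sorting"],
             "obligation": "higher regulatory standards"},
    "unacceptable": {"examples": ["social scoring"],
                     "obligation": "banned"},
}

def obligation_for(application: str) -> str:
    """Return the simplified obligation attached to an example application."""
    for tier, info in RISK_TIERS.items():
        if application in info["examples"]:
            return f"{tier}: {info['obligation']}"
    return "unclassified: would need a case-by-case risk assessment"

for app in ["spam filter", "chatbot", "CV sorting", "social scoring"]:
    print(app, "->", obligation_for(app))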
Thierry Breton, the European Commissioner for the Internal Market, recently said the sudden rise of the popularity of applications like ChatGPT and the associated risks underscore the urgent need for rules to be established.
According to Francesca Rossi, an IBM fellow and the IBM AI Ethics Global Leader, “companies, standard bodies, civil society organisations, media, policymakers, all AI stakeholders need to play their complementary role” in achieving the goal of making sure AI is trustworthy and used responsibly.
"We are supportive of regulation when it uses a 'precision' risk-based approach to AI applications, rather than AI technology: applications which are riskier should be subject to more obligations," she told Euronews Next.
“We are also supportive of transparency obligations that convey the capabilities and limitations of the technology used,” she added, noting this is the approach the EU is taking with its AI Act.
She champions a company-wide AI ethics framework at IBM, a company that is supporting the rapid transformation towards the use of AI in society, "which always comes with questions, concerns, and risks to be considered and suitably addressed".
"A few years ago we could not imagine many of the capabilities that AI is now supporting in our personal lives and in the operations of many companies," Rossi said.
Like the representatives of the AI Now Institute, she believes "we as society and individuals" must steer the trajectory of AI development "so it can serve human’s and the planet's progress and values".
AI could 'disrupt the social order'
One of the particular fears expressed by Kak and Myers West about the rollout of AI systems in society was that the negative or positive impacts will not be distributed evenly.
"I feel like sometimes it might appear as if everybody will be equally impacted by the negatives, by the harms of technology, when actually, that's not true," said Kak.
"The people building the technology and people who inhabit similar forms of privilege, whether that's race, privilege, class privilege, all of these things, it feels like those are people that are unlikely to be as harmed by seeing a racist tiering algorithm. And so the question to ask is not just will AI benefit humanity, but who will it work for and who will it work against?"
This is also an area of interest for Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance in Berlin.
For her, the boom in AI could turn out to be a period of technological progress that "disrupts the current social order" while leaving some people behind.
"I think society's only stable when we produce those kinds of contexts, that people have an idea of where they belong and how they fit in," she told Euronews Next.
"And they're proud of their job and they're willing to go and compete for this, and they make enough money to be reasonably stable. And so what I'm really worried about is that I just think we're probably going through these periods when technology disrupts the social order we had".
“In the long term, if you aren't keeping people happy and interested and engaged and healthy, and you are adequately well paid and everything else, you're not going to have a secure society".
Writing on her blog at the end of 2022, regarding the question of her biggest concerns around AI ethics, Bryson said the biggest challenges involving AI are around digital governance.
"Are we using technology in a way that is safe, just, and equitable? Are we helping citizens, residents, and employees flourish?" she asked.
With the EU still fleshing out its AI Act ahead of presenting it to the EU Parliament at the end of March, these questions may remain unanswered for some time.
In the meantime, Myers West wants to emphasise that "we have tremendous scope to shape the direction of where our technological future takes us".
"I think that it's really important that these policy conversations proceed in exactly that vein, ensuring they're working in the interests of the broader public and not just in the imaginations of those who are building them and profit from them," she said. | AI Policy and Regulations |
Big Tech lobbying on AI regulation as industry races to harness ChatGPT popularity
As the technology industry races to create tools harnessing generative artificial intelligence (the technology that powers AI chatbots like ChatGPT), Big Tech companies have also rushed to K Street to weigh in on potential regulation of these novel tools.
In the first three months of 2023, 123 companies, universities and trade associations lobbied the federal government on issues relating to artificial intelligence, an OpenSecrets analysis of federal lobbying disclosures found. They collectively spent roughly $94 million lobbying on AI and other issues from January through March 2023, though it is not possible to determine how much of that spending went specifically toward AI-related lobbying.
The number of entities lobbying on issues related to AI boomed in recent years, from single digits a decade ago to 30 in 2017 to 158 last year, an OpenSecrets analysis found.
Big Tech companies Amazon, Microsoft, Oracle, Google’s parent company Alphabet Inc., IBM and Meta were among those that reported lobbying on AI issues. Silicon Valley giants Alphabet, Meta and Microsoft laid off thousands of employees in recent months to, in part, focus on their in-house AI projects, though teams focused on ethical AI development were among those laid off.
Microsoft announced earlier this year that it plans to invest $10 billion in ChatGPT’s creator, OpenAI, in an effort to integrate OpenAI’s technology into its Bing search engine, Azure cloud service and GitHub coding tools, among other uses. Microsoft, which also invested in OpenAI in 2019 and 2021, spent $2.4 million on lobbying, according to quarterly reports, including on issues related to AI and facial recognition.
In an effort to keep up with rivals Microsoft and OpenAI, Google recently rolled out its own artificially intelligent chatbot dubbed Bard. The chatbot was released in March despite Google describing it as an “early experiment,” and a Google employee who tested the tool dubbed Bard a “pathological liar,” Bloomberg reported. Alphabet Inc. spent $3.4 million in lobbying from January through March this year, including on issues relating to AI principles, generative AI, research and development on AI, machine learning and quantum information science.
Top executives of Alphabet Inc., Microsoft, OpenAI and AI startup Anthropic will meet with Vice President Kamala Harris and other top administration officials today to discuss AI-related concerns, including misinformation, bias and privacy, Reuters reported.
ChatGPT attracted U.S. legislators’ attention after becoming the fastest growing consumer application in history just two months after it launched, reaching 100 million monthly users in January. In April, President Joe Biden said that whether AI is dangerous remains to be seen, but it was the companies’ responsibility to make sure their products were safe.
Meta was among the tech giants that shifted priorities to catch up with generative AI after an underwhelming metaverse initiative two years ago. The social network conglomerate — which owns Facebook, Instagram and WhatsApp — has laid out plans to integrate AI-powered tools such as image generation and artificially intelligent chat across its platforms. Meta spent $4.6 million in lobbying expenses in the first quarter of the year, including for “continued conversations on Artificial Intelligence,” among other issues such as cybersecurity, election integrity and misinformation policies.
Software giant Oracle spent $3.1 million to lobby on AI and machine learning policy, research and development, among other issues related to defense, the supply chain and workforce. The Texas-based company has become one of the industry frontrunners in the race to catch up with ChatGPT, providing cloud computing for AI startups.
In April, Amazon announced that it will also be joining the generative AI race by making two new AI language models available through Amazon Web Services, the company’s cloud platform. The company spent roughly $5 million to lobby Congress in the year’s first three months on issues including AI and cloud security.
The auto company General Motors, which announced plans to integrate ChatGPT into its vehicles as driver assistants in March, lobbied on issues related to AI, electrification and autonomous vehicles, among other things. Their total spending on lobbying for the first quarter of 2023 was $5.5 million.
The U.S. Chamber of Commerce, the largest lobbying group in the country representing business interests, spent $19 million on lobbying in the year’s initial quarter. Its lobbying efforts included, but were not limited to, establishing task forces on AI and financial technology in the House Committee on Financial Services, implementing the National Artificial Intelligence Act, drafting automated vehicle bills, and other national AI-related bills and executive orders, as well as international AI policy and the European Union’s Artificial Intelligence Act.
Lobbyists for insurance companies Zurich Insurance Group and State Farm Insurance also reported lobbying to further AI discussions this year. State Farm specifically lobbied for “Congressional efforts to better understand commercial use of artificial intelligence and its impact on consumers.”
Higher educational institutions such as Carnegie Mellon University lobbied to support the Army AI Integration Center and research related to Distributed AI applications for defense, among other issues, according to disclosures. At least ten other universities including Case Western Reserve, Vanderbilt, Harvard and Stanford also spent on lobbying around AI research related issues.
But as tech giants and other groups go all in on AI systems, some industry insiders fear that the technology is scaling up too fast, before it is properly understood and can be controlled.
In late March, over a thousand tech leaders, professors and researchers working in artificial intelligence signed an open letter warning that AI technologies “pose profound risks to society and humanity.” The letter urged AI labs to pause the development of their most advanced technologies so they can be better understood. The letter was published just two weeks after the San Francisco start-up OpenAI unveiled GPT-4, the latest version of their chatbot ChatGPT.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter by the Future of Life Institute, a nonprofit organization, read. The Future of Life Institute spent $50,000 in the first quarter of 2023 to lobby for provisions and funding that would ensure “trustworthy artificial intelligence” development. The nonprofit is backed by Elon Musk, who was one of OpenAI’s co-founders and previously invested in the company before a fallout with its other founders. Musk has since been working to launch his own AI start-up, X.AI, to take on OpenAI.
In March, the tech ethics group Center for Artificial Intelligence and Digital Policy called on the U.S. Federal Trade Commission to open an investigation into OpenAI and stop it from releasing new ChatGPT models until guidelines were established, dubbing GPT-4 “biased, deceptive, and a risk to privacy and public safety.” The group also urged AI regulations, including laws to ensure algorithmic transparency.
Geoffrey Hinton, who pioneered the neural network technology that became the foundation for today’s AI systems and recently quit his job of more than a decade at Google to warn about the risks of the technology he helped create, said in a New York Times interview that he fears the AI race between tech companies will keep escalating without some sort of regulation.
The disruptive technology can flood the internet with fake imagery and text in the short run, and could later replace human workers, Hinton warned. IBM, which spent $1.5 million in the year’s first quarter to lobby on issues related to, but not limited to, emerging technologies including blockchain, cloud computing, 5G, AI and facial recognition, made headlines on Monday for pausing hiring for roles it expects could be replaced by AI, affecting some 7,800 workers in coming years.
Down the road, Hinton fears, AI can slip outside our control and potentially threaten humanity by learning and executing malicious behavior that its creators didn’t expect.
“I don’t think they should scale this up more until they have understood whether they can control it,” Hinton told The New York Times. | AI Policy and Regulations
Technology and innovation are nothing new to the entertainment business. Hollywood has always been at the forefront of using innovative techniques and technologies to create engaging media, from the advent of sound to the development of computer-generated imagery. However, the recent boom in AI has raised alarm bells among working performers who worry that their jobs are at risk from new technologies. This article looks at the effects of AI on the entertainment industry and how actors are reacting to them.
The entertainment field has seen major AI advancements in recent years. AI algorithms can now generate images, videos, and even written text that look and sound almost as good as anything made by a human. Movies and TV shows such as The Mandalorian and The Irishman have already used this technology to make actors look younger.
Synthetic voices and faces are also being developed with the help of AI. DALL-E, an artificial intelligence program, can generate images from text descriptions, and GPT-3 can write text that sounds human. These tools can be used to create all-new characters or to cast actors in specific roles.
Actors and performers are worried about the increasing prevalence of AI in the entertainment industry. They fear their jobs could be automated away and become obsolete. This is a valid concern: AI-based tools capable of producing realistic images and text are already available. One well-known example is ChatGPT, an AI-based chatbot that can simulate human conversation.
The use of their digital likenesses has also raised concerns among actors. Deepfake technology has made it possible to produce convincing fake videos and photographs of real people. This tech could be used to produce maliciously misleading videos featuring actors.
Hollywood actors have gone on strike partly over fears of being replaced by artificial intelligence. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) has joined the Writers Guild of America (WGA) in demanding AI regulations to protect writers and their works.
More than two months have passed since the strike began, and neither side is willing to budge. Both the WGA and SAG-AFTRA have made it clear that they do not want their members’ contractually protected works to be used to train artificial intelligence.
The Alliance of Motion Picture and Television Producers, the trade group representing Hollywood studios, has offered actors a “groundbreaking AI proposal” to safeguard their digital likenesses. This proposal includes a demand for actors’ approval before any digital replicas or alterations of their performances can be made. SAG-AFTRA, however, has rejected this proposal on the grounds that it does not go far enough to protect the rights of actors.
Hollywood’s directors’ union, the Directors Guild of America (DGA), has successfully negotiated AI safeguards into its contract. According to the DGA’s official position, “duties performed by DGA members must be assigned to a person,” and “generative artificial intelligence does not constitute a person.” This safeguard prevents studios from using AI-based tools to replace the guild’s members.
Some artists in the entertainment industry have taken legal action against AI. Sarah Silverman, a comedian, and two authors have sued OpenAI and Facebook parent company Meta for copyright infringement.
The lawsuit alleges that both companies illegally copied and distributed summaries of the plaintiffs’ books. This legal battle illustrates the growing anxiety among artists about the possibility of their work being copied and remixed by software powered by artificial intelligence.
Hollywood’s embrace of AI is a trend that is not going away anytime soon. AI-based tools are already widely used in the entertainment industry, and their prevalence is only expected to grow as the technology advances. But that doesn’t mean acting will disappear.
Instead, performers will need to adjust to the new realities of the business. To stay relevant, they may need to acquire new skills, such as working with AI-powered tools. They may also have to negotiate more favorable contract terms for the use of their digital likenesses.
In conclusion, actors and entertainers are worried that their jobs will be automated away as AI proliferates in the entertainment industry. But acting is not becoming obsolete; performers will need to adjust to the new realities of their field and push for stronger contractual protections.
The legal fight over AI-based tools is also likely to continue as artists work to prevent the unauthorized use of their work. AI will only become more entrenched in Hollywood as the technology advances, and actors and performers will have to accept that new reality and make the necessary adjustments.
First reported on NBC News
Frequently Asked Questions
Q1: How has AI impacted the entertainment industry?
AI has brought significant advancements to the entertainment industry. It enables the creation of realistic images, videos, and text that rival human-made content. Movies and TV shows have used AI to make actors appear younger, and AI algorithms can generate synthetic voices and faces. These technologies have the potential to reshape the industry and create new possibilities for character creation and storytelling.
Q2: What concerns do actors have regarding AI in the entertainment industry?
Actors are concerned about the automation of their jobs and the potential obsolescence of their roles due to AI. The ability of AI to generate photo- and text-realistic outputs raises worries that their performances could be replicated or replaced by AI-generated content. The use of deepfake technology has also raised concerns about the unauthorized use of actors’ digital likenesses in misleading or malicious ways.
Q3: What actions have actors taken in response to AI in the industry?
Actors have gone on strike, demanding AI regulations to protect their rights and works. Organizations like the Writers Guild of America (WGA) and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) have joined forces to advocate for AI regulations. The negotiation between the industry trade group, the Alliance of Motion Picture and Television Producers, and the actors’ unions has been ongoing, with disputes over the protection of actors’ digital likenesses and their approval rights.
Q4: What safeguards have been negotiated for directors in relation to AI?
The Directors Guild of America (DGA) has successfully negotiated safeguards into its contract to address AI concerns. The DGA stipulates that duties performed by its members must be assigned to a person, stating that “generative artificial intelligence does not constitute a person.” This provision helps prevent the replacement of directors with AI-based tools.
Q5: Are there legal battles concerning AI in the entertainment industry?
Yes, legal action has been taken by artists against companies involved in AI. Comedian Sarah Silverman and two authors have sued OpenAI and Facebook’s parent company, Meta, for copyright infringement. They allege that the defendants illegally copied and distributed summaries of their books, highlighting concerns about the unauthorized use and remixing of their work by AI-powered software.
Q6: How should actors adapt to the growing presence of AI in the industry?
Actors need to adjust to the new realities of the industry by acquiring new skills and adapting to the use of AI-powered tools. Embracing AI and understanding its applications can help actors remain relevant and navigate the changing landscape. Additionally, securing favorable contractual protections for the use of their digital likenesses is essential in safeguarding their rights and interests.
Q7: Will acting be replaced by AI in the entertainment industry?
While AI is transforming the industry, acting is not expected to be completely replaced. The human element, emotions, and creativity that actors bring to their performances are still highly valued. However, actors need to adapt and embrace the opportunities that AI brings, collaborating with technology to enhance storytelling and create innovative experiences.
Q8: What is the future outlook for AI in the entertainment industry?
AI’s role in the entertainment industry is expected to continue growing as technology advances. AI-powered tools are already widely used and are likely to become more prevalent. It will be crucial for artists to strike a balance between embracing AI’s potential and protecting their rights and creative works. The industry will continue to navigate the challenges and opportunities presented by AI in the pursuit of innovative and engaging entertainment experiences. | AI Policy and Regulations
Senate Majority Leader Chuck Schumer will host tech leaders including Tesla CEO Elon Musk, Meta Platforms CEO Mark Zuckerberg and Alphabet CEO Sundar Pichai at an artificial intelligence forum on Sept. 13, Axios reported on Monday, citing sources.
The closed-door forum, which is expected to last two to three hours, will also feature OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella, according to Axios.
Schumer previously hinted at the forum in June, indicating that it would “lay down a new foundation for AI policy.”
“We need the best of the best sitting at the table: the top AI developers, executives, scientists, advocates, community leaders, workers, national security experts – all together in one room, doing years of work in a matter of months,” Schumer has said, according to the Senate Democrats’ website. | AI Policy and Regulations
Congress must get ahead on AI legislation before it’s too late
AI is a mixed bag. For all its great promise, this new technology comes with serious pitfalls.
As a country, we will only be able to reap the benefits of AI if we can ensure that the American people have broad trust that AI systems are working for them, augmenting, rather than degrading, human potential. Therefore, the overarching question for policymakers should be: How do we wring the benefits out of AI, like better medical diagnosis and treatment, while neutralizing its risks, such as algorithmic discrimination?
Amid a flurry of recent hearings and announcements, Congress appears keen to pass something having to do with reining in artificial intelligence. Since 2018, Congress has actually racked up an impressive bipartisan record of AI legislation, laying a foundation upon which policymakers can build to answer the questions above and others.
The challenge is that while technological change is rapid, Congress moves slowly.
As policymakers gear up for years of debate about comprehensive AI legislation, we risk a situation where — in the long interim between today and any enactment of some public law — nothing is done to channel AI in ways that maximize its benefits and reduce its harms. But neither should Congress rush to pass legislation that might have unrecognized consequences for a technology that’s becoming vital to our national security and national competitiveness.
Fortunately, there is a middle ground. Congress can walk and chew gum at the same time — while the country waits for comprehensive AI legislation, policymakers can still respond to the AI moment in ways that serve the dual mission of wringing benefits while mitigating risks.
Here are four consensus-building proposals Congress can swiftly pass to do that:
First, Congress should make clear that all civil rights laws apply to decisions made by algorithms just as they apply to decisions made by people — discrimination, whether by man or machine, is illegal. In doing so, Congress should also invest more in the National Institute of Standards and Technology (NIST) to expand its work to solve the thorniest of AI problems. NIST needs more resources for AI testing, evaluation and verification, including for identifying bias and discrimination. Fortunately, we already have a proposal to do just that.
Second, Congress should work with the Biden administration to give them the resources and authorities they need to build the National AI Research Resource (NAIRR), a set of shared computing and data resources made available to researchers throughout the United States. You shouldn’t have to work in Silicon Valley to get access to the tools necessary for the next generation of AI innovation. By democratizing access to AI research tools and computing power with the NAIRR, we can help ensure more researchers are participating in efforts to make AI safer and more trustworthy.
Third, Congress should pass the Deepfake Task Force Act, which narrowly missed being included in last year’s national defense authorization despite enjoying unanimous support in the Homeland Security and Governmental Affairs Committee. Deepfakes are hyper-realistic AI-generated images or videos depicting events that did not occur, and they are used in many concerning ways, from pornography to elections. By fostering collaboration between industry and government, this bill would help develop standards to ensure the provenance of digital content, thereby reducing the risks associated with deepfakes.
Fourth, Congress should build on the success of the National Security Commission on AI (NSCAI) by creating a new Commission on the Future of Work. The NSCAI was a successful nonpartisan commission created by Congress in 2018 to develop and recommend policies to hone the United States’ competitive edge concerning AI. Thanks to the NSCAI’s practicality and rigor, many proposals have become law, including their idea to build the NAIRR.
Undoubtedly, AI will transform our economy, including the nature of work. For that reason, Congress should convene experts with various viewpoints and backgrounds to develop concrete policies to ensure that our economy harnesses AI in ways that create jobs, protect the dignity of workers and promote overall national competitiveness.
For the past few years, and under presidents of two different parties, Congress has passed a number of bipartisan AI-related laws. There is no easy red versus blue partisan breakdown when it comes to something as new and transformative as AI. This is an American challenge. The more Congress can continue to work collaboratively and build consensus around quality AI policy proposals, the easier it will be to tackle the ever more complex challenges posed by this technology.
The evolution of AI and its applications will not wait for policymakers. Congress needs to do its part in developing sensible guidelines now.
Rob Portman, a fellow at the American Enterprise Institute and former United States senator from Ohio, cofounded and cochaired the AI Caucus in the Senate.
Sam Mulopulos is currently an Ian Axford Fellow in Public Policy and a former senior staffer to Portman. | AI Policy and Regulations
The government has set out plans to regulate artificial intelligence with new guidelines on "responsible use".
Describing it as one of the "technologies of tomorrow", the government said AI contributed £3.7bn ($5.6bn) to the UK economy last year.
Critics fear the rapid growth of AI could threaten jobs or be used for malicious purposes.
The term AI covers computer systems able to do tasks that would normally need human intelligence.
This includes chatbots able to understand questions and respond with human-like answers, and systems capable of recognising objects in pictures.
A new white paper from the Department for Science, Innovation and Technology proposes rules for general purpose AI, which are systems that can be used for different purposes.
Technologies include, for example, those which underpin chatbot ChatGPT.
As AI continues developing rapidly, questions have been raised about the future risks it could pose to people's privacy, their human rights or their safety.
There is concern that AI can display biases against particular groups if trained on large datasets scraped from the internet which can include racist, sexist and other undesirable material.
AI could also be used to create and spread misinformation.
As a result many experts say AI needs regulation.
However AI advocates say the tech is already delivering real social and economic benefits for people.
And the government fears organisations may be held back from using AI to its full potential because a patchwork of legal regimes could cause confusion for businesses trying to comply with rules.
Instead of giving responsibility for AI governance to a new single regulator, the government wants existing regulators - such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority - to come up with their own approaches that suit the way AI is actually being used in their sectors.
These regulators will be using existing laws rather than being given new powers.
Michael Birtwistle, associate director at the Ada Lovelace Institute, which carries out independent research, said he welcomed the idea of regulation but warned of "significant gaps" in the UK's approach which could leave harms unaddressed.
"Initially, the proposals in the white paper will lack any statutory footing. This means no new legal obligations on regulators, developers or users of AI systems, with the prospect of only a minimal duty on regulators in future.
"The UK will also struggle to effectively regulate different uses of AI across sectors without substantial investment in its existing regulators," he said.
The white paper outlines five principles that the regulators should consider to enable the safe and innovative use of AI in the industries they monitor:
• Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
• Transparency and "explainability": organisations developing and deploying AI should be able to communicate when and how it is used and explain a system's decision-making process in an appropriate level of detail that matches the risks posed by the use of AI
• Fairness: AI should be used in a way which complies with the UK's existing laws, for example on equalities or data protection, and must not discriminate against individuals or create unfair commercial outcomes
• Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
• Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI
Over the next year, regulators will issue practical guidance to organisations to set out how to implement these principles in their sectors.
Science, innovation and technology secretary Michelle Donelan said: "Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely."
But Simon Elliott, a partner at the law firm Dentons, told the BBC the government's approach was "light-touch", making the UK "an outlier" against global trends in AI regulation.
China, for example, has taken the lead in moving AI regulations past the proposal stage with rules that mandate companies notify users when an AI algorithm is playing a role.
"Numerous countries globally are developing or passing specific laws to address perceived AI risks - including algorithmic rules passed in China or the USA," continued Mr Elliott.
He warned that consumer groups and privacy activists will have concerns about the risks to society "without detailed, unified regulation."
He is also worried that the UK's regulators could be burdened with "an increasingly large and diverse" range of complaints, when "rapidly developing and challenging" AI is added to their workloads.
In the EU, the European Commission has published proposals for regulations titled the Artificial Intelligence Act which would have a much broader scope than China's enacted regulation.
"AI has been around for decades but has reached new capacities fuelled by computing power," Thierry Breton, the EU's Commissioner for Internal Market, said in a statement.
The AI Act aims to "strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use," Mr Breton added.
Meanwhile, in the US, the proposed Algorithmic Accountability Act of 2022 would require companies to assess the impacts of AI. | AI Policy and Regulations
Artificial intelligence's newest sensation — the gabby chatbot-on-steroids ChatGPT — is sending European rulemakers back to the drawing board on how to regulate AI.
The chatbot dazzled the internet in past months with its rapid-fire production of human-like prose. It declared its love for a New York Times journalist. It wrote a haiku about monkeys breaking free from a laboratory. It even got to the floor of the European Parliament, where two German members gave speeches drafted by ChatGPT to highlight the need to rein in AI technology.
But after months of internet lolz — and doomsaying from critics — the technology is now confronting European Union regulators with a puzzling question: How do we bring this thing under control?
The technology has already upended work done by the European Commission, European Parliament and EU Council on the bloc’s draft artificial intelligence rulebook, the Artificial Intelligence Act. The regulation, proposed by the Commission in 2021, was designed to ban some AI applications like social scoring, manipulation and some instances of facial recognition. It would also designate some specific uses of AI as “high-risk,” binding developers to stricter requirements of transparency, safety and human oversight.
The catch? ChatGPT can serve both the benign and the malignant.
This type of AI, called a large language model, has no single intended use: People can prompt it to write songs, novels and poems, but also computer code, policy briefs, fake news reports or, as a Colombian judge has admitted, court rulings. Other models trained on images rather than text can generate everything from cartoons to false pictures of politicians, sparking disinformation fears.
In one case, the new Bing search engine powered by ChatGPT's technology threatened a researcher with "hack[ing]" and "ruin." In another, an AI-powered app to transform pictures into cartoons called Lensa hypersexualized photos of Asian women.
“These systems have no ethical understanding of the world, have no sense of truth, and they're not reliable,” said Gary Marcus, an AI expert and vocal critic.
These AIs "are like engines. They are very powerful engines and algorithms that can do quite a number of things and which themselves are not yet allocated to a purpose," said Dragoș Tudorache, a Liberal Romanian lawmaker who, together with S&D Italian lawmaker Brando Benifei, is tasked with shepherding the AI Act through the European Parliament.
Already, the tech has prompted EU institutions to rewrite their draft plans. The EU Council, which represents national capitals, approved its version of the draft AI Act in December, which would entrust the Commission with establishing cybersecurity, transparency and risk-management requirements for general-purpose AIs.
The rise of ChatGPT is now forcing the European Parliament to follow suit. In February the lead lawmakers on the AI Act, Benifei and Tudorache, proposed that AI systems generating complex texts without human oversight should be part of the “high-risk” list — an effort to stop ChatGPT from churning out disinformation at scale.
The idea was met with skepticism by right-leaning political groups in the European Parliament, and even parts of Tudorache's own Liberal group. Axel Voss, a prominent center-right lawmaker who has a formal say over Parliament's position, said that the amendment “would make numerous activities high-risk, that are not risky at all.”
In contrast, activists and observers feel that the proposal was just scratching the surface of the general-purpose AI conundrum. “It's not great to just put text-making systems on the high-risk list: you have other general-purpose AI systems that present risks and also ought to be regulated,” said Mark Brakel, a director of policy at the Future of Life Institute, a nonprofit focused on AI policy.
The two lead Parliament lawmakers are also working to impose stricter requirements on both developers and users of ChatGPT and similar AI models, including managing the risk of the technology and being transparent about its workings. They are also trying to slap tougher restrictions on large service providers while keeping a lighter-touch regime for everyday users playing around with the technology.
Professionals in sectors like education, employment, banking and law enforcement have to be aware "of what it entails to use this kind of system for purposes that have a significant risk for the fundamental rights of individuals,” Benifei said.
If Parliament has trouble wrapping its head around ChatGPT regulation, Brussels is bracing itself for the negotiations that will come after.
The European Commission, EU Council and Parliament will hash out the details of a final AI Act in three-way negotiations, expected to start in April at the earliest. There, ChatGPT could well cause negotiators to hit a deadlock, as the three parties work out a common solution to the shiny new technology.
On the sidelines, Big Tech firms — especially those with skin in the game, like Microsoft and Google — are closely watching.
The EU's AI Act should “maintain its focus on high-risk use cases,” said Microsoft’s Chief Responsible AI Officer Natasha Crampton, suggesting that general-purpose AI systems such as ChatGPT are hardly being used for risky activities, and instead are used mostly for drafting documents and helping with writing code.
“We want to make sure that high-value, low-risk use cases continue to be available for Europeans,” Crampton said. (ChatGPT, created by U.S. research group OpenAI, has Microsoft as an investor and is now seen as a core element in its strategy to revive its search engine Bing. OpenAI did not respond to a request for comment.)
A recent investigation by transparency activist group Corporate Europe Observatory also said industry actors, including Microsoft and Google, had doggedly lobbied EU policymakers to exclude general-purpose AI like ChatGPT from the obligations imposed on high-risk AI systems.
Could the bot itself come to EU rulemakers' rescue, perhaps?
ChatGPT told POLITICO it thinks it might need regulating: “The EU should consider designating generative AI and large language models as ‘high risk’ technologies, given their potential to create harmful and misleading content,” the chatbot responded when questioned on whether it should fall under the AI Act’s scope.
“The EU should consider implementing a framework for responsible development, deployment, and use of these technologies, which includes appropriate safeguards, monitoring, and oversight mechanisms," it said.
The EU, however, has follow-up questions. | AI Policy and Regulations |
(Bloomberg) -- European Union negotiators have backed a plan to place additional constraints on the biggest artificial intelligence systems under the upcoming AI Act.
Representatives from the European Commission, European Parliament and EU countries are discussing an approach that would address concerns posed by powerful large language models — the technology underpinning AI chatbots — such as OpenAI’s GPT-4 while also ensuring new startups aren’t overly burdened by regulation, people familiar with the matter said. The agreement is preliminary, not yet laid out in a written draft and subject to change, the people said, asking not to be identified discussing the private conversation.
It’s a similar tiered approach to the EU’s recently rolled out Digital Services Act. While the DSA requires all platforms and websites to take actions to protect user data and monitor for illegal activities, the most stringent controls are reserved for the largest, including Alphabet Inc. and Meta Platforms Inc.
The EU could become the first Western government to place mandatory rules on artificial intelligence with its AI Act. Under the proposed law, AI companies would have to perform risk assessments and label deepfakes, among other requirements. Negotiators want to finalize the legislation by the end of the year and aim to hone in on a deal at the next meeting on Oct. 25.
There are still a number of key issues that remain, including how exactly to regulate generative AI and whether to completely ban live facial scanning in crowds. The European Parliament backed a complete ban of this biometric surveillance — something that many EU countries have said they’ll reject. | AI Policy and Regulations
Is it time to put the brakes on the development of artificial intelligence (AI)? If you’ve quietly asked yourself that question, you’re not alone.
In the past week, a host of AI luminaries signed an open letter calling for a six-month pause on the development of more powerful models than GPT-4; European researchers called for tighter AI regulations; and long-time AI researcher and critic Eliezer Yudkowsky demanded a complete shutdown of AI development in the pages of TIME magazine.
Meanwhile, the industry shows no sign of slowing down. In March, a senior AI executive at Microsoft reportedly spoke of “very, very high” pressure from chief executive Satya Nadella to get GPT-4 and other new models to the public “at a very high speed”.
I worked at Google until 2020, when I left to study responsible AI development, and now I research human-AI creative collaboration. I am excited about the potential of artificial intelligence, and I believe it is already ushering in a new era of creativity. However, I believe a temporary pause in the development of more powerful AI systems is a good idea. Let me explain why.
What is GPT-4 and what is the letter asking for?
The open letter published by the US non-profit Future of Life Institute makes a straightforward request of AI developers:
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
So what is GPT-4? Like its predecessor GPT-3.5 (which powers the popular ChatGPT chatbot), GPT-4 is a kind of generative AI software called a “large language model”, developed by OpenAI.
GPT-4 is much larger and has been trained on significantly more data. Like other large language models, GPT-4 works by guessing the next word in response to prompts – but it is nonetheless incredibly capable.
In tests, it passed legal and medical exams, and can write software better than professionals in many cases. And its full range of abilities is yet to be discovered.
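To give a feel for what "guessing the next word" means in practice, here is a deliberately tiny, illustrative Python sketch of autoregressive generation: a hand-written table of next-word probabilities is sampled one word at a time. Real large language models such as GPT-4 learn these probabilities from enormous datasets and condition on the entire prompt, but the basic generation loop is conceptually similar.

import random

# Toy "language model": hand-written next-word probabilities.
# Real LLMs learn such distributions from data and condition on the
# whole context; this table is invented purely for illustration.
NEXT_WORD = {
    "the":    {"model": 0.5, "exam": 0.5},
    "model":  {"passed": 0.6, "writes": 0.4},
    "exam":   {"was": 1.0},
    "passed": {"the": 1.0},
    "writes": {"code": 1.0},
}

def generate(prompt: str, max_words: int = 6, seed: int = 0) -> str:
    random.seed(seed)
    words = prompt.split()
    for _ in range(max_words):
        dist = NEXT_WORD.get(words[-1])
        if not dist:  # no known continuation: stop generating
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # prints a short toy continuation of "the"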
Good, bad, and plain disruptive
GPT-4 and models like it are likely to have huge effects across many layers of society.
On the upside, they could enhance human creativity and scientific discovery, lower barriers to learning, and be used in personalised educational tools. On the downside, they could facilitate personalised phishing attacks, produce disinformation at scale, and be used to hack through the network security around computer systems that control vital infrastructure.
OpenAI’s own research suggests models like GPT-4 are “general-purpose technologies” which will impact some 80% of the US workforce.
Layers of civilisation and the pace of change
The US writer Stewart Brand has argued that a “healthy civilisation” requires different systems or layers to move at different speeds:
The fast layers innovate; the slow layers stabilise. The whole combines learning with continuity.
According to the ‘pace layers’ model, different layers of a healthy civilisation move at different speeds, from the slow movement of nature to the rapid shifts of fashion. Image: Stewart Brand / Journal of Design and Science.
In Brand’s “pace layers” model, the bottom layers change more slowly than the top layers.
Technology is usually placed near the top, somewhere between fashion and commerce. Things like regulation, economic systems, security guardrails, ethical frameworks, and other aspects exist in the slower governance, infrastructure and culture layers.
Right now, technology is accelerating much faster than our capacity to understand and regulate it – and if we’re not careful it will also drive changes in those lower layers that are too fast for safety.
The US sociobiologist E.O. Wilson described the dangers of a mismatch in the different paces of change like so:
The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology.
Are there good reasons to maintain the current rapid pace?
Some argue that if top AI labs slow down, other unaligned players or countries like China will outpace them.
However, training complex AI systems is not easy. OpenAI is ahead of its US competitors (including Google and Meta), and developers in China and other countries also lag behind.
It’s unlikely that “rogue groups” or governments will surpass GPT-4’s capabilities in the foreseeable future. Most AI talent, knowledge, and computing infrastructure is concentrated in a handful of top labs.
Other critics of the Future of Life Institute letter say it relies on an overblown perception of current and future AI capabilities.
However, whether or not you believe AI will reach a state of general superintelligence, it is undeniable that this technology will impact many facets of human society. Taking the time to let our systems adjust to the pace of change seems wise.
Slowing down is wise
While there is plenty of room for disagreement over specific details, I believe the Future of Life Institute letter points in a wise direction: to take ownership of the pace of technological change.
Despite what we have seen of the disruption caused by social media, Silicon Valley still tends to follow Facebook’s infamous motto of “move fast and break things”.
I believe a wise course of action is to slow down and think about where we want to take these technologies, allowing our systems and ourselves to adjust and engage in diverse, thoughtful conversations. It is not about stopping, but rather moving at a sustainable pace of progress. We can choose to steer this technology, rather than assume it has a life of its own that we can’t control.
After some thought, I have added my name to the list of signatories of the open letter, which the Future of Life Institute says now includes some 50,000 people. Although a six-month moratorium won’t solve everything, it would be useful: it sets the right intention, to prioritise reflection on benefits and risks over uncritical, accelerated, profit-motivated progress. | AI Policy and Regulations |
OpenAI’s warning shot shows the fragile state of EU regulatory dominance
On May 24, OpenAI, the company that created ChatGPT, announced that it may discontinue business in the European Union (EU). This announcement followed the European Parliament’s recent vote to adopt its new AI Act.
In response to criticism from EU industry chief Thierry Breton, OpenAI CEO Sam Altman tweeted a day later that OpenAI has no plans to leave Europe. Yet the very threat of such a departure underscores the need for continued dialogue on AI regulations. As competition increases, regulatory collisions within this multi-trillion dollar industry could be disastrous. A U.S.-EU misalignment could generate huge inefficiencies, duplicated efforts and opportunity costs.
Most importantly, the mere possibility that OpenAI would depart could signal the demise or significant weakening of European regulatory primacy — something known as the “Brussels Effect” — given the company’s widespread influence and applications.
The Brussels Effect highlights how the EU’s market forces alone are enough to incentivize multinational companies to voluntarily abide by its regulations and encourage other countries to adopt similar laws on their own. In recent years, however, the EU has implemented various interventionist policies that critics argue hinder innovation within and outside the region. One of these policies is the General Data Protection Regulation. Immediately after its enactment in May 2016, it prompted many data privacy law copycats worldwide, including within many U.S. states. However, many have argued its vague provisions and lack of guidance for companies looking to comply have rendered it ineffective.
Another example is the EU’s Digital Markets Act (DMA), enacted in November 2022. It targets “gatekeepers” — core digital platforms — which the European Commission claims “prevent competition, leading to less innovation.” Critics have said the DMA worsens services for consumers and that its “big, bad tech” approach actually reduces innovation.
The EU’s new AI Act shares similar flaws with the GDPR and DMA. OpenAI’s CEO Sam Altman labeled the policy as “over-regulating” and stated that if compliance is unforeseeable, they will cease operations in the EU. Altman’s primary concern centers around the AI Act’s requirement for companies to disclose copyrighted materials used in the training and developing of generative AI tools like ChatGPT. Indeed, complying with that particular rule would be essentially impossible for AI companies to achieve.
The EU’s reaction to Altman’s statement will determine the extent of American firms’ direct influence on European regulatory actions. If the EU decides to amend its policies based on OpenAI’s suggestions, it may signal a further weakening of de facto EU standards. Conversely, if the EU rigidly enforces the rule unchanged, OpenAI’s potential withdrawal would send a message to other countries and companies around the world that doing business in the EU is perhaps unnecessary.
Either way, there is a risk of divergence between EU and non-EU standards, resulting in a fragmented AI regulatory landscape with varying levels of accountability and limitations.
The U.S. and EU already exhibit starkly different approaches to regulating AI and promoting ethical and responsible innovation. Although both have similar guidelines on non-discrimination, accuracy, robustness, security and data privacy, the EU’s approach is much more centralized and punitive. In contrast, the U.S. AI Bill of Rights is more geared toward delegating regulatory decisions to agencies, whose authority in enforcing these regulations is unclear. It focuses on a tailored, sector-specific approach.
Moreover, the U.S. federal government has been investing more heavily in AI innovation. In fact, on May 4, the White House announced $140 million in funding to launch seven new “National AI Research Institutes.”
Also, when it comes to AI, U.S. states have already broken from their pattern of following EU tech regulatory standards.
Ultimately, Washington will have to determine whether the benefits of regulatory differentiation from the EU outweigh the costs. It remains to be seen whether an American innovation-centered strategy will give rise to new methods of responsible AI. However, if U.S. firms and agencies persist in charting their own course on AI regulations, it could undermine the Brussels Effect, potentially eroding Europe’s sway over global tech norms.
April Liu is a research associate at the Libertas Institute, specializing in data privacy, tech and AI regulation. | AI Policy and Regulations
Japanese politicians are starting to take generative AI seriously. In January, Ken Akamatsu, a member of the Japanese legislature, uploaded a nearly 40-minute video to his official YouTube channel, calling for new national guidelines on the use of generative AI.
Since becoming the first professional manga artist elected to Japan’s national legislature in 2022, Akamatsu has built a political career defending the interests of the country’s manga and anime artists. Months after a series of AI art controversies rocked those industries, he has turned his sights on text-to-image generators, like Stable Diffusion and Midjourney.
“What are you going to do with this culture and data that we’ve created?” said Akamatsu in the video, citing comments he’d received from illustrators who worry their portfolios may be used to train image generators. “So far, there are no [policy] proposals that face the feelings of creators – their anger, resentment, and anxiety needs to be taken into account.”
Akamatsu lays out a series of ideas in the video, including an opt-out system for artists and a licensing system to compensate those who participate. But he stops short of fully endorsing either plan, and significant questions remain about both proposals. A spokesperson for Akamatsu declined an interview for this story, citing his responsibilities during the ongoing session of the Diet, Japan’s national legislature.
Labor groups globally have raised warnings about generative AI tools, but Japanese politicians are in a unique position to draw on the commercial and cultural importance of the anime and manga industries in the country. AI image generators have provoked both excitement and anxiety among the professional artists who staff those industries, creating an opening for politicians to propose regulation. But while Akamatsu and other politicians are already moving to seize the moment, their efforts have mostly demonstrated how politically thorny the issue is.
In the weeks since Akamatsu first spoke out publicly, a few Diet legislators have arranged listening tours to hear more about artists’ concerns. Earlier this month, Taisuke Ono, a member of the right-wing Japan Restoration Party, announced that he had taken meetings with illustration students to hear policy proposals on AI image generators.
“They are right to be concerned,” Ono told Rest of World. “If the works created by illustrators and other artists are learned by AI on the internet without permission, and if machines continue to create valuable works based on them almost without limit, it will be impossible for human beings to make a living from their creative activities.” He added that he would like to see mandates that require companies to disclose which works their AI has been trained on.
Animators, manga illustrators, and industry labor organizations told Rest of World they are grateful for these efforts, but not sure how much to expect from them. It remains unclear how the rhetoric will translate into tangible changes, and whether the Japanese government can address the underlying issues troubling anime and manga workers.
"The law is slow to change, especially in Japan," Jun Sugawara, founder of Animator Supporters, a nonprofit that advocates for higher wages in the industry and provides housing support for animators, told Rest of World. "The truth is, we need change now, and I think it is up to the community and those with power in the industry to start making it."
Akamatsu has floated several specific policy recommendations, although he has stopped short of endorsing any particular measure. One idea is requiring AI developers to obtain artists' consent before using their artwork in training datasets. That would give artists some control over whether their copyright-protected images feed models that may ultimately produce similar work.
Akamatsu has a personal stake in protections for illustrators: The legislator has also been a working manga author for 30 years, releasing a string of hits including the serialized comic Love Hina. (Another early success was the series AI Won't Stop!, about a high-schooler whose AI program comes to life and becomes his girlfriend.) His pivot to politics came about largely through lobbying the Diet against censorship of the manga industry, followed by working with the Japan Cartoonists Association (JCA). He has been a managing director of the JCA, a trade lobbying group, since 2018.
Akamatsu is a fierce advocate for expanding digital copyright laws to protect illustrators from theft, but his political ascent has not been without controversy. One of his flagship issues is "freedom of speech," including arguing for the rights of Japanese artists to produce lolicon-style manga — a genre that features sexualized imagery of minors, mostly teenage girls, and has been criticized as a form of child abuse content. (The term is a portmanteau of "Lolita complex.") It's a disturbing topic for many, but one that has often mobilized Akamatsu's base, despite his membership in the conservative Liberal Democratic Party (LDP).
Akamatsu has specifically called out services like NovelAI and Waifu Diffusion that are primed to churn out anime and manga-style illustrations. For illustrators whose work has been used in training such AI, the current Japanese copyright law offers few legal avenues to sue developers for copyright infringement.
“I think it would be good if illustrators could be paid or opted out,” Akamatsu had said in his January video, envisioning a future where “images created by AI generators will be used commercially, and appropriate profits will be returned to the rights holders of copyrighted works used in the machine-learning process.”
He has also suggested developing licenses that AI companies could use to pay illustrators for their work. In the short term, licenses would insulate developers from legal challenges and also compensate artists. However, it’s unclear how many revenue-sharing contracts developers could support. Sophisticated AI models like Stable Diffusion require billions of images in their training set, and it’s unlikely that licensing contracts would cover more than a small portion of the total set.
Ono agreed there may be issues with setting licensing fees, but said the intention is sound. “A system should be created whereby authors are duly compensated for works that are evaluated as excellent and in need by AI developers, and are frequently learned by models,” said Ono, though he did not offer any specific suggestions.
Since many AI tools are developed overseas, it’s not clear how effective Japanese legislation would be in protecting artists’ interests. The Agency for Cultural Affairs is currently drafting a proposed revision of the country’s national copyright law to address digital copyright issues, such as film and TV piracy. To date, however, the proposals have made no mention of AI or the copyright debates playing out around image-generator training data.
Still, Akamatsu and Ono both demonstrate politicians’ growing desire to place limits on AI image generators.
Some detractors believe Akamatsu’s approach to AI doesn’t go far enough to address underlying labor issues. Bryan Hikari Hartzheim, an associate professor of new media studies at Waseda University, told Rest of World that Akamatsu’s proposals often leave out entry-level and more technical animators and illustrators.
“His solutions are through the free market, pushing for ways to redistribute profits back to creators,” Hartzheim said. “This sort of policy tends to favor successful creators and does little to help those barely scraping by or workers ‘below the line’ with no recourse to licensing royalties.”
Hartzheim is quick to point out that the labor issues facing the manga and anime industries are distinct from each other, but that both have earned notoriety for exploiting young creators just starting out in their careers. In some anime studios, for example, "in-between animators" — responsible for filling in the gaps left between key frames — may be paid as little as 200 to 400 yen ($1.50 to $3) per drawing, he said.
“Akamatsu’s ideas, and his most recent embrace of AI tools to help manga workflow, don’t help those workers at all,” said Hartzheim.
Sugawara, the founder of Animator Supporters, points out that animators have been “one of the cornerstones of ‘Cool Japan,’” a long-term national branding and soft power campaign focused on Japanese cultural exports, like anime. Despite their cultural contribution, many animators are still struggling for basic labor protections.
“I would like to see policies that guarantee basic workers’ rights,” he said, noting that opt-out and licensing proposals target independent illustrators, and will do little to help contract animators in the studio system. “Making sure that animators are hired full-time as employees and taught the latest tools and techniques would be a step in the right direction, rather than treating them as disposable.”
It’s likely Japan will continue to be a laboratory for these policy debates. In January, London-based Stability AI — the creator of Stable Diffusion — which earned a $1 billion valuation last October, hired its first country manager in Japan. The company is currently hiring for five other roles in its Tokyo office, including a machine learning engineer who will work on building customized models for Japanese clients, and a community lead to help grow support for generative AI among creators.
“Stability AI recognizes Japan as a priority market because of the country’s vast pool of creativity,” said Jerry Chi, Stability AI’s head of Japan, in a statement to Rest of World, adding that they are looking to the gaming, advertising, and art sectors in the country. “This includes manga and anime artists, whose work is growing and developing in exciting ways with the help of generative AI.”
In the meantime, professional illustrators are already integrating AI into their work. “Rather than a threat, I feel that most people are thinking about how they can use AI for their own creations,” Ryuichi Kimura, an anime director, told Rest of World. Kimura is also a board member at the Japan Animation Creators Association (JAniCA), an industry labor organization. Still, Kimura said he would be in support of opt-out or licensing proposals.
In January, Netflix Japan released a new anime short film, in which all background art was created using an AI image generator. The project came on the heels of similar developments in manga publishing. In August 2022, manga author Rootport began tweeting excerpts from a new series he had created using the AI image generator Midjourney. Cyberpunk Momotaro, the full compiled work, is set to be published in print this coming March.
“For the majority of artists, AI will be a partner in expanding human possibilities,” Rootport told Rest of World, explaining that he sees a future of collaboration between humans and AI tools that will allow creatives to “express what they really wanted to express.”
Fearing obstacles to research and development, Rootport openly opposes AI regulation. He also questions whether regulation is even feasible as an international race in AI innovation ramps up. "Every country wants to lead in AI research," Rootport said. "If Japan were to regulate AI alone, we would fall behind significantly." | AI Policy and Regulations |
Dan Nechita has spent the past year shuttling back and forth between Brussels and Strasbourg. As the head of cabinet (essentially chief of staff) for one of the two rapporteurs leading negotiations over the EU's proposed new AI law, he's helped hammer out compromises between those who want the technology to be tightly regulated and those who believe innovation needs more space to evolve.
The discussions have, Nechita says, been "long and tedious." First there were debates about how to define AI—what it was that Europe was even regulating. "That was a very, very, very long discussion," Nechita says. Then there was a split over what uses of AI were so dangerous they should be banned or categorized as high-risk. "We had an ideological divide between those who would want almost everything to be considered high-risk and those who would prefer to keep the list as small and precise as possible."
But those often tense negotiations mean that the European Parliament is getting closer to a sweeping political agreement that would outline the body's vision for regulating AI. That agreement is likely to include an outright ban on some uses of AI, such as predictive policing, and extra transparency requirements for AI judged to be high-risk, such as systems used in border control.
This is only the start of a long process. Once the members of the European Parliament (MEPs) vote on the agreement later this month, it will need to be negotiated all over again with EU member states. But Europe's politicians are some of the first in the world to go through the grueling process of writing the rules of the road for AI. Their negotiations offer a glimpse of how politicians everywhere will have to find a balance between protecting their societies from AI's risks while also trying to reap its rewards. What's happening in Europe is being closely watched in other countries, as they wrestle with how to shape their own responses to increasingly sophisticated and prevalent AI.
"It's going to have a spillover effect globally, just as we witnessed with the EU General Data Protection Regulation," says Brandie Nonnecke, director of the CITRIS Policy Lab at the University of California, Berkeley.
At the core of the debate about regulating AI is the question of whether it's possible to limit the risks it presents to societies without stifling the growth of a technology that many politicians expect to be the engine of the future economy.
The discussions about risks should not focus on existential threats to the future of humanity, because there are major issues with the way AI is being used right now, says Mathias Spielkamp, cofounder of AlgorithmWatch, a nonprofit that researches the use of algorithms in government welfare systems, credit scores, and the workplace, among other applications. He believes it is the role of politicians to put limits on how the technology can be used. "Take nuclear power: You can make energy out of it or you can build bombs with it," he says. "The question of what you do with AI is a political question. And it is not a question that should ever be decided by technologists."
By the end of April, the European Parliament had zeroed in on a list of practices to be prohibited: social scoring, predictive policing, algorithms that indiscriminately scrape the internet for photographs, and real-time biometric recognition in public spaces. However, on Thursday, parliament members from the conservative European People's Party were still questioning whether the biometric ban should be taken out.
"It's a strongly divisive political issue, because some political forces and groups see it as a crime-fighting force and others, like the progressives, we see that as a system of social control," says Brando Benifei, co-rapporteur and an Italian MEP from the Socialists and Democrats political group.
Next came talks about the types of AI that should be flagged as high-risk, such as algorithms used to manage a company's workforce or by a government to manage migration. These are not banned. "But because of their potential implications—and I underline the word potential—on our rights and interests, they are to go through some compliance requirements, to make sure those risks are properly mitigated," says Nechita's boss, the Romanian MEP and co-rapporteur Dragoș Tudorache, adding that most of these requirements are principally to do with transparency. Developers have to show what data they've used to train their AI, and they must demonstrate how they have proactively tried to eliminate bias. There would also be a new AI body set up to create a central hub for enforcement.
Companies deploying generative AI tools such as ChatGPT would have to disclose if their models have been trained on copyrighted material—making lawsuits more likely. And text or image generators, such as Midjourney, would also be required to identify themselves as machines and mark their content in a way that shows it's artificially generated. They should also ensure that their tools do not produce child abuse, terrorism, or hate speech, or any other type of content that violates EU law.
One person, who asked to remain anonymous because they did not want to attract negative attention from lobbying groups, said some of the rules for general-purpose AI systems were watered down at the start of May following lobbying by tech giants. Requirements for foundation models—which form the basis of tools like ChatGPT—to be audited by independent experts were taken out.
However, the parliament did agree that foundation models should be registered in a database before being released to the market, so companies would have to inform the EU of what they have started selling. "That's a good start," says Nicolas Moës, director of European AI governance at the Future Society, a think tank.
The lobbying by Big Tech companies, including Alphabet and Microsoft, is something that lawmakers worldwide will need to be wary of, says Sarah Myers West, managing director of the AI Now Institute, another think tank. "I think we're seeing an emerging playbook for how they're trying to tilt the policy environment in their favor," she says.
What the European Parliament has ended up with is an agreement that tries to please everyone. "It's a true compromise," says a parliament official, who asked not to be named because they are not authorized to speak publicly. "Everybody's equally unhappy."
The agreement could still be altered before the vote—currently scheduled for May 11—that allows the AI Act to move to the next stage. With uncertainty over last-minute changes, tensions lingered through the final weeks of negotiations. There were disagreements until the end about whether AI companies should have to follow strict environmental requirements. "I would still say the proposal is already very overburdened for me," says Axel Voss, a German MEP from the conservative European People's Party, speaking to WIRED in mid-April.
"Of course, there are people who think the less regulation the better for innovation in the industry. I beg to differ," says another German MEP, Sergey Lagodinsky, from the left-wing Greens group. "We want it to be a good, productive regulation, which would be innovation-friendly but would also address the issues our societies are worried about."
The EU is increasingly an early mover on efforts to regulate the internet. Its privacy law, the General Data Protection Regulation, came into force in 2018, putting limits on how companies could collect and handle people's data. Last year, MEPs agreed on new rules designed to make the internet safer as well as more competitive. These laws often set a global standard—the so-called "Brussels effect."
As the first piece of omnibus AI legislation expected to pass into law, the AI Act will likely set the tone for global policymaking efforts surrounding artificial intelligence, says Myers West.
China released its draft AI regulations in April, and Canada's Parliament is considering its own hotly contested Artificial Intelligence and Data Act. In the US, several states are working on their own approaches to regulating AI, while discussions at the national level are gaining momentum. White House officials, including vice president Kamala Harris, met with Big Tech CEOs in early May to discuss the potential dangers of the technology. In the coming weeks, US senator Ron Wyden of Oregon will begin a third attempt to pass a bill called the Algorithmic Accountability Act, a law that would require testing of high-risk AI before deployment.
There have also been calls to think beyond individual legislatures to try to formulate global approaches to regulating AI. Last month, 12 MEPs signed a letter asking European Commission president Ursula von der Leyen and US president Joe Biden to convene a global Summit on Artificial Intelligence. That call has, so far, remained unanswered. Benifei says he will insist on the summit and more international attention. "We think that our regulation will produce the Brussels effect towards the rest of the world," he adds. "Maybe they won't copy our legislation. But at least it will oblige everyone to confront the risks of AI." | AI Policy and Regulations |
Schumer’s AI regulations would stifle innovation and dampen free expression
Senate Majority Leader Chuck Schumer (D-N.Y.) has proposed artificial intelligence (AI) regulations that would control how AI is created and the decisions it makes. The proposed “guardrails” will provide answers to key questions such as who, where and how, while aiming to “protect” Americans. It also requires companies to allow a review of their AI software before its release.
All of this has the explicit goal of aligning “systems with American values.” Unfortunately, his proposal undermines some of the values he claims to protect.
Schumer's framework threatens to limit programmers' freedom of expression during and after the development process, reaching beyond the commercial decisions and conduct that federal authorities typically regulate. Beyond these challenges, excessive regulation of AI is bad for American competitiveness and inventors.
Regulating how technology is developed — as opposed to how it is used — risks slowing development and stifling creativity. A complex regulatory regime may also make it all but impossible for new firms to enter the market, thus providing significant benefits to established firms that are better equipped to deal with regulation systems.
Schumer rightly expresses concern about America’s foreign competitors getting a leg up in AI technology and suggests that this is a reason for his proposed regulations. This echoes comments by Russian President Vladimir Putin that “Whoever becomes the leader in this sphere will become the ruler of the world.” AI advancement may well become the next cold war and America’s foreign competitors would like nothing better than for the U.S. to damage its ability to innovate.
Artificial intelligence systems are pieces of software. They are created by programmers and, in some cases, they are given the capability to adapt over time. We call this process of adaptation learning; however, it is not at all like humans' ability to learn across all areas. In most cases, the software can only learn how to advance in its area of focus. For example, a credit decision-making AI may get better at predicting risk over time by evaluating the outcomes of its past decisions, but it's not going to somehow learn how to play chess.
Even the seemingly more general-purpose AIs, like ChatGPT, are still quite limited. ChatGPT learns patterns of speech; however, it isn’t suitable for something like controlling robots. AI learning is a highly controlled process that allows a system to incorporate new knowledge from its operations, the world around it and selected training data.
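As a purely hypothetical illustration of how narrow and controlled that kind of learning is, the sketch below (Python with scikit-learn) mimics a credit-risk model that is periodically updated on the outcomes of its own past decisions. The feature schema, the data, and the monthly update cadence are all invented for the example; nothing here is drawn from any real lender or from Schumer's framework.

# Hypothetical sketch of narrow, controlled "learning": a credit-risk model
# is updated only on the observed outcomes of its own past decisions.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Fixed, task-specific feature schema (invented): [income, debt_ratio, years_of_history]
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial training on historical loan outcomes (1 = default, 0 = repaid).
X_hist = rng.normal(size=(500, 3))
y_hist = (X_hist[:, 1] > 0.5).astype(int)   # toy labeling rule, not real data
model.partial_fit(X_hist, y_hist, classes=[0, 1])

# "Learning over time": each month, fold in the outcomes of loans the model scored.
for month in range(12):
    X_new = rng.normal(size=(50, 3))        # newly observed applicants
    y_new = (X_new[:, 1] > 0.5).astype(int) # their eventual repayment outcomes
    model.partial_fit(X_new, y_new)         # controlled, incremental update

# The model gets marginally better at this one task and can do nothing else.
print(model.predict_proba(rng.normal(size=(1, 3))))

The point of the sketch is only that the update step touches the risk model's own weights and nothing else, which is the sense in which such a system "learns" within its area of focus.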
The notion that entities chosen by the federal government would tell the developers of an AI system what it can learn from and how it must learn, or would require a review before the system can begin learning, poses questions about constitutional rights. We are talking about going beyond the regulation of speech — which by itself is constitutionally problematic — to regulating the way that programmers approach the intellectual challenge of developing systems.
Software code is written by humans and can be read by both humans and computers. Therefore, one can argue that many types of code are protected as a form of expression under the First Amendment. However, its use or application is, inarguably, open to regulation. Just like existing software used by banks, landlords and other organizations are regulated to ensure fairness, the actions taken by an organization’s AI systems can also be monitored to make sure the public’s interests are being respected.
To create an effective AI regulatory framework, it’s best to draw upon existing laws and regulations. For example, laws that forbid discrimination against protected classes can be used to address AI bias. Relevant laws, similarly, exist for adjudicating cases involving damages caused by AI. By drawing on existing precedents, we can ensure that AI activities aren’t treated differently than similar activities carried out by humans or non-AI software.
Some laws may need to be adapted. We may need to consider how to best assign liability for AI-caused losses or damage between software developers, users, resellers and others. However, this does not require us to create a special regulatory framework for AI in all areas. Instead, we can amend existing regulations to address any new AI-specific considerations.
Schumer is correct that laws have often failed to keep up with technological advancements, leaving courts perplexed when trying to apply outdated laws to new technologies. Going forward, lawmakers at all levels should create regulations that respect fundamental rights and don’t potentially regulate thought or expression. Rather than focusing on specific technologies, they should instead consider the best applications of technology within society as a whole.
Regulation of AI should center around its usage, as opposed to how it functions, in order to avoid the regulation of intellectual processes and expression. This approach avoids interfering with innovation and avoids potential conflicts with constitutional free speech rights.
Jeremy Straub is the director of the North Dakota State University’s Institute for Cyber Security Education and Research, an NDSU Challey Institute Faculty Fellow and an assistant professor in the NDSU Computer Science Department. The author’s opinions are his own.
| AI Policy and Regulations |
A patchwork of rules and regulations won't cut it for AI
This year has marked a turning point for artificial intelligence. Advanced AI tools are writing poetry, diagnosing diseases, and maybe even getting us closer to a clean energy future. At the same time, we face new questions about how to develop and deploy these tools responsibly.
The past two weeks have been a milestone in the young history of AI governance — a moment of constitutional creation. The G7 just released an international code of conduct for responsible AI; the United Nations announced its AI advisory group; the U.S. Senate continued its "AI Insight Forums"; the Biden administration's executive order directed federal agencies to use AI systems and develop AI benchmarks; and the UK is currently holding an international summit on AI safety.
As a result, we’re beginning to see the emerging outlines of an international framework for responsible AI innovation.
It’s a good thing too, because while the latest advances in AI are a triumph of scientific innovation, there is no doubt we need smart, well-crafted international regulation and industry standards to ensure AI benefits everyone.
If we don’t put such a framework into place, there is a very real risk of a fractured regulatory environment that delays access to important products, makes life harder for start-ups, slows the global development of powerful new technologies, and undermines responsible development efforts. We’ve seen that happen before with privacy, where a patchwork of rules and regulations has left people with uneven protections based on where they live, and made it harder for small businesses to navigate conflicting laws.
To avoid these missteps, we first need regulatory frameworks that can promote policy alignment for a worldwide technology. We’ll need to keep advocating for democratic values and openness in the governance of these tools. And at the national level, sectoral regulators from banking to healthcare will need to develop their own expertise and assess whether and where there may be gaps in existing laws. There’s no one-size-fits-all regulation for a general-purpose technology, any more than we have a Department of Engines, or a single law that governs all uses of electricity. Every agency will need to be an AI agency, and in the U.S. the National Institute of Standards and Technology can be at the center of a hub-and-spoke model providing coherent, practical approaches to AI governance.
Second, public-private partnerships, regulators, and industry bodies will need to be both technically informed and nimble, promoting research on where AI is going, and filling gaps where regulation is still evolving. Promoting alignment on industry best practices will also be imperative, even as many companies have already made commitments to pursue AI responsibly.
For example, Google was one of the first to issue a detailed set of principles in 2018, with an internal governance framework and annual progress reports. The development of cross-industry bodies — such as the Frontier Model Forum and its new $10 million AI Safety Fund — will also go a long way toward investing in the long-term safety of emerging technologies.
Additionally, broader coalitions of AI developers, academics, and civil society will be vital to developing best practices and international performance benchmarks for AI development and deployment. The good news here is that the Partnership on AI, MLCommons, and the International Organization for Standardization are building common technical standards that can align practices globally. These industry-wide codes and standards will be a cornerstone of responsible AI development, the equivalents of the Underwriters Laboratories mark or the Good Housekeeping Seal of Approval.
AI can bring us science at digital speed — but that won’t happen by accident.
As AI innovation advances, we need public and private stakeholders to keep coming together to map out an opportunity agenda to harness AI’s potential in preventive medicine, precision agriculture, economic productivity, and much more through a global, flexible, multi-dimensional AI policy framework.
At a challenging time for international institutions, work on AI policy is off to a promising start. For proof of that, we need look no further than the new G7 Code of Conduct, which will provide a strong and consistent framework as we move forward. But continued progress is essential and may well prove that governments can still work constructively on important transnational issues.
None of the developments of the past week will be a panacea. But they are a sign that the global AI ecosystem gets what’s at stake and stakeholders are ready to do the work needed to unlock the benefits of artificial intelligence — not in a vacuum, but collaboratively, together.
Kent Walker is president of global affairs at Google and Alphabet.
| AI Policy and Regulations |
President Joe Biden signed a wide-ranging executive order on artificial intelligence Monday, setting the stage for some industry regulations and funding for the U.S. government to further invest in the technology. From a report: The order is broad, and its focuses range from civil rights and industry regulations to a government hiring spree. In a media call previewing the order Sunday, a senior White House official, who asked to not be named as part of the terms of the call, said AI has so many facets that effective regulations have to cast a wide net. "AI policy is like running into a decathlon, and there's 10 different events here," the official said. "And we don't have the luxury of just picking 'we're just going to do safety' or "we're just going to do equity' or 'we're just going to do privacy.' You have to do all of these things."
The official also called for "significant bipartisan legislation" to further advance the country's interests with AI. Senate Majority Leader Chuck Schumer, D-N.Y., held a private forum in September with industry leaders but has yet to introduce significant AI legislation. Some of the order builds on a previous nonbinding agreement that seven of the top U.S. tech companies developing AI agreed to in July, like hiring outside experts to probe their systems for weaknesses and sharing their critical findings. The order leverages the Defense Production Act to legally require those companies to share safety test results with the federal government.
A Frontier AI taskforce established by the U.K. back in June to prepare for the AI Safety Summit held this week is on course to be a permanent fixture, as the U.K. bids to take a leadership role on AI policy in the future. The U.K. Prime Minister Rishi Sunak today formally announced the launch of the AI Safety Institute, a “global hub based in the U.K. and tasked with testing the safety of emerging types of AI.”
The institute was informally announced last week in the lead-up to this week's summit. Now the government has confirmed that it will be led by Ian Hogarth — an investor, founder and engineer who also chaired the taskforce — and that Yoshua Bengio, one of the most prominent people in the field of AI, will be taking the lead on the production of its first report.
It’s not clear how much funding the government will inject into the AI Safety Institute, or whether industry players will be expected to foot some of the bill. The institute, which will sit underneath the Department of Science, Innovation and Technology, is described as “backed by leading AI companies” although that might be more in reference to endorsement rather than financial backing. We have reached out to the DSIT to ask and will update as we learn more.
The news comes alongside yesterday's announcement of a new agreement, the Bletchley Declaration, which has been signed by all of the countries attending the summit and commits them to joined-up testing and other measures for assessing the risks of "frontier AI" technologies such as large language models.
“Until now, the only people testing the safety of new AI models have been the very companies developing them,” Sunak said in a meeting with journalists this evening. Citing work being done also by other countries, the UN and the G7 to address AI, now the plan will be to “work together on testing the safety of new AI models before they are released.”
All of this, to be sure, is still very much in its early stages. The U.K. has up to now resisted making moves to consider how to regulate AI technologies, both at the platform level and at more specific application levels, and some believe that without any teeth, the ideas of safety and quantifying risk are meaningless.
Sunak argued that it’s too early to regulate.
“The technology is developing at such a pace that governments have to make sure that we can keep up,” Sunak said in response to an accusation that he was being too light on legislation while going heavy on big ideas. “Before you start mandating things and legislating for things… you need to know exactly what you’re legislating for.”
While transparency seems to be a very clear aim of a lot of the long-term efforts around this brave new world of technology, today's series of meetings at Bletchley, day two of the summit, was very far from that ethos.
In addition to bilateral sessions with European Commission President Ursula von der Leyen and Secretary-General of the United Nations António Guterres, the summit today focused on two plenary sessions. Closed off to journalists beyond small pools watching as people assembled in rooms, these sessions drew the CEOs of DeepMind, OpenAI, Anthropic, Inflection AI, Salesforce and Mistral, as well as the president of Microsoft and the head of AWS. Among those representing governments, the lineup included Sunak and U.S. Vice President Kamala Harris, as well as Giorgia Meloni of Italy and French minister of finance Bruno Le Maire.
Notably, although China was a much-touted guest during the first day, it did not make an appearance at the closed plenaries on day two.
Also absent at today's sessions, it seems, was Elon Musk, the owner of X (formerly known as Twitter). Sunak is due to have a fireside chat with him this evening on Musk's social platform. Interestingly, that is not expected to be a live broadcast.
The White House launched a two-year competition this week that will award millions of dollars in prize money to teams that develop artificial intelligence tools that can be used to protect critical U.S. computer code.
"This competition, which will feature almost $20 million in prizes, will drive the creation of new technologies to rapidly improve the security of computer code, one of cybersecurity’s most pressing challenges," the White House said Wednesday. "It marks the latest step by the Biden-Harris Administration to ensure the responsible advancement of emerging technologies and protect Americans."
The AI Cyber Challenge will be hosted by the Defense Advanced Research Projects Agency and will let AI development teams show the agency early next year how their AI-powered tools can protect U.S. code that "helps run the internet and other critical infrastructure." The top 20 teams will compete at the DEF CON 2024 cybersecurity conference, and the top five teams will win money and advance to the final round at DEF CON 2025.
"The top three scoring competitors in the final competition will receive additional monetary prizes," the White House said.
Competitors will be helped along by four companies that have worked with the White House in recent weeks on AI policy. Anthropic, Google, Microsoft and OpenAI, which agreed with other companies last month on a set of voluntary AI principles promoted by the White House, will give competitors access to their technology to meet the demands of the competition.
"The top competitors will make a meaningful difference in cybersecurity for America and the world," the White House said. "The Open Source Security Foundation (OpenSSF), a project of the Linux Foundation, will serve as a challenge advisor. It will also help ensure that the winning software code is put to use right away protecting America’s most vital software and keeping the American people safe."
The competition is one of several steps the Biden administration has taken to influence the development of AI technology. The commitment it secured in July with seven AI developers is aimed at ensuring "safer, more secure and more transparent" AI guidelines.
It said Wednesday that the independent evaluation of AI-driven large language models developed by the companies would start this week, and added that administration officials are developing an executive order on AI and will keep pushing for legislation in Congress to regulate AI development.
Congress has fallen short of passing a broad, comprehensive AI regulatory framework, despite months of effort from Senate Majority Leader Chuck Schumer, D-N.Y. Schumer said this year that he still plans on holding listening sessions in the fall to help shape an AI bill. | AI Policy and Regulations |
This year, we've seen the introduction of powerful generative AI systems that have the ability to create images and text on demand. At the same time, regulators are on the move. Europe is in the middle of finalizing its AI regulation (the AI Act), which aims to put strict rules on high-risk AI systems. Canada, the UK, the US, and China have all introduced their own approaches to regulating high-impact AI. But general-purpose AI seems to be an afterthought rather than the core focus. When Europe's new regulatory rules were proposed in April 2021, there was not a single mention of general-purpose, foundational models, including generative AI. Barely a year and a half later, our understanding of the future of AI has radically changed. An unjustified exemption of today's foundational models from these proposals would turn AI regulations into paper tigers that appear powerful but cannot protect fundamental rights.
ChatGPT made the AI paradigm shift tangible. Now, a few models—such as GPT-3, DALL-E, Stable Diffusion, and AlphaCode—are becoming the foundation for almost all AI-based systems. AI startups can adjust the parameters of these foundational models to better suit their specific tasks. In this way, the foundational models can feed a high number of downstream applications in various fields, including marketing, sales, customer service, software development, design, gaming, education, and law. While foundational models can be used to create novel applications and business models, they can also become a powerful way to spread misinformation, automate high-quality spam, write malware, and plagiarize copyrighted content and inventions. Foundational models have been proven to contain biases and generate stereotyped or prejudiced content. These models can accurately emulate extremist content and could be used to radicalize individuals into extremist ideologies. They have the capability to deceive and present false information convincingly. Worryingly, the potential flaws in these models will be passed on to all subsequent models, potentially leading to widespread problems if not deliberately governed.
The problem of "many hands" refers to the challenge of attributing moral responsibility for outcomes caused by multiple actors, and it is one of the key drivers of eroding accountability when it comes to algorithmic societies. Accountability for the new AI supply chains, where foundational models feed hundreds of downstream applications, must be built on end-to-end transparency. Specifically, we need to strengthen the transparency of the supply chain on three levels and establish a feedback loop between them.
Transparency in the foundational models is critical to enabling researchers and the entire downstream supply chain of users to investigate and understand the models' vulnerabilities and biases. Developers of the models have themselves acknowledged this need. For example, DeepMind's researchers suggest that the harms of large language models must be addressed by collaborating with a wide range of stakeholders, building on a sufficient level of explainability and interpretability to allow efficient detection, assessment, and mitigation of harms. Methodologies for standardized measurement and benchmarking, such as Stanford University's HELM, are needed. These models are becoming too powerful to operate without assessment by researchers and independent auditors. Regulators should ask: Do we understand enough to be able to assess where the models should be applied and where they must be prohibited? Can the high-risk downstream applications be properly evaluated for safety and robustness with the information at hand?
Transparency in the use of foundational models. The organizations deploying these models for a specific use case will ultimately determine whether they are suitable and meet the necessary performance and robustness requirements. However, transparency around the use of these foundational models is essential to making potential harms visible. Deployers must credit the foundational models involved, enabling users, auditors, and the broader community to evaluate the risks of these downstream applications.
Transparency in the outcomes created by AI. One of the biggest transparency challenges is the last-mile issue: distinguishing AI-generated content from that created by humans. In the past week, many of us have been fooled by LinkedIn posts written by ChatGPT. Various industry actors have recognized the problem, and everyone seems to agree on the importance of solving it. But the technical solutions are still being developed. People have proposed labeling AI-generated content with watermarks as a way to address copyright issues and detect potentially prohibited and malicious uses. Some experts say an ideal solution would be one that a human reader could not discern but that would still enable highly confident detection. This way, the labeling wouldn't significantly interfere with user experiences of all AI-created content but would still enable better filtering for content misuse.
Feedback loops. Large generative models are known to be highly unpredictable, which also makes it difficult to anticipate the consequences of their development and deployment. This unpredictable nature, together with the many-hands problem, makes it important to have an additional level of transparency—feedback loops—that can help both the industry and regulators ensure better-aligned and safer solutions.
Alignment techniques, which employ human-given feedback to instruct AI, were successfully used to train GPT-3 to produce less offensive language and less misinformation and make fewer mistakes. The approach used reinforcement learning to teach the model, drawing on feedback from 40 human trainers hired to phrase and rate GPT-3's responses. Outcomes of the aligned model received a positive response, and this encouraging example underlines the importance of human involvement in the evaluation of AI outcomes.
More focus from both industry players and regulators is needed to scale and standardize notification mechanisms that would enable users to report false, biased, or harmful outputs created by foundational models. This can serve two important goals: helping further train foundational models with human feedback from downstream applications and providing researchers and regulators with real-world data to inform the development of risk mitigations and policies.
Recent weeks have offered a glimpse into the current state of AI capabilities, which are both fascinating and worrying. Regulators will need to adjust their thinking to address coming developments, and industry will need to collaborate with policymakers as we navigate the next AI paradigm shift. | AI Policy and Regulations |
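The "ideal" watermark described in the piece above, one that a human reader cannot see but that software can flag with high confidence, is usually imagined as a statistical signal rather than a visible label. The toy Python sketch below illustrates the general idea with a simplified "green list" detector; the hashing scheme, the green-list fraction, and the scoring are assumptions chosen purely for illustration and do not describe any deployed watermarking system.

# Toy "green list" watermark detector; every detail here is a simplifying assumption.
import hashlib
import math

GAMMA = 0.5  # fraction of the vocabulary treated as "green" at each step

def is_green(prev_token, token):
    # Deterministically mark roughly GAMMA of tokens as "green", seeded by the previous token.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return (digest[0] / 255.0) < GAMMA

def watermark_z_score(tokens):
    # How far the observed count of "green" tokens deviates from what chance predicts.
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = GAMMA * n
    std = math.sqrt(n * GAMMA * (1 - GAMMA))
    return (hits - expected) / std

# Human-written text should score near zero; text from a generator that biases its
# sampling toward "green" tokens would score far higher.
sample = "the committee agreed to publish the draft rules next month".split()
print(round(watermark_z_score(sample), 2))

A generator participating in such a scheme would nudge its sampling toward "green" tokens, so its output accumulates far more green hits than chance predicts, while ordinary human-written text stays near a z-score of zero. That asymmetry is what would make confident detection possible without changing how the text reads.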
Google's Former CEO Is Leveraging His $27 Billion Fortune To Shape AI Policy
Schmidt has become an indispensable adviser to government, even as some of his investments have won federal contracts.
(Bloomberg) -- Eric Schmidt isn’t shy about his wealth and power: The former Google CEO recently won an auction for a superyacht seized from a Russian oligarch, he owns a big stake in a secretive and successful hedge fund and he spent $15 million for the Manhattan penthouse featured in Oliver Stone’s sequel to.
He has also leveraged his $27 billion fortune to build a powerful influence machine in Washington that’s allowed him to shape public policy to reflect his worldview and benefit the industries in which he’s deeply invested — most recently, artificial intelligence. When senators meet next week to hear from tech executives and experts about how AI should be regulated, Schmidt will be at the table.
In his previous comments to Congress on AI, Schmidt has delivered a relatively simple message: The US needs to keep both public and private money flowing to innovative companies to counter China’s technological advancements. Behind his testimony, he has complex layers of connections to drive that message home.
There’s the Special Competitive Studies Project, a private think tank he founded in 2021 and funded with an initial $2 million. Located roughly a mile from the Pentagon and modeled on a Cold War-era initiative, SCSP focuses on how AI and other emerging technologies could upend the US economy and national security. It has sent experts this year to testify before the Senate Judiciary Committee and advise the House committee studying the US’s “strategic competition” with China. At the think tank’s emerging technology summit later this month, Schmidt and his associates will once again share the stage with senior Biden administration officials.
Then there’s Schmidt Futures, his initiative to support scientists and entrepreneurs, some of whom have gone on to work for governments around the world. Politico reported last year that Schmidt Futures indirectly paid the salaries of some White House science office employees.
And Schmidt himself has been on high-profile advisory committees under the past three presidential administrations, including the National Security Commission on AI — mandated by Congress — which he led from 2019 to 2021.
“For decades there’s been a hollowing out of the federal government’s expertise and capacity, and that hollowing out has helped create an opportunity for Schmidt — who would always be an influential person under any scenario — to be just mind-blowingly influential,” said Jeff Hauser, head of the Revolving Door Project, a nonprofit that scrutinizes government appointees. “He just has the capacity to generate almost every mouth that is in the ears of policymakers in Washington.”
And they appear to be listening.
“I’ve always found Eric Schmidt to be a helpful resource,” Senate Majority Leader Chuck Schumer said in an emailed response to questions about the former Google CEO. “I know many senators on both sides of the aisle feel the same way. He is smart, thoughtful, and pragmatic.”
When Schmidt testified before the House China Committee in May, he said the US “must organize around innovation power.” The next month, Schumer said “innovation must be our North Star” when introducing his AI framework and the roundtable of tech leaders he’ll host next week.
All this is happening at a key moment for American AI policy, as the government races to set the global standard for regulating the new technology. Schumer said he aims to turn next week’s “insight forum” — including Schmidt, civil society leaders and executives from Google, Microsoft Corp., OpenAI and other top tech companies — into legislation in a matter of months, not years.
“Eric has been grateful for the opportunity to volunteer his time serving on various US government committees under both Democratic and Republican administrations and has always fully complied with all disclosure requirements,” a spokesperson for Schmidt said in an emailed statement.
‘Your Emissaries’
Schmidt, who took over as Google CEO from founders Sergey Brin and Larry Page in 2001 to help the company grow and go public, has already profited handsomely from Wall Street’s newfound obsession with AI. The bulk of Schmidt’s wealth comes from his roughly 1% stake in Google parent Alphabet Inc., one of a handful of companies that have been the key drivers and beneficiaries of AI advancements in the US. He also owns 20% of hedge fund D.E. Shaw & Co., which manages $60 billion and is a prominent investor in technology and AI.
Alphabet’s share price is up 55% this year, helping add $7.9 billion to Schmidt’s net worth, according to the Bloomberg Billionaires Index. He’s collected about $4 billion from the sale of Alphabet stock over the years, which has helped fund his investments in AI startups, including some that have gone on to win federal contracts.
Back in June, Schmidt agreed at a public auction to pay $67.6 million for the Alfa Nero superyacht, which was abandoned in Antigua after the US Treasury sanctioned Russian billionaire Andrey Guryev. He dropped the purchase after the deal faced legal challenges.
Government watchdogs and outside groups have long raised questions about Schmidt’s potential conflicts of interest. Senator Elizabeth Warren, a Massachusetts Democrat, last year wrote to the Pentagon laying out her concerns that Schmidt could use his position on federal commissions to “further his own personal financial interests.”
Schmidt has been remarkably candid about how his connections from the National Security Commission on AI and his other initiatives expand his reach — a strategy honed from his time at Google, which has developed a reputation as a lobbying powerhouse that aggressively advocates for its interests in Washington.
“The people who work in the commission and then go into the government, they are your emissaries,” Schmidt told a Capitol Hill cyber policy event in June. “A rule of business is that if you could put your person in the company, they’re likely to buy from you. It’s the same principle.”
At the same Capitol Hill event, Schmidt pointed to how much of the AI commission’s final report was incorporated into the National Defense Authorization Act, one of the bipartisan spending bills Congress passes annually.
“This is pretty arrogant on our part but we figured, since they’ve asked us to work, we might as well produce some legislation candidates, and much of that, something like two-thirds of it, is now part of the NDAA,” Schmidt said.
Contracts
According to a June report from the conservative nonprofit Bull Moose Project, Schmidt’s own venture capital funds and others in which he owns a stake invested in at least 57 AI startups, seven of which won government contracts or other forms of support. These investments, which were verified by Bloomberg News, include Rebellion Defense, a warfighting software company that counts Schmidt-backed venture firm Innovation Endeavors among its investors and won an Energy Department contract in April to enhance cybersecurity for the US nuclear arsenal.
Innovation Endeavors also invested in Machina Labs, an AI and robotics manufacturing company that has won $4.8 million in contracts from the Defense Department and NASA since 2021, according to government spending data cited by Bull Moose, including a $3 million contract for aerospace components.
The Tech Transparency Project, a nonprofit, also documented how Schmidt helped craft the 2022 CHIPS and Science Act, which injected more than $50 billion into US manufacturing of semiconductors, an essential ingredient in AI development. Several of his initiatives, including the public-private America’s Frontier Fund, are poised to benefit from the legislation, according to the nonprofit’s July report.
Schmidt’s financial interest in AI isn’t limited to US firms. Even as he warns policymakers about the competitive threat from China, his philanthropic Schmidt Family Foundation had invested in some of China’s biggest tech companies developing AI tools.
Managed by Hillspire LLC, Schmidt’s family office, the foundation held $1.7 million in Tencent Holdings Ltd. and $1 million in Alibaba Group Holding Ltd., according to tax documents filed last year. It also had smaller investments in more than 40 other Chinese companies, including state-owned enterprises like the Bank of China. These investments made up a small portion of the foundation’s $2.3 billion in total assets.
A Schmidt spokesperson said the investments were made by a third-party manager and neither Schmidt nor the foundation made any decisions, suggestions or recommendations. The spokesperson didn’t respond to a question regarding the current size of the foundation’s stake in the Chinese companies.
‘Our Solution’
AI isn’t the only emerging technology to be shaped by Schmidt and his network. In April 2022, a Schmidt Futures task force published a report on how the US should develop biotechnology. Several months later, the White House issued an executive order that echoed many of its recommendations, including proposals for federal investment and data sharing.
In December, Congress appointed Schmidt to the National Security Commission on Emerging Biotechnology, roughly two years after he founded First Spark Ventures, a fund that invests in the industry.
At a synthetic biology conference last year, he advised attendees on how to win the US government’s attention and money.
“What I found with politicians — because I spend an infinite amount of time in Washington, it seems, and have for decades — is some people respond to the threat from another country” like China, Schmidt said, while others respond to the promise of solving issues like climate change. The trick, he said, is to “figure out what they care about and offer our solution to their problems.”
| AI Policy and Regulations |
An anonymous reader quotes a report from Reuters: U.S. senators on Thursday introduced two separate bipartisan artificial intelligence bills amid growing interest in addressing issues surrounding the technology. One would require the U.S. government to be transparent when using AI to interact with people and another would establish an office to determine if the United States is remaining competitive in the latest technologies. Senator Gary Peters, a Democrat who chairs the Homeland Security committee, introduced a bill along with Senators Mike Braun and James Lankford, both Republicans, which would require U.S. government agencies to tell people when the agency is using AI to interact with them. The bill also requires agencies to create a way for people to appeal any decisions made by AI.
"The federal government needs to be proactive and transparent with AI utilization and ensure that decisions aren't being made without humans in the driver's seat," said Braun in a statement. Senators Michael Bennet and Mark Warner, both Democrats, introduced a measure along with Republican Senator Todd Young that would establish an Office of Global Competition Analysis that would seek to ensure that the United States stayed in the front of the pack in developing artificial intelligence. "We cannot afford to lose our competitive edge in strategic technologies like semiconductors, quantum computing, and artificial intelligence to competitors like China," Bennet said.
Earlier this week, Senate Majority Leader Chuck Schumer said he had scheduled three briefings for senators on artificial intelligence, including the first classified briefing on the topic so lawmakers can be educated on the issue. The briefings include a general overview on AI, examining how to achieve American leadership on AI and a classified session on defense and intelligence issues and implications. Further reading: Ask Slashdot: What Are Some Good AI Regulations? | AI Policy and Regulations |
Musk predicts ‘digital superintelligence’ will exist in 5–6 years
Elon Musk said he believes “digital superintelligence” would exist in the next five or six years, during a conversation with Rep. Mike Gallagher (R-Wis.) and Rep. Ro Khanna (D-Calif.) hosted on Twitter Spaces Wednesday.
“I think it’s five or six years away,” the Twitter owner and CEO of SpaceX and Tesla said in the conversation about artificial intelligence.
“The definition of digital superintelligence is that it’s smarter than any human, at anything,” he added, explaining, “That’s not necessarily smarter than the sum of all humans – that’s a higher bar.”
The event, Khanna said, was the product of a separate conversation he had with Musk and Gallagher about a month beforehand, when “we all thought it’d be important to have a thoughtful, engaged conversation in a way that people could participate without the theatrics of congressional hearings where people just are looking to score points. Hopefully, that’ll happen today.”
The conversation came hours after Musk announced the formation of his new artificial intelligence firm, xAI, which aims to “to understand the true nature of the universe,” according to its website. He acknowledged in the conversation that “xAI is really just starting out here, so … it’ll be a while before it’s relevant on a scale” of some of the leading artificial intelligence firms.
In the conversation, the participants discussed the dangers and potential benefits of AI, but all agreed on the need for some sort of regulatory framework – though they diverged on details.
Khanna suggested a regulatory agency like the U.S. Food and Drug Administration (FDA), whose officials “really know what they’re talking about.” Khanna said he believes the FDA has not only ensured the safety of drugs, but the high standards to which the U.S. holds its drugs.
Gallagher disagreed, concerned that the agency would not keep pace with the rapid change in technology. He suggested that oversight of AI requires “a more dynamic regulatory process with the technology like this where the pace of change is so quick.”
“Even if we passed a sensible AI law this year that struck that balance … between oversight guardrails, but also the need to innovate – it might be outdated very quickly. So figuring out that dynamic regulatory model without stifling innovation, I think, is the core dilemma,” Gallagher added.
Musk, too, expressed his desire for some sort of oversight for AI, saying, “just as we have regulation for nuclear technology. You can’t just go make a nuclear barrage, and everyone thinks that’s cool – Like, we don’t think that’s cool. So there’s a lot of regulation around things that we think are dangerous.”
| AI Policy and Regulations |
By Chang Che
Seoul: Five months after ChatGPT set off an investment frenzy over artificial intelligence, Beijing is moving to rein in China’s chatbots, a show of the government’s resolve to keep tight regulatory control over technology that could define an era.
The Cyberspace Administration of China this month unveiled draft rules for so-called generative AI, the software systems, like the one behind ChatGPT, that can formulate text and pictures in response to a user’s questions and prompts.
According to the regulations, companies must heed the Chinese Communist Party’s strict censorship rules, just as websites and apps have to avoid publishing material that besmirches Chinese leaders or rehashes forbidden history. The content of AI systems will need to reflect “socialist core values” and avoid information that undermines “state power” or national unity.
Companies will also have to make sure their chatbots create words and pictures that are truthful and respect intellectual property, and will be required to register their algorithms, the software brains behind chatbots, with regulators.
The rules are not final, and regulators may continue to modify them, but experts said engineers building AI services in China were already figuring out how to incorporate the edicts into their products.
Around the world, governments have been wowed by the power of chatbots, with the AI-generated results ranging from alarming to benign. Artificial intelligence has been used for everything from passing college exams to creating a fake photo of Pope Francis in a puffer jacket.
ChatGPT, developed by the US company OpenAI, which is backed by about $US13 billion ($19.5 billion) from Microsoft, has spurred Silicon Valley to apply the underlying technology to new areas such as video games and advertising. The venture capital firm Sequoia Capital estimates that AI businesses could eventually produce “trillions of dollars” in economic value.
In China, investors and entrepreneurs are racing to catch up. Shares of Chinese AI firms have soared. Splashy announcements have been made by some of the country’s biggest tech companies, including most recently e-commerce giant Alibaba; SenseTime, which makes facial recognition software; and search engine Baidu. At least two startups developing Chinese alternatives to OpenAI’s technology have raised millions of dollars.
ChatGPT is unavailable in China. But faced with a growing number of homegrown alternatives, the government has swiftly unveiled its red lines for AI, ahead of other countries that are still considering how to regulate chatbots.
The rules showcase a “move fast and break things” approach to regulation, said Kendra Schaefer, head of tech policy at Trivium China, a Beijing-based consulting firm.
“Because you don’t have a two-party system where both sides argue, they can just say, ‘OK, we know we need to do this, and we’ll revise it later’,” she added.
Chatbots are trained on large swaths of the internet, and developers are grappling with the inaccuracies and surprises of what they sometimes spit out. On their face, China’s rules require a level of technical control over chatbots that Chinese tech companies have not achieved. Even companies such as Microsoft are still fine-tuning their chatbots to weed out harmful responses. China has a much higher bar, which is why some chatbots have already been shut down and others are available only to a limited number of users.
Experts are divided on how difficult it will be to train AI systems to be consistently factual. Some doubt that companies can account for the gamut of Chinese censorship rules, which are often sweeping, are ever-changing and even require censorship of specific words and dates such as June 4, 1989, the day of the Tiananmen Square massacre. Others believe that over time, and with enough work, the machines can be aligned with truth and specific values systems, even political ones.
Analysts expect the rules to undergo changes after consultation with China’s tech companies. Regulators could soften their enforcement so the rules don’t wholly undermine development of the technology.
China has a long history of censoring the internet. Throughout the 2000s, the country constructed the world’s most powerful information dragnet over the web. It scared away non-compliant Western companies such as Google and Facebook. It hired millions of workers to monitor internet activity.
All the while, Chinese tech companies, which had to comply with the rules, flourished, defying Western critics who predicted that political control would undercut growth and innovation. As technologies such as facial recognition and mobile phones arose, companies helped the state harness them to create a surveillance state.
The current AI wave presents new risks for the Communist Party, said Matt Sheehan, an expert on Chinese AI and a fellow at the Carnegie Endowment for International Peace.
The unpredictability of chatbots, which will make statements that are nonsensical or false, what AI researchers call hallucination, runs counter to the party’s obsession with managing what is said online, Sheehan said.
“Generative artificial intelligence put into tension two of the top goals of the party: the control of information and leadership in artificial intelligence,” he added.
China’s new regulations are not entirely about politics, experts said. For example, they aim to protect privacy and intellectual property for individuals and creators of the data on which AI models are trained, a topic of worldwide concern.
In February, Getty Images, an image-database company, sued Stability AI, the startup behind the Stable Diffusion image generator, for training its image-generating system on 12 million watermarked photos, which Getty claimed diluted the value of its images.
China is making a broader push to address legal questions about AI companies’ use of underlying data and content. In March, as part of a major institutional overhaul, Beijing established the National Data Bureau, an effort to better define what it means to own, buy and sell data. The state body would also assist companies with building the data sets necessary to train such models.
“They are now deciding what kind of property data is and who has the rights to use it and control it,” said Schaefer, who has written extensively on China’s AI regulations and called the initiative “transformative”.
Still, China’s new guardrails may be ill-timed. The country is facing intensifying competition and sanctions on semiconductors that threaten to undermine its competitiveness in technology, including AI.
Hopes for Chinese AI ran high in early February when Xu Liang, an AI engineer and entrepreneur, released one of China’s earliest answers to ChatGPT as a mobile app. The app, ChatYuan, garnered more than 10,000 downloads in the first hour, Xu said.
Media reports of marked differences between the party line and ChatYuan’s responses soon surfaced. Responses offered a bleak diagnosis of the Chinese economy and described the Russian war in Ukraine as a “war of aggression”, at odds with the party’s more pro-Russia stance. Days later, authorities shut down the app.
Xu said he was adding measures to create a more “patriotic” bot. They include filtering out sensitive keywords and hiring more manual reviewers who can help him flag problematic answers. He is even training a separate model to detect “incorrect viewpoints,” which he will then filter out.
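In practice, the first of those measures can be as simple as a post-generation blocklist that withholds or escalates flagged answers. The sketch below is purely illustrative and is not ChatYuan's actual moderation code; the placeholder terms, the length heuristic, and the manual-review step are all invented for this example.

```python
# Illustrative sketch only; not ChatYuan's real pipeline. The blocklist,
# length threshold, and review queue are invented for this example.
BLOCKED_TERMS = {"example banned phrase", "another sensitive term"}  # placeholder list

def review_response(text: str) -> str:
    """Return the response, a refusal, or a marker for manual review."""
    lowered = text.lower()
    hits = [term for term in BLOCKED_TERMS if term in lowered]
    if hits:
        # Hard block: at least one prohibited term appears verbatim.
        return "[withheld: matched " + ", ".join(hits) + "]"
    if len(text) > 2000:
        # Arbitrary heuristic: unusually long answers go to a human reviewer.
        return "[queued for manual review]"
    return text

if __name__ == "__main__":
    print(review_response("A short, innocuous answer."))
```

A classifier trained to spot "incorrect viewpoints," as Xu describes, would slot in where the length heuristic sits here, scoring each answer before it is released.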
Still, it is not clear when Xu’s bot will ever satisfy authorities. The app was initially set to resume Feb. 13, according to screenshots, but as of Friday it was still down.
“Service will resume after troubleshooting is complete,” it read.
This article originally appeared in The New York Times. | AI Policy and Regulations |
WASHINGTON -- Amazon, Google, Meta, Microsoft and other companies that are leading the development of artificial intelligence technology have agreed to meet a set of AI safeguards brokered by President Joe Biden's administration.
The White House said Friday that it has secured voluntary commitments from seven U.S. companies meant to ensure their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of commercial AI systems, though they don't detail who will audit the technology or hold the companies accountable.
A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.
The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.
The companies have also committed to methods for reporting vulnerabilities to their systems and to using digital watermarking to help distinguish between real and AI-generated images known as deepfakes.
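At its simplest, an image watermark hides a known signature in pixel data that a separate detector can later check for. The toy least-significant-bit sketch below only illustrates that idea; the signature, its placement, and the detection rule are invented here, and the schemes the companies have pledged are understood to be far more robust than this.

```python
# Toy least-significant-bit (LSB) watermark: illustration only, not any
# company's production scheme. The 8-bit signature below is arbitrary.
import numpy as np

TAG = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(image: np.ndarray) -> np.ndarray:
    """Write the signature into the lowest bits of the first 8 pixel values."""
    out = image.copy().reshape(-1)
    out[: TAG.size] = (out[: TAG.size] & 0xFE) | TAG
    return out.reshape(image.shape)

def detect(image: np.ndarray) -> bool:
    """True if the first 8 least-significant bits match the signature."""
    bits = image.reshape(-1)[: TAG.size] & 1
    return bool(np.array_equal(bits, TAG))

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    print(detect(embed(img)))  # True: the stamped copy carries the signature
    print(detect(img))         # Almost certainly False for the unstamped image
```

Anything that re-encodes the pixels, even light JPEG compression, would wipe an LSB mark, which is why real provenance schemes spread the signal across the whole image or attach signed metadata instead.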
They will also publicly report flaws and risks in their technology, including effects on fairness and bias, the White House said.
The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.
Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.
“History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” said a statement from James Steyer, founder and CEO of the nonprofit Common Sense Media.
Senate Majority Leader Chuck Schumer, D-N.Y., has said he will introduce legislation to regulate AI. He has held a number of briefings with government officials to educate senators about an issue that's attracted bipartisan interest.
A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.
But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.
The software trade group BSA, which includes Microsoft as a member, said Friday that it welcomed the Biden administration's efforts to set rules for high-risk AI systems.
“Enterprise software companies look forward to working with the administration and Congress to enact legislation that addresses the risks associated with artificial intelligence and promote its benefits,” the group said in a statement.
A number of countries have been looking at ways to regulate AI, including European Union lawmakers who have been negotiating sweeping AI rules for the 27-nation bloc.
U.N. Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year.
The United Nations chief also said he welcomed calls from some countries for the creation of a new U.N. body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
The White House said Friday that it has already consulted on the voluntary commitments with a number of countries. | AI Policy and Regulations |
With help from Derek Robertson
Earlier this year, to galvanize public concern about the growing risks of AI, Tristan Harris and Aza Raskin — co-founders of the Center for Humane Technology — uploaded an hourlong YouTube video they recorded in March at a private gathering in San Francisco. Since then, nearly 3 million people have watched their TED-style talk on “The A.I. Dilemma.”
One of those was California Gov. Gavin Newsom, who watched the video — multiple times, according to his office — took notes, and forwarded the video to his cabinet and senior staff.
The talk was meant to urge policymakers into putting guardrails on the technology now. And in California, it just paid off.
Roughly six months later, on Wednesday, Newsom signed an executive order to shape the state’s own handling of generative AI, and to study the development, use and risks of the technology.
Newsom’s order was one of the most definitive moves yet to regulate AI — a technology that has been the focus of sudden attention in Congress, the West Wing and state capitals with little clarity on what should actually be done.
In an interview with DFD, Newsom’s deputy chief of staff, Jason Elliott, talked in more detail about how the order came about, and Newsom’s long-term goals.
With AI’s full long-term impact still unclear, focusing the order on the government’s own use of the technology was a strategic choice, he said — a way to avoid boiling the whole AI ocean at once. “We first seek to control that which we can control,” Elliott said. (Independently, Sen. Gary Peters is trying out a similar approach with some success in the Senate.)
Elliott said technology vendors had been vying to sell AI tools to the California government since before ChatGPT burst into public consciousness. “There really isn’t much in the way of best practices or guidelines for government procurement and licensing of genAI technology,” he said. “Well, that feels to us like a perfect place for California to step up.”
Newsom is hoping to set an example for how other state governments should contract with government technology vendors on generative AI, Elliott said. And in doing so, he hopes to influence wider industry standards for the technology.
The lead-by-example approach that California is taking has the blessing of the White House, which is also looking at ways the government should use generative AI, Elliott said. “We’re working very closely with the president’s team. To the extent that they want to push for legislation, we’re obviously going to be supportive of where Joe Biden is headed with this,” he said.
California’s lawmakers are also looking into AI: Several pieces of AI legislation are floating around the Capitol this session, although only one — a bill affirming the legislature’s commitment to the White House’s AI Bill of Rights — has been signed into law so far. With attention at every level of government, Newsom’s office is mindful of its lane: “This is really something where we recognize our role in the federal system,” Elliott said, also mentioning the state legislature’s “ideas on how to approach consumer protection, bias, misinformation, and financial protection.”
“This executive order is not the be-all, end-all of California’s entire posture on AI forevermore,” he said.
And the governor’s office is still hoping that Washington — whether Congress or the White House, or both — will lay out a national framework on AI. “We’re very sensitive to companies not wanting a state-by-state patchwork quilt,” Elliott said. “But at the same time, we’re not going to abdicate our responsibility.”
To that end, Elliott said part of the executive order was crafted so that Newsom’s office could start figuring out the security risks of AI for itself, instead of taking its cues from tech interest groups.
And at the bottom of it, for the state whose tech hubs birthed generative AI, there’s a bit of pride involved in coming to the plate ahead of others. When it comes to AI policy, “California is a natural first mover,” Elliott said. “We are the literal home to a majority of these companies and a majority of the patents and a majority of the venture capital globally.”
“This is really about us, embracing that first mover advantage, and trying to put some meat on the bones of what we mean when we say safe, ethical AI,” Elliott said.
POLITICO’s Mark Scott is back with a new edition of Digital Bridge, examining the seemingly redundant web of upcoming global AI summits meant to set norms and standards for the technology.
Mark points to an upcoming policy draft expected from a meeting today of G7 officials, which will be shared with experts and watchdogs at a subsequent October summit and then approved by the G7’s digital ministers sometime before the end of the year. He characterizes this as a “massive game of horse-trading” over what will ultimately go into the guidance, which reveals a philosophical split between Western powers that want to take a more hands-off approach and those that support Europe’s AI Act.
“In that context, the G7 is trying to thread the needle so that countries can pursue their own forms of AI governance, while also creating a patchwork of international cooperation,” Mark writes. And then… there’s also a summit planned in India for December, including a wider set of non-Western countries, and one in the U.K. at the beginning of November, the focus of which remains unclear — except for the U.K.’s insistence on including China, which represents an entirely different set of competing interests. — Derek Robertson
Speaking of which, in a new essay for the Harvard Business Review, Hemant Taneja and Fareed Zakaria lay out the implications of what they describe as a “new digital cold war” between the West and China over the development of powerful AI technologies. That’s no small issue as China touts the release of a new Tencent-designed chatbot and gloats over its overcoming trade embargoes on powerful microchips.
Taneja and Zakaria insist the West’s only hope in surpassing a China-led, surveillance-focused, authoritarian digital global order is to band together in cooperation. “For a future to prevail that prizes openness and individual rights, democratic nations need to be market leaders in AI,” they write. “The only way to ensure this is by promoting international collaboration, especially between democracies and other defenders of the rules-based order.”
They take pains to point out that means not just collaboration across governments, but between governments and the private sector itself. “We cannot risk AI going awry and putting democracies off track in this competitive race,” they write in conclusion. “Since the impacts of AI will be felt across every sector of society, accounting for broad stakeholder interests is both a moral responsibility and the only way to bring about sustainable transformation.” — Derek Robertson
- Generative AI is the newest tool in the phishing wars.
- Maybe adding tech to your sleep cycle isn’t such a good idea after all.
- OpenAI will soon host its first developer conference in San Francisco.
- Could the future of heat pump technology be… propane?
- A new “Center For Civil Rights and Technology” is fighting AI hate speech.
| AI Policy and Regulations |
OpenAI released ChatGPT last November without worrying too much about user privacy, copyright, or accuracy implications. But we’re starting to see more users and regulators realize that generative AI tech needs oversight on all these matters. The latest move comes from a large group of content creators from Germany who are worried about ChatGPT’s potential copyright infringement.
More than 140,000 authors and performers urged the European Union on Wednesday to beef up draft artificial intelligence (AI) rules and include stronger copyright protections.
“The unauthorised usage of protected training material, its non-transparent processing, and the foreseeable substitution of the sources by the output of generative AI raise fundamental questions of accountability, liability and remuneration, which need to be addressed before irreversible harm occurs,” said the letter Reuters saw. “Generative AI needs to be at the centre of any meaningful AI market regulation.”
The report says the letter’s signatories include Verdi and DGB, trade unions for the creative sector. Associations for photographers, designers, journalists, and illustrators also signed the document.
The European Commission proposed AI rules last year and should finalize the details in the coming months. But the German group wants the EU to beef up the regulations to cover generative AI across the entire product cycle.
The group also wants providers of technology like ChatGPT to be liable for the content the chatbots deliver. That includes content that might infringe on personal rights and copyrights. It also covers generative content that might lead to misinformation and discrimination.
Finally, the letter asks for regulations that would prevent companies that provide ChatGPT-like platforms, such as Microsoft, Google, Amazon, and Meta, from also operating platforms that distribute digital content.
The German letter isn’t the first to address issues with OpenAI’s ChatGPT. Italy has banned ChatGPT over privacy matters, and Canada is conducting a similar privacy-based investigation. Furthermore, a mayor in Australia has considered a defamation suit against OpenAI. Separately, News Corp. Australia CEO asked for creators of ChatGPT-like platforms to pay for the news content they use to train their chatbots.
It looks like it’ll be only a matter of time until companies like OpenAI and Google will have to deal with AI regulations for user privacy, copyright, and misinformation. That might hinder the training of smarter AI models, at least initially. And AI access might become more expensive for the end user. But it’s abundantly clear that we can’t have AI without regulation. | AI Policy and Regulations |
U.S. President Joe Biden has issued an executive order (EO) that seeks to establish “new standards” for AI safety and security, including requirements for companies developing foundation AI models to notify federal government and share results of all safety tests before they’re deployed to the public.
The fast-moving generative AI movement, driven by the likes of ChatGPT and foundation AI models developed by OpenAI, has sparked a global debate around the need for guardrails to counter the potential pitfalls of giving over too much control to algorithms. Back in May, G7 leaders identified key themes that need to be addressed as part of the so-called Hiroshima AI Process, with the seven constituent countries today reaching an agreement on guiding principles and a “voluntary” code of conduct for AI developers to follow.
Last week, the United Nations (UN) announced a new board to explore AI governance, while the U.K. is this week hosting its global summit on AI governance at Bletchley Park, with U.S. vice president Kamala Harris set to speak at the event.
The Biden-Harris Administration, for its part, has also been focusing on AI safety in the absence of anything legally binding, securing “voluntary commitments” from the major AI developers including OpenAI, Google, Microsoft, Meta, and Amazon — this was always intended as a prelude to an executive order, though, which is what is being announced today.
“Safe, secure, and trustworthy AI”
Specifically, the order sets out that developers of the “most powerful AI systems” must share their safety test results and related data with the U.S. government.
“As AI’s capabilities grow, so do its implications for Americans’ safety and security,” the order notes, adding that it’s intended to “protect Americans from the potential risks of AI systems.”
Aligning the new AI safety and security standards with the Defense Production Act (1950), the order specifically targets any foundation model that might pose a risk to national security, economic security, or public health — which, while somewhat open to interpretation, should cover just about any foundation model that comes to fruition.
“These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public,” the order adds.
Elsewhere, the order also outlines plans to develop various new tools and systems to ensure that AI is safe and trustworthy, with the National Institute of Standards and Technology (NIST) tasked with developing new standards “for extensive red-team testing” prior to release. Such tests will be applied across the board, with the Departments of Energy and Homeland Security addressing risks involved with AI and critical infrastructure, for example.
The order also serves to underpin a number of new directives and standards, including — but not limited to — protecting against the risks of using AI to engineer dangerous biological materials; protecting against AI-powered fraud and deception; and establishing a cybersecurity program to build AI tools for addressing vulnerabilities in critical software.
Teeth
It’s worth noting that the order does address areas such as equity and civil rights, pointing to how AI can exacerbate discrimination and bias in healthcare, justice, and housing, as well as the dangers that AI poses in relation to things like workplace surveillance and job displacement. But some might interpret the order as lacking real teeth, as much of it seems to be centered around recommendations and guidelines — for instance, it says that it wants to ensure fairness in the criminal justice system by “developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.”
And while the executive order goes some way toward codifying how AI developers should go about building safety and security into their systems, it’s not clear to what extent it’s enforceable without further legislative changes. For example, the order discusses concerns around data privacy — after all, AI makes it infinitely easier to extract and exploit individuals’ private data at scale, something that developers might be incentivized to do as part of their model training processes. However, the executive order merely calls on Congress to pass “bipartisan data privacy legislation” to protect Americans’ data, including requesting more federal support to develop privacy-preserving AI development techniques.
With Europe on the cusp of passing the first extensive AI regulations, it’s clear that the rest of the world is also grappling with ways to contain what is set to create one of the greatest societal disruptions since the industrial revolution. How impactful President Biden’s executive order proves to be in reeling in the likes of OpenAI, Google, Microsoft, and Meta remains to be seen. | AI Policy and Regulations |
[Image: An image from a Republican National Committee ad against President Biden features imagery generated by artificial intelligence. The spread of AI-generated images, video and audio presents a challenge for policymakers. Credit: Republican National Committee]
This week, the Republican National Committee used artificial intelligence to create a 30-second ad imagining what President Joe Biden's second term might look like.
It depicts a string of fictional crises, from a Chinese invasion of Taiwan to the shutdown of the city of San Francisco, illustrated with fake images and news reports. A small disclaimer in the upper left says the video was "Built with AI imagery."
The ad was just the latest instance of AI blurring the line between real and make believe. In the past few weeks, fake images of former President Donald Trump scuffling with police went viral. So did an AI-generated picture of Pope Francis wearing a stylish puffy coat and a fake song using cloned voices of pop stars Drake and The Weeknd.
Artificial intelligence is quickly getting better at mimicking reality, raising big questions over how to regulate it. And as tech companies unleash the ability for anyone to create fake images, synthetic audio and video, and text that sounds convincingly human, even experts admit they're stumped.
"I look at these generations multiple times a day and I have a very hard time telling them apart. It's going to be a tough road ahead," said Irene Solaiman, a safety and policy expert at the AI company Hugging Face.
Solaiman focuses on making AI work better for everyone. That includes thinking a lot about how these technologies can be misused to generate political propaganda, manipulate elections, and create fake histories or videos of things that never happened.
Some of those risks are already here. For several years, AI has been used to digitally insert unwitting women's faces into porn videos. These deepfakes sometimes target celebrities and other times are used to take revenge on private citizens.
It underscores that the risks from AI are not just what the technology can do — they're also about how we as a society respond to these tools.
"One of my biggest frustrations that I'm shouting from the mountaintops in my field is that a lot of the problems that we're seeing with AI are not engineering problems," Solaiman said.
Technical solutions struggling to keep up
There's no silver bullet for distinguishing AI-generated content from that made by humans.
Technical solutions do exist, like software that can detect AI output, and AI tools that watermark the images or text they produce.
Another approach goes by the clunky name content provenance. The goal is to make it clear where digital media — both real and synthetic — comes from.
The idea is to let people easily "identify what type of content this is," said Jeff McGregor, CEO of Truepic, a company working on digital content verification. "Was it created by human? Was it created by a computer? When was it created? Where was it created?"
But all of these technical responses have shortcomings. There's not yet a universal standard for identifying real or fake content. Detectors don't catch everything, and must constantly be updated as AI technology advances. Open source AI models may not include watermarks.
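For text, one published line of research detects a statistical watermark rather than a hidden tag: the generator quietly favors a pseudorandom "green" half of the vocabulary keyed on the preceding word, and a detector checks whether an improbable share of words lands in that half. The sketch below is a toy in that spirit, not any vendor's production detector; the hashing scheme and the suggested threshold are invented for illustration.

```python
# Toy "green list" text-watermark detector: illustration only. A cooperating
# generator would bias word choices toward pairs that hash as "green".
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Pseudorandomly assign roughly half of (prev_word, word) pairs to the green list."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str) -> float:
    """How far the green-word count sits above the 50% expected of unwatermarked text."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    greens = sum(is_green(prev, cur) for prev, cur in pairs)
    n = len(pairs)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

if __name__ == "__main__":
    # Ordinary human text should hover near 0; a biased generator's output
    # would push the score well above a threshold such as 4.
    print(round(watermark_z_score("this is ordinary human written text with no hidden bias"), 2))
```

The shortcomings described above show up immediately in a toy like this: paraphrasing the text reshuffles the word pairs, and a model that never applied the bias, open source or otherwise, leaves nothing to detect.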
Laws, regulations, media literacy
That's why those working on AI policy and safety say a mix of responses are needed.
Laws and regulation will have to play a role, at least in some of the highest-risk areas, said Matthew Ferraro, an attorney at WilmerHale and an expert in legal issues around AI.
"It's going to be, probably, nonconsensual deepfake pornography or deepfakes of election candidates or state election workers in very specific contexts," he said.
Ten states already ban some kinds of deepfakes, mainly pornography. Texas and California have laws barring deepfakes targeting candidates for office.
Copyright law is also an option in some cases. That's what Drake and The Weeknd's label, Universal Music Group, has invoked to get the song impersonating their voices pulled from streaming platforms.
When it comes to regulation, the Biden administration and Congress have signaled their intentions to do something. But as with other matters of tech policy, the European Union is leading the way with the forthcoming AI Act, a set of rules meant to put guardrails on how AI can be used.
Tech companies, however, are already making their AI tools available to billions of people, and incorporating them into apps and software many of us use every day.
That means for better or worse, sorting fact from AI fiction requires people to be savvier media consumers, though it doesn't mean reinventing the wheel. Propaganda, medical misinformation and false claims about elections are problems that predate AI.
"We should be looking at the various ways of mitigating these risks that we already have and thinking about how to adapt them to AI," said Princeton University computer science professor Arvind Narayanan.
That includes efforts like fact-checking, and asking yourself whether what you're seeing can be corroborated, which Solaiman calls "people literacy."
"Just be skeptical, fact-check anything that could have a large impact on your life or democratic processes," she said. | AI Policy and Regulations |
WASHINGTON, D.C. – Members of Congress provided a range of opinions on regulating AI, but several agreed that bipartisanship is the key to moving forward with a framework, lawmakers on Capitol Hill told Fox News.
China and the European Union have recently drafted AI regulations, but Congress hasn't passed any legislation since the tech's recent rapid development. Republicans worry that lawmakers could overregulate AI and harm innovation, while Democrats fear that machine learning poses potential threats to consumers.
"There is an urgent need for regulation," Rep. Ritchie Torres, a New York Democrat, told Fox News. "But we have to get it right. We have to be careful not to regulate prematurely or haphazardly."
Rep. Tony Cárdenas, another Democrat, agreed: "We need to have regulations in Europe, around the world, and we need to have regulations on AI right here in the United States."
Nearly 40 countries passed AI laws last year, Gary Marcus, who hosts the AI-themed podcast, "Humans vs Machines with Gary Marcus," told Fox News last week. He called for international coordination to regulate the technology.
But Republicans who spoke with Fox News said overregulation is also a concern.
"If you overregulate, as we tend to do, you're going to stifle innovation," Rep. Nancy Mace said. "If we overregulate like other countries around the world, in European Union for example, we can't even imagine some of the ways that it will be used."
The EU's Parliament approved the Artificial Intelligence Act last week, which would restrict how AI platforms use consumer data and limit how AI can be used for facial recognition and predictive policing. The Cyberspace Administration of China released regulations in April that outline rules AI companies must follow to avoid penalties, such as complying with socialist values and government security reviews of machine learning models before they are released publicly.
"Congress is clearly behind on AI," Ohio Sen. J.D. Vance said. " But I also think that our answers are going to be a lot different than the Democrats' answers."
Senate Majority Leader Chuck Schumer released a framework for AI regulation and met with Tesla and SpaceX CEO Elon Musk to discuss the plan last month, but so far no AI regulation has been passed. Those rules would lay out ethical restrictions as well as require tech companies to disclose their data sources and who trained the algorithm, and to explain how the models arrive at their responses.
"Can Chuck Schumer and the Biden administration do anything substantive to stop the China assault on AI? No, they have no willingness to do it," GOP Rep. Ralph Norman said. "We're going in two different directions with a Republican plan versus a Democrat plan."
Despite their different positions, lawmakers on both sides of the aisle said bipartisan cooperation on AI is needed to get anything passed.
"AI Is going to be so revolutionary that it transcends partisanship and that we as a country have to figure out how do we remain the leaders of AI and how do we make it work for the American people," Torres told Fox News.
Vance said: "There's a broad concern about it, and I think that gives some opportunity for bipartisanship."
Gabrielle Reyes contributed to this report. | AI Policy and Regulations |
The White House has struck a deal with major AI developers—including Amazon, Google, Meta, Microsoft, and OpenAI—that commits them to take action to prevent harmful AI models from being released into the world.

Under the agreement, which the White House calls a “voluntary commitment,” the companies pledge to carry out internal tests and permit external testing of new AI models before they are publicly released. The tests will look for problems including biased or discriminatory output, cybersecurity flaws, and risks of broader societal harm. Startups Anthropic and Inflection, both developers of notable rivals to OpenAI’s ChatGPT, also participated in the agreement.

“Companies have a duty to ensure that their products are safe before introducing them to the public by testing the safety and capability of their AI systems,” White House special adviser for AI Ben Buchanan told reporters in a briefing yesterday. The risks that companies were asked to look out for include privacy violations and even potential contributions to biological threats. The companies also committed to publicly reporting the limitations of their systems and the security and societal risks they could pose.

The agreement also says the companies will develop watermarking systems that make it easy for people to identify audio and imagery generated by AI. OpenAI already adds watermarks to images produced by its Dall-E image generator, and Google has said it is developing similar technology for AI-generated imagery. Helping people discern what’s real and what’s fake is a growing issue as political campaigns appear to be turning to generative AI ahead of US elections in 2024.

Recent advances in generative AI systems that can create text or imagery have triggered a renewed AI arms race among companies adapting the technology for tasks like web search and writing recommendation letters. But the new algorithms have also triggered renewed concern about AI reinforcing oppressive social systems like sexism or racism, boosting election disinformation, or becoming tools for cybercrime. As a result, regulators and lawmakers in many parts of the world—including Washington, DC—have increased calls for new regulation, including requirements to assess AI before deployment.

It’s unclear how much the agreement will change how major AI companies operate. Already, growing awareness of the potential downsides of the technology has made it common for tech companies to hire people to work on AI policy and testing. Google has teams that test its systems, and it publicizes some information, like the intended use cases and ethical considerations for certain AI models. Meta and OpenAI sometimes invite external experts to try and break their models in an approach dubbed red-teaming.

“Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices—such as red-team testing and the publication of transparency reports—that will propel the whole ecosystem forward,” Microsoft president Brad Smith said in a blog post.

The potential societal risks the agreement pledges companies to watch for do not include the carbon footprint of training AI models, a concern that is now commonly cited in research on the impact of AI systems. Creating a system like ChatGPT can require thousands of high-powered computer processors, running for extended periods of time.

Andrew Burt, managing partner at law firm BNH, which specializes in AI, says the potential risks of generative AI systems are becoming clear to everyone involved with the technology. The Federal Trade Commission began a probe into OpenAI’s business practices last week, alleging that the company participated in “unfair or deceptive privacy or data security practices.”

The White House agreement’s stipulation that companies should commission external assessments of their technology adds to evidence that outside audits are becoming “the central way governments exert oversight for AI systems,” Burt says.

The White House also promoted the use of audits in the voluntary AI Bill of Rights issued last year, and it is supporting a hacking contest centered on generative AI models at the Defcon security conference next month. Audits are also a requirement of the EU’s sweeping AI Act, which is currently being finalized.

President Joe Biden will meet at the White House today with executives from the companies that joined the new AI agreement, including Anthropic CEO Dario Amodei, Microsoft president Brad Smith, and Inflection AI CEO Mustafa Suleyman. His administration is also developing an executive order to govern the use of AI through actions by federal agencies, but the White House gave no specific timeline for its release. | AI Policy and Regulations |
Do your homework on the good of AI. Then legislate it.
Inside the beltway, artificial intelligence (AI) has rocketed up the public policy agenda. While there’s policy discussion of the promise and perils of AI, the latter is emphasized. Instead, we need more policy balance with increased focus on the tremendous potential for good, and how to best ensure that this good is realized for all.
AI has great promise for our future social advancement, national security, and economic success, including how it can maintain and strengthen U.S. leadership in the global marketplace.
The good of AI is wide-ranging. Education and learning may be customized to an individual’s goals and needs, adjusting to fast-learners, slower-learners, and special interests. Students and learners won’t have to be constrained by the contours of a static curriculum or module. Aided by AI, workers throughout the economy can become more productive and provide services previously not possible or viable. New economic opportunities will be generated, by both America’s largest companies and smallest businesses.
AI should be viewed as “augmented intelligence,” not as a replacement for humans. Some tasks will be more ably handled by AI, leaving more sophisticated tasks for the human touch.
As with previous technological innovations, there will be a disruption in the workforce, with some jobs eliminated and new ones created. As this shift in employment is likely significant and will occur across most sectors of the economy, public and private investment and concerted policy efforts will be needed. The scale of the AI transition may well be comparable to the transition from the manufacturing to the service economy in the 20th century. In this previous shift, blue-collar workers faced the most upheaval. This time, it will be knowledge workers; those employed in fields with large bodies of codified information, standards and procedures.
Instructors will need training and resources to educate students of all ages on the concepts, opportunities, and challenges enabled by AI. We will need to invest in educating and training the workforce, through universities, community colleges, libraries, workforce centers, and other community organizations. Libraries, for instance, have helped the public keep up with advancements in technology by providing computer labs, audio/visual studios, and maker space labs, as well as teaching related skills and applications. The same public access and education should be broadly offered for AI-based systems and services, such as this recent workshop on ChatGPT and other AI programs at Chicago Public Library. Investments are especially important to ensure opportunity for those in low-income communities—community anchor institutions such as libraries are well-situated to provide AI-related education and awareness, by leveraging their existing physical and intellectual resources, presence and reputation.
While we can cherish the power of the free market to propel advances for the nation, leaving things entirely to the market will not be sufficient to effectively guide us through the AI transition. Public sector investments will be needed.
While emphasizing the promise of AI, there are valid challenges and concerns to consider and address—as has been true for past innovations. Questions about the appropriate use of copyrighted materials by AI have emerged, as well as the implications of this use for authors and creators. Other information policy issues are implicated such as privacy and cross-national laws. Another important issue is protection for the general public from potential AI-related harms, which include such moral and ethical concerns as advice about health, safety and other matters of personal import become yet more dependent on technology.
In a recent survey reported by Vox, 72 percent of Americans said that they want the adoption of AI to slow down. Legislation to regulate AI should also slow down, despite the energy for legislative proposals on AI. It is not yet time to act. Rather, more brainstorming and thinking are needed. We need to analyze and plan. The time is perfect to initiate a study by the National Academies of Sciences, Engineering, and Medicine—getting America’s best thinkers to develop visions, scenarios, a blueprint for strategy and potential legislative proposals in a non-partisan setting. Ramping up expertise in other ways, such as strengthening expert staffing in federal agencies, is also now desirable. AI policy is too important to act in haste.
ChatGPT agrees with me. I asked, “Should the US Congress pass new legislation on artificial intelligence in 2023 or 2024?” The response included: “It’s important to avoid rushing into legislation solely for the sake of urgency and instead prioritize well-considered, effective, and adaptable regulations that benefit society as a whole.”
Alan S. Inouye is senior director of public policy & government relations at the American Library Association.
| AI Policy and Regulations |
The UK government has set out plans to regulate artificial intelligence (AI) with new guidelines on "responsible use" in a white paper from the Department for Science, Innovation and Technology. The government views AI as a key technology of the future, having contributed £3.7bn ($5.6bn) to the UK economy in 2022. The paper proposes rules for general-purpose AI, which includes systems that can be used for different purposes, such as chatbots. However, critics have raised concerns that the rapid growth of AI could threaten jobs or be used for malicious purposes. There is also a risk that AI may display biases against particular groups if trained on large datasets that include racist or sexist material.
The government's approach to AI governance involves asking existing regulators, such as the Health and Safety Executive, Equality and Human Rights Commission, and Competition and Markets Authority, to come up with their own approaches that suit the way AI is being used in their sectors. These regulators will use existing laws rather than being given new powers. The government hopes that by doing this, organisations will not be held back from using AI to its full potential, and a patchwork of legal regimes will not cause confusion for businesses trying to comply with rules.
The white paper outlines five principles that regulators should consider to enable the safe and innovative use of AI in the industries they monitor. These include safety, security and robustness, transparency and "explainability", fairness, accountability and governance, and contestability and redress. Over the next year, regulators will issue practical guidance to organisations to set out how to implement these principles in their sectors.
While some experts welcome the idea of regulation, they warn about significant gaps in the UK's approach, which could leave harms unaddressed. Initially, the proposals in the white paper will lack any statutory footing, which means no new legal obligations on regulators, developers or users of AI systems, with the prospect of only a minimal duty on regulators in future. Furthermore, the UK will struggle to regulate different uses of AI across sectors without substantial investment in its existing regulators.
Simon Elliott, a partner at the law firm Dentons, views the government's approach as "light-touch," making the UK an outlier against the global trends around AI regulation. For example, China has taken the lead in moving AI regulations past the proposal stage, with rules that mandate companies notify users when an AI algorithm is playing a role. The EU has also published proposals for regulations titled the Artificial Intelligence Act, which would have a much broader scope than China's enacted regulation.
I think it is essential to regulate AI to ensure its responsible use, especially as the technology develops rapidly. While AI advocates argue that the tech is already delivering real social and economic benefits, critics are concerned about the risks AI could pose to people's privacy, their human rights, or their safety. Therefore, the government's approach is commendable as it outlines principles that regulators should consider to enable the safe and innovative use of AI. It is also essential to note that the UK's approach may need improvement to address the significant gaps and avoid burdening regulators with an increasingly diverse range of complaints. | AI Policy and Regulations |
The nation's biggest technology executives on Wednesday loosely endorsed the idea of government regulations for artificial intelligence at an unusual closed-door meeting in the U.S. Senate. But there is little consensus on what regulation would look like, and the political path for legislation is difficult.
Executives attending the meeting included Tesla CEO Elon Musk, Meta's Mark Zuckerberg, former Microsoft CEO Bill Gates and Google CEO Sundar Pichai. Musk said the meeting "might go down in history as being very important for the future of civilization."
First, though, lawmakers have to agree on whether to regulate, and how.
Senate Majority Leader Chuck Schumer, who organized the private forum on Capitol Hill as part of a push to legislate artificial intelligence, said he asked everyone in the room — including almost two dozen tech executives, advocates and skeptics — whether government should have a role in the oversight of artificial intelligence, and "every single person raised their hands, even though they had diverse views," he said.
Among the ideas discussed was whether there should be an independent agency to oversee certain aspects of the rapidly developing technology, how companies could be more transparent and how the U.S. can stay ahead of China and other countries.
"The key point was really that it's important for us to have a referee," said Musk during a break in the daylong forum. "It was a very civilized discussion, actually, among some of the smartest people in the world."
Schumer will not necessarily take the tech executives' advice as he works with colleagues on the politically difficult task of ensuring some oversight of the burgeoning sector. But he invited them to the meeting in hopes that they would give senators some realistic direction for meaningful regulation.
Congress should do what it can to maximize AI's benefits and minimize the negatives, Schumer said, "whether that's enshrining bias, or the loss of jobs, or even the kind of doomsday scenarios that were mentioned in the room. And only government can be there to put in guardrails."
Congress has a lackluster track record when it comes to regulating new technology, and the industry has grown mostly unchecked by government in the past several decades. Many lawmakers point to the failure to pass any legislation surrounding social media, such as for stricter privacy standards.
Schumer, who has made AI one of his top issues as leader, said regulation of artificial intelligence will be "one of the most difficult issues we can ever take on," and he listed some of the reasons why: It's technically complicated, it keeps changing and it "has such a wide, broad effect across the whole world," he said.
Sparked by the release of ChatGPT less than a year ago, businesses have been clamoring to apply new generative AI tools that can compose human-like passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over their potential harms and prompted calls for more transparency in how the data behind the new products is collected and used.
Republican Sen. Mike Rounds of South Dakota, who led the meeting with Schumer, said Congress needs to get ahead of fast-moving AI by making sure it continues to develop "on the positive side" while also taking care of potential issues surrounding data transparency and privacy.
"AI is not going away, and it can do some really good things or it can be a real challenge," Rounds said.
The tech leaders and others outlined their views at the meeting, with each participant getting three minutes to speak on a topic of their choosing. Schumer and Rounds then led a group discussion.
During the discussion, according to attendees who spoke about it, Musk and former Google CEO Eric Schmidt raised existential risks posed by AI, and Zuckerberg brought up the question of closed vs. "open source" AI models. Gates talked about feeding the hungry. IBM CEO Arvind Krishna expressed opposition to proposals favored by other companies that would require licenses.
In terms of a potential new agency for regulation, "that is one of the biggest questions we have to answer and that we will continue to discuss," Schumer said. Musk said afterward he thinks the creation of a regulatory agency is likely.
Outside the meeting, Google CEO Pichai declined to give specifics but generally endorsed the idea of Washington involvement.
"I think it's important that government plays a role, both on the innovation side and building the right safeguards, and I thought it was a productive discussion," he said.
Some senators were critical that the public was shut out of the meeting, arguing that the tech executives should testify in public.
Republican Sen. Josh Hawley of Missouri said he would not attend what he said was a "giant cocktail party for big tech." Hawley has introduced legislation with Democratic Sen. Richard Blumenthal of Connecticut to require tech companies to seek licenses for high-risk AI systems.
"I don't know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it to the public," Hawley said.
While civil rights and labor groups were also represented at the meeting, some experts worried that Schumer's event risked emphasizing the concerns of big firms over everyone else.
Sarah Myers West, managing director of the nonprofit AI Now Institute, estimated that the combined net worth of the room Wednesday was $550 billion and it was "hard to envision a room like that in any way meaningfully representing the interests of the broader public." She did not attend.
In the U.S., major tech companies have expressed support for AI regulations, though they don't necessarily agree on what that means. Similarly, members of Congress agree that legislation is needed, but there is little consensus on what to do.
Some concrete proposals have already been introduced, including legislation by Sen. Amy Klobuchar, D-Minn., that would require disclaimers for AI-generated election ads with deceptive imagery and sounds. Schumer said they discussed "the need to do something fairly immediate" before next year's presidential election.
Hawley and Blumenthal's broader approach would create a government oversight authority with the power to audit certain AI systems for harms before granting a license.
Some of those invited to Capitol Hill, such as Musk, have voiced dire concerns evoking popular science fiction about the possibility of humanity losing control to advanced AI systems if the right safeguards are not in place. But the only academic invited to the forum, Deborah Raji, a University of California, Berkeley researcher who has studied algorithmic bias, said she tried to emphasize real-world harms already occurring.
"There was a lot of care to make sure the room was a balanced conversation, or as balanced as it could be," Raji said.
What remains to be seen, she said, is which voices senators will listen to and what priorities they elevate as they work to pass new laws.
Some Republicans have been wary of following the path of the European Union, which signed off in June on the world's first set of comprehensive rules for artificial intelligence. The EU's AI Act will govern any product or service that uses an AI system and classify them according to four levels of risk, from minimal to unacceptable.
A group of European corporations has called on EU leaders to rethink the rules, arguing that it could make it harder for companies in the 27-nation bloc to compete with rivals overseas in the use of generative AI.
| AI Policy and Regulations |
The government says the UK will host a global artificial intelligence (AI) summit this autumn to evaluate the technology's "most significant risks".
There has been a slew of dire warnings about the potentially existential threat AI poses to humanity.
Regulators worldwide are scrambling to devise new rules to contain that risk.
Prime Minister Rishi Sunak said he wanted the UK to lead efforts to ensure the benefits of AI were "harnessed for the good of humanity."
"AI has an incredible potential to transform our lives for the better, but we need to make sure it is developed and used in a way that is safe and secure," he said.
It is not yet known who will attend the summit but the government said it would "bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor the most significant risks from AI".
Speaking to reporters in Washington DC, where Mr Sunak is discussing the issue with President Biden, the prime minister claimed the UK was the "natural place" to lead the conversation on AI.
Downing Street cited the prime minister's recent meetings with the bosses of leading AI firms as evidence of this. It also pointed to the 50,000 people employed in the sector, which it said was worth £3.7bn to the UK.
'Too ambitious'
However, some have questioned the UK's leadership credentials in the field.
Yasmin Afina, research fellow at Chatham House's Digital Society Initiative, said she did not think that the UK "could realistically be too ambitious".
She said there were "stark differences in governance and regulatory approaches" between the EU and US which the UK would struggle to reconcile, and a number of existing global initiatives, including the UN's Global Digital Compact, which had "stronger foundational bases already".
Ms Afina added that none of the world's most pioneering AI firms was based in the UK.
"Instead of trying to play a role that would be too ambitious for the UK and risks alienating it, the UK should perhaps focus on promoting responsible behaviour in the research, development and deployment of these technologies," she told the BBC.
Deep unease
Interest in AI has mushroomed since chatbot ChatGPT burst on to the scene last November, amazing people with its ability to answer complex questions in a human-sounding way.
It can do that because of the incredible computational power AI systems possess, which has caused deep unease.
Two of the three so-called godfathers of AI - Geoffrey Hinton and Prof Yoshua Bengio - have been among those to sound warnings about how the technology they have helped create has a huge potential for causing harm.
In May, AI industry leaders - including the heads of OpenAI and Google Deepmind - warned AI could lead to the extinction of humanity.
They gave examples, including AI potentially being used to develop a new generation of chemical weapons.
Those warnings have accelerated demands for effective regulation of AI, although many questions remain over what that would look like and how it would be enforced.
Regulatory race
The European Union is formulating an Artificial Intelligence Act, but has acknowledged that even in a best-case scenario it will take two-and-a-half years to come into effect.
EU tech chief Margrethe Vestager said last month that would be "way too late" and said it was working on a voluntary code for the sector with the US, which they hoped could be drawn up within weeks.
China has also taken a leading role in drawing up AI regulations, including proposals that companies must notify users whenever an AI algorithm is being used.
The UK government set out its thoughts in March in a White Paper, which was criticised for having "significant gaps."
Marc Warner, a member of the government's AI Council, has pointed to a tougher approach, however, telling the BBC some of the most advanced forms of AI may eventually have to be banned.
Matt O'Shaughnessy, visiting fellow at the Carnegie Endowment for International Peace, said there was little the UK could do about the fact that others were leading the charge on AI regulation - but said it could still have an important role.
"The EU and China are both large markets that have proposed consequential regulatory schemes for AI - without either of those factors, the UK will struggle to be as influential," he said.
But he added the UK was an "academic and commercial hub", with institutions that were "well-known for their work on responsible AI".
"Those all make it a serious player in the global discussion about AI," he told the BBC. | AI Policy and Regulations |
With help from Mohar Chatterjee and Derek Robertson
Earlier this year we reported on the concept of “data dignity,” or the belief that individuals should be acknowledged and even compensated for the data they contribute to AI models. Today two experts propose in POLITICO Magazine an “AI dividend,” their deceptively simple policy scheme for how the average American could cash out for what they contribute to systems like ChatGPT. Read the full op-ed below.
For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI.
Everyone is talking about these new AI technologies — like ChatGPT — and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge and consent, without licensing it, and are pocketing the proceeds.
You are owed profits for your data that powers today’s AI, and we have a way to make that happen. We call it the AI Dividend.
Our proposal is simple, and harkens back to the Alaskan plan. When Big Tech companies produce output from generative AI that was trained on public data, they would pay a tiny licensing fee, by the word or pixel or relevant unit of data. Those fees would go into the AI Dividend fund. Every few months, the Commerce Department would send out the entirety of the fund, split equally, to every resident nationwide. That’s it.
There’s no reason to complicate it further. Generative AI needs a wide variety of data, which means all of us are valuable — not just those of us who write professionally, or prolifically, or well. Figuring out who contributed to which words the AIs output would be both challenging and invasive, given that even the companies themselves don’t quite know how their models work. Paying the dividend to people in proportion to the words or images they create would just incentivize them to create endless drivel, or worse, use AI to create that drivel. The bottom line for Big Tech is that if their AI model was created using public data, they have to pay into the fund. If you’re an American, you get paid from the fund.
Under this plan, hobbyists and American small businesses would be exempt from fees. Only Big Tech companies — those with substantial revenue — would be required to pay into the fund. And they would pay at the point of generative AI output, such as from ChatGPT, Bing, Bard, or their embedded use in third-party services via Application Programming Interfaces.
Our proposal also includes a compulsory licensing plan. By agreeing to pay into this fund, AI companies will receive a license that allows them to use public data when training their AI. This won’t supersede normal copyright law, of course. If a model starts producing copyright material beyond fair use, that’s a separate issue.
Using today’s numbers, here’s what it would look like. The licensing fee could be small, starting at $0.001 per word generated by AI. A similar type of fee would be applied to other categories of generative AI outputs, such as images. That’s not a lot, but it adds up. Since most of Big Tech has started integrating generative AI into products, these fees would mean an annual dividend payment of a couple hundred dollars per person.
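As a rough illustration of how the proposal's numbers could fit together, here is a minimal back-of-envelope sketch in Python. Only the $0.001-per-word starting fee comes from the op-ed; the annual volume of AI-generated words and the population figure are assumptions chosen purely to show the shape of the calculation, not figures from the authors.

# Back-of-envelope estimate of the proposed AI Dividend (illustrative only).
# Only the $0.001-per-word starting fee is taken from the op-ed; the volume of
# AI-generated output and the population figure are assumed for this sketch.
FEE_PER_WORD = 0.001          # dollars per AI-generated word (from the op-ed)
WORDS_PER_YEAR = 70e12        # assumed AI-generated words billed per year
US_POPULATION = 333_000_000   # approximate number of U.S. residents

fund_total = FEE_PER_WORD * WORDS_PER_YEAR        # total paid into the fund
dividend_per_person = fund_total / US_POPULATION  # equal split nationwide

print(f"Fund total:            ${fund_total:,.0f}")
print(f"Dividend per resident: ${dividend_per_person:,.2f} per year")
# With these assumed inputs the payout is roughly $210 per person per year,
# in line with the op-ed's "couple hundred dollars" estimate.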
The idea of paying you for your data isn’t new, and some companies have tried to do it themselves for users who opted in. And the idea of the public being repaid for use of their resources goes back to well before Alaska’s oil fund. But generative AI is different: It uses data from all of us whether we like it or not, it’s ubiquitous, and it’s potentially immensely valuable. It would cost Big Tech companies a fortune to create a synthetic equivalent to our data from scratch, and synthetic data would almost certainly result in worse output. They can’t create good AI without us.
Our plan would apply to generative AI used in the U.S. It also only issues a dividend to Americans. Other countries can create their own versions, applying a similar fee to AI used within their borders. Just like an American company collects VAT for services sold in Europe, but not here, each country can independently manage their AI policy.
Don’t get us wrong; this isn’t an attempt to strangle this nascent technology. Generative AI has interesting, valuable and possibly transformative uses, and this policy is aligned with that future. Even with the fees of the AI Dividend, generative AI will be cheap and will only get cheaper as technology improves. There are also risks — both everyday and esoteric — posed by AI, and the government may need to develop policies to remedy any harms that arise.
Our plan can’t make sure there are no downsides to the development of AI, but it would ensure that all Americans will share in the upsides — particularly since this new technology isn’t possible without our contribution.
A class action lawsuit was filed against OpenAI Wednesday over the data the company scrapes from the internet to train its powerful artificial intelligence models. The lawsuit, which joins the ranks of cases like Getty Images vs. Stability AI, is part of a growing discussion over whether AI-generated content infringes on the intellectual property and rights of internet users.
AI companies argue that content created by generative AI falls under fair use because they transform the original work. And while the Supreme Court issued a landmark fair use decision this year, it has yet to weigh in on the generative AI issue specifically.
Tracey Cowan, a partner at the Clarkson Law Firm, also flagged an additional angle to consider: That the data scraped from the internet includes the personal information and photographs of minors. “We can’t let these sharp corporate practices continue to go unchallenged, if for no other reason than to protect our children from the mass exploitation that we’re already seeing across the internet,” Cowan said in a press release. The Washington Post reported that the law firm is actively looking for more plaintiffs to join the sixteen they already have.
The lawsuit asks for a temporary freeze on the commercial use of OpenAI’s GPT 3.5, GPT 4.0, Dall-E, and Vall-E models. It also asks for “data dividends” as financial compensation for those people whose data was used to develop and train these models.
OpenAI did not immediately respond to a request for comment. — Mohar Chatterjee
The landscape around AI “compliance” with still-nascent regulations is about as hazy as the skies in Washington today.
That means efforts to grade AI companies on how well they follow the rules — at least those that exist — are tricky to figure out. Stanford University’s Kevin Klyman, one of the authors of a report on compliance with the draft AI Act that we covered in DFD last week, spoke with POLITICO Digital Bridge’s Mark Scott today about his efforts to track how well systems from Meta, OpenAI, and others complied with the regulatory grinder.
“Because there’s so much up in the air, and because some of the requirements are under-specified, we had to fill in some of the gaps,” Klyman told Mark. One of the big sticking points for AI watchdogs and regulators is transparency, where Klyman says the major players are doing pretty well: “The way providers that are doing a good job at handling risks and mitigations is they have a section of their work related to the model where they say, ‘Here are all of the potential dangers from this model,’” he described.
Our colleague Mark, on the other hand, isn’t quite buying it yet. “Having a transparency section on a website about potential downsides and how a company will handle such risks doesn’t mean such safeguards will be enforced,” he wrote. “For that, you need greater disclosures on how these models operate — something, unfortunately, that almost all the firms did badly on.” — Derek Robertson
- AI and TV commercials: An aesthetic match made in heaven.
- Big newsrooms are teaming up to deal with AI disruption.
- Meta is promising behavior analysis “orders of magnitude” larger than GPT-4’s.
- AI-generated code promises both great power and great danger.
- Europe’s AI Act is still fighting to keep pace with technological developments.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); and Steve Heuser ([email protected]). Follow us @DigitalFuture on Twitter. | AI Policy and Regulations |
The Senate doesn’t need to start from scratch on AI legislation
In June, Senate Majority Leader Chuck Schumer (D-N.Y.) launched SAFE Innovation, his framework for upcoming legislation to create the rules of the road for governing artificial intelligence (AI). While there are many points to make about the substance of the framework — heavy on fears about competition with China, light on rights-based protections for Americans — one comment stood out: Schumer made the case that the Senate is “starting from scratch” on AI regulation.
While the senator may have political reasons for preferring a clean-slate approach, this could not be further from the truth. Fortunately for both Schumer’s team and the American public who deserve their government’s protection now, an evidence-based democratic vision for AI policy already exists.
Indeed, shortly before the launch of SAFE Innovation, a community of computer and social scientists who have been at the forefront of advancing research-based approaches to governing AI issued their own statement, urgently calling on policymakers to base forthcoming legislation on the tools we already have “to help build a safer technological future.” The statement points to the large body of research and policy recommendations that has “long-anticipated harmful impact of AI systems” and which includes a roadmap for how to “design, audit or resist AI systems to protect democracy.” Legislators should be drawing on this critical national resource of expertise and existing research — some of it publicly funded through the National Science Foundation — when designing the future of AI governance in American society.
And the frameworks for robust, rights-respecting legislation are also coming from inside the government itself. The White House Office of Science and Technology Policy’s (OSTP) Blueprint for an AI Bill of Rights puts forward five core protections that Americans should expect from AI systems, including safe and effective systems and protection from algorithmic discrimination. The AI Bill of Rights’ release was accompanied by a technical companion describing practical and technologically achievable steps that can and should be taken to protect the public. Similarly, the National Institute of Standards and Technology (NIST) launched the AI Risk Management Framework, a voluntary standard for assessing and managing risk in AI systems, with a strong call to draw on empirical research to understand not only the technical but also the societal impacts of AI systems. Like the AI Bill of Rights, the Risk Management Framework was developed with expert input from academia, civil society and industry actors. Both of these frameworks should be deeply informing congressional action on AI governance.
Scholars and policy researchers have already learned a great deal about the right approaches to AI governance, and these insights are incorporated into the AI Bill of Rights and AI Risk Management Framework, ready for policymakers to take advantage of these distilled and protective steps. Research on the safety and effectiveness of AI systems shows that the systems sometimes simply don’t work and that preemptive consumer protection, liability and independent audits can help. Slews of such independent investigations have shown that AI systems can be discriminatory based on race, gender, religion, sexual orientation, disability and other demographic categories, in sectors from education to finance to healthcare. A clear first step to preventing discrimination is requiring such assessments. Schumer has rightfully included explainability as a key pillar of his framework and here, existing work can also point towards best practices.
Senator Schumer can also learn from the steps that have already been taken in the United States and Europe to bring the force of law to AI governance. President Biden issued Executive Order 14091 which, among other actions, directed federal agencies to use their civil rights capacities to protect Americans against algorithmic discrimination. And federal agencies across the executive branch have engaged in rule-making and other actions focused on algorithmic safety and rights issues for several years. Additionally, European policymakers have worked for more than two years to craft the EU AI Act; if passed, as is expected in late 2023, the EU AI Act will govern the use of AI systems within the European Union, directly impacting American companies.
And American lawmakers themselves are certainly not starting from scratch. From narrow but important redlines such as prohibiting an autonomous launch of nuclear weapons, to more cross-cutting legislation such as accountability measures, lawmakers have already introduced numerous AI-related bills to Congress and at the state level. Lawmakers concerned about moving too fast can look to already proposed legislation like the American Data Privacy and Protection Act and the Algorithmic Accountability Act, which are solid and well-understood bills developed over several years that address important elements in a broader AI governance framework. Sector-specific laws or regulations can also allow Congress to build on existing congressional and agency strengths: For example, the Stop Spying Bosses Act would provide regulatory authority to the Department of Labor to oversee workplace surveillance and the rulemaking by the Department of Health and Human Services prohibits discrimination in clinical algorithms used in covered programs.
To be sure, governing AI poses novel challenges. But the senator’s plan to hold “AI Insight Forums” this fall for Congress to “lay down a new foundation for AI policy” provides the opportunity to show that a foundation already exists and that a robust field of experts acting in the public interest — outside of the tech industry — have been working for years to build it. We need to draw on the broad expertise in AI policymaking both inside and outside of government.
America already has a blueprint for strong AI laws and a great deal of the knowledge it needs to quickly build the guardrails around AI that Senator Schumer rightly identified as necessary.
Janet Haven is executive director of Data & Society, and a member of the National AI Advisory Committee, which advises the president and the National AI Initiative Office on AI policy matters. The above represents her individual perspective, not that of the NAIAC or any government official.
Sorelle Friedler is a senior policy fellow at Data & Society, and the Shibulal Family Associate Professor of Computer Science at Haverford College. Previously, she served as the assistant director for Data and Democracy in the White House Office of Science and Technology Policy under the Biden-Harris administration, where her work included the Blueprint for an AI Bill of Rights.
The UK, US, EU and China have all agreed that artificial intelligence poses a potentially catastrophic risk to humanity, in the first international declaration to deal with the fast-emerging technology.
Twenty-eight governments signed up to the so-called Bletchley declaration on the first day of the AI safety summit, hosted by the British government. The countries agreed to work together on AI safety research, even amid signs that the US and UK are competing to take the lead over developing new regulations.
Rishi Sunak welcomed the declaration, calling it “quite incredible”.
Michelle Donelan, the UK technology secretary, told reporters: “For the first time we now have countries agreeing that we need to look not just independently but collectively at the risks around frontier AI.”
Frontier AI refers to the most cutting-edge systems, which some experts believe could become more intelligent than people at a range of tasks. Speaking to the PA news agency on the sidelines of the summit, Elon Musk, the owner of X, formerly Twitter, Tesla and SpaceX, warned: “For the first time, we have a situation where there’s something that is going to be far smarter than the smartest human … It’s not clear to me we can actually control such a thing.”
The communique marks a diplomatic success for the UK and for Sunak in particular, who decided to host the summit this summer after becoming concerned with the way in which AI models were advancing rapidly without oversight.
Donelan opened the summit by telling her fellow participants that the development of AI “can’t be left to chance or neglect or to private actors alone”.
She was joined onstage by the US commerce secretary, Gina Raimondo, and the Chinese vice-minister of science and technology, Wu Zhaohui, in a rare show of global unity.
Matt Clifford, one of the British officials in charge of organising the summit, called the appearance of Raimondo and Wu together on stage “a remarkable moment”.
China signed the declaration, which included the sentence: “We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.”
Wu told fellow delegates: “We uphold the principles of mutual respect, equality and mutual benefits. Countries regardless of their size and scale have equal rights to develop and use AI.”
South Korea has agreed to host another such summit in six months’ time, while France will host one in a year.
So far, however, there is little international agreement over what a global set of AI regulations might look like or who should draw them up.
Some British officials had hoped other countries would agree to beef up the government’s AI taskforce so that it could be used to test new models from around the world before they are released to the public.
Instead, Raimondo used the summit to announce a separate American AI Safety Institute within the country’s National Institute of Standards and Technology, which she called “a neutral third party to develop best-in-class standards”, adding that the institute would develop its own rules for safety, security and testing.
Earlier this week, the Biden administration released an executive order requiring US AI companies such as OpenAI and Google to share their safety test results with the government before releasing AI models. Kamala Harris, the vice-president, then gave a speech on AI in London in which she talked about the importance of regulating existing AI models as well as more advanced ones in the future.
Clifford denied any suggestion of a split between the US and UK on which country should take the global lead on AI regulation.
“You’ll have heard Secretary Raimondo really praise us in a full-throated way and talk about the partnership that she wants to have between the UK and the US Safety Institute,” he said. “I really think that that shows the depth of the partnership.”
Sunak said the summit had proved “the appetite from all of those people for the UK to take a leadership role”.
The EU is in the process of passing an AI bill, which aims to develop a set of principles for regulation, as well as bringing in rules for specific technologies such as live facial recognition.
Donelan suggested the government would not include an AI bill in the king’s speech next week, saying: “We need to properly understand the problem before we apply the solutions.”
But she denied the UK was falling behind its international counterparts, adding: “We have called the world together – the first ever global summit on AI at the frontier – and we shouldn’t minimise or overlook that.” | AI Policy and Regulations |
President Joe Biden signed a wide-ranging executive order on artificial intelligence Monday, setting the stage for some industry regulations and funding for the U.S. government to further invest in the technology.
The order is broad, and its focuses range from civil rights and industry regulations to a government hiring spree.
In a media call previewing the order Sunday, a senior White House official, who asked to not be named as part of the terms of the call, said AI has so many facets that effective regulations have to cast a wide net.
“AI policy is like running into a decathlon, and there’s 10 different events here,” the official said.
“And we don’t have the luxury of just picking ‘we’re just going to do safety’ or ‘we’re just going to do equity’ or ‘we’re just going to do privacy.’ You have to do all of these things.”
The official also called for “significant bipartisan legislation” to further advance the country’s interests with AI. Senate Majority Leader Chuck Schumer, D-N.Y., held a private forum in September with industry leaders but has yet to introduce significant AI legislation.
Some of the order builds on a previous nonbinding agreement that seven of the top U.S. tech companies developing AI agreed to in July, like hiring outside experts to probe their systems for weaknesses and sharing their critical findings.
The order leverages the Defense Production Act to legally require those companies to share safety test results with the federal government.
It also tasks the Commerce Department with creating guidance about “watermarking” AI content to make it clear that deepfaked videos or ChatGPT-generated essays were not created by humans.
The order adds funding for new AI research and a federal AI hiring surge. The White House has launched a corresponding website to connect job seekers with AI government jobs: AI.gov.
Fei-Fei Li, a co-director of Stanford’s Institute for Human-Centered Artificial Intelligence, said in an interview that government funding is crucial for AI to be able to tackle major human problems.
“The public sector holds a unique opportunity in terms of data and interdisciplinary talent to cure cancer, cure rare diseases, to map out biodiversity at a global scale, to understand and predict wildfires, to find climate solutions, to supercharge our teachers,” Li said. “There’s so much the public sector can do, but all of this is right now starved because we are severely lacking in resources.”
Sarah Myers West, the managing director of the AI Now Institute, a nonprofit group focused on the technology’s effects on society, commended Biden for including social and ethical concerns in the order.
“It’s great to see the White House set the tone on the issues that matter most to the public: labor, civil rights, protecting privacy, promoting competition,” Myers West said by text message. “This underscores you can’t deal with the future risks of AI without adequately dealing with the present.”
“The key looking forward will be to ensure strong enforcement as companies attempt to set a self-regulatory tone: industry cannot be left to lead the conversation on how to adequately address the effects of AI on the broader public,” she said. | AI Policy and Regulations |
Billionaire Elon Musk called for an artificial intelligence “referee” on Wednesday as tech tycoons and lawmakers met for a closed-door summit in Washington, D.C. to discuss the best way to regulate the burgeoning technology.
At the summit, organized by Senate Majority Leader Chuck Schumer, Musk was joined by OpenAI CEO Sam Altman, Google CEO Sundar Pichai, former Microsoft boss Bill Gates, Meta CEO Mark Zuckerberg and more than 60 US senators.
“It’s important for us to have a referee,” Musk told reporters, on the sidelines of the summit, adding that regulations were needed “to ensure that companies take actions that are safe and in the general interest of the public.”
Musk referred to AI as a “double-edged sword” that could bring major benefits or have disastrous consequences for humanity — repeating a frequent warning that he and other tech bigwigs have given in recent months.
Meanwhile, Zuckerberg said Congress “should engage with AI to support innovation and safeguards.”
“This is an emerging technology, there are important equities to balance here, and the government is ultimately responsible for that,” Zuckerberg said, arguing that it is “better that the standard is set by American companies that can work with our government to shape these models on important issues.”
The meeting occurred as a growing number of AI critics, from lawmakers to those in Hollywood, call for federal regulation to stave off any disastrous consequences, including a rise in deepfake content ahead of the 2024 election.
Key issues under consideration include AI’s impact on the US economy, including its potential to cause sweeping job losses in Hollywood and other sectors.
In his opening remarks, Schumer described the meeting as the start of “an enormous and complex and vital undertaking: building a foundation for bipartisan AI policy that Congress can pass.”
“Congress must play a role, because without Congress we will neither maximize AI’s benefits, nor minimize its risks,” Schumer added.
Lawmakers from both sides of the aisle expressed support for some kind of AI-related legislation in the months and years ahead. However, the exact composition of that legislation — and a timeline for its passage — is still unclear.
“Are we ready to go out and write legislation? Absolutely not,” Republican Sen. Mike Rounds said. “We’re not there.”
The meeting’s closed-door format drew some harsh words from Republican Sen. Josh Hawley of Missouri, who questioned whether it would yield any actual progress on the AI issue.
“I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money,” Hawley said.
During the discussion, Musk, who launched his own artificial intelligence startup called xAI in July, expressed concern about the development of so-called “deeper AI” with human-like data processing capabilities.
Musk “raised concerns about data centers so powerful and big that they could be seen from space, with a level of intelligence that is currently hard to comprehend,” Bloomberg reported, citing a source familiar with the matter.
Meanwhile, the Tesla CEO and X owner reportedly downplayed concerns about risks associated with self-driving technology, which is under active development at his electric car company and its competitors.
“This is an important, urgent, and in some ways unprecedented moment,” added Altman, who spoke to reporters ahead of the meeting. The success of OpenAI’s ChatGPT kickstarted Congressional scrutiny over the technology.
Musk appeared in the same room as his rival Zuckerberg for the first time since plans for their highly anticipated “cage match” appeared to collapse last month. At the time, a frustrated Zuckerberg declared that Musk wasn’t “serious” about participating in a bout.
Earlier, Musk was mobbed by a crowd of reporters and other onlookers as he entered the summit.
In March, Musk was one of hundreds of AI experts who publicly called for a six-month pause in AI development – warning that the potential risks of unrestrained advancements ranged from the spread of misinformation to “loss of control of our civilization.”
Elsewhere, Altman has called on Congress to impose guardrails on the AI industry, though he has downplayed concerns about job losses.
In May, the OpenAI boss co-signed a short statement that placed the risks of AI on par with nuclear weapons and pandemics.
With Post wires | AI Policy and Regulations |
- Tech CEOs descended on Capitol Hill Wednesday to speak with senators about artificial intelligence as lawmakers consider how to craft guardrails for the powerful technology.
- Senate Majority Leader Chuck Schumer, D-N.Y., hosted the panel of tech executives, labor and civil rights leaders as part of the Senate's inaugural "AI Insight Forum."
- Tesla and SpaceX CEO Elon Musk, Google CEO Sundar Pichai, Meta CEO Mark Zuckerberg, Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman were among those in attendance.
Tech CEOs descended on Capitol Hill Wednesday to speak with senators about artificial intelligence as lawmakers consider how to craft guardrails for the powerful technology.
It was a meeting that "may go down in history as being very important for the future of civilization," billionaire tech executive Elon Musk told CNBC's Eamon Javers and other reporters as he left the meeting.
Senate Majority Leader Chuck Schumer, D-N.Y., hosted the panel of tech executives, labor and civil rights leaders as part of the Senate's inaugural "AI Insight Forum." Sens. Mike Rounds, R-S.D., Martin Heinrich, D-N.M., and Todd Young, R-Ind., helped organize the event and have worked with Schumer on other sessions educating lawmakers on AI.
Top tech executives in attendance Wednesday included:
- OpenAI CEO Sam Altman
- Former Microsoft CEO Bill Gates
- Nvidia CEO Jensen Huang
- Palantir CEO Alex Karp
- IBM CEO Arvind Krishna
- Tesla and SpaceX CEO Elon Musk
- Microsoft CEO Satya Nadella
- Alphabet and Google CEO Sundar Pichai
- Former Google CEO Eric Schmidt
- Meta CEO Mark Zuckerberg
The panel, attended by more than 60 senators, according to Schumer, took place behind closed doors. Schumer said the closed forum allowed for an open discussion among the attendees, without the normal time and format restrictions of a public hearing. But Schumer said some future forums would be open to public view.
The panel also featured several other stakeholders representing labor, civil rights and the creative industry. Among those were leaders like:
- Motion Picture Association Chairman and CEO Charles Rivkin
- AFL-CIO President Liz Shuler
- Writers Guild President Meredith Stiehm
- American Federation of Teachers President Randi Weingarten
- Leadership Conference on Civil and Human Rights President and CEO Maya Wiley
After the morning session, the AFL-CIO's Shuler told reporters that the meeting was a unique chance to bring together a wide range of voices.
In response to a question about getting to speak with Musk, Shuler said, "I think it was just an opportunity to be in each other's space, but we don't often cross paths and so to bring a worker's voice and perspective into the room with tech executives, with advocates, with lawmakers is a really unusual place to be."
"It was a very civilized discussion actually among some of the smartest people in the world," Musk told reporters on his way out. "Sen. Schumer did a great service to humanity here along with the support of the rest of the Senate. And I think something good will come of this."
Google's Pichai outlined four areas where Congress could play an important role in AI development, according to his prepared remarks. First by crafting policies that support innovation, including through research and development investment or immigration laws that incentivize talented workers to come to the U.S. Second, "by driving greater use of AI in government," third by applying AI to big problems like detecting cancer, and finally by "advancing a workforce transition agenda that benefits everyone."
Meta's Zuckerberg said he sees safety and access as the "two defining issues for AI," according to his prepared remarks. He said Meta is being "deliberate about how we roll out these products," by openly publishing research, partnering with academics and setting policies for how its AI models can be used.
He touted Meta's open-source AI work as a way to ensure broad access to the technology. Still, he said, "we're not zealots about this. We don't open source everything. We think closed models are good too, but we also think a more open approach creates more value in many cases."
Schumer said in his prepared remarks that the event marked the beginning of "an enormous and complex and vital undertaking: building a foundation for bipartisan AI policy that Congress can pass."
There's broad interest in Washington in creating guardrails for AI, but so far many lawmakers have said they want to learn more about the technology before figuring out the appropriate restrictions.
But Schumer told reporters after the morning session that legislation should come in a matter of months, not years.
"If you go too fast, you could ruin things," Schumer said. "The EU went too fast, and now they have to go back. So what we're saying is, on a timeline, it can't be days or weeks, but nor should it be years. It will be in the general category of months."
Schumer said he expects the actual legislation to come through the committees. This session provides the necessary foundation for them to do this work, he said. Successful legislation will need to be bipartisan, Schumer added, saying he'd spoken with House Speaker Kevin McCarthy, R-Calif., who was "encouraging."
Schumer said he'd asked everyone in the room Wednesday if they believe government needs to play a role in regulating AI, and everyone raised their hand.
The broad group that attended the morning session did not get into detail about whether a licensing regime or some other model would be most appropriate, Schumer said, adding that it would be discussed further in the afternoon session. Still, he said, they heard a variety of opinions on whether a "light touch" was the right approach to regulation and whether a new or existing agency should oversee AI.
Young said those in the room agreed that U.S. values should inform the development of AI, rather than those of the Chinese Communist Party.
While Schumer has led this effort for a broad legislative framework, he said his colleagues need not wait to craft bills for their ideas about AI regulation. But putting together sensible legislation that can also pass will take time.
Sen. Maria Cantwell, D-Wash., who leads the Commerce Committee, predicted lawmakers could get AI legislation "done in the next year." She referenced the Chips and Science Act, a bipartisan law that set aside funding for semiconductor manufacturing, as an example of being able to pass important technology legislation fairly quickly.
"This is the hardest thing that I think we have ever undertaken," Schumer told reporters. "But we can't be like ostriches and put our head in the sand. Because if we don't step forward, things will be a lot worse." | AI Policy and Regulations |
The UK’s Competition and Markets Authority set out specific principles to help guide AI regulations and companies that develop the technology.
The CMA, which acts as the primary antitrust regulator for the UK, focused its attention on foundation models: AI systems like OpenAI’s GPT-4, Meta’s Llama 2, and other large language models that form the basis for many generative AI use cases.
Companies making foundation models should follow seven principles, the CMA said:
- make sure developers and businesses that use these models are accountable for the output that consumers are given;
- ensure broad access to chips and processors and the training data needed to develop these AI systems;
- offer a diversity of business models, including both open and closed models;
- provide a choice for businesses to decide how to use the model;
- offer flexibility or interoperability to switch to other models or use multiple models at the same time;
- avoid anti-competitive actions like bundling or self-preferencing; and
- offer transparency into the risks and limitations of generative AI content.
The CMA developed the principles following an initial review before it launches a series of dialogues with consumer and civil society groups, foundation model developers like Google, Meta, OpenAI, Microsoft, Nvidia, and Anthropic, foundation model users, and academics.
The agency said providing principles for the development and deployment of foundation models is necessary to protect competition and prevent low-performing AI systems from proliferating.
“The impact of foundation models could allow a wider range of firms to compete successfully, perhaps challenging current incumbents,” the CMA said in its review. “Vibrant competition and innovation could benefit the economy as a whole through increased productivity and economic growth.”
It added that if competition is weak, “a handful of firms gain or entrench positions of market power and fail to offer the best products and services or charge high prices.”
While the CMA admitted AI regulation raises broader questions on copyright and data privacy and protection, it chose to focus on competition and consumer protection to help guide the current development of the technology.
Governments around the world have been looking into different ways of regulating generative AI. The European Union, in its proposed AI Act, also focused on foundation models and requiring companies to comply with transparency rules. China’s recent AI rules mandate AI companies to register with the government and promise not to offer anti-competitive algorithms.
Meanwhile, the US is still figuring out how to approach AI regulation, though some policymakers hope to have its rules out by the end of this year. | AI Policy and Regulations |
A delegation of top tech leaders including Sundar Pichai, Elon Musk, Mark Zuckerberg and Sam Altman convened in Washington on Wednesday for a closed-door meeting with US senators to discuss the rise of artificial intelligence and how it should be regulated.
The discussion, billed as an “AI safety forum”, is one of several meetings between Silicon Valley, researchers, labor leaders and government and is taking on fresh urgency with the US elections looming and the rapid pace of AI advancement already affecting peoples’ lives and work.
The Democratic senator Chuck Schumer, who called the meeting “historic”, said that attendees loosely endorsed the idea of regulations but that there is little consensus on what such rules would look like.
Schumer said he asked everyone in the room – including more than 60 senators, almost two dozen tech executives, advocates and skeptics – whether government should have a role in the oversight of artificial intelligence, and that “every single person raised their hands, even though they had diverse views”.
Among the ideas discussed was whether there should be an independent agency to oversee certain aspects of the rapidly-developing technology, how companies could be more transparent and how the US can stay ahead of China and other countries.
“The key point was really that it’s important for us to have a referee,” said Elon Musk, the CEO of Tesla and X, the social network formerly known as Twitter, during a break in the forum. “It was a very civilized discussion, actually, among some of the smartest people in the world.”
Congress should do what it can to maximize the benefits and minimize the negatives of AI, Schumer told reporters, “whether that’s enshrining bias, or the loss of jobs, or even the kind of doomsday scenarios that were mentioned in the room. And only government can be there to put in guardrails”.
Attendees also discussed the pressing need for steps to protect the 2024 US elections from disinformation becoming supercharged by AI, Schumer said.
“The issue of actually having deep fakes where people really believe that somebody, that a campaign was saying something when they were the total creation of AI” was a key concern, Schumer said, adding that “watermarking” – badging content as AI-generated – was discussed as a solution.
Several AI experts and other industry leaders also attended, including Bill Gates; the Motion Picture Association CEO Charles Rivkin; the former Google CEO Eric Schmidt; the Center for Humane Technology co-founder Tristan Harris; and Deborah Raji, a researcher at University of California, Berkeley.
Some labor and civil liberties groups were also represented among the 22 attendees including Elizabeth Shuler, the president of the labor union AFL-CIO; Randi Weingarten, the president of the American Federation of Teachers; Janet Murguía, the president of UnidosUS; and Maya Wiley, the president and CEO of the Leadership Conference on Civil & Human Rights.
Sparked by the release of ChatGPT less than a year ago, businesses have been clamoring to apply new generative AI tools that can compose human-like passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over its potential societal harms and prompted calls for more transparency in how the data behind the new products is collected and used.
In his opening remarks, which Meta shared with the Guardian, Mark Zuckerberg said the company is working with academics, policy makers and civil society to “minimize the risk” of the technology while ensuring they don’t undervalue the benefits. He specifically cited work on how to watermark AI content to avoid risks such as mass spread of disinformation.
Before the forum, representatives for the Alphabet Workers Union said that Schuler, the president of AFL-CIO, would raise worker issues including those of AI raters – human moderators who are tasked with training, testing and evaluating results from Google Search and the company’s AI chatbot – who say they have struggled with low wages and minimum benefits.
“There are many conversations still to come and, throughout the process, the interests of working people must be Congress’ North Star,” Schuler said in a statement. “Workers are not the victims of technological change – we’re the solution.”
While Schumer described the meeting as “diverse”, the sessions faced criticism for leaning heavily on the opinions of people who stand to benefit from the rapid advancements in generative AI technology. “Half of the people in the room represent industries that will profit off lax AI regulations,” said Caitlin Seeley George, a campaigns and managing director at Fight for the Future, a digital rights group.
“People who are actually impacted by AI must have a seat at this table, including the vulnerable groups already being harmed by discriminatory use of AI right now,” George said. “Tech companies have been running the AI game long enough and we know where that takes us – biased algorithms that discriminate against Black and brown folks, immigrants, people with disabilities and other marginalized groups in banking, the job market, surveillance and policing.”
Some senators were critical of the private meeting, arguing that tech executives should testify in public. The Republican senator Josh Hawley said he would not attend what he said was a “giant cocktail party for big tech”.
“I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it to the public,” Hawley said.
Agencies contributed reporting | AI Policy and Regulations |
Google CEO says AI will impact ‘every product of every company,’ calls for regs
Google CEO Sundar Pichai on Sunday called for AI regulations and guidelines to ensure the breakthrough technology is “aligned to human values.”
AI will soon impact “every product of every company” and disrupt jobs, Pichai said during an interview with CBS’s “60 Minutes.” He said that writers, accountants, architects, software engineers and other “knowledge workers” will feel the biggest impacts.
Pichai added that without guidelines, AI could be abused by bad actors. The technology could be used to quickly create deepfake videos to spread disinformation and could “cause a lot of harm” at a societal scale, he told CBS’s Scott Pelley.
“How do you develop AI systems that are aligned to human values, including morality?” Pichai asked. “This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on, and I think we have to be very thoughtful.”
Pichai said that society must collectively decide how AI should be integrated, adding that it’s “not for a company to decide.”
The Google executive said he’s concerned that society might not be ready for the lightning-fast pace of AI advancements. But he’s optimistic about the large number of people “who have started worrying about the implications” of AI at a relatively early point in its development.
Google is currently developing a swath of AI products and is publicly testing its chat AI, Bard. Pichai said that the company is holding back more advanced AIs to ensure that society can get used to the technology.
OpenAI, which operates the popular ChatGPT platform, and other AI companies are also keeping more powerful technologies away from public consumption.
Google has released its own recommendations on how to regulate AI.
Last month, a group of tech leaders including Apple co-founder Steve Wozniak and Tesla CEO Elon Musk penned a letter calling on companies to pause AI development, citing “risks to society.”
OpenAI is set to open its first office in the European Union (EU) and make several strategic hires, as the company prepares for regulatory headwinds.
The ChatGPT-maker says that it plans to open its third office, after San Francisco and London which it announced in June, in Ireland, which has emerged as almost a second home for countless U.S. tech companies seeking to foster closer ties with European lawmakers and customers — while paying a more favorable rate of tax, too.
According to its careers page, OpenAI is currently hiring for 9 positions in the Irish capital, Dublin, and the roles that it's looking to fill are somewhat indicative of where its head is currently at.
Besides a handful of payroll and customer-focused roles, the company is hiring for an associate general counsel for the EMEA region; a policy and partnerships lead for global affairs; a privacy program manager; a software engineer focused on privacy; and a media relations lead.
In short, OpenAI is gearing up to show Brussels that it’s serious about privacy, and it plans to shout this from the rooftops.
The Europe factor
For context, OpenAI has faced more than a little scrutiny off the back of ChatGPT, a generative AI chatbot that has taken the world by storm over its ability to generate extensive content from simple text-based prompts. In Europe, Italy back in March ordered ChatGPT to be blocked over data protection concerns — specifically, how it might be processing people’s data unlawfully, as well as the lack of sufficient guardrails for minors. Spain swiftly followed suit, though OpenAI relaunched ChatGPT in Italy after introducing some privacy disclosures and controls.
More recently, OpenAI was accused of myriad data protection breaches by a security and privacy researcher who filed a complaint with the Polish data protection authority, arguing that OpenAI was infringing on the bloc’s General Data Protection Regulation (GDPR) spanning areas such as (lack of) transparency; data access rights; lawful basis (for processing data); fairness; and privacy by design.
On the horizon, however, is the EU AI Act, which is setting out to govern AI applications based on their perceived risks. Once passed, they will be the first significant AI regulations to emerge anywhere in the world, and could serve as a blueprint for other countries to follow.
Earlier this year, OpenAI CEO Sam Altman embarked on a European schmoozing tour, where he met with regulators and apparently warned against too much AI regulation. This was despite recently telling U.S. regulators that AI regulation was crucial and that an international regulatory body for AI was needed.
And this, effectively, is why OpenAI is having to set up shop in the EU, though its current hiring schedule seems somewhat lightweight when juxtaposed against the might of the EU. It also pales compared to the millions that the likes of Meta, Alphabet, and Microsoft have spent lobbying against regulation in Europe.
At any rate, it’s clear that Europe will be a major focal point for all companies working in AI, and as one of the biggest frontrunners in the burgeoning generative AI space, OpenAI can be expected to expand its presence and lobbying efforts from here on in. | AI Policy and Regulations |
As the artificial intelligence frenzy builds, a sudden consensus has formed. We should regulate it!

While there’s a very real question whether this is like closing the barn door after the robotic horses have fled, not only government types but also people who build AI systems are suggesting that some new laws might be helpful in stopping the technology from going bad. The idea is to keep the algorithms in the loyal-partner-to-humanity lane, with no access to the I-am-your-overlord lane.

Though since the dawn of ChatGPT many in the technology world have suggested that legal guardrails might be a good idea, the most emphatic plea came from AI’s most influential avatar of the moment, OpenAI CEO Sam Altman. “I think if this technology goes wrong, it can go quite wrong,” he said in a much anticipated appearance before a US Senate Judiciary subcommittee earlier this month. “We want to work with the government to prevent that from happening.”

That is certainly welcome news to the government, which has been pressing the idea for a while. Only days before his testimony, Altman was among a group of tech leaders summoned to the White House to hear Vice President Kamala Harris warn of AI’s dangers and urge the industry to help find solutions. Choosing and implementing those solutions won’t be easy. It’s a giant challenge to strike the right balance between industry innovation and protecting rights and citizens. Clamping limits on such a nascent technology, even one whose baby steps are shaking the earth, courts the danger of hobbling great advances before they’re developed. Plus, even if the US, Europe, and India embrace those limits, will China respect them?

The White House has been unusually active in trying to outline what AI regulation might look like. In October 2022—just a month before the seismic release of ChatGPT—the administration issued a paper called the Blueprint for an AI Bill of Rights. It was the result of a year of preparation, public comments, and all the wisdom that technocrats could muster. In case readers mistake the word blueprint for mandate, the paper is explicit on its limits: “The Blueprint for an AI Bill of Rights is non-binding,” it reads, “and does not constitute US government policy.” This AI bill of rights is less controversial or binding than the one in the US Constitution, with all that thorny stuff about guns, free speech, and due process. Instead it’s kind of a fantasy wish list designed to blunt one edge of the double-sided sword of progress. So easy to do when you don’t provide the details! Since the Blueprint nicely summarizes the goals of possible legislation, let me present the key points here.

You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.

You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.

You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.

You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

I agree with every single one of those points, which can potentially guide us on the actual boundaries we might consider to mitigate the dark side of AI. Things like sharing what goes into training large language models like those behind ChatGPT, and allowing opt-outs for those who don’t want their content to be part of what LLMs present to users. Rules against built-in bias. Antitrust laws that prevent a few giant companies from creating an artificial intelligence cabal that homogenizes (and monetizes) pretty much all the information we receive. And protection of your personal information as used by those know-it-all AI products.

But reading that list also highlights the difficulty of turning uplifting suggestions into actual binding law. When you look closely at the points from the White House blueprint, it’s clear that they don’t just apply to AI, but pretty much everything in tech. Each one seems to embody a user right that has been violated since forever. Big tech wasn’t waiting around for generative AI to develop inequitable algorithms, opaque systems, abusive data practices, and a lack of opt-outs. That’s table stakes, buddy, and the fact that these problems are being brought up in a discussion of a new technology only highlights the failure to protect citizens against the ill effects of our current technology.

During that Senate hearing where Altman spoke, senator after senator sang the same refrain: We blew it when it came to regulating social media, so let’s not mess up with AI. But there’s no statute of limitations on making laws to curb previous abuses. The last time I looked, billions of people, including just about everyone in the US who has the wherewithal to poke a smartphone display, are still on social media, bullied, privacy compromised, and exposed to horrors. Nothing prevents Congress from getting tougher on those companies and, above all, passing privacy legislation.

The fact that Congress hasn’t done this casts severe doubt on the prospects for an AI bill. No wonder that certain regulators, notably FTC chair Lina Khan, isn’t waiting around for new laws. She’s claiming that current law provides her agency plenty of jurisdiction to take on the issues of bias, anticompetitive behavior, and invasion of privacy that new AI products present.

Meanwhile, the difficulty of actually coming up with new laws—and the enormity of the work that remains to be done—was highlighted this week when the White House issued an update on that AI Bill of Rights. It explained that the Biden administration is breaking a big-time sweat on coming up with a national AI strategy. But apparently the “national priorities” in that strategy are still not nailed down.

Now the White House wants tech companies and other AI stakeholders—along with the general public—to submit answers to 29 questions about the benefits and risks of AI. Just as the Senate subcommittee asked Altman and his fellow panelists to suggest a path forward, the administration is asking corporations and the public for ideas. In its request for information, the White House promises to “consider each comment, whether it contains a personal narrative, experiences with AI systems, or technical legal, research, policy, or scientific materials, or other content.” (I breathed a sigh of relief to see that comments from large language models are not being solicited, though I’m willing to bet that GPT-4 will be a big contributor despite this omission.)

Anyway, humans, you have until 5:00 pm ET on July 7, 2023, to submit your documents, lab reports, and personal narratives to shape an AI policy that’s still just a blueprint, even as millions play with Bard, Sydney, and ChatGPT, and employers make plans for slimmer workforces. Maybe then we’ll get down to embodying those lovely principles into law. Hey, it worked with social media! Uhhhhhh …

Time Travel

Sam Altman’s appearance in Congress was a lot different than the visits Mark Zuckerberg has endured. The CEO of Facebook, as it used to be called, did not get solicited for advice on how to solve problems. Instead, he got roasted, as I described in my account of a 2019 hearing in the House of Representatives.

You are Mark Zuckerberg. It is 1:45 pm in Room 2128 of the Rayburn Office Building. You have been testifying for almost four hours, enduring the questions of the House Financial Services Committee, five minutes per representative, some of them very angry at you. You have to pee.

Chair Maxine Waters (D-California) listens to your request for a break and consults with a staffer. There is a floor vote coming up and she wants one more member to ask you questions. So before your break, she instructs, you will take questions from Representative Katie Porter (D-California). Porter begins by asking you about a contention that Facebook’s lawyers made in court earlier this year that Facebook users have no expectation of privacy. You might have heard this—it got press coverage at the time—but you say you can’t comment without the whole context. You’re not a lawyer!

She turns to the plight of the thousands of content moderators Facebook employed as contractors who look at disturbing images all day for low wages. You explain that they get more than minimum wage to police your service, at least $15 an hour and, in high-cost regions, $20 an hour. Porter isn’t impressed. She asks if you would vow to spend one hour a day for the next year doing that work. This is something you clearly don’t want to commit to. You squirm—is it nature’s call or the questioning?—and sputter that isn’t the best use of your time. She triumphantly takes that as a no. Waters grants the recess and you run a photographer gauntlet for some relief.

Ask Me One Thing

Philip asks, “I was wondering if AI could write my biography using data collected on me, since that process kicked into overdrive post 9/11?”

Thanks for asking, Philip, though I suspect you are more interested in deploring the government collection of personal data than dreaming of an algorithmic Boswell. But you bring up an interesting question: Can an AI model craft a biography based simply on raw data about your life? I seriously doubt that even an extensive dossier kept on you would provide the fodder needed for even the driest account of your life. All the hotels you checked into, the bank loans and mortgage payments, those dumb tweets you’ve made over the years … will they really allow GPT-4 to get a sense of who you are? Probably the AI model’s most interesting material will be its hallucinations.

If you’re a public figure, though, and have strewn a lot of your personal writings and been featured in numerous interviews, maybe some generative AI model could whip up something of value. If you were the one directing the project, you’d have the advantage of looking it over and prompting the chatbot to “be nicer”—or of cutting to the chase and saying, “Make this a hagiography.” But don’t wait around for the Pulitzer Prize. Even if you choose to let the AI biographer be as incisive and critical as it wants to be, we’re far, far away from a biological LLM like Robert Caro.

You can submit questions to [email protected]. Write ASK LEVY in the subject line.

End Times Chronicle

Tina Turner is dead.

Last but Not Least

Here’s a challenge for regulations: getting rid of AI-powered “digital colonialism.”

Not illegal but should be: AI-generated podcasts. They’re boring!

Who will bid $1 trillion for all the world’s seagrass? It’s a bargain!

New York isn’t the only city that’s sinking. But it’s sinking! Welcome to the Great Wet Way.

Don't miss future subscriber-only editions of this column. Subscribe to WIRED (50% off for Plaintext readers) today. | AI Policy and Regulations |
Multiple generative AI apps have been removed from Apple’s China App Store, two weeks ahead of the country’s new generative AI regulations that are set to take effect on August 15.
The move came after Chinese developers received notices from Apple informing them of their apps’ removal. In its letter to OpenCat, a native ChatGPT client, Apple cited “content that is illegal in China” as the reason for pulling the app.
In July, China announced a set of measures to regulate generative AI services, including API providers. The rules require AI apps operating in China to obtain an administrative license, which is reflected in Apple’s removal notice.
“As you may know, the government has been tightening regulations associated with deep synthesis technologies (DST) and generative AI services, including ChatGPT. DST must fulfill permitting requirements to operate in China, including securing a license from the Ministry of Industry and Information Technology (MIIT),” Apple said to OpenCat. “Based on our review, your app is associated with ChatGPT, which does not have requisite permits to operate in China.”
The popular tech blogger @foxshuo tweeted screenshots purportedly showing more than 100 AI apps that had been removed from the China App Store. TechCrunch confirmed that several of those apps indeed couldn’t be found in the China App Store.
TechCrunch has reached out to Apple for comment.
China has been leading the way in regulating the flourishing generative AI space, especially as apps leveraging large language models like ChatGPT have mushroomed in the country. The unpredictable, black-box nature of these LLMs is no doubt a concern for China’s cyberspace censors, whose job is to ensure no illegal or politically sensitive information slips through the cracks.
China has already imposed licensing requirements on other areas of the internet, such as video games, and it remains to be seen what criteria will be needed to obtain a generative AI license. In any case, the new regulatory environment will likely deter a lot of developers, especially bootstrapping independent ones, from entering the market, potentially leaving it to deep-pocketed internet giants with the resources to navigate compliance layers.
This is a developing story… | AI Policy and Regulations |
The U.S. federal government is still swimming in circles trying to form some sort of plan to regulate the exploding AI industry. So when the usual suspects of big tech again returned to Capitol Hill on Wednesday for a closed-door meeting on potential AI regulation, they came prepared with the same talking points they’ve been presenting for the last several years, though with an added air of haste to the proceedings.
At the artificial intelligence forum hosted by Senate Majority Leader Chuck Schumer, the big boys all laid their cards on the table, hoping to get the kind of AI regulations they want. Elon Musk, who recently established the late-to-the-party company xAI, again pushed his stance that AI threatens humanity in a conversation with Schumer after the fact, according to the Wall Street Journal. It’s the same position he’s held for years, though it won’t stop the multi-billionaire from using data harvested from Twitter and Tesla to train his upcoming AI models.
According to CBS News, Musk told reporters that AI companies need a “referee,” referring to the potential that big government would act as the middle manager for big tech’s latest foray into transformative technology. Of course, there’s a wide variety of opinions there. Good old Bill Gates, the original co-founder of Microsoft, went full tech evangelist reportedly saying that generative AI systems will—somehow—end world hunger.
The summit was headlined by the big tech execs of today and yesteryear, including the likes of Nvidia co-founder Jensen Huang and former Google CEO Eric Schmidt. There were some tech critics there as well as union leaders, such as Writers Guild president Meredith Stiehm. The guild is currently on strike partially due to film studios’ desire to use AI to underpay writers. They were sitting across the table from Charles Rivkin, the CEO of the Motion Picture Association. Rivkin and his group aren’t necessarily involved in negotiations, though it paints a picture of just how widespread the concerns over AI have become.
The few outside tech researchers had the task of taking many of Musk’s and Gates’s comments back down to earth. Mozilla Foundation fellow Deb Raji tweeted saying they spent most of their time at the meeting fact-checking claims about what AI could actually do.
At the summit, Meta CEO Mark Zuckerberg got into it with Tristan Harris, who leads the nonprofit Center for Humane Technology, over the company’s use of supposedly open-source AI. Harris reportedly claimed his center was able to manipulate Meta’s Llama 2 AI language model to give instructions for creating dangerous biological compounds. Zuckerberg reportedly tried to wave away the critique, saying that such information is already available on the internet.
According to a transcript of Zuckerberg’s comments released by Meta, Zuck also tried touting his company’s push for open source, as it “democratizes” these AI tools. He said the two big issues at hand are “safety” and responsible use of AI and “access” to AI to create “opportunity in the future.” And despite Zuckerberg continually touting the open nature of his AI models, they really aren’t all that open. The nonprofit advocacy group Open Source Initiative has drilled down on Meta’s actual licenses, noting they only authorize “some commercial uses.”
Schumer also said that everybody involved in the summit, whether they were tech moguls or advocacy groups, all agreed the government needed some sort of role in regulating the advent of AI. The Senate majority leader claimed the tech leaders understood that, even if they install guardrails on their AI models, “they’ll have competitors who won’t.”
What’s already clear is that tech companies want AI regulation. Doing so gives them clear instructions for how to proceed, but it also means they have walls to hide behind when something inevitably goes wrong. Regulations may also make it that much harder for new startups to compete against the tech giants. Microsoft President Brad Smith endorsed federal licensing and a new agency for policing AI platforms. According to Politico, Smith said a licensing regime would ensure “a certain baseline of safety, of capability,” and companies would essentially need to “prove” they’re able to operate their AI under the law.
While Microsoft was trying to pull up the ladder before other companies could reach its current heights, Google and other tech giants would prefer a softer touch. It’s what they’re currently getting under the auspices of the White House and its voluntary commitments for developing ethical AI. | AI Policy and Regulations |
The Department of Homeland Security (DHS) on Thursday unveiled new guardrails for its use of artificial intelligence in carrying out its mission to secure the border.
The new policies were developed by DHS Artificial Intelligence Task Force (AITF), which DHS Secretary Alejandro Mayorkas created in April.
In announcing these new policies, DHS noted that AI has been critical to its missions, including combating fentanyl trafficking, strengthening supply chain security, countering sexual exploitation, and protecting critical infrastructure.
Mayorkas writes in the AI policy memo, expected to be released later Thursday, that the US must ensure AI is "rigorously tested to be effective [and] safeguards privacy, civil rights, and civil liberties while avoiding inappropriate biases."
DHS has already used AI technology extensively on the southern border, most notably with the use of more than 200 surveillance cameras to detect and flag where human crossings occur.
DHS says it has appointed Chief Information Officer (CIO) Eric Hysen as the Department’s first Chief AI Officer. Hysen, who was set to appear before Congress Thursday, will promote AI innovation and safety within the Department, DHS said.
"I think the potential for unintended harm from the use of AI exists in any federal agency and in any use of AI," Hysten said. "We interact with more people on a daily basis than any other federal agency. And when we interact with people, it can be during some of the most critical times of their lives."
Academics have long flagged the risk that AI can contribute to racial profiling, because such systems still make errors when identifying relationships in complex data.
As part of the new policy, Americans are able to decline the use of facial recognition technology in a variety of situations, including during air travel check-ins.
DHS’ new guidelines will also require that facial recognition matches discovered using AI technology be manually reviewed by human analysts to ensure their accuracy, according to a new directive that the agency plans to release alongside the AI memo.
During a congressional hearing, Hysen planned to highlight a recent case at California's San Ysidro Port of Entry where agents with Customs and Border Protection had used advanced machine learning (ML) models to flag an otherwise unremarkable car driving north from Mexico for having a "potentially suspicious pattern."
Agents later discovered 75 kilograms of drugs in the car's gas tank and rear quarter panels.
Reuters contributed to this report. | AI Policy and Regulations |
China will introduce rules governing the use of deep synthesis technology in January 2023. Deepfakes, where artificial intelligence is used to manipulate images and videos, are a concern for Beijing as it ramps up control over online content.

In January, China will introduce first-of-its-kind regulation on "deepfakes," ramping up control over internet content.

Deepfakes are synthetically generated or altered images or videos that are made using a form of artificial intelligence. The tech can be used to alter an existing video, for example by putting the face of a politician over an existing video or even creating fake speech. The result is fabricated media that appears to be real but isn't.

Beijing announced its rules governing "deep synthesis technologies" earlier this year, and finalized them in December. They will come into effect on Jan. 10.

Here are some of the key provisions:

Users must give consent if their image is to be used in any deep synthesis technology.

Deep synthesis services cannot use the technology to disseminate fake news.

Deepfake services need to authenticate the real identity of users.

Synthetic content must have a notification of some kind to inform users that the image or video has been altered with technology.

Content that goes against existing laws is prohibited, as is content that endangers national security and interests, damages the national image or disrupts the economy.

The powerful Cyberspace Administration of China is the regulator behind these rules.

Since the end of 2020, China has sought to rein in the power of the country's technology giants and introduced sweeping regulation in areas ranging from antitrust to data protection. But it has also sought to regulate emerging technologies and gone further than any other country in its tech rules.

Earlier this year, China introduced a rule governing how technology firms can use recommendation algorithms, in another first-of-its-kind law.

Analysts say the law tackles two goals — tighter online censorship and getting ahead of regulation around new technologies.

"Chinese authorities are clearly eager to crackdown on the ability of anti-regime elements to use deepfakes of senior leaders, including Xi Jinping, to spread anti-regime statement," Paul Triolo, the technology policy lead at consulting firm Albright Stonebridge, told CNBC.

"But the rules also illustrate that Chinese authorities are attempting to tackle tough online content issues in ways few other countries are doing, seeking to get ahead of the curve as new technologies such as AI-generated content start to proliferate online."

Triolo added that the AI regulations that Beijing has introduced in recent years are "designed to keep content regulation and censorship efforts one step ahead of emerging technologies, ensuring that Beijing can continue to anticipate the emergence of technologies that could be used to circumvent the overall control system."

Deep synthesis technology isn't all bad. It can have some positive applications across areas such as education and health care. But China is trying to tackle its negative role in producing fake information.

Kendra Schaefer, Beijing-based partner at Trivium China consultancy, pointed CNBC toward her note published in February when the draft rules were announced, in which she discussed the implications of the landmark regulation.

"The interesting bit is that China is taking aim at one of the critical threats to our society in the modern age: the erosion of trust in what we see and hear, and the increasing difficulty of separating truth from lies," the note said.

Through the introduction of regulation, China's various regulatory bodies have been building experience in enforcing tech rules. There are some parts of the deepfake regulation that are unclear, such as how to prove you have consent from another to use their image. But on the whole, Trivium said in its note, China's existing regulatory system will help it enforce the rules.

"China is able to institute these rules because it already has systems in place to control the transmission of content in online spaces, and regulatory bodies in place that enforce these rules," the note said. | AI Policy and Regulations |
ANN/THE JAPAN NEWS – The government’s proposals for generative artificial intelligence (AI) regulations will call for technological initiatives to help combat misinformation, The Yomiuri Shimbun has learned.
The government will present the proposals on Friday at a meeting of the AI Strategic Council, chaired by University of Tokyo Professor Yutaka Matsuo.
The Group of Seven communique issued at the Hiroshima summit in May called for the establishment of a framework to discuss generative AI among G7 nations, dubbing it the ‘Hiroshima AI Process’.
The government has compiled the proposals as part of such efforts.
According to multiple government sources, an outline of the proposals states that effective measures to counter misinformation include a proposed web standard called Originator Profile, which is aimed at providing web users with third party verified information about content creators, site operators and advertisers.
The technology allows web users to confirm the source of information on social media and other websites and evaluate the trustworthiness of the content.
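The reporting describes Originator Profile only at a high level. As a rough, hypothetical sketch of the general idea behind such provenance schemes, the example below shows a third-party verifier signing a small profile about a content creator and a reader tool checking that signature; the field names, keys, and verifier are invented for illustration and do not reflect the actual Originator Profile specification.

```python
# Generic illustration of third-party-verified provenance metadata using an
# Ed25519 signature (requires the "cryptography" package). Everything named
# here is hypothetical; the real standard defines its own formats and trust
# infrastructure.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A verification body signs a small profile describing the content's originator.
verifier_key = Ed25519PrivateKey.generate()
profile = {"site": "example-news.jp", "operator": "Example News K.K.", "verified": True}
payload = json.dumps(profile, sort_keys=True).encode()
signature = verifier_key.sign(payload)

# A browser or reader tool that trusts the verifier's public key can then check
# that the profile really was issued by that verifier and has not been altered.
public_key = verifier_key.public_key()
try:
    public_key.verify(signature, payload)
    print("Profile verified:", profile)
except InvalidSignature:
    print("Untrusted or altered profile")
```

The point of such a check is not to judge whether a claim is true, only to tell readers who stands behind the page they are looking at.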
The outline of the proposals highlights the possibility that generative AI could “easily create sophisticated false information” and expresses concern that instances of misinformation being spread could increase in the future.
According to the sources, the outline states that digital initiatives designed to combat misinformation “are not 100 per cent effective, but are effective,” citing Originator Profile as an example.
As well as suggesting innovations such as Originator Profile, the proposals will call for promotion of cutting-edge projects, with the development of technology to identify AI-generated content in mind.
The proposals will also urge greater cooperation on this issue with the Organization for Economic Cooperation and Development and the Global Partnership on Artificial Intelligence – an international initiative formed by governments and organisations.
It is important to strike a balance between protecting intellectual property rights and promoting the effective use of intellectual property, according to the outline.
To prevent infringements of intellectual property rights, the proposals will call for “technological solutions” to control the kinds of data used to train AI models.
According to the outline, businesses will be urged to formulate “governance policies” regarding their use of artificial intelligence. AI firms and other companies will be expected to devise measures to prevent the inappropriate use of the technology by customers and employees and check for misuse.
However, only large companies and public entities such as hospitals will be expected to do so, according to the sources, so small and midsize firms are unlikely to experience significant burdens as a result.
The government wants to lead G7 efforts on the issue, aiming to establish the direction by the end of this year.
As Japan chairs the bloc this year, Prime Minister Fumio Kishida intends to present the government’s position on AI regulations at summit-level online talks that will be held in autumn at the earliest. | AI Policy and Regulations |
ChatGPT – a catalyst for what kind of future?
Statement of the Digital Humanism Initiative, March 2023
The release of ChatGPT has stirred worldwide enthusiasm as well as anxieties. It has triggered popular awareness of the far-reaching potential impact of the latest generative AI, which ranges from numerous beneficial uses to worrisome concerns for our open democratic societies and the lives of citizens.
This development offers an unexpected, but welcome, occasion to explain to the wider public and policy-makers what AI tools like ChatGPT are and how they work; to highlight beneficial uses, but also to raise concerns about its considerable risks, especially for liberal democracies; to underline the urgency for public discussion, society-wide response and the timely development of appropriate regulation; and to argue that academic research needs a fair chance to set research directions independently of the large corporations and their huge investments that are now being poured into further commercial exploitation.
ChatGPT: some basic facts
Released by the OpenAI company near the end of 2022 as a conversational version of its generative AI models, ChatGPT (Generative Pre-trained Transformer) is a Large Language Model (LLM) based on the innovative combination of unsupervised training and reinforcement learning from human feedback. Its large training data sets come from sources such as books, (news) articles, websites, and posts or comments from social networks, supporting its core function as a dialogue system simulating human conversation. It achieves this by ‘estimating’, via probabilities, which word(s) is likely to follow the previous word(s). This is done in accordance with specific writing styles or tones, which, in turn, creates the illusion of conversing with a human. While apparently good at mastering this aspect of language, these systems are rather limited at the functional level, e.g., lacking reasoning and abstraction capabilities and offering only very limited situation modelling. We humans speak to communicate with other humans with the intention of achieving some goal. We tend to attribute this intention to all agents that produce language. We are thus easily seduced into projecting human intelligence as we understand it onto machines capable of some form of language imitation.
It is important to underline that such models can build convincing combinations of words and sentences, but they have no human understanding of our questions or of their own answers. Neither do they have an understanding of what ‘facts’ are, and they are prone to produce factual errors and to ‘hallucinate’. OpenAI admits the limitations of its models: ‘ChatGPT sometimes writes answers that sound reasonable, but in fact are incorrect or nonsense’. This has been the reason for characterizations such as ‘stochastic parrot’ or, less kindly, ‘confident bullshitter’ and others which, of course, are occasionally applicable to humans as well.
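To make the next-word mechanism described above concrete, here is a minimal toy sketch in Python. The candidate words and the scores assigned to them are invented for illustration; a real model like ChatGPT computes such scores over a vocabulary of tens of thousands of tokens at every step.

```python
import math
import random

# Toy version of one generation step: the model has assigned a score ("logit")
# to each candidate next word; softmax turns those scores into probabilities
# and one word is sampled. The words and scores here are made up.
logits = {"Paris": 4.2, "London": 2.1, "the": 0.5, "banana": -1.3}

def softmax(scores):
    peak = max(scores.values())
    exps = {word: math.exp(s - peak) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(logits)
next_word = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs)       # "Paris" receives most of the probability mass
print(next_word)   # usually "Paris", occasionally something less likely
```

Repeating this step, with each sampled word fed back in as new context, is what produces fluent-sounding text without any underlying understanding of its content.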
OpenAI and other companies have now entered a fierce competitive race and continue to gather feedback data from a rapidly increasing number of users. This means that all of us are the subjects of a huge ongoing field experiment that these companies are conducting – without our consent.
Potential ‘good’ uses
A common attribute of potential uses that are deemed to be beneficial is that AI tools like ChatGPT are used as ‘side-kicks’ or assistants to humans, complementing humans’ ability to cooperate and participate in society. Such an approach, augmenting instead of replacing humans, was already argued for in the early days of AI. Under this general proviso, there is a long list of potentially beneficial applications, ranging from assistance in the preparation of legal briefs, translation, programming, chip design, and materials science to drug discovery and, of course, education and training. This can stimulate innovation, lead to new business opportunities, and create productivity gains in many sectors of the economy.
The history of technology demonstrates that many uses cannot be foreseen, as the social, economic or cultural contexts in which new technologies are adopted and appropriated by users vary considerably. Users, therefore, are not merely passive consumers, but have shown in the past the ability to twist or invent new uses better suited to their needs.
Potential ‘bad’ uses and risks
Currently, the list of concerns about potential abuses of the technology is long, including
– ‘Industrial level’ production of ever more convincing scam emails of all kinds and their wide diffusion by a range of actors, including some governments;
– automatic production of fake news and large numbers of websites for targeted disinformation campaigns by individuals, businesses, and states;
– automatization of communication with scam victims e.g. instructing them how to pay the ransom asked;
– fast, efficient production of customised malware code including new breeds of malware that can “listen-in” on the victim’s attempts to counter it;
– deep fakes by systems trained on images.
Of concern is also the risk posed by the use of these systems by young people during their formative years in school. This is the period where key human cognitive capabilities are developed. We are worried that excessive use of these tools as (themselves-not-well-understood) shortcuts to learning and practising could severely impair these capabilities.
We are also concerned about the lack of transparency and accountability coupled with the loss of shared reference to what is true/false or good/bad. There are reasons to worry that models may intentionally be misused or, even inadvertently, cause dramatic accidents and social disruption that leave completely open who can be held responsible for the harm and damage. It is also very likely that a new ‘arms race’ between ‘robbers’ and ‘cops’ will be set into motion, with cybersecurity experts constantly having to upgrade preventive measures. This comes at a high economic cost but might also lead to further curtailment of civil liberties.
Finally, we are concerned about the enormous concentration of power, resources, and prioritisation of future AI R&D directions in the hands of Big Tech, if the current unconstrained development continues. There are sufficient historic examples to show that the concentration of economic power rapidly leads to a concentration of political power and vice versa.
ChatGPT, Big Tech and liberal democracies
During the last decade numerous studies have shown the fragile state of liberal democracies around the world, concluding that they are ‘back-sliding’ or even in ‘precipitous decline’. There is also a geographical retreat as by now over half of the world lives under authoritarian regimes. Economic inequalities, the effects of unrestrained globalization and constitutional fault lines are among the leading causes for the decline. These are closely intertwined with the role played by Big Tech and their platforms.
The concentration of economic and political power in the hands of a small elite heading a small number of big companies is a major concern related to the outsized influence they exert on democratic processes, institutions and the erosion of the public sphere. Their political power, besides lobbying, stems from the increased capabilities made available via their platforms to nudge, herd, manipulate and polarise public opinion. These capabilities have been and are being used by internal and external perpetrators who seek to undermine democratic processes.
It is in such a context, evolving before our eyes, that the threats to liberal democracy may get exacerbated by the AI race among these tech giants, kicked-off by the release and phenomenal publicity of ChatGPT. Huge investments have poured into a rapidly evolving digital ecosystem whose direction, scale and further investment is solely determined by a few companies.
The cost of training a very large AI system like ChatGPT, together with the associated requirements for computing power and data sets, represents a real danger, as the power of these tools is concentrated in the privileged hands of a few companies and a few governments. Nobody can build such tools in a garage, and academic institutions are less and less able to keep up with these companies and the generously funded start-ups that are then acquired.
If the vicious cycle of the platform economy with its ‘winner takes all’ phenomenon is allowed to continue, any remnant of the perhaps idealistic vision of a pluralistic digital ecosystem developing AI tools that empower citizens, complement and thus augment their capabilities to participate meaningfully in an open democratic society – the digital humanism vision, to put it shortly – will be brutally swept away.
In summary: The development of AI tools like ChatGPT, which raise the question of what it means to be human, cannot be left in the hands of a few powerful companies. Such AI must be a public social good and democratically governed.
What needs to be done
The centralised control of this experiment and the related decisions on AI research directions represent a threat to the sustainability of liberal democracy which is clear, imminent and vividly highlighted by the glamour and publicity currently surrounding ChatGPT.
The fact that this threat is raising flags of concern among political decision makers (note the recent turmoil in the European Parliament debating the AI Act triggered by ChatGPT), academic networks, initiatives like Digital Humanism, and other like-minded ones, gives hope and motivation to take action during this fortuitous but probably limited window of time.
The need for regulation, and our concern that unregulated AI will, on the whole, be bad AI, have not gone unnoticed. In the European Union, the EU AI Act is under intense discussion with the aim of being approved in the coming months. In the USA, the Federal Trade Commission and the Department of Justice have filed several antitrust suits over monopolistic behaviour, the latest against Google on January 25, 2023. Related to this, in the transatlantic Trade and Technology Council, the USA and EU pursue a Joint Roadmap for Trustworthy AI and Risk Management. The implementation of AI policy must be continuously monitored and updated in a dynamic way.
What needs to be done, in addition to pressing for good regulation and its implementation, is to keep the general public and policymakers informed. They must be made aware of what is at stake regarding the future of democratic institutions and processes and the risk that citizens become pawns in a closed competitive race about profit and market shares. The public sphere for open deliberations and participation is at risk of being taken over and flooded by content that is deliberately designed for misinformation, utter nonsense or undermining the sense of democratic, collective belonging.
We, the Digital Humanism Initiative, academics and researchers in the computer science domain and in the social sciences and humanities, working on current developments of AI and its societal and cultural impact from the perspective of Digital Humanism, feel responsible to inform and explain to the wider public and to policymakers the opportunities and risks that come with ChatGPT (and similar AI tools). Future directions of AI research and development should be driven by human-centered concerns and human needs, in a future which is not dominated by the profit-oriented goals of large companies.
We commit to apply the digital humanism approach in our AI research and development, remain publicly accountable, and stay open to constructive debate in order to improve our approach as generative AI technology, its uses, and our understanding continue to evolve.
“ChatGPT – ein Katalysator für welche Zukunft?” (“ChatGPT – a catalyst for what kind of future?”): statement of the Digital Humanism Initiative; the German version is available as a PDF. | AI Policy and Regulations |
LONDON (AP) — The breathtaking development of artificial intelligence has dazzled users by composing music, creating images and writing essays, while also raising fears about its implications. Even European Union officials working on groundbreaking rules to govern the emerging technology were caught off guard by AI’s rapid rise.
The 27-nation bloc proposed the Western world’s first AI rules two years ago, focusing on reining in risky but narrowly focused applications. General purpose AI systems like chatbots were barely mentioned. Lawmakers working on the AI Act considered whether to include them but weren’t sure how, or even if it was necessary.
“Then ChatGPT kind of boom, exploded,” said Dragos Tudorache, a Romanian member of the European Parliament co-leading the measure. “If there was still some that doubted as to whether we need something at all, I think the doubt was quickly vanished.”
The release of ChatGPT last year captured the world’s attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online materials. With concerns emerging, European lawmakers moved swiftly in recent weeks to add language on general AI systems as they put the finishing touches on the legislation.
The EU’s AI Act could become the de facto global standard for artificial intelligence, with companies and organizations potentially deciding that the sheer size of the bloc’s single market would make it easier to comply than develop different products for different regions.
“Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term ‘AI’ can cover,” said Sarah Chander, senior policy adviser at digital rights group EDRi.
Authorities worldwide are scrambling to figure out how to control the rapidly evolving technology to ensure that it improves people’s lives without threatening their rights or safety. Regulators are concerned about new ethical and societal risks posed by ChatGPT and other general purpose AI systems, which could transform daily life, from jobs and education to copyright and privacy.
The White House recently brought in the heads of tech companies working on AI including Microsoft, Google and ChatGPT creator OpenAI to discuss the risks, while the Federal Trade Commission has warned that it wouldn’t hesitate to crack down.
China has issued draft regulations mandating security assessments for any products using generative AI systems like ChatGPT. Britain’s competition watchdog has opened a review of the AI market, while Italy briefly banned ChatGPT over a privacy breach.
The EU’s sweeping regulations — covering any provider of AI services or products — are expected to be approved by a European Parliament committee Thursday, then head into negotiations between the 27 member countries, Parliament and the EU’s executive Commission.
European rules influencing the rest of the world — the so-called Brussels effect — previously played out after the EU tightened data privacy and mandated common phone-charging cables, though such efforts have been criticized for stifling innovation.
Attitudes could be different this time. Tech leaders including Elon Musk and Apple co-founder Steve Wozniak have called for a six-month pause to consider the risks.
Geoffrey Hinton, a computer scientist known as the “Godfather of AI,” and fellow AI pioneer Yoshua Bengio voiced their concerns last week about unchecked AI development.
Tudorache said such warnings show the EU’s move to start drawing up AI rules in 2021 was “the right call.”
Google, which responded to ChatGPT with its own Bard chatbot and is rolling out AI tools, declined to comment. The company has told the EU that “AI is too important not to regulate.”
Microsoft, a backer of OpenAI, did not respond to a request for comment. It has welcomed the EU effort as an important step “toward making trustworthy AI the norm in Europe and around the world.”
Mira Murati, chief technology officer at OpenAI, said in an interview last month that she believed governments should be involved in regulating AI technology.
But asked if some of OpenAI’s tools should be classified as posing a higher risk, in the context of proposed European rules, she said it’s “very nuanced.”
“It kind of depends where you apply the technology,” she said, citing as an example a “very high-risk medical use case or legal use case” versus an accounting or advertising application.
OpenAI CEO Sam Altman plans stops in Brussels and other European cities this month in a world tour to talk about the technology with users and developers.
Recently added provisions to the EU’s AI Act would require “foundation” AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.
Foundation models, also known as large language models, are a subcategory of general purpose AI that includes systems like ChatGPT. Their algorithms are trained on vast pools of online information, like blog posts, digital books, scientific articles and pop songs.
“You have to make a significant effort to document the copyrighted material that you use in the training of the algorithm,” paving the way for artists, writers and other content creators to seek redress, Tudorache said.
Officials drawing up AI regulations have to balance risks that the technology poses with the transformative benefits that it promises.
Big tech companies developing AI systems and European national ministries looking to deploy them “are seeking to limit the reach of regulators,” while civil society groups are pushing for more accountability, said EDRi’s Chander.
“We want more information as to how these systems are developed — the levels of environmental and economic resources put into them — but also how and where these systems are used so we can effectively challenge them,” she said.
Under the EU’s risk-based approach, AI uses that threaten people’s safety or rights face strict controls.
Remote facial recognition is expected to be banned. So are government “social scoring” systems that judge people based on their behavior. Indiscriminate “scraping” of photos from the internet used for biometric matching and facial recognition is also a no-no.
Predictive policing and emotion recognition technology, aside from therapeutic or medical uses, are also out.
Violations could result in fines of up to 6% of a company’s global annual revenue.
Even after getting final approval, expected by the end of the year or early 2024 at the latest, the AI Act won’t take immediate effect. There will be a grace period for companies and organizations to figure out how to adopt the new rules.
It’s possible that industry will push for more time by arguing that the AI Act’s final version goes farther than the original proposal, said Frederico Oliveira Da Silva, senior legal officer at European consumer group BEUC.
They could argue that “instead of one and a half to two years, we need two to three,” he said.
He noted that ChatGPT only launched six months ago, and it has already thrown up a host of problems and benefits in that time.
If the AI Act doesn’t fully take effect for years, “what will happen in these four years?” Da Silva said. “That’s really our concern, and that’s why we’re asking authorities to be on top of it, just to really focus on this technology.”
___
AP Technology Writer Matt O’Brien in Providence, Rhode Island, contributed. | AI Policy and Regulations |
New Bill Seeks to Establish a Commission for AI Regulations
A bipartisan group of lawmakers in the US has introduced the National AI Commission Act, which aims to create a blue-ribbon commission to explore the regulation of artificial intelligence (AI). The move comes amid growing concerns regarding the potential dangers of this emerging technology.
New Bill Seeks for a 20-Member Commission to Regulate AI
Representatives Ted Lieu (D-Calif), Ken Buck (R-Colo.), and Anna Eshoo (D-Calif.) have introduced a bill to establish a commission on AI regulation. The legislation is expected to be introduced in the Senate by Senator Brian Schatz (D-Hawaii).
The proposal would establish a 20-member commission to study AI regulation, including how responsibility for regulation is divided between agencies, the capacity of such agencies to regulate, and ensuring that enforcement actions are aligned. Members of the commission will come from civil society, government, industry, and labor and will not be dominated by one sector.
“Our bill forges a path toward responsible AI regulation that promotes technological progress while keeping Americans safe,” the bill’s lead sponsor Representative Lieu said in a statement. He added:
“Transparency is critical when legislating on something as complicated as AI, and this bipartisan, blue ribbon commission will provide policymakers and the American public with the basis and reasoning for the recommendations and what information was relied upon.”
The commission would also recommend plans for overseeing powerful AI systems and how they are regulated, including an evidence-based approach that builds upon previous federal and international regulatory efforts. Further, the commission would be tasked with establishing a risk-based, binding regulatory approach grounded in those earlier national and international AI efforts.
The bipartisan commission would be required to submit three reports to Congress and the president. These include an initial report six months after the law is enacted, a final report one year after the passage of the final regulatory framework, and an additional report one year later on new findings that may have emerged or revisions to AI recommendations.
“As Co-Chair of the bipartisan Congressional Artificial Intelligence Caucus, I understand how complex the issue of artificial intelligence is,” Representative Eshoo said. “The National AI Commission Act is an important first step to bring together stakeholders and experts to better understand how we can regulate AI.”
AI’s Exponential Rise Forces Lawmakers to Rush Into Regulating the Sector
Since the release of OpenAI’s ChatGPT in November last year, AI chatbots have taken the internet by storm. These tools come with vast potential and extensive functionality, including the ability to have intelligent-sounding conversations, write music, and code, among other things.
The skyrocketing popularity of AI tools has also led to increasing calls for regulations. Earlier this month, the European Union and the US announced collaborating to develop a voluntary code of conduct for AI. Officials will seek feedback from industry players and invite parties to sign up to curate a proposal for the industry to commit to voluntarily.
The EU has been working on its Artificial Intelligence Act, a sprawling document in the works, for around two years. The legislation aims to classify and regulate AI applications based on their risk.
There have also been some attempts to regulate AI in the US. In April, the US government put out a formal public request for comment regarding AI chatbots to help formulate advice for US policymakers about approaching these emerging technologies.
Furthermore, the Biden administration has already recommended five principles companies should uphold when developing AI technologies, through a voluntary “bill of rights.”
| AI Policy and Regulations |
EU industry chief Thierry Breton has said new proposed artificial intelligence rules will aim to tackle concerns about the risks around the ChatGPT chatbot and AI technology, in the first comments on the app by a senior Brussels official.
Just two months after its launch, ChatGPT — which can generate articles, essays, jokes and even poetry in response to prompts — has been rated the fastest-growing consumer app in history.
Some experts have raised fears that systems used by such apps could be misused for plagiarism, fraud and spreading misinformation, even as champions of artificial intelligence hail it as a technological leap.
Breton said the risks posed by ChatGPT — the brainchild of OpenAI, a private company backed by Microsoft — and AI systems underscored the urgent need for rules which he proposed last year in a bid to set the global standard for the technology. The rules are currently under discussion in Brussels.
"As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks. This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data," he told Reuters in written comments.
Microsoft declined to comment on Breton's statement. OpenAI — whose app uses a technology called generative AI — did not immediately respond to a request for comment.
OpenAI has said on its website it aims to produce artificial intelligence that "benefits all of humanity" as it attempts to build safe and beneficial AI.
Under the EU draft rules, ChatGPT is considered a general purpose AI system which can be used for multiple purposes including high-risk ones such as the selection of candidates for jobs and credit scoring.
Breton wants OpenAI to cooperate closely with downstream developers of high-risk AI systems to enable their compliance with the proposed AI Act.
"Just the fact that generative AI has been newly included in the definition shows the speed at which technology develops and that regulators are struggling to keep up with this pace," a partner at a US law firm, said.
'HIGH RISK' WORRIES
Companies are worried about getting their technology classified under the "high risk" AI category which would lead to tougher compliance requirements and higher costs, according to executives of several companies involved in developing artificial intelligence.
A survey by industry body appliedAI showed that 51 percent of the respondents expect a slowdown of their AI development activities as a result of the AI Act.
Effective AI regulations should centre on the highest risk applications, Microsoft President Brad Smith wrote in a blog post on Wednesday.
"There are days when I'm optimistic and moments when I'm pessimistic about how humanity will put AI to use," he said.
Breton said the European Commission is working closely with the EU Council and European Parliament to further clarify the rules in the AI Act for general purpose AI systems.
"People would need to be informed that they are dealing with a chatbot and not with a human being. Transparency is also important with regard to the risk of bias and false information," he said.
Generative AI models need to be trained on huge amounts of text or images to create proper responses, which has led to allegations of copyright violations.
Breton said forthcoming discussions with lawmakers about AI rules would cover these aspects.
Concerns about plagiarism by students have prompted some US public schools and French university Sciences Po to ban the use of ChatGPT.
| AI Policy and Regulations |