article_text
stringlengths
294
32.8k
topic
stringlengths
3
42
On 9 and 10 November 2022, Creative Commons hosted a pair of webinars on artificial intelligence (AI). We assembled two panels of experts at the intersection of AI, art, data, and intellectual property law to look at issues related to AI inputs — works used in training and supplying AI — and another focused on how open works and better sharing intersect with AI outputs — works generated by AI that are, could be, or should be participating in the open commons. In both webinars, we were looking to explore how the proliferation of AI connects to better sharing: sharing that is inclusive, just and equitable — where everyone has wide opportunity to access content, to contribute their own creativity, and to receive recognition and rewards for their contributions? And how the proliferation of AI connects to a better internet: a public interest vision for an internet that benefits us all? AI Inputs Our first panel, AI Inputs and the Public Commons, looked at AI training data. The panel for this discussion included Abeba Birhane, the Senior Fellow in Trustworthy AI at the Mozilla Foundation; Alek Tarkowski, the Co-Founder and Director of Strategy for the Open Future Foundation; Anna Bethke, the Principal Ethical AI Data Scientist for Salesforce; and Florence Chee, Associate Professor in the School of Communication and Director of Center for Digital Ethics and Policy at Loyola University in Chicago. Stephen Wolfson, Associate Director for Research and Copyright Services, School of Law, University of Georgia, moderated the panel. AI Inputs focused on the potential harms that can come from AI systems when they are trained on problematic data. AI models require massive amounts of training data to function, and in the past, developers had to hand curate AI datasets. Today, however, large scale datasets are widely available thanks to widely available internet content, and these datasets have enabled the existence of incredibly powerful AI models like never before. But along with the availability of these massive datasets, concerns have arisen over the content they contain and how they are being used. Abeba Birhane, who audits large datasets as part of her research, has found illegal, racist, and/or unethical content in datasets that are used to train common AI models. Using this data in turn embeds biases into AI models, which tends to harm marginalized groups disproportionately. Even if the content contained within these datasets is not illegal, it may have been obtained without affirmative consent from the creators or subjects of those data. People are also rarely able to withdraw their data from these datasets. Indeed, because the datasets contain millions or even billions of data points, it may be practically impossible to remove pieces of content from them. At the same time, obtaining individualized consent from everyone who has data in one of these datasets would be incredibly difficult, if not impossible. Moreover, requiring consent would likely greatly constrain development of AI because it would be so hard to get consent to use every element in the large datasets that have increased innovation in AI. Unfortunately, there is no clear solution to the problems associated with AI inputs. Our panel agreed that some legal regulation over AI training data can be useful to improve the quality and ethics of AI inputs. But regulation alone is unlikely to solve the many problems that arise. We also need guidelines for researchers who are using data, including openly licensed data, for AI training purposes. For instance, guidelines may discourage the use of openly licensed content as AI inputs where such a use could lead to problematic outcomes, even if the use does not violate the license, such as with facial recognition technology. Public discussions like this are essential to developing a consensus among stakeholders about how to use AI inputs ethically, to raise awareness of these issues, and try to improve AI models going forward. Links shared by panelists and participants - Recommendation on the Ethics of Artificial Intelligence - The New Legal Landscape for Text Mining and Machine Learning by Matthew Sag - OSI’s Deep Dive is an essential discussion on the future of AI and open source - Podcast Archive – Deep Dive: AI AI Outputs We continued our conversation about AI the following day with our second panel, AI Outputs and the Public Commons. For this panel, we brought together a group of experts that included artists, AI researchers, and intellectual property and communications scholars to discuss generative AI. This panel included Andreas Guadamuz, a reader in intellectual property law at the University of Sussex; Daniel Ambrosi, an artist who uses AI technology to help him produce original, creative works; Mark Riedl, a professor at the Georgia Institute of Technology School of Interactive computing; and Meera Nair, a copyright specialist at the Northern Alberta Institute of Technology. Kat Walsh, General Counsel for Creative Commons, moderated the panel. While our second panel touched on many topics, the issue that ran through most of the conversation on AI outputs was whether works produced by artificial intelligence are somehow different from works produced by humans. Right away, the panel questioned whether this distinction between human-created works and AI-generated content makes sense. Generative AI tools exist along a spectrum from those that require the least amount of human interaction required to to those that require the most, and, at least with modern AI systems, humans are involved at every step of AI content creation. Humans both develop the AI models as well as use the systems to produce content. Since truly autonomously generated works do not exist at present — that is, art produced entirely by AI without any human intervention at all — our panelists suggested that perhaps it is better to think about AI systems as a type of art tool that artists can use, rather than something entirely new. From the paintbrush, to the camera, to generative AI, humans have used technology to produce art throughout history, and these technologies always raise questions about the nature of art and creativity. Still, there are some important differences between AI and human creations. For example, AI systems can create new works at a scale and speed that humans cannot match. AI is not bound by human limitations when creating content. AI doesn’t need to sleep, eat, or do the other things that slow down human artistic creation. AI can produce all the time, without distraction or pause. Because of the possibility for the production of vast amounts of AI-generated content, our panelists discussed whether AI-generated content belongs in the public domain or whether it should receive copyright protection. Andres Gaudamuz raised an interesting point that is front and center for us at Creative Commons — should AI-generated works be in the public domain by default or is copyright protection, in fact, a better option. He encouraged the community to consider whether it is desirable to put AI-generated content in the public domain, if that could result in harm to human artists or chill human creativity. If copyright exists to encourage the creation of new works, could a public domain that is filled with AI-generated content discourage human creation by making too much art available for free? Would an abundance of AI-generated content put human artists out of work? At the same time, the panel also recognized that the public domain is necessary for the creation of art. Artists, human and AI alike, do not create in a vacuum. Instead, they build upon what has come before them to produce new works. AI systems create by mixing and matching parts they learn from their training data; similarly, humans experience art and use what has come before them to produce their own works. CC has addressed AI outputs and creativity a few times in the past. In general, we believe that AI-generated content should not qualify for copyright protection without direct and significant human input. We have been skeptical that AI creations should be considered “creative” in the same way that human works are. And, importantly, we believe that human creativity is fundamental to copyright protection. As such, copyright is incompatible with AI outputs generated by AI alone. Ultimately, like our panel on AI inputs, this panel on AI outputs could not, and was not designed to solve the many issues our panelists discussed. With AI becoming an increasingly integral part of our lives, conversations like these are essential to figuring out how we can safeguard against harms produced by AI, promote creativity, and ultimately use AI to benefit us all. Creative Commons plans to be in the middle of the conversation about the intersection of AI policy and intellectual property rights. In the upcoming months, we will continue these discussions through an on-going series of conversations with experts as we try to better understand how we should make sense of AI policy and IP rights. Links shared by panelists and participants - The Runaway Species: How Human Creativity Remakes the World: Eagleman, David, Brandt, Anthony - Dreamscapes – Daniel Ambrosi - Neural Style Transfer: Using Deep Learning to Generate Art - Artificially intelligent painters invent new styles of art | New Scientist - An interview with David Holz, CEO of AI image-generator Midjourney: it’s ‘an engine for the imagination’ – The Verge - Fine Art and the Unseen Hand. Reconsidering the role of technology in… | by Daniel Ambrosi | Medium - @FairDuty Tweet: CIPO appears to have ignored the requirement of originality in terms of what qualifies for (c) protection — an exercise of skill and judgement that is more than trivial – so said by our Supreme Court in 2004 (CCH). Fascinating thread of discussion about AI output and copyright. - List of tools for creating prompts for text-to-image AI generators - Ed Sheeran Awarded Over $1.1 Million in Legal Fees in ‘Shape of You’ Copyright Case - Selling Wine Without Bottles: The Economy of Mind on the Global Net | Electronic Frontier Foundation - Caspar David Friedrich - You Can’t Copyright Style — THE [LEGAL] ARTIST - Part 1: Is it copyright infringement or not? – Arts Law Centre of Australia - This artist is dominating AI-generated art. And he’s not happy about it. | MIT Technology Review - The Hard Drive With 68 Billion Melodies
AI Policy and Regulations
Multiple generative AI apps have been removed from Apple's China App Store, two weeks ahead of the country's new generative AI regulations that are set to take effect on August 15. From a report: The move came after Chinese developers received notices from Apple informing them of their apps' removal. In its letter to OpenCat, a native ChatGPT client, Apple cited "content that is illegal in China" as the reason for pulling the app. In July, China announced a set of measures to regulate generative AI services, including API providers. The rules require AI apps operating in China to obtain an administrative license, which is reflected in Apple's removal notice. "As you may know, the government has been tightening regulations associated with deep synthesis technologies (DST) and generative AI services, including ChatGPT. DST must fulfill permitting requirements to operate in China, including securing a license from the Ministry of Industry and Information Technology (MIIT)," Apple said to OpenCat. "Based on our review, your app is associated with ChatGPT, which does not have requisite permits to operate in China." "As you may know, the government has been tightening regulations associated with deep synthesis technologies (DST) and generative AI services, including ChatGPT. DST must fulfill permitting requirements to operate in China, including securing a license from the Ministry of Industry and Information Technology (MIIT)," Apple said to OpenCat. "Based on our review, your app is associated with ChatGPT, which does not have requisite permits to operate in China."
AI Policy and Regulations
As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios." The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." Reacting, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?" "I would say yes," Schwarz said, likening regulating AI before "a little bit of harm" is caused to passing driver's license laws before people died in car accidents. "The first time we started requiring driver's licenses, it was after many dozens of people died in car accidents, right?" Schwarz said. "And that was the right thing," because "if you would've required driver's licenses when there were the first two cars on the road," then "we would have completely screwed up that regulation." Seemingly, in Schwarz's view, the cost of regulations—perhaps the loss of innovation—should not outweigh the benefits. "There has to be at least a little bit of harm, so that we see what is the real problem," Schwarz explained. "Is there a real problem? Did anybody suffer at least a thousand dollars' worth of damage because of that? Should we jump in to regulate something on a planet of 8 billion people when there is not even a thousand dollars of damage? Of course not." Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable." To ensure that preventable harms were avoided, the Biden administration provided guidance to stop automated systems from meaningfully impacting "the public’s rights, opportunities, or access to critical needs." Just today, European lawmakers agreed to draft tougher AI rules in what could become the world's first comprehensive AI legislation, Reuters reported. These rules would classify AI tools by risk levels, so that countries can protect civil rights without harming important AI innovation and progress. Schwarz seemed to discount the urgency of lawmakers' rush to prevent harms to civil rights, though, suggesting instead that preventing monetary harms should be the goal of regulations and that there is seemingly no need for that yet. "You don't put regulation in place to prevent a thousand dollars' worth of harm, where the same regulation prevents a million dollars' worth of benefit to people around the world," Schwarz said. Of course, there have already been lawsuits seeking damages from makers of AI tools, such as a class-action copyright lawsuit against image generators Stability AI and Midjourney and other lawsuits causing a legal earthquake in AI. OpenAI's ChatGPT data leak caused Italy to temporarily ban the speech generator, worried about Italian users' data privacy, and an Australian mayor threatened a defamation lawsuit after ChatGPT falsely claimed he had gone to prison. Schwarz does not appear to be totally against AI regulations but says as an economist, he likes efficiency and would want laws to balance costs and gains from AI. About six minutes into the panel, Schwarz warned that "AI will be used by bad actors" and "will cause real damage," saying that companies should be "very careful and very vigilant" when it comes to developing AI technologies. He also said that "if we can impose a regulation that causes more good than harm, of course, we should impose it." Microsoft is an investor in OpenAI, and during the panel, Schwarz was asked how Microsoft's and OpenAI's visions overlap. "I think both companies are really committed to making sure that AI is safe, that AI is used for good, and not used for bad," Schwarz said. "We do have to worry a lot about safety of this technology, just like with any other technology." Microsoft and OpenAI did not immediately respond to Ars' request to comment.
AI Policy and Regulations
Artificial intelligence is far from new, yet it continues to enthrall the world. It entered its current golden age in the 1970s, but is still making headlines. In April, for example, a Japanese city decided to delegate some administrative government work to ChatGPT, an AI-powered chatbot released a mere six months previously. But AI’s history also includes systems that have discriminated against certain job candidates and convinced people to take their own lives. All over the world, in every industry imaginable, people will continue to use this technology to make improvements at work. At the same time, its ability to do serious harm has been documented widely. So why hasn’t any country enacted laws aimed at regulating AI yet? To help us to understand the complex tango involved in increasing AI’s benefits to society while reducing its potential to cause harm, Welcome To The Jungle spoke with Ian Andrew Barber, an international human rights lawyer based in London. Barber, who is a senior legal officer at Global Partners Digital, a digital rights non-governmental organization, offers fresh insight into the challenges lawmakers face when it comes to AI, legal initiatives that are in the works and the logic behind them. *This interview took place on April 14th 2023, prior to Italy’s lift of the ChatGPT ban and Sam Altman’s appearance in congress. How does AI factor into your work as a human rights lawyer? AI is a growing issue everywhere. It seems like everyone is talking about AI, trying to wrap their heads around how it works and what developments we will see in the future. But lawyers are increasingly paying attention to AI in the context of existing and proposed regulations, which have required us to take a more coordinated and proactive approach. It’s clear that AI offers a number of benefits to society. It can optimize anything from agriculture to urban living, or facilitate greater access to information and education. But at the same time, we’re also seeing risks emerging from these technologies — to human rights and democratic values. These include the right to a fair trial, the right to privacy, guarantees of non-discrimination, etc. In recent years, we’ve seen a number of non-binding AI guidelines or frameworks proposed by the technical community and international organizations such as Unesco and the OECD. On top of that, we now have countries coming up with their own national AI strategies, which outline how they will deal with the benefits and the risks. In the US, for example, there’s a blueprint for an AI Bill of Rights. For the moment, these approaches are non-binding and haven’t translated into legislation. With that said, in the past year or so, there have been efforts by regional blocs and intergovernmental organizations to create legally-binding frameworks with respect to AI. Why has it been so difficult for lawmakers to legislate for AI? It’s been difficult for a number of reasons. First, [lawmakers have] just a basic understanding of the technology. It’s quite complicated and policymakers are not always the most tech savvy. Most people outside of the technical community are faced with a very steep learning curve when it comes to understanding technology. For example, [at] recent US congressional hearings, legislators lacked a basic understanding of what algorithms are and how they are used by social media platforms. And recent Supreme Court arguments about intermediary liability show how different branches of government simply don’t grasp the fundamentals when it comes to technology, particularly when it’s complex or emerging. Then, there’s a disconnect between how a “normal” person understands AI and what policymakers are doing, or trying to understand. On one end, we have these developments happening at the Council of Europe or the European Union with really brilliant policymakers trying to find solutions to the risks posed by AI. On the other end, you have people on social media, even influential individuals, claiming AI has grown too powerful or is going to take over the world. Policymakers should respond to what the people need and want, but there’s a general lack of understanding. I think it’s the biggest issue across the board. Also, the technology is still developing at a rapid pace. A decade ago AI systems still struggled to distinguish a photo of a cat from a human, and now we have seen the ongoing proliferation of generative AI [the technology behind ChatGPT]. This has added another level of complexity to this technology that all stakeholders need to be able to understand in order to effectively regulate it. Lawmakers seem to be constantly playing catch up. What can legislators do to keep up? Regulations need to build on existing legal frameworks. We already have recognized protections and rights that are guaranteed at the international and national levels, so there’s no reason to simply abandon them. For example, when the modern international human rights system was developed in the aftermath of World War 2, they probably didn’t envision issues around digital mass surveillance. But we’ve adapted to apply these existing protections, such as privacy, in a new context. Sometimes, that requires a bit of work and finesse — but we’ve made it happen before. More recently, we saw how existing data protection laws apply to AI when Italy banned ChatGPT. It said there was no legal basis to justify the mass collection and storage of personal data for the purposes of training ChatGPT’s algorithms. [It has since rescinded the ban.] But with AI, it’s a bit more complicated. For example, under international human rights law, you have the right to an effective remedy, meaning that individuals should have the ability to challenge a violation of their rights and seek redress. But the issue with AI is that there is often this “black box” or opacity around decision-making. How do you know how it made a particular decision? Is it because of a biased developer or the training data? You don’t know, and therefore might not be able to challenge a violation of your rights or seek redress. So, how do we solve that? We still want to make sure the right to remedy is viable and available. That’s why it’s important for legislatures to focus on transparency requirements, accountability mechanisms and oversight. Can you tell us about current initiatives to legislate for AI and what they’re focusing on? At the national level, it’s at a very nascent stage right now. We’re not seeing AI-specific legislation that’s comprehensive, but we are starting to see countries consider it. The National Telecommunications and Information Administration (NTIA) of the US Department of Commerce has requested comments on AI system accountability measures and policies, and the UK has an open consultation on its new approach to AI. These are key opportunities for activists, experts and organizations to make comments and attempt to influence government policy. Besides these national efforts, there are also two really important initiatives that are emerging out of Europe. On one hand, we have the EU Artificial Intelligence Act. This is the 27 EU member states working together to create a harmonized legal framework for the development and use of AI within the internal market. It aims to accomplish this using a risk-based approach, where intervention and obligations are based on the level of risk based on the intended use of an AI system. Right now, there are four distinct levels, as well as a more recent addition of general purpose AI systems. First, there’s “unacceptable risk”: these applications would simply be banned because they contravene EU values and violate fundamental rights. Social scoring, as seen in China, would be one example. The proposal would also ban AI systems that exploit the vulnerabilities of individuals based on age, physical or mental disability, or social and economic situation. There’s a very short list of prohibited uses — only four at this point. Then, there’s “high risk.” These AI systems may have an adverse impact on safety or fundamental rights. This category applies to AI systems used in border control, law enforcement, medical devices, recruiting procedures, among others. They would be subject to various requirements such as those relating to data quality, transparency, and oversight. Then there’s “limited risk.” So these would just have basic transparency requirements, including making users aware they are interacting with an AI system. More recently, EU policymakers have tried to respond to the issue of general purpose AI, such as ChatGPT, and have inserted a new category into the proposal. (They’re trying to put it in its own category.) The problem is that this AI can be used for different purposes with varying levels of risk. I can use it to book my vacation or churn out misinformation at scale. I think it shows how the EU is already having to adapt to technological changes, and the law hasn’t even been put into effect yet. And the other European initiative you mentioned? The Council of Europe’s proposed Convention on AI. The continent’s premier human rights organization, which is composed of 46 countries, is setting out a binding framework that provides obligations and principles on the design, development and application of AI systems. This is to be based on the Council of Europe’s standards on human rights, democracy, and the rule of law. It’s essentially a thematic human rights treaty. While the EU’s Artificial Intelligence Act is a proposed law that will directly apply in the EU once enacted, the Convention on AI has the potential to be the world’s first legally binding treaty on AI. This is because countries outside of Europe such as Canada, the US, Israel, Japan and others would be able to sign on, setting international standards for approaching AI governance. By setting out new, globally-recognized safeguards for AI, it could radically drive up protections on human rights. Could these initiatives stifle innovation? Well, I guess the entire point is that you want to ensure you’re not stifling innovation. That’s why both of these initiatives are taking a risk-based, proportionate approach. It’s not necessary for AI systems that pose a limited risk to be subject to the same obligations as more dangerous systems. It’s important that all regulation is tailored and targeted, and not overly burdensome. So, these approaches are both pro-innovation and address risks. Both of these things can happen at the same time. I think that providing effective guardrails inherently lends itself towards innovation. People want to use products and services that they know are safe and secure. We see this with the regulation of IoT [the internet of things] devices. IoT devices aren’t limited to smart watches or household devices, they are also used in healthcare and manufacturing. These technologies are increasingly subject to regulations, but that hasn’t slowed the pace of development. There’s all this talk about an AI race. But isn’t a slightly less advanced, safer and reliable technology better than one that’s slightly more advanced, but potentially dangerous? I think we will ultimately decide that the answer is “yes.” Do you think Italy’s ChatGPT ban will stifle the country’s ability to stay competitive? No, I don’t. I think two things will likely happen. One, ChatGPT will decide that it will comply with the data protection law in question. Compliance would enable its product to reach a broader number of people, and ultimately benefit the company. So the financial incentive alone is pretty significant. Two, it’s not as if ChatGPT is the only option out there. There are dozens of alternatives that people can use and that are available in Italy, and will likely take ChatGPT’s place in the Italian market. So, digital rights decisions in the EU affect the AI market in the US? Yes, it’s called the “Brussels effect” when EU regulations have a significant influence on laws around the world. This already happened with the General Data Protection Regulation (GDPR), and we’re now seeing it play out with the EU’s Digital Services Act, which requires online platforms such as Meta and Twitter to adhere to certain transparency, moderation and due diligence obligations. So even though the US doesn’t have comprehensive data protection laws or online platform regulations, a lot of their biggest companies have to abide by European laws if they want a part of that market. In many ways, we’re seeing the US becoming a rule taker and not rule maker. What’s the bare minimum an AI policy needs to include if it is to be able to mitigate threats? At the most basic level, you need to first understand the potential harm of whatever system and context you’re facing. So you need to have some means of assessing an AI system, how it can be used and the risks that it might pose. You also need to have transparency, explainability, and accountability requirements, as well as independent and effective oversight. That means understanding where the data came from, but also being transparent with an individual when AI is being used and providing information about how the AI made a decision. It goes back to the right to redress and the importance of providing explainability. Ultimately, you need to be able to challenge a decision made by AI and figure out where the onus lies. Photo: Bess Adler for Welcome to the Jungle More inspiration: Future of work So, are robots really coming for our jobs? A conversation with futurist Gary Bolles Fears of an AI apocalypse loom large, but what are experts saying about the future? Mar 07, 2023 "Long Life Learning": How do we prepare for the jobs of the future? In her new book, Dr. Michelle R. Weise explores a future of work in which our increased longevity leads to careers that span a century Feb 28, 2023 What about the jobs ChatGPT could create? While warnings of a labor apocalypse grow louder, some experts believe AI will increase demand for human workers Feb 20, 2023 With a universal income, will we stop working? Universal income is gaining traction in Europe, but questions about essential jobs remain… Dec 21, 2022 Will working for a DAO be better than a corporate job? Some say it's the democratization of work. Others warn of toxic power structures... so what's the deal with decentralized autonomous organizations? Nov 10, 2022 The newsletter that does the job Want to keep up with the latest articles? Twice a week you can receive stories, jobs, and tips in your inbox. Looking for your next job opportunity? Over 200,000 people have found a job with Welcome to the Jungle.Explore jobs
AI Policy and Regulations
Sam Altman wants to hit the reset button. The OpenAI CEO threw a tantrum last week when it became apparent that the European Union planned to go ahead with a proposed law that would institute a broad regulatory framework to protect against the more disruptive impacts of artificial intelligence. While Altman himself has called for AI regulation and even proposed a new AI regulatory agency before Congress, his response was to threaten to pull ChatGPT and other OpenAI products out of Europe entirely if the law went forward: “If we can comply, we will, and if we can’t, we’ll cease operating,” he said at a tech conference. Now, however, Altman seems to be singing a different tune. Apparently deciding that pitching a fit isn’t the best way to get what you want, the AI executive not only abruptly backtracked on his previous comments (in a tweet, Altman said his company was “excited to continue to operate [in Europe]...and of course have no plans to leave”) but is claiming that, actually, he loves Europe. In fact, he loves it so much that he says OpenAI definitely needs a headquarters there. “We really need an office in Europe,” Altman told Politico late last week. “We also just really want one.” In short: instead of ditching the continent, Altman appears to be moving in. Given recent events, that makes a whole lot of sense. Altman has been on a world-spanning roadtrip over the past few weeks, jetting from one country to the next in the hopes of getting governments to embrace a light-touch regulatory approach when it comes to ChatGPT and generative AI. If you want to be a trailblazer in an AI “revolution,” it kinda helps to have everybody on the same page, right? Europe isn’t the only pitstop on the OpenAI CEO’s charm tour, though it is a critically important one. But Altman doesn’t just want a European office, he’s actually obligated to set one up. That’s because, under Europe’s proposed AI regulations, companies that want to offer AI services in the EU will need to have a presence there, Politico writes. The country that Altman picks as OpenAI’s European HQ will be the country that has direct oversight over how the company is regulated under the EU’s pending legislation. Location scouting for the new office was also an opportunity to smooth things over with Europeans who may have been irked by Altman’s recent dismissive rhetoric. During an event in Paris last week, the ChatGPT creator told a crowd that he wasn’t serious when he said OpenAI might ditch Europe: “We plan to comply. We want to offer services in Europe,” he said. “We just want to make sure we’re technically able to. And the conversations have been super-productive this week.” At the same time that Altman is seeking to set up shop on the bloc, Reuters reports that the tech exec also has plans to meet with EU regulators next month—in the hopes of discussing how OpenAI will implement the EU’s proposed regulations. Altman is scheduled to meet with Thierry Breton, Commissioner for the Internal Market of the European Union (economic competition agency), in San Francisco, where the two will discuss the regulations, as well as a voluntary “pact” that the EU is pushing on companies to adopt the regulations ahead of the new law’s enactment. Because the regulations that the EU has proposed could take up to three years to go into effect, both the U.S. and the EU are also said to be discussing a potential “code of conduct” that would be voluntary and would encourage companies to steer clear of using AI in majorly disruptive ways. The meeting is a chance for both men to make nice after Altman’s hissy fit made things awkward last week. Indeed, after the tech CEO’s comments about pulling out of Europe, Breton notably commented that the proposed regulations were not up for debate. “Let’s be clear, our rules are put in place for the security and well-being of our citizens and this cannot be bargained,” Breton previously told Reuters after Altman made a stink. The European Union has been a trailblazer when it comes to regulating Big Tech. From its passage of the privacy-protecting GDPR to a recent legislative package regulating cryptocurrencies, the EU is miles ahead of the U.S. and other western democracies when it comes to instituting limitations on Silicon Valley. As such, it tracks that the EU has also issued a draft policy, dubbed simply “the AI Act,” that would put forth some of the first rules regulating artificial intelligence. As it stands now, the AI Act would institute a number of new restrictions on how AI technology could be wielded. Most pertinently for firms like OpenAI, the bill would potentially force them to disclose a full list of the copyrighted material that went into building generative models like the DALL-E image generator. For obvious reasons, this could cause big—and, one might imagine, potentially catastrophic—impacts for the business model of such companies. That is to say, if OpenAI is shown to have used thousands of artists’ paintings to inform the algorithm powering its image generator, what are the chances that those artists are going to want some sort of compensation? Altman can obviously smell the lawsuits from here and doesn’t want to go that route: “That sounds like a great thing to ask for,” Altman recently told Politico, of the bill’s copyright stipulation. “But — due to the way these datasets are collected and the fact people have been copying data in different ways on different websites — to say I have to legally warrant every piece of copyrighted content in there is not as easy as it sounds.”
AI Policy and Regulations
In its bid to curb misinformation, TikTok said on Tuesday it will begin launching a new tool that will help creators label AI-generated content they produce. TikTok said in a news release that the tool will help creators easily comply with the company's existing AI policy, which requires all manipulated content that shows realistic scenes to be labeled in a way that indicates they’re fake or altered. TikTok prohibits deepfakes – videos and images that have been digitally created or altered with artificial intelligence - that misled users about real-world events. It doesn’t allow deepfakes of private figures and young people, but is OK with altered images of public figures in certain contexts, including for artistic and educational purposes. Additionally, the company said on Tuesday it will begin testing an “AI-generated” label this week that will eventually apply to content it detects to been edited or created by AI. It will also rename effects on the app that have AI to explicitly include “AI” in their name and corresponding label. The move by TikTok comes amid rising concerns about how the AI arms race will affect misinformation. The European Union, for example, has been pushing online platforms to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence.
AI Policy and Regulations
China’s top digital regulator proposed bold new guidelines this week that prohibit ChatGPT-style large language models from spitting out content believed to subvert state power or advocate for the overthrow of the country’s communist political system. Experts speaking with Gizmodo said the new guidelines mark the clearest signs yet of Chinese authorities’ eagerness to extend its hardline online censorship apparatus to the emerging world of generative artificial intelligence. China’s Great Firewall now encircles AI. “We should be under no illusions. The Party will wield the new Generative AI Guidelines to carry out the same function of censorship, surveillance, and information manipulation it has sought to justify under other laws and regulations,” Michael Caster, Asia Digital Programme Manager for Article 19, a human rights organization focused on online free expression, told Gizmodo. The draft guidelines, published by the Cyberspace Administration of China, come hot on the heels of new generative AI products from Baidu, Alibaba, and other Chinese tech giants. AI developers looking to operate in China moving forward will be required to submit their products to a government security review before they are released to the public and ensure all AI-generated content is clearly labelled. Chatbots will have to verify users’ identities, and their makers will be obligated to ensure content served by AI is factual—so far a big problem for their American counterparts—and does not discriminate against users’ race, ethnicity, belief, country, region, or gender. While most of those safeguards appear in line with calls from AI safety experts in other countries, the guidelines sharply diverge on the issues of potentially subversive political content. On that question, China wants to impose stringent measures largely in line with its current policies moderating speech on social media. Here’s a translated portion of the Cyberspace Administration’s proposed guidelines. “The content generated by generative artificial intelligence should reflect the core values of socialism, and must not contain subversion of state power, overthrow of the socialist system, incitement to split the country, undermine national unity, promote terrorism, extremism, and promote ethnic hatred and ethnic discrimination, violence, obscene and pornographic information, false information, and content that may disrupt economic and social order.” Caster fears Beijing’s new guidelines on generative AI could lead to a clampdown on foreign articles translated by chatbots or suggestions on how internet users could use VPNs or other tools to sidestep the country’s so-called Great Firewall content filter. Caster specifically highlighted the recent arrest of a blogger named Ruan Xiaohuan, who was imprisoned for seven years for incitement of subversion of state power. AI models republishing any of his writing, under these guidelines, could face retaliation. “These are the types of independent information deemed subversive in China and what would run afoul of the new guidelines should a dataset inadvertently pull from his [Xiaohuan’s] website in delivering generative content,” Caster said. Human Rights Watch Senior China Researcher Yaqiu Wang told Gizmodo those strict rules, while new, were “totally expected.” Even without these guidelines, Wang said she believed Chinese government officials could still effectively punish companies for spreading content deemed critical of the political system. Having written rules in place specifically mentioning generative AI makes it administratively simpler for officials to cite precise statutes when targeting potentially violating tech firms. “Even without the guidelines, they can do the same thing,” Wang said. “The guidelines are just a convenient tool they can point to.” She agreed with Caster’s assessment, saying it seemed possible Chinese authorities could use the text of the new AI draft rules to strike down peaceful and “totally legitimate speech.” Large tech firms like Baidu have likely already built and trained their models knowing something akin to these restrictions would pass. Previous reporting from the Wall Street Journal showed how earlier versions of Chinese chatbots choked up when asked critical questions about Chinese President Xi Jinping or when prompted to discuss Chinese politics. Some frustrated users reportedly call the censored ChatGPT wannabes “ChatCCP.” “If you are operating in the Chinese system, you know there are things you cannot talk about,” Wang said. “The guidelines are just another warning.” The Cyberspace Administration of China (CAC), formed in 2013, has rapidly evolved in recent years and taken on a role as the country’s foremost internet censor and a server of regulatory nightmares for rapidly growing Chinese tech firms. The CAC was responsible for suddenly knocking Chinese ride-hailing giant Didi out of app stores in 2021 just days after its massive $4.4 billion IPO, and has played a leading role in crafting two of China’s most significant and severe data privacy laws. Critics of the CAC, like Caster of Article 19, say the agency’s close ties to President Xi Jinping mean it’s directly involved in censorship demands handed down from the highest levels of power. Caster warned the CAC’s suggested bans on content that promotes terrorism or extremism, though laudable in the abstract, could similarly be weaponized to crack down on political dissents or marginalized groups like the country’s Uyghur Muslim minority. In that case, Chinese government authorities have categorized Uyghurs as extremists to justify actions multiple human rights groups have described as state-sanctioned persecution. AI-generated content that simply acknowledges Uyghur history or culture, under the new guidelines, could be seen as promoting extremism, Caster said. Chinese regulators haven’t been shy about their concerns over potential political interference attributed to US-made AI chatbots. In February, Tencent and Ant Group reportedly clamped down on users trying to access OpenAI’s ChatGPT, which regulators reportedly warned could be used to, “spread false information.” Even though ChatGPT is blocked in China, users on WeChat and other apps were reportedly sharing exchanges with the model after accessing through VPNs. Some of the answers served up by ChatGPT, according to the Guardian, were perceived by Chinese authorities to be “consistent with the political propaganda of the US government.” On the other side of the Pacific, US lawmakers are voicing similar concerns about Chinese-made AI models. Speaking at an Axios event last month, House China select committee Chair Republican Rep. Mike Gallagher described Chinese AI models as weapons that government officials could use to perfect a “Orwellian techno-totalitarian surveillance state.” That might sound dumb, and it is, but other more respected China hawks, like former Google CEO Eric Schmidt have likewise the US must do “whatever it takes,” to win an AI race against China. “I think our challenge,” Gallagher said, “Is to ensure that AI is used as an instrument for human flourishing and freedom.” The restrictions on foreign chatbots come just as homegrown brands like Baidu and Alibaba race to release their own similar alternatives. Baidu showed off its own alternative, dubbed Ernie Bot, during a pre-recorded demo last month. It shares some similarities with OpenAI’s models, but critics note the Baidu competitor appeared to struggle with basic logic. Internally, the Wall Street Journal notes, Baidu worked around the clock, scrambling to ensure Ernie was capable of completing basic functions. Alibaba’s more recently released Tongyi Qianwen model, which the company opened up to corporate clients, reportedly excels at writing poems in multiple languages and solving basic math problems, but similarly struggles with basic logic problems. US tech firms’ apparent lead in large language models, at least for now, is thanks in part to relatively stricter AI regulations in China and in influx of investment by American companies. Last year, according to a recent analysis conducted by Stanford researchers, US companies invested $47.4 billion on AI projects, a figure 3.5 times more than China. Those figures stand in contrast to claims by some US AI enthusiasts who have suggested US firms could lose their edge due to an anything-goes, non-regulatory environment in China. “It’s clear that China is moving in step with global momentum on regulating AI,” AI Now Institute Executive Director Amba Kak told Gizmodo. “This and other regulatory moves from the Chinese government, like on competition enforcement, directly contradict claims that Chinese tech companies have an edge in the ‘US v China AI race’ because they are left unregulated.” “These loosely backed claims are dangerous because they’re used to push back against regulation of Big Tech firms in the US, promoting a race to the bottom when it comes to standards for privacy, competition and consumer protection,” Kak added.
AI Policy and Regulations
Sam Altman says OpenAI could leave EU over proposed AI law OpenAI LP Chief Executive Sam Altman says a proposed artificial intelligence law in the European Union could lead the startup to shutter its regional operations. The Financial Times reported Altman’s remarks today. The executive detailed OpenAI’s position during a Wednesday visit to London, where he traveled as part of a world tour focused partly on AI regulation. Altman reportedly expects to visit 17 cities during the trip. In 2021, the EU proposed a law called the AI Act to regulate the use of AI and address the risks associated with the technology. This month, officials unveiled an updated version of the legislation that includes additional regulatory requirements. Some of those requirements apply to companies such as OpenAI that are developing foundational AI models. “The details really matter,” Altman told reporters in London this week. “We will try to comply, but if we can’t comply we will cease operating.” In its current form, the EU’s AI Act would require developers of foundation AI models to identify potential risks associated with their products as well as address them. Advanced models would have to meet a set of “design, information and environmental” requirements. Furthermore, companies to which the rules apply would have to register their work in an EU database. Some of the newly added sections in the AI Act focus on copyright. Under the proposed legislation, companies such OpenAI would have to disclose if their models are trained on copyrighted data. Other rules in the legislation would require companies to mark AI-generated content as such and prevent the generation of illegal content. The European Commission, the EU’s executive branch, and member states will have to negotiate a final version of the legislation before it’s implemented. The law is expected to go into force in 2025. However, OpenAI’s ChatGPT service has already come under regulatory scrutiny in the bloc. In late March, Italy’s privacy regulator ordered OpenAI to stop processing local users’ data temporarily. The move was driven by concerns that OpenAI may have failed to comply with local data protection requirements. Following the development, the startup blocked access to ChatGPT in Italy for a few weeks. OpenAI restored access in late April after taking a series of steps to address the privacy regulator’s concerns. As part of the initiative, the startup added a form that allows users to request the removal of their personal information. Shortly after Italy’s privacy regulator ordered OpenAI to stop processing locals’ data, Germany’s commissioner for data protection stated that “in principle, such action is also possible in Germany.” Moreover, officials in France and Ireland have reportedly held discussions with Italy’s privacy regulator about its decision. The EU’s push to regulate foundation models has drawn the attention of not only OpenAI but also Google LLC. On Wednesday, the day Altman raised the prospect of OpenAI leaving the bloc, Google Chief Executive Officer Sundar Pichai reportedly met with EU officials to discuss AI policy. Pichai is said to have emphasized the need for regulation “that did not stifle innovation.” Google operates a chatbot service called Bard that competes with OpenAI’s ChatGPT. Moreover, it offers foundation models through commercial application programming interfaces. OpenAI has likewise made GPT-4 and several of its other models available via paid APIs. Image: OpenAI A message from John Furrier, co-founder of SiliconANGLE: Your vote of support is important to us and it helps us keep the content FREE. One-click below supports our mission to provide free, deep and relevant content. Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger and many more luminaries and experts. THANK YOU
AI Policy and Regulations
On Wednesday, New York City imposed a new law regulating companies that use artificial intelligence to screen candidates for open positions. The city’s Automated Employment Decision Tools (AEDT) law requires companies to prove their AI tool is free from racial and gender bias before it can be used. AEDT Law, also known as Local Law 144, was introduced in 2021 and took effect this week, making New York City the first to pave the way for corporate AI regulations with others expected to follow suit. The tools are algorithms that use AI to make decisions about who to hire and/or promote, although the filtered selection is reportedly passed to a human for final review. Companies will need to annually file a ‘bias audit’ and if they don’t comply, first-time offenders will be subject to a $500 fine and repeat violations can carry up to $1,500 fines. If a company doesn’t comply with the bias audit, it will face a fine, per AI tool, of up to $1,500 each day, according to Conductor AI. The bias audit will calculate the selection rate and impact ratio for each category of people who were hired including male versus female categories, race/ethnicity, and intersectional categories combining sex, ethnicity, and race, according to the law. Although using AI tools can significantly cut back on employers wading through hundreds of resumes, the risk is that the tool could mirror human stereotypes and discriminate against certain candidates. “That’s the risk in all of this, that left unchecked, humans sometimes can’t even explain what data points the algorithm is picking up on. That’s what was largely behind this legislation,” John Hausknecht, a professor of human resources at Cornell University’s School of Industrial and Labor Relations, told CBS News. “It’s saying let’s track it, collect data, analyze it, and report it, so over time, we can make changes to the regulations.” According to AEDT, if applicable, a company must provide alternative instructions for an applicant to “request an alternative selection process or a reasonable accommodation under other laws,” although employers are not required to offer an alternative selection process. “We are only talking about those tools that take the place of humans making decisions,” Camacho Moran an employment attorney at Farrell Fritz told CBS. “If you have an AI tool that runs through 1,000 applications and says, ‘these are the top 20 candidates,’ that is clearly a tool that falls within the definition of an AEDT.” The law may spread to other cities as the draw toward remote hiring becomes increasingly popular for companies, both in New York City and elsewhere. But the law is still limited, Julia Stoyanovich, a computer science professor at New York University and a founding member of the city’s Automatic Decisions Systems Task Force, told NBC News. She told the outlet that AEDT still doesn’t cover some important categories of applicants including those based on age or disability discrimination. “First of all, I’m really glad the law is on the books, that there are rules now and we’re going to start enforcing them,” Stoyanovich told the outlet. “But there are also lots of gaps. So, for example, the bias audit is very limited in terms of categories. We don’t look at age-based discrimination, for example, which in hiring is a huge deal, or disabilities.” It is still unclear how the AEDT law will be enforced, but a spokesperson for New York’s Department of Consumer and Worker Protection said the agency will “collect and investigate complaints” against companies.
AI Policy and Regulations
AI is defining the future, even as many US senators struggle to understand it in the present.“It would have been better if it had been held in a room where the acoustics were better,” Senator Chuck Grassley, an Iowa Republican, says of a much-anticipated—if overdue—All-Senators AI briefing orchestrated by Senate Majority Leader Chuck Schumer earlier this month.The shoddy acoustics of the first of three closed-door meetings—kept private to insulate senators from electoral pressure to perform before cameras—were far from Grassley’s biggest complaint. “I would say that the next [one] will be more valuable, because this was a very general overview,” he says.As AI expands its foothold across industries, households, and legislative bodies—including amongst some at the Capitol itself—Congress is under pressure to act quickly, even though many lawmakers still don’t know what they’re being asked to regulate. While Schumer, the White House, and industry leaders are spotlighting the revolutionary power of artificial intelligence, it’s still unclear if this hyper-dysfunctional Congress—currently consumed by the 2024 election cycle—can address AI before it remakes our world in its generative image.For now, AI seems to be the least partisan issue in Washington, even as today’s bipartisan optimism is coupled with bicameral fear. This otherwise divided Congress is tuned in to AI—from Nafta flashbacks as some imagine AI upending today’s already upended job market to persistent Cold War fears now trained on AI’s potential to launch nuclear strikes. And that’s to say nothing of the electoral threat posed by generative AI and increasingly sophisticated deepfakes. These dauntingly high stakes may explain the Senate’s collective shrug after Schumer’s first big, closed-door AI reveal.“There wasn’t much there that I hadn’t heard, and that’s a pretty low bar,” says Senator John Hickenlooper, a Colorado Democrat. “I wish it was more substantive.”Senators inside the room when the doors closed say Massachusetts Institute of Technology professor Antonio Torralba was informative, especially when answering basic—yet seismic—questions, like how does AI learn? While the briefings are secret, earlier this week Schumer delivered a highly publicized AI address in which he laid out his shiny new SAFE Innovation Framework for AI Policy at the Center for Strategic & International Studies.“In many ways, we’re starting from scratch, but I believe Congress is up to the challenge,” Schumer told the crowd. “AI moves so quickly and changes at near-exponential speed, and there’s such little legislative history on this issue, so a new process is called for.”As a once-distant future rapidly becomes our present, the overarching question is: Can the current Congress learn fast enough to adapt?For what is likely this briefest of windows, many US senators are gathering before a vetted few AI intellectuals, unified by their earnest search for answers. The thing is, some of those answers may not exist, even as humanity—and the Senate—journeys into the algorithmic unknown.Lobbyists on the Sidelines—for NowOver the Silicon Valley tech boom of the past two decades, Congress has held many hearings—some less embarrassing than others—but when it comes to actual regulations, lawmakers have been mostly hands-off.Schumer is now vowing to do what he and his colleagues have failed to do thus far: regulate the tech titans who have spent tens of millions of dollars all but painting the Senate in their trademarked hues. That pressure is yet another reason the doors are closed for these All-Senators AI briefings.“I think it’s nonpartisan. The partisan elements within our parties don’t have a reason yet for this to become partisan, and I think we need to think through and come up with good policy before that happens, because then it’ll get infinitely harder to get something done,” Senator Martin Heinrich, a Democrat from New Mexico, says of regulating AI. He’s braced for potential legislative land mines from the tech lobby. “I’m sure they’re coming.”Besides Heinrich, Schumer’s helping orchestrate these private AI tutorials with Senate Republicans Todd Young of Indiana and Mike Rounds of South Dakota. The group’s goal is lofty, to collectively educate the Senate while empowering lawmakers to tackle the minutiae of what may be the most complex technology humans have ever encountered.While Hickenlooper was frustrated by the lack of substance at the first briefing, others are more patient with colleagues who are “at different spots along the learning curve,” says Senator Mark Warner, a Democrat from Virginia.“I think there were a lot of senators where it was needed because a lot of the senators in that room were where I was nine months ago,” says Warner, who chairs the Senate Intelligence Committee. “It takes a number of these sessions before you kind of get it.”An AI Briefing BombshellThe four senators at the heart of this AI lecture series understand criticisms of the first one. It was “kinda an AI 101,” Rounds says. The bipartisan group of senators has already vetted the next speaker—who I can only assume is scheduled to outline scenarios of war, peace, and global domination—after the Senate’s two-week July 4 recess.“The next one with regard to what we’re doing offensively and defensively within the Department of Defense is really the one that’s going to wake people up and let them see how critical it is that we continue to allow AI to be enhanced in the United States,” Rounds says.In contrast to Congress’ battle with TikTok and its Chinese-owned parent company, ByteDance, lawmakers aren’t exempting Silicon Valley firms from their AI inquiries. But when it comes to the military, Rounds and the others fear the AI rush among America’s adversaries, something they plan to highlight for their colleagues in the next briefing, which won’t be open to the public.“This is really important to get this one as well-attended as possible,” Rounds says. “The speaker that’s going to be laying this out, we’ve already had a report from him. It’s a great educational opportunity. It’s one that I’m really trying to get as many members of my conference to come to as possible because the speaker does a very good job of laying it out in some detail.”While Schumer outlined a sprawling blueprint of his goals, Rounds and the others say they’re as much on a listening tour as they are tour managers for this summer’s AI lecture circuit.“What the idea is, is to get these ideas out so people can vet them,” Rounds says. “I’m encouraging people to bring their ideas in, and every committee should have the opportunity to look at what’s critical to their committee. This is just a great way to allow for all of those to kind of find a central location where we can work our way through a lot of them.”A Senate Odd CoupleEven with some senators still catching up on the basics of AI, Schumer and his bipartisan allies were overall happy with their first all-senators meeting. “Listen, if you’re delivering a lecture to scores of senators, they’ll be at varying degrees of expertise, so I actually think it was appropriate for the audience,” Senator Young of Indiana says.Young has worked with Schumer before, and they hope their past success portends good things here. In 2019, Young and Schumer began negotiating the Endless Frontier Act, which was focused on increasing US competitiveness against China. It evolved into the US Innovation & Competition Act. Then it became America Competes Act, and finally the CHIP Act–and even the CHIPS+ Act—before President Biden signed the CHIPS and Science Act into law last year.While negotiations started with Young and Schumer, they didn’t end there. Rather, the pair heard input from other congressional committees and worked that into the final package.“That was the most utilization I’ve seen of the committee process since I’ve been in Congress, and I think this has an opportunity to be even more inclusive,” Young says. “Senator Schumer and I started off with legislation, but then we drew extensively from different committees of jurisdiction. I think that this effort will be even more decentralized.”While many senators will introduce their own AI measures, Young says the bipartisan effort is aimed at getting lawmakers on the same page.“So some of us may have bills, but the real point of emphasis here will be on crowding in ideas from others, so I think this will be more committee-focused,” Young says.Schumer’s Democratic partner in the AI talks is Heinrich, the New Mexico Democrat, who says the closed-door meetings in the Senate are meant to help strengthen the Senate’s long-standing committees.“I think where we are right now is encouraging everyone through the normal processes,” Heinrich says. “Different committees are going to have very different jurisdictions.”And there are a lot of committees and many AI-related issues to tackle. For example, the Judiciary Committee will need to sort out copyright questions, the Armed Services Committee will handle questions of war, peace, and nuclear Armageddon (concerns Senator Ed Markey, a Massachusetts Democrat, has raised). And the Education Committee will handle AI’s potential impact on public education.Lawmakers—and their staffs—also have to pore over today’s laws to see which work and which need a reboot, like copyright law in the AI era. “Some of that, the existing law is adequate, and in other places, it’s not,” Heinrich says.Counting CriticsFor now, the AI talks have largely remained above the partisan fray. Last week, a bipartisan and bicameral group unveiled a new proposal to erect a national AI commission—comprising 10 Democrats and 10 Republicans—to address AI in a more dispassionate manner than we’ve come to expect from Congress. Even so, pro-industry critics are starting to voice their concerns over what they see as a rush to regulate.“Putting the federal government in charge of the granular development of AI is a strategy certain to ensure that China beats us in every respect in the development of AI—and that would be catastrophic,” says Senator Ted Cruz.Cruz is the top Republican on the Senate’s Commerce, Science & Transportation Committee, which has sweeping jurisdiction over the economy. The junior senator from Texas fears Congress is going to overstep and crush innovation in the name of digital protectionism.“I think that’s foolhardy. Very few members of Congress have any idea what AI is, much less how to regulate it. There are—no doubt, there are risks and risks we need to take seriously, but there are also enormous potential productivity gains. And the last thing we want to do is turn technology innovation into the Department of Motor Vehicles,” Cruz says.Like his 99 colleagues, Cruz will get his say in due time. While the bipartisan AI working group isn’t focused on producing a massive, catch-all AI bill, its members know that such legislation could be the final outcome, following on the heels of the CHIPS and Science Act of 2022.If that happens, it will be legislation the likes of which the Senate has never seen, in part because AI appears to be all-encompassing.“It’s going to be big. It’s going to be big, and our hope is that all of the relevant committees do the hard work of figuring out where those things are,” Heinrich says. “Hopefully, we can get on the same page on a number of those things and then package that together.”
AI Policy and Regulations
Fox News China expert Gordon Chang revealed the startling reason why the Chinese Communist Party is requiring AI to “reflect socialist values” in its new artificial intelligence regulations. The Communist Chinese government unveiled a slew of insane regulations on artificial intelligence (AI) that would give it control over all AI in the country. Under the new drafted Chinese Communist Party (CCP) regulations, all AI would have to support CCP ideology. The regulations draft was issued April 11. Stanford University’s DigiChina provided a translation of the draft of the CCP’s “Measures for the Management of Generative Artificial Intelligence Services – April 2023.” The drafted regulations say that, while the CCP supports “indigenous innovation, broad application, and international cooperation” for AI, “Content generated through the use of generative AI shall reflect the Socialist Core Values.” [Emphasis added] Chang warned about the dangers of such regulations in exclusive comments to MRC Free Speech America. “China wants control,” Chang said. “It wants to dominate the world, it actually has this notion that it’s the only sovereign state in the world, and that the moon and Mars are part of the People’s Republic of China.” He added that while the CCP “wants to have the world’s most sophisticated technology,” it will choose control of tech instead. “The implications would be that it would be very difficult to propagate notions supporting democracy,” Chang added. “You’d have only China-sponsored messages…it’s freedom versus totalitarianism.” The CCP is totalitarian, Chang said. “They want to have great AI…but they want it to further their goals of control.” The CCP, besides being the greatest mass murderer in world history under Mao Zedong and Xi Jinping, runs an authoritarian surveillance state. This means control over AI would endow its surveillance apparatus with enormous newfound power. George Orwell is turning over in his grave. The so-called “values” included a ban on undermining the government, which could theoretically involve journalistic reporting on the ongoing Uyghur Muslim genocide, religious persecution, and other ongoing crimes of the CCP. The CCP is vying to put all AI use in China under the control of the censorship-obsessed Cyberspace Administration of China (CAC). “Before using generative AI products to provide services to the public, a security assessment must be submitted to the state cyberspace and information department,” the regulations say. In addition, CAC and other government departments will have the power to “order suspension or termination of…use of generative AI provider services.” The CCP might well interpret the rules however it wants anyway, considering how contradictory its supposed ethical guidelines are to its documented actions. Its current oppression of Uyghurs, Chinese Catholics, and other groups shows how insincere is the AI restriction against “discrimination on the basis of race, ethnicity, religious belief, nationality, region, sex, age, or profession.” The CCP also constantly peddles falsehoods in its state propaganda, yet the AI regulations supposedly ban “false information.” The drafted AI regulations further provide the possibility of direct government surveillance of every AI user in China, as “users shall be required to provide real identity information.” There are various CCP laws listed with which all generative AI has to be in accord. Violations are to be reported to CCP government departments. Big Brother is always watching. Conservatives are under attack. Contact your representatives and demand Big Tech be held to account to mirror the First Amendment and provide transparency. If you have been censored, contact us at the Media Research Center contact form, and help us hold Big Tech accountable.
AI Policy and Regulations
WASHINGTON, D.C. – AI regulation is in the hands of "JCPenney leisure suit"-wearing lawmakers who still have "8-track tape players," which could mean trouble, says one Republican lawmaker. Last week, the U.S. House of Representatives took a small step toward building an AI regulatory framework by advancing the AI Accountability Act, which called for the government to study AI accountability and report back in 2025. "Let a bunch of guys up here that are wearing JCPenney leisure suits that still have 8-track tape players in their '72 Vegas start talking about technology, then you got some problems," Rep. Tim Burchett, R-Tenn., told Fox News when asked about regulation keeping pace with innovation in the AI sector. SHOULD CONGRESS DO MORE TO REGULATE AI TO KEEP UP WITH ITS INNOVATION? LAWMAKERS WEIGH IN. WATCH: "I don't know that we need regulation," Burchett said. "You want to stifle growth, you start putting laws on it." The Senate had another listening session on AI development last Wednesday, but many lawmakers agreed that Congress still doesn't understand enough about AI yet to create regulations. "Right now, we're in the Wild West," Connecticut Democrat Sen. Richard Blumenthal told Fox News. "AI enables, not only in effect, appropriation of creative products … but also impersonation, deepfakes, a lot of bad stuff. We need to invest in the kinds of restraints and controls if there's a danger of AI becoming autonomous." "The problem with AI is that it's advancing so fast," Republican Rep. Nancy Mace of South Carolina said. "It's very difficult to regulate because you don't know what the next thing is going to be." Artificial intelligence, a branch of computer science designed to understand and store human intelligence, has excelled in recent months as the tool increasingly mimics human capabilities. China and the European Union have drafted AI regulations this year, but Congress hasn't passed any legislation since the tech's rapid development started and as more critics voiced their concerns. "If you overregulate, like the government often does, you stifle innovation," Mace told Fox News. "And if we just stop AI, nothing is stopping China. We want to make sure that we are No. 1 in AI technology in the world and that it stays that way." Sen. Josh Hawley of Missouri, a Republican, told Fox News that AI will be great for the big corporations involved, but he questioned whether it would benefit everyday Americans. "Will it be good, though, for the American people, for American workers?" he said. AI advancements could reduce or eliminate 300 million jobs globally, according to a Goldman Sachs analysis published in March. Up to 30% of hours currently worked across the U.S. economy could become automated by 2030, creating the possibility of around 12 million occupational transitions in the coming years, according to a McKinsey Global Institute study published in July. Lower-wage workers are up to 14 times more likely to need to change occupations than those in the highest-wage positions, and women are 1.5 times more likely to lose their jobs than men with continued AI development, the study found. "We can't keep up with it," California Democrat Rep. Robert Garcia said. "The way AI is being used is unbelievable." Rep. Jim Himes, D-Conn., said that "Congress doesn't understand AI well enough right now to be promulgating regulation." "We need to start with the fact that there's a lot associated with AI," he said. "We need to start breaking those down and thinking about where we really think there's an urgent need for regulation." To watch lawmakers' full interviews, click here.
AI Policy and Regulations
Some of the most powerful people in America assembled in Washington, DC, today to help shape the future of artificial intelligence (AI) safeguards. The unprecedented meeting took place as the US Senate gears up to draft legislation that will regulate the rapidly advancing AI industry, which many of the world's best minds fear could destroy humanity if left unchecked. The gathering brought 22 of the most influential voices in the tech sector - who had a combined net worth of over $400billion - and 100 senators under one roof, bridging the gap between Silicon Valley and the nation's capital. The high-profile event included notorious AI critic Elon Musk, who today called for tighter regulation of AI, as well as Mark Zuckerberg, Bill Gates and the CEOs of Google and IBM. The private meeting was a crash course for legislators on how best to regulate AI: a technical achievement which some of these same industry leaders likened to the 'extinction'-level risk of nuclear weapons. Those who fear AI fear it could surpass human intelligence and develop independent thinking. This means it would no longer need or listen to humans, in a worst-case scenario stealing nuclear codes, create pandemics and spark world wars. So, who was at the meeting? In additional to his reputation for massive wealth, Elon Musk is known for his influential role in AI. As the CEO of Tesla, he has been a driving force behind the development of autonomous vehicles, pushing the boundaries of AI in the automotive industry with features like Tesla's Autopilot. The 52-year-old has also been a vocal advocate for AI safety and has co-founded OpenAI to ensure responsible AI development. After the meeting with US lawmakers today, Musk has been a vocal proponent of AI safety, and said a 'referee' is needed to monitor systems. Mark Zuckerberg, the CEO of Meta Platforms (formerly Facebook), oversees one of the world's leading social media and tech conglomerates, including: Instagram, Threads, Facebook and WhatsApp. While AI plays a crucial role in Meta's operations, including content recommendations and augmented reality, Zuckerberg has also ventured into AI research with projects like Jarvis, his personal AI assistant. Meta introduced Llama 2, a model similar to ChatGPT, that could challenge what is one of the fastest-growing apps of all time. At the conference today, the 39-year-old pushed for 'open source' technology, arguing that open-sourcing infrastructure will minimize potential safety risks and maximize access. As the CEO of Alphabet, Sundar Pichai, 51, manages Google's parent company, which is at the forefront of AI research. Google's AI innovations range from improving search algorithms to pioneering developments in natural language processing with products like Google Assistant. Pichai told Wired that he is not in a rush to catch up on OpenAI. He said releasing Google's AI products before ChatGPT was launched 'wouldn't have worked out as well.' Sam Altman, the CEO of OpenAI, is arguably the most powerful person in AI development today. The future of AI will be impacted by his beliefs and actions. The 38-year-old has played a central role in advancing AI safeguards. Under his leadership, OpenAI has focused on creating AI technologies that attempt to benefit society, including notable features like GPT-3. Jensen Huang, the CEO of Nvidia, has steered the company towards AI dominance. Nvidia's GPUs are pivotal in accelerating AI workloads, powering everything from deep learning research to AI-driven gaming experiences. The 60-year-old founded Nvidia in 1993, which originally worked to create increasingly immersive video games. Today, Nvidia is the world's 'dominant producer of the microprocessors that power the AI revolution,' according to the Atlantic, pushing Nvidia's stock to skyrocket nearly 200 percent over the past year to reach a $1.1 trillion valuation. Satya Nadella, CEO of Microsoft, has overseen the company's significant investments in AI. Microsoft Azure's AI services, as well as the acquisition of LinkedIn and GitHub, have solidified Microsoft's position as a key player in AI development and cloud services. Nadella, 56, believes the benefits of AI far outweigh potential consequences. He told Wired that he can't imagine life without AI. Arvind Krishna, CEO of IBM, has led the company in AI and quantum computing endeavors. IBM's Watson AI platform has been a trailblazer in AI applications across various industries, from healthcare to finance. Krishna, 61, is a strong supporter of the future of AI, claiming 'the world needs AI to help offset productivity losses because of declines in the working age population,' according to Fortune. While he believe white-collar jobs will be among the first to be impacted by AI, he ultimately says AI will create more jobs than it will replace. Bill Gates, the former CEO of Microsoft, has been a long-standing advocate for technology and AI. Although he stepped down from his CEO role, Gates continues to be involved in philanthropic efforts, including funding AI research to address global challenges like healthcare and climate change. The 67-year-old believes AI has potential to change the future of health and education. He said could transform production systems worldwide, according to CNBC. Today, Shuler argued that workers must be central to AI policy. Ahead of the meeting, Shuler, 53, released a statement expressing her concern for workers: 'Public support for unions is at near record highs because workers are tired of being guinea pigs in an AI live experiment. The labor movement knows AI can empower workers and increase prosperity – but only if workers are centered in its creation and the rules that govern it.' Some additional top tech tycoons who were summoned before Congress: Charles Rivkin, the chairman and CEO of the Motion Picture Association; Janet Murguía, the president of Unidos US; Rumman Chowdhury, CEO of Humane Intelligence; Eric Schmidt, former CEO of Google and Chair of the Special Competitive Studies Project; Gary Kelly executive chairman of the board, Southwest Airlines; Clément Delangue, CEO of Hugging Face; and Maya Wiley, the president and CEO of the Leadership Conference on Civil & Human Rights.
AI Policy and Regulations
UNITED NATIONS -- Just a few years ago, artificial intelligence got barely a mention at the U.N. General Assembly's convocation of world leaders. But after the release of ChatGPT last fall turbocharged both excitement and anxieties about AI, it's been a sizzling topic this year at diplomacy's biggest yearly gathering. Presidents, premiers, monarchs and cabinet ministers convened as governments at various levels are mulling or have already passed AI regulation. Industry heavy-hitters acknowledge guardrails are needed but want to protect the technology's envisioned benefits. Outsiders and even some insiders warn that there also are potentially catastrophic risks, and everyone says there's no time to lose. And many eyes are on the United Nations as perhaps the only place to tackle the issue at scale. The world body has some unique attributes to offer, including unmatched breadth and a track record of brokering pacts on global issues, and it's set to launch an AI advisory board this fall. “Having a convergence, a common understanding of the risks, that would be a very important outcome,” U.N. tech policy chief Amandeep Gill said in an interview. He added that it would be very valuable to reach a common understanding on what kind of governance works, or might, to minimize risks and maximize opportunities for good. As recently as 2017, only three speakers brought up AI at the assembly meeting’s equivalent of a main stage, the “ General Debate.” This year, more than 20 speakers did so, representing countries from Namibia to North Macedonia, Argentina to East Timor. Secretary-General António Guterres teased plans to appoint members this month to the advisory board, with preliminary recommendations due by year's end — warp speed, by U.N. standards. Lesotho’s premier, Sam Matekane, worried about threats to privacy and safety, Nepalese Prime Minister Pushpa Kamal Dahal about potential misuse of AI, and Icelandic Foreign Minister Thórdís Kolbrún R. Gylfadóttir about the technology “becoming a tool of destruction.” Britain hyped its upcoming “AI Safety Summit,” while Spain pitched itself as an eager host for a potential international agency for AI and Israel touted its technological chops as a prospective developer of helpful AI. Days after U.S. senators discussed AI behind closed doors with tech bigwigs and skeptics, President Joe Biden said Washington is working “to make sure we govern this technology — not the other way around, having it govern us.” And with the General Assembly as a center of gravity, there were so many AI-policy panel discussions and get-togethers around New York last week that attendees sometimes raced from one to another. “The most important meetings that we are having are the meetings at the U.N. — because it is the only body that is inclusive, that brings all of us here,” Omar Al-Olama, the United Arab Emirates' minister for artificial intelligence, said at a U.N.-sponsored event featuring four high-ranking officials from various countries. It drew such interest that a half-dozen of their counterparts offered comments from the audience. Tech industry players have made sure they're in the mix during the U.N.'s big week, too. “What’s really encouraging is that there’s so much global interest in how to get this right — and the U.N. is in a position to help harmonize all the conversations” and work to ensure all voices get heard, says James Manyika, a senior vice president at Google. The tech giant helped develop a new, artificial intelligence-enabled U.N. site for searching data and tracking progress on the world body's key goals. But if the United Nations has advantages, it also has the challenges of a big-tent, consensus-seeking ethos that often moves slowly. Plus its members are governments, while AI is being driven by an array of private companies. Still, a global issue needs a global forum, and "the U.N. is absolutely a place to have these conversations,” says Ian Bremmer, president of the Eurasia Group, a political risk advisory firm. Even if governments aren't developers, Gill notes that they can “influence the direction that AI takes.” “It’s not only about regulating against misuse and harm, making sure that democracy is not undermined, the rule of law is not undermined, but it’s also about promoting a diverse and inclusive innovation ecosystem" and fostering public investments in research and workforce training where there aren't a lot of deep-pocketed tech companies doing so, he said. The United Nations will have to navigate territory that some national governments and blocs, including the European Union and the Group of 20 industrialized nations, already are staking out with summits, declarations and in some cases regulations of their own. Ideas differ about what a potential global AI body should be: perhaps an expert assessment and fact-establishing panel, akin to the Intergovernmental Panel on Climate Change, or a watchdog like the International Atomic Energy Agency? A standard-setting entity similar to the U.N.'s maritime and civil aviation agencies? Or something else? There's also the question of how to engender innovation and hoped-for breakthroughs — in medicine, disaster prediction, energy efficiency and more — without exacerbating inequities and misinformation or, worse, enabling runaway-robot calamity. That sci-fi scenario started sounding a lot less far-fetched when hundreds of tech leaders and scientists, including the CEO of ChatGPT maker OpenAI, issued a warning in May about “the risk of extinction from AI.” An OpenAI exec-turned-competitor then told the U.N. Security Council in July that artificial intelligence poses “potential threats to international peace, security and global stability” because of its unpredictability and possible misuse. Yet there are distinctly divergent vantage points on where the risks and opportunities lie. “For countries like Nigeria and the Global South, the biggest issue is: What are we going to do with this amazing technology? Are we going to get the opportunity to use it to uplift our people and our economies equally and on the same pace as the West?” Nigeria's communications minister, Olatunbosun Tijani, asked at an AI discussion hosted by the New York Public Library. He suggested that “even the conversation on governance has been led from the West.” Chilean Science Minister Aisén Etcheverry believes AI could allow for a digital do-over, a chance to narrow gaps that earlier tech opened in access, inclusion and wealth. But it will take more than improving telecommunications infrastructure. Countries that got left behind before need to have “the language, culture, the different histories that we come from, represented in the development of artificial intelligence,” Etcheverry said at the U.N.-sponsored side event. Gill, who's from India, shares those concerns. Dialogue about AI needs to expand beyond a “promise and peril” dichotomy to “a more nuanced understanding where access to opportunity, the empowerment dimension of it ... is also front and center,” he said. Even before the U.N. advisory board sets a detailed agenda, plenty of suggestions were volunteered amid the curated conversations around the General Assembly. Work on global minimum standards for AI. Align the various regulatory and enforcement endeavors around the globe. Look at setting up AI registries, validation and certification. Focus on regulating uses rather than the technology itself. Craft a “rapid-response mechanism” in case dreaded possibilities come to pass. From Dr. Rose Nakasi's vantage point, though, there was a clear view of the upsides of AI. The Ugandan computer scientist and her colleagues at Makerere University's AI Lab are using the technology to streamline microscopic analysis of blood samples, the gold-standard method for diagnosing malaria. Their work is aimed at countries without enough pathologists, especially in rural areas. A magnifying eyepiece, produced by 3D printing, fits cellphone cameras and takes photos of microscope slides; AI image analysis then picks out and identifies pathogens. Google's charitable arm recently gave the lab $1.5 million. AI is “an enabler” of human activity, Nakasi said between attending General Assembly-related events. “We can’t be able to just leave it to do each and every thing on its own," she said. "But once it is well regulated, where we have it as a support tool, I believe it can do a lot.”
AI Policy and Regulations
House lawmakers are urging federal agencies to quickly and aggressively adopt artificial intelligence technology, at a time when the push from civil rights and industry groups for new AI regulations is still waiting to get off the ground. The House Appropriations Committee, led by Rep. Kay Granger, R-Texas, released several spending bills this week that encourage the government to incorporate AI into everything from national security functions to routine office work to the detection of pests and diseases in crops. Several of those priorities are not just encouraged but would get millions of dollars in new funding under the legislation still being considered by the committee. And while comprehensive AI regulations are likely still months away and are unlikely to be developed this year, lawmakers seem keen on making sure the government is deploying AI where it can. The bills are backed by the GOP majority, and Rep. Don Beyer, D-Va., the vice chair of the Congressional Artificial Intelligence Caucus, said agencies shouldn't have to wait to start using AI. "We should support federal agencies harnessing the power and benefits of AI, as it has proven itself to be a powerful tool and will continue to be an invaluable asset for our federal agencies," he told Fox News Digital. "The Departments of Energy and Defense, for example, have been leveraging AI for technical projects to enhance precision and accomplish tasks beyond human capabilities." Beyer added that he is "encouraged" by commitments some agencies have made to ensure AI is used ethically, such as those made by the Department of Defense and intelligence agencies. In the spending bill for the Department of Homeland Security, language is included that would fund AI and machine learning capabilities to help review cargo shipments at U.S. ports and for port inspections. "As the Committee has previously noted, delays in the integration of artificial intelligence, machine learning, and autonomy into the program require CBP Officers to manually review thousands of images to hunt for anomalies," according to report language on the bill. "Automation decreases the chance that narcotics and other contraband will be missed and increases the interdiction of narcotics that move through the nation's [ports of entry]." The bill encourages DHS to use "commercial, off-the-shelf artificial intelligence capabilities" to improve government efforts to catch travelers and cargo that should not be allowed to enter the United States. It also calls on DHS to explore using AI to enforce the border, to help ensure the right illegal immigrants are removed, and at the Transportation Security Agency. The committee’s bill to fund the Defense Department warns that the Pentagon is not moving fast enough to adopt AI technologies. "Capabilities such as automation, artificial intelligence, and other novel business practices – which are readily adopted by the private sector – are often ignored or under-utilized across the Department's business operations," the report said. "This bill takes aggressive steps to address this issue." Among other things, the bill wants DOD to explore how to use AI to "significantly reduce or eliminate manual processes across the department," and says that effort justifies a $1 billion cut to the civilian defense workforce. The bill also wants DOD to report on how it can measure its efforts to adopt AI, and to take on more student interns with AI experience. The spending bill funding Congress itself wants legislative staff to explore how AI might be used to create closed captioning services for hearings, and how else AI might be used to improve House operations. House lawmakers also see a need for AI at the Department of Agriculture. Among other things, the bill adds more money for AI in an agricultural research program run by the U.S. and Israel, proposes the use of AI and machine learning to detect pests and diseases in crops, and supports ongoing work to use AI for "precision agriculture and food system security." The effort to expand the government’s use of AI comes despite the pressure that has been building on Congress to quickly impose a regulatory framework around this emerging and already widely used technology. Lawmakers in the House and Senate have held several hearings on the issue, which have raised ideas that include a new federal agency to regulate AI and an AI commission. But despite the urgency, Congress continues to move slowly. Senate Majority Leader Chuck Schumer, D-N.Y., said last week that he still wanted to take several months to take input, and implied that an AI regulatory plan might not be passed by Congress until next year. "Later this fall, I will convene the top minds in artificial intelligence here in Congress for a series of AI Insight Forums to lay down a new foundation for AI policy," he said last week. The full committee is expected to take up these and other spending bills in the coming months – Republicans have made it clear they want to move funding bills for fiscal year 2024 on time this year, which means finishing by the summer.
AI Policy and Regulations
Last year, the White House Office of Science and Technology Policy announced that the US needed a bill of rights for the age of algorithms. Harms from artificial intelligence disproportionately impact marginalized communities, the office’s director and deputy director wrote in a WIRED op-ed, and so government guidance was needed to protect people against discriminatory or ineffective AI.Today, the OSTP released the Blueprint for an AI Bill of Rights, after gathering input from companies like Microsoft and Palantir as well as AI auditing startups, human rights groups, and the general public. Its five principles state that people have a right to control how their data is used, to opt out of automated decision-making, to live free from ineffective or unsafe algorithms, to know when AI is making a decision about them, and to not be discriminated against by unfair algorithms.“Technologies will come and go, but foundational liberties, rights, opportunities, and access need to be held open, and it's the government's job to help ensure that's the case,” Alondra Nelson, OSTP deputy director for science and society, told WIRED. “This is the White House saying that workers, students, consumers, communities, everyone in this country should expect and demand better from our technologies.”However, unlike the better known US Bill of Rights, which comprises the first ten amendments to the constitution, the AI version will not have the force of law—it’s a non-binding white paper.The White House’s blueprint for AI rights is primarily aimed at the federal government. It will change how algorithms are used only if it steers how government agencies acquire and deploy AI technology, or helps parents, workers, policymakers, or designers ask tough questions about AI systems. It has no power over the large tech companies that arguably have the most power in shaping the deployment of machine learning and AI technology.The document released today resembles the flood of AI ethics principles released by companies, nonprofits, democratic governments, and even the Catholic church in recent years. Their tenets are usually directionally right, using words like transparency, explainability, and trustworthy, but they lack teeth and are too vague to make a difference in people’s everyday lives.Nelson of OSTP says the Blueprint for an AI Bill of Rights differs from past recitations of AI principles because it’s intended to be translated directly into practice. The past year of listening sessions was intended to move the project beyond vagaries, Nelson says. “We too understand that principles aren’t sufficient,” Nelson says. “This is really just a down payment. It's just the beginning and the start.”The OSTP received emails from about 150 people about its project and heard from about 130 additional individuals, businesses, and organizations that responded to a request for information earlier this year. The final blueprint is intended to protect people from discrimination based on race, religion, age, or any other class of people protected by law. It extends the definition of sex to include “pregnancy, childbirth, and related medical conditions,” a change made in response to concerns from the public about abortion data privacy.Annette Zimmermann, who researches AI, justice, and moral philosophy at the University of Wisconsin-Madison, says she’s impressed with the five focal points chosen for the AI Bill of Rights, and that it has the potential to push AI policy and regulation in the right direction over time.But she believes the blueprint shies away from acknowledging that in some cases rectifying injustice can require not using AI at all. “We can’t articulate a bill of rights without considering non-deployment the most rights-protecting option,” she says. Zimmerman would also like to see enforceable legal frameworks that can hold people and companies accountable for designing or deploying harmful AI.When asked why the Blueprint for an AI Bill of Rights does not include mention of bans as an option to control AI harms, a senior administration official said the focus of is shielding people from tech that threatens their rights and opportunities, not to call for the prohibition of any type of technology.The White House also announced actions by federal agencies today to curtail harmful AI. The Department of Health and Human Services will release a plan for reducing algorithmic discrimination in healthcare by the end of the year. Some algorithms used to prioritize access to care and guide individual treatments have been found to be biased against marginalized groups. The Department of Education plans to release recommendations on the use of AI for teaching or learning by early 2023.The limited bite of the White House’s AI Bill of Rights stands in contrast to more toothy AI regulation currently under development in the European Union.Members of the European Parliament are considering how to amend the AI Act and decide which forms of AI should require public disclosure or be banned outright. Some MEPs argue predictive policing should be forbidden because it “violates the presumption of innocence as well as human dignity.” Late last week, the EU's executive branch, the European Commission, proposed a new law that allow people treated unfairly by AI to file lawsuits in civil court.
AI Policy and Regulations
- After the Biden administration unveiled the first-ever AI executive order on Monday, leaders across industries began digging in to the 111-page document. - One core debate centers on a question of AI fairness: Does the executive order focus enough on addressing real-world harms that stem from AI models — especially those affecting marginalized communities — and introducing ways to minimize them? - For many civil society leaders CNBC spoke with, the answer is no, although they say it's a meaningful step. After the Biden administration unveiled the first-ever executive order on artificial intelligence on Monday, a frenzy of lawmakers, industry groups, civil rights organizations, labor unions and others began digging into the 111-page document — making note of the priorities, specific deadlines and, in their eyes, the wide-ranging implications of the landmark action. One core debate centers on a question of AI fairness. Many civil society leaders told CNBC the order does not go far enough to recognize and address real-world harms that stem from AI models — especially those affecting marginalized communities. But they say it's a meaningful step along the path. Many civil society and several tech industry groups praised the executive order's roots — the White House's blueprint for an AI bill of rights, released last October — but called on Congress to pass laws codifying protections, and to better account for training and developing models that prioritize AI fairness instead of addressing those harms after-the-fact. "This executive order is a real step forward, but we must not allow it to be the only step," Maya Wiley, president and CEO of The Leadership Conference on Civil and Human Rights, said in a statement. "We still need Congress to consider legislation that will regulate AI and ensure that innovation makes us more fair, just, and prosperous, rather than surveilled, silenced, and stereotyped." Cody Venzke, senior policy counsel at the American Civil Liberties Union, believes the executive order is an "important next step in centering equity, civil rights and civil liberties in our national AI policy" — but that the ACLU has "deep concerns" about the executive order's sections on national security and law enforcement. In particular, the ACLU is concerned about the executive order's push to "identify areas where AI can enhance law enforcement efficiency and accuracy," as is stated in the text. "One of the thrusts of the executive order is definitely that 'AI can improve governmental administration, make our lives better and we don't want to stand in way of innovation,'" Venzke told CNBC. "Some of that stands at risk to lose a fundamental question, which is, 'Should we be deploying artificial intelligence or algorithmic systems for a particular governmental service at all?' And if we do, it really needs to be preceded by robust audits for discrimination and to ensure that the algorithm is safe and effective, that it accomplishes what it's meant to do." Margaret Mitchell, researcher and chief ethics scientist of AI startup Hugging Face said she agreed with the values the executive order puts forth — privacy, safety, security, trust, equity and justice — but is concerned about the lack of focus on ways to train and develop models to minimize future harms, before an AI system is deployed. "There was a call for an overall focus on applying red-teaming, but not other more critical approaches to evaluation," Mitchell said. "'Red-teaming' is a post-hoc, hindsight approach to evaluation that works a bit like whack-a-mole: Now that the model is finished training, what can you think of that might be a problem? See if it's a problem and fix it if so." Mitchell wished she had seen "foresight" approaches highlighted in the executive order, such as disaggregated evaluation approaches, which can analyze a model as data is scaled up. Dr. Joy Buolamwini, founder and president of the Algorithmic Justice League, said Tuesday at an event in New York that she felt the executive order fell short in terms of the notion of redress, or penalties when AI systems harm marginalized or vulnerable communities. Even experts who praised the executive order's scope believe the work will be incomplete without action from Congress. "The President is trying to extract extra mileage from the laws that he has," said Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists. For example, it seeks to work within existing immigration law to make it easier to retain high-skilled AI workers in the U.S. But immigration law has not been updated in decades, said Kaushik, who was involved in collaborative efforts with the administration in crafting elements of the order. It falls on Congress, he added, to increase the number of employment-based green cards awarded each year and avoid losing talent to other countries. On the other side, industry leaders expressed wariness or even stronger feelings that the order had gone too far and would stifle innovation in a nascent sector. Andrew Ng, longtime AI leader and cofounder of Google Brain and Coursera, told CNBC he is "quite concerned about the reporting requirements for models over a certain size," adding that he is "very worried about overhyped dangers of AI leading to reporting and licensing requirements that crush open source and stifle innovation." In Ng's view, thoughtful AI regulation can help advance the field, but over-regulation of aspects of the technology, such as AI model size, could hurt the open-source community, which would in turn likely benefit tech giants. Nathan Benaich, founder and general partner of Air Street Capital, also had concerns about the reporting requirements for large AI models, telling CNBC that the compute threshold and stipulations mentioned in the order are a "flawed and potentially distorting measure." "It tells us little about safety and risks discouraging emerging players from building large models, while entrenching the power of incumbents," Benaich told CNBC. NetChoice's Vice President and General Counsel Carl Szabo was even more blunt. "Broad regulatory measures in Biden's AI red tape wishlist will result in stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation," said Szabo, whose group counts Amazon, Google, Meta and TikTok among its members. "Thus, this order puts any investment in AI at risk of being shut down at the whims of government bureaucrats." But Reggie Townsend, a member of the National Artificial Intelligence Advisory Committee (NAIAC), which advises President Biden, told CNBC that he feels the order doesn't stifle innovation. "If anything, I see it as an opportunity to create more innovation with a set of expectations in mind," said Townsend. David Polgar, founder of the nonprofit All Tech Is Human and a member of TikTok's content advisory council, had similar takeaways: In part, he said, it's about speeding up responsible AI work instead of slowing technology down. "What a lot of the community is arguing for — and what I take away from this executive order — is that there's a third option," Polgar told CNBC. "It's not about either slowing down innovation or letting it be unencumbered and potentially risky."
AI Policy and Regulations
| | | | | | | | Technology | | | | | | Schumer unveils AI roadmap, forum series | | Senate Majority Leader Chuck Schumer (D-N.Y.) on Wednesday outlined his two-pronged approach for crafting artificial intelligence (AI) policy, as Congress and the administration race to regulate the booming industry. | | © Anna Rose Layden, The Hill | | The Democratic leader unveiled his framework for AI regulation and announced a series of expert forums to guide Congress as lawmakers tackle a range of issues posed by the technology, from national security concerns to copyright law. Schumer said his SAFE Innovation Framework for AI aims to incorporate safeguards raised by stakeholders while still promoting innovation in the industry. “I call it that because the right framework must prioritize innovation. It’s essential to our country,” he said during a speech at the Center for Strategic and International Studies. “The U.S. has always been a leader in innovating on the greatest technologies that shape the modern world.” The Senate majority leader unveiled more details of the framework, which was first announced in April, one day after President Biden met with tech leaders to discuss AI and a day after a bipartisan House bill on the technology was introduced. Schumer’s framework has five key pillars: security, accountability, protecting foundations, explainability and innovation. In his speech Wednesday, the senator said innovation must be “our North Star” in crafting regulation and stressed the need for bipartisanship in setting the ground rules for AI. Read more in a full report at TheHill.com. | | Welcome to The Hill’s Technology newsletter, I’m Rebecca Klar — tracking the latest moves from Capitol Hill to Silicon Valley. | | | | | | | | How policy will be impacting the tech sector now and in the future: | | | | | | The Federal Trade Commission (FTC) filed a lawsuit against Amazon on Wednesday that accuses it of tricking customers into enrolling in its Amazon Prime program and preventing them from canceling their subscriptions. The lawsuit alleges Amazon used manipulative, coercive or deceptive designs known as “dark patterns” to convince customers to sign up for a subscription, which automatically renews. The agency said Amazon … | | | | | | | | The largest newspaper publisher by total circulation in the country has sued Google over allegations that the company is violating antitrust law in controlling tools used to buy and sell ads. Gannett filed the lawsuit Tuesday against Google and its parent company, Alphabet, arguing that it controls how publishers sell their ad slots and is forcing them to sell increasingly more ad space to Google at lower prices, leading … | | | | | | | | Jack Teixeira, the Massachusetts Air National Guard member who is accused of leaking a slew of highly classified documents from the Pentagon online, pleaded not guilty to all the charges he is facing Wednesday. | | | | | | | | News we’ve flagged from the intersection of tech and other topics: | | | | | | Instagram lets users download Reels | | Instagram is letting users download public Reels, the short-term video function on the app, adding in the feature rival TikTok has allowed for years, TechCrunch reported. | | | | | | Predators using Discord for abductions | | A dark side of the popular app Discord is being used by predators to groom children before abducting them, trade sexual exploitation material and extort minors, NBC News reported, citing reviews of criminal complaints, articles and law enforcement communication. | | | | | | Upcoming news themes and events we’re watching: | | - The Senate Commerce Committee will hold a hearing to examine nominations to the Federal Communications Commission on Thursday at 10 a.m. | | | | | | Interested in exploring a new career? Visit The Hill Jobs Board to discover millions of roles worldwide, including: VP, Inv Banking – Defense & Government Services — Raymond James Financial, Reston, Va. Apply Senior Telemedicine Engineer-On site — Inova Health System, Fairfax, Va. Apply Banker Associate, Government – Middle Market Banking – Associate — JPMorgan Chase Bank, N.A., Washington, D.C. Apply Director, Partnership Sales — D.C. United, Washington, D.C. Apply Click here to get your job mentioned | | | | | | Branch out with other reads on The Hill: | | | | | | 3 hurt when Google critic crashes car into building near company’s NYC headquarters, police say | | NEW YORK (AP) — A man who has claimed for years that Google was torturing users with flashing lights crashed a car into a building near the company’s New York City headquarters, injuring three pedestrians, authorities said. The man, 34, drove onto the sidewalk and crashed his Ford Fusion into … | | | | | | Two key stories on The Hill right now: | | | | | | An average American income isn’t enough for a comfortable living in 2023, according to two recent reports. The typical U.S. family earns about … Read more | | | | The House advanced a resolution Wednesday to censure Rep. Adam Schiff (Calif.), overcoming a procedural hurdle that blocked a similar measure targeting … Read more | | | | | | Opinions related to tech submitted to The Hill: | | | | | | You’re all caught up. See you tomorrow!
AI Policy and Regulations
Google Bard, its ChatGPT rivalling AI assistant, expands to 180 countries. But it won’t be available in the EU or Canada. EU’s General Data Protection Regulation (GDPR) might be the reason for the exclusion. Google’s I/O event showcased AI developments, Bard access expansion, and new language support, but EU countries were left out. OpenAI faced similar issues with Italy’s temporary ban on ChatGPT, requiring compliance with transparency and data processing regulations. By not releasing Bard in the EU, Google avoids potential legal issues. Google’s Bard: A a new AI powerhouse Google Bard, powered by the PaLM 2 AI model, is designed to compete with OpenAI’s ChatGPT in the rapidly growing market of generative AI chatbots. As part of Google’s push to integrate generative AI into its core products, such as Gmail and Google Photos, Bard signifies the company’s ambition to revolutionize its services and take on competitors like Microsoft’s integration of ChatGPT into Bing. Alphabet CEO Sundar Pichai announced the development of PaLM 2, a lightweight version of the AI model compatible with smartphones, further demonstrating Google’s commitment to AI innovation. AI Regulation in the EU As Google expands Bard’s availability, the EU is moving closer to implementing stricter AI regulations. EU parliamentary committees recently agreed on new rules aimed at making AI systems safer and more ethical, while protecting individuals’ rights. The proposed legislation includes bans on certain types of AI, mandatory transparency, and liability rules for developers. This regulatory landscape may explain Google’s decision to exclude EU countries from Bard’s rollout. GDPR and AI in the European market GDPR, the EU’s data protection regulation, guarantees user rights to access, rectification, erasure, restriction of processing, data portability, and objection, as well as the right to reject automated decision-making, such as profiling. Companies risk fines if their AI training data prevents EU users from exercising these rights. With Bard’s chatbot collecting user information and potentially using data for training, the AI assistant might face difficulties complying with GDPR requirements. OpenAI’s ChatGPT experience in Italy OpenAI’s ChatGPT faced a temporary ban in Italy due to GDPR violations. The ban was lifted after OpenAI implemented privacy changes, clarified user data deletion, and complied with measures around transparency and data processing. By not releasing Bard in the EU, Google avoids similar regulatory hoops and potential legal issues that OpenAI encountered. AI legislation and the future of AI in Europe The proposed AI legislation in the EU aims to control the dangerous use of AI, prohibit subliminal or manipulative techniques, and protect fundamental rights, health, safety, the environment, democracy, and the rule of law. Companies deploying AI are required to ensure compliance with these principles. The legislation is awaiting a full EU parliament vote before final negotiations and implementation. As the AI market continues to grow and the EU enacts stricter AI rules, it remains to be seen whether Google’s Bard will find a way to adapt and comply with European regulations.
AI Policy and Regulations
(Bloomberg) -- Beijing is poised to implement sweeping new regulations for artificial intelligence services this week, trying to balance state control of the technology with enough support that its companies can become viable global competitors. Most Read from Bloomberg The government issued 24 guidelines that require platform providers to register their services and conduct a security review before they’re brought to market. Seven agencies will take responsibility for oversight, including the Cyberspace Administration of China and the National Development and Reform Commission. The final regulations are less onerous than an original draft from April, but they show China, like Europe, moving ahead with government oversight of what may be the most promising — and controversial — technology of the last 30 years. The US, by contrast, has no legislation under serious consideration even after industry leaders warned that AI poses a “risk of extinction” and OpenAI’s Sam Altman urged Congress in public hearings to get involved. “China got started very quickly,” said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace who is writing a series of research papers on the subject. “It started building the regulatory tools and the regulatory muscles, so they’re going to be more ready to regulate more complex applications of the technology.” China’s regulations go beyond anything contemplated in Western democracies. But they also include practical steps that have support in places like the US. Beijing, for example, will mandate conspicuous labels on synthetically created content, including photos and videos. That’s aimed at preventing deceptions like an online video of Nancy Pelosi that was doctored to make her appear drunk. China will also require any company introducing an AI model to use “legitimate data” to train their models and to disclose that data to regulators as needed. Such a mandate may placate media companies that fear their creations will be co-opted by AI engines. Additionally, Chinese companies must provide a clear mechanism for handling public complaints about services or content. While the US’ historically hands-off approach to regulation gave Silicon Valley giants the space to become global juggernauts, that strategy holds serious dangers with generative AI, said Andy Chun, an artificial intelligence expert and adjunct professor at the City University of Hong Kong. “AI has the potential to profoundly change how people work, live, and play in ways we are just beginning to realize,” he said. “It also poses clear risks and threats to humanity if AI development proceeds without adequate oversight.” In the US, federal lawmakers have proposed a wide range of AI regulations but efforts remain in the early stages. The US Senate has held several AI briefings this summer to help members come up to speed on the technology and its risks before pursuing regulations. In June, the European Parliament passed a draft of the AI Act, which would impose new guardrails and transparency requirements for artificial intelligence systems. The parliament, EU member states and European Commission must negotiate final terms before the legislation becomes law. Beijing has spent years laying the groundwork for the rules that take effect Tuesday. The State Council, the country’s cabinet, put out an AI roadmap in 2017 that declared development of the technology a priority and laid out a timetable for putting government regulations in place. Agencies like the CAC then consulted with legal scholars such as Zhang Linghan from the China University of Political Science and Law about AI governance, according to Sheehan. As China’s draft guidelines on generative AI evolved into the latest version, there were months of consultation between regulators, industry players and academics to balance legislation and innovation. That initiative on Beijing’s part is driven in part by the strategic importance of AI, and the desire to gain a regulatory edge over other governments, said You Chuanman, director of the Institute for International Affairs Center for Regulation and Global Governance at the Chinese University of Hong Kong in Shenzhen. Now, China’s biggest AI players, from Baidu Inc. to Alibaba Group Holding and SenseTime Group Inc., are getting to work. Beijing has targeted AI as one of a dozen tech priorities and, after a two-year regulatory crackdown, the government is seeking private sector help to prop up the flagging economy and compete with the US. After the introduction of ChatGPT set off a global AI frenzy, leading tech executives and aspiring entrepreneurs are pouring billions of dollars into the field. “In the context of fierce global competition, lack of development is the most unsafe thing,” Zhang, the scholar from China University of Political Science and Law, wrote about the guidelines. In a flurry of activity this year, Alibaba, Baidu and SenseTime all showed off AI models. Xu Li, chief executive officer of SenseTime, pulled off the flashiest presentation, complete with a chatbot that writes computer code from prompts either in English or Chinese. Still, Chinese companies trail global leaders like OpenAI and Alphabet’s Google. They will likely struggle to challenge such rivals, especially if American companies are regulated by no one but themselves. “China is trying to walk a tightrope between several different objectives that are not necessarily compatible,” said Helen Toner, a director at Georgetown’s Center for Security and Emerging Technology. “One objective is to support their AI ecosystem, and another is to maintain social control and maintain the ability to censor and control the information environment in China.” In the US, OpenAI has shown little control over information even if it’s dangerous or inaccurate. Its ChatGPT made up fake legal precedents and provided bomb-building instructions to the public. A Georgia radio host claims the bot generated a false complaint that accused him of embezzling money. In China, companies have to be much more careful. This February, the Hangzhou-based Yuanyu Intelligence pulled the plug on its ChatYuan service only days after launch. The bot had called Russia’s attack on Ukraine a “war of aggression” — in contravention of Beijing’s stance — and raised doubts about China’s economic prospects, according to screenshots that circulated online. Now the startup has abandoned a ChatGPT model entirely to focus on an AI productivity service called KnowX. “Machines cannot achieve 100% filtering,” said Xu Liang, head of the company. “But what you can do is to add human values of patriotism, trustworthiness, and prudence to the model.” Beijing, with its authoritarian powers, plays by different rules than Washington. When Chinese agencies reprimand and fine tech companies, the corporations can’t fight back and often publicly thank the government for its oversight. In the US, Big Tech hires armies of lawyers and lobbyists to contest almost any regulatory action. Alongside the robust public debate among stakeholders, this will make it difficult to install effective AI regulations, said Aynne Kokas, associate professor of media studies at the University of Virgina. In China, AI is beginning to make its way into the sprawling censorship regime that keeps the country’s internet scrubbed of taboo and controversial topics. That doesn’t mean it is easy, technically speaking. “One of the most attractive innovations of ChatGPT and similar AI innovations is its unpredictability or its own innovation beyond our human intervention,” You, from the Chinese University of Hong Kong, said. “In many cases, it’s beyond control of the platform service providers.” Some Chinese tech companies are using two-way keyword filtering, using one large language model to ensure that another LLM is scrubbed of any controversial content. One tech startup founder, who declined to be named due to political sensitivities, said the government will even do spot-checks on how AI services are labeling data. “What is potentially the most fascinating and concerning time-line is the one where censorship happens through new large language models developed specifically as censors,” said Nathan Freitas, a fellow at Harvard University’s Berkman Klein Center for Internet and Society. The European Union may be the most progressive in protecting individuals from such overreach. The draft law passed in June ensures privacy controls and curbs the use of facial recognition software. The EU proposal would also require companies to perform some analysis of the risks their services entail, for, say, health systems or national security. But the EU’s approach has drawn objections. OpenAI’s Altman suggested his company may “cease operating” within countries that implement overly onerous regulations. One thing Washington can learn from Chinese regulators is to be “targeted and iterative,” Sheehan said. “Build these tools that they can keep improving as they keep regulating.” --With assistance from Emily Cadman, Alice Truong and Seth Fiegerman. Most Read from Bloomberg Businessweek ©2023 Bloomberg L.P.
AI Policy and Regulations
Nearly one year after the technology firm OpenAI released the chatbot ChatGPT, companies are in an arms race to develop ‘generative’ artificial-intelligence (AI) systems that are ever more powerful. Each version adds capabilities that increasingly encroach on human skills. By producing text, images, videos and even computer programs in response to human prompts, generative AI systems can make information more accessible and speed up technology development. Yet they also pose risks. AI systems could flood the Internet with misinformation and ‘deepfakes’ — videos of synthetic faces and voices that can be indistinguishable from those of real people. In the long run, such harms could erode trust between people, politicians, the media and institutions. The integrity of science itself is also threatened by generative AI, which is already changing how scientists look for information, conduct their research and write and evaluate publications. The widespread use of commercial ‘black box’ AI tools in research might introduce biases and inaccuracies that diminish the validity of scientific knowledge. Generated outputs could distort scientific facts, while still sounding authoritative. The risks are real, but banning the technology seems unrealistic. How can we benefit from generative AI while avoiding the harms? Governments are beginning to regulate AI technologies, but comprehensive and effective legislation is years off (see Nature 620, 260–263; 2023). The draft European Union AI Act (now in the final stages of negotiation) demands transparency, such as disclosing that content is AI-generated and publishing summaries of copyrighted data used for training AI systems. The administration of US President Joe Biden aims for self-regulation. In July, it announced that it had obtained voluntary commitments from seven leading tech companies “to manage the risks posed by Artificial Intelligence (AI) and to protect Americans’ rights and safety”. Digital ‘watermarks’ that identify the origins of a text, picture or video might be one mechanism. In August, the Cyberspace Administration of China announced that it will enforce AI regulations, including requiring that generative AI developers prevent the spread of mis-information or content that challenges Chinese socialist values. The UK government, too, is organizing a summit in November at Bletchley Park near Milton Keynes in the hope of establishing intergovernmental agreement on limiting AI risks. In the long run, however, it is unclear whether legal restrictions or self-regulation will prove effective. AI is advancing at breakneck speed in a sprawling industry that is continuously reinventing itself. Regulations drawn up today will be outdated by the time they become official policy, and might not anticipate future harms and innovations. In fact, controlling developments in AI will require a continuous process that balances expertise and independence. That’s why scientists must be central to safeguarding the impacts of this emerging technology. Researchers must take the lead in testing, proving and improving the safety and security of generative AI systems — as they do in other policy realms, such as health. Ideally, this work would be carried out in a specialized institute that is independent of commercial interests. However, most scientists don’t have the facilities or funding to develop or evaluate generative AI tools independently. Only a handful of university departments and a few big tech companies have the resources to do so. For example, Microsoft invested US$10 billion in OpenAI and its ChatGPT system, which was trained on hundreds of billions of words scraped from the Internet. Companies are unlikely to release details of their latest models for commercial reasons, precluding independent verification and regulation. Society needs a different approach1. That’s why we — specialists in AI, generative AI, computer science and psychological and social impacts — have begun to form a set of ‘living guidelines’ for the use of generative AI. These were developed at two summits at the Institute for Advanced Study at the University of Amsterdam in April and June, jointly with members of multinational scientific institutions such as the International Science Council, the University-Based Institutes for Advanced Study and the European Academy of Sciences and Arts. Other partners include global institutions (the United Nations and its cultural organization, UNESCO) and the Patrick J. McGovern Foundation in Boston, Massachusetts, which advises the Global AI Action Alliance of the World Economic Forum (see Supplementary information for co-developers and affiliations). Policy advisers also participated as observers, including representatives from the Organisation for Economic Co-operation and Development (OECD) and the European Commission. Here, we share a first version of the living guidelines and their principles (see ‘Living guidelines for responsible use of generative AI in research’). These adhere to the Universal Declaration of Human Rights, including the ‘right to science’ (Article 27). They also comply with UNESCO’s Recommendation on the Ethics of AI, and its human-rights-centred approach to ethics, as well as the OECD’s AI Principles. Key principles of the living guidelines First, the summit participants agreed on three key principles for the use of generative AI in research — accountability, transparency and independent oversight. Accountability. Humans must remain in the loop to evaluate the quality of generated content; for example, to replicate results and identify bias. Although low-risk use of generative AI — such as summarization or checking grammar and spelling — can be helpful in scientific research, we advocate that crucial tasks, such as writing manuscripts or peer reviews, should not be fully outsourced to generative AI. Transparency. Researchers and other stakeholders should always disclose their use of generative AI. This increases awareness and allows researchers to study how generative AI might affect research quality or decision-making. In our view, developers of generative AI tools should also be transparent about their inner workings, to allow robust and critical evaluation of these technologies. Independent oversight. External, objective auditing of generative AI tools is needed to ensure that they are of high quality and used ethically. AI is a multibillion-dollar industry; the stakes are too high to rely on self-regulation. Six steps are then needed. Set up a scientific body to audit AI systems An official body is needed to evaluate the safety and validity of generative AI systems, including bias and ethical issues in their use (see ‘An auditor for generative AI’). It must have sufficient computing power to run full-scale models, and enough information about source codes to judge how they were trained. The auditing body, in cooperation with an independent committee of scientists, should develop benchmarks against which AI tools are judged and certified, for example with respect to bias, hate speech, truthfulness and equity. These benchmarks should be updated regularly. As much as possible, only the auditor should be privy to them, so that AI developers cannot tweak their codes to pass tests superficially — as has happened in the car industry2. The auditor could examine and vet training data sets to prevent bias and undesirable content before generative AI systems are released to the public. It might ask, for example, to what extent do interactions with generative AI distort people’s beliefs3 or vice versa? This will be challenging as more AI products arrive on the market. An example that highlights the difficulties is the HELM initiative, a living benchmark for improving the transparency of language models, which was developed by the Stanford Center for Research on Foundation Models in California (see go.nature.com/46revyc). Certification of generative AI systems requires continuous revision and adaptation, because the performance of these systems evolves rapidly on the basis of user feedback and concerns. Questions of independence can be raised when initiatives depend on industry support. That is why we are proposing living guidelines developed by experts and scientists, supported by the public sector. The auditing body should be run in the same way as an international research institution — it should be interdisciplinary, with five to ten research groups that host specialists in computer science, behavioural science, psychology, human rights, privacy, law, ethics, science of science and philosophy. Collaborations with the public and private sectors should be maintained, while retaining independence. Members and advisers should include people from disadvantaged and under-represented groups, who are most likely to experience harm from bias and misinformation (see ‘An auditor for generative AI’ and go.nature.com/48regxm). Similar bodies exist in other domains, such as the US Food and Drug Administration, which assesses evidence from clinical trials to approve products that meet its standards for safety and effectiveness. The Center for Open Science, an international organization based in Charlottesville, Virginia, seeks to develop regulations, tools and incentives to change scientific practices towards openness, integrity and reproducibility of research. What we are proposing is more than a kitemark or certification label on a product, although a first step could be to develop such a mark. The auditing body should proactively seek to prevent the introduction of harmful AI products while keeping policymakers, users and consumers informed of whether a product conforms to safety and effectiveness standards. Keep the living guidelines living Crucial to the success of the project is ensuring that the guidelines remain up to date and aligned with rapid advances in generative AI. To this end, a second committee composed of about a dozen diverse scientific, policy and technical experts should meet monthly to review the latest developments. Much like the AI Risk Management Framework of the US National Institute of Standards and Technology4, for example, the committee could map, measure and manage risks. This would require close communication with the auditor. For example, living guidelines might include the right of an individual to control exploitation of their identity (for publicity, for example), while the auditing body would examine whether a particular AI application might infringe this right (such as by producing deep fakes). An AI application that fails certification can still enter the marketplace (if policies don’t restrict it), but individuals and institutions adhering to the guidelines would not be able to use it. These approaches are applied in other fields. For example, clinical guidelines committees, such as the Stroke Foundation in Australia, have adopted living guidelines to allow patients to access new medicines quickly (see go.nature.com/46qdp3h). The foundation now updates its guidelines every three to six months, instead of roughly every seven years as it did previously. Similarly, the Australian National Clinical Evidence Taskforce for COVID-19 updated its recommendations every 20 days during the pandemic, on average5. Another example is the Transparency and Openness Promotion (TOP) Guidelines for promoting open-science practices, developed by the Center for Open Science6. A metric called TOP Factor allows researchers to easily check whether journals adhere to open-science guidelines. A similar approach could be used for AI algorithms. Obtain international funding to sustain the guidelines Financial investments will be needed. The auditing body will be the most expensive element, because it needs computing power comparable to that of OpenAI or a large university consortium. Although the amount will depend on the remit of the body, it is likely to require at least $1 billion to set up. That is roughly the hardware cost of training GPT-5 (a proposed successor to GPT-4, the large language model that underlies ChatGPT). To scope out what’s needed, we call for an interdisciplinary scientific expert group to be set up in early 2024, at a cost of about $1 million, which would report back within six months. This group should sketch scenarios for how the auditing body and guidelines committee would function, as well as budget plans. Some investment might come from the public purse, from research institutes and nation states. Tech companies should also contribute, as outlined below, through a pooled and independently run mechanism. Seek legal status for the guidelines At first, the scientific auditing body would have to operate in an advisory capacity, and could not enforce the guidelines. However, we are hopeful that the living guidelines would inspire better legislation, given interest from leading global organizations in our dialogues. For comparison, the Club of Rome, a research and advocacy organization aimed at raising environmental and societal awareness, has no direct political or economic power, yet still has a large impact on international legislation for limiting global warming. Alternatively, the scientific auditing body might become an independent entity within the United Nations, similar to the International Atomic Energy Agency. One hurdle might be that some member states could have conflicting opinions on regulating generative AI. Furthermore, updating formal legislation is slow. Seek collaboration with tech companies Tech companies could fear that regulations will hamper innovation, and might prefer to self-regulate through voluntary guidelines rather than legally binding ones. For example, many companies changed their privacy policies only after the European Union put its General Data Protection Regulation into effect in 2016 (see go.nature.com/3ten3du).However, our approach has benefits. Auditing and regulation can engender public trust and reduce the risks of malpractice and litigation. These benefits could provide an incentive for tech companies to invest in an independent fund to finance the infrastructure needed to run and test AI systems. However, some might be reluctant to do so, because a tool failing quality checks could produce unfavourable ratings or evaluations leading to negative media coverage and declining shares. Another challenge is maintaining the independence of scientific research in a field dominated by the resources and agendas of the tech industry. Its membership must be managed to avoid conflicts of interests, given that these have been demonstrated to lead to biased results in other fields7,8. A strategy for dealing with such issues needs to be developed9. Address outstanding topics Several topics have yet to be covered in the living guidelines. One is the risk of scientific fraud facilitated by generative AI, such as faked brain scans that journal editors or reviewers might think are authentic. The auditing body should invest in tools and recommendations to detect such fraud10. For example, the living guidelines might include a recommendation for editors to ask authors to submit high-resolution raw image data, because current generative AI tools generally create low-resolution images11. Another issue is the trade-off between copyright issues and increasing the accessibility of scientific knowledge12. On the one hand, scientific publishers could be motivated to share their archives and databases, to increase the quality of generative AI tools and to enhance accessibility of knowledge. On the other hand, as long as generative AI tools obscure the provenance of generated content, users might unwittingly violate intellectual property (even if the legal status of such infringement is still under debate). The living guidelines will need to address AI literacy so that the public can make safe and ethical use of generative AI tools. For example, a study this year demonstrated that ChatGPT might reduce ‘moral awareness’ because individuals confuse ChatGPT’s random moral stances with their own13. All of this is becoming more urgent by the day. As generative AI systems develop at lightning speed, the scientific community must take a central role in shaping the future of responsible generative AI. Setting up these bodies and funding them is the first step.
AI Policy and Regulations
White House sends hackers against the most powerful AIs For the Biden administration, a question hangs over the biggest tech development in years: How big a security threat is it? On Friday, in hotels across Las Vegas, some of the world’s most powerful artificial intelligence systems will come under simultaneous attack by a small army of hackers trying to find their hidden flaws. The White House is not only aware of the public assault — it’s endorsing it. In May, the Biden administration threw its support behind a deliberate, coordinated test attack on AI systems, called red-teaming, set to play out over three days at an annual hacker convention this weekend. Several leading AI companies, including OpenAI, Google and Meta, agreed to have some of their latest and most powerful AI systems attacked for the exercise. The hacker attack highlights what has become one of the White House’s key concerns about the powerful, fast-growing new AI models: How secure they really are, and whether they could pose a threat either to American citizens, or to national security on the global stage. “Our framing — and this comes from the president — is that to to harness the opportunities of AI, we first need to manage the risks, too,” said Alan Mislove, a senior official at the White House Office of Science and Technology Policy who helped the hacking challenge organizers develop this weekend’s red teaming exercises. “For things like large language models, those risks are quite broad, in many cases can be less clear than other systems,” and “cover our society, our economy, national security,” he said. As Congress struggles to pin down what new laws to pass on AI, and federal agencies flex their existing authorities over an emerging technology, the Biden White House has emerged as the most active player on AI policy. It has drafted an AI Bill of Rights framework, convened tech CEOs, and held a series of press conferences on the wide range of threats and opportunities presented by the technology. Though these threats range across society, from job loss to discrimination to misinformation, many of the White House’s most tangible steps have focused on the security issue. Its new special adviser for AI, Ben Buchanan, has a national security rather than a technical background. When the White House convened AI leaders to announce a set of voluntary commitments last month, “safety” topped the list, and security played a key role through the document. The high priority on security reflects the anxiety — among experts, regulators and the industry itself — that the complex new AI systems present a range of new issues not fully understood, from their potential to be hacked and misdirected by an adversary, to the idea that they could expose user data, to darker uses like building bioweapons. “It’s possible to get these models to do things that their designers and vendors do not anticipate or do not want them to be able to do. So yes, I think there are real security considerations,” said Arati Prabhakar, director of the White House’s Office of Science of Technology Policy. AI can also be a tool for improving security: This week the Pentagon announced a two-year challenge for developers to use AI to harden critical American cybersecurity. For this weekend’s red-teaming challenge, the White House partnered with the AI Village at DEFCON, an annual convention where organizers stage hacking wargames and cybersecurity professionals reveal the latest holes in ubiquitous technologies. Government agencies like the Pentagon have turned to the hacker community to find cybersecurity vulnerabilities before: At a DEFCON hacking challenge last year, a participant found a disabling flaw in the army’s electrical microgrid after feeding it false weather data. But this year’s version is unusual for both the level of government buy-in and industry participation. Companies in the tech industry have traditionally been reluctant to expose proprietary software to public view for testing. But this year, urged by the White House, tech companies OpenAI, Anthropic, Google, Hugging Face, NVIDIA, Meta, Cohere, and Stability have all offered up their large language models for scrutiny. They will supply gated-off versions of their models for attack by a range of hackers — from the conference’s usual experienced attendees to community college students flown in specifically for the challenge. The idea for the White House’s involvement in the DEFCON exercise was born at an earlier tech conference: South by Southwest (SXSW) in Austin, Texas, said OSTP’s Prabhakar. After an initial meeting at SXSW, the AI Village organizers met with White House officials to discuss the possibility of scaling up their red-teaming exercise at DEFCON to feature the most popular large language models on the market. “We thought it was a terrific idea, a great way to seed something that really mattered,” Prabhakar said. The firms agreed, although there’s a caveat: The results from the DEFCON red-teaming exercise won’t be made public until February, so they can fix security holes or problems before they get exploited. With AI, the process is complicated. “It’s not as simple as just patching like a software flaw,” said Meta security researcher Chris Rohlf. For industry, the stakes include winning public trust for an emerging technology that has ignited both widespread anxiety and excitement. “Showing that these models are tested,” said Meta’s Rohlf, “will build trust with the community long term.” Michael Sellitto, head of geopolitics and security policy at Anthropic, meanwhile, is hopeful that the exercise will spark a safety competition in the tech industry itself. “One of the things that we really want to see is a safety race to the top,” he said. Despite the fanfare, the exercise itself is not likely to reveal all the ways in which AI systems can misbehave, especially since each participant gets very limited time to hack into a large language model (on the order of 50 minutes per try) and are limited to the technical equipment available at the event, said Anthropic’s Sellitto. Mislove — the senior White House official involved in the red teaming planning process — said the Biden administration sees this DEFCON exercise as a model for the future. In part, it’s intended to find the best way to run more large-scale red teaming exercises on AI. The White House’s objective with DEFCON is to set a precedent: “Where we want to get to is a future in which red-teaming is widely done by many parties,” said Prabhakar.
AI Policy and Regulations
Make way for yet another headline-grabbing AI policy intervention: Hundreds of AI scientists, academics, tech CEOs and public figures — from OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis to veteran AI computer scientist Geoffrey Hinton, MIT’s Max Tegmark and Skype co-founder Jaan Tallinn to Grimes the musician and populist podcaster Sam Harris, to name a few — have added their names to a statement urging global attention on existential AI risk. The statement, which is being hosted on the website of a San Francisco-based, privately-funded not-for-profit called the Center for AI Safety (CAIS), seeks to equate AI risk with the existential harms posed by nuclear apocalypse and calls for policymakers to focus their attention on mitigating what they claim is ‘doomsday’ extinction-level AI risk. Here’s their (intentionally brief) statement in full: Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. Per a short explainer on CAIS’ website the statement has been kept “succinct” because those behind it are concerned to avoid their message about “some of advanced AI’s most severe risks” being drowned out by discussion of other “important and urgent risks from AI” which they nonetheless imply are getting in the way of discussion about extinction-level AI risk. However we have actually heard the self-same concerns being voiced loudly and multiple times in recent months, as AI hype has surged off the back of expanded access to generative AI tools like OpenAI’s ChatGPT and DALL-E — leading to a surfeit of headline-grabbing discussion about the risk of “superintelligent” killer AIs. (Such as this one, from earlier this month, where statement-signatory Hinton warned of the “existential threat” of AI taking control. Or this one, from just last week, where Altman called for regulation to prevent AI destroying humanity.) There was also the open letter signed by Elon Musk (and scores of others) back in March which called for a six-month pause on development of AI models more powerful than OpenAI’s GPT-4 to allow time for shared safety protocols to be devised and applied to advanced AI — warning over risks posed by “ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control”. So, in recent months, there has actually been a barrage of heavily publicized warnings over AI risks that don’t exist yet. This drumbeat of hysterical headlines has arguably distracted attention from deeper scrutiny of existing harms. Such as the tools’ free use of copyrighted data to train AI systems without permission or consent (or payment); or the systematic scraping of online personal data in violation of people’s privacy; or the lack of transparency from AI giants vis-a-vis the data used to train these tools. Or, indeed, baked in flaws like disinformation (“hallucination”) and risks like bias (automated discrimination). Not to mention AI-driven spam! It’s certainly notable that after a meeting last week between the UK prime minister and a number of major AI execs, including Altman and Hassabis, the government appears to be shifting tack on AI regulation — with a sudden keen in existential risk, per the Guardian’s reporting. Talk of existential AI risk also distracts attention from problems related to market structure and dominance, as Jenna Burrell, director of research at Data & Society, pointed out in this recent Columbia Journalism Review article reviewing media coverage of ChatGPT — where she argued we need to move away from focusing on red herrings like AI’s potential “sentience” to covering how AI is further concentrating wealth and power. So of course there are clear commercial motivates for AI giants to want to route regulatory attention into the far-flung theoretical future, with talk of an AI-driven doomsday — as a tactic to draw lawmakers’ minds away from more fundamental competition and antitrust considerations in the here and now. And data exploitation as a tool to concentrate market power is nothing new. Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now. OpenAI was a notable non-signatory to the aforementioned (Musk signed) open letter but a number of its employees are backing the CAIS-hosted statement (while Musk apparently is not). So the latest statement appears to offer an (unofficial) commercially self-serving reply by OpenAI (et al) to Musk’s earlier attempt to hijack the existential AI risk narrative in his own interests (which no longer favor OpenAI leading the AI charge). Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape “democratic processes for steering AI”, as Altman put it. So the company is actively positioning itself (and applying its investors’ wealth) to influence the shape of any future mitigation guardrails, alongside ongoing in-person lobbying efforts targeting international regulators. Elsewhere, some signatories of the earlier letter have simply been happy to double up on another publicity opportunity — inking their name to both (hi Tristan Harris!). But who is CAIS? There’s limited public information about the organization hosting this message. However it is certainly involved in lobbying policymakers, at its own admission. Its website says its mission is “to reduce societal-scale risks from AI” and claims it’s dedicated to encouraging research and field-building to this end, including funding research — as well as having a stated policy advocacy role. An FAQ on the website offers limited information about who is financially backing it (saying its funded by private donations). While, in answer to an FAQ question asking “is CAIS an independent organization”, it offers a brief claim to be “serving the public interest”: CAIS is a nonprofit organization entirely supported by private contributions. Our policies and research directions are not determined by individual donors, ensuring that our focus remains on serving the public interest. We’ve reached out to CAIS with questions. In a Twitter thread accompanying the launch of the statement, CAIS’ director, Dan Hendrycks, expands on the aforementioned statement explainer — naming “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” as examples of “important and urgent risks from AI… not just the risk of extinction”. “These are all important risks that need to be addressed,” he also suggests, downplaying concerns policymakers have limited bandwidth to address AI harms by arguing: “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and.’ From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.” The thread also credits David Krueger, an assistant professor of Computer Science at the University of Cambridge, with coming up with the idea to have a single-sentence statement about AI risk and “jointly” helping with its development.
AI Policy and Regulations
The U.S. will likely have a tough time trying to regulate AI-generated content, such as requiring watermarks on computer-made media, a university art lecturer told Fox News. "[F]or us to enforce it would be a lot more difficult," Tyler Coleman, who teaches University of Texas classes focused on AI, said. "I think it will be harder to achieve in the U.S. than it would be in China." WATCH: AI ART LECTURER: AI REGULATIONS WOULD BE ‘DIFFICULT’ TO ENFORCE IN U.S. China's government announced regulations in December 2022 requiring any AI-generated content to include a flag such as a watermark to indicate its origin. While Coleman described the regulations as "a very smart idea," he doubted America's ability to replicate them. "I don't think it would be a bad move for us to attempt to do so," he told Fox News. "I just don't think with our form of capitalism we will succeed." Beijing, through its communist rule, has forced "a lot of structure for what is allowed on the Internet," according to Coleman. America's democratically elected government, meanwhile, remains "very open" about what it allows on the internet, he told Fox News. "There's very few limitations to what we can do online," the AI educator said. Coleman said he believed America's copyright rules and fair use guidelines, which dictate how art can be used, might impede potential watermark requirements for AI-generated content in the U.S. Artificial intelligence software companies often train machine learning technologies with data culled from the internet and use that information to create content such as AI-generated art. This data may include copyrighted material, creating legal and ethical issues for both the AI companies and the original copyright owners. "Artificial intelligence machine learning is, for all intents and purposes, a very advanced system for taking an understanding of all the little things on the internet, billions of points of data, trillions of points of data, and being able to sort of mix them in a way to create a new piece of content," said Coleman, who's experimented with AI since roughly 2017 in his role as a gaming developer. "There's this term, the de minimis effect defense, which is saying we use … such a small piece that we're not really impeding on the copyright only because it was such a small element," Coleman said. "The concept that the AI model creation tools has is … if it's using only a little bit of many, many images, is it impeding on each one's copyright?" AI's limited use of up to trillions of distinct data points may allow it to bypass the de minimis effect concept, according to Coleman. "By using such small samples from each one, is it actually kind of passing through that de minimis?" he said. Ultimately, Coleman said he hopes to continue educating people on AI's increasing use across the art world. "It's getting to the point where it is very hard to understand the difference between an AI-generated image and one that was made via painting, photography, digital works," he told Fox News. "That's going to be a challenge for us in the future as we need to know the difference." To hear more of Coleman's thoughts on AI-generated art regulation, click here.
AI Policy and Regulations
Nearly seven months after it began publishing machine-generated stories without disclosing their true authorship (or lack thereof) to readers, CNET has finally, publicly changed its policy on the use of AI in its journalistic endeavors. In short, stories written by its in-house artificial intelligence — which it calls Responsible AI Machine Partner (RAMP) — are no more, but the specter of AI in its newsroom is far from exorcised. The site indicates, however, that there are still two broad categories of pursuits where RAMP will be deployed. The first, which it calls "Organizing large amounts of information" provides an example that seems more authorial than that umbrella descriptor lets on. "RAMP will help us sort things like pricing and availability data and present it in ways that tailor information to certain audiences. Without an AI assist, this volume of work wouldn’t be possible." The other ("Speeding up certain research and administrative portions of our workflow.") is more troubling. "CNET editors could use AI to help automate some portions of our work so we can focus on the parts that add the most unique value," the guidelines state."RAMP may also generate content such as explanatory material (based on trusted sources) that a human could fact-check and edit. [emphasis ours]" You'd be forgiven if that sounds nearly identical to what got CNET into trouble in the first place. The venerable tech site first posted an innocuously titled explainer ("What Is a Credit Card Charge-Off?") on November 11, 2022, under the byline "CNET Money Staff" with no further explanation as to its provenance, and continued posting dozens more small finance stories under that byline through mid-January. It was around that time that Futurism discovered two important details: CNET Money Staff stories were AI-generated, and much of that work was wildly inaccurate. CNET issued corrections on over half of those stories and had, by all appearances, stopped using these sorts of tools in response to the deserved criticisms they created. In the interim, the remaining CNET staff publicly announced their intention to unionize with the Writer's Guide of America, East. Among the more typical areas of concern for a shrinking newsroom during these trying times in the media industry (retention, severance, editorial independence, et cetera), the bargaining unit also specifically pushed back against the site's intention to keep deploying AI. Other than these new guidelines on AI, the union has not received any official update from management on its demands over the past three weeks, a staffer told Engadget. And based on the union's response on Twitter, the guidelines fall well short of the kinds of protections CNET's workers were hoping for. "Before the tool rolls out, our union looks forward to negotiating," they wrote. "How & what data is retrieved; a regular role in testing/reevaluating tool; right to opt out & remove bylines; a voice to ensure editorial integrity. New AI policy @CNET affects workers. Before the tool rolls out, our union looks forward to negotiating: how & what data is retrieved; a regular role in testing/reevaluating tool; right to opt out & remove bylines; a voice to ensure editorial integrity. https://t.co/7FQFWhRoui— CNET Media Workers Union (@cnetunion) June 6, 2023 Granted, CNET claims it will never deploy RAMP to write full stories, though it also denies it ever did so. However, the new guidelines leave the door open for that possibility, as well as the eventuality that it uses AI to generate images or videos, promising only that where "text that originated from our AI tool, we’ll include that information in a disclosure." CNET's apparent bullishness on AI (and its staff's wariness) also arrive against a backdrop of news organizations broadly looking to survive the technology's potential ill-effects. The New York Times and other media groups began preliminary talks this week to discuss AI's role in disinformation and plagiarism, as well as how to ensure fair compensation when authorship becomes murky. The prior CNET Money Staff articles have since been updated to reflect the new editorial guidelines. Each is credited to "CNET Money" and also lists the name of a human editor; a disclosure appears at the beginning and end of the stories, reading "This article was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff." This sort of basic disclosure is neither difficult nor unusual. Including the provenance of information has been one of the core tenants of journalism since well before AI became advanced enough to get a credit on the masthead, and The Associated Press has been including such disclosures in its cut-and-paste-level financial beat stories for the better part of a decade. On the one hand much of the embarrassment around CNET's gaffe could have been avoided if it had simply warned readers where the text of these stories had come from at the outset. But the larger concern remains that, unlike AP's use of these tools, CNET seems poised to allow RAMP more freedom to do more substantive work, the bounds of which are not meaningfully changed by these guidelines.
AI Policy and Regulations
With AI large language models like ChatGPT being developed around the globe, countries have raced to regulate AI. Some have drafted strict laws on the technology, while others lack regulatory oversight. China and the EU have received particular attention, as they have created detailed, yet divergent, AI regulations. In both, the government plays a large role. This greatly differs from countries like the United States, where there is no federal legislation on AI. Government regulation comes as many countries have raised concerns about various aspects of AI. These mainly includes privacy concerns, and the potential for societal harm with the controversial software. The following is a description of how countries across the globe have managed regulation of the growing use of AI programs. - US regulation - Chinese regulation - What other countries have passed legislation? 1. US regulation The United States has yet to pass federal legislation on AI. OpenAI, a US-based company, has created the most talked about AI software to date, ChatGPT. ChatGPT has heavily influenced the AI conversation. Countries around the world are now generating AI software of their own, with similar functions to ChatGPT. Despite the lack of federal legislation, the Biden Administration, in conjunction with the National Institute of Standards and Technology (NIST) released the AI Bill of Rights. The document essentially offers guidance on how AI should be used and some ways it can be misused. Yet, the framework is not legally binding. However, multiple states across the country have introduced their own sets of laws on AI. Vermont, Colorado and Illinois began by creating task forces to study AI, according to the National Conference of State Legislatures (NCSL). The District of Columbia, Washington, Vermont, Rhode Island, Pennsylvania, New York, New Jersey, Michigan, Massachusetts, Illinois, Colorado and California are also considering AI laws. While many of the laws are still being debated, Colorado, Illinois, Vermont, and Washington have passed various forms of legislation. For example, the Colorado Division of Insurance requires companies to account for how they use AI in their modeling and algorithms. In Illinois, the legislature passed the Artificial Intelligence Video Interview Act, which requires employee consent if AI technology is used to evaluate job applicants' candidacies. Washington state requires its chief information officer to establish a regulatory framework for any systems in which AI might impact public agencies. While AI regulation in the United States is a hot topic and ever-growing conversation, it remains to be seen when Congress may begin to exercise regulatory discretion over AI. 2. The Chinese regulatory approach China is a country in which the government plays a large part in AI regulation. There are lots of Chinese based tech companies that have recently released AI software such as chatbots and image generators. For example, Baidu, SenseTime and Alibaba have all released various artifical intelligence software. Alibaba has a large language model out called Tongyi Qianwen and SenseTime has a slew of AI services like SenseChat, which functions similarly to ChatGPT, a service unavailable in the country. Ernie Bot is another chatbot that was released in China by Baidu. The Cyberspace Administration of China (CAC) released regulation in April 2023 that includes a list of rules that AI companies need to follow and the penalties they will face if they fail to adhere to the rules. One of the rules released by the CAC is that security reviews must be conducted before an AI model is released on a public level, according to the Wall Street Journal. Rules like this give government considerable oversight of AI. The CAC said that while it supports the innovation of safe AI, it must be in line with China's socialist values, according to Reuters. Another specific regulation detailed by the CAC is that providers are the ones responsible for the accuracy of the data being used to train their AI software. There also must be measures in place that prevent any discrimination when the AI is created, according to the source. AI services additionally must require users to submit their real identities when using the software. There are also penalties, including fines, suspended services, and criminal charges for violations, according to Reuters. Also, if there is inappropriate content released through any AI software, the company has three months to update the technology to ensure it doesn't happen again, according to the source. The rules created by the CAC hold AI companies responsible for the information that their software is generating. 3. What other countries have passed legislation? Rules established by the European Union (EU). include the Artificial Intelligence Act (AIA) which debuted in April 2021. However, the act is still under review in the European Parliament, according to the World Economic Forum. The EU regulatory framework divides AI applications into four categories: minimal risk, limited risk, high risk and unacceptable risk. Applications that are considered minimal or limited risk have light regulatory requirements, but must meet certain transparency obligations. On the other hand, applications that are categorized as unacceptable risk are prohibited. Applications that fall in the high risk category can be used, but they are required to follow more strict guidelines, and be subject to heavy testing requirements. Within the context of the EU, Italy's Italian Data Protection Authority placed a temporary ban on ChatGPT in March. The ban was largely based on privacy concerns. Upon implementing the ban, the regulatory agency gave OpenAI 20 days to address specific concerns, including age verification, clarification on personal data usage, privacy policy updates, and providing more information to users about how personal data is used by the application. The ban on ChatGPT in Italy was rescinded at the end of April, after the chatbot was found to be in compliance with regulatory requirements. Another country that has undertaken AI regulation is Canada with the Artificial Intelligence and Data Act (AIDA) that was drafted in June 2022. The AIDA requires transparency from AI companies as well as providing for anti-discrimination measures.
AI Policy and Regulations
On Wednesday, the UK hosted an AI Safety Summit attended by 28 countries, including the US and China, which gathered to address potential risks posed by advanced AI systems, reports The New York Times. The event included the signing of "The Bletchley Declaration," which warns of potential harm from advanced AI and calls for international cooperation to ensure responsible AI deployment. "There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models," reads the declaration, named after Bletchley Park, the site of the summit and a historic World War II location linked to Alan Turing. Turing wrote influential early speculation about thinking machines. Rapid advancements in machine learning, including the appearance of chatbots like ChatGPT, have prompted governments worldwide to consider regulating AI. Their concerns led to the meeting, which has drawn criticism for its invitation list. In the tech world, representatives from major companies included those from Anthropic, Google DeepMind, IBM, Meta, Microsoft, Nvidia, OpenAI, and Tencent. Civil society groups, like Britain's Ada Lovelace Institute and the Algorithmic Justice League in Massachusetts, also sent representatives. Political summit representatives from the US included Vice President Kamala Harris and Gina Raimondo, the secretary of commerce. China's vice minister of science and technology, Wu Zhaohui, expressed Beijing's willingness to "enhance dialogue and communication" on AI safety. UK government representatives like Technology Secretary Michelle Donelan played a starring role in the event, hoping to place the UK front and center in the AI space. According to The Guardian, UK Prime Minister Rishi Sunak applauded the Bletchley Declaration and emphasized the need to identify potential threats related to advanced AI systems that may eventually surpass human intelligence. Along that line of thinking, Elon Musk, who attended the summit, said, "For the first time, we have a situation where there’s something that is going to be far smarter than the smartest human... It’s not clear to me we can actually control such a thing," according to The Guardian. Musk has been a prominent outspoken member of a movement to warn about the hypothetical existential risks of AI. However, some AI experts like Google Brain co-founder Andrew Ng and Meta Chief AI Scientist Yann LeCun call those risks overhyped. "The opinion of the *vast* majority of AI scientists and engineers (me included) is that the whole debate around existential risk is wildly overblown and highly premature," LeCun wrote on X (formerly Twitter) on October 9. Other critics of AI technology prefer to focus on current perceived harms from AI, including environmental, privacy, ethics, and bias issues, instead of hypothetical threats. While the summit began an international dialogue on AI safety, it stopped short of setting specific policy goals, according to The Times. That may have something to do with the nebulous nature of "AI" itself. "Artificial intelligence" is a very broad term with a fuzzy definition that can encompass many technologies, ranging from chess-playing computer programs to large language models that can code in Python. In particular, the declaration refers to what it calls "frontier AI," which the document vaguely defines as "highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks—as well as relevant specific narrow AI that could exhibit capabilities that cause harm—which match or exceed the capabilities present in today’s most advanced models." Similarly, there remains a lack of consensus on what global AI regulations should entail or who should be responsible for drafting them. The US, for its part, has announced a separate American AI Safety Institute, and President Biden recently issued an executive order on AI. The European Union is working on an AI bill to establish regulatory principles and guidelines for specific AI technologies. While the summit signifies a move toward international cooperation on AI safety, some analysts believe it leaned more toward posturing and symbolism. Just before the summit, Prime Minister Sunak announced a live Thursday tie-in interview with Musk that will take place on Musk's social media platform, X.
AI Policy and Regulations
On the picket lines outside Los Angeles film studios, artificial intelligence has become a central antagonist of the Hollywood writers’ strike, with signs warning studio executives that writers will not let themselves be replaced by ChatGPT That hasn’t stopped tech industry players from selling the promise of a future in which AI is an essential tool for every part of Hollywood production, from budgeting and concept art, to script development, to producing a first cut of a feature film with a single press of a button. The writer’s strike has put the spotlight on escalating tensions over whether an AI-powered production process will be a dream or a nightmare for most Hollywood workers and for their audiences. Los Angeles’s AI boosters tout the latest disruptive technology as a democratising force in film, one that will liberate creators by taking over dull and painstaking tasks like motion capture, allowing them to turn their ideas into finished works of art without a budget of millions or tens of millions of dollars. They envision a world in which every artist has a “holographic vision board”, which will enable them to instantly see any possible idea in action. Critics say that studio executives simply want to replace unionized artists with compliant robots, a process that can only lead to increasingly mediocre, or even inhuman, art. All these tensions were on display last week when tech companies that specialise in AI, including Dell, Hewlett-Packard Enterprise and Nvidia, were among the sponsors of an “AI on the Lot” conference in Hollywood, which attracted an estimated 400 people to overflowing sessions about how artificial intelligence was disrupting every facet of film production. One tech investor described the mood as both high energy and high anxiety. The day before the AI conference, a crowdfunded plane had flown over multiple studios with a banner message: “Pay the writers, you AI-holes.” But several speakers at the AI LA conference argued that fear of artificial intelligence is for the weak. “The people who hate it or are fearful of it are insecure about their own talent,” said Robert Legato, an Academy Award-winning visual effects expert who has worked on films like Titanic, the Jungle Book and the Lion King. “It’s like a feeling amplifier,” said Pinar Seyhan Demirdag, an artist turned AI entrepreneur. “If you feel confident, you will excel. If you feel inferior – ,” she paused. The tech crowd laughed. ‘No more Godfather, no more Wizard of Oz’ It’s hard to know how exactly the battles over AI in Hollywood will play out, given the heavy haze of marketing bombast, fearmongering and simple confusion about the technology that’s currently hovering over the industry. “A lot of us are at a cocktail party pretending we know what we’re talking about,” Cynthia Littleton, the co-editor-in-chief of Variety magazine, told the Hollywood AI conference. But it’s clear that some of the emerging conflicts will focus on job losses from automation, copyright and intellectual property disputes and deeper questions about how much a profit-driven studio system actually cares about human creativity. Getty Images recently sued Stability AI, the maker of a prominent text-to-image generator, accusing it of improperly training its algorithms on 12m Getty photographs, while officially working with another AI company, Nvidia, to develop licensed photo and video AI products that will provide royalties to content creators Because AI video technology is still lagging behind audio or image generation, the music industry is currently “the tip of the spear” for AI battles, said Littleton, pointing to the controversy over recent AI simulations of songs by Drake and The Weeknd. But Hollywood is gearing up for the era of AI-generated actors: Metaphysic, an AI company that specializes in “deep fakes,” announced a partnership that would work to develop new tools for the clients of Creative Artists Agency CAA’s, a major entertainment and sports talent agency. Joanna Popper, the talent agency’s new “chief metaverse officer,” told Deadline in January that the new technology will offer flexibility to actors and other entertainers, who will still retain the rights to their image and likeness. “Some actors have done commercials where essentially their synthetic media double did the commercial rather than the actor traveling around the world,” she said. “If the actor isn’t available for the reshoots a director needs, you can have a stand-in for the actor and then use this technology for face replacement and still get the job done in the needed timeline,” Popper offered. “If you wanted the actor to speak in a different language, you could use AI to create an international dub that sounds like the actor’s voice speaking various other languages.” Some Hollywood writers and actors have begun to denounce these developments, arguing that the coming age of AI is a threat to workers across the industry. “AI has to be addressed now or never,” Justine Bateman, a writer and director who was a television actor in the 90s, argued in a viral Twitter thread, calling on the Screen Actors Guild to follow the writers’ guild in making AI regulations a central part of their coming contract negotiations. “If we don’t make strong rules now, they simply won’t notice if we strike in three years, because at that point they won’t need us.” The more than 160,000 members of SAG-AFTRA, which includes screen actors, broadcast journalists and a wide range of other performers and media professionals, are currently voting on whether to authorize their own strike. Many Hollywood critics argue that too much reliance on AI in filmmaking is a threat to the very humanity of art itself. Speaking at Cannes, actor Sean Penn expressed support for writers, calling the use of AI in writing scripts a “human obscenity”. “ChatGPT doesn’t have childhood trauma,” one viral writers strike sign quipped. If studios pivot to producing AI-generated stories to save money, they may end up alienating audiences and bankrupting themselves, leaving TikTok and YouTube as the only surviving entertainment giants, Hunger Games screenwriter Billy Ray warned on a recent podcast. “No more Godfather, no more Wizard of Oz, it’ll just be 15 second clips of human folly,” he said. Black film and TV writers in particular have been speaking out about the ways AI could be used by studios to generate “diverse” content without actually having to work with a diversity of artists. “We’re going to get the stories of people who have been disempowered told through the voice of the algorithm rather than people who have experienced it,” Star Trek writer Diandra Pendleton-Thompson warned on the first day of the writers strike. ‘AI lacks courage to write something truly human’ Some recent entrants to the AI industry say that the current technology is being overhyped, and its likely impact, particularly on writers, has been exaggerated. “When people tell me the studios are going to replace writers with AI, to me, that person has never tried to do anything really difficult with large language models,” said Mike Gioia, one of the executives of Pickaxe, a new Chat GPT-based platform for writers with a few hundred paying customers. He called the idea that AI could produce full scripts “science fiction”. “The worst case scenario for writers is that the size of writers rooms is reduced,” he said. Many early Pickaxe customers, Gioia said, are using it to automate mundane tasks, like filing internal reports or making interactive FAQs for e-commerce sites. While the technology can generate a rough draft of a formulaic TV script for a writer to tinker with, Gioia said, he believed it would be “a fool’s errand” to try to get it to produce good dialogue. While AI is good at understanding the “meta structure” of a piece of text, Gioia said, “It lacks the courage to try to write something truly human.” AI writing tools could have big effects in less glamorous segments of the film industry. Pickaxe is currently exploring whether they can use the AI tools to help automate the budgeting process of reading a script, breaking down the different visual effects needed to produce each shot and then estimating the cost of those effects, Ian Eck, another Pickaxe executive, said. Writers have made AI central to their strike in part because “it’s a good story”, Gioia argued and partly because they are much less accustomed to being disrupted by technology than other industry workers. “A lot of people in post production have lived through multiple technological revolutions in their fields, but writers haven’t lived through a single one,” he said. James Blevins, whose decade-long career in special effects and post production has taken him from 1996’s Space Jam to The Mandalorian, told attendees of the AI LA conference that the anxiety around AI reminded him of the anxiety around the digitization of film in the late 1990s and early 2000s. “I’ve always done the job that will be replaced. I’ve always been automated out of my job. It’s just the way it is,” he said. He cautioned that there was no way to escape the changes that AI would bring to the industry. “It’s so disruptive, it’s kind of like being afraid of the automobile, or, ‘Oh my god, we shouldn’t go to the moon,’” he said. What went unanswered in the panel discussion was how many of Hollywood’s technical workers, from set designers to hairstylists, would be able to translate their skills into a more virtual film world – and how many might simply be laid off. IATSCE, the union representing 168,000 entertainment industry technicians, artisans and craftspeople, announced in early May that it would be forming its own commission on artificial intelligence to investigate the impact of the technology on workers. The union is also interested in helping to unionizing new segments of workers that may emerge in the wake of AI disruptions – including the new category of AI wranglers that the tech boosters are currently calling “prompt engineers”, said Justin Loeb, IATSCE’s director of communications. But in a tech industry driven by hype, it’s still not clear how much change is really coming, or how fast. “VR was going to be huge in the 90s, and well, that didn’t really happen and then it was going to be huge about five years ago and that hasn’t happened,” Gregory Shiff, who works on media and entertainment issues for Dell, said on the panel briefly moderated by an avatar of Vermeer’s Girl with a Pearl Earring. “Is AI going to be the same? I don’t think so, but I don’t know.”
AI Policy and Regulations
LONDON (AP) — The breathtaking development of artificial intelligence has dazzled users by composing music, creating images and writing essays, while also raising fears about its implications. Even European Union officials working on groundbreaking rules to govern the emerging technology were caught off guard by AI’s rapid rise. The 27-nation bloc proposed the Western world’s first AI rules two years ago, focusing on reining in risky but narrowly focused applications. General purpose AI systems like chatbots were barely mentioned. Lawmakers working on the AI Act considered whether to include them but weren’t sure how, or even if it was necessary. “Then ChatGPT kind of boom, exploded,” said Dragos Tudorache, a Romanian member of the European Parliament co-leading the measure. “If there was still some that doubted as to whether we need something at all, I think the doubt was quickly vanished.” The release of ChatGPT last year captured the world’s attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online materials. With concerns emerging, European lawmakers moved swiftly in recent weeks to add language on general AI systems as they put the finishing touches on the legislation. The EU’s AI Act could become the de facto global standard for artificial intelligence, with companies and organizations potentially deciding that the sheer size of the bloc’s single market would make it easier to comply than develop different products for different regions. “Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term ‘AI’ can cover,” said Sarah Chander, senior policy adviser at digital rights group EDRi. Authorities worldwide are scrambling to figure out how to control the rapidly evolving technology to ensure that it improves people’s lives without threatening their rights or safety. Regulators are concerned about new ethical and societal risks posed by ChatGPT and other general purpose AI systems, which could transform daily life, from jobs and education to copyright and privacy. The White House recently brought in the heads of tech companies working on AI including Microsoft, Google and ChatGPT creator OpenAI to discuss the risks, while the Federal Trade Commission has warned that it wouldn’t hesitate to crack down. China has issued draft regulations mandating security assessments for any products using generative AI systems like ChatGPT. Britain’s competition watchdog has opened a review of the AI market, while Italy briefly banned ChatGPT over a privacy breach. The EU’s sweeping regulations — covering any provider of AI services or products — are expected to be approved by a European Parliament committee Thursday, then head into negotiations between the 27 member countries, Parliament and the EU’s executive Commission. European rules influencing the rest of the world — the so-called Brussels effect — previously played out after the EU tightened data privacy and mandated common phone-charging cables, though such efforts have been criticized for stifling innovation. Attitudes could be different this time. Tech leaders including Elon Musk and Apple co-founder Steve Wozniak have called for a six-month pause to consider the risks. Geoffrey Hinton, a computer scientist known as the “Godfather of AI,” and fellow AI pioneer Yoshua Bengio voiced their concerns last week about unchecked AI development. Tudorache said such warnings show the EU’s move to start drawing up AI rules in 2021 was “the right call.” Google, which responded to ChatGPT with its own Bard chatbot and is rolling out AI tools, declined to comment. The company has told the EU that “AI is too important not to regulate.” Microsoft, a backer of OpenAI, did not respond to a request for comment. It has welcomed the EU effort as an important step “toward making trustworthy AI the norm in Europe and around the world.” Mira Murati, chief technology officer at OpenAI, said in an interview last month that she believed governments should be involved in regulating AI technology. But asked if some of OpenAI’s tools should be classified as posing a higher risk, in the context of proposed European rules, she said it’s “very nuanced.” “It kind of depends where you apply the technology,” she said, citing as an example a “very high-risk medical use case or legal use case” versus an accounting or advertising application. OpenAI CEO Sam Altman plans stops in Brussels and other European cities this month in a world tour to talk about the technology with users and developers. Recently added provisions to the EU’s AI Act would require “foundation” AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press. Foundation models, also known as large language models, are a subcategory of general purpose AI that includes systems like ChatGPT. Their algorithms are trained on vast pools of online information, like blog posts, digital books, scientific articles and pop songs. “You have to make a significant effort to document the copyrighted material that you use in the training of the algorithm,” paving the way for artists, writers and other content creators to seek redress, Tudorache said. Officials drawing up AI regulations have to balance risks that the technology poses with the transformative benefits that it promises. Big tech companies developing AI systems and European national ministries looking to deploy them “are seeking to limit the reach of regulators,” while civil society groups are pushing for more accountability, said EDRi’s Chander. “We want more information as to how these systems are developed — the levels of environmental and economic resources put into them — but also how and where these systems are used so we can effectively challenge them,” she said. Under the EU’s risk-based approach, AI uses that threaten people’s safety or rights face strict controls. Remote facial recognition is expected to be banned. So are government “social scoring” systems that judge people based on their behavior. Indiscriminate “scraping” of photos from the internet used for biometric matching and facial recognition is also a no-no. Predictive policing and emotion recognition technology, aside from therapeutic or medical uses, are also out. Violations could result in fines of up to 6 percent of a company’s global annual revenue. Even after getting final approval, expected by the end of the year or early 2024 at the latest, the AI Act won’t take immediate effect. There will be a grace period for companies and organizations to figure out how to adopt the new rules. It’s possible that industry will push for more time by arguing that the AI Act’s final version goes farther than the original proposal, said Frederico Oliveira Da Silva, senior legal officer at European consumer group BEUC. They could argue that “instead of one and a half to two years, we need two to three,” he said. He noted that ChatGPT only launched six months ago, and it has already thrown up a host of problems and benefits in that time. If the AI Act doesn’t fully take effect for years, “what will happen in these four years?” Da Silva said. “That’s really our concern, and that’s why we’re asking authorities to be on top of it, just to really focus on this technology.” AP Technology Writer Matt O’Brien in Providence, Rhode Island, contributed.
AI Policy and Regulations
Eric Schmidt speaks at an emerging technology summit on July 13, 2021 in Washington, D.C. | Kevin Dietsch/Getty Images Eric Schmidt, the former CEO of Google who has long sought influence over White House science policy, is helping to fund the salaries of more than two dozen officials in the Biden administration under the auspices of an outside group, the Federation of American Scientists. The revelation of Schmidt’s role in funding the jobs, the extent of which has not been previously reported, adds to a picture of the tech mogul’s growing influence in the White House science office and in the administration – at a time when the federal government is looking closely at future technologies and potential regulations of Artificial Intelligence. Schmidt has become one of the United States’ most influential advocates for federal research and investment in AI, even as privacy advocates call for greater regulation. A spokesperson for Schmidt defended the arrangement, saying in a statement that “Eric, who has fully complied with all necessary disclosure requirements, is one of many successful executives and entrepreneurs committed to addressing America’s shortcomings in AI and other related areas.” The spokesperson also defended the existence of privately funded fellows, chosen by FAS, in key policy making areas as both legal and beneficial to the public. “While it is appropriate to review the relationship between the public and private sectors to ensure compliance and ethics oversight, there are people with the expertise and experience to make monumental change and advance our country, and they should have the opportunity to work across sectors to maintain our competitive advantage for public benefit,” the statement said. For its part, a White House spokesperson said: “Neither Eric Schmidt nor the Federation of American Scientists exert influence on policy matters. Any suggestion otherwise is false. We enacted the most stringent ethics guidelines of any administration in history to ensure our policy processes are free from undue influence.” But a POLITICO investigation found that members of the administration are well aware that a significant amount of the money for the salaries of FAS’s fellows comes from Schmidt’s research and investment firm, Schmidt Futures, and that the organization was critical to the program to fund administration jobs. In fact, the influence of Schmidt Futures at FAS is such that they are sometimes conflated. Thus, some close observers of AI policy believe that Schmidt is using the program to enhance his clout within the administration and to advance his AI agenda. “Schmidt is clearly trying to influence AI policy to a disproportionate degree of any person I can think of,” said Alex Engler, a fellow at the Brookings Institution who specializes in AI policy. “We’ve seen a dramatic increase in investment toward advancing AI capacity in government and not much in limiting its harmful use.” Schmidt Futures was founded in 2017 by the Google mogul and his wife, Wendy. It advertises itself as a philanthropic initiative but is registered as a limited liability corporation called the Future Action Network. As such, it is legally barred from funding positions in the federal government in the way FAS, a non-profit, is able to do through a 50-year-old law – the Intergovernmental Personnel Act of 1970. That law allows certain non-profit groups, universities, and federally funded research and development centers to cover the salaries of people in the executive branch who help fill skill gaps with temporary assignments — though the Office of Personnel Management says that relatively few groups and agencies take advantage of it. Founded in 1945 by atomic researchers in the aftermath of the detonation of the atomic bomb, FAS is one of the nation’s most respected non-partisan organizations. It describes Schmidt Futures as one of “20-plus philanthropic funders” to its “Day One Project,” which was launched on January 23, 2020 to begin recruiting people to fill key science and technology positions in the executive branch starting with the next presidential inauguration, no matter which party was victorious. The project has placed its fellows in important science posts throughout the administration. POLITICO previously reported that two officials inside the White House science office had been funded by the project, but the group has also recruited people to serve throughout the administration in many posts related to technology policy. FAS fellows, known as IPAs after the law that created them, have served or currently serve in Biden’s White House Council of Economic Advisers, the White House Council on Environmental Quality, the Department of Energy, the Department of Education, the Department of Health and Human Services, the Department of Transportation, the Department of Homeland Security and the Federal Trade Commission. At least six FAS IPAs work in the Office of Evaluation Sciences in the General Services Administration, which serves as a sort of outside consultant to help agencies across the executive branch. Schmidt Futures also has downplayed its role as “just one of 20 organizations or initiatives to contribute to [The Day One Project].” A spokesperson for Schmidt Futures said that the group believes less than 30 percent of total contributions to the “Day One Project” comes from the organization. FAS confirmed that no funder, including Schmidt Futures, provides 30 percent of the funding. “It’s not illegal, and it is with the best of intents…when it comes to this work, this is literally coming from him wanting to build a better world and Schmidt Futures as well,” said the Schmidt Futures spokesperson. “I don’t believe that we have any more undue influence than anyone else in [The Day One Project].” The spokesperson also disputed the idea that Schmidt Futures was playing a leading role in the program. “Define lead,” the spokesperson said. “We helped galvanize people? That’s what we do. That’s part of our mission to help galvanize other donors and partners in business and in government to help further the public good.” Within the White House, officials have sometimes viewed FAS and Schmidt Futures interchangeably, as the dual vehicles for the funding of jobs. In internal emails from the White House science office in August of 2021 previously reported on by POLITICO, Elaine Ho, the office’s deputy chief of staff for workforce, wrote that the Department of Energy “has secured Schmidt Futures as a funding source … I have already reached out to our contact at FAS/Day One.” In other departments and agencies, officials regularly refer to the FAS personnel as “Schmidt fellows.” At the annual Arizona State University and Global Silicon Valley summit, John Whitmer, a FAS-funded fellow at the Department of Education, was identified as a “Schmidt Impact Fellow Federation of American Scientists.” In addition to his role in the education department focused on “using advanced algorithmic techniques from natural language processing, learning engineering, and large-scale data analysis,” Whitmer also works as an adviser to Schmidt Futures, according to the bio. (An education department spokesperson told POLITICO that IPAs, like federal employees, pledge to act in accordance with the Ethics in Government Act.) “Issues in Science and Technology,” a quarterly science journal, also singled out Schmidt Futures as the driver of the Day One Project. They wrote in November 2021 that “many science foundations, led by Schmidt Futures, have supported establishment of the Day One Project.” Biden, too, appears to have noticed. He offered his personal endorsement to another Schmidt Futures program, the “Quad Fellowship” for 100 American, Indian, Japanese and Australian graduate school students each year to study the United States. The connections between Schmidt and FAS are extensive and include the top FAS leadership. The science organization’s chair since 2009, a venture capitalist named Gilman Louie, has been the chief executive of the Schmidt-backed “America’s Frontier Fund” since 2021. AFF is billed as both a nonprofit and venture capital fund that is focused on emerging technologies. A spokesperson for America’s Frontier Fund said Schmidt Futures and Schmidt supplied less than 30 percent of the funding to AFF. “As a founding member, Gilman has donated multiples of his salary,” said the spokesperson. “America’s Frontier Fund is focused on expanding America’s global leadership in technology by revitalizing local communities through innovation and manufacturing.” The Tech Transparency Project, a nonprofit watchdog organization and an early critic of the growing technology industry, first reported Louie’s role as AFF’s chief executive in May but the position is not included in his FAS bio listing his other jobs. In May, 2022, Biden named Louie to his Intelligence Advisory Board. Tom Kalil, the chief innovation officer at Schmidt Futures, and Kumar Garg, then-managing director at Schmidt Futures, each spoke at the FAS launch event for the “Day One Project.” FAS’s website for the “Day One Project” also boasts a “Kalil’s corner,” through which the Schmidt Futures innovation officer provides “reflections to advance a range of science and technology priorities through policy and philanthropy.” While still working at Schmidt Futures, Kalil also worked as an unpaid consultant in the White House science office for four months in 2021 until ethics complaints prompted his departure. The Intergovernmental Personnel Act of 1970 was originally crafted to help enhance the federal bureaucracy by allowing agencies to bring in outside people with expertise for temporary jobs. Since then, the program has been utilized to varying degrees by different sectors of the executive branch. A January 2022 report by the Government Accountability Office found that the IPA program can be beneficial to agencies but its use is sporadic. After surveying four agencies, the GAO report concluded that IPAs “represented less than 1 percent of their total civilian workforce in a given fiscal year.” The federal Office of Personnel Management also publicly urges using IPAs because “agencies do not take full advantage of the IPA program.” As a result, the FAS Day One Project has stood out among some in the administration as being an aggressive user of the IPA program. Schmidt Futures’ leadership has long looked to the program as a tool. In a 2019 interview with the podcast “80,000 Hours,” Kalil said the IPA was “a very underappreciated law” and discussed how it can be leveraged to bring more talented people into the executive branch. But the IPA program has been criticized in recent years for a lack of oversight and transparency. In a January, 2022, report to Congress, the GAO wrote that the program has many “advantages” but noted that the Office of Personnel Management “does not have complete and accurate data needed to track mobility program use. Thus, OPM does not know how often the program is being used across the federal government.” A 2017 report by the Inspector General for the National Science Foundation also found ethics concerns and recommended that the foundation “take corrective actions to strengthen controls over IPA conflicts of interests, including reassess controls to ensure staff do not have access to awards and proposals for which they are conflicted.” In a statement, FAS said that “we have strict policies in place to ensure the integrity and independence of the process by mandating a firewall between our 20+ philanthropic funders, including Schmidt Futures, and the federal agencies that solely determine roles and talent placements.” FAS added that “it is proud to continue the long-standing tradition of supporting federal agencies in finding world-class talent for their critical needs.” Schmidt’s collaboration with FAS is only a part of his broader advocacy for the U.S. government to invest more in technology and particularly in AI, positions he advanced as chair of the federal National Security Commission on Artificial Intelligence from 2018 to 2021. The commission’s final report recommended that the government spend $40 billion to “expand and democratize federal AI research and development” and suggested more may be needed. “If anything, this report underplays the investments America will need to make,” the report stated. Schmidt’s intense support for AI dovetails with some of his personal business and philanthropic efforts, which has put him in the crosshairs of watchdog organizations and, most recently, Sen. Elizabeth Warren (D-MA), as CNBC reported in December. “Eric Schmidt appears to be systematically abusing this little-known set of programs to exert his influence in the federal government,” said Katie Paul, the director of the Tech Transparency Project which published a report Tuesday on Schmidt and the IPA program. “The question is, on whose behalf is it? Google, where he’s still a major shareholder? Is it to advance his own portfolio of investments–artificial intelligence and bioengineering or energy? The public has a right to know who is paying their public servants and why.” Schmidt has argued that he is not motivated by making money but rather a sincere conviction that the 21st century will largely be defined by which countries have the most advanced artificial intelligence capabilities. “AI promises to transform all realms of human experience,” he wrote in a 2021 book with former Secretary of State Henry Kissinger and MIT’s Daniel Huttenlocher titled “The Age of AI: And Our Human Future.” “Other countries have made AI a national project. The United States has not yet, as a nation, systematically explored its scope, studied its implications, or begun the process of reconciling with it,” they wrote. “If the United States and its allies recoil before the implications of these capabilities and halt progress on them, the result would not be a more peaceful world.” As a result, Schmidt has become increasingly involved with the Pentagon in recent years. Schmidt chaired the Pentagon’s Defense Innovation Board from 2016 to 2020. He is also an investor in and sits on the board of the AI-focused defense contractor Rebellion Defense which has won a number of contracts from the Biden Pentagon. Two officials from Rebellion Defense also served on Biden’s transition team. Rebellion also recently hired David Recordon, the director of technology at the White House science office, to be the chief technology director at Rebellion. In 2018, the then-chairman of the House Armed Services Committee, Rep. Mac Thornberry (R-TX), nominated Schmidt to the National Security Commission on Artificial Intelligence. After it wound down, Schmidt launched a private sector group called the Special Competitive Studies Project to continue the work of developing AI policy and hired over a dozen of the commission’s staffers, CNBC reported. Thornberry, who has since retired, is on the board. Thornberry did not respond to a request for comment. Schmidt Futures also supplied a grant to FAS and the Day One Project to “shape the establishment of a congressional commission to examine the relevance of the [Department of Defense] Planning, Programming, Budgeting, and Execution (PPBE) system and its associated resource allocation processes,” according to FAS. Beyond military applications, Schmidt also has argued that AI is critical to economic power, from software to pharmaceuticals. Schmidt’s quiet funding of positions in the Biden administration first came to light in the aftermath of the resignation of Eric Lander — Schmidt’s friend and close ally — as the head of the White House science office early this year. The resignation came after POLITICO reported that a White House investigation found Lander had bullied employees — at least one of whom had sought to raise ethical questions about the acceptance of Schmidt-linked money.
AI Policy and Regulations
After ChatGPT burst on the scene last November, some government officials raced to prohibit its use. Italy banned the chatbot. New York City, Los Angeles Unified, Seattle, and Baltimore School Districts either banned or blocked access to generative AI tools, fearing that ChatGPT, Bard, and other content generation sites could tempt students to cheat on assignments, induce rampant plagiarism, and impede critical thinking. This week, US Congress heard testimony from Sam Altman, CEO of OpenAI, and AI researcher Gary Marcus as it weighed whether and how to regulate the technology.In a rapid about-face, however, a few governments are now embracing a less fearful and more hands-on approach to AI. New York City Schools chancellor David Banks announced yesterday that NYC is reversing its ban because “the knee jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” And yesterday, City of Boston chief information officer Santiago Garces sent guidelines to every city official encouraging them to start using generative AI “to understand their potential.” The city also turned on use of Google Bard as part of the City of Boston’s enterprise-wide use of Google Workspace so that all public servants have access.The “responsible experimentation approach” adopted in Boston—the first policy of its kind in the US—could, if used as a blueprint, revolutionize the public sector’s use of AI across the country and cause a sea change in how governments at every level approach AI. By promoting greater exploration of how AI can be used to improve government effectiveness and efficiency, and by focusing on how to use AI for governance instead of only how to govern AI, the Boston approach might help to reduce alarmism and focus attention on how to use AI for social good. Boston’s policy outlines several scenarios in which public servants might want to use AI to improve how they work, and even includes specific how-tos for effective prompt writing.Generative AI, city officials were told in an email that went out from the CIO to all city officials on May 18, is a great way to get started on memos, letters, and job descriptions, and might help to alleviate the work of overburdened public officials. The tools can also help public servants “translate” government-speak and legalese into plain English, which can make important information about public services more accessible to residents. The policy explains that public servants can indicate the reading level or audience in the prompt, allowing the AI model to generate text suitable for elementary school students or specific target audiences.Generative AI can also help with translation into other languages so that a city’s non-English speaking populations can enjoy equal and easier access to information about policies and services affecting them. City officials were also encouraged to use generative AI to summarize lengthy pieces of text or audio into concise summaries, which could make it easier for government officials to engage in conversations with residents.The Boston policy even explains how AI can help produce code snippets and assist less technical individuals. As a result, even interns and student workers could start to engage in technical projects, such as creating web pages that help to communicate much needed government information.Still, the policy advocates for a critical approach to the technology and for taking personal responsibility for use of the tools. Thus, public servants are encouraged to proof any work developed using generative AI to ensure that hallucinations and mistakes do not creep into what they publish. The guidelines emphasize that privacy, security, and the public purpose should be prioritized in the use of technology, weighing impact on the environment and constituents' digital rights.These principles represent a shift from fear-mongering about the dangers of AI to a more proactive and responsible approach that provides guidance on how to use AI in the public workforce. Instead of the usual narrative about AI killing jobs or talking only about AI bias, the city’s letter explains that, by enabling better communication and conversation with residents of all kinds, AI could help repair historical harm to marginalized communities and foster inclusivity. Boston’s generative AI policy sets a new precedent in how governments approach AI. By supporting responsible experimentation, transparency, and collective learning, it opens the door to realizing the potential of AI to do good in governance. If more public servants and politicians embrace these technologies, practical experience can inform sensible regulations. Furthermore, generative AI’s ability to simplify communication, summarize conversations, and create appealing visuals can radically enhance government inclusivity and accessibility. Boston’s vision serves as an inspiration for other governments to break free from fear and embrace the opportunities presented by generative AI.
AI Policy and Regulations
The UK should introduce new legislation to control artificial intelligence or risk falling behind the EU and the US in setting the pace for regulating the technology, MPs have said. Rishi Sunak’s government was urged to act as it prepares to host a global AI safety summit at Bletchley Park, home of the Enigma codebreakers, in November. The science, innovation and technology committee said on Thursday the regulatory approach outlined in a recent government white paper risked falling behind others. “The AI white paper should be welcomed as an initial effort to engage with this complex task, but its proposed approach is already risking falling behind the pace of development of AI,” the committee said in an interim report on AI governance. “This threat is made more acute by the efforts of other jurisdictions, principally the European Union and the United States, to set international standards.” The EU, a trendsetter in tech regulation, is pushing ahead with the AI Act, while in the US the White House has published a blueprint for an AI bill of rights and the US senate majority leader, Chuck Schumer, has published a framework for developing AI regulations. The committee report, whose introductory paragraph is written by the ChatGPT chatbot, lists 12 governance challenges for AI that it says must be addressed by policymakers and should guide the Bletchley summit, which will be attended by international governments, leading AI firms and researchers. The technology has risen up the political agenda after breakthroughs in generative AI – the term for tools such as ChatGPT and Midjourney that are trained on vast troves of data taken from the internet – that can generate plausible text, image and audio content from human prompts. The challenges include: addressing bias in AI systems; systems producing deepfake material that misrepresents someone’s behaviour and opinions; lack of access to the data and compute power needed to build AI systems; regulation of open-source AI, where the code behind an AI tool is made freely available to use and adapt; protecting the copyright of content used to build AI tools; and dealing with the potential for AI systems to create existential threats. The government’s AI white paper published in March sets out five guiding principles for managing the technology: safety, transparency, fairness, accountability, and the ability of newcomers to challenge established players in AI. The white paper said there was no intention to introduce new legislation to cover AI and instead expects different regulators – such as the UK data watchdog and the communications regulator, Ofcom – to thread those principles through their work, assisted by the government. However, the document does refer to introducing a “statutory duty” on regulators to follow the principles. The committee report said that commitment alone pointed to the need for an AI bill in the king’s speech, which sets out the government’s legislative agenda for the next parliamentary year. Otherwise, the report said, “other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective that what the UK can offer”. The report also recommended that the safety summit should include as wide a range of countries as possible, amid speculation that China, a big tech and AI power, will not be invited. Asked this week whether China should be invited to Bletchley Park, Greg Clark, the conservative chair of the committee, said: “If this is to be the first global AI summit then to have as many voices there as possible, I think would be beneficial. But it needs to be accompanied with a caveat that we don’t expect that some of the security aspects to be resolved at that level. Our recommendation would be that we need a more trusted forum for that.” A government spokesperson said the potential of AI should be harnessed “safely and responsibly” and the forthcoming AI summit will address the threat of risks and harms from the technology. The spokesperson added that the white paper sets out a “proportionate and adaptable approach to regulation in the UK”. The government has also established a foundation model taskforce, referring to the underlying technology for AI tools such as text or image generators, which will look at the safe development of AI models.
AI Policy and Regulations
Last year, the White House Office of Science and Technology Policy (OSTP) announced that the US needed a bill of rights for the age of algorithms. Harms from artificial intelligence disproportionately impact marginalized communities, the office’s director and deputy director wrote in a WIRED op-ed, and so government guidance was needed to protect people against discriminatory or ineffective AI.Today, the OSTP released the Blueprint for an AI Bill of Rights, after gathering input from companies like Microsoft and Palantir as well as AI auditing startups, human rights groups, and the general public. Its five principles state that people have a right to control how their data is used, to opt out of automated decisionmaking, to live free from ineffective or unsafe algorithms, to know when AI is making a decision about them, and to not be discriminated against by unfair algorithms.“Technologies will come and go, but foundational liberties, rights, opportunities, and access need to be held open, and it’s the government’s job to help ensure that’s the case,” Alondra Nelson, OSTP deputy director for science and society, told WIRED. “This is the White House saying that workers, students, consumers, communities, everyone in this country should expect and demand better from our technologies.”However, unlike the better known US Bill of Rights, which comprises the first 10 amendments to the constitution, the AI version will not have the force of law—it’s a nonbinding white paper.The White House’s blueprint for AI rights is primarily aimed at the federal government. It will change how algorithms are used only if it steers how government agencies acquire and deploy AI technology, or helps parents, workers, policymakers, or designers ask tough questions about AI systems. It has no power over the large tech companies that arguably have the most power in shaping the deployment of machine learning and AI technology.The document released today resembles the flood of AI ethics principles released by companies, nonprofits, democratic governments, and even the Catholic church in recent years. Their tenets are usually directionally right, using words like transparency, explainability, and trustworthy, but they lack teeth and are too vague to make a difference in people’s everyday lives.Nelson of OSTP says the Blueprint for an AI Bill of Rights differs from past recitations of AI principles because it’s intended to be translated directly into practice. The past year of listening sessions was intended to move the project beyond vagaries, Nelson says. “We too understand that principles aren’t sufficient,” Nelson says. “This is really just a down payment. It’s just the beginning and the start.”The OSTP received emails from about 150 people about its project and heard from about 130 additional individuals, businesses, and organizations that responded to a request for information earlier this year. The final blueprint is intended to protect people from discrimination based on race, religion, age, or any other class of people protected by law. It extends the definition of sex to include “pregnancy, childbirth, and related medical conditions,” a change made in response to concerns from the public about abortion data privacy.Annette Zimmermann, who researches AI, justice, and moral philosophy at the University of Wisconsin-Madison, says she’s impressed with the five focal points chosen for the AI Bill of Rights, and that it has the potential to push AI policy and regulation in the right direction over time.But she believes the blueprint shies away from acknowledging that in some cases rectifying injustice can require not using AI at all. “We can’t articulate a bill of rights without considering non-deployment, the most rights-protecting option,” she says. Zimmerman would also like to see enforceable legal frameworks that can hold people and companies accountable for designing or deploying harmful AI.When asked why the Blueprint for an AI Bill of Rights does not include mention of bans as an option to control AI harms, a senior administration official said the focus of it is to shield people from tech that threatens their rights and opportunities, not to call for the prohibition of any type of technology.The White House also announced actions by federal agencies today to curtail harmful AI. The Department of Health and Human Services will release a plan for reducing algorithmic discrimination in health care by the end of the year. Some algorithms used to prioritize access to care and guide individual treatments have been found to be biased against marginalized groups. The Department of Education plans to release recommendations on the use of AI for teaching or learning by early 2023.The limited bite of the White House’s AI Bill of Rights stands in contrast to more toothy AI regulation currently under development in the European Union.Members of the European Parliament are considering how to amend the AI Act and decide which forms of AI should require public disclosure or be banned outright. Some MEPs argue predictive policing should be forbidden because it “violates the presumption of innocence as well as human dignity.” Late last week, the EU's executive branch, the European Commission, proposed a new law that allow people treated unfairly by AI to file lawsuits in civil court.
AI Policy and Regulations
Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More The Federal Trade Commission (FTC) received a new complaint today from the Center for AI and Digital Policy (CAIDP), which calls for an investigation of OpenAI and its product GPT-4. The complaint argues that the FTC has declared that the use of AI should be “transparent, explainable, fair, and empirically sound while fostering accountability,” but claims that OpenAI’s GPT-4 “satisfies none of these requirements” and is “biased, deceptive, and a risk to privacy and public safety.” CAIDP is a Washington, D.C.-based independent, nonprofit research organization that “assesses national AI policies and practices, trains AI policy leaders, and promotes democratic values for AI.” It is headed by president and founder Marc Rotenberg and senior research director Merve Hickok. “The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4,” said Rotenberg in a press release about the complaints. “We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued.” The complaint comes a day after an open letter calling for a six-month “pause” on developing large-scale AI models beyond GPT-4 highlighted the fierce debate around risks vs. hype as the speed of AI development accelerates. Event Transform 2023 Join us in San Francisco on July 11-12, where top executives will share how they have integrated and optimized AI investments for success and avoided common pitfalls. >>Follow VentureBeat’s ongoing generative AI coverage<< FTC has made recent public statements about generative AI The complaint also comes 10 days after the FTC published a business blog post called “Chatbots, deepfakes, and voice clones: AI deception for sale,” authored by Michael Atleson, an attorney at the FTC division of advertising practices. The blog post said that the FTC Act’s “prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive — even if that’s not its intended or sole purpose.” Companies should consider whether they should even be making or selling the AI tool and whether they are effectively mitigating the risks. “If you decide to make or offer a product like that, take all reasonable precautions before it hits the market,” says the blog post. “The FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury.” In a separate post from February, “Keep your AI claims in check,” Atleson wrote that the FTC may be “wondering” if a company advertising an AI product is aware of the risks. “You need to know about the reasonably foreseeable risks and impact of your AI product before putting it on the market. If something goes wrong — maybe it fails or yields biased results — you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a “black box” you can’t understand or didn’t know how to test.” FTC attorney said agency will always apply ‘bedrock’ advertising law principles In an interview with VentureBeat last week, unrelated to the CAIDP complaint and focused solely on advertising law, Atleson said that the basic message of both of his recent AI-focused blog posts is that no matter how new or different the product or service is, the FTC will always apply the “bedrock” advertising law principles in the FTC Act — that you can’t misrepresent or exaggerate what your product can do or what it is, and you can’t sell things that are going to cause consumers substantial harm. “It doesn’t matter whether it’s AI or whether it turns out we’re all living in a multiverse,” he said. “Guess what? That prohibition of false advertising still applies to every single instance.” He added that admittedly, AI technology development is happening quickly. “We’re certainly right in the middle of a corporate rush to get a certain type of AI product to market, different types of generative AI tools,” he said. The FTC has focused on AI for a while now, he added, but the difference is that AI is more in the public eye, “especially with these new generative AI tools to which consumers have direct access.” Federal AI regulation may come from FTC With the growth of AI and speed of its development, legal experts say that FTC rulemaking about AI could be coming in 2023. In a December 2022 article written by Alston and Bird, federal AI regulation may be emerging from the FTC even though AI-focused bills introduced in Congress have not yet gained significant support. “In recent years, the FTC issued two publications foreshadowing increased focus on AI regulation,” the article said, stating that the FTC had developed AI expertise in enforcing a variety of statutes, such as the Fair Credit Reporting Act, Equal Credit Opportunity Act and the FTC Act. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
AI Policy and Regulations
Schumer tees up Senate plan for AI regulation Senate Majority Leader Chuck Schumer (D-N.Y.) on Wednesday outlined his two-pronged approach for crafting artificial intelligence (AI) policy, as Congress and the administration race to regulate the booming industry. The Democratic leader unveiled his framework for AI regulation and announced a series of expert forums to guide Congress as lawmakers tackle a range of issues posed by the technology, from national security concerns to copyright law. Schumer said his SAFE Innovation Framework for AI aims to incorporate safeguards raised by stakeholders while still promoting innovation in the industry. “I call it that because the right framework must prioritize innovation. It’s essential to our country,” he said during a speech at the Center for Strategic and International Studies. “The U.S. has always been a leader in innovating on the greatest technologies that shape the modern world.” The Senate majority leader unveiled more details of the framework, which was first announced in April, one day after President Biden met with tech leaders to discuss AI and a day after a bipartisan House bill on the technology was introduced. Schumer’s framework has five key pillars: security, accountability, protecting foundations, explainability and innovation. In his speech Wednesday, the senator said innovation must be “our North Star” in crafting regulation and stressed the need for bipartisanship in setting the ground rules for AI. To help in “laying down a new foundation for AI policy,” Schumer said Congress should hear from top minds in AI through a series of forums later this year. “If we take the typical path — holding Congressional hearing with opening statements and each member asking questions five minutes at a time, often on different issues — we simply won’t be able to come up with the right policies,” he said. “By the time we act, AI will have evolved into something new.” Schumer said the forums “can’t and won’t” replace efforts already underway in Congress. Members of both parties have zoomed in on issues posed by AI. In the House, a bipartisan bill was introduced Tuesday that would create a commission to review, recommend and establish regulations for AI. The Senate Judiciary Committee, among others, have held a series of hearings on AI risks and opportunities — including a high-profile hearing in May featuring Sam Altman, the CEO of OpenAi, the company behind the popular ChatGPT tool. ChatGPT burst onto the scene in November and has skyrocketed in popularity since. The tool, along with the launch of rival AI chatbots like Google’s Bard and other generative AI video and image tools, has left U.S. lawmakers and global leaders mulling AI regulation. Schumer stressed the urgency of congressional action to set guardrails for the technology that “align with democratic norms.” If not, he said, there’s a risk others — namely the Chinese Communist Party — could set standards and “democracy could enter an era of steep decline.” Schumer has established a bipartisan group of senators to lead on the issue alongside himself: Sen. Martin Heinrich (D-N.M.), Sen. Todd Young (R-Ind.) and Sen. Mike Rounds (R-S.D.). The framework is just one piece of the federal government’s sweeping attempt to rein in the booming AI industry. On Tuesday, Biden met with tech leaders in San Francisco to discuss AI. Biden said he expects to see “more change in the next 10 years than we’ve seen in the last 50 years and maybe beyond that,” adding that AI is driving that change. “We need to manage the risks to our society, to our economy, and our national security,” he said. “I have a lot to learn and we also have a lot to discuss” The White House has said that AI is a top priority for the presiden, and chief of staff Jeff Zients “is overseeing a process to rapidly develop decisive actions” on AI to take over the coming weeks, a White House official told The Hill on Tuesday. The meeting in California followed previous meetings Biden and Vice President Harris had with executives of companies leading the way on AI, including Google and Microsoft, at the White House in May. Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
AI Policy and Regulations
The Biden Administration is reportedly set to unveil a broad executive order on artificial intelligence next week. According to The Washington Post, the White House’s “sweeping order” would use the federal government’s purchasing power to enforce requirements on AI models before government agencies can use them. The order is reportedly scheduled for Monday, October 30, two days before an international AI Safety Summit in the UK. The order will allegedly require advanced AI models to undergo a series of assessments before federal agencies can adopt them. In addition, it would ease immigration for highly skilled workers, which was heavily restricted during the Trump administration. Federal agencies, including the Defense Department, Energy Department and intelligence branches, would also have to assess how they might incorporate AI into their work. The report notes that the analyses would emphasize strengthening the nation’s cyber defenses. On Tuesday evening, the White House reportedly sent invitations for a “Safe, Secure, and Trustworthy Artificial Intelligence” event for Monday, October 30, hosted by President Biden. The Washington Post indicates that the executive order isn’t finalized, and details could still change. Meanwhile, European officials are working on AI regulations across the Atlantic, aiming for a finalized package by the end of the year. The US Congress is also in the earlier stages of drafting AI regulations. Senator Charles Schumer (D-NY) hosted AI leaders on Tuesday at the second AI Insights Forum. AI regulation is currently one of the most buzzed-about topics in the tech world. Generative AI has rapidly advanced in the last two years as image generators like Midjourney and DALL-E 3 emerged, producing convincing photos that could be disseminated for disinformation and propaganda (as some political campaigns have already done). Meanwhile, OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Bard and other advanced large language model (LLM) chatbots have arguably sparked even more concern, allowing anyone to compose fairly convincing text passages while answering questions that may or may not be truthful. There are even AI models for cloning celebrities’ voices. In addition to misinformation and its potential impact on elections, generative AI also sparks worries about the job market, especially for artists, graphic designers, developers and writers. Several high-profile media outlets, most infamously CNET, have been caught using AI to compose entire error-ridden articles with only the thinnest of disclosures.
AI Policy and Regulations
(Bloomberg) -- The US warned the European Union that its proposed law to regulate artificial intelligence would favor companies with the resources to cover the costs of compliance while hurting smaller firms, according to previously undisclosed documents. Most Read from Bloomberg The US analysis focuses mostly on the European Parliament version of the AI Act, which includes rules on generative AI. Some rules in the parliament law are based on terms that are “vague or undefined,” according to the documents, which were obtained by Bloomberg News. The analysis is Washington’s most detailed position on the EU legislation that could set the tone for other countries writing rules for AI. One US concern is that the European Parliament focuses on how AI models are developed, whereas the US would prefer an approach that focuses on the risk involved in how these models are actually used. The analysis warns that EU regulations risk “dampening the expected boost to productivity and potentially leading to a migration of jobs and investment to other markets.” The new rules would also likely hamper “investment in AI R&D and commercialization in the EU, limiting the competitiveness of European firms,” because training large language models is resource-intensive, it said. Read More: Biden Vows to Stay ‘Vigilant’ on AI as Firms Unveil Safeguards The US State Department feedback, including a line-by-line edit of certain provisions in the law, was shared with European counterparts in recent weeks, according to people familiar with the matter who asked not to be identified discussing private documents. One of the people said the comments were offered in the spirit of cooperation and alignment of values. Some of the US concerns have been echoed by EU member countries in response to the European Parliament version, the person said. The State Department and the European Commission declined to comment. The EU Parliament’s AI Act, which lawmakers voted on in June, would require more transparency about the source material used to train the large language models that underpin most generative AI products. That vote cleared the way for negotiations among parliament, the European Commission and member states, and officials hope to have a deal by the end of the year for the final rules. The US analysis is in keeping with the State Department’s calls for a more hands-off approach to the technology so as not to stifle innovation. Secretary of State Antony Blinken objected to a number of the EU Parliament’s proposals to control generative AI during a meeting with commission officials in Sweden at the end of May. Read More: The Man Keeping Musk, Zuckerberg and Big Tech in Line in Europe At the same time, Washington has given mixed messages to EU policymakers about its views on regulation. While the US pushed back when the commission first proposed the AI Act in 2021, some American officials have begun to view mandatory rules more favorably as AI developers and ethicists warn about the possible harms from the technology. Aaron Cooper, head of global policy at BSA The Software Alliance, a trade group that has engaged with both US and EU officials regarding AI regulation, said it’s important for countries’ AI rules to agree on basics, including definitions. “The most important thing that the Biden administration can do is continue to have a good candid conversation with their European counterparts about what the objectives are for AI policy,” Cooper said. Read More: ChatGPT Risks Divide Biden Administration Over EU’s AI Rules While the EU is pressing ahead with the AI Act, it is still debating questions about how to regulate the building blocks of the technology, known as foundation models, and general purpose AI. Some nations worry that over-regulating the technology will make Europe less competitive. After OpenAI Inc. introduced ChatGPT and ignited a boom in generative AI last year, the European Parliament added rules that explicitly target the technology. Previous versions of the EU’s AI Act followed risk-based focus favored by the US for AI regulation, which was also the approach laid out in a framework released earlier this year by the Commerce Department’s National Institute of Standards and Technology. --With assistance from Iain Marlow. Most Read from Bloomberg Businessweek ©2023 Bloomberg L.P.
AI Policy and Regulations
Ashton Kutcher says your company will probably be 'out of business' if you're 'sleeping' on AI - During a panel, Ashton Kutcher said that companies not using AI tools like ChatGPT will shutter. - The actor and investor said AI like ChatGPT has the "power" and "potential" to benefit humanity. Stars, they're just like us — they have been taken aback by AI tools like OpenAI's ChatGPT. In fact, actor-turned-investor Ashton Kutcher is so impressed that he thinks businesses that aren't using AI tools will suffer. "If you're a company, and you're sleeping on this, you're probably going to be out of business," he said about ChatGPT during a panel at the Milken Institute's Global Conference on Monday. "It's that good and that powerful from a utilization standpoint." He said that AI has the "promise" and "potential" to benefit a range of industries. Education, for instance, will be revolutionized by AI, Kutcher said. Large language models, he added, are excellent at the analogies and metaphors," which means they can break down complex concepts like "homomorphic encryption" — or the process of performing tasks on encrypted data — in an easy, digestible way. "The ability to have an always-on tutor that is personalized to you is an extraordinary advantage," Kutcher says. "We're going to have students and kids and people across the world that are going to be able to use this to learn about something in a way that they've never learned about it before." It's not just students. Kutcher thinks that AI can make professional services accessible to all. "We're going to have personalized medicine; we're going to have personalized law; we're going to have personalized education; we're going to personalize everything," Kutcher said. Personalized law, for example, would mean that the people who "can't afford" legal services will finally be able to access them at a lower price:"One paralegal can have 1,000 AI agents working underneath them," he said. "I look at AI as an equity and inclusion play that is massive," he says. Kutcher's thoughts on AI come in response to the meteoric rise of generative AI tools. ChatGPT attracted over 100 million users two months after it launched last November, inspiring people to use the chatbot to boost their productivity and make their lives easier. Some are even using it to start their own businesses. The buzz around these AI tools could be why Kutcher's venture capital firm launched a $240 million AI fund this month that already has investments in major AI companies like OpenAI, Anthropic, and Stability AI, Variety reported. He said that his firm will continue to support AI startups for the long haul. "We believe this is potentially the most significant technology we will experience since the advent of the internet," Kutcher told Variety in a statement. "The foundation model layer companies are defining the category, and, in our view, they have the power to transform businesses and everyday life." - Go First airlines flights to remain cancelled on May 3, 4 amid financial crunch - Go First says 'serial failure' of P&W engines forced airline to approach NCLT - All about this year’s Met Gala that celebrates legendary designer Karl Lagerfeld - Amazon Great Summer Sale 2023 – Best Kickstarter deals - Adani Total Gas reports 8% rise in FY23 net profit to ₹546 crore, announces plans to expand EV charging points to 3,000
AI Startups
To get a roundup of TechCrunch’s biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here. Happy Tuesday! Couple of quick things: Apply now to pitch at TechCrunch Live’s Atlanta pitch-off. Also, today’s head scratcher of an article is from Devin, who reports that Acapela lets anyone back up their own voice. Until literally 10 seconds ago, we had no idea that that was even possible, and now we really want to do it. The TechCrunch Top 3 - Big bet on AI: Like other Big Tech companies we’ve talked about before, IBM took its turn this week unveiling what Kyle writes is “a slew of new AI services,” including IBM Watsonx, which will “deliver tools to build AI models and provide access to pretrained models for generating computer code, text and more.” - More layoffs: LinkedIn is phasing out its China jobs app and with it goes 716 jobs, Catherine reports. The company attributed the app’s demise to “fierce competition and a challenging macroeconomic climate.” - Gotta get paid: A new WhatsApp feature enables users to pay businesses within the app. It is already making the rounds in South America and Asia and now lands in Singapore, Ivan reports. Startups and VC Shopify last week announced that it would be the latest Big Tech firm to undergo mass layoffs. The company is cutting 20% of its 11,600-person staff. The news arrived during earnings that beat Wall Street expectations, shooting its stock price up as a result. Also included in the announcement was news that the Canadian e-commerce giant had found a new owner for 6 River Systems, the warehouse automation firm it purchased in 2019 for nearly a half-billion dollars, reports Brian. UVeye’s automated vehicle inspection technology may have started out as a system to detect security threats, but the six-year-old Israeli startup has found deep interest and investment from the automotive sector, and it lands the startup a $100 million round of investment from GM and CarMax, among others, Kirsten reports. And we have five more for you: - Rotten zomatos: Zomato shares plunge after Invesco cut rival Swiggy’s valuation, reports Manish. - Computers explaining computers: OpenAI’s new tool attempts to explain language models’ behaviors, by Kyle. - Stretching those pennies: Charlie’s new banking app aims to help seniors “make the most of their limited resources,” reports Mary Ann. - PNW pot of gold: Ascend raises $25 million for pre-seed AI startups in the Pacific Northwest, writes Becca. - Texts, but more Instagramified: This app, backed by Marissa Mayer and Peter Thiel, is making texts more expressive, reports Natasha M. Hidden in plain sight: 5 red flags for investors Investors may review hundreds of pitches each year, which means they’re compelled to make decisions quickly. It’s not a great system — because it’s largely based on relationships, bias is baked into the recipe. And due to the rapid pace of dealmaking, “even the most experienced angel investors — and VCs — can overlook red flags that are subtle and not immediately apparent,” writes Marjorie Radlo-Zandi. Drawing from her years as a mentor, an angel and a board member, she shares five scenarios that should give investors second thoughts — for example, “where the founder has a romantic or spousal relationship with a staff member.” Two more from the TC+ team: - SadFin: We’re close to peak pessimism around fintech, by Alex. - How do you copyright that?: Generative AI and copyright law: What’s the future for IP? by Gai Sher and Ariela Benchlouch. Big Tech Inc. Niantic’s new game Peridot personifies cuteness overload. In fact, Amanda called it “Pokémon GO meets Tamagotchi,” if you recall the toy from the 1990s. Amanda describes the game as “a pet simulator, but it takes place completely within augmented reality (AR). You can feed, play with, walk, breed and socialize with your Peridots, but don’t worry — if you take a break from the game, your creatures will not poop all over your screen and/or die.” Meanwhile, TikTok’s parent ByteDance is eyeing a new role as an e-publisher in the United States, even going so far as to submit a trademark application with the U.S. Patent and Trademark Office for book publishing products and services with the name “8th Note Press.” Rita has more. And we have five more for you: - Music to our ears: Apple is launching Final Cut Pro and Logic Pro on iPad later this month, reports Aisha. - Car talk: General Motors has a new head of software, tapping former Apple cloud services executive Mike Abbott, writes Kirsten. Meanwhile, Porsche is looking at some automated driving functions and is now working with Mobileye to provide that in future models, Rebecca reports. - Appealing, isn’t it?: Former FTX CEO Sam Bankman-Fried seeks to dismiss most of the U.S. charges against him, reports Jacquelyn. - For the love of money: Meta revamps its Ads on Reels monetization program to make it performance-based. Aisha has more. - It was written in the stars: Satellite rivals Viasat and Inmarsat got the green light from the U.K.’s Competition and Markets Authority to continue their $7.3 billion merger, Paul writes.
AI Startups
If you look past the financial headlines, what are today’s AI startups building? News coverage of the AI boom has been hectic and mainly covered a few categories: Financial, big tech, concern and hype, and startup activity. The financial side is simple: Investors are working to put capital into companies that are either building new AI-powered products or embedding it into existing products. The Exchange explores startups, markets and money. The big tech collection is also easy to understand: Google and Microsoft are racing to own the cloud layer underneath major AI technology and building generative AI services into their existing productivity and search products. Meta, Amazon and Baidu are also busy. The list goes on. Hype is not hard to find, nor is the doomer perspective. Reality will likely be somewhere in between. I suspect that we’ll grow accustomed to having AI-powered tools and services around us at all times, and some of the use cases will be positive while others will prove negative. But these conversations often don’t actually discuss what is being built. So, this morning, I’ll go back through our recent generative AI coverage to provide a few notes on what folks are working to create. I am approaching the topic as a generalist who has a pro-tech, pro-progress and pro-capitalism perspective tempered by a dash of anxiety. Call me an optimist with an asterisk. Fair enough? Let’s get to work. Looking past the money We’re going to look at Together, Contextual AI, Instabase, Adept and Cohere.
AI Startups
German AI start-up Aleph Alpha has raised a Series B funding round of $500 million from a consortium of seven new investors, as well as existing investors from previous rounds. Founded in 2019, Aleph Alpha’s funding may be dwarfed by that of Microsoft-backed OpenAI ($11.3B), but the startup makes great play of the fact that its clients have ‘full sovereignty’ over the implementation of AI into their businesses. So although it’s tempting to compare Aleph Alpha with other foundational models like Open AI, it’s much closer to startups like France’s Mistral ($112 million in funding), which works with large corporates to deploy LLMs internally. The consortium is led by the Innovation Park Artificial Intelligence (Ipai). The round was co-led by Schwarz Group (the owners of the Lidl supermarket chain) and Bosch Ventures. Other new investors include Berlin-based Christ&Company Consulting, Hewlett Packard Enterprise and SAP, as well as Burda Principal Investments. Existing institutional investors also participated. Ipai — an AI hub based in the south-west German city of Heilbronn and was jointly set up by a foundation established by Dieter Schwarz, the Lidl founder — is funded by the state of Baden-Württemberg and is staking its claim to become Europe’s largest AI cluster. In July SAP invested in Aleph Alpha alongside two other investments in AI startups Anthropic and Cohere. Back in 2021 the startup raised $27 million in a Series A funding co-led by Earlybird VC, Lakestar and UVC Partners, following a seed round of €5.3 million from LEA Partners, 468 Capital and Cavalry Ventures in November 2020. Aleph Alpha, which has about 70 employees, majors on areas such as EU-regulated data protection, security and often works with governmental bodies, law enforcement and healthcare. In a statement Jonas Andrulis, CEO and founder of Aleph Alpha, said the company “will continue to expand its offerings while maintaining independence and flexibility for customers in infrastructure, cloud compatibility, on-premise support and hybrid setups.”
AI Startups
Generative AI is hot among venture capital firms now, with $4.5 billion invested in 2022. Narrato, a AI content creation and collaboration platform, announced today it has joined the ranks of other generative AI startups with VC funding. Based in San Francisco, Narrato raised a $1 million pre-seed round led by AirTree Ventures, the Australian firm that was an early investor in Canva, Linktree and Employment Hero. Other participants in the round included OfBusiness, a B2B e-commerce platform, and serial entrepreneur Shreesha Ramdas. Narrato is used by customers including payments SaaS startup ChargeBee, language learning app Preply and customer onboarding software Rocketlane. It will work with AirTree to expand across the United States. Founded in January 2022 by Sophia Solanki, an Australian serial entrepreneur whose previous startup was content marketing and social media management SaaS platform Drumup. Narrato foundder Sophia Solanki Solanki told TechCrunch that the Narrato’s team first idea was a build the “Github for content,” with a workspace for marketing teams that offers automation, collaboration and publishing, among other features. But they had also been tracking generative AI over the past couple of years and “with its current state of maturity, it’s an extremely powerful tool for content creation.” The Narrato team decided to embed generative AI into different stages of the content process. Narrato’s main feature is a AI content assistant that helps with planning, including automatic brief generation, content creation and optimization. It also has collaboration and workflow tools and automated publishing features. Solanki explained that for both AI and non-AI content creation, users chose from templates, including blogs, articles, web copy, emails, video scripts, social media content and art. Narrato also has a chat-like format for content creation through AI, and plans to expand its selection of generative AI-assisted content templates to hundreds. Once briefs are created through generative AI, writers can use them for SEO guides and outlines. They also include research and benchmarking to help content creators reach a wider audience. Solanki named several startups as Narrato’s indirect and direct competitors. Notion, Clickup and Airtable are used by content creators for content project management, while Jasper and Copy.ai are content creation platforms that also use AI. How Narrato wants to differentiate is by embedding generative AI into the entire marketing and content creation workflow in a single platform. In a statement about the funding, AirTree partner Elicia McDonald said, “Having identified the massive opportunity for generative AI in content marketing and already successfully built two companies in this space, Sophia knows the market inside out. She has a strong connection to the problem and has achieved impressive traction for a company at such an early stage, especially considering they’ve bootstrapped to date.”
AI Startups
Artificial intelligence (AI) is a rapidly growing field with the potential to revolutionize many industries. As AI technology continues to develop, there will be an increasing demand for skilled AI professionals. This means that there are many opportunities to make money with AI. Here are a few ways to make money with AI: Develop AI products and services. Offer consulting services. Teach AI. Invest in AI companies. Develop AI products and services One way to make money with AI is to develop AI products and services. This could include anything from AI-powered chatbots to AI-powered marketing platforms. If you have the skills and experience to develop AI products and services, you can start your own business or work for a company that is developing AI solutions. Here are some tips for developing AI products and services: Start with a problem. The best AI products and services solve real-world problems. Before you start developing an AI product or service, take the time to identify a problem that you can solve with AI. Do your research. Once you have identified a problem, do your research to see if there are other AI products or services that are already solving that problem. If there are, you will need to find a way to differentiate your product or service from the competition. Build a prototype. Once you have a good idea of what your AI product or service will do, build a prototype. This will help you to test your idea and to get feedback from potential users. Get feedback. Once you have a prototype, get feedback from potential users. This feedback will help you to improve your product or service. Iterate. Once you have received feedback, iterate on your product or service. This means making changes to your product or service based on the feedback you have received. Launch. Once you are satisfied with your product or service, launch it. This means making it available to potential users. Offering consulting services Another way to make money with AI is to offer consulting services. This could involve helping businesses adopt AI, develop AI strategies, or choose the right AI tools. If you have the skills and experience to consult on AI, you can start your own business or work for a company that provides AI consulting services. Here are some tips for offering consulting services on AI: Build your network. One of the best ways to find clients is to build your network. Attend industry events, connect with other consultants, and let people know that you offer consulting services on AI. Become an expert. The more you know about AI, the more valuable you will be to businesses. Make sure you stay up-to-date on the latest AI trends and developments. Be passionate. Businesses want to work with consultants who are passionate about AI. If you’re not passionate about AI, it will show in your work. Be reliable. Businesses need to be able to rely on their consultants. Make sure you meet deadlines, deliver high-quality work, and be responsive to client needs. Here are some specific examples of how you can offer consulting services on AI: Help businesses adopt AI. Many businesses are interested in adopting AI, but they don’t know where to start. You can help them by providing guidance on which AI technologies are right for their business, how to implement AI, and how to measure the success of their AI initiatives. Develop AI strategies. Businesses need to have a clear strategy for how they will use AI. You can help them by developing a strategy that aligns with their business goals, identifies the right AI technologies to use, and outlines a plan for implementation. Choose the right AI tools. There are many different AI tools available, and it can be difficult for businesses to choose the right ones. You can help them by evaluating different AI tools, considering their business needs, and making recommendations. Teaching AI You can also make money with AI by teaching others about AI. This could involve teaching a class on AI at a local college or university, creating an online course on AI, or writing a book on AI. If you have the knowledge and passion to teach AI, you can start your own business or work for a company that provides AI education. Here are some tips for teaching AI: Find your niche. What are you passionate about in AI? What do you know a lot about? Once you know your niche, you can start to develop your teaching materials. Make it engaging. People learn best when they are engaged. Use a variety of teaching methods, such as lectures, demonstrations, and group activities. Be patient. Teaching can be challenging, especially when you are teaching a new topic. Be patient with your students and be willing to answer their questions. Be passionate. If you are not passionate about AI, it will show in your teaching. Students can tell when a teacher is not interested in the material, and they are less likely to be engaged. Here are some specific examples of how you can teach AI: Teach a class on AI at a local college or university. This is a great way to share your knowledge of AI with a large group of people. You can also use this opportunity to conduct research on AI and publish your findings. Create an online course on AI. This is a great way to reach a wider audience and teach AI to people who may not have access to a traditional college or university. Write a book on AI. This is a great way to share your knowledge of AI with a global audience. You can also use this opportunity to explore AI in more depth than you would be able to in a traditional classroom setting. No matter how you choose to teach AI, be sure to do your research and make sure that you are qualified to teach the material. You should also be prepared to answer questions from your students and provide them with support. If you are passionate about AI and have the skills to teach it, then you can make a good living in this growing field. Investing in AI companies Finally, you can make money with AI by investing in AI companies. This could involve investing in AI startups or AI-focused mutual funds. If you are willing to take on some risk, you can potentially make a lot of money by investing in AI companies. Here are some tips for investing in AI companies: Do your research. Before you invest in any AI company, do your research and understand the company’s business model, technology, and management team. Diversify your portfolio. Don’t put all your eggs in one basket. Diversify your portfolio by investing in a variety of AI companies. Invest for the long term. The AI industry is still in its early stages of development. Invest for the long term and don’t expect to get rich quick. Conclusion There are many ways to make money with AI. If you are interested in pursuing a career in AI, there are many resources available to help you get started. With the right skills and experience, you can make a good living in the growing field of AI. Additional tips: Stay up-to-date on the latest AI trends and developments. The AI field is constantly evolving, so it’s important to stay up-to-date on the latest trends and developments. This will help you make informed decisions about your AI career. Network with other AI professionals. Networking with other AI professionals is a great way to learn about new opportunities and get advice from experienced professionals. You can network at AI conferences, online forums, and through social media. Be patient. It takes time to build a successful career in AI. Don’t expect to become an overnight success. Be patient, work hard, and never give up on your dreams.
AI Startups
Developer velocity, the speed at which an organization ships code, is often impacted by necessary but lengthy processes like code review, writing documentation and testing. Inefficiencies threaten to make theses processes even longer. According to one source, developers waste 17.3 hours per week due to technical debt and bad — i.e. nonfunctional — code. Machine learning Ph.D. Matan Grinberg and Eno Reyes, previously a data scientist at Hugging Face and Microsoft, thought that there had to be a better way. During a Hackathon in San Francisco, Grinberg and Reyes built a platform that could autonomously solve simple coding problems — a platform that they later came to believe had commercial potential. After the hackathon, the pair expanded the platform to handle more software development tasks and founded a company, Factory, to monetize what they’d built. “Factory’s mission is to bring autonomy to software engineering,” Grinberg told TechCrunch in an email interview. “More concretely, Factory helps large engineering organizations automate parts of their software development lifecycle via autonomous, AI-powered systems.” Factory’s systems — which Grinberg calls “Droids,” a term Lucasfilm might have a problem with — are built to juggle various repetitive, mundane but normally time-consuming software engineering tasks. For example, Factory has “Droids” for reviewing code, refactoring or restructuring code and even generating new code from prompts a la GitHub Copilot. Grinberg explains: “The review Droid leaves insightful code reviews and provides context for human reviewers on every change to the codebase. The documentation Droid generates and continually updates documentation as needed. The test Droid writes tests and maintains test coverage percentage as new code is merged. The knowledge Droid lives in your communication platform (e.g. Slack) and answers deeper questions about the engineering system. And the project Droid helps plan and design requirements based on customer support tickets and feature requests.” All of Factory’s Droids are built on what Grinberg refers to as the “Droid core”: an engine that ingests and processes a company’s engineering system data to build a knowledge base, and an algorithm that pulls insights from the knowledge base to solve various engineering problems. A third Droid core component, Reflection Engine, acts as a filter for the third-party AI models that Factory leverages, enabling the company to implement its own safeguards, security best practices and so on on top of those models. “The enterprise angle here is that this is a software suite that allows engineering organizations to output better product faster, while also improving engineering morale by lightening the load of tedious tasks like code review, docs and testing,” Grinberg said,. “Additionally, due to the autonomous nature of the Droids, little is required by way of user education and onboarding.” Now, if Factory can consistently, reliably automate all those dev tasks, the platform would pay for itself indeed. According to a 2019 survey by Tidelift and The New Stack, developers spend 35% of their time managing code, including testing and responding to security issues — and less than a third of their time actually coding. But the question is, can it? Even the best AI models today aren’t above making catastrophic mistakes. And generative coding tools can introduce insecure code, with one Stanford study suggesting that software engineers who use code-generating AI are more likely to cause security vulnerabilities in the apps they develop. Grinberg was upfront about the fact that Factory didn’t have the capital to train all of its models in house — and thus is at the mercy of third-party limitations. But, he asserts, the Factory platform is still delivering value while relying on third-party vendors for some AI muscle. “Our approach is building these AI systems and reasoning architectures, making use of cutting-edge … models and establishing relationships with customers to deliver value now,” Grinberg said. “As an early startup, it’s a losing battle to train [large] models. Compared to incumbents, you have no monetary advantage, no chip access advantage, no data advantage and (almost certainly) no technical advantage.” Factory’s long-term play is to train more of its own AI models to build an “end-to-end” engineering AI system — and to differentiate these models by soliciting engineering training data from its early customers, Grinberg said. “As time goes on, we’ll have more capital, the chip shortage will clear up and we’ll have direct access (with permission) to a treasure trove of data (i.e. the historical timeline of entire engineering organizations),” he continued. “We’ll build Droids to be robust, fully autonomous — with minimal required human interaction — and tailored to customers’ needs from day one.” Is that an overly optimistic view? Perhaps. The market for AI startups grows more competitive by the day. But to Grinberg’s credit, Factory’s already working with a core group of around 15 companies. Grinberg wouldn’t name names, save the clients — which have used Factor’s platform to author thousands of code reviews and hundreds of thousands of lines of code to date — range in size from “seed stage” to “public.” And Factory recently closed a $5 million seed round co-led by Sequoia and Lux with participation from SV Angel, BoxGroup, DataBricks CEO Ali Ghodsi, Hugging Face co-founder Clem Delangue and others. Grinberg says that the new capital will be put toward expanding Factory’s six-person team and platform capabilities. “The major challenges in this AI code generation industry are trust and differentiation,” he said. “Every VP of engineering wants to improve their organization’s output with AI. What stands in the way of this is the unreliable nature of many AI tools, and the reticence of large, labyrinthine organizations to trust this new, futuristic sounding technology … Factory is building a world where software engineering itself is an accessible, scalable commodity.”
AI Startups
Welcome to Startups Weekly. Sign up here to get it in your inbox every Friday afternoon. The event horizon for when we can expect to end up in (literal) hot water when it comes to climate has come a lot closer. You wanna know how close? The climate deadline is close enough that it is interesting to VCs again. VC is an asset class with a 10-year cycle. In other words: Some of the most powerful money people believe they will see a return on their investments in the next 10 years. In a nutshell, as I wrote in my TC+ column this week, that scares the crap out of me. Social media dramaaaaaaaaaa I’m sure Twitter is relieved to see that Reddit is taking many of the headlines this week. The Reddit leadership team seems to have hit the slow-mo button and taken control of a couple of trains before hitting the “full steam ahead” button and pointing the trains at each other. This slow-motion train crash is quite a thing, and it’s all related to Reddit’s new API pricing. Popular third-party Reddit app Apollo announced it was shutting down, and Reddit’s CEO wasn’t a big fan of how that went down, giving the Apollo developer both barrels in a drama-filled AMA. Reddit goes down briefly, subreddits and moderators were protesting — thousands of them, in fact — and it seems like a lot of them are planning to stay shut down indefinitely. Good heavens, that’s a one-way ticket to Yikes City all around. Elon shouldn’t be too relieved, though; the bird sanctuary got its own headlines, and most of them were … not super encouraging. On the bright side, Elon didn’t have to CEO anymore, as Linda Yaccarino officially took the big chair as Twitter CEO. The very next news story was Twitter is being evicted from its Boulder office over unpaid rent. Whoops. Alex broke down how far Twitter’s advertising revenues have fallen. TL;DR: It ain’t pretty. - Easier money for YouTube creators: Ivan reports that YouTube is lowering the barrier to be eligible for its monetization program. - Dark clouds in the blue sky: Morgan reports that Bluesky’s growing pains strain its relationship with Black users. - LOL just kidding, as you were: Taylor reports that Twitch backtracks on changes to branded content rules after streamer backlash. The robots are coming! The robots are coming! AI keeps dominating our news coverage, as there’s a lot of movement in that space. France’s Mistral AI blows raised a $113 million seed round to take on OpenAI. It may prove to have an advantage in the notoriously privacy-focused EU — but, not gonna lie, I also raised an eyebrow at the company’s $260 million valuation — that’s a lot of equity to give up in a seed round. There was a whole bunch of corporate AI news, not all of which is relevant to startups, but the recap is that Meta open sources an AI-powered music generator, while Itoka wants to license AI-generated music via the blockchain. I have made little secret of my disdain for blockchain tech in general (and the painful irony of blockchain for climate in particular), but that one seems like a particularly potent nonstarter. Prove me wrong, Itoka, prove me wrong. Apropos potent — Morgan argues that Blush, the AI lover from the same team as Replika, is more than just a sexbot. I’m kind of into it, actually — sexting is fun, and I’m intrigued by the creative possibilities of getting saucy with a robot. As long as I don’t get a textually transmitted infection. Ahem. - While my gu-AI-tar gently weeps/ Strawberry f-AI-lds forever/In m-AI life: Darrell reports that Paul McCartney used AI to make a new Beatles song. - Gotta watch where you point that thing: Natasha reports that U.K. watchdogs urge founders to not rush generative AI apps to market without tackling privacy risks. - Hey that looks just like me: Ingrid reports that Hyper raises $3.6M from Amazon and more for its iPhone-based, VTuber-friendly avatar platform. - Half a b-AI-llion dollars: Kyle reports that Salesforce pledges to invest $500M in generative AI startups. Dude, where’s my … do we even call ‘em cars anymore? Carvana was on an incredible share-price rally, until it hit the guard rails. Harri and Alex pick apart the soaring heights, the crushing depths and the weird, in-between bounce-back the company seems to find itself in. The used-car platform saw an epic stock surge where the shorted stock surged 56%, as the company predicts record profits. Sucks if you were one of the WallStreetBets bros sure of the company’s demise, but I’m sure the startup itself was relieved. The other big story that’s been on our radar is Tesla and its charging standard. We reported that GM and Ford could help spark a charging standards war and concluded that EV charger networks are turning to the Tesla standard, as support for it accelerates. On TC+, Tim wonders whether Tesla’s Supercharger network will strain under the weight of GM and Ford deals — but also that Tesla has a winner on its hands. Back in February, Google added EV chargers to its Google Maps product, and this week, Apple followed suit — Apple Maps will show open spots near you. Also this week, I lamented that EVs are going backward, in that they are getting bigger, heavier and stupider — only for Harri to report on two counterpoints: Telo bets America is ready for a dreamy little pickup and Fiat’s little Topolino concept that looks totes adorbs. Yass. Not exactly startup news, but worth keeping on your radar as a startup founder in this space: - Moar power! Moar range! Moar North Star! Matt reports that The 2024 Polestar 2 features more power and range. - Get in! We’re going places! Fresh on the heels of the EV-powered Lightship RV, Kirsten reports that Pebble Mobility wants to build the iPhone of electric RVs. - Bad startup! No IPO for you! Alex reports that Turo’s Q1 2023 results indicate it may be awhile until we see its IPO. - Look East: There’s a lot of interesting EV news happening around the world, in particular in China. BYD is overtaking Tesla, but it chooses to forego the U.S. market for now. Meanwhile, Nio said it wasn’t going to join an EV price war, but now it’s cutting $4K across all models. Top reads on TechCrunch this week - Hey, that thing looks just like you: Kyle reports that Synthesia secures $90 million for AI that generates custom avatars. - Like reading glasses but different: I wrote about the Sol Reader, which is a VR headset exclusively for reading books. - Here’s something we can all agree on: Rebecca reports that nobody is happy with NYC’s $18 delivery worker minimum wage. - AI 101: One of my favorite TC writers, Devin, put together an AI overview in case you only just woke up from a five-year coma: Everything you need to know about artificial intelligence. - Bye bye, Blue: Brian reports that Blue owned the consumer podcast mic market, but that the brand is being phased out. - I’m basically 90% chatbot already, but Amanda reports that there’s now an app for that: Teaser’s AI dating app turns you into a chatbot. - How to learn from the big bois: Finally, I wrote a TC+ piece where I’m arguing that you can secure your startup’s future by watching the big corporations. That’s it, folks! If you want to send me news tips, here’s what I cover and how to reach me. And if you love a good pitch deck, here’s the full list of my Pitch Deck Teardowns on TechCrunch+ and how you can submit your own. Peace, see you next week! Get your TechCrunch fix IRL. Join us at Disrupt 2023 in San Francisco this September to immerse yourself in all things startup. From headline interviews to intimate roundtables to a jam-packed startup expo floor, there’s something for everyone at Disrupt. Save up to $600 when you buy your pass now through August 11, and save 15% on top of that with promo code STARTUPS. Learn more.
AI Startups
China's top SUV maker to add ChatGPT-like bot into carsGreat Wall Motor will use Baidu's Ernie 3.5 foundational language model which rivals OpenAI's ChatGPT4.Jijo Malayil| Aug 04, 2023 06:00 PM ESTCreated: Aug 04, 2023 06:00 PM ESTtransportationEmpty cockpit of autonomous carmetamorworks/iStock Stay ahead of your peers in technology and engineering - The BlueprintBy subscribing, you agree to our Terms of Use and Policies You may unsubscribe at any time.Marking the entry of AI systems into mass-market cars, Chinese automaker Great Wall Motor (GWM) is set to integrate Baidu’s ChatGPT-like AI system, which enables conversation between driver and car.According to South China Morning Post (SCMP). GMW has partnered with technology firm Baidu to produce automobiles integrated with the latter's chatbot tool, Ernie Bot, bolstering a push to make cars more intelligent and user-friendly.Baidu's foundational AI model, the Ernie bot, is being pitched as a Chinese rival to OpenAI's ChatGPT. “Several innovative features have been tested in those vehicles that are being mass-produced. They will be gradually put into commercial use on a wide basis," said a GWM statement quoted by SCMP. See Also Related ChatGPT creator launches eye-scanning crypto-based ID to distinguish humans from AI Artificial general intelligence: Understanding the future of AI Baidu invests $140 million to foster generative AI startups Baidu is betting big on AI, primarily focused on developing its language model Ernie. An announcement in May said it's investing $140 million (1 billion yuan) to incubate Chinese startups focusing on generative AI.To advance in-car experienceBaidu revealed last month that their Ernie 3.5 beta had achieved significant progress, outperforming ChatGPT (3.5) in total ability ratings and outperforming GPT-4 in specific Chinese language skills. Using the latest iteration of the Ernie model, GWM and Baidu have been cooperating to research applications of its latest language model in intelligent in-car interactions. They have already proven many novel features to implement in mass-produced vehicle models. Baidu Apollo, China's search giant's autonomous driving solutions platform, presented different intelligent driving technologies created on ERNIE for in-car scenarios, including journey planning, in-car entertainment, knowledge Q&A, and AI sketching, during the Shanghai Auto Show in April 2023. services such as journey planning and in-car entertainment.Demand for intelligent solutions The automotive business is rapidly evolving, and AI has emerged as a critical aspect for key automotive OEM firms seeking to differentiate themselves and deliver new value propositions. Consumers and the industry are united in their need for intelligent cockpits that provide more intuitive interfaces, expanded functionalities, and smoother experiences.Chinese manufacturers like Lynk and Smart have also announced intentions to create vehicles equipped with Ernie Bot technology and its own electric vehicle (EV) subsidiary Jidu Auto, which will begin production in late 2023. Uber also recently stated that it was working on a ChatGPT-like AI bot to incorporate into its app.GMW, the largest SUV maker in China, did not specify which models will be the first to include built-in Ernie Bot, China's response to OpenAI's ChatGPT, a huge language model. It also did not provide a timeline for releasing its first car equipped with conversational technology.Baidu is also actively looking at possibilities to integrate Ernie Bot into other businesses, such as its cloud services, and it's sure to heat the competition and take the fight to its Western rivals like OpenAI, Google, Microsoft, and Apple. HomeTransportationAdd Interesting Engineering to your Google News feed.Add Interesting Engineering to your Google News feed.SHOW COMMENT (1) For You Ad Astra: The future of propulsion technology7,000 year-old DNA proves European Neolithics had only one partner at a time310-million-year-old fossil of ancient spider species found in GermanyCould a gene switch off anxiety?Sea change: The engineering challenges of harnessing our oceans to remove CO2SpaceX tests Starship water deluge system for second time without permitDark matter search advances with new experiment to spot axionsStrangelets won't destroy the Earth, but are still spooky as hellLK-99 superconductor: Chinese researchers demonstrate magnetic levitation as proof'Eerie-blue glow' seen with nuclear fusion for the first time Job Board
AI Startups
As Google continues to refine its own AI chatbot named Bard, its parent company Alphabet Inc. has a clear directive for its employees: Be careful around chatbots, even Bard. Four sources close to the matter told Reuters that the massive tech giant has advised employees not to enter confidential information into chatbots like OpenAI’s ChatGPT or Google’s own Bard over fears of leaks. Alphabet is reportedly concerned with employees inputting sensitive information into these chatbots since human reviewers may sit on the other end reviewing chat entries. These chatbots may also use previous entries to train themselves, posing another risk of a leak. That risk is warranted, as Samsung confirmed last month that its own internal data had been leaked after staff used ChatGPT. Google did not immediately return Gizmodo’s request for comment on the employee directive. In January, an Amazon lawyer urged employees at the company not to share code with ChatGPT. The lawyer specifically requested that employees not share “any Amazon confidential information (including Amazon code you are working on)” with ChatGPT, according to screenshots of Slack messages reviewed by the Insider. Last month, Apple pushed a similar injunction onto its employees. Internal documents obtained by The Wall Street Journal showed that Apple forbade employees from using ChatGPT and the Microsoft-owned GitHub Copilot, an AI code writer. Sources also told the Journal that Apple, like every big player in tech, is interested in building its own large language model, and Apple purchased two AI startups in 2020 for $200 million and $50 million, respectively. Google released Bard, its ChatGPT competitor, in March. Bard is built with Google’s own in-house artificial intelligence engine called Language Model for Dialogue Applications, or LaMDA. A little over a month before Bard’s release, a leaked memo revealed that Google CEO Sundar Pichai asked Googlers across the company to test Bard for two to four hours during their day. This week, Google delayed the release of Bard in the European Union after Irish regulators cited privacy concerns. The Irish Data Protection Commission claims that Google and Bard do not comply with the Personal Data Protection law. Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators, The Best ChatGPT Alternatives, and Everything We Know About OpenAI’s ChatGPT.
AI Startups
The European Union has signalled a plan to expand access to its high performance computing (HPC) supercomputers by letting startups use the resource to train AI models. However there’s a catch: Startups wanting to gain access to the EU’s high power compute resource — which currently includes pre-exascale and petascale supercomputers — will need to get with the bloc’s program on AI governance. Back in May, the EU announced a plan for a stop-gap set of voluntary rules or standards targeted at industry developing and applying AI while formal regulations continued being worked — saying the initiative would aim to prepare firms for the implementation of formal AI rules in a few years’ time. The bloc also has the AI Act in train: A risk-based framework for regulating applications of AI that’s still being negotiated by EU co-legislators but which is expected to be adopted in the near future. On top of that it has instigated efforts to work with the US and other international partners on an AI Code of Conduct to help bridge international legislative gaps as different countries work on their own AI governance regimes. But the EU AI governance strategy involves some carrots, too — in the form of access to high performance compute for “responsible” AI startups. A spokesman for the Commission confirmed the startup-focused plan aims to build on the existing policy that does already allow industry to access the supercomputers (via a EuroHPC Access Calls for proposals process) — with “a new initiative to facilitate and support access to European supercomputer capacity for ethical and responsible AI start-ups”. The HPC access for AI startups initiative was announced earlier today by EU president Ursula von der Leyen during the annual ‘State of the Union’ address. Extinction risk warning During the speech the EU’s president also took some time to flag concerns raised by certain corners of the tech industry about AI posing an extinction-level risk to humanity — warning the tech is “moving faster than even its developers anticipated”; and using that as a springboard to argue: “We have a narrowing window of opportunity to guide this technology responsibly.” “[AI] will improve healthcare, boost productivity, address climate change. But we also should not underestimate the very real threats,” she suggested. “Hundreds of leading AI developers, academics and experts warned recently in the following words — and I quote: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.” She went on to promote the EU’s efforts to pass comprehensive legislation on AI governance and floated the idea of establishing a “similar body” to the IPCC to support policymakers globally with research and briefings on the latest science around risks attached to AI — assuming, presumably, the aforementioned existential concerns. “I believe Europe, together with partners, should lead the way on a new global framework for AI, built on three pillars: guardrails, governance and guiding innovation,” she said, asserting: “Our AI Act is already a blueprint for the whole world. We must now focus on adopting the rules as soon as possible and turn to implementation.” Expanding on the EU’s wider strategy for AI governance, she suggested: “[W]e should also join forces with our partners to ensure a global approach to understanding the impact of AI in our societies. Think about the invaluable contribution of the IPCC for climate, a global panel that provides the latest science to policymakers. “I believe we need a similar body for AI — on the risks and its benefits for humanity. With scientists, tech companies and independent experts all around the table. This will allow us to develop a fast and globally coordinated response — building on the work done by the [G7] Hiroshima Process and others.” Von der Leyen’s invocation of (possible) existential AI risks looks notable, as the EU’s focus on AI safety has — to date — been directed at considering how to shrink less theoretical risks flowing from automation, such as related to physical safety; problems with bias, discrimination and disinformation; liability issues, and so on. London-based AI safety startup, Conjecture, was among those welcoming the high level intervention on existential AI risk. “Great to see Ursula von der Leyen, Commission president, acknowledged today that AI constitutes an extinction risk, as even the CEOs of the companies developing the largest AI models have admitted on the record,” Andrea Miotti, its head of strategy and governance, told TechCrunch. “With these stakes, the focus can’t be pitting geographies against each other to gain some ‘competitiveness’; it’s stopping proliferation and flattening the curve of capabilities increases.” EU push for ‘responsible’ AI On the third pillar — guiding innovation — von der Leyen’s address trailed the plan to expand access to the bloc’s HPC supercomputers to AI startups for model training, saying more steerage efforts would follow. Currently the EU has eight supercomputers which are sited around the bloc, often located in research institutions — including Lumi a pre-exascale HPC supercomputer located in Finland; MareNostrum 5, a pre-exascale supercomputer hosted in Spain; and Leonardo, a third pre-exascale supercomputer sited in Italy — with two (even more powerful) exascale supercomputers set to come on stream in the future (aka, Jupiter in Germany; and Jules Verne in France). “Thanks to our investment in the last years, Europe has now become a leader in supercomputing — with 3 of the 5 most powerful supercomputers in the world,” she noted. “We need to capitalise on this. This is why I can announce today a new initiative to open up our high-performance computers to AI start-ups to train their models. But this will only be part of our work to guide innovation. We need an open dialogue with those that develop and deploy AI. It happens in the United States, where seven major tech companies have already agreed to voluntary rules around safety, security and trust. “It happens here, where we will work with AI companies, so that they voluntarily commit to the principles of the AI Act before it comes into force. Now we should bring all of this work together towards minimum global standards for safe and ethical use of AI.” Scientific institutes, industry and public administration do already have access to EuroHPC supercomputers through the aforementioned calls access policy process — which requires them to apply and justify their need for (and capacity to use) “extremely large allocations in terms of compute time, data storage and support resources”, per the Commission spokesman. But he said this EuroHPC JU [joint undertaking] access policy will be “fine-tuned with the aim to have a dedicated and swifter access track for SMEs and AI startups”. “The ethical criterion used for Horizon [research] projects is already used to evaluate access to EPC supercomputers. In the same vein, this can be a criterion for calls for candidates to avail of HPC access under an AI scheme,” the spokesman added. Riffing on von der Leyen’s announcement in a blog post on LinkedIn, Thierry Breton, the EU’s internal market commissioner, also wrote: “[W]e will launch the EU AI Start-Up Initiative, leveraging one of Europe’s biggest assets: Its public high-performance computing infrastructure. We will identify the most promising European start-ups in AI and give them access to our supercomputing capacity.” “Access to Europe’s supercomputing infrastructure will help start-ups bring down the training time for their newest AI models from months or years to days or weeks. And it will help them lead the development and scale-up of AI responsibly and in line with European values,” Breton suggested, adding that the new initiative would aim to build on broader Commission efforts to foster AI innovation — such as the launch in January of Testing and Experimentation Facilities for AI; and its focus on developing Digital Innovation Hubs. He also pointed to the development of regulatory sandboxes under the incoming AI Act, and efforts to boost AI research via the European Partnership on AI, Data and Robotics and the HorizonEurope research program. How much of a competitive advantage the EU initiative to support select startups with HPC for AI model training could be remains to be seen. But it’s a clear effort by the EU to use (in-demand) resource to encourage ‘the right kind of innovation’ (aka, tech that’s in line with European values). AI governance talking shop In a further announcement, Breton’s blog post reveals the EU plans to power up an existing AI talking shop to drive for more inclusive governance. “When developing governance for AI, we must ensure the involvement of all – not only big tech, but also start-ups, businesses using AI across our industrial ecosystems, consumers, NGOs, academic experts and policy-makers,” he wrote. “This is why I will convene in November the European AI Alliance Assembly, bringing together all these stakeholders.” In light of this announcement, a recent UK government effort to pitch itself as a global AI Safety leader — by convening an AI Summit this fall — looks set to have some regional competition running in parallel. It’s not clear who will attend the UK summit but there has been early concern the UK government is not consulting as broadly as claimed as ministers program the conference. The initiative also attracted swift and effusive backing from AI giants — including a pledge of early/priority access to “frontier” models for UK AI safety research from Google DeepMind, OpenAI and Anthropic — shortly after a series of meetings between the CEOs of the companies and the UK prime minister. So it’s possible to read Breton’s line about ensuring “the involvement of all” in AI governance — “not only big tech, but also start-ups, businesses using AI across our industrial ecosystems, consumers, NGOs, academic experts and policy-makers” — as a swipe at the UK’s Big Tech-backed approach. (Albeit, OpenAI’s CEO Sam Altman also met with von der Leyen in June during his wider European tour, which may explain her sudden attention to “extinction level” AI risk.) The European AI Alliance, meanwhile, was launched by the Commission back in 2018, initially as an online discussion forum but also conveying a variety of in-person meetings and workshops the EU says has brought together thousands of stakeholders to-date, with the stated intention of establishing “an open policy dialogue on artificial intelligence”. This has included steering the work of the High-Level Expert Group on AI which helped shape the Commission’s policymaking as it drafted the AI Act. “The AI Alliance has existed since 2019. It has not met for the past two years, so commissioner Breton considered it timely to convene the Alliance again,” the Commission’s spokesman told us. “The Assembly in November will come at an important time in the adoption process for the AI Act. There will be a focus on the implementation of the AI Act & AI Pact and on our broader efforts to promote excellence and trust in AI.”
AI Startups
(Bloomberg) -- OpenAI is now letting users build custom versions of ChatGPT to accomplish specific personal and professional tasks as the artificial intelligence startup works to beat back competition in an increasingly crowded market. Most Read from Bloomberg With the new option, users will be able to quickly create their own specialized versions of ChatGPT — simply called GPTs — that can help teach math to a child or explain the rules of a board game, the company said on Monday. No coding is required, the company said. OpenAI also plans to introduce a store later this month where users can find tailored GPTs from other users — and make money from their own — much as they might with apps in Apple Inc.’s App Store. At its first-ever developer conference on Monday, OpenAI also said it’s introducing a preview version of GPT-4 Turbo, a more powerful and speedier version of its most recent large language model, the technology that underpins ChatGPT. ChatGPT was released to the public a year ago this month, kicking off a global frenzy around all things AI. Roughly 100 million people now use ChatGPT each week, the company said at the conference, and more than 90% of Fortune 500 businesses are building tools on OpenAI’s platform. But the ChatGPT maker is also confronting rival products from well-funded AI startups, tech giants and, most recently, Elon Musk, an early OpenAI backer. For OpenAI, the conference represents a chance to show how much influence it wields over the developer community. Hosting a developers conference is also standard for leading tech companies, including Apple, Alphabet Inc.’s Google and Meta Platforms Inc.’s Facebook. Often, these annual events offer a chance for tech companies to preview major software or product updates. OpenAI said the Turbo version of GPT-4 was built with a trove of online data running through April of this year, giving it a greater awareness of current events. The original version of GPT-4 had access to data running through September, 2021, though the company rolled out a feature this year that enabled ChatGPT users to browse the internet to get up-to-date information. OpenAI said the Turbo version of ChatGPT will be able to process and respond to novel-length prompts from users. By comparison, the company’s GPT-4 model has been limited to as much as about 50 pages worth of text. Turbo will also be cheaper for developers to use, the company said. Founded in 2015, OpenAI has put out numerous AI models over the years. The technology has become more adept at what’s known as generative AI — software that can ingest a short written prompt and spit out content in response, whether it’s text that can mimic what’s written by humans or realistic-looking images. Some people have already used OpenAI’s tools to write lyrics, draft emails, do homework assignments and create children’s books. But OpenAI and its rivals have also ignited a new wave of copyright concerns. On Monday, OpenAI said it would pay any costs users incur from copyright infringement claims. Microsoft Corp. and Google have previously taken similar steps. OpenAI’s event was held just blocks from San Francisco’s Hayes Valley neighborhood, which some have nicknamed “Cerebral Valley” for the growing number of AI startups based there. The venue, SVN West, is a multi-story event space that in past incarnations was a ballroom and, more recently, a Honda dealership. Most Read from Bloomberg Businessweek ©2023 Bloomberg L.P.
AI Startups
Imbue, the AI research lab formerly known as Generally Intelligent, has raised $200 million in a Series B funding round that values the company at over $1 billion. Among those participating are the Astera Institute, Nvidia, Cruise CEO Kyle Vogt and Notion co-founder Simon Last. The new tranche takes Imbue’s total raised to $220 million, placing it among the better-funded AI startups in recent months. It’s only slightly behind AI21 Labs ($283 million), the Tel Aviv-based firm developing a range of text-generating AI tools, as well as generative AI vendors like Cohere ($435 million) and Adept ($415 million). “This latest funding will accelerate our development of AI systems that can reason and code, so they can help us accomplish larger goals in the world,” Imbue wrote in a blog post published this morning. “Our goal remains the same: to build practical AI agents that can accomplish larger goals and safely work for us in the real world.” Imbue launched out of stealth last October with an ambitious goal: to research the fundamentals of human intelligence that machines currently lack. Its plan, as presented to TechCrunch back then, was to turn “fundamentals” into an array of tasks to be solved, and to design different AI models and test their ability to learn to solve these tasks in complex 3D worlds built by the Imbue team. The company’s approach seems to have shifted somewhat since then. Rather than unleash AI on 3D worlds, Imbue says that it’s developing models it finds “internally useful” to start, including models that can code (a la GitHub Copilot and Amazon CodeWhisperer). Plenty of models can code. But what sets Imbue’s apart are their ability to “robustly reason,” the company claims. “We believe reasoning is the primary blocker to effective AI agents,” Imbue wrote in the blog post. “Robust reasoning is necessary for effective action. It involves the ability to deal with uncertainty, to know when to change our approach, to ask questions and gather new information, to play out scenarios and make decisions, to make and discard hypotheses and generally to deal with the complicated, hard-to-predict nature of the real world.” Imbue also believes that code is an important use case beyond enabling its team to build AI apps at scale. In the blog post, the company makes the case that code can improve reasoning and is one of the more effective ways for models to take actions on a machine. “An agent that writes a SQL query to pull information out of a table is much more likely to satisfy a user request than an agent that tries to assemble that same information without using any code,” the company wrote. “Moreover, training on code helps models learn to reason better; training without code seems to result in models that reason poorly.” It’s a philosophy that’s not dissimilar to Adept’s, which aims to build AI that can automate any software process. Google DeepMind has also explored approaches for teaching AI to control computers, like having an AI observe keyboard and mouse commands from people completing “instruction-following” computer tasks such as booking a flight. Imbue says that its models are “tailor-made” for reasoning in the sense that they’re trained on data to “reinforce good reasoning patterns,” and using techniques that spend “far more compute during inference time” to arrive at “robust conclusions and actions.” Specifically, Imbue’s training “very large” models — models with over 100 billion parameters — optimized to perform well on its internal benchmarks for reasoning. (“Parameters” are the parts of a model learned from training data and essentially define the skill of the model on a problem, like generating text or code.) This training is being conducted on a compute cluster co-designed by Nvidia, containing 10,000 GPUs from Nvidia’s H100 series. Imbue is also investing in building its own AI and machine learning tooling, like AI prototypes for debugging and visual interfaces on top of AI models. And it’s conducting research into understanding the learning process in large language models. Imbue doesn’t intend to productionize much of what it’s working on at the moment. Rather, it sees these tools and models as a way to improve future, more general-purpose AI, and to establish the groundwork for a platform that people will be able to use to create their own custom models. “When we build AI agents, we’re actually building computers that can understand our goals, communicate proactively and work for us in the background,” Imbue continued in the blog post. “Ultimately, we hope to release systems that enable anyone to build robust, custom AI agents that put the productive power of AI at everyone’s fingertips … This latest funding will accelerate our development of AI systems that can reason and code, so they can help us accomplish larger goals in the world.”
AI Startups
An app called Poe will now let users make their own chatbot using prompts combined with an existing bot, like ChatGPT, as the base. First launched publicly in February, Poe is the latest product from the Q&A site Quora, which has long provided web searchers with answers to the most Googled questions. With chatbots now potentially powering the future of web search and Q&A, the company chose to expand into this market by allowing consumers to play with the latest AI technologies from companies like OpenAI and Anthropic via a simple mobile interface. Initially, Poe debuted with support for a handful of general knowledge chatbots including Sage and Dragonfly, powered by OpenAI technology, and Claude, powered by Anthropic. Last month, Poe rolled out subscriptions that allow users to pay to access the more powerful bots based on new language models including GPT-4 from OpenAI and Claude+ from Anthropic. Poe is also the only consumer-facing internet product with access to either Claude or Claude+, the company noted at the time. Now, Poe will offer the ability for users to create their own bots using prompts — that is, ways of directing a chatbot to perform highly specific tasks. Today, people are using prompts to direct bots to output text in the style of a favorite author, in a particular format, or aimed at a certain audience, among other things. Essentially, the idea is that better prompts drive better outputs. This has led to the creation of a new creator class within the field of prompt engineering. Online communities have also sprung up to enable people to share their prompt ideas with one another. With Poe’s new feature, Quora CEO Adam D’Angelo explained in a recent Twitter thread, users can make their own bots based on either Claude or ChatGPT. Once created, the bot will have its own unique URL (poe.com/botname) which will open the bot directly in Poe. D’Angelo also shared a few fun bots the company created to demonstrate the new feature, including a “talk like a pirate bot” at poe.com/PirateBot, a Japanese language tutor, a bot that turns your messages into emoji at poe.com/emojis, and a bot that mildly roasts you at poe.com/RoastMaster. “We’ve seen a lot of great experimentation with prompts on LLMs both among the community on Poe and across the internet, and it’s amazing how much value prompting can unlock from language models,” D’Angelo wrote. “We hope this new feature can help people who are talented at prompting share their ability with the rest of the world, and provide simple interfaces for everyone to get the most out of AI,” he said. Users will be able to access the bots via Poe’s iOS or Android app or via its web interface. When you find a bot you like, you can click a button to follow the bot so you can easily return to it later. The bot will then appear in Poe’s sidebar bot list alongside general-purpose bots like Sage, Claude, and others. Quora plans to cover all the costs involved with operating this feature for the time being, including the LLM fees, which it notes could get to be expensive if any bots become popular. In the future, the plan is to offer bot creators feedback about how people are using their bot so they can iterate on improvement. Later on, the company also plans to develop an API that would allow anyone to host a bot from a server they operate, which would allow for even more complex bots — and a potential new business for Quora, as well. Already, some users announced within the Twitter thread how they used the feature to make bots for both practical purposes, like trip planning or learning math, as well as for fun, like flirting. (Poe’s platform guidelines restrict a variety of use cases that could be problematic, like hate speech, violence, illegal activities, fraud, IP infringement, and others, but it remains to be seen if any bots will skirt its rules.) Poe is not the only mobile app catering to mobile users. Though OpenAI hasn’t launched an official app, dozens of AI chatbots flooded the App Store claiming to offer ChatGPT access and now, the top AI apps are pulling in millions of dollars. Microsoft’s Bing and Edge apps also integrated AI technology made possible through the company’s partnership with OpenAI. Meanwhile, other AI startups, like Perplexity, have recently launched their mobile apps, too. That said, consumer demand for Poe has been going well. To date, the mobile app version of Poe has 1.17 million installs and has generated $520,000 in gross revenue, according to app intelligence firm data.ai. The app is currently ranked No. 32 in the Productivity category on the App Store.
AI Startups
Illustration: Aïda Amer/AxiosGoogle's research arm on Wednesday showed off a whiz-bang assortment of artificial intelligence (AI) projects it's incubating, aimed at everything from mitigating climate change to helping novelists craft prose.Why it matters: AI has breathtaking potential to improve and enrich our lives — and comes with hugely worrisome risks of misuse, intrusion and malfeasance, if not developed and deployed responsibly.Driving the news: The dozen-or-so AI projects that Google Research unfurled at a Manhattan media event are in various stages of development, with goals ranging from societal improvement (such as better health diagnoses) to pure creativity and fun (text-to-image generation that can help you build a 3D image of a skirt-clad monster made of marzipan). On the "social good" side:Wildfire tracking: Google's machine-learning model for early detection is live in the U.S., Canada, Mexico and parts of Australia.Flood forecasting: A system that sent 115 million flood alerts to 23 million people in India and Bangladesh last year has since expanded to 18 additional countries (15 in Africa, plus Brazil, Colombia and Sri Lanka). Maternal health/ultrasound AI: Using an Android app and a portable ultrasound monitor, nurses and midwives in the U.S. and Zambia are testing a system that assesses a fetus' gestational age and position in the womb.Preventing blindness: Google's Automated Retinal Disease Assessment (ARDA) uses AI to help health care workers detect diabetic retinopathy. More than 150,000 patients have been screened by taking a picture of their eyes on their smartphone.The "1,000 Languages Initiative": Google is building an AI model that will work with the world's 1,000 most-spoken languages.On the more speculative and experimental side:Self-coding robots: In a project called "Code as Policies," robots are learning to autonomously generate new code.In a demonstration, Google's Andy Zeng told a robot hovering over three plastic bowls (red, blue and green) and three pieces of candy (Skittles, M&M's and Reese's) that I liked M&M's and that my bowl was blue. The robot placed the correct candy in the right bowl, even though it wasn't directly told to "place M&M's in the blue bowl."Wordcraft: Several professional writers are experimenting with Google's AI fiction-crafting tool. It isn't quite ready for prime time, but you can read the stories they devised with it here.At left, Andy Zeng of Google Research showed how a robot could be taught to understand terms like "Willy Wonka" as a metaphor for chocolate. Right, Daniel Tse of Google Research shows off the AI-driven maternal sonography system he's developing. Photos: Jennifer A. KingsonThe big picture: Fears about AI's dark side — from privacy violations and the spread of misinformation to losing control of consumer data — recently prompted the White House to issue a preliminary "AI Bill of Rights," encouraging technologists to build safeguards into their products.While Google published its principles of AI development in 2018 and other tech companies have done the same, there's little-to-no government regulation.Although investors have been pulling back on AI startups recently, Google's deep pockets could give it more time to develop projects that aren't immediate moneymakers.Yes, but: Google executives sounded multiple notes of caution as they showed off their wares.AI "can have immense social benefits" and "unleash all this creativity," said Marian Croak, head of Google Research's center of expertise on responsible AI. "But because it has such a broad impact on people, the risk involved can also be very huge. And if we don't get that right ... it can be very destructive."Threat level: A recent Georgetown Center for Security and Emerging Technology report examined how text-generating AI could "be used to turbocharge disinformation campaigns."And as Axios' Scott Rosenberg has written, society is only just beginning to grapple with the legal and ethical questions raised by AI's new capacity to generate text and images.Still, there's fun stuff: This summer, Google Research introduced Imagen and Parti — two AI models that can generate photorealistic images from text prompts (like "a puppy in a nest emerging from a cracked egg"). Now they're working on text-to-video:Imagen Video can create a short clip from phrases like "a giraffe underneath a microwave."Phenaki is "a model for generating videos from text, with prompts that can change over time and videos that can be as long as multiple minutes," per Google Research.AI Test Kitchen is an app that demonstrates text-to-image capabilities through two games, "City Dreamer" (build cityscapes using keywords) and "Wobble" (create friendly monsters that can dance).The bottom line: Despite recent financial headwinds, AI is steamrolling forward — with companies such as Google positioned to serve as moral arbiters and standard-setters. "AI is the most profound technology we are working on, yet these are still early days," Google CEO Sundar Pichai said in a recorded introduction to Wednesday's event.
AI Startups
When I mentioned “the rise of AI” in a recent email to investors, one of them sent me an interesting reply: “The ‘rise of AI’ is a bit of a misnomer.” What that investor, Rudina Seseri, a managing partner at Glasswing Ventures, means to say is that sophisticated technologies like AI and deep learning have been around for a long time now, and all this hype around AI is ignoring the simple fact that they have been in development for decades. “We saw the earliest enterprise adoption in 2010,” she pointed out. Still, we can’t deny that AI is enjoying unprecedented levels of attention, and companies across sectors around the world are busy pondering the impact it could have on their industry and beyond. Dr. Andre Retterath, a partner at Earlybird Venture Capital, feels several factors are working in tandem to generate this momentum. “We are witnessing the perfect AI storm, where three major ingredients that evolved throughout the past 70 years have finally come together: Advanced algorithms, large-scale datasets, and access to powerful compute,” he said. Still, we couldn’t help but be skeptical at the number of teams that pitched a version of “ChatGPT for X” at Y Combinator’s winter Demo Day earlier this year. How likely is it that they will still be around in a few years? Karin Klein, a founding partner at Bloomberg Beta, thinks it’s better to run the race and risk failing than sit it out, since this is not a trend companies can afford to ignore. “While we’ve seen a bunch of “copilots for [insert industry]” that may not be here in a few years, the bigger risk is to ignore the opportunity. If your company isn’t experimenting with using AI, now is the time or your business will fall behind.” And what’s true for the average company is even more true for startups: Failing to give at least some thought to AI would be a mistake. But a startup also needs to be ahead of the game more than the average company does, and in some areas of AI, “now” may already be “too late.” To better understand where startups still stand a chance, and where oligopoly dynamics and first-mover advantages are shaping up, we polled a select group of investors about the future of AI, which areas they see the most potential in, how multilingual LLMs and audio generation could develop, and the value of proprietary data. This is the first of a three-part survey that aims to dive deep into AI and how the industry is shaping up. In the next two parts to be published soon, you will hear from other investors on the various parts of the AI puzzle, where startups have the highest chance of winning, and where open-source might overtake closed source. We spoke with: - Manish Singhal, founding partner, pi Ventures - Rudina Seseri, founder and managing partner, Glasswing Ventures - Lily Lyman, Chris Gardner, Richard Dulude and Brian Devaney of Underscore VC - Karin Klein, founding partner, Bloomberg Beta - Xavier Lazarus, partner, Elaia - Dr Andre Retterath, partner, Earlybird Venture Capital - Matt Cohen, managing partner, Ripple Ventures Manish Singhal, founding partner, pi Ventures Will today’s leading genAI models and the companies behind them retain their leadership in the coming years? This is a dynamically changing landscape when it comes to applications of LLMs. Many companies will form in the application domain, and only a few will succeed in scaling. In terms of foundation models, we do expect OpenAI to get competition from other players in the future. However, they have a strong head start and it will not be easy to dislodge them. Which AI-related companies do you feel aren’t innovative enough to still be around in 5 years? I think in the applied AI space, there should be significant consolidation. AI is becoming more and more horizontal, so it will be challenging for applied AI companies, which are built on off-the-shelf models, to retain their moats. However, there is quite a bit of fundamental innovation happening on the applied front as well as on the infrastructure side (tools and platforms). They are likely to do better than the others. Is open source the most obvious go-to-market route for AI startups? It depends on what you are solving for. For the infrastructure layer companies, it is a valid path, but it may not be that effective across the board. One has to consider whether open source is a good route or not based on the problem they are solving. Do you wish there were more LLMs trained in other languages than English? Besides linguistic differentiation, what other types of differentiation do you expect to see? We are seeing LLMs in other languages as well, but of course, English is the most widely used. Based on the local use cases, LLMs in different languages definitely make sense. Besides linguistic differentiation, we expect to see LLM variants that are specialized in certain domains (e.g., medicine, law and finance) to provide more accurate and relevant information within those areas. There is already some work happening in this area, such as BioGPT and Bloomberg GPT. LLMs suffer from hallucination and relevance when you want to use them in real production grade applications. I think there will be considerable work done on that front to make them more usable out of the box. What are the chances of the current LLM method of building neural networks being disrupted in the upcoming quarters or months? It can surely happen, although it may take longer than a few months. Once quantum computing goes mainstream, the AI landscape will change significantly again. Given the hype around ChatGPT, are other media types like generative audio and image generation comparatively underrated? Multi-modal generative AI is picking pace. For most of the serious applications, one will need those to build, especially for images and text. Audio is a special case: there is significant work happening in auto-generation of music and speech cloning, which has wide commercial potential. Besides these, auto-generation of code is becoming more and more popular, and generating videos is an interesting dimension — we will soon see movies completely generated by AI! Are startups with proprietary data more valuable in your eyes these days than they were before the rise of AI? Contrary to what the world may think, proprietary data gives a good head start, but eventually, it is very difficult to keep your data proprietary. Hence, the tech moat comes from a combination of intelligently designed algorithms that are productized and fine tuned for an application along with the data. When could AGI become a reality, if ever? We are getting close to human levels with certain applications, but we are still far from a true AGI. I also believe that it is an asymptotic curve after a while, so it may take a very long time to get there across the board. For true AGI, several technologies, like neurosciences and behavioral science, may also have to converge. Is it important to you that the companies you invest in get involved in lobbying and/or discussion groups around the future of AI? Not really. Our companies are more targeted towards solving specific problems, and for most applications, lobbying does not help. It’s useful to participate in discussion groups, as one can keep a tab on how things are developing. Rudina Seseri, founder and managing partner, Glasswing Ventures Will today’s leading genAI models and the companies behind them retain their leadership in the coming years? The foundation layer model providers such as Alphabet, Microsoft/Open AI and Meta will likely maintain their market leadership and function as an oligopoly over the long term. However, there are opportunities for competition in models that provide significant differentiation, like Cohere and other well-funded players at the foundational level, placing a strong emphasis on trust and privacy. We have not invested and likely will not invest in the foundation layer of generative AI. This layer will probably end in one of two states: In one scenario, the foundation layer will have oligopoly dynamics akin to what we saw with the cloud market, where a select few players will capture most of the value. The other possibility is that foundation models are largely supplied by the open source ecosystem. We see the application layer holding the biggest opportunity for founders and venture investors. Companies that deliver tangible, measurable value to their customers can displace large incumbents in existing categories and dominate new ones. Our investment strategy is explicitly focused on companies offering value-added technology that augments foundation models. Just as value creation in the cloud did not end with the cloud computing infrastructure providers, significant value creation has yet to arrive across the genAI stack. The genAI race is far from over. Which AI-related companies do you feel aren’t innovative enough to still be around in 5 years? A few market segments in AI might not be sustainable as long-term businesses. One such example is the “GPT wrapper” category — solutions or products built around OpenAI’s GPT technology. These solutions lack differentiation and can be easily disrupted by features launched by existing dominant players in their market. As such, they will struggle to maintain a competitive edge in the long run. Similarly, companies that do not provide significant business value or do not solve a problem in a high-value, expensive space will not be sustainable businesses. Consider this: A solution streamlining a straightforward task for an intern will not scale into a significant business, unlike a platform that resolves complex challenges for a chief architect, offering distinct and high-value benefits. Finally, companies with products that do not seamlessly integrate within current enterprise workflows and architectures, or require extensive upfront investments, will face challenges in implementation and adoption. This will be a significant obstacle for successfully generating meaningful ROI, as the bar is far higher when behavior changes and costly architecture changes are required.
AI Startups
The UK isn’t going to be setting hard rules for AI any time soon. Today, the Department for Science, Innovation and Technology (DSIT) published a white paper setting out the government’s preference for a light-touch approach to regulating artificial intelligence. It’s kicking off a public consultation process — seeking feedback on its plans up to June 21 — but appears set on paving a smooth road of ‘flexible principles’ that AI can speed through. Worries about the risks of increasingly powerful AI technologies are very much treated as a secondary consideration, relegated far behind a political agenda to talk up the vast potential of high tech growth — and thus, if problems arise, the government is suggesting the UK’s existing (overstretched) regulators will have to deal with them, on a case-by-case basis, armed only with existing powers (and resources). So, er, lol! The 91-page white paper, which is entitled “A pro-innovation approach to AI regulation”, talks about taking “a common-sense, outcomes-oriented approach” to regulating automation — by applying what the government frames as a “proportionate and pro-innovation regulatory framework”. In a press release accompanying the white paper’s publication — with a clear eye on generating newspaper headlines that frame a narrative of ministers seeking to “turbocharge growth” — the government confirms there will be no dedicated watchdog for artificial intelligence, merely a set of “principles” for existing regulators to work with; so no new legislation, rather a claim of “adaptable” (but not legally binding) regulation. DSIT says legislation “could” be introduced — at some unspecified future period, and when parliamentary time allows — “to ensure regulators consider the principles consistently”. So, yep, that’s the sound of a can being kicked down the road. But expect to see guidance emerging from a number of existing UK regulators over the next 12 months — along with some tools and “risk assessment templates” which AI makers may be encouraged to play around with (if they like). There will also be the inexorable sandbox (funded with £2M from the public purse) — or at least a “sandbox trial to help businesses test AI rules before getting to market”, per DSIT. But evidently there won’t be a hard legal requirement to actually use it. The government says its approach to AI will focus on “regulating the use, not the technology” — ergo, there won’t be any rules or risk levels assigned to entire sectors or technologies. Which is quite the contrast with the European Union’s direction of travel with its risk-based framework that includes some up-front prohibitions on certain users of AI, with define regimes for use-cases specified as high risk and self regulation for lower risk uses. “Instead, we will regulate based on the outcomes AI is likely to generate in particular applications,” the government stipulates, arguing — for example, and somewhat boldly in its choice of example here — that classifying all applications of AI in critical infrastructure as high risk “would not be proportionate or effective” because there might be some uses of AI in critical infrastructure that can be “relatively low risk”. Because ministers have opted for what the white paper calls “context-specificity”, they decided against setting up a dedicated regulator for AI — hence the responsibility falls on existing bodies with expertise across various sectors. “To best achieve this context-specificity we will empower existing UK regulators to apply the cross-cutting principles,” it writes on this. “Regulators are best placed to conduct detailed risk analysis and enforcement activities within their areas of expertise. Creating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators.” Under the plan, existing regulators will be expected to apply a set of five principles — setting out “key elements of responsible AI design, development and use” — that the government wants/hopes to guide businesses as they develop artificial intelligence. “Regulators will lead the implementation of the framework, for example by issuing guidance on best practice for adherence to these principles,” it suggests, adding that they will be expected to apply the principles “proportionately” to address the risks posed by AI “within their remits, in accordance with existing laws and regulations” — arguing this will enable the principles to “complement existing regulation, increase clarity, and reduce friction for businesses operating across regulatory remits”. It says it expects relevant regulators to need to issue “practical guidance” on the principles or update existing guidance — in order to “provide clarity to business” in what may otherwise be a vacuum of ongoing legal uncertainty. It also suggests regulators may need to publish joint guidance focused on AI use cases that cross multiple regulatory remits. So more work and more joint working is coming down the pipe for UK oversight bodies. “Regulators may also use alternative measures and introduce other tools or resources, in addition to issuing guidance, within their existing remits and powers to implement the principles,” it goes on, adding that it will “monitor the overall effectiveness of the principles and the wider impact of the framework” — stipulating that: “This will include working with regulators to understand how the principles are being applied and whether the framework is adequately supporting innovation.” So it’s seemingly leaving the door open to rowing back on certain principles if they’re considered too arduous by business. ‘Flexible principles’ “We recognise that particular AI technologies, foundation models for example, can be applied in many different ways and this means the risks can vary hugely. For example, using a chatbot to produce a summary of a long article presents very different risks to using the same technology to provide medical advice. We understand the need to monitor these developments in partnership with innovators while also avoiding placing unnecessary regulatory burdens on those deploying AI,” writes Michelle Donelan, the secretary of state for science, innovation and technology in the white paper’s executive summary where the government sets out its “pro-innovation” stall. “To ensure our regulatory framework is effective, we will leverage the expertise of our world class regulators. They understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI. This will mean supporting innovation and working closely with business, but also stepping in to address risks when necessary. By underpinning the framework with a set of principles, we will drive consistency across regulators while also providing them with the flexibility needed.” The existing regulatory bodies the government is intending to saddle with more tasks — drafting “tailored, context-specific approaches” which AI model makers can also only take on advisement (i.e. ignore) — include the Health and Safety Executive; the Equality and Human Rights Commission; and the Competition and Markets Authority (CMA), per DSIT. The PR doesn’t mention the Information Commissioner’s Office (ICO), aka the data protection regulator, but it gets several references in the white paper and looks set to be another body pressganged into producing AI guidance (usefully, enough, the ICO has already offered some thoughts on AI snake oil). One quick aside here: The CMA is still waiting for the government to empower a dedicated Digital Markets Unit (DMU) that was supposed to be reining in the market power of Big Tech, i.e. by passing the necessary legislation. But, last year, ministers opted to kick that can into the long grass — so the DMU has still not been put on a statutory footing almost two years after it soft launched in expectation of parliamentary time being found to empower it… So it’s becoming abundantly clear this government is a lot more fond of drafting press releases than smart digital regulation. The upshot is the UK has been left trailing the whole of the EU on the salient area of digital competition (the bloc has the Digital Markets Act coming in application in a few months) — while Germany updated its national competition regime with an ex ante digital regime at the start of 2021 and has a bunch of pro-competition enforcements under its belt already. Now — by design — UK ministers intend the country to trail peers on AI regulation, too; framing this as a choice to “avoid heavy-handed legislation which could stifle innovation”, as DSIT puts it, in favor of a mass of sectoral regulatory guidance that businesses can choose whether to follow — literally in the same breath as penning the line that: “Currently, organisations can be held back from using AI to its full potential because a patchwork of legal regimes causes confusion and financial and administrative burdens for businesses trying to comply with rules.” So, um… legal certainty good or bad — which is it?! In short this looks like a very British (post-Brexit) mess. Across the English Channel, meanwhile, EU lawmakers are in the latter stages of negotiations over setting a risk-based framework for regulating AI — a draft law the European Commission presented way back in 2021; now with MEPs pushing for amendments to ensure the final text covers general purpose AIs like OpenAI’s ChatGPT. The EU also has a proposal for updating the bloc’s liability rules for software and AI on the table too. In the face of the EU’s carefully structured risk-baed framework, UK lawmakers are left trumpeting voluntary risk assessment templates and a toy sandbox — and calling this ‘DIY’ approach to generating trustworthy AI a ‘Brexit bonus’. Ouch. The five principles the government wants to guide the use of AI — or, specifically, that existing regulators “should consider to best facilitate the safe and innovative use of AI in the industries they monitor” — are: - safety, security and robustness: “Applications of AI should function in a secure, safe and robust way where risks are carefully managed” - transparency and explainability: “Organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI” - fairness: “AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes” - accountability and governance: “Measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes” - contestability and redress: “People need to have clear routes to dispute harmful outcomes or decisions generated by AI” All of which sound like fine words indeed. But without a legal framework to turn “principles” into hard rules — and ensure consistent application and enforcement atop entities that choose not to bother with any of that expensive safety stuff — it looks about as useful as whistling the Lord’s Prayer and hoping for the best if it’s trustworthy AI you’re looking for… (Oh yes — and don’t forget the UK government is also in the process of watering down the aforementioned UK GDPR — after it recently invited businesses to “co-design” a new data protection framework. Which led to a revised reform emerging that aims to make it easier for commercial entities to process people’s data for use-cases like research, and which risks eroding the independence of the privacy watchdog by adding a politically appointed board, in order to (and I quote Donelan here) ensure “we are the most innovative economy in the world and that we cement ourselves as a Science and Technology Superpower”.) The clear trend in the UK is of existing protections being rowed back as the government seeks to roll out the red carpet for AI-fuelled “innovation”, without a thought for what that might mean for rather essential stuff like safety or fairness — and therefore trustworthiness, assuming you want people to have a sliver of trust in the AIs you’re pumping out — but ministers are essentially saying: ‘Don’t worry, just lie back and think of GB’s GDP!’ Of course any developers building AI models in the UK and wanting to scale beyond those shores will have to consider regulations that apply outside the UK. So the freedom to be so lightly regulated may, ultimately, come with a hard requirement to comply with foreign frameworks anyway — or else be tightly limited in geographical scope. (And, well, tech innovators do love to scale.) Still, DSIT’s PR has a canned quote from Lila Ibrahim, COO (and UK AI Council Member) at Google-owned DeepMind — an AI giant that has been lagging behind rivals like OpenAI on the buzzy artificial intelligence tech of the moment (generative AI) — who lauds the government’s proposed “context-driven approach”, rubberstamping the direction of travel with the claim that it will “help regulation keep pace with the development of AI, support innovation and mitigate future risks”. “AI has the potential to advance science and benefit humanity in numerous ways, from combating climate change to better understanding and treating diseases. This transformative technology can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly,” Ibrahim also suggests. The government is clearly hoping its offer of ‘no rules except the ones you choose’ will encourage AI startups to pick the UK over other locations — where automation is being more tightly regulated. (Or as Donelan pens it: “Our pro-innovation approach will also act as a strong incentive when it comes to AI businesses based overseas establishing a presence in the UK.”) It’s quite the gamble by the Conservative government — given the highly scalable potential for too lightly regulated AI to go horribly wrong. And headlines about ‘AI-powered critical infrastructure that failed’ won’t wait for government press releases; they’ll write themself. But Rishi Sunak’s Tories are apparently going all in on this one. In a press statement, Donelan goes on to offer this interesting construction — to explain what the government is doing: AI has the potential to make Britain a smarter, healthier and happier place to live and work. Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely. Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow. It’s quite the collection of sentences — combining an assertion of the need to have “rules to make sure [AI] is developed safely”, with a sleight-of-hand segue to the offer of (just) “strong principles so that people can trust businesses to unleash this technology of tomorrow”. So not rules then. (And, I mean, ‘trusting businesses to unleash technology’ is not at all the same thing as trusting businesses to unleash trusted technology. In fact you could say they’re poles apart. So the choice of phase there is quite the tell.) But back to rules vs principles. We asked ChatGPT what the difference is between these two terms. And here’s what it told us: Rules and principles are both guidelines that govern behavior, but they differ in their nature and application. Rules are specific instructions or directives that dictate what one should or should not do. They are usually created by authorities or institutions and enforced through consequences such as punishments or rewards. Rules are often rigid and inflexible and do not take into account situational differences or individual circumstances. For example, “Do not exceed the speed limit on the highway” is a rule that applies to all drivers regardless of the context. Principles, on the other hand, are broad concepts or values that guide behavior and decision-making. They are more flexible and adaptable than rules and provide a framework for making ethical or moral judgments. Principles are usually not enforced through punishments or rewards but are instead internalized and followed voluntarily. For example, the principle of honesty is a value that guides behavior in a variety of situations, such as being truthful in communication, respecting others’ property, and fulfilling obligations. In summary, rules are specific and inflexible instructions while principles are broad and flexible values that guide behavior. Rules are enforced through external means while principles are internalized and followed voluntarily. So, assuming this large language model is not simply hallucinating again, and the nuance it’s identifying is correct, Donelan is both recognizing that fixed rules are required for AI to be safe while confirming the government has decided against setting any right now. The verbal downgrade is to purely voluntary principles. Or, basically, it’s going to let businesses make up their own minds and do what they must in order to grow as fast as possible for the foreseeable future (or at least until after the next election). What could possibly go wrong!? It’s clear the government’s growth-at-all costs agenda has eaten a full course meal of AI hype. Pity the poor Brits set to become guinea pigs in the name of unleashing mindless automation atop a rudderless bark christened “innovation”. Citizens of the UK will want to strap themselves in for this ride. Because if something does go wrong they’ll be forced to wait for the government to make parliamentary time available to actually pass some safety rules. Which may be a lot of breath to hold.
AI Startups
Amazon is throwing its hat into the generative AI ring. But rather than build AI models entirely by itself, it’s recruiting third parties to host models on AWS. AWS today unveiled Amazon Bedrock, which provides a way to build generative AI-powered apps via pretrained models from startups including AI21 Labs, Anthropic and Stability AI. Available in a “limited preview,” Bedrock also offers access to Titan FMs (foundation models), a family of models trained in-house by AWS. “Applying machine learning to the real world — solving real business problems at scale — is what we do best,” Vasi Philomin, VP of generative AI at AWS, told TechCrunch in a phone interview. “We think every application out there can be reimagined with generative AI.” The debut of Bedrock was somewhat telegraphed by AWS’ recently-inked partnerships with generative AI startups in the past few months, in addition to its growing investments in the tech required to build generative AI apps. Last November, Stability AI selected AWS as its preferred cloud provider, and in March, Hugging Face and AWS collaborated to bring the former’s text-generating models onto the AWS platform. More recently, AWS launched a generative AI accelerator for startups and said it would work with Nvidia to build “next-generation” infrastructure for training AI models. Bedrock and custom models Bedrock is Amazon’s most forceful play yet for the generative AI market, which could be worth close to $110 billion by 2030, according to estimates from Grand View Research. With Bedrock, AWS customers can opt to tap into AI models from a variety of different providers, including AWS, via an API. The details are a bit murky — Amazon hasn’t announced formal pricing, for one. But the company did emphasize that Bedrock is aimed at large customers building “enterprise-scale” AI apps, differentiating it from some of the AI model hosting services out there, like Replicate (plus the incumbent rivals Google Cloud and Azure). One presumes that generative AI model vendors were incentivized by AWS’ reach or potential revenue sharing to join Bedrock. Amazon didn’t reveal terms of the model licensing or hosting agreements, however. The third-party models hosted on Bedrock include AI21 Labs’ Jurassic-2 family, which are multilingual and can generate text in Spanish, French, German, Portuguese, Italian and Dutch. Claude, Anthropic’s model on Bedrock, can perform a range of conversational and text-processing tasks. Meanwhile, Stability AI’s suite of text-to-image Bedrock-hosted models, including Stable Diffusion, can generate images, art, logos and graphic designs. As for Amazon’s bespoke offerings, the Titan FM family comprises two models at present, with presumably more to come in the future: a text-generating model and an embedding model. The text-generating model, akin to OpenAI’s GPT-4 (but not necessarily on a par performance-wise), can perform tasks like writing blog posts and emails, summarizing documents, and extracting information from databases. The embedding model translates text inputs like words and phrases into numerical representations, known as embeddings, that contain the semantic meaning of the text. Philomin claims it’s similar to one of the models that powers searches on Amazon.com. AWS customers can customize any Bedrock model by pointing the service at a few labeled examples in Amazon S3, Amazon’s cloud storage plan — as few as 20 is enough. No customer data is used to train the underlying models, Amazon says. “At AWS … we’ve played a key role in democratizing machine learning and making it accessible to anyone who wants to use it,” Philomin said. “Amazon Bedrock is the easiest way to build and scale generative AI applications with foundation models.” Of course, given the unanswered legal questions surrounding generative AI, one wonders exactly how many customers will bite. Microsoft has seen success with its generative AI model suite, Azure OpenAI Service, which bundles OpenAI models with additional features geared toward enterprise customers. As of March, over 1,000 customers were using Azure OpenAI Service, Microsoft said in a blog post. But there’s several lawsuits pending over generative AI tech from companies including OpenAI and Stability AI, brought by plaintiffs who allege that copyrighted data, mostly art, was used without permission to train the generative models. (Generative AI models “learn” to create art, code and more by “training” on sample images and text, usually scraped indiscriminately from the web.) Another case making its way through the courts seeks to establish whether code-generating models that don’t give attribution or credit can in fact be commercialized, and an Australian mayor has threatened a defamation suit against OpenAI for inaccuracies spouted by its generative model ChatGPT. Philomin didn’t instill much confidence, frankly, refusing to say which data exactly Amazon’s Titan FM family was trained on. Instead, he stressed that the Titan models were built to detect and remove “harmful” content in the data AWS customers provide for customization, reject “inappropriate” content users input, and filter outputs containing hate speech, profanity and violence. Of course, even the best filtering systems can be circumvented, as demonstrated by ChatGPT. So-called prompt injection attacks against ChatGPT and similar models have been used to write malware, identify exploits in open source code and generate abhorrently sexist, racist and misinformational content. (Generative AI models tend to amplify biases in training data, or — if they run out of relevant training data — simply make things up.) But Philomin brushed aside those concerns. “We’re committed to the responsible use of these technologies,” he said. “We’re monitoring the regulatory landscape out there… we have a lot of lawyers helping us look at which data we can use and which we can’t use.” Philomin’s attempts at assurance aside, brands might not want to be on the hook for all that could go wrong. (In the event of a lawsuit, it’s not entirely clear whether AWS customers, AWS itself or the offending model’s creator would be held liable.) But individual customers might — particularly if there’s no charge for the privilege. CodeWhisperer, Trainium and Inferentia2 launch in GA On the subject and coinciding with its big generative AI push today, Amazon made CodeWhisperer, its AI-powered code-generating service, free of charge to developers without any usage restrictions. The move suggests that CodeWhisperer hasn’t seen the uptake Amazon hoped it would. Its chief rival, GitHub’s Copilot, had over a million users as of January, thousands of which are enterprise customers. CodeWhisperer has ground to make up, surely — which it aims to do on the corporate side with the simultaneous launch of CodeWhisperer Professional Tier. CodeWhisperer Professional Tier adds single sign-on with AWS Identity and Access Management integration as well as higher limits on scanning for security vulnerabilities. CodeWhisperer launched in late June as part of the AWS IDE Toolkit and AWS Toolkit IDE extensions as a response, of sorts, to the aforementioned Copilot. Trained on billions of lines of publicly available open source code and Amazon’s own codebase, as well as documentation and code on public forums, CodeWhisperer can autocomplete entire functions in languages like Java, JavaScript and Python based on only a comment or a few keystrokes. CodeWhisperer now supports several additional programming languages — specifically Go, Rust, PHP, Ruby, Kotlin, C, C++, Shell scripting, SQL and Scala — and, as before, highlights and optionally filters the license associated with functions it suggests that bear a resemblance to existing snippets found in its training data. The highlighting is an attempt to ward off the legal challenges GitHub’s facing with Copilot. Time will tell whether it’s successful. “Developers can become a lot more productive with these tools,” Philomin said. “It’s difficult for developers to be up to date on everything… tools like this help them not have to worry about it.” In less controversial territory, Amazon announced today that it’s launching Elastic Cloud Compute (EC2) Inf2 instances in general availability, powered by the company’s AWS Inferentia2 chips, which were previewed last year at Amazon’s re:Invent conference. Inf2 instances are designed to speed up AI runtimes, delivering ostensibly better throughput and lower latency for improved overall inference price performance. In addition, Amazon EC2 Trn1n instances powered by AWS Trainium, Amazon’s custom-designed chip for AI training, is also generally available to customers as of today, Amazon announced. They offer up to 1600 Gbps of network bandwidth and are designed to deliver up to 20% higher performance over Trn1 for large, network-intensive models, Amazon says. Both Inf2 and Trn1n compete with rival offerings from Google and Microsoft, like Google’s TPU chips for AI training. “AWS offers the most effective cloud infrastructure for generative AI,” Philomin said with confidence. “One of the needs for customers is the right costs for dealing with these models … It’s one of the reasons why many customers haven’t put these models in production.” Them’s fighting words — the growth of generative AI reportedly brought Azure to its knees. Will Amazon suffer the same fate? That’s to be determined.
AI Startups
(Bloomberg) -- Salesforce Inc. is elevating new generative artificial intelligence features in its products and doubling its investment in AI startups as the company banks on the emerging technology to help resuscitate sales growth. Most Read from Bloomberg The software maker’s venture capital fund focused on generative AI will increase to $500 million from the initial $250 million announced in March, the company said Monday in a statement. In addition, its portfolio of artificial intelligence tools will now be called AI Cloud, putting it on par with other major product lines such as Sales Cloud and Service Cloud. All the major technology companies are embracing generative AI and trying to add new tools after the introduction of OpenAI’s ChatGPT spurred intense interest from businesses across a swath of industries. Ahead of a Salesforce event focused on AI, the company is unveiling security standards for its technology, including preventing large language models from being trained on customer data. “Every client we talk to, this has been their biggest concern,” said Adam Caplan, senior vice president of AI, of the potential for confidential information to leak through the use of these models. Large language models are programmed to learn through trial-and-error using massive amounts of text and data. A type of these models is used for generative AI, which creates text and images from a user’s conversational prompts. After a difficult six months that included job cuts, executive departures and public pressure from activist investors, Salesforce has been winning back the faith of many shareholders, and the stock had jumped 62% this year through Friday’s close. Investors, however, are concerned about sales growth — particularly after the company on May 31 projected revenue in the current quarter would gain 10% from a year earlier. That would be the slowest jump on record and a significant dropoff from a time when 30% increases were routine. Executives have talked up the potential for AI to drive expansion. Like cloud computing and mobile apps before it, generative AI “is going to spark a massive new tech buying cycle,” Chief Executive Officer Marc Benioff said during an earnings call after the recent results. Underlining the company’s decision to prioritize AI is the decision last month to name Clara Shih as CEO of Salesforce AI. Shih once led Salesforce’s most lucrative product segment Service Cloud. Read More: A Cheat Sheet to AI Buzzwords and Their Meanings Customers will pay additional fees to use the new generative AI features across the company’s suite of software, Caplan said. Salesforce is still testing pricing levels, including whether it should be based on flat-rate subscriptions or use, he said. An AI Cloud “starter pack” with 50 licenses for unlimited use in Salesforce, Slack, and Tableau will cost $360,000 per year, the company said during the event in New York. The shares declined about 1% to $213.48 at 2:55 p.m. in New York, while most software peers gained. Salesforce said the AI tools will become generally available in products for sales and customer support this summer before being rolled out across the portfolio in coming months. Salesforce had previously introduced some generative AI tools using OpenAI’s technology, including a chatbot for its Slack business communication unit and for tasks such as drafting customer service responses. Bloomberg Intelligence analysts estimated earlier this month that generative AI may produce $1.3 trillion in sales of hardware, software, services and other tools by 2032. Caplan previously oversaw Salesforce’s Web3, or blockchain and cryptocurrency-related initiatives. He said the applications for generative AI are much clearer than Web3 and customers have expressed massive interest. Salesforce’s access to large amounts of customer data may be an ingredient for success in the new, highly hyped technology, Brad Zelnick, an analyst at Deutsche Bank, wrote Friday in a note to investors. “We see leading platforms such as Salesforce — with troves of trusted, high-quality data, connected processes, strong brands, distribution and ecosystems — as the natural winners in a generative AI world.” (Updates with pricing in the eighth paragraph.) Most Read from Bloomberg Businessweek ©2023 Bloomberg L.P.
AI Startups
A tougher fundraising environment reveals which companies and sectors investors have real conviction in, and which areas aren’t attractive outside of a bull market. AI startups dominated dealmaking this year, but there is another sector that VCs have stayed committed to: defense tech. We saw the latest example of this trend just this week. On Tuesday, Shield AI raised a $200 million Series F round led by Thomas Tull’s US Innovative Technology Fund, with participation from Snowpoint Ventures and Riot Ventures, among others. The round values the San Diego–based autonomous drone and aircraft startup at $2.7 billion. The sheer size of the round alone makes this deal interesting. “Mega-rounds” over $100 million have become uncommon enough to warrant raised eyebrows in today’s climate. Through the third quarter of 2023, only 194 rounds above $100 million were raised, compared to 538 in 2022 and 841 in 2021, according to PitchBook. Late-stage fundraising has also been largely muted for much of 2023. Just over $57.3 billion was invested into late-stage startups through the third quarter of this year, much lower than the $94 billion such companies raised in 2022, and the $152 billion we saw in 2021. Brandon Tseng, the co-founder and president of Shield AI, told TechCrunch+ his company was able to raise in this environment largely because of its metrics. The company’s revenue is growing 90% year over year, per Tseng, and it is on the path to becoming profitable in 2025. This round is also made more interesting by the space the company operates in, since it’s the latest sign of how much investors have leaned into defense tech in recent years. Tseng agreed that the investor appetite for companies like his has improved a lot, and he recalled how Shield AI’s first few fundraises were particularly hard.
AI Startups
We’re pleased to announce that Disrupt, TechCrunch’s annual flagship conference, will feature a new stage this year: the AI Stage. The AI Stage will feature experts from across the AI landscape, including ethicists, entrepreneurs and investors enmeshed in developments around AI and machine learning technologies. Their areas of expertise touch on generative AI, but also copyright issues as they concern AI, like whether training AI systems on copyrighted material might be considered fair use. The AI Stage will play host to several timely panels, ranging from panels covering startups in the AI space to diversity in AI and how AI startups can make a compelling pitch deck for VCs. There’s plenty to discuss. This year, generative AI has dominated — and continues to dominate — the conversation, what with the release of text-generating AI systems like OpenAI’s GPT-4. As the tech enters the mainstream, each new day brings a new lawsuit — which the experts on the AI Stage will break down in detail. Among other topics, the AI Stage will cover art-generating AI and the many controversies surrounding it, as well as its applicability to fields ranging from marketing to journalism. It’ll also touch on automation, including how AI is being used to streamline workflows that were previously done by human workers. What are you waiting for? Grab your early bird pass today and save $800 before prices go up May 12.
AI Startups
A serial artificial intelligence investor is raising alarm bells about the dogged pursuit of increasingly-smart machines, which he believes may soon advance to the degree of divinity. In an op-ed for the Financial Times, AI mega-investor Ian Hogarth recalled a recent anecdote in which a machine learning researcher with whom he was acquainted told him that "from now onwards," we are on the brink of developing artificial general intelligence (AGI) — an admission that came as something of a shock. "This is not a universal view," Hogarth wrote, noting that "estimates range from a decade to half a century or more" before AGI comes to fruition. All the same, there exists a tension between the explicitly AGI-seeking goals of AI companies and the fears of machine learning experts — not to mention the public — who understand the concept. "'If you think we could be close to something potentially so dangerous,' I said to the researcher, 'shouldn’t you warn people about what’s happening?'" the investor recounted. "He was clearly grappling with the responsibility he faced but, like many in the field, seemed pulled along by the rapidity of progress." Like many other parents, Hogarth said that after this encounter, his mind drifted to his four-year-old son. "As I considered the world he might grow up in, I gradually shifted from shock to anger," he wrote. "It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight." When wondering whether "the people racing to build the first real AGI have a plan to slow down and let the rest of the world have a say," the investor noted that although it feels like a "them" versus "us" situation, he has to admit that he, too, is "part of this community" as someone who's invested in more than 50 AI startups. "A three-letter acronym doesn’t capture the enormity of what AGI would represent, so I will refer to it as what is: God-like AI," Hogarth declared. "A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it." "To be clear, we are not here yet," Hogarth continued. "But the nature of the technology means it is exceptionally difficult to predict exactly when we will get there. God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race." While the investor has spent his career funding and curating AI research — even going so far as to start his own venture capital firm and launching an annual "State of AI" report — something appears to have changed, where now, "the contest between a few companies to create God-like AI has rapidly accelerated." "They do not yet know how to pursue their aim safely and have no oversight," Hogarth mused. "They are running towards a finish line without an understanding of what lies on the other side." While he plans to invest in startups that will pursue AI more responsibly, the AI mega-funder said that he hasn't gotten much traction with his counterparts. "Unfortunately, I think the race will continue," Hogarth wrote. "It will likely take a major misuse event — a catastrophe — to wake up the public and governments." More on apocalyptic AI: A Third of Researchers Think that AI Could Cause a Nuclear-Level Catastrophe
AI Startups
Chatbots are AI programs that can talk to humans. They can do many things, like help customers, entertain users, or teach students. Some chatbots, like ChatGPT or Bard, use large language models that can write texts on any topic. They can also learn from data and conversations. But these chatbots may leak sensitive information if employees use them for work. Human reviewers may see the chat entries, or the chatbots may save and use the data. That's why Alphabet Inc., Google's parent company, told its employees to be careful around chatbots, even Bard. It asked them not to share confidential information with these chatbots, such as code, passwords, or plans. Alphabet is not alone. Amazon, Apple, and Samsung also warned or banned their employees from using chatbots like ChatGPT or GitHub Copilot. These companies also want to make their own chatbots, as they see the value of AI for innovation and productivity. They bought AI startups or funded AI research to improve their skills. But they also face problems and rules in launching their chatbots to the public. For example, Google delayed Bard's release in the EU after Irish regulators raised privacy issues. Chatbots are an amazing and powerful technology that can improve human communication and creativity. But they also have ethical and security issues that need careful attention.
AI Startups
As the race to build generative AI tools for the enterprise devolves into a battle royale, big tech companies are busy wielding their most powerful weapons: checkbooks. The Exchange explores startups, markets and money. Earlier today, Typeface raised $100 million at a $1 billion valuation, mere months after a $65 million round in February. It’s something to think about considering that the company was founded in 2022. But besides the fact that we are once again seeing rapid-fire venture rounds at unicorn valuations, the investor list in Typeface’s round is worth noting. Salesforce Ventures led the round. The CRM and cloud giant recently launched a $500 million fund to invest in generative AI startups, so its presence in this deal is not a complete shock, but the SaaS pioneer had company: both Alphabet (through its GV investing arm) and Microsoft (through its M12 investing effort) invested in Typeface. That’s a strange set of bedfellows: Salesforce and Microsoft have competing CRM products, and Microsoft and Alphabet compete in, to pick a few areas, search, productivity software, and public cloud infrastructure. The Typeface cap table engenders a simple question: Where else are major corporate venture capital (CVC) investors putting their money to work? To get a feel for the situation, I listed deals from a number of historically active CVC arms of major tech companies. Turns out, the Typeface round is funny for its internally competitive investor list, but it isn’t an outlier at all when it comes to big tech dollars flowing into startup accounts. The majors are busy these days.
AI Startups
The House Fund, the pre-seed and early stage venture capital fund focused on UC Berkeley startups, specifically AI startups, today announced that it’s closed its third tranche — Fund III — at $115 million. With the close of Fund III, Ken Goldberg, the UC Berkeley professor and prolific roboticist, will join The House Fund as a part-time partner, said Jeremy Fiance, the managing partner at The House Fund, in an email interview with TechCrunch. “We’re called The House Fund because we’re the home for the Berkeley startup community,” Fiance said. “We support Berkeley people in their entrepreneurial journey, whether that’s joining startups, starting startups, advising them, providing feedback on their ideas well before any startup materializes and so much more.” Fund III will invest in Berkeley-affiliated AI startups — whether founded by alumni, faculty, Ph.D. candidates, postdoctoral and grad students, recent graduates, undergraduates or dropouts. Roughly 70% of Fund III will go toward startups at the pre-seed stage, Fiance says. But The House Fund will also lead, co-lead and participate in seed rounds and “consider” a “small number” of first-round Series A rounds with founders who’ve had previous exits worth $500 million to over $1 billion. “We write first checks up to $2 million and reserve for follow-ons,” Fiance added. “We can write a check as small as $100,000 in a recently graduated founder or dropout and are fine being the only investor, for example.” The House Fund, launched in 2016, claims to be the “first-ever fund” focused on Berkeley startups and the only fund backed by both the University of California System Endowment and UC Berkeley’s campus endowment. The House Fund currently has $330 million under management and over 100 funds have follow-on invested in its startups, Fiance says. (The VC firm’s first fund was $6 million, and its second fund, closed in late 2019, was $44 million.) Notable investments from The House Fund’s previous funds include Anyscale, a company building a framework for distributed compute projects; software development platform Crowdbotics; and Goldberg’s Ambi Robotics. “There’s roughly 600,000 Berkeley people — among the biggest alumni bases in the world,” Fiance said. “And there’s been a longstanding ask from most alumni for more accessible community engagement and frictionless ways to unlock from Berkeley as an alum. As a big public school, Berkeley historically hasn’t had the resources to meet this demand. So we took matters into our own hands in service to our community … We exist to meet the needs of entrepreneurs, curating comprehensive resources for Berkeley AI founders and creating the connected environment startups need to thrive.” Startups who receive backing from The House Fund get access to tech from the VC firm’s partners, mentorship from The House Fund’s LPs and advisors, access to talent from the Berkeley campus and alumni base and introductions to potential customers with Berkeley relationships.
AI Startups
How dangerous is AI? Regulate it before it’s too late As an Artificial Intelligence researcher, I’ve always felt the worst feature of AI is its role in the spread of lies. The AI-amplification of lies in Myanmar reportedly contributed to the Rohingya massacre, the spread of COVID-19 and vaccine misinformation likely contributed to hundreds of thousands of preventable deaths and election misinformation has weakened our democracy and played a part in the Jan. 6, 2021 insurrection. This was all possible because humans turned algorithms into weapons, manipulating them to spread noxious information on platforms that claimed to be neutral. These algorithms are all proprietary to companies, and they are unregulated. And so far, none of the companies have admitted any liability. Apparently, no one feels guilty. If the federal government doesn’t start regulating AI companies, it will get a lot worse. Billions of dollars are pouring into AI technology that generates realistic images and text, with essentially no good controls on who generates what. This will make it exponentially easier to generate fake news, fake violence, fake extremist articles, non-consensual fake nudity and even fake “scientific” articles that look real on the surface. Venture capital firms investing in this technology liken it to the early launch of the internet. And as we know, it’s much easier to spread outrageous falsehoods than it is to spread the truth. Is this really like the beginning of the internet? Or is this like launching a nuclear bomb on the truth? AI startups say that by making this technology public, they are “democratizing AI.” It’s hard to believe that coming from companies that stand to potentially gain billions by getting people to believe it. If they were instead about to be the victim of a massacre stemming from AI-generated misinformation, or even a victim of AI-amplified bullying, perhaps they might feel differently. Misinformation is not innocent — it is a major cause of wars (think of WWII or Vietnam), although most people are unfamiliar with the connection. There are things we can do right now to address these critical problems. We need regulations around the use and training of specific types of AI technology Let’s start with regulating facial recognition technology (FRT) — that is, unless you don’t mind being recognized by AI and then kicked out of Radio City Music Hall because of ongoing litigation involving your employer. FRT users should have to get a license or certification to use it or develop it, which comes with training for all users and developers. We should figure out how to reduce the spread of particularly harmful misinformation; an easy solution to this is to make social media companies responsible for posted content, like any other publisher. Other countries have such laws, but the U.S. doesn’t. We also should enforce existing laws around monopolistic practice, which will allow users to choose social media platforms. If you cannot easily download your data from your social media platform and upload it into a new one, then the social media company is holding your data hostage, which is arguably monopolistic. More competition will allow users to choose content moderation platforms. We do not all need to be supporting companies that can host and perpetuate real harm online and in the real world, without much effort to combat it. We do not all need to be subject to the same attention-seeking algorithmic behavior. We should force companies to remove all child abuse content. It is embarrassing that AI can easily find this content but is not enabled to remove it. Even more embarrassing is that the companies apparently don’t always remove it when they are notified or delay efforts to do so. It is extremely important that interpretable (transparent) models are used for high-stakes decisions that deeply affect people’s lives. I have written extensively about this, pointing out that for high-stakes decisions, interpretable models have performed just as well as black box models, even on difficult benchmark datasets. My lab has been instrumental in developing such interpretable machine learning models, some of which are used in high-stakes decisions, even in intensive care units. Finally, we should figure out how to regulate any new and potentially dangerous technology before it causes harm on a wide scale. Sen. Ted Lieu’s (D-Calif.) poignant New York Times op-ed suggested the creation of a government agency for AI — which is a great idea. This technology feels like a runaway train that we’re chasing on foot. With little incentive to do good, technology companies don’t appear to care about how their products impact — or even wreck — society. It seems they make too much money to truly care, so we, the citizens, need to step in and demand regulation. If not, we’re very likely in for a dangerous avalanche of misinformation. Cynthia Rudin is a professor of computer science; electrical and computer engineering; statistical science; as well as biostatistics and bioinformatics at Duke University, where she directs the Interpretable Machine Learning Lab. Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
AI Startups
MindsDB helps developers apply machine learning to their data to build AI applications. Founded in 2017, the startup began as an AI open source project that some doubted in its early days. Now MindsDB has 100 enterprise customers and recently raised $25 million in a seed extension round. MindsDB cofounder and CEO Jorge Torres has been an artificial-intelligence evangelist for most of his career and believes in the power of the technology for "human augmentation." That mindset has really driven him as he's navigated his career, first as an electrical engineer, and then a developer, wanting to build things to help people. "I fell in love with the programming part," Torres said. "I felt that you could build something on your laptop and have the outcome of what you build on your laptop be useful for tens of thousands of people, hundreds of thousands of people, millions of people." It was that "fascinating" opportunity to help people at scale with his work that ultimately led Torres to cofound MindsDB, an AI startup that essentially helps software developers take their data and apply machine-learning frameworks to build their own AI applications. Torres estimates that there's roughly 30 million developers in the world and MindsDB's technology will allow them to perform the roles of machine-learning engineers. That was exactly the kind of reach he had in mind. MindsDB recently raised $25 million in a seed extension round led by Mayfield Fund with existing investors like Benchmark and OpenOcean contributing. The round valued the startup at $160 million post-money. Less than three months before that fundraise, Benchmark led the startup's $16.5 million seed round at a $56 million valuation, Forbes reported. Investors and businesses across the tech industry are now suddenly paying more attention to MindsDB and the utility of its technology amid an explosion in interest in AI and VCs writing rare checks these days to AI startups. MindsDB is in some ways apart of that wave and currently on a growth rocketship with no signs of slowing down. But six years ago, MindsDB was just an abstraction and later an open source AI project that few paid attention to. From failure to a lightbulb moment Torres got the idea that became the basis for MindsDB while he was in between jobs. In 2016, he and MindsDB cofounder Adam Carrigan had just shut down their London-based startup Real Life Analytics, which helped to enable targeted advertising on any digital screen through visual recognition technology. The company achieved this with tiny cameras that scanned viewers of screens in public places to gauge their demographics. But, the company "failed fantastically" since hardware is a tough business to build and scale, Torres said. But Torres said he wasn't entirely discouraged. "We wanted to be cofounders of some big idea," he said. After Real Life Analytics flopped, Torres went on to work for Aneesh Chopra, the former chief technology officer of the US, appointed by President Barack Obama. at his company, CareJourney. Torres said they were taking claims data from health care programs Medicare and Medicaid and applying machine learning to identify patterns that could lead to higher costs for patients. That experience was a lightbulb moment for Torres, who then called up Carrigan to share that he wanted to start a company that would help people apply machine learning to their data. But in the first six months of development, Torres and Carrigan encountered doubters concerning their idea. "'You're going to be able to automate the work of a machine learning engineer? That's impossible'" Torres recalled people telling him at the time. The Y Combinator Effect It wasn't until Torres happened to meet Michael Seibel, Y Combinator's managing director and partner, that the startup, which by then, was an open source AI project, began to get more attention. Torres said Seibel encouraged them to apply for the startup accelerator's program and they were accepted to its winter 2020 cohort. "After YC, we became an actual company," Torres said. Luck has been on the company's side since, Torres admits. There's been a tidal wave of growth for MindsDB, particularly this year. It's now raised $41.5 million in 2023 alone with marquee investors like Benchmark and Mayfield joining its rocketship. The startup is also adding enterprise customers regularly for its paid product, which recently reached 100 customers. Torres estimates that if the company continues at this rate, it could have 200 customers by the end of June. To keep up with the demand of its growing customer base, Torres says his team has to "build as the airplane is flying" to ensure the user experience is spot on. "It is what is next for us," he said. "Guaranteeing that when we get to 18,000 users, they all have the same experience as the first 15." Read the original article on Business Insider
AI Startups
The U.S.-China decoupling is giving rise to a divided tech landscape between the two major economies, shaping the development of the red-hot area of generative AI, which turns text into various forms of content like prose, images, and videos. China, in order to reduce dependence on the U.S. technological foundation, has been pursuing its own large language models that match OpenAI’s GPT models. But unlike the U.S., some of its most advanced AI endeavors are happening at established internet juggernauts, such as Baidu. The search engine and autonomous driving giant rolled out its counterpart to ChatGPT in March. Now the 23-year-old firm wants to have a stake in other AI startups, too. The company aims to have a stake in other AI startups. During a JPMorgan summit in China this week, Baidu’s co-founder and CEO Robin Li announced the launch of a billion yuan ($145 million) fund to back generative AI companies. The fund can be compared to the OpenAI Startup Fund, which started at $100 million and eventually grew to $175 million, as my colleague Connie noted. The fund will invest up to 10 million yuan, approximately $1.4 million, in a project. Given the check size, the fund is clearly targeting early-stage AI applications, which isn’t surprising, given that Chinese generative AI startups haven’t experienced widespread adoption and most investments are concentrated in the seed and early stages. Furthermore, Baidu intends to use the fund to grow the adoption of its own large language model Ernie Bot. “American developers are building new applications based on ChatGPT or other language models. In China, there will be an increasing number of developers building AI applications using Ernie as their foundation,” Li said. In that sense, the fund seems to be calling for applications of AI rather than developers of its foundational layer. The fund won’t be short of pitches. Over the years, Chinese startups have gained recognition for their ingenuity in devising novel business models, ranging from live streaming, live commerce to short videos. Li predicted that, in the generative AI age, Chinese companies will once again lead the way in discovering commercial applications for AI. “I’m very bullish on China’s AI development. Over the past few decades, China has warmly embraced new technologies. Even though we didn’t invent Android, iOS or Windows, we developed a host of very innovative applications like WeChat, Douyin and Didi. Many of them are popular and useful. The same trend is playing out in the AI age. Technology ushers in a myriad of possibilities and we are good at capturing them to build applications.” However, a pertinent question lies in whether the foundational level — China’s homegrown large language models — will be robust enough to support the range of real-life scenarios expected of them. China wants its homegrown LLMs so it won’t be prone to U.S. sanctions that cut off key technological supply, as seen in the semiconductor industry. Aside from Baidu, Chinese tech giants like Alibaba and Tencent are also developing their own large language models.
AI Startups
Dust is a new AI startup based in France that is working on improving team productivity by breaking down internal silos, surfacing important knowledge and providing tools to build custom internal apps. At its core, Dust is using large language models (LLMs) on internal company data to give new super powers to team members. Co-founded by Gabriel Hubert and Stanislas Polu, the pair has known each other for more than a decade. Their first startup called Totems was acquired by Stripe in 2015. After that, they both spent a few years working for Stripe before parting ways. Stanislas Polu joined OpenAI where he spent three years working on LLMs’ reasoning capabilities while Gabriel Hubert became the head of product at Alan. They teamed up once again to create Dust. Unlike many AI startups, Dust isn’t focused on creating new large language models. Instead, the company wants to build applications on top of LLMs developed by OpenAI, Cohere, AI21, etc. The team first worked on a platform that can be used to design and deploy large language model apps. It has then focused its efforts on one use case in particular — centralizing and indexing internal data so that it can be used by LLMs. From an internal ChatGPT to next-gen software There are a handful of connectors that constantly fetch internal data from Notion, Slack, Github and Google Drive. This data is then indexed and can be used for semantic search queries. When a user wants to do something with a Dust-powered app, Dust will find the relevant internal data, use it as the context of an LLM and return an answer. For example, let’s say you just joined a company and you’re working on a project that was started a while back. If your company fosters communication transparency, you will want to find information in existing internal data. But the internal knowledge base might not be up to date. Or it might be hard to find the reason why something is done this way as it’s been discussed in an archived Slack channel. Dust isn’t just a better internal search tool as it doesn’t just return search results. It can find information across multiple data sources and format answers in a way that is much more useful to you. It can be used as a sort of internal ChatGPT, but it could also be used as the basis of new internal tools. “We're convinced that natural language interface is going to disrupt software,” Gabriel Hubert told me. “In five years' time, it would be disappointing if you still have to go and click on edit, settings, preferences, to decide that your software should behave differently. We see a lot more of our software adapting to your individual needs, because that's the way you are, but also because that's the way your team is — because that's the way your company is.” The company is working with design partners on several ways to implement and package the Dust platform. “We think there are a lot of different products that can be created in this area of enterprise data, knowledge workers and models that could be used to support them,” Stanislas Polu told me. It’s still early days for Dust, but the startup is exploring an interesting problem. There are many challenges ahead when it comes to data retention, hallucination and all of the issues that come with LLMs. Maybe hallucination will become less of an issue as LLMs evolve. Maybe Dust will end up creating its own LLM for data privacy reasons. Dust has raised $5.5 million (€5 million) in a seed round led by Sequoia with XYZ, GG1, Seedcamp, Connect, Motier Ventures, Tiny Supercomputer, AI Grant and a bunch of business angels also participating, such as Olivier Pomel from Datadog, Julien Codorniou, Julien Chaumond from Hugging Face, Mathilde Colin from Front, Charles Gorintin and Jean-Charles Samuelian-Werve from Alan, Eléonore Crespo and Romain Niccoli from Pigment, Nicolas Brusson from BlaBlaCar, Howie Liu from Airtable, Mathieu Rouiff from PhotoRoom, Igor Babuschkin and Irwan Bello. If you take a step back, Dust is betting that LLMs will greatly change how companies work. A product like Dust works even better in a company that fosters radical transparency instead of information retention, written communication instead of endless meetings, autonomy instead of top-down management. If LLMs deliver on their promise and greatly improve productivity, some companies will gain an unfair advantage by adopting these values as Dust will unlock a lot of untapped potential for knowledge workers.
AI Startups
When it first appeared on the scene around ten years ago, Notion Capital was poised to take advantage of the oft-repeated phrase, “Software is eating the world.” In particular, it was aiming at what was then called the enterprise SaaS space. Several years on, and it’s capitalised on that thesis, investing in in more than 100 early-stage companies. It’s now entering its next phase, having completed the final close of its new €300 million ($325.6m) fund, its fifth, which closed at its ‘hard cap’. The proof of the pudding is in the eating, and Notion’s software portfolio has certainly been ‘eating’ a lot of the world. It’s invested into CurrencyCloud, GoCardless, Paddle and Yulife in the UK, HeyJobs and Upvest in Germany, Mews in the Czech Republic, Cobee in Spain, TestGorilla in the Netherlands, Unbabel in Portugal and Workable in Greece. Fund Five has also already invested in Bound, DataOps, M3ter and Resistant AI, and the VC fund expects to make around 20 core Series A+ investments in total from this fund. Plus, there’s a difference, perhaps driven by Brexit. This “Notion V” fund is Euro denominated and Luxembourg-based, meaning it will have an increasingly pan-European focus for investments. The firm also announced three senior promotions. Partner Itxaso del Palacio becomes General Partner and Stephanie Opdam and Kamil Mieczakowski will move to Partner. Del Palacio joined Notion in 2018 from M12 (Microsoft Ventures) where she was an Investment Partner, and since joining has led investments in Bound, Cledara, Cobee and Yulife, amongst others. Opdam and Mieczakowski also joined in 2018 and have led investments in DataOps and Resistant AI. But Stephen Chandler, Managing Partner at Notion Capital, believes the phrase “Entperprise SaaS” is no longer fit for purpose. He told me: “We used to really use the word SAS exclusively for the focus, but, you know, to be honest, our focus for a while has been broader than that. SaaS has evolved beyond the application layer into infrastructure, hybrid, cloud, edge computing, all of those kinds of things. But also different monetization models. It’s no longer just a case of subscription software. We’re seeing people taking a share of payments, having other embedded finance propositions, marketplaces. All of those things sit within the Notion wheelhouse. We tend to use the phrase ‘business software’ now, even though sometimes I think that’s a bit boring.” He added that “the strategy remains very much the same. We continue to focus at Series A, where our sweet spot is.” He said Notion will also put small checks into pre-seed companies “We will literally go as low as 50,000 euros into company. And then we will also come in on growth stage opportunities.” Does he see an opportunity with Generative AI?: “Where we see the opportunity within our kind of remit is more a verticalized play. In particular, where a company has proprietary data that it can leverage. We see some really interesting companies emerging from that. And also some threats I’m sure as well.” He also criticised investors who had gone big into AI startups when so much of the sector is unproven, and singled out the recent $105 million investment into Mistral: “I think there’s been a tonne of hype and when I start seeing companies doing $100 million pre-seed raises I find it a little concerning, to be honest. You sit and look at the names of the people that have invested and you think, well, actually, even if the company really nails it, you’re never gonna make any money and that really?” He thinks the “horizontal play” of Generative AI is much more the purview of Big Tech: “I think it’s going to be the big guys that lead and I also think it’s going to commoditize quite quickly.” Notion will also offer a third Opportunities Fund in 2024, putting additional growth follow-on capital into the firm’s best performing venture assets and other growth stage business software companies in Europe. Investors in the new fund include Cortes Capital LLC, KfW Capital, and TNO. Returning LPs included: British Patient Capital, Novo Holdings and RSJ.
AI Startups
Pranjali Awasthi, 16, came up with her startup while interning in university research labs. She was accepted to the September 2021 HF0 residency where she launched her product. Awasthi is skipping college to focus on building Delv.AI after finishing high school early. This as-told-to essay is based on a conversation with Pranjali Awasthi, a 16-year-old AI founder. The following has been edited for length and clarity. My dad is an engineer who believes computer science is a course that should be taught alongside other core programs in schools. His passion and values encouraged me to get into coding when I was seven. When I moved to Florida with my family at age 11 from India, my curiosity thrived as I could take computer science classes and do competitive math. Interning in university research labs gave me the idea for my company When I was 13, I started interning in university research labs at Florida Internal University working on machine learning projects alongside going to high school. Because of the pandemic, my high school had gone virtual, so I was able to intern for about 20 hours a week. My tasks included doing searches, extracting data, and creating literature reviews. In 2020, OpenAI released its ChatGPT-3 beta, and I knew we could use it to make extracting and summarizing research data easier. As a research intern, I was hyper-aware of how hard it was getting to find exactly what you needed on search engines. I began thinking about how AI could solve this problem. That was the seed for my company, Delv.AI. Delv.AI wasn't a fully formed idea yet, but I knew I wanted start a company using machine learning to extract data and eliminate data silos. I landed a spot on an accelerator for AI startups In 2021, I went to a Miami Hack Week where I met Lucy Guo and Dave Fontenot, partners at Backend Capital. They also founded HF0 residency – a live-in startup accelerator in San Francisco and Miami. I was accepted into their September 12-week cohort in exchange for a small piece of my future company. My parents told me I should take the opportunity – the network alone would be worth it, so I took an absence from high school. The residency paid for me to commute back and forth to the house via Uber every day. I launched the beta for Delv.AI on Product Hunt, a platform for people to share software for free, during the residency on my birthday – I'd just turned 15. It became the number three product of the day. As more content gets uploaded online, it's getting harder for people to find the right information, especially when that information is very specific. Delv.AI helps researchers leverage AI to find exactly the information they're looking for. The residency helped me land investment and build my network My residency at HF0 concluded with an investor-focused demo day in late 2021. After presenting my project, I received my first investment through On Deck and Village Global. The initial investment in early 2022 was enough to hire my first engineer and work on a minimum viable product. I formed strong connections in the A.I. community throughout the fellowship. This network was helpful for fundraising in the months following the residency. My success on Product Hunt added to the momentum. We've raised $450,000 in total from a combination of funds and angels including Lucy Guo and Village Global. We're currently valued at around $12 million. College is a 'maybe someday' for me, but it's not my priority My parents are Indian, so academics are a priority for them. I wanted to get a GED, but we compromised on me finishing my high school credits online, which I completed in June 2023. My decision to not go to college is hard for them, but they understand. I have a lot of responsibility on my plate and passion for what I'm building. I might consider college down the line to learn business skills like law and psychology, where the in-person format of college could be beneficial. I run a small and lean team, but I still do much of the work I start my days with running and prepping for my team's daily huddle. As my team members are older than me, good communication is key, as is knowing when to take the reigns. As a young founder, I have to be clear in communicating the company's mission and reminding everyone that we need to work together remotely. After the huddle, I will code and manage my team of engineers. There's a lot of logistical stuff to handle. I'm currently the only person managing HR and operations. We recently hired someone to help with customer service and sales and are hiring overseas engineers. I power down the day by sending e-mails, taking care of user requests, having dinner, and then sleeping. I also try to fit in time to play an instrument or a game of badminton. Being a young founder has its fair share of challenges and opportunities I've found that being young, people are more inclined to help me or answer questions. But when I walk up to people, they sometimes look down at me – literally and figuratively – as they're trying to figure out what I want. I try to have a clear objective and keep conversations rich and stacked with content, which helps. I try not to take in everything that social media or the news throws at me. I'm not afraid to mute or unfollow people. We're dealing with a flood of new AI products – the competition is insane. In 2021 in Miami, everyone was talking about crypto. By the end of 2022, that had changed completely to AI. The market is growing very fast, so the next thing for us is refining our product with user feedback and raising more funding. Read the original article on Business Insider
AI Startups
OpenAI boss Sam Altman admitted a computer chip shortage is hindering ChatGPT’s progress during an off-the-record meeting in London last month – the details of which surfaced after one attendee mistakenly published a blog post about the event. Altman’s candid remarks to a roomful of app developers and startup founders were detailed in a post late last month by Raza Habib, the CEO of London-based AI firm Humanloop. During the discussion, Altman purportedly said OpenAI lacked enough graphic processing units, or GPUs – the ultra-powerful chips required to train and run AI software — to make fast improvements to ChatGPT. “A common theme that came up throughout the discussion was that currently OpenAI is extremely GPU-limited and this is delaying a lot of their short-term plans,” Habib wrote in the blog post. “The biggest customer complaint was about the reliability and speed of the API. Sam acknowledged their concern and explained that most of the issue was a result of GPU shortages,” Habib added. OpenAI is just one of many firms impacted by the industrywide chip shortage. In April, The Information reported that AI startups were experiencing major difficulties due to server shortages at key cloud providers such as Amazon, Microsoft, and Google. Demand for the chips is so strong that Nvidia, the leading US manufacturer, recently attained a $1 trillion market cap for the first time. The company’s shares are up by a whopping 171% since the start of the year. Habib quickly took down the blog post, but archived versions spread quickly on social media and other platforms. The blog post sparked a pair of lengthy threads on the forum Hacker News, where users dissected Altman’s purported roadmap for OpenAI. The original post now displays a message that reads, “This content has been removed at the request of OpenAI.” When reached for comment, Habib confirmed that he had taken down the blog post after someone at OpenAI told him the event was meant to be “off the record.” “We posted our notes from the developer session, they messaged me to say they had intended it to be off the record and apologized that they weren’t clear,” Habib said. “I took it down because we’re close partners and we want to support them as they support us.” Habib did not respond to a request for further comment. The Post has reached out to OpenAI for comment. According to the blog post, Altman reportedly complained that a lack of computing power has blocked OpenAI from implementing larger “context windows” for ChatGPT. Simply defined, context windows are the amount of information the chatbot can process when responding to a specific user prompt. Without an improved context window, ChatGPT and other chatbots are limited in how much information they can “remember,” including past user prompts, and lack the capacity for more complex assignments such as complex coding. “The main takeaway is indeed the GPU bottleneck, which is caused by a huge gap between demand and supply, that must be solved as soon as possible,” Omri Geller, the CEO, and co-founder of Israel-based Run:ai, told The Post. “This goes for OpenAI, but it’s true for the whole ecosystem.” Altman also reportedly laid out his immediate plans for OpenAI and how to improve the ChatGPT experience for professional customers. This year, Altman reportedly said that OpenAI would focus on creating a “cheaper and faster GPT-4” and “longer context windows” for the chatbot, as well as improvements to the API meant to streamline the experience for developers. Altman also purportedly reassured worried developers that OpenAI had no plans to “release more products beyond ChatGPT.” “Quite a few developers said they were nervous about building with the OpenAI APIs when OpenAI might end up releasing products that are competitive to them,” Habib wrote. Fortune was the first to report on the deleted blog post.
AI Startups
OpenAI, one of the best-funded AI startups in business, is exploring making its own AI chips. Discussions of AI chip strategies within the company have been ongoing since at least last year, according to Reuters, as the shortage of chips to train AI models worsens. OpenAI is reportedly considering a number of strategies to advance its chip ambitions, including acquiring an AI chip manufacturer or mounting an effort to design chips internally. OpenAI CEO Sam Altman has made the acquisition of more AI chips a top priority for the company, Reuters reports. Currently, OpenAI, like most of its competitors, relies on GPU-based hardware to develop models such as ChatGPT, GPT-4 and DALL-E 3. GPUs’ ability to perform many computations in parallel make them well-suited to training today’s most capable AI. But the generative AI boom — a windfall for GPU makers like Nvidia — has massively strained the GPU supply chain. Microsoft is facing a shortage of the server hardware needed to run AI so severe that it might lead to service disruptions, the company warned in a summer earnings report. And Nvidia’s best-performing AI chips are reportedly sold out until 2024. GPUs are also essential for running and serving OpenAI’s models; the company relies on clusters of GPUs in the cloud to perform customers’ workloads. But they come at a sky-high cost. An analysis from Bernstein analyst Stacy Rasgon found that, if ChatGPT queries grew to a tenth the scale of Google Search, it’d require roughly $48.1 billion worth of GPUs initially and about $16 billion worth of chips a year to keep operational. OpenAI wouldn’t be the first to pursue creating its own AI chips. Google has a processor, the TPU (short for “tensor processing unit”), to train large generative AI systems like PaLM-2 and Imagen. Amazon offers proprietary chips to AWS customers both for training (Trainium) and inferencing (Inferentia). And Microsoft, reportedly, is working with AMD to develop an in-house AI chip called Athena, which OpenAI is said to be testing. Certainly, OpenAI is in a strong position to invest heavily in R&D. The company, which has raised over $11 billion in venture capital, is nearing $1 billion in annual revenue. And it’s considering a share sale that could see its secondary-market valuation soar to $90 billion, according to a recent Wall Street Journal report. But hardware is an unforgiving business — particularly AI chips. Last year, AI chipmaker Graphcore, which allegedly had its valuation slashed by $1 billion after a deal with Microsoft fell through, said that it was planning job cuts due to the “extremely challenging” macroeconomic environment. (The situation grew more dire over the past few months as Graphcore reported falling revenue and increased losses.) Meanwhile, Habana Labs, the Intel-owned AI chip company, laid off an estimated 10% of its workforce. And Meta’s custom AI chip efforts have been beset with issues, leading the company to scrap some its experimental hardware. Even if OpenAI commits to bringing a custom chip to market, such an effort could take years and cost hundreds of millions of dollars annually. It remains to be seen if the startup’s investors, one of which is Microsoft, have the appetite for such a risky bet.
AI Startups
It's that time of year again: the week when startups in Y Combinator's latest batch present their products for media -- and investor -- scrutiny. Over the next two days, roughly 217 companies will present in total, a tad smaller than last winter's 235-firm cohort as VC enthusiasm a slight slump. In the first half of 2023, VCs backed close to 4,300 deals totaling $64.6 billion. That might sound like a lot. But the deal value represented a 49% decline from H1 2022 while the deal volume was a 35% dip year-over-year. In a bright note, one segment -- driven by equal parts hype and demand -- is wildly outperforming the others: AI. Nearly a fifth of total global venture funding from August to July came from the AI sector, according to CrunchBase. And the voraciousness is manifesting in this summer's Y Combinator cohort, which features over double (57 versus 28) the number of AI companies compared to the winter 2022 batch. To get a sense of what AI technologies are driving investments these days, I dove deep into the summer 2023 batch, rounding up the YC-backed AI startups that appeared to me to be the most differentiated -- or hold the most promise. . AI infrastructure startups Several startups in the Y Combinator W2023 cohort focus not on what AI can accomplish, but on the tools and infrastructure necessary to build AI from the ground up. There's Shadeform, for example, which provides a platform to enable customers to access and deploy AI training and inferencing workloads to any cloud provider. Founded by data engineers and distributed systems architects Ed Goode, Ronald Ding and Zachary Warren, Shadeform aims to ensure AI jobs run on time and at "optimal cost." As Goode notes in a blog post on the Y Combinator website, the explosion in demand for hardware to develop AI models, particularly GPUs, has resulted in a shortage of capacity. (Microsoft recently warned of service disruptions if it can't get enough AI chips for its data centers.) Smaller providers are coming online, but they don't always deliver the most predictable resources -- making it difficult to scale across them. Shadeform solves for this problem by letting customers launch AI jobs anywhere, across public cloud infrastructure. Leveraging the platform, companies can manage GPU instances on every provider from a single pane of glass, configuring "auto-reservations" when the machines they need are available or deploying into server clusters with a single click. Image Credits: Shadeform Another intriguing Y Combinator startup tackling challenges in AI operations is Ceralyze, founded by ex-Peloton AI engineer Sarang Zambare. Ceralyze is Zambare's second YC go-around after leading the AI team at cashier-less retail startup Caper. Ceralyze takes AI research papers -- the kind typically found on open access archives like Arxiv.org -- and translates the math contained within into functioning code. Why is that useful? Well, lots of papers describe AI techniques using formulas but don't provide links to the code that was used to put them into practice. Developers are normally left having to reverse engineer the methods described in papers to build working models and apps from them. Ceralyze seeks to automate implementation through a combination of AI models that understand language and code and PDF parsers "optimized for scientific content." From a browser-based interface, users can upload a research paper, ask Cerylize natural language questions about specific parts of the paper, generate or modify code and run the resulting code in the browser. Now, Ceralyze can't translate everything in a paper to code -- at least not in its current state. Zambare acknowledges that the platform's code translation only works for a "small subset of papers" right now and that Ceralyze can only extract and analyze equations and tables from papers, not figures. But I still think it's a fascinating concept, and I'm hoping it'll grow and improve with time -- and the right investments. AI dev tools Still developer-focused but not an AI infrastructure startup per se, Sweep autonomously handles small dev tasks like high-level debugging and feature requests. The startup was launched this year by William Zeng and Kevin Lu, both veterans of the video-game-turned-social-network Roblox. "As software engineers, we found ourselves switching from exciting technical challenges into mundane tasks like writing tests, documentation and refactors," Zeng wrote on Y Combinator's blog. "This was frustrating because we knew large language models [similar to OpenAI's GPT-4] could handle this for us." Sweep can take a code error or GitHub issue and plan how to solve it, Zeng and Lu say -- writing and pushing the code to GitHub via pull requests. It can also address comments made on the pull request either from code maintainers or owners -- a bit like GitHub Copilot but more autonomous. "Sweep started when we realized some software engineering tasks were so simple we could automate the entire change," Zeng said. "Sweep does this by writing the entire project request with code." Given AI's tendency to make mistakes, I'm a little skeptical of Sweep's reliability over the long run. Fortunately, so are Zeng and Lu -- Sweep doesn't automatically implement code fixes by default, requiring a human review and edit them before they're pushed to the master codebase. AI apps Transitioning away from the tooling subset of AI Y Combinator startups this year, we have Nowadays, which bills itself as the "AI co-pilot for corporate event planning." Anna Sun and Amy Yan co-founded the company in early 2023. Sun was previously at Datadog, DoorDash and Amazon while Yan held various roles at Google, Meta and McKinsey. Not many of us have had to plan a corporate event -- certainly not this reporter. But Sun and Yan describe the ordeal as arduous, needlessly tiring and expensive. "Corporate event planners are bombarded with endless calls and emails while planning events," Sun writes in a Y Combinator blog post. "Stressing over tight schedules, planners are paying for full-time assistants or tools that cost them over $100,000 a year." So, Sun and Yan thought, why not offload the most painful parts of the process to AI? Enter Nowadays, which -- provided the details of an upcoming event (e.g. dates and the number of attendees) -- can automatically reach out to venues and vendors and manage relevant emails and phone calls. Nowadays can even account for personal preferences around events, like amenities near a given venue and activities within walking distance. I should note that it isn't entirely clear how Nowadays works behind the scenes. Is AI actually answering and placing phone calls and returning emails? Or are humans involved somewhere along the way -- say for quality assurance? Your guess is as good as mine. Nevertheless, Nowadays is a very cool idea with a potentially huge addressable market ($510.9 billion by 2030, according to Allied Market Research), and I'm curious to see where it goes next. Image Credits: Nowadays Another startup trying to abstract away traditionally manual processes is FleetWorks, the brainchild of ex-Uber Freight product manager Paul Singer and Quang Tran, who formerly worked on moonshot projects at Airbnb. FleetWorks targets freight brokers -- the essential middlemen between freight shippers and carriers. Designed to sit alongside a broker's phone, email and transportation management system (TMS), FleetWorks can automatically book and track loads and schedule appointments with shipping facilities that lack a booking portal. Typically, brokers have to reach out via phone or email to drivers and dispatchers for loads that aren't being tracked automatically for updates on shipment statuses. Simultaneously, they have to juggle calls from trucking companies interesting in booking loads and negotiate on the price, as well as set appointment times for unscheduled loads. Singer and Tran claim that FleetWorks can lighten the load (no pun intended) by triggering calls and emails and pushing all the relevant information to the TMS or email. In addition to sharing load details, the platform can discuss price and book a carrier, even calling a driver and updating account teams on issues that crop up. "FleetWorks helps freight operators focus on high-value work by automating routine calls and emails," Singer wrote in a Y Combinator post. "Our AI-powered platform can leverage email or use a human-like voice to make tracking calls, cover loads, and reschedule appointments." If it works as advertised, that sounds genuinely useful.
AI Startups
Anguilla, a tiny British island territory in the Caribbean, may bring in up to $30 million in revenue this year thanks to its ".ai" domain name, reports Bloomberg in a piece published Thursday. Over the past year, skyrocketing interest in AI has made the country's ".ai" top-level domain particularly attractive to tech companies. The revenue is a boon for Anguilla's economy, which primarily relies on tourism and has been impacted by the pandemic. $30 million from domains may not sound like a lot compared to the billions thrown around in AI these days, but with a total land area of 35 square miles and a population of 15,753, Anguilla isn't complaining. Registrars like GoDaddy must pay Anguilla a fixed price—$140 for a two-year registration—and the prices are rising due to demand. Bloomberg says that Anguilla brought in a mere $7.4 million from .ai domain registrations in 2021, but all that changed with the release of OpenAI's ChatGPT last year. Its release spawned a huge wave of AI hype, fear, and investment. Vince Cate, who has managed the ".ai" domain for Anguilla for decades, told Bloomberg that .ai registrations have effectively doubled in the past year. "Since November 30, things are very different here," he said. Anguilla has been in charge of assigning web addresses with the ".ai" domain since 1995. Countries first received their own top-level domain names (ccTLDs, or country code top-level domains) in 1985, including domains like .us (United States), .uk (United Kingdom), and .de (Germany). These ccTLDs were originally intended to give nations a distinct presence on the Internet and were often used primarily for websites that focused on those particular countries or its residents. Over time, some ccTLDs, like .tv for Tuvalu have taken on additional meanings and broader uses, particularly when their abbreviations coincidentally stand for something else, like AI for "artificial ntelligence" in this case. As a result, high-profile AI startups such as Stability.ai and Character.ai have opted for web addresses ending in ".ai," contributing significantly to the island's unexpected revenue stream. While some experts foresee a decline in the "AI gold rush" that may eventually cool the market for ".ai" domains, Bloomberg reports, the impact on Anguilla's economy is already significant. With revenue from ".ai" domain registrations estimated to be a notable percentage of the territory’s gross domestic product ($300 million in 2021), Anguilla is a case study of how even a small Caribbean island can benefit from a global tech boom.
AI Startups
- Generative AI in supply chains will be able to forecast demand, predict when trucks need maintenance and work out optimal shipping routes, according to analysts. - "AI may be able to totally (or nearly) remove all human touchpoints in the supply chain including 'back office' tasks," said Morgan Stanley analysts. - But "Generative AI, in my mind is, [a] once in a lifetime kind of disruption that's going to happen … so there are going to be losses of jobs in the more traditional setting, but I also believe it's going to create new jobs like every prior technology disruption has," said Navneet Kapoor, chief technology and information officer at shipping giant Maersk. Artificial intelligence is likely to shake up the transportation industry — transforming how supply chains are managed and reducing the number of jobs carried out by people, according to analysts and industry insiders. Sidewalk robots, self-driving trucks and customer service bots are on their way, along with generative AI that can predict disruptions or explain why sales forecasts may have been missed, according to industry executives. "AI may be able to totally (or nearly) remove all human touchpoints in the supply chain including 'back office' tasks," Morgan Stanley's analysts led by Ravi Shanker stated in a research note last month. "The Freight Transportation space is on the cusp of a generational shift driven by disruptive technologies incl. Autonomous, EV, blockchain and drones. AI is the latest one of these potentially transformative technologies to emerge – and perhaps the most powerful to-date," the analysts added. For example, the bank said it expects several hundred autonomous trucks to begin operations in the U.S. in 2024, reducing the cost-per-mile by 25% to 30%, and eventually eliminating the need for drivers entirely (its timescale for this is "beyond three years"). Supply chains are often long and multifaceted: A company might source from manufacturers in different parts of the world, with components shipped to a central assembly plant before goods are distributed to customers globally. Producing and transporting goods, already a complex process, was disrupted by the Covid-19 pandemic and the Russia-Ukraine war — which led to a shortage of components such as computer chips and the rerouting of shipments. That complexity means companies are often unaware of what happens to their products from one end of the process to the other. "This is where AI (and machine learning) come in. By predicting what could go wrong with a fluid Transportation network … before it does, AI/ML systems could … potentially even avoid the disruption scenario entirely," Morgan Stanley's analysts added. This is a theme picked up by analysts at investment firm Jefferies, who made multiple predictions about the effect that generative AI will have on transportation and logistics. That includes forecasting demand, predicting when trucks need maintenance, working out optimal shipping routes and tracking shipments in real time. "A shortage of truck drivers, polar vortexes halting interstate commerce, and a dearth of baby formula on grocery store shelves will be a distant memory with the adoption of generative AI in the Trucking & Logistics space," its analysts, led by Stephanie Moore, wrote in a research note published on June 6. Generative AI will be a big part of shipping giant Maersk's operations, said its chief technology and information officer, Navneet Kapoor. "AI and machine learning, they've existed for a very long time … Over the years, it has progressed from being interesting research projects to more 'real' projects within companies … And now, with the advent of generative AI … we have a real pivoting opportunity to take AI mainstream," Kapoor told CNBC by phone. Maersk has used AI for several years and is now "pursuing aggressively" ways to integrate it into its business processes and functions on a larger scale, Kapoor said. One way it is already being used is to help customers plan better. "We are using AI to build what we call a predictive cargo arrival model to improve scheduled reliability for our customers … Reliability is a big deal, even post pandemic, so that they can plan their supply chain, their inventories better, and bring their costs down," Kapoor said. Maersk also wants to use AI to recommend solutions when shipping routes are congested, advising on whether goods should be flown or stored, for example. And, Kapoor said, the company wants to use a type of generative AI known as a large language model — which learns how to recognize, summarize and generate text and other types of content from vast amounts of data — to understand the sales process better. "You can get a full view of all the transactions the customer has done with you in the last year, you can figure out the root causes of why [for example] you might lose deals in a certain business area," Kapoor said. And what of potential job losses? "Generative AI, in my mind is, [a] once in a lifetime kind of disruption that's going to happen … so there are going to be losses of jobs in the more traditional setting, but I also believe it's going to create new jobs like every prior technology disruption has," Kapoor said, adding that roles such as prompt engineers (people who train AI to give better responses) are likely to be more in demand. One threat noted by Morgan Stanley is from "high tech digital entrants" to the industry, with analysts describing a double-edged sword for transportation companies: AI might help them become more efficient, but it could also reduce the need for services from the third-party logistics firms that organize packing, storage and shipping. Maersk has invested in AI startups via its Maersk Growth venture arm, including Einride, a self-driving electric truck manufacturer; Pactum, a company that automates sales negotiations; and 7bridges, an AI platform that helps companies see where their stock is and anticipate delays. "We look at [data startups] as definitely an enabler for our transformation, and an accelerator, but we are also watchful: we don't want to be caught napping on this one … Data start-ups can be [an] intermediary between us and the customer and we need to make sure that we are staying ahead of the curve, but also learning from them," Kapoor said. "Knowledge assistants" can help with another problem: the over- and under-ordering of goods, according to Igor Rikalo, president and chief operating officer of software company o9 Solutions, which helps firms centralize and analyze data. That's often the result of a lack of communication between internal teams, with sales departments placing orders separately from those who work in supply chain management, he said. "It's a sub-optimal result, because sales [teams] might be investing into promoting the items that a supply chain is constrained on, so you're wasting money," Rikalo told CNBC by phone. "We see a world where hopefully, every one of us will have what we call knowledge assistants that are powered by these AI, by these large language models," he added, with such assistants being able to give insights into why a supplier has delivered less than what was ordered, for example. Answering those questions usually requires input from sales, marketing, supply chain and procurement teams, but generative AI might be able to examine large data sets to provide answers. It may also mean fewer people are needed in integrated business planning teams, which oversee long-term goals, revenue projections and forecast demand for particular products. "A 1,000-person planning function today can probably be transformed to 100 people or less," Rikalo said. — CNBC's Cheyenne DeVon and Jonathan Vanian contributed to this report.
AI Startups
Microsoft’s Cloud Recovery Is Outshining Rivals Amazon, Google The software maker’s business is likely getting a boost from an alliance with startup OpenAI. (Bloomberg) -- In the race to rebound from a two-year slowdown in spending on cloud computing, Microsoft Corp. is pulling ahead of its chief rivals, Amazon.com Inc. and Google. Microsoft’s Azure cloud business posted 29% sales growth in the September quarter, faster than analysts estimated, in part because of corporate customers’ interest in new artificial intelligence products. In its own report the same day earlier this week, for the same period, Google parent Alphabet Inc. struck a more subdued tone, saying that cloud clients are still in cost-cutting mode. And on Thursday, Amazon.com Inc.’s cloud-revenue picture was mixed, with sales slightly less than projected and operating income ahead of analysts’ predictions. Microsoft — which is No. 2 in the market, behind Amazon but ahead of Google — noted that it took cloud market share from competitors, but didn’t say which. After a spree of investment during the pandemic, businesses spent much of 2022 and 2023 in what the biggest software companies euphemistically called “optimization” — making better use of stuff they’re already paying for and looking for places where they can save. That has meant the biggest cloud providers are vying to land big contracts in a more challenging environment, leading them to look for ways to entice businesses, including by offering to incorporate the latest artificial intelligence-based products that promise to boost efficiency. “The world is going to be driven by workloads accelerating into the cloud,” said Stefan Slowinski, an analyst at BNP Paribas’s Exane. “CEOs make that decision based on gut, and right now they’re still being cautious.” New interest in developing and running AI applications has almost certainly influenced recent corporate decisions about which cloud partner to sign with. Microsoft offers ways to work with a number of AI tools, and has gained a reputation as a leader in the burgeoning space because of its partnership with OpenAI, which makes the popular ChatGPT program for generating content. That alliance helped fuel new customer growth, Microsoft said — a service called Azure OpenAI, which lets Microsoft’s cloud customers use the startup’s technology for their own applications, attracted more than 18,000 customers, up from 11,000. Microsoft has also invested $13 billion in OpenAI and serves as its cloud provider, so that firm’s increasing need for computing power also benefits Microsoft.Amazon, for its part, is trying to appeal to clients with a menu of different options, as well as a partnership with AI developer Anthropic, which makes the Claude chatbot. Alphabet Inc.-owned Google says it is a popular choice among big companies and AI startups alike, with CEO Sundar Pichai saying on a conference call that more than 60% of the world’s 1,000 largest companies are Google cloud customers, as well as “more than half of all funded generative AI startups.” Amazon Chief Executive Officer Andy Jassy told analysts on the e-commerce giant’s conference call Thursday that generative AI represents an opportunity worth “tens of billions” of dollars for Amazon Web Services, the company’s cloud unit, which he ran in his prior role. AWS revenue growth came in at 12%, about the same pace as the previous quarter. But operating income was about $1.3 billion ahead of analyst expectations, pushing operating margin for the cloud unit — which tends to account for all of the company’s profit — to the highest level since the first quarter of 2022. On a conference call with reporters, Amazon Chief Financial Officer Brian Olsavsky said that some companies are still working on “optimization,” but the pace “has started to slow down.” Some businesses were making new commitments to AWS, or resuming projects that had been paused earlier, he said. On a later call with analysts, the company said several new deals with customers were signed late in the third quarter and won’t show up as revenue until the current period. The comments helped push Amazon shares higher in late trading Thursday.At Microsoft, the 29% jump in sales from Azure cloud services outpaced the 26% growth for in the previous quarter, sending the company’s stock higher in New York trading the next day. CFO Amy Hood said that while the “optimization trends” were similar to the previous quarter’s, consumption — a measure of the amount of Azure services used — was better than expected, and the company saw growth in the number of contracts worth more than $10 million for both Azure and Office cloud services. Google’s cloud unit, which includes both of the types of infrastructure services that are part of AWS and Azure and adds in results from productivity software, saw growth rise 22% in the quarter from a year earlier. That’s a deceleration from the previous quarter’s growth. Sales for the unit were $8.4 billion, falling short of Wall Street projections of $8.6 billion. Profit also came in short of estimates. Alphabet President Ruth Porat said in an interview that the unit’s sales had been affected by some customers’ belt-tightening. The shares fell 9.5% the following day, their biggest decline since March 2020. “Cloud computing is a much lumpier business than advertising, and one where Google is facing stiff competition,” said Max Willens of Insider Intelligence. “While the traction it has among AI startups may bear fruit in the long run, it is not currently helping Google Cloud enough to satisfy investors.” More stories like this are available on bloomberg.com ©2023 Bloomberg L.P.
AI Startups
Investing in artificial intelligence (AI) startups is the latest bandwagon VCs are piling onto. But as last year’s crypto experts quickly work to rebrand as AI experts, they’ll have to compete with the VCs who have been investing in the category all along. Seattle-based Ascend is one of them. Firm founder and solo GP Kirby Winfield has been involved in the AI sector as either a founder or investor since the 90s. Now that seemingly every VC has turned their attention to the category he told TechCrunch he’s glad he’s been in it for so long and therefore will not make some of the mistakes newer entrants will. “It’s so easy to throw together a vertical AI demo,” Winfield told TechCrunch. “You see a lot of folks who would have been decent SaaS founders, trying to be decent AI founders. I would say it is pretty easy to identify who has actual chops from a technical perspective. We are really fortunate to be investing at this time regardless of the hype.” Ascend is announcing the close of $25 million for its second fund. Winfield said the firm will invest in pre-seed AI and machine learning (ML) companies largely based in the Pacific Northwest. This continues the firm’s strategy from its first fund which raised $15 million and started deploying in 2019. Winfield isn’t fully avoiding the hype though. The firm hasn’t always only focused on AI and ML. Ascend’s Fund I also invested in brands and marketplaces too, areas it is stepping away from with this latest batch of capital. The fund was raised 100% from individuals, Winfield said, and consists of two vehicles: one that raised $22.5 million and another that raised $2.5 million from existing portfolio company founders. Winfield said he was able to raise $21 million in the first month the fund was open before letting it sit open for almost the entirety of 2022 hoping to see some additional funds mosey in, a process he also ran for Fund I. “I would say that money trickled in a lot more strongly in 2019 when I raised Fund I,” Winfield said. “I couldn’t really think of a good reason to close the fund. We got another $3 million in the door by leaving it open. I don’t overthink these things too much.” Winfield added that many of the Fund I LPs were happy to reup now that the industry’s notion around investing in AI has changed dramatically since Winfield raised Fund I. But as every startup is rewriting their marketing to call themselves an AI company, Winfield said he is intentional about the kind of companies he backs. He said he isn’t looking for AI companies necessarily but instead is focused on startups that will utilize the tech to find a better solution. “AI doesn’t matter,” he said. “What matters is the solution you are selling to your customers. Many founders and investors are getting wrapped around the axle and putting the technology and solution before the benefit.” Companies from Fund I that fit that bill according to Winfield include Xembly, which uses AI to create a virtual chief of staff, Fabric, which operates as a “headless” e-commerce platform, and WhyLabs, an AI observability platform. This fund also doubles down on the firm’s focus on companies in the Pacific Northwest, with a particular focus on Seattle. While that might sound limiting for folks who focus on Silicon Valley, Winfield disagrees, citing the talent that comes out of Microsoft and Amazon and the companies that are incubated at the nonprofit Allen Institute for Artificial Intelligence, where Winfield has been the investor in residence for nearly six years. But no matter his experience and intention, it may still be hard for Winfield to compete with the rapidly growing flock of AI investors. Plus, even if he brings a beneficial background, he doesn’t come with the same deep pockets some of his fellow VCs have — Bessemer just announced they are putting $1 billion of their already raised capital toward the strategy. Plus, we all know how aggressive VCs chasing hype can be. Xembly founder and CEO Pete Christothoulou said that despite the market’s noise, companies should look to work with VCs like Winfield because while everyone is looking to put money to work in AI, not all support is created equal. “An AI fund without the right underpinnings is just money,” Christothoulou said. “The money is nice but you want the relationships that the investor can bring. If they can baseline their advice and real technical guidance, that’s where it starts getting really interesting and [Winfield] has a big opportunity.”
AI Startups
It seems like it’s the best of times for founders thinking about launching an AI startup, especially with OpenAI releasing ChatGPT to the masses, as it has the potential to really put AI front and center in business and perhaps everything we do technologically. Who wouldn’t want to launch a startup right now with the energy and hype surrounding the industry? But it also could be the worst of times for founders thinking about launching an AI startup, especially one that can grow and be defensible against incumbents in a fast-changing environment. And that’s a real problem for companies thinking about this area: AI is evolving so rapidly that your idea could be obsolete before it’s even off the ground. How do you come up with a startup idea that can endure in such a challenging and rapidly evolving landscape? The bottom line is that the same principles that apply to previously successful startups apply here, too. It just may be a bit harder this time because of how quickly everything is moving. A bunch of successful founders and entrepreneurs spoke last week at the Imagination in Action conference at MIT. Their advice could help founders understand what they need to do to be successful and take advantage of this technological leap. What’s working? CB Insights compiled data from 2021 and 2022 to understand where VC investment money has been going when it comes to generative AI startups. Given the recent hype around this area, it’s reasonable to think that the volume of investment will increase, and perhaps the allocation will be different, but this is what we have for now.
AI Startups
The creation of a new market is like the start of a long race. Competitors jockey for position as spectators excitedly clamour. Then, like races, markets enter a calmer second phase. The field orders itself into leaders and laggards. The crowds thin. In the contest to dominate the future of artificial intelligence, OpenAI, a startup backed by Microsoft, established an early lead by launching ChatGPT last November. The app reached 100m users faster than any before it. Rivals scrambled. Google and its corporate parent, Alphabet, rushed the release of a rival chatbot, Bard. So did startups like Anthropic. Venture capitalists poured over $40bn into AI firms in the first half of 2023, nearly a quarter of all venture dollars this year. Then the frenzy died down. Public interest in AI peaked a couple of months ago, according to data from Google searches. Unique monthly visits to ChatGPT’s website have declined from 210m in May to 180m now (see chart). The emerging order still sees OpenAI ahead technologically. Its latest AI model, GPT-4, is beating others on a variety of benchmarks (such as an ability to answer reading and maths questions). In head-to-head comparisons, it ranks roughly as far ahead of the current runner-up, Anthropic’s Claude 2, as the world’s top chess player does against his closest rival—a decent lead, even if not insurmountable. More important, OpenAI is beginning to make real money. According to The Information, an online technology publication, it is earning revenues at an annualised rate of $1bn, compared with a trifling $28m in the year before ChatGPT’s launch. Can OpenAI translate its early edge into an enduring advantage, and join the ranks of big tech? To do so it must avoid the fate of erstwhile tech pioneers, from Netscape to Myspace, which were overtaken by rivals that learnt from their early successes and stumbles. And as it is a first mover, the decisions it takes will also say much about the broader direction of a nascent industry. OpenAI is a curious firm. It was founded in 2015 by a clutch of entrepreneurs including Sam Altman, its current boss, and Elon Musk, Tesla’s technophilic chief executive, as a non-profit venture. Its aim was to build artificial general intelligence (AGI), which would equal or surpass human capacity in all types of intellectual tasks. The pursuit of something so outlandish meant that it had its pick of the world’s most ambitious AI technologists. While working on an AI that could master a video game called “Dota”, they alighted on a simple approach that involved harnessing oodles of computing power, says an early employee who has since left. When in 2017 researchers at Google published a paper describing a revolutionary machine-learning technique they christened the “transformer”, OpenAI’s boffins realised that they could scale it up by combining untold quantities of data scraped from the internet with processing oomph. The result was the general-purpose transformer, or GPT for short. Obtaining the necessary resources required OpenAI to employ some engineering of the financial variety. In 2019 it created a “capped-profit company” within its non-profit structure. Initially, investors in this business could make 100 times their initial investment—but no more. Rather than distribute equity, the firm distributes claims on a share of future profits that come without ownership rights (“profit-participation units”). What is more, OpenAI says it may reinvest all profits until the board decides that OpenAI’s goal of achieving AGI has been reached. OpenAI stresses that it is a “high-risk investment” and should be viewed as more akin to a “donation”. “We’re not for everybody,” says Brad Lightcap, OpenAI’s chief operating officer and its financial guru. Maybe not, but with the exception of Mr Musk, who pulled out in 2018 and is now building his own AI model, just about everybody seems to want a piece of OpenAI regardless. Investors appear confident that they can achieve venture-scale returns if the firm keeps growing. In order to remain attractive to investors, the company itself has loosened the profit cap and switched to one based on the annual rate of return (though it will not confirm what the maximum rate is). Academic debates about the meaning of AGI aside, the profit units themselves can be sold on the market just like standard equities. The firm has already offered several opportunities for early employees to sell their units. SoftBank, a risk-addled tech-investment house from Japan, is the latest to be seeking to place a big bet on OpenAI. The startup has so far raised a total of around $14bn. Most of it, perhaps $13bn, has come from Microsoft, whose Azure cloud division is also furnishing OpenAI with the computing power it needs. In exchange, the software titan will receive the lion’s share of OpenAI’s profits—if these are ever handed over. More important in the short term, it gets to license OpenAI’s technology and offer this to its own corporate customers, which include most of the world’s largest companies. It is just as well that OpenAI is attracting deep-pocketed backers. For the firm needs an awful lot of capital to procure the data and computing power necessary to keep creating ever more intelligent models. Mr Altman has said that OpenAI could well end up being “the most capital-intensive startup in Silicon Valley history”. OpenAI’s most recent model, GPT-4, is estimated to have cost around $100m to train, several times more than GPT-3. For the time being, investors appear happy to pour more money into the business. But they eventually expect a return. And for its part Openai has realised that, if it is to achieve its mission, it must become like any other fledgling business and think hard about its costs and its revenues. GPT-4 already exhibits a degree of cost-consciousness. For example, notes Dylan Patel of SemiAnalysis, a research firm, it was not a single giant model but a mixture of 16 smaller models. That makes it more difficult—and so costlier—to build than a monolithic model. But it is then cheaper to actually use the model once it has been trained. because not all the smaller models need be used to answer questions. Cost is also a big reason why OpenAI is not training its next big model, GPT-5. Instead, say sources familiar with the firm, it is building GPT-4.5, which would have “similar quality” to GPT-4 but cost “a lot less to run”. But it is on the revenue-generating side of business that OpenAI is most transformed, and where it has been most energetic of late. AI can create a lot of value long before AGI brains are as versatile as human ones, says Mr Lightcap. OpenAI’s models are generalist, trained on a vast amount of data and capable of doing a variety of tasks. The ChatGPT craze has made OpenAI the default option for consumers, developers and businesses keen to embrace the technology. Despite the recent dip, ChatGPT still receives 60% of traffic to the top 50 generative-AI websites, according to a study by Andreessen Horowitz, a venture-capital (VC) firm which has invested in OpenAI (see chart). Yet OpenAI is no longer only—or even primarily—about ChatGPT. It is increasingly becoming a business-to-business platform. It is creating bespoke products of its own for big corporate customers, which include Morgan Stanley, an investment bank. It also offers tools for developers to build products using its models; on November 6th it is expected to unveil new ones at its first developer conference. And it has a $175m pot to invest in smaller AI startups building applications on top of its platform, which at once promotes its models and allows it to capture value if the application-builders strike gold. To further spread its technology, it is handing out perks to AI firms at Y Combinator, a Silicon Valley startup nursery that Mr Altman used to lead. John Luttig of Founders Fund (a VC firm which also has a stake in OpenAI), thinks that this vast and diverse distribution may be even more important than any technical advantage. Being the first mover certainly plays in OpenAI’s favour. GPT-like models’ high fixed costs erect high barriers to entry for competitors. That in turn may make it easier for OpenAI to lock in corporate customers. If they are to share internal company data in order to fine-tune the model to their needs, many clients may prefer not to do so more than once—for cyber-security reasons, or simply because it is costly to move data from one AI provider to another, as it already is between computing clouds. Teaching big models to think also requires lots of tacit engineering know-how, from recognising high-quality data to knowing the tricks to quickly debug the source code. Mr Altman has speculated that fewer than 50 people in the world are at the true model-training frontier. A lot of them work for OpenAI. These are all real advantages. But they do not guarantee OpenAI’s continued dominance. For one thing, the sort of network effects where scale begets more scale, which have helped turn Alphabet, Amazon and Meta into quasi-monopolists in search, e-commerce and social networking, respectively, have yet to materialise. Despite its vast number of users, GPT-4 is hardly better today than it was six months ago. Although further tuning with user data has made it less likely to go off the rails, its overall performance has changed in unpredictable ways, in some cases for the worse. Being a first mover in model-building may also bring some disadvantages. The biggest cost for modellers is not training but experimentation. Plenty of ideas went nowhere before the one that worked got to the training stage. That is why OpenAI is estimated to have lost $500m last year, even though GPT-4 cost one-fifth as much to train. News of ideas that do not pay off tends to spread quickly throughout AI world. This helps OpenAI’s competitors avoid going down costly blind alleys. As for customers, many are trying to reduce their dependence on OpenAI, fearful of being locked into its products and thus at its mercy. Anthropic, which was founded by defectors from OpenAI, has already become a popular second choice for many AI startups. Soon businesses may have more cutting-edge alternatives. Google is building Gemini, a model believed to be more powerful than GPT-4. Even Microsoft is, despite its partnership with OpenAI, something of a competitor. It has access to GPT-4’s black box, as well as a vast sales force with long-standing ties to the world’s biggest corporate IT departments. This array of choices diminishes OpenAI’s pricing power. It is also forcing Mr Altman’s firm to keep training better models if it wants to stay ahead. The fact that OpenAI’s models are a black box also reduces its appeal to some potential users, including large businesses concerned about data privacy. They may prefer more transparent “open-source” models like Meta’s LLaMA 2. Sophisticated software firms, meanwhile, may want to build their own model from scratch, in order to exercise full control over its behavour. Others are moving away from generality—the ability to do many things rather than just one thing—by building cheaper models that are trained on narrower sets of data, or to do a specific task. A startup called Replit has trained one narrowly to write computer programs. It sits atop Databricks, an AI cloud platform which counts Nvidia, a $1trn maker of specialist AI semiconductors, among its investors. Another called Character AI has designed a model that lets people create virtual personalities based on real or imagined characters that can then converse with other users. It is the second-most popular AI app behind ChatGPT. The core question, notes Kevin Kwok, a venture capitalist (who is not a backer of OpenAI), is how much value is derived from a model’s generality. If not much, then the industry may be dominated by many specialist firms, like Replit or Character AI. If a lot, then big models such as those of OpenAI or Google may come out on top. Mike Speiser of Sutter Hill Ventures (another non-OpenAI backer) suspects that the market will end up containing a handful of large generalist models, with a long tail of task-specific models. If AI turns out to be all it is cracked up to be, being an oligopolist could still earn OpenAI a pretty penny. And if its backers really do see any of that penny only after the company has created a human-like thinking machine, then all bets are off. © 2023 The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on https://www.economist.com/business/2023/09/18/could-openai-be-the-next-tech-giant
AI Startups
(Bloomberg) -- Alphabet Inc.’s Google is adding artificial intelligence tools from companies including Meta Platforms Inc. and Anthropic to its cloud platform, weaving more generative AI into its products and positioning itself as a one-stop shop for cloud customers seeking to tap into the technology. Most Read from Bloomberg Google’s cloud clients will be able to access Meta’s Llama 2 large language model, as well as AI startup Anthropic’s Claude 2 chatbot, to customize with enterprise data for their own apps and services. The move announced Tuesday at Google’s Next ’23 event in San Francisco, is part of the company’s effort to position its platform as one where customers have the freedom to choose an AI model that best meets their needs, whether from the company itself or one of its partners. More than 100 powerful AI models and tools are now available to Google Cloud clients, the company said. The company also announced wider availability of its Duet AI product for customers of its Workspace productivity suite, with access for the public to follow later this year. Users can tap a generative AI helper, which responds to prompts to help create content on apps like Google Docs, Sheets and Slides. Duet AI, introduced in May, can take notes during video calls, send meeting summaries and translate captions in 18 languages, Google said. Through a new feature called “attend for me,” users can dispatch the tool to join meetings on their behalf, deliver messages and create a recap of the event.Google also said it has new partnerships with companies such as GE Appliances and Fox Sports, which will allow customers to take advantage of AI, for example, to create custom recipes or see a playback of a sports event from Fox’s broadcast catalog. And Google announced a deepened partnership with chipmaker Nvidia Corp. Google said its cloud offerings will expand to enable more use of Nvidia’s chips and products designed to speed up the training of large language models. Google touted its access to Nvidia’s H100 accelerators — a prized commodity during the AI frenzy — and said it will be letting customers use the latest version of the chipmaker’s so-called supercomputer. Read More: How Large Language Models Work, Making Chatbots Lucid With the announcements, Google is signaling that it’s more willing than ever to work with other companies in artificial intelligence as it aims to gain market share from its competitors. Google has trumpeted its products and services as the finest options in AI, emphasizing its years of experience in the field. While the company still trails Amazon.com Inc. and Microsoft Corp. in the cloud computing market, Google said the AI additions to its cloud catalog give the platform the widest variety of models to choose from. “We are in an entirely new era of digital transformation, fueled by gen AI,” Thomas Kurian, chief executive officer of Google Cloud, said in a blog post timed to the announcements. “This technology is already improving how businesses operate and how humans interact with one another.” Beyond adding new AI models to its cloud catalog, Google said it was making improvements to its own AI models and tools. PaLM 2, Google’s large language model that it announced at its annual developers conference in May, is now available in 38 languages and can better analyze longer documents like research papers, books and legal briefs, the company said. Meanwhile, Google’s AI model that helps with coding, called Codey, has been updated to enhance performance. Imagen, the company’s text-to-image app, will feature better-quality images and newer capabilities like style tuning, to help cloud customers better align their images to brand guidelines, the company said. Amid growing concerns about how companies should deal with the wave of AI-generated content, Google Cloud announced a feature that will embed a watermark to indicate that images were created by artificial intelligence. The feature, which is powered by technology from AI lab Google DeepMind, will include the watermark at the pixel level, meaning it will be hard to alter, the company said. Google also touted its notable cloud customers and partners in its announcements Tuesday. The company said that more than half of venture-backed generative AI startups pay for Google’s cloud computing platform, including Anthropic, Character.ai and Cohere. The company’s industry-specific models are gaining traction too, it said. Its Med-PaLM 2 model, an AI model adapted for medical settings, now boasts partnerships with health care companies such as Bayer Pharmaceuticals, HCA Healthcare Inc. and Meditech, Google said. Its Sec-PaLM 2 model, designed for cybersecurity, is being used by providers like Broadcom Inc. and Tenable, Google added. The cloud unit also announced a commercial service based on Ampere Computing's new AmpereOne chip that adds weight to the startup's assertion that it can become a rival to chipmakers Intel Corp. and Advanced Micro Devices Inc., the dominant providers of processors in data centers. Ampere, backed by Oracle Corp., argues that its chips are more power efficient than rival offerings and better suited to the kind of high-throughput computing that cloud providers need. --With assistance from Ian King. (Updates with Nvidia partnership in fifth paragraph.) Most Read from Bloomberg Businessweek ©2023 Bloomberg L.P.
AI Startups
Tech veteran Chris Messina and writer/podcaster Brian McCullough are launching a new $15 million fund aimed at AI startups. The backers include tech luminary investors and founders such as Marc Andreessen, Chris Dixon, and Dennis Crowley, who are all investing personally. McCullough has been running his Ride Home Fund for two years now, a fund which grew out of his hosting the Techmeme Ride Home podcast. Messina, formerly of Google, Uber, and a previous AI startup founder, is also famed as being the the inventor of the hashtag. Handily, he is also the number 1 product hunter on Product Hunt, which renders his ability to boost products fairly unique. Over email, McCullough told me: “Not only has our inbound and deal flow switched to 90% AI, Chris was like: “The people coming off the bench right now are people that didn’t bite at crypto, didn’t bite at web3, but they’re going now and they’re people you would invest in whatever they’re doing next, by definition.” He adds that given the layoffs at all the tech platforms, university graduates aren’t getting job offers from Big Tech companies any more, so they are punting for quick $100k to $500k checks to “just go with this AI idea” he says. “It’s the best talent of the previous generation and the best talent of this generation pulling the trigger at the same time. It’s almost a perfect storm situation,” he added. The fund plans to be the first check in at pre-seed or seed and deploy all its capital in the next 12-18 months. The 506c fund and fundraising will close on October 31. As a 506c fund, any accredited investors can become LPs, so long as they can stump up the minimum $100,000 required. Messina is finally jumping to the VC side of the table after being a prolific angel investor. Over an email interview, I asked Messina how investors are going to avoid the pitfall of the current AI hype cycle, which may well pull them into over-valued companies? “We believe this is a generational reset in how people use technology. Considering my experience building the social web, I see a similar shift now as I did back in 2005,” he said. “Generative AI changes how software will be built and used — beyond what the iPhone started 16 years ago. It lead to generational software companies like Airbnb and Uber (where I worked), which applied new technological capabilities (GPS, payments, notifications) to alter how people understood and coordinated transportation and housing.” “We’re similarly bullish that companies that we invest in now will have a similar lead in redefining calcified industries. We’re investing in the productization of generative AI by founders who come equipped with tactical knowledge of vertical spaces like compliance or do-it-yourself repairs (yes, we’re writing checks already),” he added. However, given that the larger players in AI – the foundational platforms – are as big as they are, what will make it possible for small startups to compete against these leviathans? “The opportunities we’re investing in are too small and bespoke for the big incumbents to focus on,” he told me. “They’re creating general purpose platforms that can’t, by definition, be too finely trained on any particular application space. Furthermore, designing the best experiences with generative AI requires deep knowledge of and empathy with customers from specific industries — both of which big tech companies typically lack (again, something I can speak to having worked at Google). They simply can’t move fast enough with sufficient product elegance to win the markets we believe are up for grabs,” he said. I also asked him what hurdles AI investors face, given there are already things such as the blocking of AI crawler bots happening. “We get people asking us all the time: where is the moat? Where will the value accrue? There are plenty of risks, but blocking AI crawler bots isn’t the top of that list. We think the people waiting for clear sailing are insane and will be kicking themselves in 18 months,” he added. Given Messina’s quite pivotal role at so many successful tech companies in the past, perhaps this is the signal that the AI startup boom really has now kicked off?
AI Startups
AI Looks Like a Bubble Investors need to take a cold shower Sponsored By: Vanta This essay is brought to you by Vanta, the leading Trust Management Platform. Need SOC-2 Type I? They can help you get it in just two weeks. Bubbles are when people buy too much dumb stuff because they think there is someone dumber than them they can sell said stuff to. Take the crypto bubble in 2017. When Bitcoin mooned, a variety of companies pivoted to the blockchain and saw huge gains in their stock price. Those companies included a furniture firm, juice makers, a gold miner, and my personal favorite—a sports bra manufacturer. The most telling example was Long Island Iced Tea, which had a market capitalization of $23.8M. It announced that it was changing its name to “Long Blockchain Corp” and saw its stock boom by 183% in one day. In pre-market reading it had risen by over 500%. Perhaps the biggest sign of a technology bubble is mania-driven stock price swings. Anyway, here’s a headline: “BuzzFeed CEO Jonah Peretti wrote in a memo to staff that the company would rely on ChatGPT creator Open AI to enhance its quizzes and personalize some content for audiences, the Journal reported.” Buzzfeed’s stock boomed ~260% within two days to a ~$464M valuation. Simultaneously, there have been multiple reports in the private markets of AI companies raising at billion-dollar valuations—while having zero revenue. Some snarky hedge-fund analyst probably thought that the sports-bra-ification of AI would happen, and they could turn a quick profit by selling stock to the rubes who bought anything with the word “crypto” slapped on it. Sources have also told me that OpenAI’s newest $29B valuation is off of less than <$50M in revenue (though I was unable to get their P&L to confirm this, so don’t cite me—I just like feeding the AI gossip machine). It appears that the newest bubble is upon us—and it is AI. However, there is one really, really weird anomaly with AI valuations. The only pure-play, wholly AI-focused, B2B software company, which is run by an experienced management team, is down 83% over the last two years. From its all-time high of $161 a share, it’s currently trading around $23.10. I’m talking about c3.ai, which describes itself as an “enterprise AI” company. This company is a case study for why investing in AI will be harder than people think. Do you need a SOC 2 ASAP? Vanta, the leading Trust Management Platform, can help you get SOC 2 in two weeks. Get SOC 2 and close more deals, hit your revenue targets, and create a solid foundation for security best practices. In yet another signal of an AI bubble, the company has had a resurgence since December 28, despite no material change in its earnings forecast. Shoot, on January 31 it announced a “generative AI” product, and its stock shot up 21%. The current moment is the boy who cried bubble’s piece de resistance: bubble bubble bubble bubble bubble. AI is probably the most exciting tech paradigm since the personal computer. But at the risk of sounding like the guy in Times Square yelling about the end of the world, I feel the need to scream, “TECHNOLOGICAL INNOVATION DOES NOT EQUAL INVESTMENT OPPORTUNITIES.” New tech allows for new opportunities, but that doesn’t mean returns will be distributed equally. A market is still subject to market dynamics, regardless of the level of science involved. AI will change the world, it will make us question what it means to be alive, and there is a chance it will make us a multi-planetary species. But I’m not convinced that just being a company that sells AI will deliver judicious year-over-year returns. To illustrate my doubts, we need to look more deeply at C3’s struggles and why they’ll presage what is to come for so many of these AI startups. Once we finish (off) C3, we’ll review what AI opportunities make sense for high returns. TL;DR - Companies are calling themselves AI companies right now and reaping stock price rewards. However, it's important to understand what it means to be an AI company—only certain kinds of those companies will have long-term sustainable advantages. - C3 has been struggling for a while, and now that it's branded itself an AI company, its stock is going through the roof. But nothing is materially different about the company. Putting C3 in context In the B2B software world, the word “AI” is applied to, like, everything now. Marketers appear to be compensated commensurate with the level of confusion they cause with their nomenclature. Internally at Every, we describe AI as marketing nitro. You slap those two bad-boy letters onto anything, and it takes off on Twitter like wildfire. If we're going to talk about AI companies, we need to get into the specifics of what we mean. To do that, I've broken the five steps in the value chain into the following: - Compute: The chips or server infrastructure required to run AI models - Data: A data set that a model is trained on - Foundational model: The compute and data will be mathematically combined into a broadly applicable use case - Fine-tune: The big foundational model, if not sufficient for a use case, will be tuned for a specific scenario - End user access point: The model will be deployed in an application There is a lot of overlap between these categories, but any time a company is selling something with the AI label, there is a chance it’s offering one or all of these features. A bubble occurs when people forget that there will be clear winners and losers. Investors get such strong FOMO that they don’t want to miss out on the future and wildly bid up asset prices. C3 helps enterprises train and use AI models in endpoint applications. It helps companies do some form of predictive analytics on top of their own data—think of questions like,“How likely is this sale prospect to close?” or, “When will this truck need maintenance?” It’s built some of these applications, while others were built by customers for internal use. Image from C3 S1. C3 does so for some of the largest customers on the planet, like the U.S. Department of Defense and Raytheon. The problem of managing and deploying AI models at the scale of these organizations is real, but everything is not sunshine and roses. C3’s financial performance A company that is underperforming this poorly deserves to be viewed critically. The financial performance is fantastically terrible. A few benchmarks I pulled from Public Comps on February 1 make the case. Let’s start with a flabbergasting stat. The company currently operates at -115% operating income percentage, putting it at the bottom of the pack for this set of high-growth SaaS comparables. It’s also near the bottom of any B2B SaaS company on the market. (I recognize this graph is unreadable but I wanted to give you a sense of how bad this performance is.) It spends 82% (82!) of current quarter revenue on R&D. It spends 72% of its current quarter revenue on sales and marketing. The company is also at the bottom with revenue growth of 7%. In its defense, there are some slightly extenuating circumstances. Much of its current expenses are stock-based compensation. Revenue growth flatlined because it switched to usage-based pricing, which will push revenue out. However, you don’t make that move if the previous model is working. Because the company serves exclusively large customers, trying to sign large contracts and get access to the biggest data pipelines, its sales motion is long and slow. All in all, the company appears to be in some trouble. The underlying forces powering this struggle should make your hair stand on end: - The company does not own and did not invent the AI techniques that makes this AI magic possible. Instead, it relies on advances from labs like OpenAI or Google’s Deepmind. - It does not own or store the sources of the data that make the AI customizable to individual customers. Instead, cloud providers like AWS or Azure host all the data (and also offer competing AI products). - It sells exclusively to very large customers. C3 has positioned itself as the end-all solution for “enterprise AI.” That unfortunately makes it responsible for the end output even though it doesn’t control the most important parts of the process. By not owning the data, it’s reliant on internal partners to heavily invest in cleaning and preparing the data. By not owning the AI technology, there will always be the temptation for companies to use readily accessible foundational models (e.g., GPT3 APIs) or lower their costs by using an open-source knockoff. The largest companies have so much data that finding a quick win for the platform is challenging. Let me make this a little less theoretical. C3’s problems in practice C3’s most recent press release stated: “C3 Generative AI for Enterprise Search provides enterprise users with a transformative user experience using a natural language interface to rapidly locate, retrieve, and present all relevant data across the entire corpus of an enterprise’s information systems.” Ugh. This is B2B SaaS marketing at its finest. Sounds good, means nothing. If you’ve ever worked as a data analyst, you know that math isn’t the hard part of the job. Most of the work of data scientists is formatting data so the math can work correctly. Data and analytics technicians are partially paid for their technical wizardry, but a lot of their value derives from the fact that they know where to look for data and what’s wrong with it. Your brain is a repository for tribal heuristics that are pre-baked into how the data was inputted: abbreviations, data being entered into the wrong fields, when labels changed and who followed protocol, all of the creatively incorrect ways that salespeople use a CRM, its troughs of intellectual gunk that resides exclusively in your neurons. A data analyst is paid to sort through all of that. C3 wants to replace the work of an executive emailing their data team a question and an answer comes back a few days later. Its solution, hypothetically, is that you ask AI that same question, and an answer comes back immediately. This sounds good, and the AI chatbot part is even technically feasible. My colleague Dan has built a variety of chatbots that can search through his favorite podcast, and he just published instructions on how to build a chatbot based on any book. His success has only been possible because the reference data is relatively simple and accessible. Doing that across petabytes of unclean, unstructured data is something else entirely. The key for this so-called product is that the hard part isn’t the AI. It’s doing the change management. Getting buy-in from the customer’s executive team, integrating it into existing systems, and rolling it out to the company will take months, if not years. When I chatted with employees at the company, they all mentioned some version of, “The product requires the company to invest a ton of time and resources to get value out of it.” That’s OK, but hard to sell to clients worried about a recession. It especially challenging when there are options like GPT-3 easily available (more on that in a sec). C3’s press release goes on to say: “The C3 Generative AI Product Suite integrates the latest AI capabilities from organizations such as Open AI, Google, and academia, and the most advanced models, such as ChatGPT and GPT-3 into C3 AI’s enterprise AI products.” Did you catch that? The company hasn’t invented any new AI models! It’s just using someone else’s. I’ve previously written about how AI’s roots are in academia, so most advances are shared rather than patented. Open-source model alternatives pop up, turning most AI SaaS companies into consulting shops that manage data pipelines into open-source models. In summary, C3’s stock price jumped because it announced a product it didn’t invent—an analytics product with a chatbot slapped on top. C3’s other products are somewhat similar, if less ambitious, versions of this generative AI tool. It has applications dedicated to specific use cases, such as inventory management, lead scoring, etc. Again, this is not particularly unique or difficult to replicate. The value is in storing, understanding, and piping in the data. C3 has no special right to access the databases where the information is stored. If I were in its shoes, I would be deeply worried about AWS and Azure moving in on its territory. All major cloud providers already offer AI products, but Microsoft has an exclusive partnership with the leading research lab that C3 is citing. How does this relate to Buzzfeed and the future of AI companies? It comes down to buyer personas. Endpoints versus APIs and generative AI Let’s say you are the new chief technology officer of a B2B SaaS company. Your boss is reading about how AI is the future of technology. He comes to you and says, “Go do AI.” You can try to build a custom AI model that’s trained on all your data, setting up a platform as a service (e.g. ,C3), or, alternatively, you could use a foundational model that is accessible via API. You already know what happens. This is the co-founder of Intercom, a customer service SaaS company: 🚨ɴᴇᴡ ꜰᴇᴀᴛᴜʀᴇ ᴀʟᴇʀᴛ🚨 The day ChatGPT launched our ML team got straight to work, asking how it could make better at Customer Service. Today we're announcing our first wave of features... This feature has to make customer service reps at least 20% faster. Sure, GPT-3 isn't fine-tuned to that use case, but the foundational model is powerful enough to overcome that deficiency. If Intercom started to spend significantly on these tools and OpenAI bills got out of control, there are open-source alternatives they could utilize, and other providers will inevitably pop up. There is nothing secret about OpenAI's methods, so they are replicable in the long term. This all happened in a few months with a team of developers and probably cost them less than $500K to build. In contrast, C3’s average contract size was $19M in 2021. Generative AI companies are so exciting because they allow for instant magic. Building using generative AI APIs allows that CTO to send a flashy demo to his CEO and say, “Hey, I’m great at my job, please don’t fire me.” CTO hires are frequently judged on their ability to purchase and implement the correct software for a company—easy and cheap is a significant competitive advantage. Much of this current bubble is people throwing capital at anything with AI involved. This is particularly true in the use cases currently available: text and image generation. In my first piece on AI five months ago, I argued that AI’s longest-term impact is that it would bring the cost of digital good creation close to zero, forcing companies to compete on distribution efficiency. Buzzfeed is the perfect candidate for where AI is at right now. Its quizzes are not complex, and the company already pays its writers as little as possible. By using GPT3 to help automate quiz outputs, it can make better content faster. Do I believe that implementing AI in quiz generation makes this stock worthy of doubling in value overnight? No, never, nadda, zilch. However, in this case, AI makes sense to help. AI bubble dynamics On Twitter AI bros yell that “AI is like electricity.” *Shoots confetti* Or, “It will power a technological revolution.” *Confetti intensifies* Or most dramatically, “AI is going to change the world.” *Confetti emphatically shooting out of all the speaker’s orifices* The issue with the electricity analogy is that electricity was and is a terrible business to be in. Power companies typically don’t do well unless they reach monopoly status. It has been far more lucrative to build things with electricity than slinging the product itself. Investors don’t know where the value from AI will settle, so they're throwing capital at everything. I would argue that there will be four final form factors where value will accrue: - Integrated AI: AI capabilities will be integrated into existing products without dislodging incumbents. Rather than an AI company building a CRM from scratch, it is much more likely that Salesforce incorporates GPT-3. Microsoft has already launched Microsoft products with generativeAI. Everyone else will soon follow. If an AI tool is only improving or replacing an existing button on a productivity app, that AI company will lose. It will require a more comprehensive improvement than merely putting AI on an existing capability. - Infrastructure as a service: Major consolidation will occur at all levels of the value chain besides access points. Cloud providers like AWS, Oracle, and Azure will build their own custom AI workload chips, build networking software, and train in-house models that people can reference. This will also follow existing technology market power dynamics where scale trumps all. There will be room on the side for the Nvidias and Scales of the world, but a fully consolidated offering will have a large amount of appeal for access point developers. C3 will struggle mightily against the cloud providers. - Intelligence layer: Fundamental models will improve fast enough that fine-tuning will have ever-decreasing importance. I’ve already heard stories of AI startups spending years building their models, getting access to an Open AI model, and then ripping the whole thing out because the foundational model was better than what their fine-tuned one could do. Fine-tuning will become less about output quality and more about output cost/speed. Companies like OpenAI will have a hefty business selling API access to their foundational models or partnering with corporations for custom fine-tuned models. The companies that compete on this layer will win or lose based on their ability to attract top talent and have said talent perform extraordinary feats. - Invisible AI: I would argue that the most successful AI company of the last 10 years is TikTok’s parent company, Bytedance. Its product is short-form entertainment videos, with AI completely in the background, doing the intellectually murky task of selecting the next video to play. Everything in its design, from the simplicity of the interface to the length of the content, is built in service of the AI. It is done to ensure that the product experience is magical. Invisible AI is when a company is powered by AI but never even mentions it. It simply uses AI to make something that wasn’t considered possible before but is entirely delightful. The original thesis for this piece was that asset prices are out of whack in both public and private markets. You can see it on the surface with some of these stock price jumps, and it is also clear upon deeper investigation. The prices and capital amounts that I hear going into AI startups are staggering. I liked how one growth investor put it to me: “Investing in AI right now means taking on venture capital risk with growth equity check sizes.” I sincerely believe in the power of this technology. But, if you look at my four forecasted categories of value accrual, you’ll note that almost all of the value goes to incumbents. Microsoft, Amazon, and Google will do quite well selling pure-play AI products because they invent or replicate the underlying techniques while simultaneously storing all of the fine-tuning data. As an added bonus, they already sell their products to every company on the planet, giving them a distribution advantage. The final stage of a bubble is the pop—when suddenly and dramatically, asset values crash back to earth. In this case, I think we are a long way from that occurring. Private technology investors have the most cash they've ever had. Tech giants are hunting for new growth. All of this points to a world where these values go even higher for some time longer. But when it does pop and valuations come back to earth, remember that you read it here first. Thanks to our Sponsor: Vanta Thanks again to our sponsor Vanta, the leading Trust Management Platform.
AI Startups
Building a durable growth funnel doesn’t just scale your business: it also signals to investors that a team can move directionally, which is a major confidence builder. “Acquisition, activation and retention are critical,” writes Jonathan Martinez, TC+’s in-house growth expert. “While referral and monetization are also quite important, they won’t make or break a startup.” He says early-stage teams should push well beyond “vanity metrics” like click-throughs and conversions to develop advanced metrics that are specific to the business they’re building. Full TechCrunch+ articles are only available to members Use discount code TCPLUSROUNDUP to save 20% off a one- or two-year subscription For this article, he broke down onboarding and activation processes at several startups, including Postmates, Zoom, Uber and Canva, to show how they shaped messaging that push users deeper into their funnels. It’s complex work, but don’t be intimidated — a growth analyst or data scientist contractor can easily set up the dashboards you’ll need to run experiments, set goals and track day-to-day progress. “This isn’t meant to be a teardown of each specific startup, but rather a holistic look into what leading companies are doing, their mindsets when it comes to growth and how to replicate these actions in your own startup,” says Martinez. Thanks very much for reading, Walter Thompson Editorial Manager, TechCrunch+ @yourprotagonist While everyone keeps talking about AI, HR tech startups are quietly building toward a $24B market According to a report on European HR tech by GP Bullhound, the industry generated 15% of the region’s new unicorns. “HR tech is proving more durable than other sectors, at least when it comes to fundraising,” write Anna Heim and Alex Wilhelm. “Going by the trend these days, we’re bound to see some HR tech startups from around the world going public in addition to all the AI startups.” Get the TechCrunch+ Roundup newsletter in your inbox! To receive the TechCrunch+ Roundup as an email each Tuesday and Friday, scroll down to find the “sign up for newsletters” section on this page, select “TechCrunch+ Roundup,” enter your email, and click “subscribe.” Coinbase execs: As global crypto policy grows, US has urgent need for legislation Crypto maximalists are comfortable making big bets, but ambiguous oversight by U.S. financial agencies is holding back corporate adoption, according to a study conducted by The Block and Coinbase. “Around 91% of surveyed executives agree that lack of clear regulation on crypto, blockchain or web3 make the space hard to navigate,” reports Jacquelyn Melinek, who interviewed Kara Calvert, head of U.S. policy at Coinbase, and Faryar Shirzad, its chief policy officer. “It doesn’t matter where lines are drawn, we’ll build to those lines,” said Shirzad. “But we can’t deal with a lack of clarity; the uncertainty is not healthy.” Don’t wait to identify your startup’s ideal customer personas Most early-stage startups don’t have a dedicated full-time CMO, and that’s OK. However, it’s still someone’s responsibility to capture user data, which is why growth expert Jonathan Martinez shared a guide with TC+ for developing ideal customer profiles (ICPs). “By identifying your ideal customer personas first, you will find product-market fit faster and identify the right customers to sell to,” he writes. There are signs that it will be a hot secondaries summer Buyers and sellers in the secondary market are getting closer on prices: the average bid/ask spread at private securities marketplace Forge Global has fallen to 17%, reports Rebecca Szkutak. “We need to watch this and see that this 17% is sustainable,” said Forge Global CEO Kelly Rodriques. “If it is, there are a group of market participants that are watching the space and wondering when to jump back in.” SignalFire’s State of Talent report 2023 The State of Talent report that early-stage VC firm SignalFire just shared with TC+ tracks shifts in the tech labor market from the start of the pandemic in March 2020 to the end of Q1 2023. “Tech has seen nonstop layoffs that hit 166,044 workers in Q1 2023 alone,” writes Dr. Heather Doshay, a SignalFire partner. “That’s more than all of 2022’s then-record 161,411 tech layoffs.”
AI Startups
Shutterstock today announced that it plans to expand its existing deal with OpenAI to provide the startup with training data for its AI models. Over the next six years, OpenAI will license data from Shutterstock including images, videos and music as well as any associated metadata. Shutterstock, in turn, will gain “priority access” to OpenAI’s latest tech and new editing capabilities that’ll let Shutterstock customers transform images in Shutterstock’s stock content library. Shutterstock says that, in addition, OpenAI will work with it to bring generative AI capabilities to mobile users through Giphy, the GIF library Shutterstock recently acquired from Meta. “The renewal and significant expansion of our strategic partnership with OpenAI reinforces Shutterstock’s commitment to driving AI tech innovation and positions us as the data and distribution partner of choice for industry leaders in generative AI,” Shutterstock CEO Paul Hennessy said in a press release. Stock content galleries like Shutterstock and generative AI startups have an uneasy — and sometimes testy — relationship. Generative AI, particularly generative art AI, poses an existential threat to stock galleries, given its ability to create highly customizable stock images on the fly. Contributors to stock image galleries, meanwhile, including artists and photographers, have protested against generative AI startups for what they see as attempts to profit off their work without providing credit or compensation. Early this year, Getty Images sued Stability AI, the creators of the AI art tool Stable Diffusion, for scraping its content. The company accused Stability AI of unlawfully copying and processing millions of Getty Images submissions protected by copyright to train its software. In a separate suit, a trio of artists are alleging that Stability AI and Midjourney, an AI art creation platform, are violating copyright law by training on their work from the web without their permission. In contrast to Getty Images, Shutterstock — perhaps unwilling to hinge profits on a lengthy court battle — has embraced generative AI, partnering with OpenAI to roll out an image creator powered by OpenAI’s DALL-E 2. (The Shutterstock-OpenAI deal dates back to 2021, but the image creator didn’t launch until late 2022.) Beyond OpenAI, Shutterstock has established licensing agreements with Nvidia, Meta, LG and others to develop generative AI models and tools across 3D models, images and text. In an attempt to placate the artists on its platform, Shutterstock also maintains a “contributor fund” that pays artists for the role their work has played in training Shutterstock’s generative AI and ongoing royalties tied to licensing for newly-generated assets.
AI Startups
A boom in artificial intelligence startup funding sparked by OpenAI has spilled over to China, the world's second-biggest venture capital market. Now American institutional investors are indirectly financing a rash of Chinese AI startups aspiring to be China's answer to OpenAI. From a report: The American investors, including U.S. endowments, back key Chinese VC firms such as Sequoia Capital China, Matrix Partners China, Qiming Venture Partners and Hillhouse Capital Management that are striking local AI startup deals, which haven't been previously reported. U.S. government officials have grown increasingly wary of such investments in Chinese AI as well as semiconductors because they could aid a geopolitical rival. For instance, Sequoia China, the Chinese affiliate of the Silicon Valley VC stalwart, recently made a U.S.-dollar investment in a brand-new AI venture created by Yang Zhilin, a young assistant professor at Beijing's prestigious Tsinghua University, which is sometimes described as China's equivalent of the Massachusetts Institute of Technology, according to a person with direct knowledge of the deal. Yang, who got his doctorate from the School of Computer Science, Carnegie Mellon University, in 2019, is considered one of China's top AI researchers. He previously co-founded another startup Sequoia China backed, Recurrent AI, which develops tools for salespeople, according to the company's website. Matrix and Qiming, meanwhile, recently funded another Beijing-based AI startup, Frontis, which has compared its product to ChatGPT. It was founded in 2021 by Zhou Bowen, a Tsinghua professor who once led JD.com's AI research lab, according to the company's website. The deal gave the startup a paper valuation of hundreds of millions of U.S. dollars, the company said. Do you develop on GitHub? You can keep using GitHub but automatically sync your GitHub releases to SourceForge quickly and easily with this tool so your projects have a backup location, and get your project in front of SourceForge's nearly 30 million monthly users. It takes less than a minute. Get new users downloading your project releases today!× Sign up for the Slashdot newsletter! or check out the new Slashdot job board to browse remote jobs or jobs in your area. Sign up for the Slashdot newsletter! or check out the new Slashdot job board to browse remote jobs or jobs in your area.
AI Startups
- A Reddit vice president, Jack Hanlon, left the company earlier this year after nearly four years. - Hanlon led a team investing in artificial intelligence for both users and advertisers. - The company filed for an IPO in 2021, and it could go public as soon as this year. A vice president of Reddit overseeing the integration of artificial intelligence into the company's products left earlier this year with the social-media company preparing to go public. Jack Hanlon, who was vice president of feeds, AI, search, and data, spent nearly four years at Reddit and led the purchase of AI companies meant to help Reddit improve its experience for users and ad targeting for brands. Reddit and Hanlon confirmed the departure, which took place in March. On LinkedIn, Hanlon describes himself as the first executive at Reddit to serve under the title and says he grew a team from 21 to 250 people. Reddit filed for an initial public offering in December 2021, when the market for tech stocks was roaring. Since then, higher interest rates and depressed demand for tech stocks saw capital dry up across the industry. The company now hopes to go public in the second half of this year. Employees told Insider that the path to an IPO had been filled with "thrash" from road-map changes and that the organization could slim down middle management. AI is becoming more crucial to social-media companies. The technology can recommend content to users and help businesses target advertisements. Reddit increased its investments in the field to grow its ad business in particular, as well as to automate content moderation. The company has acquired AI startups including Spell, Oterlu, Spiketrap, and others. Reddit more recently dealt with a public backlash over its decision to begin charging for access to its API, resulting in the popular third-party client Apollo being forced to shut down. This led to protest from unpaid moderators, who help keep the site free of spam and other harmful content. They argued that if Reddit wanted to make money off their efforts, they should be paid commensurately. According to an FAQ sheet, Reddit only intends to charge developers who use the API heavily and monetize their apps (Apollo charged its users about $13 a year). Reddit has not retreated, arguing that it cannot subsidize apps that do not display advertising or otherwise generate revenue for the company. CEO Steve Huffman has said he wants companies training AI language models on its content to pay up, The New York Times reported.
AI Startups
Microsoft just kicked off its big press event where it's expected to talk about OpenAI's ChatGPT and how Microsoft will use it in its products. The event will not stream live for the public, but CNBC and other media outlets will be there to cover the news live. OpenAI's CEO, Sam Altman, on Monday tweeted a photo of himself alongside Microsoft CEO Satya Nadella. Microsoft announced a multibillion-dollar investment in OpenAI in January and said it would "deploy OpenAI's models across our consumer and enterprise products and introduce new categories of digital experiences built on OpenAI's technology." We expect to learn more about those plans. Multiple publications have reported that Microsoft aims to augment its Bing search engine with ChatGPT. Microsoft has been unwilling to confirm or deny those reports. Follow along for live updates from Microsoft's Tuesday event below. Microsoft announces new AI-powered Bing homepage that you can chat with Microsoft just announced a new AI-powered Bing homepage, with an expanded chat box that can answer more than just factual questions. The new Bing can: - Answer questions with lots of context similar to the way ChatGPT does. - Create itineraries for trips. So, for example, you can ask it to "Plan a five-day trip to Mexico." - You can continue to ask it more questions. So, if you use the example of planning a trip, you can then follow up with additional questions like "How much will this trip cost us?" or "Can we add or change something in the itinerary?" --Ashley Capoot Nadella promises a 'new paradigm for search' Nadella discussed some of the work the company is doing with AI, specifically referring to search. "And so we want to show you some of this innovation starting with how it's going to reshape the largest software category on planet earth, which I've been working on for a long time and which we are very excited about, search." '"It's a new day in search, it's a new paradigm for search, rapid innovation is going to come," Nadella added. Nadella talked about a "new copilot" experience, which refers to the ability of AI tools like to help perform tasks on behalf of workers with "an all-new Bing search engine and web browser." -- Jonathan Vanian Microsoft CEO Satya Nadella is on stage Microsoft CEO Satya Nadella is on stage. He's talking about ChatGPT's launch last year and how it was "the only thing anybody in your family wanted to talk about throughout the holidays. It's just crazy." "I think this technology is going to reshape pretty much every software category," Nadella added. --Jordan Novet Microsoft's event is starting now Operating on short notice, Microsoft managed to corral about 70 journalists into a room before its presentation, which is getting underway. Media outlets from the U.S. and abroad have representation. —Jordan Novet We're here at Microsoft's campus We're here! It's a typical overcast rainy February morning in Redmond, Washington, and there's construction on Microsoft's campus as the company executes a broad campus refresh. Before the Covid pandemic, Microsoft had said new buildings would be complete by 2022. The company now expects the whole project to be complete by 2025. Microsoft's Connect shuttle buses are ferrying employees into offices, while other employees drive themselves to work. Some employees are still working from home, after getting the okay from management. We should be getting started in about 15 minutes or so. —Jordan Novet AI wars between Microsoft and Google are heating up Not to be outdone by OpenAI, Google parent Alphabet debuted on Monday its own ChatGPT-like tool called Bard, confirming earlier reporting by CNBC. Google chief Sundar Pichai said in a blog post that the company plans to incorporate some of Bard's cutting-edge AI features into its core search tool. This means that, like ChatGPT, future versions of Google Search will summarize complex topics to users. Soon after Google announced Bard, the company sent a memo to employees urging them to provide feedback on the new software. "Next week, we'll be enlisting every Googler to help shape Bard and contribute through a special company-wide dogfood," Pichai said in the employee memo that was viewed by CNBC. --Jonathan Vanian What is ChatGPT? OpenAI's ChatGPT produces text-based answers to prompts. It's an example of generative AI, a kind of artificial intelligence technology that's the latest craze in Silicon Valley. Products like ChatGPT are built on top of generative AI software capable of producing stunning imagery that can look like paintings sketched by real humans or, as is the case with ChatGPT, essays that read like they were written by actual college students. Investors are pouring billions of dollars into startups that specialize in generative AI. Microsoft, for example, announced a new multibillion-dollar investment in OpenAI in late January. Meanwhile, Khosla Ventures, Craft Ventures, Sequoia, Entrepreneur First and Lux Capital are the top venture capital firms investing in generative AI startups, according to deal-tracking firm PitchBook. Stability AI, another popular generative AI-focused startup, said in October that it received $101 million in funding from Coatue, Lightspeed Venture Partners, and O'Shaughnessy Ventures. Stability AI built an open-source project called Stable Diffusion that garnered a lot of attention from people who were captivated by its ability to dream up images based on written prompts. -- Jonathan Vanian Bank of America says to expect ChatGPT integration into Bing search Bank of America analysts said Microsoft's event will likely focus on ChatGPT's integration into Bing search, as well as the company's broader partnership with OpenAI. "The AI race is clearly on for the tech sector," the analysts wrote in a Monday note. They added that speed will be important for scale as the AI models are learning. There's already competition brewing between Microsoft and Google on the AI front. Google will hold an event on Wednesday to talk about its new Bard AI product. Google announced Bard on Monday and said it will roll out in Google Search in the coming weeks. -- Ashley Capoot OpenAI's ChatGPT might pose data-security risks OpenAI's buzzy ChatGPT service knows a lot about the world after being trained on loads of online data. Big technology companies are concerned about the risks that it poses around data security. That reportedly includes Microsoft, which supplies cloud-computing services to OpenAI in order to run ChatGPT. In January, a Microsoft engineer said in an internal online discussion that Microsoft employees should not tell ChatGPT any sensitive corporate information, according to Insider, which reviewed the warning. OpenAI could use the information in the course of training future models, the engineer wrote. From there, it could theoretically be possible for someone to receive confidential information while conversing with a version of ChatGPT that's relying on a more up-to-date language model. Insider also reported that an Amazon attorney advised staff members not to send ChatGPT any confidential information, including source code. — Jordan Novet People can't get enough of ChatGPT Since the hybrid AI research firm OpenAI released ChatGPT in November, people can't seem to get enough of the chat-generating software. OpenAI CEO Sam Altman said via Twitter that, within 5 days of debuting ChatGPT — which is still in a so-called beta, or experimental version — the software "crossed 1 million users!" People have found a number of users for the tool, from helping technologists organize research notes for lectures to generating software code on behalf of developers. ChatGPT has become so popular that some schools across the nation have banned students from using the tool as a new "homework assistant." Check out this CNBC documentary on the rise of ChatGPT and its potential to shake up the business world. -- Jonathan Vanian
AI Startups
Oct 27 (Reuters) - Alphabet's (GOOGL.O) Google has agreed to invest up to $2 billion in the artificial intelligence company Anthropic, a spokesperson for the startup said on Friday. The company has invested $500 million upfront into the OpenAI rival and agreed to add $1.5 billion more over time, the spokesperson said. Google is already an investor in Anthropic, and the fresh investment would underscore a ramp-up in its efforts to better compete with Microsoft (MSFT.O), a major backer of ChatGPT creator OpenAI, as Big Tech companies race to infuse AI into their applications. In Amazon's quarterly report to the U.S. Securities and Exchange Commission this week, the online retailer detailed it had invested in a $1.25 billion note from Anthropic that can convert to equity, while its ability to invest up to $2.75 billion in a second note expires in the first quarter of 2024. Google declined to comment, and Amazon did not immediately respond to a Reuters request for comment. The Wall Street Journal earlier reported the news of Google's latest agreement with Anthropic. The rising number of investments shows ongoing maneuvering by cloud companies to secure ties with the AI startups that are reshaping their industry. Anthropic, which was co-founded by former OpenAI executives and siblings Dario and Daniela Amodei, has shown efforts to secure the resources and deep-pocketed backers needed to compete with OpenAI and be leaders in the technology sector. Reporting by Krystal Hu in New York and Chavi Mehta in Bengaluru; Additional reporting by Jeffrey Dastin; Editing by Anil D'Silva, Devika Syamnath and Chris Reese Our Standards: The Thomson Reuters Trust Principles.
AI Startups
While AI, and in particular the generative AI subcategory, are as hot as the sun, not all venture attention is going to the handful of names that you already know. Sure, OpenAI is able to land nine and 10-figure rounds from a murderer’s row of tech investors and mega-cap corporations. And rising companies like Hugging Face and Anthropic cannot stay out of the news, proving that smaller AI-focused startups are doing more than well. In fact, new data from Carta, which provides cap table management and other services, indicates that AI-focused startups are outperforming their larger peer group at both the seed and Series A stage. The dataset, which notes that AI-centered startups are raising more and at higher valuations than other startups, indicates that perhaps the best way to avoid a down round today is to build in the artificial intelligence space. What the data says Per Carta data relating to the first quarter of the year, seed funding to non-AI startups in the U.S. market that use its services dipped from $1.64 billion to $1.08 billion, or a decline of around 34%. That result is directionally aligned with other data that we’ve seen regarding Q1 2023 venture capital totals; the data points down.
AI Startups
OpenAI has finally released the official ChatGPT app for iPhone, which you can download from the App Store in the US right now. That means you no longer have to worry about third-party apps or Safari to get to ChatGPT on your iPhone. But Apple has reportedly issued an internal memo to restrict ChatGPT and other generative AI products for some employees. While this isn’t confirmed, I see two reasons for iPhone users like me to be excited about Apple’s decision. Here’s what Apple employees can’t do with ChatGPT According to documentation that The Wall Street Journal saw Apple doesn’t want some employees to use third-party generative AI programs as they could send confidential data to the companies that run these services. Apple also instructed employees not to use the Microsoft Copilot program, which automates the writing of software code. The problem with ChatGPT and any other generative AI product is that they don’t have strong privacy protection. That means the information you use in ChatGPT chats goes back to OpenAI and can be used to train ChatGPT. OpenAI only recently introduced ChatGPT privacy settings to prevent that kind of behavior. Apple isn’t the only company implementing strict generative AI policies. Samsung made a similar move recently after employees posted confidential information on ChatGPT. Similarly, JPMorgan Chase and Verizon have banned ChatGPT, per The Journal. Amazon, meanwhile, urged engineers to use its own internal AI tool to write code rather than ChatGPT. Why Apple’s stance on ChatGPT is exciting While the report that a company wants to keep its secrets protected and avoid spilling them via generative AI products like ChatGPT might seem boring, there’s an exciting detail in it. The Journal says Apple has restricted ChatGPT and other generative AI use “for some employees as it develops its own similar technology.” That’s a key detail here and a great reveal that gets somewhat buried in the report. Only later in the story do we learn that former Googler John Giannandrea is heading Apple’s AI efforts. The Journal mentions that Apple had purchased various AI startups in the past without detailing Apple’s actual ChatGPT competitor. If there is one. Also, Apple only wants “some” employees not to use competing generative AI programs. These may be the engineers working on Apple’s own ChatGPT-like advancements for the iPhone. Secondly, the report mentions Tim Cook’s recent reaction to AI apps like ChatGPT and Bard. Here’s what the CEO said during Apple’s most recent calls with investors about the potential of AI: I do think it’s very important to be deliberate and thoughtful in how you approach these things. And there’s a number of issues that need to be sorted as is being talked about in a number of different places, but the potential is certainly very interesting. Having Apple develop its own generative AI program is certainly exciting, and could be exactly what Siri needs. Let’s also remember that a report said Apple wants people to code apps for its AR/VR headset via voice, with sounds a lot like using generative AI. Also, Apple being aware of the various issues with generative AI means it’ll take steps to improve user privacy. Getting accuracy right is another matter, however, and it’s one that not even Apple might be able to fix.
AI Startups
Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of the last week’s stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own. It could be said that last week, Apple very visibly, and with intention, threw its hat into the ultra-competitive AI race. It’s not that the company hadn’t signaled its investments in — and prioritization of — AI previously. But at its WWDC event, Apple made it abundantly clear that AI was behind many of the features in both its forthcoming hardware and software. For instance, iOS 17, which is set to arrive later this year, can suggest recipes for similar dishes from an iPhone photo using computer vision. AI also powers Journal, a new interactive diary that makes personalized suggestions based on activities across other apps. iOS 17 will also feature an upgraded autocorrect powered by an AI model that can more accurately predict the next words and phrases that a user might use. Over time, it’ll become tailored, learning a user’s most frequently-used words — including swear words, entertainingly. AI is central to Apple’s Vision Pro augmented reality headset, too — specifically FaceTime on the Vision Pro. Using machine learning, the Vision Pro can create a virtual avatar of the wearer, interpolating out a full range of facial contortions — down to the skin tension and muscle work. It might not be generative AI, which is without a doubt the hottest subcategory of AI today. But Apple’s intention, it seems to me, was to mount a comeback of sorts — to show that it’s not one to be underestimated after years of floundering machine learning projects, from the underwhelming Siri to the self-driving car in production hell. Projecting strength isn’t just a marketing ploy. Apple’s historical underperformance in AI has led to serious brain drain, reportedly, with The Information reporting that talented machine learning scientists — including a team that had been working on the type of tech underlying OpenAI’s ChatGPT — left Apple for greener pastures. Showing that it’s serious about AI by actually shipping products with AI imbued feels like a necessary move — and a benchmark some of Apple’s competitors have, in fact, failed to meet in the recent past. (Here’s looking at you, Meta.) By all appearances, Apple made inroads last week — even if it wasn’t particularly loud about it. Here are the other AI headlines of note from the past few days: - Meta makes a music generator: Not to be outdone by Google, Meta has released its own AI-powered music generator — and, unlike Google, open-sourced it. Called MusicGen, Meta’s music-generating tool can turn a text description into about 12 seconds of audio. - Regulators examine AI safety: Following the UK government’s announcement last week that it plans to host a “global” AI safety summit this fall, OpenAI, Google DeepMind and Anthropic have committed to provide “early or priority access” to their AI models to support research into evaluation and safety. - AI, meet cloud: Salesforce is launching a new suite of products aimed at bolstering its position in the ultra-competitive AI space. Called AI Cloud, the suite, which includes tools designed to deliver “enterprise ready” AI, is Salesforce’s latest cross-disciplinary attempt to augment its product portfolio with AI capabilities. - Testing text-to-video AI: TechCrunch went hands on with Gen-2, Runway’s AI that generates short video clips from text. The verdict? There’s a long way to go before the tech comes close to generating film-quality footage. - More money for enterprise AI: In a sign that there’s plenty of cash to go around for generative AI startups, Cohere, which is developing an AI model ecosystem for the enterprise, last week announced that it raised $270 million as part of its Series C round. - No GPT-5 for you: OpenAI is still not training GPT-5, OpenAI CEO Sam Altman said at a recent conference hosted by Economic Times — months after the Microsoft-backed startup pledged to not work on the successor to GPT-4 “for some time” after many industry executives and academics expressed concerns about the fast-rate of advancements by Altman’s large language models. - AI writing assistant for WordPress: Automattic, the company behind WordPress.com and the main contributor to the open source WordPress project, launched an AI assistant for the popular content management system last Tuesday. - Instagram gains a chatbot: Instagram may be working on an AI chatbot, according to images leaked by app researcher Alessandro Paluzzi. According to the leaks, which reflect in-progress app developments that may or may not ship, these AI agents can answer questions or give advice. Other machine learnings If you’re curious how AI might affect science and research over the next few years, a team across 6 national labs authored a report, based on workshops conducted last year, about exactly that. One may be tempted to say that, being based on trends from last year and not this one, in which things have progressed so fast, the report may already be obsolete. But while ChatGPT has made huge waves in tech and consumer awareness, the truth is it’s not particularly relevant for serious research. The larger-scale trends are, and they’re moving at a different pace. The 200-page report is definitely not a light read, but each section is helpfully divided into digestible pieces. Elswehere in the national lab ecosystem, Los Alamos researchers are hard at work on advancing the field of memristors, which combine data storage and processing — much like our own neurons do. It’s a fundamentally different approach to computation, though one that has yet to bear fruit outside the lab, but this new approach appears to move the ball forward, at least. AI’s facility with language analysis is on display in this report on police interactions with people they’ve pulled over. Natural language processing was used as one of several factors to identify linguistic patterns that predict escalation of stops — especially with black men. The human and machine learning methods reinforce each other. (Read the paper here.) DeepBreath is a model trained on recordings of breathing taken from patients in Switzerland and Brazil that its creators at EPFL claim can help identify respiratory conditions early. The plan is to put it out there in a device called the Pneumoscope, under spinout company Onescope. We’ll probably follow up with them for more info on how the company is doing. Another AI health advance comes from Purdue, where researchers have made software that approximates hyperspectral imagery with a smartphone camera, successfully tracking blood hemoclobin and other metrics. It’s an interesting technique: using the phone’s super-slow-mo mode, it gets a lot of information about every pixel in the image, giving a model enough data to extrapolate from. It could be a great way to get this kind of health information without special hardware. I wouldn’t trust an autopilot to take evasive maneuvers just yet, but MIT is inching the tech closer with research that helps AI avoid obstacles while maintaining a desirable flight path. Any old algorithm can propose wild changes to direction in order to not crash, but doing so while maintaining stability and not pulping anything inside is harder. The team managed to get a simulated jet to perform some Top Gun-like maneuvers autonomously and without losing stability. It’s harder than it sounds. Last this week is Disney Research, which can always be counted on to show off something interesting that also just happens to apply to filmmaking or theme park operations. At CVPR they showed off a powerful and versatile “facial landmark detection network” that can track facial movements continuously and using more arbitrary reference points. Motion capture is already working without the little capture dots, but this should make it even higher quality — and more dignified for the actors.
AI Startups
Intel Corp. has introduced a processor in China that is designed for AI deep-learning applications despite reports of the Biden administration considering additional restrictions on Chinese companies to address loopholes in chip export controls. The chip giant’s product launch on July 11 is part of an effort by U.S. technology companies to bypass or curb government export controls to the Chinese market as the U.S. government, citing national security concerns, continues to tighten restrictions on China's artificial intelligence industry. CEOs of U.S. chipmakers including Intel, Qualcomm and Nvidia met with U.S. Secretary of State Antony Blinken on Monday to urge a halt to more controls on chip exports to China, Reuters reported. Commerce Secretary Gina Raimondo, National Economic Council director Lael Brainard and White House national security adviser Jake Sullivan were among other government officials meeting with the CEOs, Reuters said. The meeting came after China announced restrictions on the export of materials that are used to construct chips, a response to escalating efforts by Washington to curb China's technological advances. VOA Mandarin contacted the U.S. chipmakers for comment but has yet to receive responses. Reuters reported Nvidia Chief Financial Officer Colette Kress said in June that "over the long term, restrictions prohibiting the sale of our data center graphic processing units to China, if implemented, would result in a permanent loss of opportunities for the U.S. industry to compete and lead in one of the world’s largest markets and impact on our future business and financial results." Before the meeting with Blinken, John Neuffer, president of the Semiconductor Industry Association, which represents the chip industry, said in a statement to The New York Times that the escalation of controls posed a significant risk to the global competitiveness of the U.S. industry. "China is the world’s largest market for semiconductors, and our companies simply need to do business there to continue to grow, innovate and stay ahead of global competitors,” he said. “We urge solutions that protect national security, avoid inadvertent and lasting damage to the chip industry, and avert future escalations.” According to the Times, citing five sources, the Biden administration is considering additional restrictions on the sale of high-end chips used to power artificial intelligence to China. The goal is to limit technological capacity that could aid the Chinese military while minimizing the impact such rules would have on private companies. Such a move could speed up the tit-for-tat salvos in the U.S.-China chip war, the Times reported. And The Wall Street Journal reported last month that the White House was exploring how to restrict the leasing of cloud services to AI firms in China. But the U.S. controls appear to be merely slowing, rather than stopping, China’s AI development. Last October, the U.S. Commerce Department banned Nvidia from selling two of its most advanced AI-critical chips, the A100 and the newer H100, to Chinese customers, citing national security concerns. In November, Nvidia designed the A800 and H800 chips that are not subject to export controls for the Chinese market. According to the Journal, the U.S. government is considering new bans on the A800 exports to China. According to a report published in May by TrendForce, a market intelligence and professional consulting firm, the A800, like Nvidia's H100 and A100, is already the most widely used mainstream product for AI-related computing. Combining chips Robert Atkinson, founder and president of the Information Technology and Innovation Foundation, told VOA in a phone interview that although these chips are not the most advanced, they can still be used by China. “What you can do, though, is you can combine lesser, less powerful chips and just put more of them together. And you can still do a lot of AI processing with them. It just makes it more expensive. And it uses more energy. But the Chinese are happy to do that,” Atkinson said. As for the Chinese use of cloud computing, Hanna Dohmen, a research analyst at Georgetown’s Center for Security and Emerging Technology, told VOA Mandarin in a phone interview that companies can rent chips through cloud service providers. In practice, it is similar to a pedestrian hopping on an e-share scooter or bike — she pays a fee to unlock the scooter’s key function, its wheels. For example, Dohman said that Nvidia's A100, which is “controlled and cannot be exported to China, per the October 7 export control regulations,” can be legally accessed by Chinese companies that “purchase services from these cloud service providers to gain virtual access to these controlled chips.” Dohman acknowledged it is not clear how many Chinese AI research institutions and companies are using American cloud services. “There are also Chinese regulations … on cross-border data that might prohibit or limit to what extent Chinese companies might be willing to use foreign cloud service providers outside of China to develop their AI models,” she said. Black market chips In another workaround, Atkinson said Chinese companies can buy black market chips. “It's not clear to me that these export controls are going to be able to completely cut off Chinese computing capabilities. They might slow them down a bit, but I don't think they're going to cut them off." According to an as yet unpublished report by the Information Technology and Innovation Foundation, China is already ahead of Europe in terms of the number of AI startups and is catching up with the U.S. Although Chinese websites account for less than 2% of global network traffic, Atkinson said, Chinese government data management can make up for the lack of dialogue texts, images and videos that are essential for AI large-scale model training. “I do think that the Chinese will catch up and surpass the U.S. unless we take fairly serious steps,” Atkinson said.
AI Startups
- AI may be the hottest thing in the technology industry, but OpenAI's ChatGPT is not on pace to challenge Google's grip as the search engine leader, according to new analysis from Bank of America Securities. - Analysts found that app downloads for ChatGPT and Microsoft Bing have slowed in recent weeks, citing Sensor Tower data. - ChatGPT downloads on iPhones in the U.S. were down 38% month-over-month in June, according to the note. AI may be the hottest thing in the technology industry, but OpenAI's ChatGPT is not on pace to challenge Google's grip as the search engine leader, according to new analysis from Bank of America Securities. Analysts found that app downloads for ChatGPT and Microsoft Bing have slowed in recent weeks, citing Sensor Tower data, BofA analyst Justin Post wrote in a note on Wednesday. ChatGPT downloads on iPhones in the U.S. were down 38% month-over-month in June, according to the note. Bing app downloads, which includes a ChatGPT-based chatbot in the U.S., were also down 38% in June. Google's search engine market share is slightly up year-over-year at over 92%, according to the note, citing SimilarWeb data. Microsoft's Bing, which uses OpenAI's ChatGPT technology, was down 40 basis points on an annual basis to about 2.8% of the market. The slowing attention for ChatGPT and similar large language models, or LLMs, highlights the investment risk for companies like Google and Microsoft, which have funneled billions of dollars into the idea that recent AI advances could create a next-generation search engine to displace the current winner. Both companies, as well as other other tech giants like Nvidia, are also investing hundreds of millions in AI startups. But if ChatGPT adoption is already slowing, it could indicate that the technology may not seriously threaten Google's dominance in search — and companies might have to find other applications for LLMs, such as in new advertiser tools, the analysts wrote. "As for Google's stock, LLM concerns for Google search have seemingly shifted from market share risk to monetization risk, but with search share seemingly healthy, Google may have less urgency to integrate LLM (chat) results into commercial queries," Post wrote in the note. The ChatGPT app was released in May and so far is only available for iPhones. Bank of America analysts believe that a forthcoming OpenAI app for Android could boost adoption. Besides the app or Bing search engine, ChatGPT users can also access the chatbot through its website. Bank of America analysts estimate visits to ChatGPT were down about 11% on a monthly basis to just over 51 million visitors per week, or about only 2% of Google's estimated web traffic.
AI Startups
Getty Images has filed a lawsuit in the US against Stability AI, creators of open-source AI art generator Stable Diffusion, escalating its legal battle against the firm. The stock photography company is accusing Stability AI of “brazen infringement of Getty Images’ intellectual property on a staggering scale.” It claims that Stability AI copied more than 12 million images from its database “without permission ... or compensation ... as part of its efforts to build a competing business,” and that the startup has infringed on both the company’s copyright and trademark protections. The lawsuit is the latest volley in the ongoing legal struggle between the creators of AI art generators and rights-holders. AI art tools require illustrations, artwork, and photographs to use as training data, and often scrape it from the web without the creator’s consent. The latest in a fast-developing legal battle between AI startups and rights’ holders Getty announced last month that it has “commenced legal proceedings in the High Court of Justice in London” against Stability AI. However, that claim has not yet been served, and the company did not say at the time whether or not it also intended to pursue legal action in the US. Stability AI is also being sued in US along with another AI art startup, Midjourney, by a trio of artists who are seeking a class action lawsuit. “We can confirm on Friday Getty Images filed a complaint against Stability AI, Inc. in the United States District Court in Delaware,” Anne Flanagan, vice president of communications at Getty Images, told The Verge. “Getty Images has also filed a Claim in the High Court, which has not been served at this time. As is customary in the UK, on January 16 Getty Images sent and requested a response to a letter before action from Stability AI Limited within a customary timeframe. Stability AI Limited have confirmed receipt of this letter.” Legal experts say Getty Images’ case is on stronger footing than the artist-led lawsuit, but caution that in such unknown legal territory it’s impossible to predict any outcome. Andres Guadamaz, a UK academic specializing in AI and copyright law, said Getty’s complaint was “very strong,” on Twitter. “The complaint is technically more accurate than the class action lawsuit,” said Guadamaz. “The case will likely rest on the [copyright] infringement claim, and the defendants are likely to argue fair use. Could go either way.” Aaron Moss, a copyright lawyer at Greenberg Glusker and publisher of the Copyright Lately blog, tweeted: “Getty’s new complaint is much better than the overreaching class action lawsuit I wrote about last month. The focus is where it should be: the input stage ingestion of copyrighted images to train the data. This will be a fascinating fair use battle.” Speaking to The Verge via DM, Moss, who was the first to publish the full complaint on his blog, noted that the would-be class action lawsuit “was much more focused on the occupational harm caused to working artists by the proliferation of AI tools,” while Getty’s concentrates “on the fact it wasn’t paid for the use of its images.” Notably, Getty has licensed its images and metadata to other AI art generators, underscoring the fact that Stability AI willfully scraped its images without permission. The copyright infringement arguments in the lawsuit will turn on the interpretation of the US fair use doctrine, which protects the unlicensed use of copyrighted-work in certain scenarios. The concept of “transformative use” is also likely to be an important factor. Is the output of Stable Diffusion different enough from its training data? Recent research has found that the software memorizes some of its training images and can reproduce them almost exactly, though this only happens in a very small number of cases. Another argument floated by Getty Images relates to its trademark. Stable Diffusion is well known for recreating the company’s watermark in some of its images, and Getty argues that the appearance of this watermark on the model’s often “bizarre or grotesque images, dilutes the quality of the Getty Images Marks by blurring or tarnishment.” The case will be slow to move forward though, cautioned Moss. He notes that it was filed in the District Court of Delaware, and that the court’s docket is “pretty backed up.” “I’m currently handling a matter there, and was told that judges routinely take months (like sometimes up to 6-9 months) to decide motions to dismiss after they’re submitted,” Moss told The Verge. “it will likely take several years for the Getty Images case to get through discovery and summary judgment motions before trial.” He notes that such fair use cases also require input from both judges and juries. “The jury decides any disputed factual issues, but the ultimate legal questions are supposed to be decided by a judge,” says Moss. The Verge has reached out to Stability AI for comment and will update this story if we hear back. You can read Getty Images’ complaint (23-cv-135) in full below:
AI Startups
Altman said there's too much short-term investor frenzy, and not enough long-term vision. "There's crazy stuff happening in Silicon Valley right now," the OpenAI CEO said. Altman made the comments at a recent event in India, as part of his AI speaking world tour. OpenAI CEO Sam Altman thinks there's too much investor hype around artificial intelligence in Silicon Valley right now. Altman has been on a world tour of late to meet with policymakers, developers and AI users, making appearances across Europe, in Israel and now India, where he recently sat down for an event hosted by the Indian newspaper, The Economic Times. When asked about the investor AI frenzy, Altman didn't mince words. "It's wildly overhyped in the short-term," Altman said. "There's crazy stuff happening in Silicon Valley right now." But he added that the investor interest is "still underhyped in the long-term." That's because if the technology progresses as well as he and others think it could, there's no way to determine how valuable that technology will be. "No one knows how to think about that, no one knows how to value that, but whatever they're thinking is probably too low," Altman said. There's no doubt venture capitalists and other investors are closely looking at how to fund the next big thing in AI. OpenAI recently raised a round of funding via a share sale of $495 million, which valued the company at $27 billion to $29 billion. AI startups are announcing new rounds of funding every week from VC firms large and small, like Andreessen Horowitz leading fundraises in startups like Pinecone and Character.AI and coleading investments in Hippocratic AI, Coactive, ElevenLabs, among others. Sequoia is also knee-deep in AI investing, backing companies like Harvey and Langchain, and issuing a rare call out to woo startup investors looking for funding. Funding to generative AI startups alone has jumped 580% in the past three years, according to PitchBook data. In the first quarter of this year, those companies brought in about $1.7 billion of funding from investors, while that number was just $250 million in the same quarter in 2020. There's so much AI startup funding activity that Y Combinator cofounder Paul Graham thinks that public stock market investors are missing out because there's so few options available there. "Most of the good investments are still private," Graham wrote in a recent tweet. Regarding OpenAI's training of the next version of ChatGPT, GPT-5, Altman said the company hasn't started working on it yet. That was in response to claims that the company had started work on the update in April, but stopped after those in the tech industry, including Elon Musk and AI experts, signed an open letter calling for a pause on training AI systems more powerful than GPT-4. Altman said it takes a lot of time, people and resources to train these models and there wasn't a set timeframe for when OpenAI will release the next version of the chatbot. "We have a lot of work to do before we're ready to go start that model," Altman said. "We're working on the new ideas that we think we need for it, but we're certainly not close to ready to start." Read the original article on Business Insider
AI Startups
xAI, Elon Musk’s newly formed AI company, has revealed itself with a new website detailing its mission and team at https://x.ai/. Musk tweeted the company’s intent is to “understand reality” without any other details or explanation. “The goal of xAI is to understand the true nature of the universe,” according to the website. The team is headed up by Elon Musk and includes team members that have worked at other big names in AI, including OpenAI, Google Research, Microsoft Research, and DeepMind (which was recently folded into Google). In addition to Musk, the website lists Igor Babuschkin, Manuel Kroiss, Yuhuai (Tony) Wu, Christian Szegedy, Jimmy Ba, Toby Pohlen, Ross Nordeen, Kyle Kosic, Greg Yang, Guodong Zhang, and Zihang Dai. xAI’s team is currently advised by Dan Hendrycks, a researcher who currently leads the Center for AI Safety, a nonprofit that aims to “reduce societal-scale risks associated with AI.” The @xAI team will be hosting a Twitter Spaces discussion on July 14th, where listeners can “meet the team and ask us questions,” the website says. No specific time was given. According to xAI’s website, the company is “separate” from Musk’s overarching X Corp “but will work closely with X (Twitter), Tesla, and other companies.” Musk recently imposed strict but apparently temporary limits on reading Twitter, blaming the change on scraping by AI startups seeking data for large language models (LLMs). We first heard about xAI in April, when filings indicated that Musk founded the company in Nevada. At the time, it had Musk listed as its director, with Jared Birchall, the director of Musk’s family office, listed as its secretary. Not much was known about xAI at the time, but reports suggested that Musk sought funding from SpaceX and Tesla to get it started. Musk has been part of a major AI organization before, co-founding OpenAI in 2015. However, he walked away from it in 2018 to avoid a conflict of interest with Tesla, which also does a lot of work in the field. He’s since openly criticized OpenAI and told Tucker Carlson he was working on building something called “TruthGPT.” Update July 12th, 12:36PM ET: Added tweet from Elon Musk.
AI Startups
That was quick: Artificial intelligence has gone from science fiction to novelty to Thing We Are Sure Is the Future. Very, very fast. One easy way to measure the change is via headlines — like the ones announcing Microsoft’s $10 billion investment in OpenAI, the company behind the dazzling ChatGPT text generator, followed by other AI startups looking for big money. Or the ones about school districts frantically trying to cope with students using ChatGPT to write their term papers. Or the ones about digital publishers like CNET and BuzzFeed admitting or bragging that they’re using AI to make some of their content — and investors rewarding them for it. “Up until very recently, these were science experiments nobody cared about,” says Mathew Dryhurst, co-founder of the AI startup Spawning.ai. “In a short period of time, [they] became projects of economic consequence.” Then there’s another leading indicator: lawsuits lodged against OpenAI and similar companies, which argue that AI engines are illegally using other people’s work to build their platforms and products. This means they are aimed directly at the current boom of generative AI — software, like ChatGPT, that uses existing text or images or code to create new work. Last fall, a group of anonymous copyright owners sued Open AI and Microsoft, which owns the GitHub software platform, for allegedly infringing on the rights of developers who’ve contributed software to GitHub. Microsoft and OpenAI collaborated to build GitHub Copilot, which says it can use AI to write code. And in January, we saw a similar class-action suit filed (by the same attorneys) against Stability AI, the developer of the AI art generator Stable Diffusion, alleging copyright violations. Meanwhile, Getty Images, the UK-based photo and art library, says it will also sue Stable Diffusion for using its images without a license. It’s easy to reflexively dismiss legal filings as an inevitable marker of a tech boom — if there’s hype and money, lawyers are going to follow. But there are genuinely interesting questions at play here — about the nature of intellectual property and the pros and cons of driving full speed into a new tech landscape before anyone knows the rules of the road. Yes, generative AI now seems inevitable. These fights could shape how we use it and how it affects business and culture. We have seen versions of this story play out before. Ask the music industry, which spent years grappling with the shift from CDs to digital tunes, or book publishers who railed against Google’s move to digitize books. The AI boom is going to “trigger a common reaction among people we think of as creators: “‘My stuff is being stolen,’” says Lawrence Lessig, the Harvard law professor who spent years fighting against music labels during the original Napster era, when he argued that music owners were using copyright rules to quash creativity. In the early 2000s, tussles over digital rights and copyrights were a sidelight, of concern to a relatively small slice of the population. But now everyone is online — which means that even if you don’t consider yourself a “creator,” stuff you write or share could become part of an AI engine and used in ways you’d never imagine. And the tech giants leading the charge into AI — in addition to Microsoft, both Google and Facebook have made enormous investments in the industry, even if they have yet to bring much of it in front of the public — are much more powerful and entrenched than their dot-com boom counterparts. Which means they have more to lose from a courtroom challenge, and they have the resources to fight and delay legal consequences until those consequences are beside the point. AI’s data-fueled diet The tech behind AI is a complicated black box, and many of the claims and predictions about its power may be overstated. Yes, some AI software seems to be able to pass parts of MBA and medical licensing tests, but they’re not going to replace your CFO or doctor quite yet. They are also not sentient, despite what a befuddled Googler might have said. But the basic idea is relatively straightforward: Engines like the ones built by OpenAI ingest giant data sets, which they use to train software that can make recommendations or even generate code, art, or text. In many cases, the engines are scouring the web for these data sets, the same way Google’s search crawlers do, so they can learn what’s on a webpage and catalog it for search queries. In some cases, such as Meta, AI engines have access to huge proprietary data sets built in part by the text, photos, and videos its users have posted on their platforms. Meta declined to comment on the company’s plans for using that data if it ever builds AI products like a ChatGPT-esque engine. Other times, the engines will also license data, as Meta and OpenAI have done with the photo library Shutterstock. Unlike the music piracy lawsuits at the turn of the century, no one is arguing that AI engines are making bit-for-bit copies of the data they use and distributing them under the same name. The legal issues, for now, tend to be about how the data got into the engines in the first place and who has the right to use that data. AI proponents argue that 1) engines can learn from existing data sets without permission because there’s no law against learning, and 2) turning one set of data — even if you don’t own it — into something entirely different is protected by the law, affirmed by a lengthy court fight that Google won against authors and publishers who sued the company over its book index, which cataloged and excerpted a huge swath of books. The arguments against the engines seem even simpler: Getty, for one, says it is happy to license its images to AI engines, but that Stable Diffusion builder Stability AI hasn’t paid up. In the OpenAI/Microsoft/GitHub case, attorneys argue that Microsoft and OpenAI are violating the rights of developers who’ve contributed code to GitHub, by ignoring the open source software licenses that govern the commercial use of that code. And in the Stability AI lawsuit, those same lawyers argue that the image engine really is making copies of artists’ work, even if the output isn’t a mirror image of the original. And that their own output competes with the artists’ ability to make a living. “I’m not opposed to AI. Nobody’s opposed to AI. We just want it to be fair and ethical — to see it done right,” says Matthew Butterick, a lawyer representing plaintiffs in the two class-action suits. And sometimes the data question changes depending on whom you ask. Elon Musk was an early investor in OpenAI — but once he owned Twitter, he said he didn’t want to let OpenAI crawl Twitter’s database. Not surprising, as I just learned that OpenAI had access to Twitter database for training. I put that on pause for now.— Elon Musk (@elonmusk) December 4, 2022 Need to understand more about governance structure & revenue plans going forward. OpenAI was started as open-source & non-profit. Neither are still true. What does the past tell us about AI’s future? Here, let’s remember that the Next Big Thing isn’t always so: Remember when people like me were earnestly trying to figure out what Web3 really meant, Jimmy Fallon was promoting Bored Ape NFTs, and FTX was paying millions of dollars for Super Bowl ads? That was a year ago. Still, as the AI hype bubble inflates, I’ve been thinking a lot about the parallels with the music-versus-tech fights from more than two decades ago. Briefly: “File-sharing” services blew up the music industry almost overnight because they gave anyone with a broadband connection the ability to download any music they wanted, for free, instead of paying $15 for a CD. The music industry responded by suing the owners of services like Napster, as well as ordinary users like a 66-year-old grandmother. Over time, the labels won their battles against Napster and its ilk, and, in some cases, their investors. They also generated tons of opprobrium from music listeners, who continued to not buy much music, and the value of music labels plummeted. But after a decade of trying to will CD sales to come back, the music labels eventually made peace with the likes of Spotify, which offered users the ability to subscribe to all-you-can-listen-to service for a monthly fee. Those fees ended up eclipsing what the average listener would spend a year on CDs, and now music rights and the people who own them are worth a lot of money. So you can imagine one outcome here: Eventually, groups of people who put things on the internet will collectively bargain with tech entities over the value of their data, and everyone wins. Of course, that scenario could also mean that individuals who put things on the internet discover that their individual photo or tweet or sketch means very little to an AI engine that uses billions of inputs for training. It’s also possible that the courts — or, alternatively, regulators who are increasingly interested in taking on tech, particularly in the EU — enforce rules that make it very difficult for the likes of OpenAI to operate, and/or punish them retroactively for taking data without consent. I’ve heard some tech executives say they’d be wary of working with AI engines for fear of ending up in a suit or being required to unwind work they’d made with AI engines. But the fact that Microsoft, which certainly knows about the dangers of punitive regulators, just plowed another $10 billion into OpenAI suggests that the tech industry figures the reward outweighs the risk. And that any legal or regulatory resolution will show up long, long after the AI winners and losers will have been sorted out. A middle ground, for now, could be that people who know and care about this stuff take the time to tell AI engines to leave them alone. The same way people who know how webpages are made know that “robots.txt” is supposed to tell Google not to crawl your site. Spawning.AI has built “Have I Been Trained,” a simple tool that’s supposed to tell if your artwork has been consumed by an AI engine, and gives you the ability to tell engines not to inhale it in the future. Spawning co-founder Dryhurst says the tool won’t work for everyone or every engine, but it’s a start. And, more important, it’s a placeholder as we collectively figure out what we want AI to do, and not do. “This is a dress rehearsal and opportunity to establish habits that will prove to be crucial in the coming decades,” he told me via email. “It’s hard to say if we have two years or 10 years to get it right.” Update, February 2, 3 pm ET: This story was originally published on February 1 and has been updated with Meta declining to comment on its plans for building generative AI products.
AI Startups
With the rise of open-source AI models, the commoditization of this groundbreaking technology is upon us. It’s easy to fall into the trap of aiming a newly-released model at a desirable tech demographic and hoping it catches on. Creating a moat when so many models are easily accessible creates a dilemma for early-stage AI startups, but leveraging deep relationships with customers in your domain is a simple, yet effective tactic. The real moat is a combination of AI models trained on proprietary data, as well as a deep understanding of how an expert goes about their daily tasks to solve nuanced workflow problems. In highly-regulated industries where outcomes have real-world implications, data storage must pass a high bar of compliance checks. Typically, customers prefer companies with prior track records over startups, which promotes an industry of fragmented datasets where no single player has access to all the data. Today, we have a multi-modal reality in which players of all sizes hold datasets behind highly compliant walled-garden servers. This creates an opportunity for startups with existing relationships to approach potential customers who would typically outsource their technology to launch a test pilot with their software to solve specific customer problems. These relationships could arise through co-founders, investors, advisors, or even prior professional networks. The real moat is a combination of AI models trained on proprietary data, as well as a deep understanding of how an expert goes about their daily tasks to solve nuanced workflow problems. Showing customers tangential credentials is an effective way to build trust: positive indicators include team members from a university known for AI experts, a strong demo where the prototype enables prospective customers to visualize outcomes, or a clear business case analysis of how your solution will help them save or make money. One mistake founders commonly make at this stage is to assume that building models of client data is sufficient for product-market-fit and differentiation. In reality, finding PMF is much more complex: just throwing AI at a problem creates issues regarding accuracy and customer acceptance. Clearing the high bar of augmenting experienced experts in highly-regulated industries who have an intricate knowledge of day-to-day changes typically turns out to be a tall order. Even AI models that are trained well on data can lack the accuracy and nuance of expert domain knowledge, or even more importantly, any connection to reality. A risk-detection system trained on a decade of data may have no idea about industry expert conversations or recent news that could render a formerly-considered “risky” widget completely harmless. Another example could be a coding assistant suggesting code completion of a prior version of a front-end framework which has separately benefitted from a succession of high-frequency breaking feature releases. In these types of situations, it’s better for startups to rely on the pattern of launching and iterating, even with pilots. There are three key tactics in managing pilots:
AI Startups
Y Combinator’s Latest Batch Is 35% AI Startups The latest group of Y Combinator companies will be in the Bay Area in person. (Bloomberg) -- More than one-third of the famed startup accelerator Y Combinator’s latest batch of companies are focused specifically on artificial intelligence. Y Combinator received a record 24,000 applications for its latest cohort, and accepted just under 1% of those, Garry Tan, president and chief executive officer of the accelerator, said Tuesday in an interview on Bloomberg Television. About 35% of the companies selected for the program are AI-focused, he said, and as many as half involve AI as a component of their business. “There’s something very special happening here,” Tan said about AI in San Francisco. “The smartest people in the world are sitting in those cafes having discussions. Not just about starting their companies, but also what is the cutting edge of what these AI models can do.” This year, for the first time since the pandemic, Y Combinator is making in-person participation a mandatory part of its program, and will require all founders to be in the Bay Area. “There’s nothing like having the energy of having people in person,” Tan said. The accelerator runs two programs annually, one in the winter and one in the summer. Y Combinator pledges to invest $500,000 in those startups selected for the program and takes them through a three-month course on running a company. Some of YC’s notable alumni include Airbnb Inc. and Stripe Inc. More stories like this are available on bloomberg.com ©2023 Bloomberg L.P.
AI Startups
When employees leave Google to join the artificial intelligence startup race, the search giant still has a way to benefit -- by keeping those former workers as cloud customers. From a report: More than half of venture-backed generative AI startups pay for Google's cloud computing platform, Alphabet, Google's parent company, said Tuesday. Of the startups valued at over $1 billion, 70% are Google Cloud customers, and about a third of those are helmed by former employees, including Anthropic, Character.ai and Cohere, the company said. That gives Google a way to extend its influence in the field even when it sheds talent. Google's cloud unit, which reported a profit for the first time this year, has emerged as one of the company's best bets for growth as its core search business matures. Google still trails Amazon's AWS and Microsoft's Azure in the market. But startups in the field of generative AI -- programs that can spin up images, text and video from simple prompts -- are increasingly turning to the company, said James Lee, Google Cloud's general manager for startups and AI. "We're seeing strong momentum in our business, and we see Google Cloud as the preferred choice for startups building generative AI," Lee said in an interview. Google Cloud customers have the option to use AI models from Google itself as well as other companies, a degree of flexibility that appeals to startups, Lee said. Google's cloud unit, which reported a profit for the first time this year, has emerged as one of the company's best bets for growth as its core search business matures. Google still trails Amazon's AWS and Microsoft's Azure in the market. But startups in the field of generative AI -- programs that can spin up images, text and video from simple prompts -- are increasingly turning to the company, said James Lee, Google Cloud's general manager for startups and AI. "We're seeing strong momentum in our business, and we see Google Cloud as the preferred choice for startups building generative AI," Lee said in an interview. Google Cloud customers have the option to use AI models from Google itself as well as other companies, a degree of flexibility that appeals to startups, Lee said.
AI Startups
Just over a year ago, the idea of generating realistic images from a prompt would have been called a batshit crazy idea. Fast forwarding a few months, we have AI generators that can produce a full 3D model, with all the textures added, with just a single sentence.Luma AI, a startup aiming to make 3D models as easy as “waving a phone around”, has released Imagine, a model that gives users the power of creating 3D assets entirely with text.And the quality of these models is insane:Check them out. After getting on their waitlist, you can also generate 3D AI models (Just like what OpenAI did with DallE)Get our free, 5-min daily newsletter with TLDRs of the most exciting/terrifying news in AI🤖, ML📱, and AI Startups🚀! 👇Half of a AAA video game's development cost is art & design, which is also half the time of development. This technology can allow game companies to make custom 3D assets in no time at all. A 3D landscape with all the trees, flowers, skies, and mountains could be made in just a day or two instead of weeks. It gives 3D modelers a base asset that can be further customized and improved upon.In fact, AI is already being used to make concepts art into actual characters.Leaving startups aside, Text-to-3D models are also being developed by mega-corporations in this AI gold rush. Here is some work of Google’s DreamFusion: Nvidia has also released its own model called Magic3D.It has been only a year into the AI bandwagon and we have already achieved some insane progress. It's certain that the next decade is going to be the AI era, just like the Internet era and the Crypto era. Here’s how Sequoia projects it to be:It is a very exciting and terrifying time to live in.Get our free, 5-min daily newsletter with TLDRs of the most exciting/terrifying news in AI🤖, ML📱, and AI Startups🚀! 👇
AI Startups
Mayfield is nearly doubling its investing pace this year, while other firms are pulling back. The longtime VC firm just announced a new $250 million seed fund focused on AI. In tough economic times, the firm's leader, Navin Chaddha, believes it's a prime time to invest. Slow and steady wins the race. That has essentially been Navin Chaddha's mantra since he took over the reins at venture-capital stalwart Mayfield Fund in 2009. Since then, Mayfield's bar to make a deal with a startup has been incredibly high. On average, of the thousands of companies it meets with annually, Mayfield only makes investments in about 0.4% of them. That's lower than the acceptance rate of startup accelerator Y Combinator, which accepts around 1.5% to 2% of companies that apply for its program. But it's a strategy that's worked for Mayfield, helping it deliver consistent returns for its LPs, and a big part of the reason why that prudence has made its LPs continue to commit capital for the firm's funds. Earlier this year, the firm announced a total fundraise of $955 million for two new funds — its biggest fundraise thus far — and recently announced a new $250 million AI-focused fund that will draw from the firm's current funds and the new capital raised. That's in stark contrast to many VCs and their firms like Tiger Global, TCV and Insight Partners, which are tempering their lofty fundraising goals by lowering their targets. The fresh capital has put Mayfield in a position to lean in and do more investments than it typically does. The firm does about eight investments per year and when markets get too frothy, Mayfield actually pulls back on investing – like it did in 2021 when it only invested in seven new startups, Chaddha said. But now that the market has been on shaky footing the past year with talks of a looming recession, Mayfield's leader, Navin Chaddha, says the firm is probably on pace to do up to 15 new investments this year. Last year, when most of the venture industry pulled back on funding startups, Mayfield made 12 new investments. "So that's what we do, when things get out of whack, and the whole industry follows this way, we follow that way. We are just contrarian," Chaddha said. And the timing really couldn't be better for Mayfield, as the hype in AI – particularly generative AI – has hit an apex in the VC industry. "We think this is the time to lean in," Chaddha said. The Mayfield leader – who's made successful bets on companies like Hashicorp, Lyft and Poshmark, and has been featured on Forbes' Midas list almost every year he's been at the helm of the firm – firmly believes that the greatest companies are created during tough economic times. That's what makes the current period such an enticing time to invest in new companies, particularly in AI. These companies could usher in the next wave of tech dominance. Mayfield is actually doubling-down on that assertion with its new AI-focused seed fund, and hiring expert industry investor, Vijay Reddy, to head it up. Reddy focused on AI at Clear Ventures and before that, at Intel Capital, making investments in companies like Sambanova, DataRobot and Joby. Chaddha told Forbes that he was interested in a wide range of AI companies, from application software to semiconductors, while Reddy said he was excited about co-piloted AI applications, which keep humans as part of the decision making process, and companies building AI trust and safety infrastructure tools. Mayfield's new AI focus comes amid an inflection point for the venture-capital industry. Many saw this AI wave coming, while others are now trying to catch up. Over the past few months, several firms have announced new AI-focused funds or set aside capital to focus on the sector: Wing Venture Capital just closed a $600 million fund, Bessemer earmarked $1 billion, and Sequoia is now pulling out all the stops to get in on AI startup investments. For years, Mayfield had kept tabs on AI startups like MindsDB, which was relatively unknown and cast aside by some investors just five to six years ago. But once the startup began to get investor interest, Mayfield was ready to pounce and make an investment. With a dedicated $250 million AI seed war chest, Mayfield is now prepared to lean in even more. "We believe that AI will emerge as our teammate and that the Gen.AI wave will create many iconic companies," Chaddha said in a press release. Read the original article on Business Insider
AI Startups