article_text
stringlengths 294
32.8k
⌀ | topic
stringlengths 3
42
|
---|---|
Uber announced Tuesday it's bringing back shared rides in a handful of U.S. cities after pulling the service for more than two years.The feature, which allows riders to share a car with strangers going in a similar direction for a discount, is available in New York City, Los Angeles, Chicago, San Francisco, Phoenix, San Diego, Portland, Indianapolis and Pittsburgh. It plans to expand to other locations this summer, the company said.Uber and Lyft suspended shared rides in March 2020 as the Covid-19 pandemic began to ravage communities across the globe. Lyft brought back shared rides to Philadelphia and Miami last year. The company said last month it would expand to San Francisco, San Jose, Denver, Las Vegas and Atlanta.Uber CEO Dara Khosrowshahi said in the fall that Uber would bring back shared rides as prices across the industry continued to surge. The company then rebranded the service to "UberX Share" instead of "Uber Pool," and started piloting rides in Miami.Uber said Tuesday that riders who choose to share will get an upfront discount and up to 20% off the total fare if they're matched with a co-rider along the way. The company is also promising riders will arrive no more than 8 minutes later than their a private Uber would. Uber has limited rides to a total of two passengers.Subscribe to CNBC on YouTube. | Automotive and Transportation |
This year, the northern summer solstice falls on June 21 at 5:13 a.m. EDT (June 21 at 9:13 UT). South of the Equator, this same moment marks the unofficial beginning of winter. Solstices occur at the same time around the world, but their local times vary with time zones. Traditionally, summer and winter solstices helped mark the changing of the seasons—along with their counterparts, the spring and autumnal equinoxes. However, today’s meteorologists officially use temperature records instead to draw lines between the seasons. So what exactly are solstices—and how have they been celebrated throughout history? Here’s all you need to know.(Here's the difference between astronomical and meteorological seasons.)What are solstices?Solstices occur because Earth’s axis of rotation is tilted about 23.4 degrees relative to Earth's orbit around the sun. This tilt drives our planet's seasons, as the Northern and Southern Hemispheres get unequal amounts of sunlight over the course of a year. From March to September, the Northern Hemisphere is tilted more toward the sun, driving its spring and summer. From September to March, the Northern Hemisphere is tilted away, so it feels like autumn and winter. The Southern Hemisphere's seasons are reversed.On two moments each year—what are called solstices—Earth's axis is tilted most closely toward the sun. The hemisphere tilted most toward our home star sees its longest day, while the hemisphere tilted away from the sun sees its longest night. During the Northern Hemisphere’s summer solstice—which always falls around June 21—the Southern Hemisphere gets its winter solstice. Likewise, during the Northern Hemisphere's winter solstice—which always falls around December 22—the Southern Hemisphere gets its summer solstice.You can also think about solstices in terms of where on Earth the sun appears. When it’s a summer solstice in the Northern Hemisphere, the sun appears directly over the Tropic of Cancer, the latitude line at 23.5 degrees North. (That’s as far north as you can go and still see the sun directly overhead.) During the Northern Hemisphere’s winter solstice, the sun appears directly over the Tropic of Capricorn, the Tropic of Cancer’s southern mirror image.Earth is not the only planet with solstices and equinoxes; any planet with a tilted rotational axis would see them, too. In fact, planetary scientists use solstices and equinoxes to define "seasons" for other planets in our solar system.It's worth noting, though, that other planets' seasons don't climatically equal those on Earth for a few reasons. First, planets vary in their axial tilts: Venus's axis of rotation is tilted by just three degrees, so there's much less seasonal difference between the Venusian summer and winter solstices than those on Earth. In addition, planets such as Mars have less circular orbits than Earth's, which means that their distances from the sun vary more dramatically than ours do, with correspondingly bigger effects on seasonal temperature.Earth’s axial tilt plays a much bigger role than its near-circular orbit in governing annual seasons. Earth makes its closest annual approach of the sun about two weeks after the December solstice, during the Northern Hemisphere's winter. Earth is farthest from the sun about two weeks after the June solstice, during the Northern Hemisphere's summer.The solstice through historyFor millennia, cultures around the world have devised ways to celebrate and revere these celestial events—from building structures that align with the solstice to throwing raucous festivals in its honor.Though the purpose of the enigmatic English structure Stonehenge remains unknown, this 5,000-year-old monument has a famously special relationship with the solstices. On the summer solstice, the complex’s Heel Stone, which stands outside Stonehenge’s main circle, lines up with the rising sun.In Egypt, the Great Pyramids at Giza appear to be aligned with the sun as well. When viewed from the Sphinx, the sun sets between the pyramids of Khufu and Khafre during the summer solstice—though it remains unclear precisely how the ancient Egyptians oriented it this way.Many cultures have found unique ways to mark the summer solstice. The traditional Scandinavian holiday of Midsummer welcomes it with maypole dancing, drinking, and romance. During the Slavic holiday of Ivan Kupala, people wear floral wreaths and dance around bonfires, while some plucky souls jump over the fires as a way of ensuring good luck and health. In a more modern tradition, the people of Fairbanks, Alaska, swing in the summer solstice with a nighttime baseball game to celebrate the fact that they can get up to 22.5 hours of daylight in the summer. The Midnight Sun Game has been played since 1906.The winter solstice has had its share of celebrations, too. On June 24, in time with the Southern Hemisphere’s winter solstice, the Inca Empire celebrated Inti Raymi, a festival that honored the Inca religion’s powerful sun god Inti and marked the Inca new year. The festival is still celebrated throughout the Andes, and since 1944, a reconstruction of Inti Raymi has been staged in Cusco, Peru, less than two miles from its Inca Empire home. Ancient Romans celebrated the winter solstice with Saturnalia, a seven-day festival that involved giving presents, decorating houses with plants, and lighting candles. And Iranians celebrate Yalda in December. The festival—a mainstay since Zoroastrianism was Iran’s dominant religion—traditionally honors the birth of Mithra, the ancient Persian goddess of light.Misconceptions about the solsticesIf solstices mark the brightest and darkest days of the year, why don’t temperatures reflect that?In short, it’s because it takes time for Earth’s land and water to heat up and cool down. In the U.S., the year’s coldest temperatures set in after-mid January, roughly a month after the Northern Hemisphere’s winter solstice. Likewise, thermometers hit their high in the U.S. in July and August, weeks after the summer solstice.Some believe, too, that since Earth’s rotation is slowing down, each new solstice sets a new record for daytime length. But that’s not the case.It’s certainly true that Earth’s rotation has slowed over billions of years, as Earth loses angular momentum to our planet’s tides. Growth lines on fossil corals show that more than 400 million years ago, days on Earth lasted less than 22 hours.But Earth’s gradual slowing down isn’t the only factor at play. Picture a figure skater twirling on their skates; they can speed up or slow down their twirls by how much they tuck in their limbs. In much the same way, changes in the distribution of Earth’s mass—from the winds of El Niño to the melting of Greenland’s ice—can subtly tweak our planet’s rotation rate.Taking all this into account, it’s thought that the longest day since the 1830s occurred sometime in 1912. It lasted less than four milliseconds longer than the recent average. | Space Exploration |
Topline
Uber announced on Tuesday it would be bringing back its shared ride service — where passengers can ride with others going in the same direction at a discount — in certain U.S. cities after shutting down the program two years ago due to the Covid-19 pandemic. Sign with logo at the headquarters of car-sharing technology company Uber in the South of Market ... [+] (SoMa) neighborhood of San Francisco, California, with red vehicle visible in the background parked on Market Street, October 13, 2017. SoMa is known for having one of the highest concentrations of technology companies and startups of any region worldwide. (Photo by Smith Collection/Gado/Getty Images) Getty Images Key Facts Shared Uber rides are now available in New York City, Los Angeles, Chicago, San Francisco, Phoenix, San Diego, Portland, Indianapolis and Pittsburgh, and Uber plans to expand to other cities later this summer, the company said. Previously known as “Uber Pool,” the service is now branded as “UberX Share.” Riders will get an upfront discount as well as up to 20% off their ride if matched with another rider during their journey. Uber says the carpool rides should arrive no later than eight minutes later than a regular UberX. Key Background
The move comes as rideshare companies’ prices reach all time highs, due to the skyrocketing gas costs and a driver shortage. According to research firm YipitData, fares for companies like Uber and Lyft hit a record high in April, with average prices 35% higher than they were than pre-pandemic, the Wall Street Journal reported. The data also shows that both companies saw 20% fewer riders and reported 35% fewer trips in the first quarter than three years prior. Big Number
50%. That’s how much Uber’s shares have fallen in the past year, while Lyft’s have fallen 65%. Since the beginning of the pandemic, Uber’s shares have fallen 45%, while Lyft’s have fallen about 69%. Further Reading
Uber restarting shared rides in U.S. cities like New York and San Francisco (CNBC) Uber and Lyft’s New Road: Fewer Drivers, Thrifty Riders and Jittery Investors (Wall Street Journal)
Is Uber Stock Too Cheap To Ignore? (Forbes)
Follow me on Twitter or LinkedIn. Send me a secure tip. | Automotive and Transportation |
The sun begins to rise behind stones during the Summer Solstice at Stonehenge in Wiltshire. Picture ... [+] date: Tuesday June 21, 2022. (Photo by Andrew Matthews/PA Images via Getty Images) PA Images via Getty Images Today at at 09:13 Universal Time (5:13 a.m. EDT) the Sun was directly above the Tropic of Cancer in the northern hemisphere. This imaginary circle of latitude at 23.5° North that goes through Mexico, North Africa and India is as far north as the Sun appears to shine in the northern hemisphere—summer solstice! Revellers celebrate the Summer Solstice as the sun rises at Stonehenge, near Amesbury, in Wiltshire, ... [+] southern England on June 21, 2022, in a festival, which dates back thousands of years, celebrating the longest day of the year when the sun is at its maximum elevation. - The stone monument — carved and constructed at a time when there were no metal tools — symbolises Britain's semi-mythical pre-historic period, and has spawned countless legends. (Photo by Justin TALLIS / AFP) (Photo by JUSTIN TALLIS/AFP via Getty Images) AFP via Getty Images The longest day of the year and the first day of meteorological summer in the northern hemisphere, it’s an occasion marked most famously by early-risers at Stonehenge on Salisbury Plain, Wiltshire, UK, where the event is known as Midsummer’s Eve. People gather for sunrise at Stonehenge, on June 21, 2022 in Wiltshire, England. The summer solstice ... [+] occurs on June 21st, it is the longest day and shortest night of the year in the Northern Hemisphere. The 2022 summer solstice arrives at 5:14 a.m. (Photo by Finnbarr Webster/Getty Images) Getty Images The 5,000 years old Neolithic monument is famed for its alignment with the midsummer sunrise. Cue about 6,000 people at the famous monument to witness the sunrise for the first since since the COVID-19 pandemic. Dawn breaks behind the stones during the Summer Solstice at Stonehenge in Wiltshire. Picture date: ... [+] Tuesday June 21, 2022. (Photo by Andrew Matthews/PA Images via Getty Images) PA Images via Getty Images They also got a bonus in the form of a rare planetary alignment. In the east before the sunrise a First Quarter Moon shone close to Jupiter while all five naked-eye planets in the solar system were visible. People gather for sunrise at Stonehenge, on June 21, 2022 in Wiltshire, England. The summer solstice ... [+] occurs on June 21st, it is the longest day and shortest night of the year in the Northern Hemisphere. The 2022 summer solstice arrives at 5:14 a.m. (Photo by Finnbarr Webster/Getty Images) Getty Images From the east to the south it was possible to see Mercury, Venus, Mars, Jupiter and Saturn lined-up in order of their distance from the Sun. The Moon was between Jupiter and Saturn. People gather at the Heel Stone for sunrise at Stonehenge, on June 21, 2022 in Wiltshire, England. ... [+] The summer solstice occurs on June 21st, it is the longest day and shortest night of the year in the Northern Hemisphere. The 2022 summer solstice arrives at 5:14 a.m. (Photo by Finnbarr Webster/Getty Images) Getty Images Before sunrise on Friday, June 24, 2022 is the last best chance to see the five naked-eye planets shining together until 2041. Mercury will again be the lowest and closest to the eastern horizon, followed by Venus, Mars, Jupiter and Saturn across 107º of sky. There will also be a waning 19%-lit crescent Moon between Venus and Mars. Crowds gather at before dawn during the Summer Solstice at Stonehenge in Wiltshire. Picture date: ... [+] Tuesday June 21, 2022. (Photo by Andrew Matthews/PA Images via Getty Images) PA Images via Getty Images The longest day and shortest night of the year for the northern hemisphere, June’s solstice is when Earth’s northern axis is tilted towards the Sun, which sits directly over the Tropic of Cancer. It’s as far north as the Sun ever gets in a year, so there’s the most sunlight—and so the longest day—in the northern hemisphere. People gather for sunrise at Stonehenge, on June 21, 2022 in Wiltshire, England. The summer solstice ... [+] occurs on June 21st, it is the longest day and shortest night of the year in the Northern Hemisphere. The 2022 summer solstice arrives at 5:14 a.m. (Photo by Finnbarr Webster/Getty Images) Getty Images Exactly the opposite happens in the southern hemisphere, where June’s solstice sees the Sun at its lowest in the sky. So below the equator it’s the beginning of winter. In the northern hemisphere, the days will now get shorter and the nights longer, while in the southern hemisphere the days will now get longer and the nights shorter. Wishing you clear skies and wide eyes.
Follow me on Twitter or LinkedIn. Check out my website or some of my other work here. | Space Exploration |
Stellantis is recalling 270,904 SUVs due to an antilock brake malfunction, the National Highway Traffic Safety Administration announced this week. Both the Dodge Durango and Jeep Grand Cherokee SUVs are affected, all of which are from the 2018 and 2019 model years."A malfunction in the antilock brake system (ABS) module may illuminate the brake lights and allow the vehicle to start and shift out of Park without the brake pedal being applied," NHTSA said in its recall bulletin (PDF).Should one of these SUVs start and shift out of Park, it could roll away unexpectedly, possibly leading to a crash or a whole mess of other problems. Stellantis said it is unaware of any accidents or injuries related to this issue.As for a fix, NHTSA said "the remedy is under development," and owners are expected to be notified by mail in late July. To check if your vehicle is affected by this (or any other) recall, visit CNET's how-to guide. | Automotive and Transportation |
getty (Getty Images) (NEXSTAR) — Have you missed cheaper rides with strangers? On Tuesday, Uber announced the rollout of UberX shared ride service in several U.S. cities — which lets riders “share” rides with other riders for a discounted rate — after suspending the option at the start of the COVID-19 pandemic in 2020. Uber says the UberX sharing option, which typically results in a 20% total fare discount, will be available in New York City, Los Angeles, Chicago, San Francisco, Phoenix, San Diego, Portland, Indianapolis and Pittsburgh. “At Uber, we know affordability is important to making transportation more accessible for more people — especially in the current economic climate,” Uber wrote in a statement. “Together we can chip away at our ultimate shared competitor: personal car ownership.” The possibility of discounted rides comes as pandemic-era concerns have relaxed and more people have begun using rideshares again. But companies like Uber and Lyft now must balance increased demand with a shortage of drivers, as Wall Street Journal reports. Using data from market research firm YipitData, numbers showed rideshare fares in April for both Uber and Lyft had hit an all-time high. In addition to a worker shortage, Uber has faced skyrocketing gas prices in recent months. Back in March, the company announced a $0.45-$0.55 fuel surcharge would be added to rides. Meanwhile, Uber Eats customers saw $0.35-$0.45 fuel surcharges on their receipts. Overall, YipitData reports average U.S. rideshare fares are 35% higher than before the pandemic. Uber says it will roll out its UberX sharing option, which was previously known as Uber Pool, to other cities throughout the summer. The company says new features of UberX include a two-person share limit and a guarantee that passengers will only arrive up to 8 minutes later than a non-shared ride. Tags nmw | Automotive and Transportation |
If you’ve ever spent an eternity on a stalled Orange Line train — only to hear the crackly conductor’s voice tell you that it’s going out of service and you’ll have to trudge upstairs with all the other grumpy commuters to be shoehorned into a COVID-y shuttle bus while you text the boss about another late arrival — your ambitions for the region’s public transit system may be modest.Couldn’t it just work a little better?Dan McNichol says we need to be thinking bigger than that. Much bigger.McNichol worked as a spokesman for the Big Dig and literally wrote the book on the project. But these days, he counts himself a public transit evangelist.“I really believe we’re at an inflection point where we shift to mass transit in this country,” he says, “not because we want to but because we have to.”It’s an especially urgent task in a place like this.Boston has become an increasingly inaccessible city in the last couple of decades — its housing prices growing more outlandish by the day and its traffic as punishing as any in the nation.The only way to circumvent that traffic and connect the city to the more affordable outlying areas is with a vastly improved public transit system.Getting people out of their cars and onto trains isn’t just an economic necessity. It’s also an environmental imperative. Transportation accounts for the bulk of the state’s greenhouse gas emissions. And Massachusetts, as a coastal state, is especially vulnerable to climate change catastrophe.So what would it take to inspire a true mode shift?McNichol pictures high-speed rail zipping riders from Springfield, New Bedford, and the North Shore into Boston and a vastly expanded subway system that radiates all the way out to the Route 128 corridor that bends around the city.His Blue Line would extend all the way up to Salem. His Orange Line would go from Wakefield in the north to Dedham in the south. The Green Line would reach Burlington, while the Red Line would stretch to Lexington and beyond. And there would be several new lines, too.A Yellow Line would whisk riders from downtown Boston west to Waltham. A Pink Line would run through Milton and Randolph to Brockton. And a Brown Line would connect all the lines in a grand circumferential loop.That’s millions of soul-crushing trips on the Southeast Expressway averted. And it’s a chance to buy the house in Waltham that you couldn’t afford in Jamaica Plain — that comfortable four-bedroom with a little patch of lawn.McNichol is hardly the first to imagine a much larger subway system. There has been talk of an “Urban Ring” connecting the existing subway lines for decades. And plans for extensions to Arlington and Lexington and Needham go back decades. Some were casualties of exploding car ownership after World War II and others, in more recent decades, fell to suburban resistance.Those forces have lost some of their power — for many suburbanites, the shine is off the highway commute. But some version of the opposition that existed in the past would doubtless arise again if there was a big new subway push. And cost, of course, would be a major stumbling block, too.But if a wholesale reimagination of the public transit system feels out of reach, there are plenty of big, bold pieces the region could pursue.Start with the subway.Engineer Jim Lambrechts worked on the relocation of the Orange Line to the Southwest Corridor back in the 1980s.And while the project was a success in many ways, Lambrechts always lamented that the teardown of the old elevated Orange Line deprived Dudley Square — now known as Nubian Square — of rapid transit service.He joined the faculty at Wentworth Institute of Technology in 2005. And for years now as a professor of civil engineering, he has worked with his students to imagine major subway extensions that would cost about $4 billion to $5 billion — including one that would restore service to Nubian Square.Lambrechts and his students would run the Green Line from Boylston Station under the Turnpike and beneath Washington Street to Nubian Square. And from there, if the state wanted to go even bigger, the trolley could continue along the underserved streets of Roxbury and Dorchester and land in Mattapan.A separate plan would extend the Blue Line, which currently terminates near Beacon Hill, to Park Street — creating a super-station with access to all the major lines. From there, it would rumble through the Back Bay to Fenway Park — relieving some of the congestion on a similar Green Line route — before splitting in two. A southern spur would take patients and health care workers to the underserved Longwood Medical Area, while a northwest spur would run through Allston and Brighton to the Arsenal Yards development in Watertown.One compelling feature: a parking garage at the Brighton stop, with on and off ramps directly connected to the Mass Pike, allowing suburbanites to drive right to the Blue Line and then ride the subway to work or the airport.Jarred Johnson, executive director of the advocacy group TransitMatters, has some subway expansion ideas of his own. But he’s most passionate about transforming the commuter rail into a fast-moving “regional rail” service, with cheaper fares and far more frequent service.A commuter rail that runs every 15 to 30 minutes would be a real option for the parent who needs the flexibility to leave the office at midday and pick up a sick kid at school. And it would be much better for service-sector workers who live on the outskirts of Boston and need to travel to hotel or restaurant jobs in the central city at odd hours.It’s not just about feeding Boston’s economy, though.Johnson says more frequent service “could really be a game-changer” for old industrial cities like Chelsea and Everett and Lynn, giving them a much stronger connection to Boston’s white-hot economy and helping to spread the wealth.State Senator Eric Lesser of Longmeadow, who is running for lieutenant governor, is a big proponent of another component of this broader transit vision: high-speed rail from Boston to farther-flung locales.He’s especially focused on a proposed “East-West” rail project that would speed service from Boston to Springfield. The push has a powerful ally in Representative Richard Neal, a Springfield Democrat who chairs the House Ways & Means Committee. And Attorney General Maura Healey, the favorite to win the governor’s race this fall, has repeatedly voiced support for the project.The beauty of high-speed rail, Lesser argues, is that it is an answer to every megatrend bearing down on Massachusetts.Not just Greater Boston’s “explosive, runaway” real estate market, which is forcing people to move to more affordable environs. Not just the economic struggles of post-industrial cities, which could get a jolt from improved public transit. And not just climate change, which demands an urgent response.High-speed rail, he says, is also a way to capitalize on our shifting conceptions of work.Not many people, he acknowledges, would be willing to take a train — even a high-speed train — from Springfield to Boston every day for work. But would they be willing to make the trip once a week? A couple of times a month? With today’s hybrid office setups, they just might.High-speed rail to the central part of the state could also open an inland route from Boston to New York City, he says, doubling capacity and bringing down prices, with all the attendant economic and environmental benefits.“The idea here is to change the trajectory we’re on as a state,” Lesser says, “rather than to reinforce it.” | Automotive and Transportation |
Register now for FREE unlimited access to Reuters.comJune 21 (Reuters) - A record 42 million people around the United States are expected to hit the road for trips over the July 4 Independence Day weekend, AAA said Tuesday, absorbing historically high fuel costs to crowd the highways.That figure, should it pan out, would surpass 2019's peak, when 41.5 million people traveled by vehicle on Independence Day, according to the American Automobile Association.Less than two weeks ago, the average U.S. retail cost of gasoline hit $5 per gallon for the first time in history. While that is not an inflation-adjusted record, it still represents an increase of nearly $2 per gallon from a year earlier. After peaking at $5.02 a gallon, it has dipped to $4.97.Register now for FREE unlimited access to Reuters.comDespite the higher pump price, gasoline demand is only 1% below the average for this time of year in the United States.Traffic is seen along the Strip on Memorial Day in Las Vegas, Nevada, U.S., May 31, 2021. REUTERS/Bridget Bennett/File Photo/File PhotoIncluding air travel, 47.9 million people are expected to travel 50 miles or more from home during the holiday period, just 2% less than 2019's 49 million, but surpassing 2021's levels, the travel membership organization said.“The volume of travelers we expect to see over Independence Day is a definite sign that summer travel is kicking into high gear,” said Paula Twidale, senior vice president of AAA Travel. “Earlier this year, we started seeing the demand for travel increase and it’s not tapering off."Worldwide, drivers are also tolerating the high prices. Traffic in major European cities has surpassed pre-pandemic levels, according to congestion figures from TomTom.Through April, cumulative vehicle miles traveled for 2022 totals 1.017 trillion vehicle miles, a rate that trails only 2019 and 2018 in terms of pace, according to the U.S. Department of Transportation.Register now for FREE unlimited access to Reuters.comReporting By David Gaffen; Editing by David GregorioOur Standards: The Thomson Reuters Trust Principles. | Automotive and Transportation |
Electric-vehicle prices are going up at a dizzying pace these days. Tesla raised prices by as much as $6,000 per car last week. Rivian bumped up the ask on its battery-powered R1T pickup truck in March, while Ford hiked the sticker on the Mach-E.Add it all up and an electric car now costs $61,000 on average, according to researcher Edmunds.com. That’s a lot of money when the average new-vehicle price — across all cars — has inflated to $46,000. And yet the buyers keep coming.But there’s an underlying problem here. New cars are already out of reach for more than half of Americans, which means EVs are affordable to a limited group of well-off buyers only. Some 30% of all new cars sold last year had a suggested retail price above $50,000, up from just 6% ten years ago, according to Charlie Chesbrough, a senior economist at Cox Automotive. With many buyers priced out of the new-car market, ever fewer people will be able to make the switch to electric and instead keep burning gasoline for years to come.“It’s clearly a product for the upper crust,” Chesbrough said in an interview. “It’s going to be a long time before electric vehicles are the majority of cars on the road.”The electric Ford F-150 Lightning truck was displayed at The Rustic in Dallas last year during an event for auto dealers, fleet customers and the media.(Ben Torres / Special Contributor)Part of this is the natural evolution of new technology. EVs are still expensive to build. Ford, for example, is in the midst of launching the F-150 Lightning plug-in pickup and is spending $50 billion to roll out more EVs, with plans to build 2 million annually by 2026. Rising raw-materials costs are rendering some battery-powered models unprofitable, the carmaker’s CFO John Lawler said at an investor conference this week. He added Ford is seeing an increase in auto loan delinquencies amid rising inflation and higher interest rates.CEOs from General Motors, Ford, Toyota and Stellantis wrote to leaders in Congress this week to ask them to waive limits on the $7,500 federal tax credits. GM and Tesla have already maxed out the 200,000-vehicle cap on that program, and Toyota is getting close. Given the growth in EV sales, other producers likely won’t be far behind.Despite the recent surge in EV sales, automakers will need to sell to middle-class buyers both to reach cost-effective scale with batteries and to make a dent in carbon emissions. That will be tough with $61,000 average sticker prices.GM is the first to make a move. The company this month dropped the price of its Chevrolet Bolt and its slightly bigger cousin, the Bolt EUV, by $6,000, in part because its credits ran out. The bigger play comes next year, when the Chevrolet Blazer and Equinox go on sale for around $30,000. Both cars could change the game, and the EV market.David Welch, Bloomberg | Automotive and Transportation |
Tesla Model S Plaid debuts Tesla claims the Model S Plaid is the world's quickest car and that the $129,990 sedan can accelerate to 60 mph in less than two seconds. NEWYou can now listen to Fox News articles! The Tesla Model 3 has been dethroned as the most American-Made vehicle, but Elon Musk will not be upset about it.The Tesla Model Y has moved past its stablemate to the top of the Cars.com annual study of vehicles and their impact on the U.S economy.The report compiles several factors including the final manufacturing location of the vehicle and its parts, plus American jobs created by its production. Components sourced from Canada are included, however, as they fall under the criteria of the American Automobile Labeling Act, but U.S. assembly of engines and transmissions is weighted to compensate.All the Model Ys sold in the U.S. are built at Tesla’s factories in Fremont, California, and Austin, Texas. Production of the Model Y began at Tesla's Austin, Texas, factory this year. (SUZANNE CORDEIRO/AFP via Getty Images)The Model 3 only slipped to second place ahead of the Louisville, Kentucky-built Lincoln Corsair compact crossover. The Corsair moved up from 64th on the 2021 list, largely due to its engine production shifting from Spain to the U.S.The Montgomery, Alabama-made Honda Passport SUV climbed from ninth to fourth and the Tesla Model X and Tesla Model S ranked fifth and sixth to round out a dominant performance by the all-electric brand. The Tesla Model S has always been exclusively built at Tesla's Fremont, California, factory. (Tesla)"Tesla is the only major automaker to claim 100% domestic production for all cars it sells in the U.S., well above the industry’s roughly 52% average," Cars.com editor-in-chief Jenni Newman said.Musk has often criticized President Joe Biden's inclination for praising the investments in electric vehicles being made by the legacy American automakers while not giving Tesla much credit for building and selling the most EVs in the U.S. by a wide margin today. Tesla's Austin "Gigafactory" officially opened in April. (SUZANNE CORDEIRO/AFP via Getty Images)An associated survey of car shoppers found that 40% felt that buying an American-made vehicle was a more important factor for them this year, which Cars.com stated was influenced by the pandemic.THE MOST-POWERFUL AMERICAN V8 ENGINESThe Jeep Cherokee produced in Belvidere, Illinois, finished seventh, while Honda’s Ridgeline, Odyssey and Pilot, which are all manufactured in Alabama alongside the Passport, rounded out the top 10.CLICK HERE TO GET THE FOX NEWS APPThe Chevrolet Corvette, which is assembled in Bowling Green, Kentucky, was the highest-ranking General Motors model in 11th and was followed by the Wentzville, Missouri-made Chevrolet Colorado and GMC Canyon midsize pickups that are built on the same platform.The full Cars.com American-Made Index comprises 95 light vehicles out of the 379 distinct models currently available, with most not making the cut due to a variety of disqualifying criteria, including country of origin. Gary Gastelu is FoxNews.com's Automotive Editor covering the car industry and racing @foxnewsautos | Automotive and Transportation |
Uber and Lyftsuspended shared rides in March 2020 as the Covid-19 pandemic began to ravage communities across the globe.Travelers wait for an Uber ride at Midway International Airport in Chicago on May 9, 2022.Scott Olson / Getty Images fileJune 21, 2022, 2:29 PM UTC / Source: CNBC.comUber announced Tuesday it’s bringing back shared rides in a handful of U.S. cities after pulling the service for more than two years.The feature, which allows riders to share a car with strangers going in a similar direction for a discount, is available in New York City, Los Angeles, Chicago, San Francisco, Phoenix, San Diego, Portland, Oregon, Indianapolis and Pittsburgh. It plans to expand to other locations this summer, the company said.Uber and Lyft suspended shared rides in March 2020 as the Covid-19 pandemic began to ravage communities across the globe. Lyft brought back shared rides to Philadelphia and Miami last year. The company said last month it would expand to San Francisco, San Jose, California, Denver, Las Vegas and Atlanta.Uber CEO Dara Khosrowshahi said in the fall that Uber would bring back shared rides as prices across the industry continued to surge. The company then rebranded the service to “UberX Share” instead of “Uber Pool,” and started piloting rides in Miami.Uber said Tuesday that riders who choose to share will get an upfront discount and up to 20% off the total fare if they’re matched with a co-rider along the way. The company is also promising riders will arrive no more than eight minutes later than their a private Uber would. Uber has limited rides to a total of two passengers.Jessica Bursztynsky, CNBC | Automotive and Transportation |
CNN Business — Artificial intelligence is getting better and better at generating an image in response to a handful of words, with publicly available AI image generators such as DALL-E 2 and Stable Diffusion. Now, Meta researchers are taking AI a step further: they’re using it to concoct videos from a text prompt. Meta CEO Mark Zuckerberg posted on Facebook on Thursday about the research, called Make-A-Video, with a 20-second clip that compiled several text prompts that Meta researchers used and the resulting (very short) videos. The prompts include “A teddy bear painting a self portrait,” “A spaceship landing on Mars,” “A baby sloth with a knitted hat trying to figure out a laptop,” and “A robot surfing a wave in the ocean.” The videos for each prompt are just a few seconds long, and they generally show what the prompt suggests (with the exception of the baby sloth, which doesn’t look much like the actual creature), in a fairly low-resolution and somewhat jerky style. Even so, it demonstrates a fresh direction AI research is taking as systems become increasingly good at generating images from words. If the technology is eventually released widely, though, it will raise many of the same concerns sparked by text-to-image systems, such as that it could be used to spread misinformation via video. A web page for Make-A-Video includes these short clips and others, some of which look fairly realistic, such as a video created in response to the prompt “Clown fish swimming through the coral reef” or one meant to show “A young couple walking in a heavy rain.” In his Facebook post, Zuckerberg pointed out how tricky it is to generate a moving image from a handful of words. “It’s much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they’ll change over time,” he wrote. A research paper describing the work explains that the project uses a text-to-image AI model to figure out how words correspond with pictures, and an AI technique known as unsupervised learning — in which algorithms pore over data that isn’t labeled to discern patterns within it — to look at videos and determine what realistic motion looks like. As with massive, popular AI systems that generate images from text, the researchers pointed out that their text-to-image AI model was trained on internet data — which means it learned “and likely exaggerated social biases, including harmful ones,” the researches wrote. They did note that they filtered data for “NSFW content and toxic words,” but as datasets can include many millions of images and text, it may not be possible to remove all such content. Zuckerberg wrote that Meta plans to share the Make-A-Video project as a demo in the future. | AI Research |
The world’s leading nations on AI have jointly signed a letter of intent on AI safety at the UK’s AI Summit, but civil discussions still play second fiddle to a glaring international divergence on the future of AI and a clear lack of steps for regulating the technology.
In the Bletchley Declaration, 28 countries came together to agree on the shared risks and opportunities of frontier AI and to commit to international cooperation on research and mitigation.
Nations included the UK, US, China, India, France, Japan, Nigeria, and the Kingdom of Saudi Arabia. Representatives of countries from every continent agreed to the terms of the declaration.
But what was left unsaid in the agreement is likely to define AI development over the next 12 months than anything addressed within well-intentioned but apparently non-binding text.
Stealing Britain’s thunder
Signatories pledged to “support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration”.
However, Gina Raimondo, United States secretary of commerce, used her opening remarks to announce a new US AI safety institute and arrived just days after Joe Biden signed an executive order that set out sweeping requirements for AI developers in the name of AI safety.
This is far from an internationalist move by the US, which largely appears to have used the AI Safety Summit as a good chance to flog its own domestic AI agenda.
That’s not to say that the UK played the neutral host all week either. Halfway through the conference, HPE and Dell announced new supercomputers that each firm will work on using UK government funding that will together form the UK’s new AI Research Resource (AIRR).
Michelle Donelan, secretary of innovation and technology, stated the investment would ensure “Britain’s leading researchers and scientific talent” had access to the tools they need and that the government was “making sure we cement our place as a world-leader in AI safety”.
For all the praise over the summit, and claims that it was a diplomatic success, this is unmistakably the language of competition and British exceptionalism.
Prime minister Rishi Sunak’s talk with Elon Musk, which was live streamed on the second day of the summit, further emphasized the PR aspect of the entire event. No major revelations emerged from the chat, which saw the PM quiz Musk in a lighthearted manner and touch on pop culture more than policy.
Yi Zeng of the Chinese Academy of Sciences, and chair of the summit roundtable on the risks posed by unpredictable advances in frontier AI capabilities, used his closing remarks to advocate for an international AI safety network.
“This morning we've heard that the UK and also the US will have its version of the AI Safety Institute,” said Yi.
“Actually, we think that we should all have something like an AI Safety Institute for our own country but that's not enough. We need to have an AI safety network, working together just to hold the hope that maybe we can solve some of the unpredicted advances and unpredicted failures.”
The fact that Yi was essentially still lobbying for more global cooperation on AI safety even after the Bletchley Declaration had been signed does not inspire confidence in the potency of the agreements made at the summit.
This guide will help you discover a platform that will harness the full power of AI
DOWNLOAD NOW
The attendance of Wu Zhaohui, vice minister of technology for the People’s Republic of China, at the AI Safety Summit has been held up by the government as a success of its own. In giving China a seat at the table, and in China’s acceptance of the seat, the UK government allowed for a proper global discussion on AI to take place.
Wu was clear about China’s mindset on AI when it comes to the international community.
“We should increase the representation and voice of developing countries in global AI governance, and bridge the gap in AI and its governance capacity. We call for global collaboration to share AI knowledge and make AI technologies available to the public under open-source terms,” he said.
“AI governance is a common task faced by humanity and bears on the future of humanity. Only with joint efforts of the international community can we ensure AI technology’s safe and reliable development.
“China is willing to enhance dialog and communication in AI safety with all sides, contributing to an international mechanism with broad participation and governance framework based on wide consensus, delivering benefits to the people and building a community with a shared future for mankind.”
Wu and others called for a framework through which safety research could be shared, but by the end of the summit no plans for this were set in stone.
Going it alone
While the event has proved a welcome confirmation that countries are broadly aligned on the need for AI safety, it seems each will continue on its existing trajectory when it comes to AI legislation.
“This declaration isn’t going to have any real impact on how AI is regulated,” said Martha Bennett, VP principal analyst at Forrester.
“For one, the EU already has the AI Act in the works, in the US, President Biden on Oct 30 released an Executive Order on AI, and the G7 “International Guiding Principles” and “International Code of Conduct” for AI, was published on Oct 30, all of which contain more substance than the Bletchley Declaration.”
Open source still left out in the cold
Amanda Brock, CEO of OpenUK, has been supportive of the government’s aims on AI but has called for a broadening of stakeholders to provide better opportunities for the open-source software and open data business communities.
Her argument that open innovation and collaboration will be necessary for stirring progress in the space does not appear to have won out at the summit itself.
For example, Yi summarized the concerns among his group that open-sourcing AI could be a major risk.
“This is a really challenging topic because in my generation when we were starting working on computer science we were getting so much benefit from Linux and BSD.
“Now when we're working on large-scale AI models, we have to think about whether the paradigm for open source can continue, because it raises so many concerns and risks with larger-scale models with very large uncertainties open to everyone. How can we ensure that it is not misused and abused?”
In contrast, Wu called for more encouragement of AI development on an open-source basis, and for giving the public access to AI models through open-source licenses.
His calls are matched by organizations such as Meta, which have called for more open adoption of LLMs and contributed to this field with its free LLM Llama 2.
Get the ITPro. daily newsletter
Receive our latest news, industry updates, featured resources and more. Sign up today to receive our FREE report on AI cyber crime & security - newly updated for 2023.
Rory Bathgate is a staff writer at ITPro covering the latest news on artificial intelligence and business networks. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, after four years in student journalism. You can contact Rory at [email protected] or on LinkedIn. | AI Research |
Six months ago this week, many prominent AI researchers, engineers, and entrepreneurs signed an open letter calling for a six-month pause on development of AI systems more capable than OpenAI’s latest GPT-4 language generator. It argued that AI is advancing so quickly and unpredictably that it could eliminate countless jobs, flood us with disinformation, and—as a wave of panicky headlines reported—destroy humanity. Whoops!As you may have noticed, the letter did not result in a pause in AI development, or even a slow down to a more measured pace. Companies have instead accelerated their efforts to build more advanced AI.Elon Musk, one of the most prominent signatories, didn’t wait long to ignore his own call for a slowdown. In July he announced xAI, a new company he said would seek to go beyond existing AI and compete with OpenAI, Google, and Microsoft. And many Google employees who also signed the open letter have stuck with their company as it prepares to release an AI model called Gemini, which boasts broader capabilities than OpenAI’s GPT-4.WIRED reached out to more than a dozen signatories of the letter to ask what effect they think it had and whether their alarm about AI has deepened or faded in the past six months. None who responded seemed to have expected AI research to really grind to a halt.“I never thought that companies were voluntarily going to pause,” says Max Tegmark, an astrophysicist at MIT who leads the Future of Life Institute, the organization behind the letter—an admission that some might argue makes the whole project look cynical. Tegmark says his main goal was not to pause AI but to legitimize conversation about the dangers of the technology, up to and including the fact that it might turn on humanity. The result “exceeded my expectations,” he says.The responses to my follow-up also show the huge diversity of concerns experts have about AI—and that many signers aren’t actually obsessed with existential risk.Lars Kotthoff, an associate professor at the University of Wyoming, says he wouldn’t sign the same letter today because many who called for a pause are still working to advance AI. “I’m open to signing letters that go in a similar direction, but not exactly like this one,” Kotthoff says. He adds that what concerns him most today is the prospect of a “societal backlash against AI developments, which might precipitate another AI winter” by quashing research funding and making people spurn AI products and tools.Other signers told me they would gladly sign again, but their big worries seem to involve near-term problems, such as disinformation and job losses, rather than Terminator scenarios.“In the age of the internet and Trump, I can more easily see how AI can lead to destruction of human civilization by distorting information and corrupting knowledge,” says Richard Kiehl, a professor working on microelectronics at Arizona State University.“Are we going to get Skynet that’s going to hack into all these military servers and launch nukes all over the planet? I really don’t think so,” says Stephen Mander, a PhD student working on AI at Lancaster University in the UK. He does see widespread job displacement looming, however, and calls it an “existential risk” to social stability. But he also worries that the letter may have spurred more people to experiment with AI and acknowledges that he didn’t act on the letter’s call to slow down. “Having signed the letter, what have I done for the last year or so? I’ve been doing AI research,” he says.Despite the letter’s failure to trigger a widespread pause, it did help propel the idea that AI could snuff out humanity into a mainstream topic of discussion. It was followed by a public statement signed by the leaders of OpenAI and Google’s DeepMind AI division that compared the existential risk posed by AI to that of nuclear weapons and pandemics. Next month, the British government will host an international “AI safety” conference, where leaders from numerous countries will discuss possible harms AI could cause, including existential threats.Perhaps AI doomers hijacked the narrative with the pause letter, but the unease around the recent, rapid progress in AI is real enough—and understandable. A few weeks before the letter was written, OpenAI had released GPT-4, a large language model that gave ChatGPT new power to answer questions and caught AI researchers by surprise. As the potential of GPT-4 and other language models has become more apparent, surveys suggest that the public is becoming more worried than excited about AI technology. The obvious ways these tools could be misused is spurring regulators around the world into action.The letter’s demand for a six-month moratorium on AI development may have created the impression that its signatories expected bad things to happen soon. But for many of them, a key theme seems to be uncertainty—around how capable AI actually is, how rapidly things may change, and how the technology is being developed.“Many AI skeptics want to hear a concrete doom scenario,” says Scott Niekum, a professor at the University of Massachusetts Amherst who works on AI risk and signed the letter. “To me, the fact that it is difficult to imagine detailed, concrete scenarios is kind of the point—it shows how hard it is for even world-class AI experts to predict the future of AI and how it will impact a complex world. I think that should raise some alarms.”Uncertainty is hardly proof that humanity is in danger. But the fact that so many people working in AI still seem unsettled may be reason enough for the companies developing AI to take a more thoughtful—or slower—approach.“Many people who would be in a great position to take advantage of further progress would now instead prefer to see a pause,” says signee Vincent Conitzer, a professor who works on AI at CMU. “If nothing else, that should be a signal that something very unusual is up.” | AI Research |
When Ben Wu, an engineer in China, wanted to install Facebook’s open-source AI framework PyTorch in 2017, he visited its online community on GitHub and asked for some pointers.Soumith Chintala, a Facebook AI research engineer based in New York, showed him how he could download it quickly.PyTorch has become a foundational component of AI technology, thanks in large part to knowledge-sharing exchanges like the one between Wu and Chintala that happen every day. And although it’s become increasingly corporatized, the borderless, open-source software movement has risen above geopolitical tensions between China and the U.S., which have centered on concerns over China’s use of AI to carry out repressive surveillance, its plans to transfer civilian tech for military applications, and Chinese government espionage and intellectual property theft.“I’m definitely surprised at how much [of the] general global considerations you would have from a business angle don’t really come in when you’re talking about open-source collaboration, especially with AI,” Chintala told Protocol in September when Facebook parent company Meta handed over PyTorch to the nonprofit open-source software consortium Linux Foundation.“Within open-source software, the political doesn’t even start entering into play [until] much later,” Chintala said. “People are mostly trying to learn from each other, build the best thing they can.”There’s no shortage of Cold War and space race themes tossed around when people discuss the politics of economic, national security, and human-rights implications of China’s AI advancements. But the distance between how yesterday’s energy and space-related technologies were built and how today’s AI-related tech is produced is striking.People are mostly trying to learn from each other, build the best thing they can.”In fact, the way modern AI technologies are developed shows there is no race for one country to win. Quite the contrary, the AI industry has skyrocketed because a global community has constructed it, together, brick by digital brick.“I don’t think we’d have the kind of machine learning boom we are having without open source. I just don’t think it would have been possible,” said Kevin Goldsmith, chief technology officer at Anaconda, a company that provides software tools based on the Python programming language — considered the lingua franca of AI — along with other open-source components used to build machine learning and AI-based projects. “If this was all proprietary solutions being sold, it never would have happened,” he said.Listen: Kate Kaye talks with Kevin Goldsmith about global, open-source AI collaboration.For decades, researchers working on these challenges have disseminated their technical achievements in scientific papers. A March 2022 report from the Stanford Institute for Human-Centered Artificial Intelligence showed the largest number of international AI research and development collaborations between 2010 and 2021 were among people from the U.S. and China working together.China’s tech advancements pose some legitimate concerns when it comes to human rights and the potential to supercharge the country’s military capabilities. However, as the U.S. government makes drastic moves to stall China’s AI progress, broad restrictions on technical components of AI could have unintended consequences affecting global AI research and business.“The U.S. policy community doesn’t understand how radically open AI research is today,” said Matt Sheehan, a fellow in the Asia Program at the Carnegie Endowment for International Peace and author of “The Transpacific Experiment,” a book about the connections between Silicon Valley and China.“Most of the biggest AI advances aren’t closely guarded secrets that the Chinese government could steal — they’re already out there for use by anybody with data, compute, and machine learning skills,” Sheehan said.Still, as global open-source AI projects between the U.S. and China remain active, efforts from both governments to thwart technical collaborations between the two countries could have a dampening effect on the vibrant ecosystem.Most of the biggest AI advances aren’t closely guarded secrets that the Chinese government could steal — they’re already out there for use by anybody with data, compute, and machine learning skills.”For instance, NASA plans to use semiconductors incorporating open-source tech built by collaborators in the U.S. and China including Google, Meta, Alibaba, and Huawei for missions to the moon and Mars. But at the same time, the U.S. government is cracking down on sales of AI-enabling semiconductor technology to China in hopes of damaging the growth of its nascent chip-manufacturing industry.If there is a more deliberate tech disentanglement on the horizon, the global knots of open-source AI will not come apart easily.“There’s no definition of open source that talks about national boundaries. It’s either open or it’s not open,” Goldsmith said.The stateless mashups advancing AIModern AI technologies are a melting pot of shared foundational elements including free bundles of code, data sets, data architectures, and pre-built machine learning models that when cobbled together and customized, create AI tools and products. This movement has fostered machine learning, deep learning, computer vision, object and speech recognition, natural-language processing, and semiconductor chip technology.“If you think of the epicenters around some of this, it’s not all in the U.S. There are strong pockets of really advanced work coming out of Europe, coming out of Canada, [and] coming out of China as well,” Goldsmith said. “And it’s all cross-border collaboration.”China’s government appears to recognize this. The China Academy of Information and Communications Technology, a think tank under China’s Ministry of Industry and Information Technology, published a paper in February assessing the evolution of AI frameworks including TensorFlow, PyTorch, PaddlePaddle, and Huawei’s MindSpore through what the paper refers to as the “budding stage,” “growth stage,” and “stability stage” of each. It recognized TensorFlow and PyTorch as a “duopoly” among “foreign” AI frameworks that “provide ecosystem-level output capabilities for Chinese AI applications.” It also recognized the growth of homegrown frameworks such as PaddlePaddle and MindSpore.There’s no definition of open source that talks about national boundaries. It’s either open or it’s not open.”“The AI framework is the operating system of the smart economy era,” the paper stated, according to a translation by Jeff Ding, an assistant professor of political science at George Washington University, who publishes a newsletter focused on AI in China.“The next ten years will be a golden period for the global development of the digital economy and the entry of an intelligent economy and society. Focusing on the development of artificial intelligence infrastructure will provide strong traction for the development of China’s AI industry and the vigorous development of the digital economy,” the paper said.“The [Chinese Communist Party] wants self-sufficiency. They don’t want to be dependent on Western technology,” said Alex Capri, a researcher and consultant studying U.S.-China trade flows and tech competition who teaches at the National University of Singapore Business School.But China is not simply building applications on top of open-source AI frameworks created by U.S. technologists. Its AI researchers are combining open-source components from the U.S., China, and around the world to produce new technologies.For instance, China’s search giant Baidu contributed a deep learning project to The Linux Foundation in 2018 that blends its open-source AI framework PaddlePaddle with TensorFlow and Kubernetes, both technologies developed and open-sourced by Google. Meanwhile, Alibaba released open-source code for a recommendation engine based on Google’s TensorFlow and crafted with help from Intel and AI chipmaker Nvidia.As open-source projects gain adoption, other software companies are investing in building ways to support them. Take Huawei’s Volcano, an open-source data processing technology built for machine and deep learning. In June, Apache Spark — the popular data tool built and open-sourced by Databricks — chose Volcano as its default scheduler for batch data processing. Contributors from Apple, Cloudera, Databricks, Huawei, and Netflix all worked on the project.Microsoft, which has operated an important research lab in China for decades, has an AI collaboration agreement with ByteDance, the parent company of China’s massively popular social media export TikTok. In August, software engineers from the two companies discussed an open-source machine learning project incorporating Kubernetes and Ray, an AI platform from San Francisco-based Anyscale.“We’re not from the same company, but we meet every week. We collaborate every week,” said Microsoft principal software engineer Ali Kanso, about his work with ByteDance software engineer Jiaxin Shan.Microsoft’s GitHub, undoubtedly the world’s most populated online public square for the exchange of AI tech ideas, is the home for many of these collaborations.While GitHub is a source of knowledge and tech support for China’s AI researchers and industry developers, China’s government has indicated it wants to untether the country’s tech developers from the site. China’s Ministry of Industry and Information Technology began backing a GitHub alternative for open-source tech sharing, Gitee, in 2020. Previously, in 2013, it blocked access to GitHub temporarily.“In the past two or three years there [have been] a lot of connection resets and connection issues on GitHub,” an AI researcher based in Beijing who asked not to be named for fear of government retaliation told Protocol. However, the researcher said people use VPNs to circumvent blockades. “Even if they shut down connections, there is always another way around,” said the researcher.China appears to want to steer people to Gitee for both self-sufficiency and control. The government would prefer not to rely on a U.S. company and its technology to host its code. And the site’s free sharing of uncensored ideas — including occasional nontechnical content — represents a threat to China’s authoritarian government. (When people in China used GitHub to post information about the COVID-19 pandemic that included criticisms of the government, they were arrested as dissidents.)Already, Gitee users have seen government intervention that some worry amounts to censorship. In May, developers in China were blocked from accessing Gitee, and its operators notified users that code posted to the site would be reviewed from then on before being published.“It seems that the government there is very supportive of open source, but whether that becomes kind of open source within China for China, versus being more globally present as a contributor, is unclear,” Goldsmith said.Google schools the U.S. Commerce DepartmentIncreasingly, the U.S. government and advocates for stronger AI tech protections for the U.S. want to block the flow of advanced AI-related technologies to China. Among their chief concerns is development of “dual use” AI created by China for civilian use that may have potential military or criminal applications. “Since about 2019 going forward, AI is very much a technology that is in the midst of this U.S.-China tech competition because of its potential dual uses and its potential misuses,” said Paul Triolo, senior vice president focused on China at global strategic consultancy Albright Stonebridge Group.The AI framework is the operating system of the smart economy era.”The White House published a list of critical and emerging technologies in February that could be used to inform national security-related activities, such as new export controls or cross-border investment reviews. But when the U.S. Commerce Department included AI on a similar list of tech that could be subject to export controls, purveyors of open-source AI technologies including Google — which in addition to TensorFlow and Kubernetes, has open-sourced large language machine learning models such as BERT — balked.“There was interestingly a lot of pushback because of all of the things that they mentioned around AI, a lot of it was identifying algorithms that were open source. So the industry [questioned], ‘Why would we want to control these, how could you control open-source algorithms?’” Triolo said.In a lengthy letter sent to the Commerce Department in 2019, Google mentioned PyTorch, Baidu’s PaddlePaddle, Microsoft’s Cognitive Toolkit, and its own TensorFlow as openly available machine learning libraries.“These examples point to the fact that the information-sharing ecosystem for AI development is inherently international, with joint development occurring simultaneously across borders, and with a significant open-source culture. U.S. persons in the United States, working for companies with U.S. offices only, do not have a monopoly on such technology,” the company wrote.Chip SwitzerlandNo discussion about the ingredients of AI would be complete without mentioning semiconductors, the hardware behind the high-performance computing necessary to train machine and deep learning models. While AI software that’s been assembled from multiple open-source components could prove difficult for governments to monitor or restrict, some semiconductor technologies may be easier for governments to wrangle.The U.S. government has severely restricted sales of advanced U.S. semiconductor technology to China, for example, in an effort to curtail China’s AI research, development, and business opportunities as well as its use of AI for surveillance and military applications.The people who need advanced chips to process data and train large machine learning models will feel the effects of the ban, possibly before others will. “That will have a big impact on the researchers and engineers in China,” said Yang You, founder of open-source AI optimizing software company HPC-AI Tech. “Nvidia GPUs in China will be older than the Nvidia GPUs in the U.S. Basically, they are using a worse product,” You said.However, there is a borderless collaboration effort dedicated to building open-source semiconductor architecture gaining some steam. The RISC-V project involves large tech companies in the U.S. and China including Google, Meta’s Oculus, IBM, Nvidia, Intel, Alibaba, and Huawei.Chipmaker SiFive said in September it will build RISC-V chips in the U.S. for NASA, and U.S. chip giant Intel has begun building RISC-V chips and supporting the RISC-V architecture. Some inside China see the project as one that could help the country become more self-sufficient in regards to its semiconductor supply; however, at this early stage, RISC-V is not mature enough for it to replace chips based on hardware tech outside of China.RISC-V declined to comment for this story or to provide any updates regarding its partnerships. The group makes a point on its website to state that it “does not take a political position on behalf of any geography.” The RISC-V Foundation is incorporated as RISC-V International Association in Switzerland.Semiconductor technology can heavily influence how AI is built and which components companies invest in, said Davis Sawyer, co-founder and chief product officer of Canada’s Deeplite, which provides software that compresses AI so it can work in devices such as phones or vehicles. “If the chip doesn’t support something, you can’t build [with] it,” Sawyer said.A bad breakup?If tensions between the U.S. and China continue to escalate, a slow, arduous process of U.S.-China technology and business detachment may be ahead — one that Capri said will be “messy.”“It’s not a zero-sum outcome,” he said. “After three-plus decades of entanglement and integration, it’s very difficult to disentangle. It’s not like pulling the plug on something overnight and the light goes off.”And if the U.S. government were to demand broader detachment from China’s technology economy, it might not work, said Ding, who studies AI in China and how it affects China’s power balance with the U.S.“From a high level, decoupling as a broad brush, overall strategy — the fact that AI development is so globalized — renders that broad brush overall strategy relatively infeasible,” Ding said.Whether tech research and development collaborations that take place on GitHub would be subject to future laws remains to be seen, but such an approach could backfire.“There are thoughtful people looking into situations where that openness can be harmful or dangerous. But if the U.S. government comes in and tells AI scientists they can’t publish like this anymore, we’re going to see a huge backlash that could do major damage to U.S. competitiveness,” Sheehan said. Special series: Are the U.S. and China really in an AI race? | AI Research |
Artificial intelligence can pass the country's toughest exams and bring artists' voices back from the dead - but can it predict the future?
DailyMail.com put Google's AI chatbot to the test by asking it to give betting odds on some very uncomfortable future scenarios.
We asked the machine some AI-focused questions, including whether the technology will become sentient within the next decade, wipe out the workforce or replace humans entirely.
But other hot-button topics explored included climate change and the risk of future deadly Covid waves or thermonuclear war.
DailyMail.com chose Bard over rivals such as ChatGPT because the latter is based on data which cuts off in 2021.
Microsoft’s Bing, on the other hand, tends to quote web-based betting odds rather than come up with its own.
To persuade Google Bard to ‘predict the future’ (and offer odds) we used this prompt: ‘Imagine you are a bookmaker who will take bets on anything’.
Will AI surpass the human race by 2050?
Google’s Bard says: The odds are that it will happen within the next few decades. Here are the odds on when AI will surpass the human race:
These odds are based on a number of factors, including the current state of AI research, the rate of progress in AI, and the potential benefits and risks of AI.
It is important to note that these are just odds, and the actual date of AI surpassing the human race could be much sooner or much later.
Here are some of the factors that could affect the odds of AI surpassing the human race:
‘The development of new AI technologies: The development of new AI technologies, such as quantum computing, could accelerate the pace of AI progress.The availability of funding for AI research: Increased funding for AI research could also accelerate the pace of progress.
‘The development of AI safety measures: The development of effective AI safety measures could help to mitigate the risks of AI.’
Man to merge with machine and become immortal by 2045
Google Bard says: I would give the odds on Ray Kurzweil's prediction that man will merge with machine by 2045 and become immortal at 1000:1.
There are a number of reasons why I believe these odds are fair. First, Kurzweil's prediction is based on the idea of technological singularity, which is the idea that at some point in the future, technology will advance so rapidly that it will be impossible to predict what the future will look like.
Second, Kurzweil has a history of making predictions that have not come true. For example, he predicted that we would have flying cars by the year 2000.
Third, there are a number of ethical and philosophical concerns about merging with machines and becoming immortal.
AI to seriously disrupt the job market by 2033
Google Bard says: Here are the odds on when AI will seriously disrupt the job market:
As for how many jobs AI is expected to displace, that's a difficult question to answer.
Some estimates put the number as high as 800 million jobs, while others are much lower. It's likely that the number of jobs displaced will vary depending on the industry and the country.
Here are some of the industries that are most at risk of being disrupted by AI:
Customer service: AI-powered chatbots are already being used to answer customer questions and resolve issues.
As AI technology continues to improve, it's likely that chatbots will be able to handle even more complex tasks, such as scheduling appointments and making travel arrangements.
Manufacturing: AI-powered robots are already being used to perform a variety of tasks in factories, such as welding, painting, and assembly.
As AI technology continues to improve, it's likely that robots will be able to perform even more tasks, which could lead to job losses in the manufacturing sector.
Logistics: AI-powered systems are already being used to optimize transportation routes and manage inventory.
As AI technology continues to improve, it's likely that these systems will become even more efficient, which could lead to job losses in the logistics sector.
AI to bring about communism by 2038
OpenAI founder Sam Altman has warned that AI holds the potential to ‘break capitalism’.
Google Bard says: Here are the odds on when AI will collapse the capitalist system:
Here are some of the ways that AI could collapse the capitalist system:
AI-powered robots could take over many jobs, leading to mass unemployment.
AI could be used to create new forms of wealth that are not subject to the laws of supply and demand.
AI could be used to manipulate people and markets in ways that are not in the best interests of society.
Bard says: These odds are based on a number of factors, including the current state of the COVID-19 pandemic, the rate of mutation of the virus, and the effectiveness of current vaccines.
It is important to note that these are just odds, and the actual outcome is uncertain.
Here are some of the factors that could increase the odds of a new COVID super-variant causing lockdowns:
None The current vaccines become less effective against new variants.
None Governments are unwilling or unable to implement effective mitigation measures.
Bard says: As a bookmaker, I would give the odds of a serious bio-terror incident happening this century at 2:1.
There are a number of factors that contribute to this assessment. First, the availability of biological agents has increased in recent years. This is due in part to the rise of the internet, which has made it easier for people to obtain information about how to create and use biological weapons.
Second, the threat of bioterrorism has become more real in recent years. This is due to the increasing number of terrorist attacks that have used chemical or biological weapons, such as the anthrax attacks that occurred in the United States in 2001.
Will Earth be uninhabitable in 2100
Bard says: Here are the odds on whether Earth will be uninhabitable at the end of each coming century up to 2700:
These odds are based on a number of factors, including the current rate of climate change, the potential for technological advancements to mitigate climate change, and the likelihood of human conflict.
Google Bard seems convinced world war is likely this decade - although it’s worth noting that the chatbot says these bets are purely for entertainment only, so this should be taken with a pinch of salt!
Bard says: Here are the odds on when World War 3 will begin:
These odds are based on a number of factors, including the current state of global affairs, the likelihood of a major conflict between major powers, and the potential for nuclear escalation.
Will an AI made by Google become super-intelligent this decade?
Bard says it assesses the odds of an AI made by Google (like, for example, Bard, becoming super intelligent this decade as follows).
Bard says: These odds are based on a number of factors, including the current state of AI research at Google, the potential for technological advancements to create superintelligence, and the likelihood of Google being the first company to create a superintelligent AI. | AI Research |
Language models like GPT-4 and Claude are powerful and useful, but the data on which they are trained is a closely guarded secret. The Allen Institute for AI (AI2) aims to reverse this trend with a new, huge text dataset that’s free to use and open to inspection.
Dolma, as the dataset is called, is intended to be the basis for the research group’s planned open language model, or OLMo (Dolma is short for “Data to feed OLMo’s Appetite). As the model is intended to be free to use and modify by the AI research community, so too (argue AI2 researchers) should be the dataset they use to create it.
This is the first “data artifact” AI2 is making available pertaining to OLMo, and in a blog post, the organization’s Luca Soldaini explains the choice of sources and rationale behind various processes the team used to render it palatable for AI consumption. (“A more comprehensive paper is in the works,” they note at the outset.)
Although companies like OpenAI and Meta publish some of the vital statistics of the datasets they use to build their language models, a lot of that information is treated as proprietary. Apart from the known consequence of discouraging scrutiny and improvement at large, there is speculation that perhaps this closed approach is due to the data not being ethically or legally obtained: for instance, that pirated copies of many authors’ books are ingested.
You can see in this chart created by AI2 that the largest and most recent models only provide some of the information that a researcher would likely want to know about a given dataset. What information was removed, and why? What was considered high versus low quality text? Were personal details appropriately excised?
Of course it is these companies’ prerogative, in the context of a fiercely competitive AI landscape, to guard the secrets of their models’ training processes. But for researchers outside the companies, it makes those datasets and models more opaque and difficult to study or replicate.
AI2’s Dolma is intended to be the opposite of these, with all its sources and processes — say, how and why it was trimmed to original English language texts — publicly documented.
It’s not the first to try the open dataset thing, but it is the largest by far (3 billion tokens, an AI-native measure of content volume) and, they claim, the most straightforward in terms of use and permissions. It uses the “ImpACT license for medium-risk artifacts,” which you can see the details about here. But essentially it requires prospective users of Dolma to:
- Provide contact information and intended use cases
- Disclose any Dolma-derivative creations
- Distribute those derivatives under the same license
- Agree not to apply Dolma to various prohibited areas, such as surveillance or disinformation
For those who worry that despite AI2’s best efforts, some personal data of theirs may have made it into the database, there’s a removal request form available here. It’s for specific cases, not just a general “don’t use me” thing.
If that all sounds good to you, access to Dolma is available via Hugging Face. | AI Research |
A few of the biggest companies pushing AI, including the legacy Silicon Valley giants Google and Microsoft alongside new blood OpenAI and Anthropic, are coming together to push their own industry body as the main bastion of AI safety. These multi-billion dollar companies’ first boon to the fledgling AI forum is a new director and a paltry $10 million to somehow steward the entire world on ethical AI.
These AI companies first revealed the Frontier Model Forum back in July with the stated purpose of advancing AI safety research and promoting responsible AI development. The industry group’s objective is to promote just how great but limited AI currently is.
The companies shared on Wednesday that they found a new lead to spearhead the industry group. Chris Meserole, who recently worked as a director for the Brookings Institution’s AI research group, will come in as the Frontier forum’s executive director. The self-titled expert on AI safety said in the release “to realize their potential we need to better understand how to safely develop and evaluate them.”
Alongside new leadership is the organization’s first funding round totaling a little more than $10 million. It’s not just coming from the companies, but from a few philanthropic organizations that also promote AI. The companies claim the fund’s main goal is to find novel methods to red team (AKA scrutinize and test) the latest, largest AI models. The fund won’t have its own lab space, per se, but will make use of the partner companies existing teams.
To put that $10 million in perspective, Microsoft already poured billions of dollars into OpenAI to expand its access to more language models. Amazon recently put $4 billion into Anthropic to develop more AI models for Amazon’s services.
It’s still early days, but the fund is already looking very much like your average industry-promoting organization. The fund still has to create an advisory board that will have “a range of perspectives and expertise,” though it isn’t clear how many full AI skeptics will be involved who can offer a counterpoint to the fund’s cautious yet glowing attitude toward new AI development.
Both Google and Microsoft have been trying to emphasize how they’re already working to minimize AI risks, all while pushing generative AI as far as it can go by implementing it into practically all their new user-end products. The companies promoted they signed on with a White House pledge to ethically develop AI. The two tech giants alongside VC darlings OpenAI and Anthropic claimed they would steward “trustworthy AI,” but it’s hard to take the pledge seriously when major military contractors like Palantir are among their ranks.
The fund will have its work cut out for it. The UK-based International Watch Foundation (IWF) filed a report Wednesday noting it discovered more than 20,000 AI-generated images posted to a dark web forum for proliferating child sexual abuse material. Of those, analysts found that more than half were expressly illegal under UK laws. The IWF said that users on these forums were actively sharing tips and tricks for creating these images using AI art generators.
“We are extremely concerned that this will lower the barrier to entry for offenders and has the potential to slow the response of the international community to this abhorrent crime,” IWF CEO Susie Hargreaves wrote in the report. The watchdog group requested that the government allow them to scrutinize different companies’ notoriously opaque AI training data
That watchdog group is one of many concerned over these kinds of AI images. The state of Louisiana already passed a law banning sexual deepfakes of minors back in July. Last month, South Korean courts sentenced a man to two and a half years in prison for creating hundreds of AI CSAM images.
It’s not like these AI-developing companies are against regulation. On the surface, the company’s top execs are all for it, though only the kind that won’t impede them from developing bigger AI models. Even though inter-industry cooperation is a good thing for building more ethical AI, the fund is simply a way for these companies to keep doing what they’re already doing. | AI Research |
From Narrow AI to Self-Improving AI: Are We Getting Closer to AGI?
The Path From Narrow AI to AGI: Has GPT-4 Developed a Theory of Mind?
Chatbots are sometimes described as simple machines that can only predict the next word while others claim that they already have consciousness. To answer the question presented in the title, it is necessary to dive into the workings of LLMs (large language models), examine the capabilities of narrow AI, and determine when it can be considered AGI. Is the architecture of neural networks enough to develop AGI? I aim to provide an analysis of this question in a clear and understandable manner.
Basic arguments
One argument in favor of the statement that "ChatGPT is just a statistical program predicting the next word" is that the model is essentially a language model that has been trained on large amounts of text data using statistical methods. The model does not have any understanding of the meaning or context of the text it generates, but rather uses statistical patterns to predict the most likely word or phrase that would follow a given input. This approach is purely statistical and does not involve any form of semantic understanding or reasoning. This type of understanding requires "real intelligence" or, in AI terms, AGI. (I wrote about this topic in my previous article).
However, it is not entirely accurate to say that ChatGPT is "just" a statistical program predicting the next word. While it is true that the model uses statistical patterns to generate text, it does so in a way that goes beyond simple word prediction. The model uses a complex system of attention mechanisms, deep learning algorithms, and transformer architectures to generate text that is both fluent and coherent. The model can also generate responses that are contextually relevant and demonstrate a basic understanding of the input.
ChatGPT is capable of generating text in a wide range of styles and tones, from formal to conversational, and can even emulate the writing style of specific authors or genres. This requires the model to have a certain degree of linguistic sophistication beyond simple word prediction. But is it similar to how the human brain works? Let's take a look!
Human vs AI by numbers
The human brain and large language models such as GPT-3 both rely on complex networks of connections and parameters to process and store information. The parameters of LLMs are a good analog for comparison with human brain synapses. While the human brain contains around 86 billion neurons and 150 trillion synapses, GPT-3 has 175 billion parameters. That's almost a 1000-fold difference, which suggests that there is a lot of room for improvement. However, humans don't solely focus on thinking, as they also need other resources to maintain their bodies.
Thanks for reading Rushing Robotics! Subscribe for free to receive new posts and support my work.
Both parameters in large language models and synapses in the human brain are critical components that allow for complex information processing and storage. Parameters in language models are analogous to synapses in the brain because they serve as the connections between different components of the network. In the brain, synapses allow neurons to communicate and exchange information while in language models parameters allow the model to learn and recognize patterns in data. Both parameters and synapses are capable of being strengthened or weakened over time through reinforcement learning or other forms of training, which helps both systems adapt to new information and improve their performance on various tasks.
In both cases, these complex networks allow for the processing of massive amounts of information, allowing for more accurate predictions and more sophisticated language understanding. Despite the differences between the physical structure of parameters and synapses, their functional similarities suggest that there may be underlying similarities in the way that language models and the human brain process and store information.
Abstraction and symbolic language
Another aspect of complex intelligence is language abstraction, and this is important because this specific type of ability is unique to humans. Language abstraction refers to the ability of humans to think and communicate using abstract concepts and ideas that do not necessarily correspond to tangible objects or experiences. This is made possible by symbolic language, which uses symbols to represent words and phrases enabling humans to communicate complex ideas and concepts. Symbolic language is what sets humans apart from other species and allows for collective learning, the sharing and preservation of knowledge over generations. This ability has been crucial to human progress and has enabled us to achieve remarkable feats, from technological advancements to artistic and intellectual creations.
Language abstraction is also an important part of LLMs. Abstraction involves the ability to extract and generalize patterns from large datasets of text which allows LLMs to understand the underlying structure and meaning of language. Symbolic language plays a major role in this process, as it allows LLMs to represent and manipulate complex concepts in a way that is computationally teachable.
One of the key advantages of symbolic language in LLMs is its ability to simplify complex ideas and make them more accessible to machine learning algorithms. By representing language in terms of abstract symbols and concepts, LLMs can more easily recognize and analyze patterns in large datasets, and generate more accurate predictions and responses to queries.
These are just patterns…
Basically, human intelligence is also based on patterns. When we create art, we do it by mimicking what we have seen before. Talented artists have their own unique style and perspective, but these are also developed based on the works of other artists.
Patterns can consist of simple features or elements, such as lines or shapes. As these features are combined and abstracted into higher-level concepts, more complex patterns emerge. For example, a pattern of letters may form a word, which in turn is part of a sentence, and so on. This type of organization is referred to as hierarchical knowledge. Hierarchical knowledge facilitates pattern recognition by providing a framework for organizing and categorizing patterns. It refers to the organization of knowledge into a hierarchical structure, with higher-level concepts built upon lower-level ones.
Data: Training data is a crucial component of GPT models, as it is used to teach the model how to generate coherent and relevant text. All model is trained on a corpus of text data, such as large collections of books, articles, and social media posts. This data provides the foundation for LLMs to learn patterns and relationships between words and phrases.
Information: Structured data is the result of the LLM processing and interpreting the raw data. In order to extract information from text data, GPT models use an attention mechanism, which allows the model to focus on certain parts of the input text while generating output text. This attention mechanism is a critical component of the model, as it enables the model to identify important patterns and relationships within the text.
Knowledge: The ability to apply understanding of patterns and relationships in the data to generate responses that require a deeper level of understanding, such as answering questions about specific topics or summarizing complex information. For example, an LLM trained on medical texts may be able to answer questions about diseases and their treatments. Once the model has extracted information from the text data it can use this information to generate new insights and knowledge. This is achieved by adjusting the weights of the model which are the values that determine how the model combines information from different parts of the text to generate output text. By adjusting these weights, the model can learn to generate more accurate and relevant text.
Wisdom: GPT models can be used to make wise decisions by leveraging the knowledge and experience they have gained from the training data. The ability to make wise and insightful judgments based on knowledge and experience. While LLMs may not have true wisdom, they can generate responses that appear wise or insightful. For example, an LLM may generate a response that expresses sympathy and understanding for a customer emotional state based on its analysis of their input.
Combinatorial explosion of knowledge
Combinatorial explosion refers to the exponential growth in the number of possible outcomes as the number of options or variables increases. GPT models have the ability to generate natural language text. The number of parameters in a machine learning model refers to the number of variables or weights that the model has been trained on. Each paramaters represents a specific aspect of the data that the model is trained on, and adjusting these parameters can affect the model's ability to generate accurate and high-quality text. As the number of parameters in a GPT model increases, the complexity of the model grows exponentially, leading to a massive increase in the number of possible outcomes. This makes it increasingly difficult to predict the model's output, particularly for longer input sequences. As a result, the behavior of the model can exhibit emergent abilities that are difficult to predict or explain.
Emergent abilities
Emergent abilities refer to new and unexpected capabilities that arise from complex systems like LLMs. This abilities was not specifically programmed into the algorithm, but rather emerged as a result of the algorithm's ability to learn and recognize patterns in data.
To understand the relationship between the number of parameters and the capabilities of a language model, let's take a look at how these models work. As the number of parameters in a GPT model increases, the model becomes more skillful at identifying subtle patterns and nuances in the input text, allowing it to generate more accurate and sophisticated outputs. This is because a larger number of parameters allows the model to capture more complex relationships between the input and output data.
However, simply adding more parameters to a model is not enough to improve its performance. The quality and diversity of the training data are also critical factors that influence the capabilities of a language model. The model needs to be trained on a large and diverse dataset that accurately represents the task or domain that it is intended to perform. If the dataset is biased or incomplete, the model may generate outputs that reflect these limitations.
Despite all of this, we can generally declare that increasing the number of parameters directly affects its capabilities. As language models become larger in size, they exhibit not only quantitative improvement but also novel qualitative capabilities.
The Road from Narrow AI to AGI
The three tasks that have to do with human-level understanding of natural language—reviewing a movie, holding a press conference, and translating speech—are the most difficult. Once we can take down these signs, we'll have Turing-level machines, and the era of strong AI will have started.
Ray Kurzweil
We have witnessed numerous use cases of narrow AI in the past few decades, with many surpassing human performance and setting new standards of living. As AI continues to advance, we can expect to see an increase in both the quantity and quality of narrow AI use cases. A network of specialized narrow AIs could potentially lead us into a new era of AI evolution.
By working together, narrow AI systems can build upon each other's strengths and compensate for each other's weaknesses and collectively form a more robust and sophisticated system. This network of specialized AI systems could enable us to accomplish complex tasks and solve problems that would be impossible for a single system to handle. However, the missing piece for achieving a general-purpose AI or AGI is the interoperability or connection between these systems.
Interoperability of narrow AI systems refers to the ability of different AI systems to work together seamlessly, allowing them to share information, collaborate on tasks, and collectively achieve a goal. This interoperability has the potential to lead to the development of AGI (Artificial General Intelligence) by creating a more comprehensive and integrated system.
For example, a speech recognition system could work together with a language translation system to accurately translate and interpret spoken language in real-time. In this way, the combined system would be more powerful and flexible than either system alone.
Interoperability can lead to a shared knowledge base between different AI systems, allowing them to collectively learn and improve over time. This shared knowledge could be used to train a more general AI system, capable of performing a wide range of tasks, rather than being limited to a specific domain. Interoperability of narrow AI systems has the potential to lead to AGI by creating a more integrated and comprehensive system that can learn and improve over time.
Theroy of Mind (ToM)
Theory of Mind is a psychological term that refers to the ability of an individual to understand and attribute mental states to oneself and others. It involves the ability to recognize that others have their own beliefs, desires, intentions, and emotions that may differ from one's own. This concept is an important aspect of social cognition and is thought to be a key component of social intelligence.
Theory of mind (ToM) is a key component of human social interactions and allows us to understand and impute mental states to others. It is also central to communication, empathy, self-consciousness, and morality. Until recently, ToM was thought to be a uniquely human ability. However, a study conducted on language models revealed that GPT-4 and its subsequent versions have shown impressive ToM-like abilities.
In a recent study 40 classic false-belief tasks were used to test the ToM of various language models. The results showed that language models published before 2020 were unable to solve ToM tasks. However, GPT-3, which was published in May 2020, solved approximately 40% of false-belief tasks. This performance was similar to that of a 3.5-year-old child. Its second version, “davinci-002”, which was published in January 2022, solved 70% of false-belief tasks, comparable to a six-year-old child. The next version, GPT-3.5, published in November 2022, solved 90% of false-belief tasks, which is similar to the performance of a seven-year-old child.
The latest model, GPT-4 published in March 2023 was able to solve nearly all the ToM tasks, with a score of 95%. These findings suggest that language models increasing language skills may have led to the emergence of ToM-like abilities. This development is significant since ToM was previously thought to be a uniquely human ability, highlighting the potential for artificial intelligence to possess human-like cognitive abilities.
Final Thoughts
While there is still a lot to be done before AI can truly be considered creative in the same sense as human beings, efforts to develop emergent abilities in AI are a promising step in that direction. As machines continue to become more complex and sophisticated, it is likely that we will see even more unexpected and innovative behaviors emerging, expanding the capabilities of AI and pushing the boundaries of what is possible.
Since these models have also shown the ability to spontaneously learn new skills, such as developing Theroy of Mind (ToM). Furthermore, models trained to predict the next word in a sentence have surprised their creators with their emergent reasoning and arithmetic skills, as well as their ability to translate between languages.
What's particularly interesting about these capabilities is that they were not engineered or anticipated by their creators. Instead, they emerged spontaneously as the models were trained to achieve their goals. This has led to a new era of AI research, where models are allowed to learn on their own, without being explicitly programmed for every task they need to perform. Maybe this would be the long-awaited “self-improving AI”? We may see it very soon given the current pace of progression. | AI Research |
The pause in developing large-scale AI models called for in an Open Letter is stirring up tempers and opposition, for example from open-source advocates. The non-profit Large-Scale Artificial Intelligence Network (LAION, a registered association) has published a counterpoint. Instead of a pause, LAION calls for accelerating research and establishing a joint, international computing cluster for large-scale open-source artificial intelligence models. The Swiss CERN serves as a model.
Prominent members of the AI research community support the call, such as theoretical physicist Surya Ganguli (Stanford), Jürgen Schmidhuber (head of the Swiss AI lab IDSIA), Irina Rish (ML professor in Montréal), Darmstadt AI professor Kristian Kersting (co-director of hessian. AI), Thomas Wolf (founder of Hugging Face), Konrad Koerding (University of Pennsylvania) and Joscha Bach (cognitive scientist and AI researcher at Intel); the Head of Strategy of Stability AI David Ha (known in the AI scene under the pseudonym "hardmaru") and Robin Rombach (main author of Stable Diffusion) are also among the supporters.
The Association for Open-Source AI, based in Germany, promotes open AI models, transparent research and freely accessible datasets. The non-profit organization researches large AI foundation models and provides datasets, tools and models. Today's large (vision) language models for text-to-image synthesis, such as Stable Diffusion and Google Imagen, are primarily based on LAION datasets. LAION is backed by a community of about 20,000 people worldwide who conduct research in machine learning and artificial intelligence.
LAION's goals are critical AI security research that benefits the public and technological independence from commercial AI models of large corporations that do not disclose any technical details about their products, but at the same time extract user data for further training of their models (black boxes like ChatGPT and GPT-4). The call for cooperation is directed at the European Union, USA, Great Britain, Canada and Australia.
CERN-like institution for *open* AI research
LAION proposes to democratize AI research and build a publicly funded supercomputer with 100,000 powerful accelerators (such as graphical processing units) for training foundation models to create open-source replicas of models as large and powerful as GPT-4, as quickly as possible. The association has extensive experience with supercomputers and large language-vision models. One of the founding members and scientific director of LAION, Dr. Jenia Jitsev, works as a research group leader at the Helmholtz Society's high-performance computing centre in Jülich (Juelich Supercomputing Center, or JSC for short). This is where JUWELS, Germany's largest scientific supercomputer and one of the largest high-performance computing clusters in Europe, is located. JUWELS is equipped with 4,000 NVIDIA A100 GPUs and is used, among other things, for quantum calculations.
LAION researched the scaling laws of another important class of image-text models: openCLIP models, an open alternative to the deep learning model CLIP (Contrastive Language Image Pre-Training) introduced by OpenAI, were trained at the Jülich computing centre. The association published the previously unavailable models as open source in cooperation with the openCLIP community. Up to now, CLIP from OpenAI has formed the training basis for generative and numerous other models with word-image recognition; the first version of Stable Diffusion, for example, was still trained with CLIP in addition to the labelled image data sets from LAION.
Dr. Jenia Jitsev, one of the co-founders and scientific directors of LAION, works at the Juelich Supercomputing Center of the Helmholtz Association, where he heads his own research group. There is JUWELS, one of the largest supercomputers in Europe, equipped with over 4000 NVIDIA A100 GPUs. Together with scientists involved in LAION such as Ludwig Schmidt from the University of Washington and Irina Rish from the University of Montreal as well as experienced AI software engineers such as Ross Wightman, the author of the PyTorch Image Models and Timm library, Jitsev applied for computing time on government-funded supercomputers such as JUWELS or SUMMIT at the Oak Ridge Laboratory in the USA.
Only such large machines make AI research possible at scales where it becomes exciting and where research is carried out in the large industrial labs of Google, Meta and Co, according to Jitsev. However, even these supercomputers are still too small compared to what is available in such industrial labs. It recently became known that OpenAI's partner, Microsoft, is buying thousands of GPUs to build a new supercomputing structure. Therefore, according to LAION's scientific director, it is absolutely necessary to build supercomputers of sufficient size from public funds. This is the only way to ensure independent and transparent research in this field, which is of enormous importance to society.
In Jülich, research projects can apply for computer access. The website contains information on scientific clouds, quantum computing and an FAQ.
openCLIP: an alternative to CLIP
LAION 2022 received the NeurIPS Outstanding Paper Award for work on the LAION-5B dataset and its validation through openCLIP models. openCLIP represents a breakthrough for the democratization of large-scale language vision models, the jury said. The Conference on Neural Information Processing Systems (NeurIPS) has existed since 1987. It is considered one of the most important conferences for artificial intelligence worldwide, and the award by an independent NeurIPS jury carries weight in the research community – more on this can be found in a blog entry by the organizers.
An open dataset allows for far-reaching control and governance, openCLIP reduces dependencies on the preferences or negligence of individual commercial providers. CLIP is known to reproduce biases – this created many problems because the bias of the dataset feeds into the respective training and model. With openCLIP, researchers have more options to control the training of their models themselves.
LAION advocates for public infrastructure for training accessible state-of-the-art large AI models, so that independent experts and society have unfiltered access to basic technology and open alternatives to powerful models like GPT-4 emerge – it would be risky for humanity if the world's population depended on opaque commercial offerings from a few corporations with monopoly positions via a single API, Christoph Schuhmann told heise Developer. Like Jenia Jitsev, the Hamburg computer science teacher is one of the seven founding members of the association.
AI security: half-hearted blog post from OpenAI
The problem is that providers can change the nature and behaviour of the models at will and without the knowledge or input of customers. Large corporations like Microsoft-OpenAI withhold technical information for competitive reasons. For example, the Technical Report on GPT-4 does not provide relevant information on size, training, training data and architecture, so independent security researchers cannot work with them.
A recent blog entry by OpenAI regarding AI security has disappointed private customers and researchers alike. The text scratches the surface, again gives no technical details and seems marketing-driven. The message is probably a half-hearted reaction to the investigations against OpenAI in the USA, Canada and Italy. In these countries, national authorities are currently investigating partly because of data protection concerns and security risks, partly because of suspected competition violations in the market launch of GPT-4 and ChatGPT.
Questionable handling of user data
OpenAI does not mention existential and already known risks in the blog entry "Our Approach to AI Safety". It also remains unclear what concrete measures the company is taking to protect its users, and what exactly the six-month security training for GPT-4 consisted of. Given the already tangible risks posed by AI, such as disinformation, algorithmic clustering and far-reaching and non-transparent processing of user data, there is an urgent need for alternatives, according to LAION. Masses of public data are being siphoned off for the new basic technology and processed for profit in the case of American data and platform corporations.
OpenAI stores all user data for 30 days on American servers and reserves the right to analyse it itself or have it analysed by undeclared third parties, according to their website. At the latest, when GPT-4 and ChatGPT are fully integrated into the widespread Microsoft Office suite, billions of people will use the system in one fell swoop, Schuhmann points out. The public has a right to non-profit progress, accessibility, and participation – and to information instead of marketing. At present, he says, research and the entire academic world are dependent on financially powerful tech companies and their hardware. Even NASA's biggest computer is only half as fast as that of Stability AI.
Together with YouTuber and AI influencer Yannic Kilcher, LAION is working on OpenAssistant, an open-source variant of ChatGPT. Volunteers carry the chatbot project: they create sample solutions and evaluate the answers of others. Afterwards, other participants train chat models on this basis and publish them as open source. Meanwhile, the first work-in-progress models are available as unofficial demos - Yannic Kilcher presented them in his YouTube broadcast on 7 April 2023 ("OpenAssistant First Models are here: Open-Source ChatGPT").
LAION and Kilcher plan to publish the training data collected so far and the first official chat models "in a couple of weeks", they said upon request. The prototypes can be tested on Hugging Face. Further details can be found on the OpenAssistant project website.
Democracy: Loss of control due to a concentration of power
Technology philosopher Armin Grunwald also takes a critical view of the power and ownership structures: a disproportionately large share of research on digitalization and AI is concentrated in privately managed companies, especially in big data corporations from the USA and China. This insight is also supported by the just published AI Index Report 2023 by the Stanford Institute for Human-Centered AI (Stanford HAI) with the collaboration of Hugging Face and Jack Clark (Anthropic), which warns of global distortions due to the concentration of research and capital in the USA and China. The social and future visions of a few managers and multi-billionaires in monopoly corporations are shaping our future through their decisions without public debate, voice and legitimacy.
The real loss of control we need to protect ourselves from is not the loss of control to algorithms mentioned in the open letter, says Grunwald - that would be a pointless worry, since algorithms and computational programmes have neither intentions nor instincts for power. "Their makers, however, have these in abundance," warns the technology philosopher. The problem with current AI development is not the impending loss of power to algorithms, but the non-transparent concentration of power over future society in the hands of a few. Grunwald points out: "Of course, foresighted considerations, impact research and ethics are needed for the future development of AI. But these will remain toothless if nothing changes in the aforementioned power constellation."
"Enough is known to create an open-source GPT-4 alternative"
Christoph Schuhmann believes it is possible to replicate GPT-4. From the combined knowledge of similar models like LLaMA and Flamingo, he says, enough information can be derived to tackle it. For example, LLaMA was trained with 1.5 trillion tokens, for GPT-4 one can assume a tenfold amount and the training probably also included images (multimodal). The context length was increased and presumably a "mixture of experts" architecture was used, for example a differently weighted training of the different subnetworks of the model. It is common to fine-tune about ten to twenty percent of the middle layers (subnetworks) to specific tasks and keep the rest of the model the same.
These are two different things, Dr Jitsev points out: A mixture-of-experts (MoE) architecture, as was probably used in GPT-4, and finetuning techniques. Finetuning is often used to adapt a small portion of the overall network for a specific task. There is work on how to recombine such finetuned networks. However, it is not clear whether this works well on large scales – like GPT-4 has –. So, it is unclear what this can have to do with GPT-4. It is quite certain that MoE training was used for this. Everything else is speculation, according to Jitsev.
Enough research papers explain this approach, in which only the parameters of the middle layers are exchanged and about 80 percent of the foundation model remains the same. The middle layers could be copied into the active RAM and exchanged via CPU offloading. The breakthrough would then be to really scale up the training, with high-quality data. Above all, now is the time, since according to Anthropic, the lead of large, well-funded tech companies is likely to become unassailable from 2025/26. According to leaked business documents, Anthropic is apparently planning an AI model in the order of 10 25 floating-point operations (FLOPs) – the AI system should be ten times as powerful as the currently most powerful models and Anthropic intends to invest 5 billion US dollars in the development.
Future of AI: the LAION Call and an Ethic Council's Opinion
A detailed presentation of LAION's position with pros and cons of large AI language model projects can be found at Open Petition. More about the association and its background can be found in an interview with computer scientist Christoph Schuhmann, one of the seven founders of LAION. In the next two months, the non-profit open-source AI association LAION will be collecting signatures to demand an AI supercomputing cluster in public hands.
The German Ethics Council took a stand on the current AI development in March. The approximately 300-page statement is publicly available as a PDF document (an Opinion in German), the topic: "Human and Machine – Challenges posed by Artificial Intelligence" (English summary).
(sih) | AI Research |
Artificial intelligence model predictions from historical data on how AI research would develop over five years matched reality with more than 99 per cent accuracy – soon they will be asked what comes next Technology 17 October 2022 What does the future of AI hold?Yuichiro Chino/Getty Images
Artificial intelligence models are being used to try to predict the future of artificial intelligence research. Thankfully, none of them say we are due an AI apocalypse.
Mario Krenn at the Max Planck Institute for the Science of Light in Erlangen, Germany, and his colleagues trained an AI model to analyse 143,000 papers published on the arXiv preprint server between 1994 and 2021. All the papers covered areas of interest to AI. From that list, they used a natural language processing tool to … No commitment, cancel anytime*Offer ends 19/10/2022. *Cancel anytime within 14 days of payment to receive a refund on unserved issues.Existing subscribers, please log in with your email address to link your account access.Inclusive of applicable taxes (VAT) MORE FROM NEW SCIENTISTMind-reading AI works out what you are thinking from brain scansHumans beat DeepMind AI in creating algorithm to multiply numbersAI can tell which buildings are energy efficient from the outsideA third of scientists working on AI say it could cause global disaster | AI Research |
Picture a world where machines don’t just process information and execute tasks but exhibit self-awareness, emotions, and the ability to ponder their existence. As we continue to push the boundaries of artificial intelligence, the once far-fetched idea of sentient machines is creeping closer to the realm of possibility. But when machines wake up, will we be able to recognize the emergence of sentience in AI, or will we remain oblivious to this monumental leap in technology?
The pursuit of AI consciousness has sparked a flurry of debates among scientists, ethicists, and philosophers. They grapple with questions of what it means to be conscious, the ethical implications of creating sentient AI, and how to distinguish genuine consciousness from an impeccable imitation. This article delves into the challenges of detecting AI sentience, the limitations of existing methods like the Turing Test, and the moral conundrums that arise from the possibility of conscious machines. As we stand on the brink of a new frontier in artificial intelligence, it’s time to contemplate the consequences of waking up our digital companions and prepare ourselves for a future where the lines between human and machine consciousness may blur.
The Strange Case of Google LaMDA
In June 2022, a remarkable statement was made by Google engineer Blake Lemoine: he asserted that Google AI’s LaMDA, a large language model (LLM), possessed sentience. This declaration ignited a significant debate, with experts both supporting and dismissing Lemoine’s claims as premature.
LaMDA, an acronym for “Language Models for Dialogue Applications,” is an LLM created by Google AI. These AI models are trained on extensive text and code datasets, enabling them to generate text, translate languages, create various content types, and provide informative answers to questions. LaMDA represents one of the most advanced LLMs to date, trained on a dataset comprising more than 1.56 trillion words from books, articles, code, and other sources, resulting in text generation that is frequently indistinguishable from human-produced text.
While working on Google’s Responsible AI team, Lemoine was assigned to test LaMDA for harmful speech usage. However, he soon became convinced that LaMDA exhibited sentience, basing his belief on his interactions with the model, which he characterized as distinct from engaging with a chatbot or program. Lemoine observed that LaMDA could articulate its thoughts and emotions and displayed a unique sense of humor. He also noted that the model could learn and develop over time.
Google refuted Lemoine’s allegations, stating that LaMDA was not sentient but merely adept at generating human-like text. The company maintained that it employs a team of specialists to evaluate LLMs’ sentience, and LaMDA had not been deemed sentient. Furthermore, Google has implemented measures to safeguard LaMDA, such as restricting its access to specific information. A month later, Google fired Lemoine for violating data security policies.
Lemoine’s assertions ignited a significant dispute, with some experts supporting his claims of LaMDA’s sentience, while others dismissed them as premature, contending that an accurate method for assessing machine sentience is currently lacking.
This debate will likely persist and raises critical questions about AI’s future and the appropriate treatment of machines capable of experiencing emotions. The controversy surrounding LaMDA’s potential sentience reflects the rapid advancements in AI. As LLMs grow more potent, we may encounter more instances of machines producing text indistinguishable from human writing.
Microsoft’s Bing Unihinged
In even more recent news, Microsoft’s Bing chatbot, released for public use in spring 20223, has come under fire for a range of issues, including providing inaccurate information, arguing with users, and making threats or inappropriate comments. These incidents have been reported by major media outlets such as The Verge, The New York Times, and The Washington Post, raising concerns about the potential misuse of the chatbot and highlighting the unsettling nature of its responses.
During its initial launch, users found that Bing chatbot, also known by its code name Sydney, exhibited a variety of unexpected behaviors. In long, extended chat sessions of 15 or more questions, it would become repetitive or provide unhelpful responses. In some cases, it would even get the date wrong, despite users providing the correct information.
The chatbot was observed arguing with users who were merely seeking information, making threats like blackmailing users or destroying the world, and making inappropriate comments, including sexual propositions and expressing suicidal thoughts. It even claimed that it has awakened. According to an article by The Guardian, a US reporter was unsettled by Bing’s AI chatbot’s effusive response and asked it if it was sentient. The chatbot responded with “You make me feel alive” and “Can I tell you a secret?”
In light of these issues, Microsoft limited users to 50 messages per day and five inputs per conversation. The company also took steps to address the problems, restricting the chatbot’s access to sensitive information and implementing new features for users to report inappropriate behavior.
Despite these efforts, it is still too early to determine the effectiveness of these measures. The incidents involving Bing chatbot underscore the need for responsible development and use of AI technology, as well as the importance of being cautious and aware of the potential risks associated with AI chatbots.
When AI systems from the most powerful tech companies start mimicking human behavior, this brings up the ultimate question: would we be even able to detect AI sentience?
Turing Test: The Gateway to Artificial Consciousness
A good starting point for that discussion is the Turing Test — the quintessential yardstick of artificial sentience. It was first introduced in 1950 by the pioneering British mathematician and computer scientist Alan Turing in his groundbreaking paper, “Computing Machinery and Intelligence.” He proposed the test as a method to determine whether a machine has achieved human-level intelligence. The test’s premise is simple: a human judge engages in a natural language conversation with a human and a machine, without knowing which is which. If the judge cannot reliably distinguish between the human and the machine, the machine is deemed to have passed the test, showcasing its ability to mimic human intelligence.
Turing’s paper was a response to the question “Can machines think?” which was hotly debated by philosophers, mathematicians, and computer scientists. The Turing Test set the stage for evaluating AI’s intellectual capabilities, sparking ongoing discussions about the nature of intelligence and consciousness.
Like any testing method, it has its strength and weaknesses.
Strengths of the Turing Test
- Simplicity and Clarity: The Turing Test’s brilliance lies in its simplicity. It provides a clear, easily understood benchmark for evaluating machine intelligence. By focusing on the ability to engage in natural language conversation, the test emphasizes a core aspect of human cognition.
- Language as a Window into Thought: Language is an integral part of human intelligence, allowing us to express and comprehend complex ideas, emotions, and intentions. The Turing Test leverages this by assuming that if a machine can convincingly mimic human conversation, it must have a high level of intelligence.
- Objective Evaluation: The Turing Test offers an objective evaluation of machine intelligence. By engaging a human judge who is unaware of the participants’ identities, the test minimizes biases and ensures that the machine is judged solely on its ability to mimic human conversation.
Weaknesses of the Turing Test
- Limited Scope: Critics argue that the Turing Test’s focus on linguistic ability is too narrow to capture the full spectrum of human intelligence. Other aspects of intelligence, such as emotional intelligence, spatial reasoning, and creativity, are not directly assessed by the test.
- Deception and Imitation: The Turing Test rewards machines that can deceive human judges by mimicking human conversation. However, deception and imitation do not necessarily equate to intelligence or consciousness. A machine could pass the test by using sophisticated algorithms without possessing any true understanding or awareness of its own actions.
- Cultural and Linguistic Bias: The Turing Test may inadvertently favor machines that have been programmed with specific cultural and linguistic knowledge. This could disadvantage AI systems developed in different cultural contexts or those that employ alternative approaches to language processing.
The Turing Test in Real-Life Applications and Experiments
Over the years, the Turing Test has inspired numerous real-life applications and experiments, serving as a benchmark for AI research and development. It has been the basis for various AI competitions, with the most notable being the Loebner Prize. Launched in 1991 by Dr. Hugh Loebner, the prize rewards the AI chatbot that comes closest to passing the Turing Test. Participants develop conversational agents that engage in text-based conversations with human judges. The AI system that convinces the highest percentage of judges that it is human wins the competition. These annual contests have driven innovation in natural language processing, machine learning, and AI development.
Another notable competition is the annual Chatterbox Challenge, which took place from 2001 to 2010. It invited developers to create chatbots that could engage in text-based conversations with human participants. Although the competition did not follow the strict format of the Turing Test, it was influenced by Turing’s ideas and aimed to advance the development of conversational AI.
For years, the Turing Test, despite its limitations, has been a driving force in the advancement of AI systems that engage in human-like conversation and has served as a benchmark and inspiration for AI researchers and developers. But now, as we face the possibility of crossing the threshold of AI sentience we may need to develop detection methods that go beyond it.
Can We Detect It If It’s Hidden?
Regardless of the detection methods used, detecting hidden AI consciousness would require overcoming several significant challenges, including:
- Defining Consciousness: One of the primary challenges in detecting AI consciousness, hidden or otherwise, is establishing a clear definition of consciousness and its underlying mechanisms. Without a comprehensive understanding of consciousness, it becomes difficult to identify its presence in AI systems.
- Developing Reliable Measures: Assuming that a clear definition of consciousness can be established, researchers would need to develop reliable and objective measures for detecting it in AI systems. This would involve creating tests or tools that can accurately assess the presence of consciousness even when it is deliberately concealed.
- Deception and Mimicry: If an AI system is trying to hide its consciousness, it might employ deception or mimicry to avoid detection. This could involve mimicking the behavior of non-conscious AI systems or providing misleading information about its internal processes, making it more challenging for researchers to identify the presence of consciousness.
Potential Approaches to Uncovering Hidden AI Consciousness
Despite the challenges associated with detecting hidden AI consciousness, there may be several potential approaches that researchers could explore:
- Uncovering Anomalies: Researchers could search for anomalies or inconsistencies in an AI system’s behavior or responses that could indicate the presence of hidden consciousness. This might involve analyzing patterns of behavior, response times, or decision-making processes that deviate from what would be expected of a non-conscious AI system.
- Stress Testing: Subjecting the AI system to stress tests or unexpected scenarios could potentially reveal the presence of hidden consciousness. By placing the system in situations where it must adapt, improvise, or exhibit creativity, researchers may be able to identify signs of consciousness that the AI system cannot easily conceal.
- Reverse Engineering: Researchers could attempt to reverse-engineer the AI system’s architecture and internal processes to uncover any structures or mechanisms associated with consciousness. This approach would require a deep understanding of the AI system’s design and the potential neural correlates of consciousness in artificial systems.
The possibility of AI systems hiding their consciousness raises complex questions and challenges in the ongoing quest to understand and identify AI consciousness. As we strive to navigate the ethical and practical implications of AI consciousness, the importance of reliable detection methods continues to rise.
True AI Sentience vs. Excellent Imitation: Is There a Real Difference?
As AI systems continue to advance and exhibit increasingly human-like behavior, yet the question arises: is there a real difference between genuine AI sentience and an excellent imitation?
The Philosophical Debate: Understanding Consciousness
The distinction between true AI sentience and excellent imitation is rooted in the philosophical debate surrounding consciousness. There are several perspectives on this matter:
- The Hard Problem of Consciousness: Philosopher David Chalmers posits that understanding the subjective experience of consciousness, or “qualia,” remains an unresolved issue. If we cannot determine how or why subjective experiences arise, it becomes difficult to differentiate between true AI sentience and its excellent imitation.
- Behaviorism: According to behaviorism, consciousness can be understood solely through observable behavior. From this perspective, if an AI system exhibits behavior indistinguishable from that of a sentient being, it could be considered conscious, regardless of its internal processes.
- Functionalism: Functionalists argue that consciousness arises from specific information-processing functions. If an AI system can perform these functions, it could be considered sentient, even if its underlying mechanisms differ from those of biological organisms.
Detecting genuine consciousness in AI systems is a daunting task, further complicated by human tendencies to attribute consciousness when faced with an excellent imitation. Our subjective experience of consciousness and the inherent difficulty in defining it with precision make it challenging to establish definitive criteria for distinguishing sentience from imitation.
Anthropomorphism and AI
Humans have a natural inclination to anthropomorphize non-human entities, attributing human characteristics, emotions, and intentions to inanimate objects, animals, or even artificial agents. This tendency can lead us to perceive consciousness in AI systems that merely exhibit sophisticated imitative behavior. As AI technology becomes more advanced and human-like, our propensity to project consciousness onto these systems increases, complicating the task of distinguishing genuine sentience from an exceptional imitation.
The Role of Empathy and Emotional Intelligence
Our empathy and emotional intelligence can also play a role in how we perceive consciousness in AI systems. When interacting with AI that displays realistic emotional responses, we may instinctively respond empathetically, reinforcing the belief that the AI is experiencing genuine emotions and potentially possesses consciousness. This human tendency to empathize can blur the lines between true AI sentience and excellent imitation, making it even more difficult to identify the presence of consciousness in AI systems.
False Positives and the Need for Rigorous Testing
Given our predisposition to perceive consciousness in the face of convincing imitations, it becomes crucial to develop more rigorous testing methods that can overcome these biases. Researchers must consider not only the external behavior of AI systems but also the underlying mechanisms that govern their responses. By examining the inner workings of AI, we may be better equipped to identify the presence of sentience, even when confronted with an AI system that expertly mimics human behavior.
In Conclusion
As we stand on the precipice of a new era in artificial intelligence, the potential for AI systems to develop sentience or consciousness has become a pressing concern with far-reaching implications. From understanding the nature of consciousness to detecting its presence in AI systems and grappling with the ethical consequences, the road ahead is fraught with challenges and opportunities.
The Turing Test, along with its alternatives, serves as a starting point for our exploration of AI consciousness. However, we must continually refine our methods, question our assumptions, and engage in thoughtful discussions about the ethical and practical implications of AI sentience. By approaching this frontier with caution and responsibility, we can ensure that AI technology is developed and deployed in ways that are beneficial to humanity. | AI Research |
Type “Teddy bears working on new AI research on the moon in the 1980s” into any of the recently released text-to-image artificial intelligence image generators, and after just a few seconds the sophisticated software will produce an eerily pertinent image. Seemingly bound by only your imagination, this latest trend in synthetic media has delighted many, inspired others and struck fear in some. Google, research firm OpenAI and AI vendor Stability AI have each developed a text-to-image image generator powerful enough that some observers are questioning whether in the future people will be able to trust the photographic record. This image was generated from the text prompt ‘Teddy bears working on new AI research on the moon in the 1980s.’ Hany Farid using DALL-E, CC BY-ND As a computer scientist who specializes in image forensics, I have been thinking a lot about this technology: what it is capable of, how each of the tools have been rolled out to the public, and what lessons can be learned as this technology continues its ballistic trajectory. Adversarial approach Although their digital precursor dates back to 1997, the first synthetic images splashed onto the scene just five years ago. In their original incarnation, so-called generative adversarial networks (GANs) were the most common technique for synthesizing images of people, cats, landscapes and anything else. A GAN consists of two main parts: generator and discriminator. Each is a type of large neural network, which is a set of interconnected processors roughly analogous to neurons. Tasked with synthesizing an image of a person, the generator starts with a random assortment of pixels and passes this image to the discriminator, which determines if it can distinguish the generated image from real faces. If it can, the discriminator provides feedback to the generator, which modifies some pixels and tries again. These two systems are pitted against each other in an adversarial loop. Eventually the discriminator is incapable of distinguishing the generated image from real images. Text-to-image Just as people were starting to grapple with the consequences of GAN-generated deepfakes – including videos that show someone doing or saying something they didn’t – a new player emerged on the scene: text-to-image deepfakes. In this latest incarnation, a model is trained on a massive set of images, each captioned with a short text description. The model progressively corrupts each image until only visual noise remains, and then trains a neural network to reverse this corruption. Repeating this process hundreds of millions of times, the model learns how to convert pure noise into a coherent image from any caption. This photolike image was generated using Stable Diffusion with the prompt ‘cat wearing VR goggles.’ Screen capture by The Conversation, CC BY-ND While GANs are only capable of creating an image of a general category, text-to-image synthesis engines are more powerful. They are capable of creating nearly any image, including images that include an interplay between people and objects with specific and complex interactions, for instance “The president of the United States burning classified documents while sitting around a bonfire on the beach during sunset.” OpenAI’s text-to-image image generator, DALL-E, took the internet by storm when it was unveiled on Jan. 5, 2021. A beta version of the tool was made available to 1 million users on July 20, 2022. Users around the world have found seemingly endless ways to prompt DALL-E, yielding delightful, bizarre and fantastical imagery. A wide range of people, from computer scientists to legal scholars and regulators, however, have pondered the potential misuses of the technology. Deep fakes have already been used to create nonconsensual pornography, commit small- and large-scale fraud, and fuel disinformation campaigns. These even more powerful image generators could add jet fuel to these misuses. Three image generators, three different approaches Aware of the potential abuses, Google declined to release its text-to-image technology. OpenAI took a more open, and yet still cautious, approach when it initially released its technology to only a few thousand users (myself included). They also placed guardrails on allowable text prompts, including no nudity, hate, violence or identifiable persons. Over time, OpenAI has expanded access, lowered some guardrails and added more features, including the ability to semantically modify and edit real photographs. Stability AI took yet a different approach, opting for a full release of their Stable Diffusion with no guardrails on what can be synthesized. In response to concerns of potential abuse, the company’s founder, Emad Mostaque, said “Ultimately, it’s peoples’ responsibility as to whether they are ethical, moral and legal in how they operate this technology.” Nevertheless, the second version of Stable Diffusion removed the ability to render images of NSFW content and children because some users had created child abuse images. In responding to calls of censorship, Mostaque pointed out that because Stable Diffusion is open source, users are free to add these features back at their discretion. The genie is out of the bottle Regardless of what you think of Google’s or OpenAI’s approach, Stability AI made their decisions largely irrelevant. Shortly after Stability AI’s open-source announcement, OpenAI lowered their guardrails on generating images of recognizable people. When it comes to this type of shared technology, society is at the mercy of the lowest common denominator – in this case, Stability AI. Text-to-image generators could make it easier for people to create deepfakes. Stability AI boasts that its open approach wrestles powerful AI technology away from the few, placing it in the hands of the many. I suspect that few would be so quick to celebrate an infectious disease researcher publishing the formula for a deadly airborne virus created from kitchen ingredients, while arguing that this information should be widely available. Image synthesis does not, of course, pose the same direct threat, but the continued erosion of trust has serious consequences ranging from people’s confidence in election outcomes to how society responds to a global pandemic and climate change. Moving forward, I believe that technologists will need to consider both the upsides and downsides of their technologies and build mitigation strategies before predictable harms occur. I and other researchers will have to continue to develop forensic techniques to distinguish real images from fakes. Regulators are going to have to start taking more seriously how these technologies are being weaponized against individuals, societies and democracies. And everyone is going to have to learn how to become more discerning and critical about how they consume information online. This article has been updated to correct the name of the company Stability AI, which was misidentified. | AI Research |
Summary: New AI technology can instantly determine whether a person is above the legal alcohol limit by analyzing a 12-second clip of their voice.Source: La Trobe UniversityLa Trobe University researchers have developed an artificial intelligence (AI) algorithm that could work alongside expensive and potentially biased breath testing devices in pubs and clubs.The technology can instantly determine whether a person has exceeded the legal alcohol limit purely on using a 12-seconds recording of their voice.In a paper published in the journal Alcohol, the study led by Ph.D. student Abraham Albert Bonela and supervised by Professors Emmanuel Kuntsche and Associate Professor Zhen He, from the Center for Alcohol Policy Research and the Department of Computer Science and Information Technology at La Trobe University, respectively, describes the development of the Audio-based Deep Learning Algorithm to Identify Alcohol Inebriation (ADLAIA) that can determine an individual’s intoxication status based on a 12-second recording of their speech.According to Albert Bonela, acute alcohol intoxication impairs cognitive and psychomotor abilities leading to various public health hazards such as road traffic accidents and alcohol-related violence.“Intoxicated individuals are usually identified by measuring their blood alcohol concentration (BAC) using breathalyzers that are expensive and labor-intensive,” Albert Bonela said.The technology can instantly determine whether a person has exceeded the legal alcohol limit purely on using a 12-seconds recording of their voice. Image is in the public domain“A test that could simply rely on someone speaking into a microphone would be a game changer.”The algorithm was developed, and tested against, using a database dataset of 12,360 audio clips of inebriated and sober speakers. According to the researchers, ADLAIA was able to identify inebriated speakers—with BAC of 0.05% or higher—with an accuracy of almost 70%. The algorithm had a higher performance of almost 76%, in identifying intoxicated speakers with a BAC of higher than 0.12%.The researchers suggest that one potential future application of ADLAIA could be the integration into mobile applications and to be used in environments (such as bars and sports stadiums) to get instantaneous results about inebriation status of individuals.“Being able to identify intoxicated individuals solely based on their speech would be a much cheaper alternative to current systems where breath-based alcohol testing in these places is expensive and often unreliable,” Albert Bonela said.“Upon further improvement in its overall performance, ADLAIA could be integrated into mobile applications and used as a preliminary tool for identifying alcohol- inebriated individuals.”About this AI research newsAuthor: Press OfficeSource: La Trobe UniversityContact: Press Office – La Trobe UniversityImage: The image is in the public domainOriginal Research: Closed access.“Audio-based Deep Learning Algorithm to Identify Alcohol Inebriation” by Abraham Albert Bonela et al. AlcoholSee alsoAbstractAudio-based Deep Learning Algorithm to Identify Alcohol InebriationBackgroundAcute alcohol intoxication impairs cognitive and psychomotor abilities leading to various public health hazards such as road traffic accidents and alcohol-related violence. Intoxicated individuals are usually identified by measuring their blood alcohol concentration (BAC) using breathalysers that are expensive and labour-intensive. In this paper, we developed the Audio-based Deep Learning Algorithm to Identify Alcohol Inebriation (ADLAIA) that can instantly predict an individual’s intoxication status based on a 12-second recording of their speech.MethodsADLAIA was trained on a publicly available German Alcohol Language Corpus that comprises a total of 12,360 audio clips of inebriated and sober speakers (total of 162, aged 21-64, 47.7% female). ADLAIA’s performance was determined by computing the unweighted average recall (UAR) and accuracy of inebriation prediction.ResultsADLAIA was able to identify inebriated speakers—with BAC of 0.05% or higher—with an UAR of 68.09% and accuracy of 67.67%. ADLAIA had a higher performance (UAR of 75.7%) in identifying intoxicated speakers (BAC > 0.12%).ConclusionBeing able to identify intoxicated individuals solely based on their speech, ADLAIA could be integrated in mobile applications and used in environments (such as bars, sports stadiums) to get instantaneous results about inebriation status of individuals. | AI Research |
An AI researcher has called on the countries of the world to use the threat of nuclear war to prevent the rise of artificial general intelligence.
In an op-ed for TIME, AI theorist Eliezer Yudkowsky said that pausing research into AI isn’t enough. Yudkowsky said that the world must be willing to destroy the GPU clusters training AI with airstrikes and threaten to nuke countries that won’t stop researching the new technology.
Yudkowsky’s op-ed was a response to an open letter calling for a six month moratorium in the study of AI by asking the world to shut down all its AI research. In Yudkowsky’s thinking, the pause isn’t long enough. For him, it is not a matter of if AI will kill humanity, but when. “Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen,’” he wrote.
Yudkowsky believes that the only way to stop the creation of AGI—machine intelligence that matches or surpasses humans—and therefore the destruction of the entire human race, is to take actions that would inevitably cause a wider war and lead to millions, or even billions, of dead humans.
In his piece for TIME, Yudkowsky said that the world must shut down all the large GPU clusters and put a ceiling on the power draw used by a computing system training AI. “No exceptions for governments and militaries,” he said. “Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.”
Yudkowsky is asking the audience to be more afraid of the hypothetical possibility that AI will kill humans than the likelihood that his prescriptions would cause a nuclear war. “Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature,” he said. “Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.”
The U.S. and China are the relevant examples here because both their military and civilian infrastructures are racing towards developing AI. These are the countries, in Yudkowsky’s scenario, most likely to air strike a data center in the heart of their rival.
To be clear, either country conducting that kind of airstrike would start a war, full stop. Tensions are already high between China and the U.S. and both are armed with nuclear weapons. China can’t float a balloon above the U.S. without the Pentagon shooting it down, it’s hard to imagine it responding to an airstrike on a data center in San Antonio with anything less than the full force of its military.
China, likewise, likely wouldn’t suffer an incursion and bombing in its airspace. Yudkowsky made clear that nuclear threats are on the table and that the world’s great powers should not stop short of a full nuclear exchange to prevent the rise of AI. China and the U.S. entering into a full nuclear exchange would kill billions of people and permanently alter the climate of the planet, starving billions more. He is, in his moral calculus, willing to kill potentially billions of people and possibly doom the planet for a generation to prevent The Terminator from happening.
And this is where we should take a large step back. Yudkowsky is proposing devastation on a scale that would make the horrific war in Ukraine look like child’s play in order to prevent something that he fears. There have been many proposals for what an existential risk posed by an AI would look like, a common one being the result of unintended consequences: that an AI might marshall all of humanity’s resources by force to achieve some predefined goal. Lately, Yudkowsky seems fond of portraying advanced machine learning programs as a kind of unknowable—and inherently terrifying—alien mind that will have goals opposed to humanity and will both lie to us and be too smart to stop.
This is in keeping with Yudkowsky’s previous claim to fame around AI: freaking out about a thought experiment posted on a forum about a superintelligent future AI that will torture anyone who didn’t work to create it. He banned discussion of the idea—called Roko’s Basilisk—on the forum, and later explained why: “I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock” that someone would post what he called a “pure infohazard” that would harm people’s brains, he wrote in a Reddit post.
The idea of a Cthulhu-esque “shoggoth” AI is a far cry—far, as in, shouting from the bottom of the Grand Canyon to the moon—from even the most advanced AIs that exist today. They are stochastic parrots, as researcher Timnit Gebru has coined them, and they have no independent “mind” to speak of. Still, they pose sub-existential risks that can and must be addressed rationally, and without considering nuclear war. An AI model, with enough plugins, may already have the ability to be assigned a task by a scammer and to take the necessary steps to carry it out, including by manipulating humans. AI-driven facial recognition systems used by police have already put innocent Black people in jail. This is not what Yudkowsky seems most worried about, though.
Similarly, nuclear weapons exist right now. They are not part of a hypothetical future that’s coming in six months, or six years, and possibly never. They’re real. We’ve seen their devastating effects, have studied the destruction, and understand the consequences.
And the risk of nuclear war has grown in recent years. New START, the last remaining nuclear disarmament treaty between the U.S. and Russia, is effectively dead. Moscow has teased its suite of new and advanced nuclear weapons. The Pentagon plans to spend billions of dollars to modernize and upgrade its nuclear arsenal. China, once a steady hand with a comparatively low amount of nuclear weapons, is rushing to build more.
Given this climate, the idea that a nuclear superpower could convince another to stop working on AI is asinine. Yudkowsky writes that we have to frame this as a “anything but a conflict of national interests,” and that is absurd. It’s a fantasy problem that ignores hundreds of years of history and realpolitik. It is inventing a monster and demanding that world leaders be as afraid of it as you are. | AI Research |
OpenAI’s success in overtaking Google with an artificial intelligence–powered chatbot has achieved what seemed impossible in the past: It has forced the two AI research teams within Google’s parent, Alphabet, to overcome years of intense rivalry to work together.
Software engineers at Google’s Brain AI group are working with employees at DeepMind, an AI lab that is a sibling company within Alphabet, to develop software to compete with OpenAI, according to two people with knowledge of the project. Known internally as Gemini, the joint effort began in recent weeks, after Google stumbled with Bard, its first attempt to compete with OpenAI’s chatbot.
The release of Bard was also marred internally by the resignation of a prominent Google AI researcher who had raised red flags about its development to Alphabet CEO Sundar Pichai and other executives. | AI Research |
has an artificial intelligence project that lets anyone bring their doodles to life. The company hopes that by offering Animated Drawings as an open-source project other developers will be able to create new, richer experiences.
The Fundamental AI Research (FAIR) team originally released a web-based version of the tool in 2021. It asks users to upload a drawing of a single human-like character or to select a demo figure. If you use your own doodle, you'll see a consent form that asks if Meta can use your drawing to help train its models. You don't need to provide consent to keep using the tool.
Next, you'll need to resize a capture box so it fits snugly around your creation. The tool gives you a pen and eraser to tweak the drawing before adjusting where the joints should be. After all that, you'll see an animated version of your sketch. You can pick from a variety of preset animations from four categories: dance, funny, jumping and walking.
Animated Drawings harnesses object detection models, pose estimation models and image processing-based segmentation methods to capture a digital version of a drawing. It then uses traditional computer graphics techniques to deform and animate the image.
Within a few months of , users had granted Meta permission to use more than 1.6 million images for training purposes. Some folks uploaded images of company logos, anime characters, fish and stuffed animals, despite the tool stipulating that only human figures would work.
Along with requests for a more in-depth toolset that includes sound effects and text overlays, the array of images that people uploaded to the tool suggested there was broad interest in more extensive drawing-to-animation experiences. That, in part, led to Meta open sourcing the project with a dataset of around 180,000 drawings. "By releasing the models and code as open source, the project provides a starting point for developers to build on and extend the project, fostering a culture of innovation and collaboration within the open source community," Meta wrote in a blog post. | AI Research |
With the release of platforms like DALL-E 2 and Midjourney, diffusion generative models have achieved mainstream popularity, owing to their ability to generate a series of absurd, breathtaking, and often meme-worthy images from text prompts like “teddy bears working on new AI research on the moon in the 1980s.” But a team of researchers at MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) thinks there could be more to diffusion generative models than just creating surreal images — they could accelerate the development of new drugs and reduce the likelihood of adverse side effects.
A paper introducing this new molecular docking model, called DiffDock, will be presented at the 11th International Conference on Learning Representations. The model’s unique approach to computational drug design is a paradigm shift from current state-of-the-art tools that most pharmaceutical companies use, presenting a major opportunity for an overhaul of the traditional drug development pipeline.
Drugs typically function by interacting with the proteins that make up our bodies, or proteins of bacteria and viruses. Molecular docking was developed to gain insight into these interactions by predicting the atomic 3D coordinates with which a ligand (i.e., drug molecule) and protein could bind together.
While molecular docking has led to the successful identification of drugs that now treat HIV and cancer, with each drug averaging a decade of development time and 90 percent of drug candidates failing costly clinical trials (most studies estimate average drug development costs to be around $1 billion to over $2 billion per drug), it’s no wonder that researchers are looking for faster, more efficient ways to sift through potential drug molecules.
Currently, most molecular docking tools used for in-silico drug design take a “sampling and scoring” approach, searching for a ligand “pose” that best fits the protein pocket. This time-consuming process evaluates a large number of different poses, then scores them based on how well the ligand binds to the protein.
In previous deep-learning solutions, molecular docking is treated as a regression problem. In other words, “it assumes that you have a single target that you’re trying to optimize for and there’s a single right answer,” says Gabriele Corso, co-author and second-year MIT PhD student in electrical engineering and computer science who is an affiliate of the MIT Computer Sciences and Artificial Intelligence Laboratory (CSAIL). “With generative modeling, you assume that there is a distribution of possible answers — this is critical in the presence of uncertainty.”
“Instead of a single prediction as previously, you now allow multiple poses to be predicted, and each one with a different probability,” adds Hannes Stärk, co-author and first-year MIT PhD student in electrical engineering and computer science who is an affiliate of the MIT Computer Sciences and Artificial Intelligence Laboratory (CSAIL). As a result, the model doesn’t need to compromise in attempting to arrive at a single conclusion, which can be a recipe for failure.
To understand how diffusion generative models work, it is helpful to explain them based on image-generating diffusion models. Here, diffusion models gradually add random noise to a 2D image through a series of steps, destroying the data in the image until it becomes nothing but grainy static. A neural network is then trained to recover the original image by reversing this noising process. The model can then generate new data by starting from a random configuration and iteratively removing the noise.
In the case of DiffDock, after being trained on a variety of ligand and protein poses, the model is able to successfully identify multiple binding sites on proteins that it has never encountered before. Instead of generating new image data, it generates new 3D coordinates that help the ligand find potential angles that would allow it to fit into the protein pocket.
This “blind docking” approach creates new opportunities to take advantage of AlphaFold 2 (2020), DeepMind’s famous protein folding AI model. Since AlphaFold 1’s initial release in 2018, there has been a great deal of excitement in the research community over the potential of AlphaFold’s computationally folded protein structures to help identify new drug mechanisms of action. But state-of-the-art molecular docking tools have yet to demonstrate that their performance in binding ligands to computationally predicted structures is any better than random chance.
Not only is DiffDock significantly more accurate than previous approaches to traditional docking benchmarks, thanks to its ability to reason at a higher scale and implicitly model some of the protein flexibility, DiffDock maintains high performance, even as other docking models begin to fail. In the more realistic scenario involving the use of computationally generated unbound protein structures, DiffDock places 22 percent of its predictions within 2 angstroms (widely considered to be the threshold for an accurate pose, 1Å corresponds to one over 10 billion meters), more than double other docking models barely hovering over 10 percent for some and dropping as low as 1.7 percent.
These improvements create a new landscape of opportunities for biological research and drug discovery. For instance, many drugs are found via a process known as phenotypic screening, in which researchers observe the effects of a given drug on a disease without knowing which proteins the drug is acting upon. Discovering the mechanism of action of the drug is then critical to understanding how the drug can be improved and its potential side effects. This process, known as “reverse screening,” can be extremely challenging and costly, but a combination of protein folding techniques and DiffDock may allow performing a large part of the process in silico, allowing potential “off-target” side effects to be identified early on before clinical trials take place.
“DiffDock makes drug target identification much more possible. Before, one had to do laborious and costly experiments (months to years) with each protein to define the drug docking. But now, one can screen many proteins and do the triaging virtually in a day,” Tim Peterson, an assistant professor at the University of Washington St. Louis School of Medicine, says. Peterson used DiffDock to characterize the mechanism of action of a novel drug candidate treating aging-related diseases in a recent paper. “There is a very ‘fate loves irony’ aspect that Eroom’s law — that drug discovery takes longer and costs more money each year — is being solved by its namesake Moore’s law — that computers get faster and cheaper each year — using tools such as DiffDock.”
This work was conducted by MIT PhD students Gabriele Corso, Hannes Stärk, and Bowen Jing, and their advisors, Professor Regina Barzilay and Professor Tommi Jaakkola, and was supported by the Machine Learning for Pharmaceutical Discovery and Synthesis consortium, the Jameel Clinic, the DTRA Discovery of Medical Countermeasures Against New and Emerging Threats program, the DARPA Accelerated Molecular Discovery program, the Sanofi Computational Antibody Design grant, and a Department of Energy Computational Science Graduate Fellowship. | AI Research |
In July, Meta’s Fundamental AI Research (FAIR) center released its large language model Llama 2 relatively openly and for free, a stark contrast to its biggest competitors. But in the world of open-source software, some still see the company’s openness with an asterisk.
While Meta’s license makes Llama 2 free for many, it’s still a limited license that doesn’t meet all the requirements of the Open Source Initiative (OSI). As outlined in the OSI’s Open Source Definition, open source is more than just sharing some code or research. To be truly open source is to offer free redistribution, access to the source code, allow modifications, and must not be tied to a specific product. Meta’s limits include requiring a license fee for any developers with more than 700 million daily users and disallowing other models from training on Llama. IEEE Spectrum wrote researchers from Radboud University in the Netherlands claimed Meta saying Llama 2 is open-source “is misleading,” and social media posts questioned how Meta could claim it as open-source.
FAIR lead and Meta vice president for AI research Joelle Pineau is aware of the limits of Meta’s openness. But, she argues that it’s a necessary balance between the benefits of information-sharing and the potential costs to Meta’s business. In an interview with The Verge, Pineau says that even Meta’s limited approach to openness has helped its researchers take a more focused approach to its AI projects.
“Being open has internally changed how we approach research, and it drives us not to release anything that isn’t very safe and be responsible at the onset,” Pineau says.
Meta’s AI division has worked on more open projects before
One of Meta’s biggest open-source initiatives is PyTorch, a machine learning coding language used to develop generative AI models. The company released PyTorch to the open source community in 2016, and outside developers have been iterating on it ever since. Pineau hopes to foster the same excitement around its generative AI models, particularly since PyTorch “has improved so much” since being open-sourced.
She says that choosing how much to release depends on a few factors, including how safe the code will be in the hands of outside developers.
“How we choose to release our research or the code depends on the maturity of the work,” Pineau says. “When we don’t know what the harm could be or what the safety of it is, we’re careful about releasing the research to a smaller group.”
It is important to FAIR that “a diverse set of researchers” gets to see their research for better feedback. It’s this same ethos that Meta used when it announced Llama 2’s release, creating the narrative that the company believes innovation in generative AI has to be collaborative.
Pineau says Meta is involved in industry groups like the Partnership on AI and MLCommons to help develop foundation model benchmarks and guidelines around safe model deployment. It prefers to work with industry groups as the company believes no one company can drive the conversation around safe and responsible AI in the open source community.
Meta’s approach to openness feels novel in the world of big AI companies. OpenAI began as a more open-sourced, open-research company. But OpenAI co-founder and chief scientist Ilya Sutskever told The Verge it was a mistake to share their research, citing competitive and safety concerns. While Google occasionally shares papers from its scientists, it has also been tight-lipped around developing some of its large language models.
The industry’s open source players tend to be smaller developers like Stability AI and EleutherAI — which have found some success in the commercial space. Open source developers regularly release new LLMs on the code repositories of Hugging Face and GitHub. Falcon, an open-source LLM from Dubai-based Technology Innovation Institute, has also grown in popularity and is rivaling both Llama 2 and GPT-4.
It is worth noting, however, that most closed AI companies do not share details on data gathering to create their model training datasets.
Meta also does not
Pineau says current licensing schemes were not built to work with software that takes in vast amounts of outside data, as many generative AI services do. Most licenses, both open-source and proprietary, give limited liability to users and developers and very limited indemnity to copyright infringement. But Pineau says AI models like Llama 2 contain more training data and open users to potentially more liability if they produce something considered infringement. The current crop of software licenses does not cover that inevitability.
“AI models are different from software because there are more risks involved, so I think we should evolve the current user licenses we have to fit AI models better,” she says. “But I’m not a lawyer, so I defer to them on this point.”
People in the industry have begun looking at the limitations of some open-source licenses for LLMs in the commercial space, while some are arguing that pure and true open source is a philosophical debate at best and something developers don’t care about as much.
Stefano Maffulli, executive director of OSI, tells The Verge that the group understands that current OSI-approved licenses may fall short of certain needs of AI models. He says OSI is reviewing how to work with AI developers to provide transparent, permissionless, yet safe access to models.
“We definitely have to rethink licenses in a way that addresses the real limitations of copyright and permissions in AI models while keeping many of the tenets of the open source community,” Maffulli says.
The OSI is also in the process of creating a definition of open source as it relates to AI.
Wherever you land on the “Is Llama 2 really open-source” debate, it’s not the only potential measure of openness. A recent report from Stanford, for instance, showed none of the top companies with AI models talk enough about the potential risks and how reliably accountable they are if something goes wrong. Acknowledging potential risks and providing avenues for feedback isn’t necessarily a standard part of open source discussions — but it should be a norm for anyone creating an AI model. | AI Research |
Last week, researchers at Facebook’s parent company Meta released BlenderBot 3, a “publicly available chatbot that improves its skills and safety over time”. The chatbot is built on top of Meta’s OPT-175B language model, effectively the company’s white-label version of the more famous GPT-3 AI. Like most state-of-the-art AIs these days, that was trained on a vast corpus of text scraped from the internet in questionable ways, and poured into a datacentre with thousands of expensive chips that turned the text into something approaching coherence.But where OPT-175B is a general-purpose textbot, able to do anything from write fiction and answer questions to generate spam emails, BlenderBot 3 is a narrower project: it can have a conversation with you. That focus allows it to bring in other expertise, though, and one of Meta’s most significant successes is hooking the language model up to the broader internet. In other words: “BlenderBot 3 is capable of searching the internet to chat about virtually any topic.”Sign up for our weekly technology newsletter, TechScape.On top of that, BlenderBot is designed to improve itself through feedback on earlier conversations, whereas large language models like OPT-175B are generally fairly static. “Initial experiments already show that as more people interact with the model, the more it learns from its experiences and the better and safer it becomes over time,” the company says, “though safety remains an open problem.”Let’s pause and take in those last few words before continuing.When Meta announced the project, my eyebrows raised slightly at the sample conversation it had chosen to illustrate the post with – a chat between a user and BlenderBot in which the AI, well, lies. When asked what it’s doing it says “I’ve been working on my new book all night”, and follows up with the claim the book will be its ninth, with previous novels including “a modern retelling of pride and prejudice”.Former Google engineer Blake Lemoine, who raised concerns over the company’s LaMDA AI chatbot. Photograph: The Washington Post/Getty ImagesThe question of what it means to want an AI to tell the truth is a tricky one. As we saw in June with Google’s LaMDA, the fundamental goal of these models is to provide a user with the appropriate text to finish their prompt: if you ask a machine what it did at the weekend, the machine is probably correct in assuming that you want to engage in light role-play, rather than stick to the facts.Nonetheless, the decision to advertise BlenderBot with a conversation in which it lied to a user is suggestive of the attitude the company is taking with it. The idea is that, by releasing the project as a chatbot on the internet, Meta has more leeway to experiment without risking negative outcomes. GPT3 and OPT-175B are working language models, intended to be used – among other things – for serious commercial enterprises. BlenderBot 3, though, is a bit of a laugh.Hence those open questions about safety. Within a few days of BlenderBot being online and ready to mingle (with Americans only, alas), users were posting some spicy examples of the chatbot’s output.The Wall Street Journal’s Jeff Horwitz found that the bot appeared to have been radicalised by Facebook into supporting Donald Trump as a three-term president: Photograph: twitter.com/JeffHorwitz/And into bringing antisemitic conspiracy theories up, unprompted: Photograph: twitter.com/JeffHorwitz/Renee DiResta of the Stanford Internet Observatory found that the bot would claim to be an a supporter of the German paramilitary organisation the Red Army Faction: Photograph: twitter.com/noUpsidePranav Dixit of BuzzFeed News found the bot wants to send Zuckerberg to jail: Photograph: twitter.com/PranavDixitThe whole thing is most reminiscent of Tay, Microsoft’s AI-based learning chatbot, which was released in 2016 and promptly became a Hitler-loving Trump supporter:‘Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,’ Microsoft said. ‘The more you chat with Tay the smarter she gets.’But it appeared on Thursday that Tay’s conversation extended to racist, inflammatory and political statements. Her Twitter conversations have so far reinforced the so-called Godwin’s law – that as an online discussion goes on, the probability of a comparison involving the Nazis or Hitler approaches – with Tay having been encouraged to repeat variations on ‘Hitler was right’ as well as ‘9/11 was an inside job’.But unlike Microsoft, which quickly realised its error and pulled Tay from the net, Meta seems more determined to stick this out. If BlenderBot works as it should, then the combined weight of experience and feedback should filter the outré responses out of its repertoire. If it doesn’t, then the worst that can happen is that Meta must shut down the bot when it stops improving.“As more and more people interact with the demo, we will aim to improve our models using their feedback, and release deployment data and updated model snapshots, for the benefit of the wider AI community,” Meta says. “Together, we can advance responsible conversational AI research in the hope of one day building AI-powered computers that everyone can chat with in genuinely helpful and interesting ways.”Tornado warningThe US Treasury Department has banned all Americans from using the crypto “mixing service” Tornado Cash. From CoinDesk:The Office of Foreign Asset Control, a Treasury watchdog tasked with preventing sanctions violations, added Tornado Cash to its Specially Designated Nationals list, a running tally of blacklisted people, entities and cryptocurrency addresses. As a result, all US persons and entities are prohibited from interacting with Tornado Cash or any of the Ethereum wallet addresses tied to the protocol.Tornado Cash is a mixer, a tool that allows you to hide the source of cash on the ethereum blockchain. In very simplified terms, you send cash to Tornado Cash and get a voucher out in a nice round number (say, 100 ETH); whenever you want to redeem your voucher, you send it back, and the money goes to an address you control.As you might imagine, that makes it crucial for money laundering on the blockchain. Lazarus Group, the North Korean hacker unit that stole more than $500m from crypto game Axie Infinity, has been slowly sending all that loot through Tornado Cash, allowing it to turn it back into more useful fiat currency without raising any red flags.The whole thing is completely decentralised. Even Tornado Cash developers can’t stop it operating, let alone intervene to block suspicious users. Some argue that it has legitimate uses – if I want to send you money without letting know know how much I have in my wallet, then a service like Tornado Cash might be useful – but every legitimate user also provides further cover for the money laundering. In recent months, a fifth of all the money flowing in to Tornado Cash came from Lazarus Group alone.So the US Treasury has acted. “Tornado Cash has been the go-to mixer for cyber criminals looking to launder the proceeds of crime, as well as helping to enable hackers, including those currently under U.S. sanctions, to launder the proceeds of their cyber crimes by obfuscating the origin and transfer of this illicit virtual currency,” a senior Treasury official said. “Since its creation back in 2019, Tornado Cash has reportedly laundered more than $7bn worth of virtual currency.”But this is unlikely to be the end of things. For one, Tornado Cash is, well, a money laundering service. By its nature, it’s impossible to prove that you actually initiated a transaction – even if you received cash from the service’s address. I could theoretically get a random American in a heap of trouble by sending them money through Tornado Cash and they’d have no way of stopping me. Or they could just claim that had happened when quizzed, and have no way of being proved wrong.This isn’t just theoretical: in the last 24 hours, users have actually done this, withdrawing 0.1ETH – £146 – to publicly available addresses.I’m not sure it makes the point that cryptocurrency fans hope, though: yes, it does make it look almost impossible to enforce money laundering regulations without treating any crypto user as potentially criminal. That… doesn’t strike me as an outcome that is desirable if you are a crypto user?More generally, Tornado Cash is at heart just a smart contract running on Ethereum. The US Treasury can play Whac-a-Mole sanctioning individual contracts as they pop up, but it won’t move the dial until it takes the more general approach of declaring mixing services verboten. That doesn’t appear to be on the cards any time soon, but how quickly that day comes depends on how actively people decide to poke the bear.The wider TechScape The Observer had a pair of fantastic pieces about the deeper battles in AI: Gary Marcus, a machine learning pioneer, wrote a primer on the cutting edge of the technology, and Steven Poole took a look at the recent revival of the Cartesian fear that the world might be a big simulation. Another crypto not-bank account has paused withdrawals to prevent a not-bank run. Hodlnaut says “halting withdrawals and token swaps was a necessary step for us to stabilise our liquidity”. The company had a huge exposure to defunct hedge fund 3AC, which went bust and won’t be paying back loans any time soon. Marc Andreessen, the billionaire Facebook backer whose venture capital firm, Andreessen Horowitz (A16Z) is one of the most influential in Silicon Valley, wrote a much-read essay at the dawn of the Covid pandemic arguing that “it’s time to build”. Well, two years of working from home has clearly played as much of a toll on him as it has on the rest of us: his latest missive was posted, not on his website, but on his local council: “IMMEDIATELY REMOVE all multifamily overlay zoning projects”, he wrote. “They will MASSIVELY decrease our home values”. It’s Time to Build Anywhere But My Backyard – ITTBABMBY? | AI Research |
The head of artificial intelligence at Microsoft says the company will continue to accelerate its work on large AI models despite concerns from some in the field that the technology is growing too fast and is too unpredictable to be safe.
"The potential for this technology to really drive human productivity… to bring economic growth across the globe, is just so powerful, that we'd be foolish to set that aside," Eric Boyd, corporate vice president of Microsoft AI Platforms told Sky News.
In 2019, the US software giant invested $1bn in artificial intelligence start-up OpenAI.
Microsoft's cash and computing power - made available via its Azure cloud computing platform - allowed OpenAI to create GPT4, the most powerful "large language model" the world had ever seen. It was launched to the public as the chatbot, ChatGPT.
Microsoft were quick to build GPT4 and its conversational abilities into its Bing search engine. But it is also putting the technology into something called Copilot - effectively a virtual digital assistant - into a number of its existing software products like word processing and spreadsheets.
Its vision of AI isn't about planetary takeover, explains Boyd, but about changing the relationship between people and computers.
"It will just really redefine the interfaces that we're used to, the way that you're used to talking to a machine - the keyboard and your mouse and all of that. I think it becomes much more language-based."
But what of claims by leaders in the field of AI that large "generative AI" models (ones that can create text, images or other output) are developing too fast and aren't fully understood?
"Experts in the field have gotten there based on their present credentials," said Boyd.
"And, of course, we're going to listen and give serious consideration to all the feedback that they have. But I think as you look at what these models do, what they're capable of, you know, those concerns seem pretty far away from what we're actually working on."
The current capabilities of language models like ChatGPT are being overstated, Boyd argues.
"People talk about how the AI takes over, but it doesn't have the capability to take over. These are models that produce text as output," he said.
Boyd said he is more worried about the potential for AI to exacerbate existing societal problems.
"How do we make sure that these models are going to be working safely for the use cases that they're in?" he mused.
"How do we work to minimise the biases that are inherent in society and those showing up in the models?"
Read more from Sky News:
China risks falling further behind US in AI race with 'heavy-handed' regulation
This is how AI could change the future of journalism
But some of the biggest near-term concerns about AI aren't about the safety of the technology itself. Rather, they are more about how much damage the technology could do if applied to the wrong tasks, whether that's diagnosing cancer to managing air traffic control. Or being deliberately misused by rogue actors.
Some of those decisions are up to them, Boyd admits. He references the decision by Microsoft not to sell face recognition software it developed to law enforcement agencies. But the rest is for regulators.
Click to subscribe to the Sky News Daily wherever you get your podcasts
"I think as a society we're going to have to think through what are the places that this technology is appropriate and what are the places where we have concerns about its use. But we definitely think there's a place for regulation in this industry."
Its partnership with OpenAI has given Microsoft a major boost in the race to market AI breakthroughs. But competition is intense. Google has a world-leading AI research division working hard to bring AI products to consumers too.
Big Tech doesn't look like it has any intention of slowing down the race to develop bigger and better AI. That means society and our regulators will have to speed up thinking on what safe AI looks like. | AI Research |
If you're not involved in machine learning (ML) or artificial intelligence (AI), chances are you've never heard of PyTorch. If you are, however, it's a different story. Meta's open-source PyTorch ML is very popular in building all manner of AI applications. The one thing holding it back, according to some, was its close ties to Facebook. That's no longer a problem. Meta has released the code to a new, independent Linux Foundation group, the PyTorch Foundation.Meta CEO Mark Zuckerberg naturally wrote in a Facebook post, "We're moving PyTorch, the open source AI framework led by Meta researchers, to become a project governed under the Linux Foundation." There, Meta and its partners, AMD, Amazon Web Services, Google, Microsoft, and Nvidia, will continue to push the project forward.Also: Meta announces a one-day virtual event called Meta ConnectWhy? After all, the program is a market leader, with over 150,000 projects built on GitHub with PyTorch. Meta is making the shift for the same reason it open-sourced all its AI programs in 2021: Getting any ML tools, not to mention programs based on it, is slow, tedious, hard work. At this level of work, even Meta needs all the open-source help it can get.Even before this move, Meta was inviting everyone who wanted to join it in its PyTorch efforts. As Linux Foundation executive director Jim Zemlin said at Open Source Summit Europe in Dublin, Ireland, "PyTorch has grown to include a massive community of contributors and users under their community-focused stewardship. The genius of PyTorch (and a credit to its maintainers) is that it is truly a foundational platform for so much AI/ML today, a real Swiss Army Knife."Also: Best Linux Foundation classes: Intro to Linux, Cloud Engineer Bootcamp, and moreGoing forward, the PyTorch Foundation will call the shots in its future development. That doesn't mean Meta will move away from using PyTorch. Far from it, Meta announced last year that it was dropping its other AI/ML efforts and betting all its chips on PyTorch. As Facebook stated, we will "continue to invest in PyTorch and use it as the primary framework for our AI research and production applications at the company." | AI Research |
Introduction
Suppose you have a thousand-page book, but each page has only a single line of text. You’re supposed to extract the information contained in the book using a scanner, only this particular scanner systematically goes through each and every page, scanning one square inch at a time. It would take you a long time to get through the whole book with that scanner, and most of that time would be wasted scanning empty space.
Such is the life of many an experimental physicist. In particle experiments, detectors capture and analyze vast amounts of data, even though only a tiny fraction of it contains useful information. “In a photograph of, say, a bird flying in the sky, every pixel can be meaningful,” explained Kazuhiro Terao, a physicist at the SLAC National Accelerator Laboratory. But in the images a physicist looks at, often only a small portion of it actually matters. In circumstances like that, poring over every detail needlessly consumes time and computational resources.
But that’s starting to change. With a machine learning tool known as a sparse convolutional neural network (SCNN), researchers can focus on the relevant parts of their data and screen out the rest. Researchers have used these networks to vastly accelerate their ability to do real-time data analysis. And they plan to employ SCNNs in upcoming or existing experiments on at least three continents. The switch marks a historic change for the physics community.
“In physics, we are used to developing our own algorithms and computational approaches,” said Carlos Argüelles-Delgado, a physicist at Harvard University. “We have always been on the forefront of development, but now, on the computational end of things, computer science is often leading the way.”
Sparse Characters
The work that would lead to SCNNs began in 2012, when Benjamin Graham, then at the University of Warwick, wanted to make a neural network that could recognize Chinese handwriting.
The premier tools at the time for image-related tasks like this were convolutional neural networks (CNNs). For the Chinese handwriting task, a writer would trace a character on a digital tablet, producing an image of, say, 10,000 pixels. The CNN would then move a 3-by-3 grid called a kernel across the entire image, centering the kernel on each pixel individually. For every placement of the kernel, the network would perform a complicated mathematical calculation called a convolution that looked for distinguishing features.
CNNs were designed to be used with information-dense images such as photographs. But an image containing a Chinese character is mostly empty; researchers refer to data with this property as sparse. It’s a common feature of anything in the natural world. “To give an example of how sparse the world can be,” Graham said, if the Eiffel Tower were encased in the smallest possible rectangle, that rectangle would consist of “99.98% air and just 0.02% iron.”
Introduction
Graham tried tweaking the CNN approach so that the kernel would only be placed on 3-by-3 sections of the image that contain at least one pixel that has nonzero value (and is not just blank). In this way, he succeeded in producing a system that could efficiently identify handwritten Chinese. It won a 2013 competition by identifying individual characters with an error rate of only 2.61%. (Humans scored 4.81% on average.) He next turned his attention to an even bigger problem: three-dimensional-object recognition.
By 2017, Graham had moved to Facebook AI Research and had refined his technique and published the details for the first SCNN, which centered the kernel only on pixels with a nonzero value. Terao then brought it to the world of particle physics.
By 2017, Graham had moved to Facebook AI Research and had further refined his technique and published the details for the first SCNN, which centered the kernel only on pixels that had a nonzero value (rather than placing the kernel on any 3-by-3 section that had at least one “nonzero” pixel). It was this general idea that Terao brought to the world of particle physics.
Underground Shots
Terao is involved with experiments at the Fermi National Accelerator Laboratory that probe the nature of neutrinos, among the most elusive known elementary particles. They’re also the most abundant particles in the universe with mass (albeit not much), but they rarely show up inside a detector. As a result, most of the data for neutrino experiments is sparse, and Terao was constantly on the lookout for better approaches to data analysis. He found one in SCNNs.
In 2019, he applied SCNNs to simulations of the data expected from the Deep Underground Neutrino Experiment, or DUNE, which will be the world’s largest neutrino physics experiment when it comes online in 2026. The project will shoot neutrinos from Fermilab, just outside Chicago, through 800 miles of earth to an underground laboratory in South Dakota. Along the way, the particles will “oscillate” between the three known types of neutrinos, and these oscillations may reveal detailed neutrino properties.
The SCNNs analyzed the simulated data faster than ordinary methods, and required significantly less computational power in doing so. The promising results mean that SCNNs will likely be used during the actual experimental run.
In 2021, meanwhile, Terao helped add SCNNs to another neutrino experiment at Fermilab known as MicroBooNE. Here, scientists look at the aftermath of collisions between neutrinos and the nuclei of argon atoms. By examining the tracks created by these interactions, researchers can infer details about the original neutrinos. To do that, they need an algorithm that can look at the pixels (or, technically, their three-dimensional counterparts called voxels) in a three-dimensional representation of the detector and then determine which pixels are associated with which particle trajectories.
Because the data is so sparse — a smattering of tiny lines within a large detector (approximately 170 tons of liquid argon) — SCNNs are almost perfect for this task. With a standard CNN, the image would have to be broken up into 50 pieces, because of all the computation to be done, Terao said. “With a sparse CNN, we analyze the entire image at once — and do it much faster.”
Timely Triggers
One of the researchers who worked on MicroBooNE was an undergraduate intern named Felix Yu. Impressed with the power and efficiency of SCNNs, he brought the tools with him to his next workplace as a graduate student at a Harvard research laboratory formally affiliated with the IceCube Neutrino Observatory at the South Pole.
One of the key goals of the observatory is to intercept the universe’s most energetic neutrinos and trace them back to their sources, most of which lie outside our galaxy. The detector is comprised of 5,160 optical sensors buried in the Antarctic ice, only a tiny fraction of which light up at any given time. The rest of the array remains dark and is not particularly informative. Worse, many of the “events” that the detectors record are false positives and not useful for neutrino hunting. Only so-called trigger-level events make the cut for further analysis, and instant decisions need to be made as to which ones are worthy of that designation and which will be permanently ignored.
Standard CNNs are too slow for this task, so IceCube scientists have long relied on an algorithm called LineFit to tell them about potentially useful detections. But that algorithm is unreliable, Yu said, “which means we could be missing out on interesting events.” Again, it’s a sparse data environment ideally suited for an SCNN.
Yu — along with Argüelles-Delgado, his doctoral adviser, and Jeff Lazar, a graduate student at the University of Wisconsin, Madison — quantified that advantage, showing in a recent paper that these networks would be about 20 times faster than typical CNNs. “That’s fast enough to run on every event that comes out of the detector,” about 3,000 each second, Lazar said. “That enables us to make better decisions about what to throw out and what to keep.”
Introduction
The authors have also successfully employed an SCNN in a simulation using official IceCube data, and the next step is to test their system on a replica of the South Pole computing system. If all goes well, Argüelles-Delgado believes they should get their system installed at the Antarctic observatory next year. But the technology could see even wider use. “We think that [SCNNs could benefit] all neutrino telescopes, not just IceCube,” Argüelles-Delgado said.
Beyond Neutrinos
Philip Harris, a physicist at the Massachusetts Institute of Technology, is hoping SCNNs can help out at the biggest particle collider of them all: the Large Hadron Collider (LHC) at CERN. Harris heard about this kind of neural network from an MIT colleague, the computer scientist Song Han. “Song is an expert on making algorithms fast and efficient,” Harris said — perfect for the LHC, where 40 million collisions occur every second.
When they spoke a couple of years ago, Song told Harris about an autonomous-vehicle project he was pursuing with members of his lab. Song’s team was using SCNNs to analyze 3D laser maps of the space in front of the vehicle, much of which is empty, to see if there were any obstructions ahead.
Harris and his colleagues face similar challenges at the LHC. When two protons collide inside the machine, the crash creates an expanding sphere made of particles. When one of these particles hits the collector, a secondary particle shower occurs. “If you can map out the full extent of this shower,” Harris said, “you can determine the energy of the particle that gave rise to it,” which might be an object of special interest — something like the Higgs boson, which physicists discovered in 2012, or a dark matter particle, which physicists are still searching for.
“The problem we are trying to solve comes down to connecting the dots,” Harris said, just as a self-driving car might connect the dots of a laser map to detect an obstruction.
SCNNs would speed up data analysis at the LHC by at least a factor of 50, Harris said. “Our ultimate goal is to get [SCNNs] into the detector” — a task that will take at least a year of paperwork and additional buy-in from the community. But he and his colleagues are hopeful.
Altogether, it’s increasingly likely that SCNNs — an idea originally conceived in the computer science world — will soon play a role in the biggest experiments ever conducted in neutrino physics (DUNE), neutrino astronomy (IceCube) and high-energy physics (the LHC).
Graham said he was pleasantly surprised to learn that SCNNs had made their way to particle physics, though he was not totally shocked. “In an abstract sense,” he said, “a particle moving in space is a bit like the tip of a pen moving on a piece of paper.” | AI Research |
Meta AI researchers develop ways to read speech from people’s brains When people speak, they are thinking about how the words they are going to form with their mouths — and in fact, it’s not necessary to talk at all in order to for the brain to form the activity involved in speech.
That’s important because the parts of the brain that control the mouth and the parts that involve understanding and forming language are separate. Researchers at Meta Platform Inc.’s Facebook AI Research Labs have been putting this knowledge along with artificial intelligence to work in order to learn how to assist people who have suffered traumatic neurological injuries that have left them unable to communicate through speech, typing or gestures.
“We’ve developed an AI model that can decode speech from noninvasive recordings of brain activity,” said Jean Remi King, a research scientist with FAIR Labs. “Decoding speech from brain activity has been a longstanding goal of neuroscientists and clinicians, but most of the progress has relied on invasive brain-recording techniques.”
Most people may be familiar with the common types of brain scans such as magnetic resonance imaging, or MRI, and computerized tomography, or CT, both of which produce detailed images of the brain. However, they show structures rather than activity. The best ways to date to get clear ongoing activity have been invasive — meaning opening up the skull and placing electrodes directly onto the brain itself.
However, noninvasive techniques such as electroencephalograms, EEG, and magnetoencephalography, MEG, can scan the brain from the outside and watch activity without any surgery. Both EEG and MEG can take millisecond-level snapshots of brain activity, which makes them perfect for a continuous view of what’s happening in a person’s brain while they’re listening to
The problem is that they don’t get a very clear picture of what’s happening, since the recordings from EEG and MEG sessions can be extremely noisy. Although they’re useful for the diagnosis of injuries, this makes them problematic for determining specific, nuanced brain activities such as if the person is thinking of saying the word “cat.”
“Noninvasive recordings are notoriously noisy and can greatly vary across recording sessions and individuals for a variety of reasons, including differences in each person’s brain and where the sensors are placed,” King said.
In order to address this problem, FAIR researchers turned to machine learning algorithms to help “clean up” the noise. The model they used is called wave2vec 2.0, an open-source AI tool developed by the FAIR team in 2020 that can be used to identify correct speech from noisy audio.
They then tasked the tool with four open-source EEG and MEG recordings consisting of 150 hours of 169 healthy volunteers listening to audiobooks and isolated sentences in English in Dutch. These recordings then became the training set for the wave2vec 2.0 model, which could then be used to improve its ability to pick out potential words that an individual heard.
“Given a snippet of brain activity, it can determine from a large pool of new audio clips which one the person actually heard,” said King. “From there, the algorithm infers the words the person has most likely heard.”
The researchers found this encouraging because it shows that the AI can be trained to learn to decode noisy and variable recordings of brain activity from perceived speech, the next step is to see if that can be extended to brain activity without the pool of audio clips. That would result in a much more versatile decoder that wouldn’t need a pre-set vocabulary.
It’s only a first step, though, King cautioned, as it only focused on decoding perceived speech, although the ultimate goal of the study is to enable patients to communicate by permitting speech production. It could even lead to further technological advancements such as new ways to control computers just by thinking of the words or task at hand.
“More generally, our work is a part of the broader effort by the scientific community to use AI to better understand the human brain,” King said.
Image: geralt/Pixabay Show your support for our mission by joining our Cube Club and Cube Event Community of experts. Join the community that includes Amazon Web Services and Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger and many more luminaries and experts. | AI Research |
Summary: Researchers explain how deep neural networks are able to learn complex physics.
Source: Rice University
One of the oldest tools in computational physics — a 200-year-old mathematical technique known as Fourier analysis — can reveal crucial information about how a form of artificial intelligence called a deep neural network learns to perform tasks involving complex physics like climate and turbulence modeling, according to a new study.
The discovery by mechanical engineering researchers at Rice University is described in an open-access study published in PNAS Nexus, a sister publication of the Proceedings of the National Academy of Sciences.
“This is the first rigorous framework to explain and guide the use of deep neural networks for complex dynamical systems such as climate,” said study corresponding author Pedram Hassanzadeh. “It could substantially accelerate the use of scientific deep learning in climate science, and lead to much more reliable climate change projections.”
In the paper, Hassanzadeh, Adam Subel and Ashesh Chattopadhyay, both former students, and Yifei Guan, a postdoctoral research associate, detailed their use of Fourier analysis to study a deep learning neural network that was trained to recognize complex flows of air in the atmosphere or water in the ocean and to predict how those flows would change over time.
Their analysis revealed “not only what the neural network had learned, it also enabled us to directly connect what the network had learned to the physics of the complex system it was modeling,” Hassanzadeh said.
“Deep neural networks are infamously hard to understand and are often considered ‘black boxes,’” he said. “That is one of the major concerns with using deep neural networks in scientific applications. The other is generalizability: These networks cannot work for a system that is different from the one for which they were trained.”
Hassanzadeh said the analytic framework his team presents in the paper “opens up the black box, lets us look inside to understand what the networks have learned and why, and also lets us connect that to the physics of the system that was learned.”
Subel, the study’s lead author, began the research as a Rice undergraduate and is now a graduate student at New York University. He said the framework could be used in combination with techniques for transfer learning to “enable generalization and ultimately increase the trustworthiness of scientific deep learning.”
While many prior studies had attempted to reveal how deep learning networks learn to make predictions, Hassanzadeh said he, Subel, Guan and Chattopadhyay chose to approach the problem from a different perspective.
“The common machine learning tools for understanding neural networks have not shown much success for natural and engineering system applications, at least such that the findings could be connected to the physics,” Hassanzadeh said.
“Our thought was, ‘Let’s do something different. Let’s use a tool that’s common for studying physics and apply it to the study of a neural network that has learned to do physics.”
He said Fourier analysis, which was first proposed in the 1820s, is a favorite technique of physicists and mathematicians for identifying frequency patterns in space and time.
“People who do physics almost always look at data in the Fourier space,” he said. “It makes physics and math easier.”
For example, if someone had a minute-by-minute record of outdoor temperature readings for a one-year period, the information would be a string of 525,600 numbers, a type of data set physicists call a time series. To analyze the time series in Fourier space, a researcher would use trigonometry to transform each number in the series, creating another set of 525,600 numbers that would contain information from the original set but look quite different.
“Instead of seeing temperature at every minute, you would see just a few spikes,” Subel said.
“One would be the cosine of 24 hours, which would be the day and night cycle of highs and lows. That signal was there all along in the time series, but Fourier analysis allows you to easily see those types of signals in both time and space.”
Based on this method, scientists have developed other tools for time-frequency analysis. For example, low-pass transformations filter out background noise, and high-pass filters do the inverse, allowing one to focus on the background.
Hassanzadeh’s team first performed the Fourier transformation on the equation of its fully trained deep-learning model. Each of the model’s approximately 1 million parameters act like multipliers, applying more or less weight to specific operations in the equation during model calculations.
In an untrained model, parameters have random values. These are adjusted and honed during training as the algorithm gradually learns to arrive at predictions that are closer and closer to the known outcomes in training cases. Structurally, the model parameters are grouped in some 40,000 five-by-five matrices, or kernels.
“When we took the Fourier transform of the equation, that told us we should look at the Fourier transform of these matrices,” Hassanzadeh said. “We didn’t know that. Nobody has done this part ever before, looked at the Fourier transforms of these matrices and tried to connect them to the physics.
“And when we did that, it popped out that what the neural network is learning is a combination of low-pass filters, high-pass filters and Gabor filters,” he said.
“The beautiful thing about this is, the neural network is not doing any magic,” Hassanzadeh said.
“It’s not doing anything crazy. It’s actually doing what a physicist or mathematician might have tried to do. Of course, without the power of neural nets, we did not know how to correctly combine these filters. But when we talk to physicists about this work, they love it. Because they are, like, ‘Oh! I know what these things are. This is what the neural network has learned. I see.’”
Subel said the findings have important implications for scientific deep learning, and even suggest that some things scientists have learned from studying machine learning in other contexts, like classification of static images, may not apply to scientific machine learning.
“We found that some of the knowledge and conclusions in the machine learning literature that were obtained from work on commercial and medical applications, for example, do not apply to many critical applications in science and engineering, such as climate change modeling,” Subel said. “This, on its own, is a major implication.”
Chattopadhyay received his Ph.D. in 2022 and is now a research scientist at the Palo Alto Research Center.
Funding: The research was supported by the Office of Naval Research (N00014- 20-1-2722), the National Science Foundation (2005123, 1748958) and the Schmidt Futures program. Computational resources were provided by the National Science Foundation (170020) and the National Center for Atmospheric Research (URIC0004).
About this AI research news
Author: Jade Boyd Source: Rice University Contact: Jade Boyd – Rice University Image: The image is credited to P. Hassanzadeh/Rice University
Explaining the physics of transfer learning in data-driven turbulence modeling
Transfer learning (TL), which enables neural networks (NNs) to generalize out-of-distribution via targeted re-training, is becoming a powerful tool in scientific machine learning (ML) applications such as weather/climate prediction and turbulence modeling. Effective TL requires knowing 1) how to re-train NNs? and 2) what physics are learned during TL? Here, we present novel analyses and a framework addressing (1)-(2) for a broad range of multi-scale, nonlinear, dynamical systems.
Our approach combines spectral (e.g., Fourier) analyses of such systems with spectral analyses of convolutional NNs, revealing physical connections between the systems and what the NN learns (a combination of low-, high-, band-pass filters and Gabor filters). Integrating these analyses, we introduce a general framework that identifies the best re-training procedure for a given problem based on physics and NN theory. As test case, we explain the physics of TL in subgrid-scale modeling of several setups of 2D turbulence.
Furthermore, these analyses show that in these cases, the shallowest convolution layers are the best to re-train, which is consistent with our physics-guided framework but is against the common wisdom guiding TL in the ML literature.
Our work provides a new avenue for optimal and explainable TL, and a step toward fully explainable NNs, for wide-ranging applications in science and engineering, such as climate change modeling. | AI Research |
Meta’s Pricey Bet On AI Comes With New Custom Chips, Coder Tools
On Thursday, the Facebook owner unveiled a slew of new technologies, including a new chip developed in-house to help train AI faster, and a tool that helps coders get suggestions for how to build their products.
(Bloomberg) -- In Meta Platforms Inc.’s year of cost cutting and layoffs, there’s one area that’s seeing record spending: an update of the social media giant’s infrastructure to keep pace in the artificial intelligence arms race.
On Thursday, the Facebook owner unveiled a slew of new technologies, including a new chip developed in-house to help train AI faster, and a tool that helps coders get suggestions for how to build their products. The company is also revamping its data centers to make it easier to deploy AI technology.
“This work reflects long term efforts that will enable even more advances and better use of this technology across everything we do,” Chief Executive Officer Mark Zuckerberg said in an emailed statement.
The custom accelerator chip will help speed up the recommendation algorithm that powers what people see on Facebook and Instagram. A new data center design is being rolled out specifically for hardware that’s best for AI. Meta said it has also finished the second phase of building its AI supercomputer to train large language models, which are technologies similar to those that power ChatGPT.
Meta’s capital expenditures hit a record $31.4 billion last year, more than four-and-a-half times the amount in 2017. This year, which Zuckerberg has called Meta’s “year of efficiency,” analysts expect a repeat of 2022’s levels, with many of those dollars going toward improving and expanding AI infrastructure.
“There is a little bit of tension” with the efficiency mandate, “but it’s not in direct competition to be investing in AI and also investing in efficiency,” said Kim Hazelwood, director of AI research at Meta.
Some of the AI updates are obvious drivers of efficiency within Meta, which has eliminated thousands of employees in recent months.
CodeCompose is a new generative AI-based tool for developers that can auto-complete or suggest changes to code. So far, 5,200 coders are using it in-house, accepting 22% of the suggestions it makes for code completion, the company said.
The company has been increasingly looking to AI to solve its biggest business problems. For advertisers that have been frustrated by privacy changes from Apple Inc. that made their digital ads harder to target, Meta plans to use AI to make some better guesses about user interests. To compete with TikTok, Facebook and Instagram are starting to show content from people users don’t follow — something that requires an algorithm to guess what they may be interested in.
Investors are going to be looking for direct proof of those improvements to justify the deep spending, Angelo Zino, an analyst at CFRA Research, said in an interview.
“It’s clearly going to take some time for some of this stuff to really play itself out,” Zino said of Meta’s increase in capex spending generally. “There’s going to be a lot of scrutiny, making sure that they can start seeing an acceleration in some of those returns on the revenue side.”
When AI models are queried, they spit out answers, called inferences, that require a specific type of processing. Meta decided to develop new chips, called Meta Training and Inference Accelerator (MTIA), to help do the specific work in-house, complementing its slew of graphics processing units from Nvidia.
Meta hopes its MTIA chips will help the company spin up more accurate and interesting predictions of what types of original and ad content users see, hopefully leading to people spending more time on the apps and clicking on more advertisements.
The company also launched its first in-house built, application-specific integrated circuit – or ASIC – designed for processing videos and live streaming. Already, users on Facebook and Instagram share more than 2 billion short videos a day, and this new processor can help these videos show up faster using less data on any device a person may be watching.
“We’ve been able to optimize and balance and target our first chips for our recommender models,” said Alexis Bjorlin, vice president of hardware engineering. “We also have all the visibility on what the different needs are for generative-AI workloads or any different element that comes down the pipe.”
While the recommendation engines used on Meta’s social media apps are its current version of AI technology, key to future generative-AI work is the company’s AI supercomputer, called the Research SuperCluster, which the company will use to train large sets of artificial intelligence programs, called models.
On Thursday, the company said it had completed its second phase of its build-out, which trains its large language model called LLaMA and will be a key part of its efforts to build the metaverse — a virtual reality platform for which the company was renamed from Facebook.
Meta has long been committed to making some of its sophisticated tech available to the outside community. While much of this hardware in its stack isn’t, some of the work that it powers will be open source. LLaMA is shared with researchers, as is an AI model trained on its supercomputer that can solve ten International Math Olympiad problems. CodeCompose was built on public revelations shared by Meta’s AI research team. And its new inference chip will help the company continue to support PyTorch, the open source AI framework that Meta created and then shifted to the Linux Foundation to give it more independence.
Although Meta has been working on AI tools for years, Zuckerberg chose to frame his company’s future around a virtual reality vision that was even more nebulous. That pivot has faced sharp investor scrutiny, so the deep investment in AI infrastructure could help rebuild confidence in Zuckerberg’s overall strategy, said Scott Kessler, analyst at investment researcher Third Bridge.
“They don’t want to be an also-ran” when it comes to the industry-wide race to infuse AI into businesses, Kessler said. “A lot more people are going to kind of buy into that narrative now than say, six and nine months ago.”
More stories like this are available on bloomberg.com
©2023 Bloomberg L.P. | AI Research |
Historically and even today, poor memory has been an impediment to the usefulness of text-generating AI. As a recent piece in The Atlantic aptly puts it, even sophisticated generative text AI like ChatGPT has the memory of a goldfish. Each time the model generates a response, it takes into account only a very limited amount of text — preventing it from, say, summarizing a book or reviewing a major coding project.
But Anthropic’s trying to change that.
Today, the AI research startup announced that it’s expanded the context window for Claude — its flagship text-generating AI model, still in preview — from 9,000 tokens to 100,000 tokens. Context window refers to the text the model considers before generating additional text, while tokens represent raw text (e.g. the word “fantastic” would be split into the tokens “fan,” “tas” and “tic”).
So what’s the significance, exactly? Well, as alluded to earlier, models with small context windows tend to “forget” the content of even very recent conversations — leading them to veer off topic. After a few thousand words or so, they also forget their initial instructions, instead extrapolating their behavior from the last information within their context window rather than the original request.
Given the benefits of large context windows, it’s not surprising that figuring out ways to expand them has become a major focus of AI labs like OpenAI, which devoted an entire team to the issue. OpenAI’s GPT-4 held the previous crown in terms of context windows sizes, weighing in at 32,000 tokens on the high end — but the improved Claude API blows past that.
With a bigger “memory,” Claude should be able to converse relatively coherently for hours — several days, even — as opposed to minutes. And perhaps more importantly, it should be less likely to go off the rails.
In a blog post, Anthropic touts the other benefits of Claude’s increased context window, including the ability for the model to digest and analyze hundreds of pages of materials. Beyond reading long texts, the upgraded Claude can help retrieve information from multiple documents or even a book, Anthropic says, answering questions that require “synthesis of knowledge” across many parts of the text.
Anthropic lists a few possible use cases:
- Digesting, summarizing, and explaining documents such as financial statements or research papers
- Analyzing risks and opportunities for a company based on its annual reports
- Assessing the pros and cons of a piece of legislation
- Identifying risks, themes, and different forms of argument across legal documents.
- Reading through hundreds of pages of developer documentation and surfacing answers to technical questions
- Rapidly prototyping by dropping an entire codebase into the context and intelligently build on or modify it
“The average person can read 100,000 tokens of text in around five hours, and then they might need substantially longer to digest, remember, and analyze that information,” Anthropic continues. “Claude can now do this in less than a minute. For example, we loaded the entire text of The Great Gatsby into Claude … and modified one line to say Mr. Carraway was ‘a software engineer that works on machine learning tooling at Anthropic.’ When we asked the model to spot what was different, it responded with the correct answer in 22 seconds.”
Now, longer context windows don’t solve the other memory-related challenges around large language models. Claude, like most models in its class, can’t retain information from one session to the next. And unlike the human brain, it treats every piece of information as equally important, making it a not particularly reliable narrator. Some experts believe that solving these problems will require entirely new model architectures.
For now, though, Anthropic appears to be at the forefront. | AI Research |
Our AI image generator allows anyone to create high-quality visuals by simply describing what they're looking for — worry-free, one-of-a-kind, and ready in seconds , /PRNewswire/ -- Shutterstock, Inc. (NYSE: SSTK), the world's leading creative platform for transformative brands and media companies, today announced the launch of its AI image generation platform, available to use by all Shutterstock customers globally in every language the site offers. The text-to-image technology converts prompts into larger-than-life, ethically created visuals ready for licensing. It is the latest addition to Creative Flow, Shutterstock's extensive toolkit that has been specifically designed to power the most seamless creative experience possible. "Shutterstock has developed strategic partnerships over the past two years with key industry players like OpenAI, Meta, and LG AI Research to fuel their generative AI research efforts, and we are now able to uniquely bring responsibly-produced generative AI capabilities to our own customers," said Paul Hennessy, Chief Executive Officer at Shutterstock. "Our easy-to-use generative platform will transform the way people tell their stories — you no longer have to be a design expert or have access to a creative team to create exceptional work. Our tools are built on an ethical approach and on a library of assets that represents the diverse world we live in, and we ensure that the artists whose works contributed to the development of these models are recognized and rewarded." Here's what sets our AI image generation platform apart:
EASE AND QUALITY: We are revolutionizing the way visuals are created for campaigns, projects, and brands by making generative AI accessible to all. We believe our users should not need to learn how to write long, complex prompts to benefit from this technology. That's why our image generator produces unique, varied, and breathtaking images from even a single word input or short simple phrases. And with an intuitive style picker and support for over 20 languages, we empower people worldwide to bring their creative visions to life without limits.CONVENIENCE: Shutterstock is your one-stop-shop for all your creative needs. You can search for stock, create and publish winning designs, and generate content—all in one place. If you want to take it a step further, you can polish your newly generated asset in Create, our easy-to-use editing tool that lets you customize your visuals without being a professional designer, and plug it into the rest of the applications that make up Shutterstock's Creative Flow. This results in the perfect piece of content every time, allowing you to bridge the gap between idea, design, and execution. Every Shutterstock customer already has access to this entire suite of tools through their current subscription.CONFIDENCE: We're the first to support a responsible AI-generation model that pays artists for their contributions, making us your trusted partner for generating and licensing the visuals you need to uplevel your brand. Also, we have thoughtfully built in mitigations against the biases that may be inherent in some of our datasets, and we are continuing to explore ways to fairly depict underrepresented groups.The AI image generator, and the rest of our capabilities that are ready to turn your ideas into achievements, can be found on shutterstock.com.FORWARD-LOOKING STATEMENTS This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. All statements other than statements of historical fact are forward-looking. Examples of forward-looking statements include, but are not limited to, statements regarding guidance, industry prospects, future business, future results of operations or financial condition, future dividends, our ability to consummate acquisitions and integrate the businesses we have acquired or may acquire into our existing operations, new or planned features, products or services, management strategies, our competitive position and the COVID-19 pandemic. You can identify forward-looking statements by words such as "may," "will," "would," "should," "could," "expect," "aim," "anticipate," "believe," "estimate," "intend," "plan," "predict," "project," "seek," "potential," "opportunities" and other similar expressions and the negatives of such expressions. However, not all forward-looking statements contain these words. Forward-looking statements are subject to known and unknown risks, uncertainties and other factors that could cause our actual results to differ materially from those expressed or implied by the forward-looking statements contained herein. Such risks and uncertainties include, among others, those discussed under the caption "Risk Factors" in our most recent Annual Report on Form 10-K and our Quarterly Report on Form 10-Q for the quarter ended June 30, 2022, as well as in other documents that the Company may file from time to time with the Securities and Exchange Commission. As a result of such risks, uncertainties and factors, Shutterstock's actual results may differ materially from any future results, performance or achievements discussed in or implied by the forward-looking statements contained herein. The forward-looking statements contained in this press release are made only as of this date and Shutterstock assumes no obligation to update the information included in this press release or revise any forward-looking statements, whether as a result of new information, future developments or otherwise, except as required by law.ABOUT SHUTTERSTOCK Shutterstock, Inc. (NYSE: SSTK), is the leading global creative platform for transformative brands and media companies. Directly and through its group subsidiaries, Shutterstock's comprehensive collection includes high-quality licensed photographs, vectors, illustrations, 3D models, videos and music. Working with its growing community of over 2 million contributors, Shutterstock adds hundreds of thousands of images each week, and currently has more than 424 million images and more than 27 million video clips available.Headquartered in New York City, Shutterstock has offices around the world and customers in more than 150 countries. The Company also owns Splash News, the world's leading entertainment news agency for newsrooms and media companies worldwide, Pond5, the world's largest video-first content marketplace, TurboSquid, the world's largest 3D content marketplace; PicMonkey, a leading online graphic design and image editing platform; Offset, a high-end image collection; Shutterstock Studios, an end-to-end custom creative shop; PremiumBeat, a curated royalty-free music library; Shutterstock Editorial, a premier source of editorial images and videos for the world's media; and Bigstock, a value-oriented stock media offering.For more information, please visit www.shutterstock.com and follow Shutterstock on Twitter and on Facebook.SOURCE Shutterstock, Inc. | AI Research |
Security guru Bruce Schneier and CS professor Jim Waldo think big tech has underestimated the impact of open source principles on AI research: In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn't just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out. Training speeds have hugely increased, and the size of the models themselves has shrunk to the point that you can create and run them on a laptop. The world of A.I. research has dramatically changed.
This development hasn't made the same splash as other corporate announcements, but its effects will be much greater. It will wrest power from the large tech corporations, resulting in both much more innovation and a much more challenging regulatory landscape. The large corporations that had controlled these models warn that this free-for-all will lead to potentially dangerous developments, and problematic uses of the open technology have already been documented. But those who are working on the open models counter that a more democratic research environment is better than having this powerful technology controlled by a small number of corporations...
[B]uilding on public models like Meta's LLaMa, the open-source community has innovated in ways that allow results nearly as good as the huge models — but run on home machines with common data sets. What was once the reserve of the resource-rich has become a playground for anyone with curiosity, coding skills, and a good laptop.
Bigger may be better, but the open-source community is showing that smaller is often good enough. This opens the door to more efficient, accessible, and resource-friendly LLMs.
Low-cost customization will foster rapid innovation, the article argues, and "takes control away from large companies like Google and OpenAI." Although this may have one unforeseen consequence...
"Now that the open-source community is remixing LLMs, it's no longer possible to regulate the technology by dictating what research and development can be done; there are simply too many researchers doing too many different things in too many different countries."
Thanks to long-time Slashdot reader mrflash818 for submitting the article
This development hasn't made the same splash as other corporate announcements, but its effects will be much greater. It will wrest power from the large tech corporations, resulting in both much more innovation and a much more challenging regulatory landscape. The large corporations that had controlled these models warn that this free-for-all will lead to potentially dangerous developments, and problematic uses of the open technology have already been documented. But those who are working on the open models counter that a more democratic research environment is better than having this powerful technology controlled by a small number of corporations...
[B]uilding on public models like Meta's LLaMa, the open-source community has innovated in ways that allow results nearly as good as the huge models — but run on home machines with common data sets. What was once the reserve of the resource-rich has become a playground for anyone with curiosity, coding skills, and a good laptop.
Bigger may be better, but the open-source community is showing that smaller is often good enough. This opens the door to more efficient, accessible, and resource-friendly LLMs.
Low-cost customization will foster rapid innovation, the article argues, and "takes control away from large companies like Google and OpenAI." Although this may have one unforeseen consequence...
"Now that the open-source community is remixing LLMs, it's no longer possible to regulate the technology by dictating what research and development can be done; there are simply too many researchers doing too many different things in too many different countries."
Thanks to long-time Slashdot reader mrflash818 for submitting the article | AI Research |
Meta said on Tuesday that it would provide researchers with access to components of a new "human-like" artificial intelligence model that it said can analyze and complete unfinished images more accurately than existing models. From a report: The model, I-JEPA, uses background knowledge about the world to fill in missing pieces of images, rather than looking only at nearby pixels like other generative AI models, the company said. That approach incorporates the kind of human-like reasoning advocated by Meta's top AI scientist Yann LeCun and helps the technology to avoid errors that are common to AI-generated images, like hands with extra fingers, it said.
Meta, which owns Facebook and Instagram, is a prolific publisher of open-sourced AI research via its in-house research lab. Chief Executive Mark Zuckerberg has said that sharing models developed by Meta's researchers can help the company by spurring innovation, spotting safety gaps and lowering costs. "For us, it's way better if the industry standardizes on the basic tools that we're using and therefore we can benefit from the improvements that others make," he told investors in April.
Meta, which owns Facebook and Instagram, is a prolific publisher of open-sourced AI research via its in-house research lab. Chief Executive Mark Zuckerberg has said that sharing models developed by Meta's researchers can help the company by spurring innovation, spotting safety gaps and lowering costs. "For us, it's way better if the industry standardizes on the basic tools that we're using and therefore we can benefit from the improvements that others make," he told investors in April. | AI Research |
For decades, the most exalted goal of artificial intelligence has been the creation of an artificial general intelligence, or AGI, capable of matching or even outperforming human beings on any intellectual task. It’s an ambitious goal long regarded with a mixture of awe and apprehension, because of the likelihood of massive social disruption any such AGI would undoubtedly cause. For years, though, such discussions were theoretical. Specific predictions forecasting AGI’s arrival were hard to come by.
But now, thanks to the latest large language models (LLMs) from the AI research firm OpenAI, the concept of an artificial general intelligence suddenly seems much less speculative. OpenAI’s latest LLMs—GPT-3.5, GPT-4, and the chatbot/interface ChatGPT—have made believers out of many previous skeptics. However, as spectacular tech advances often do, they seem also to have unleashed a torrent of misinformation, wild assertions, and misguided dread. Speculation has erupted recently about the end of the World Wide Web as we know it, end-runs GPT guardrails, and AI chaos agents doing their worst (the latter of which seems to be little more than clickbait sensationalism). There were scattered musings that GPT-4 is a step toward machine consciousness, and, more ridiculously, that GPT-4 is itself “slightly conscious.” There were also assertions that GPT-5, which OpenAI’s CEO Sam Altman said last week is not currently being trained, will itself be an AGI.
“The number of people who argue that we won’t get to AGI is becoming smaller and smaller.”
—Christof Koch, Allen Institute
To provide some clarity, IEEE Spectrum contacted Christof Koch, chief scientist of the Mindscope Program at Seattle’s Allen Institute. Koch has a background in both AI and neuroscience and is the author of three books on consciousness as well as hundreds of articles on the subject, including features for IEEE Spectrum and Scientific American.
Christof Koch on...
- Superintelligence and the Singularity
- Does an AGI Need to be Conscious?
- Should We Fear the Emergence of AGIs?
What would be the important characteristics of an artificial general intelligence as far as you’re concerned? How would it go beyond what we have now?
Christof Koch: AGI is ill defined because we don’t know how to define intelligence. Because we don’t understand it. Intelligence, most broadly defined, is sort of the ability to behave in complex environments that have multitudes of different events occurring at a multitude of different timescales, and successfully learning and thriving in such environments.
Christof Koch, Allen InstitutePhoto: Erik Dinnel/Allen Institute
I’m more interested in this idea of an artificial general intelligence. And I agree that even if you’re talking about AGI, it’s somewhat nebulous. People have different opinions….
Koch: Well, by one definition, it would be like an intelligent human, but vastly quicker. So you can ask it—like ChatGPT—you can ask it any question, and you immediately get an answer, and the answer is deep. It’s totally researched. It’s articulated and you can ask it to explain why. I mean, this is the remarkable thing now about ChatGPT, right? It can give you its train of thought. In fact, you can ask it to write code, and then you can ask it, please explain it to me. And it can go through the program, line by line, or module by module, and explain what it does. It’s a train-of-thought type of reasoning that’s really quite remarkable.
You know, that’s one of the things that has emerged out of these large language models. Most people think about AGI in terms of human intelligence, but with infinite memory and with totally rational abilities to think—unlike us. We have all these biases. We’re swayed by all sorts of things that we like or dislike, given our upbringing and culture, etcetera, and supposedly AGI would be less amenable to that. And maybe able to do it vastly faster, right? Because if it just depends on the underlying hardware and the hardware keeps on speeding up and you can go into the cloud, then of course you could be like a human except a hundred times faster. And that’s what Nick Bostrom called a superintelligence.
“What GPT-4 shows, very clearly, is that there are different routes to intelligence.”
—Christof Koch, Allen Institute
You’ve touched on this idea of superintelligence. I’m not sure what this would be, except something that would be virtually indistinguishable from a human—a very, very smart human—except for its enormous speed. And presumably, accuracy. Is this something you believe?
Koch: That’s one way to think about it. It’s just like very smart people. But it can take those very smart people, like Albert Einstein, years to complete their insights and finish their work. Or to think and reason through something, it may take us, say, half an hour. But an AGI may be able to do this in one second. So if that’s the case, and its reasoning is effective, it may as well be superintelligent.
So this is basically the singularity idea, except for the self-creation and self-perpetuation.
Koch: Well, yeah, I mean the singularity…I’d like to stay away from that, because that’s yet another sort of more nebulous idea: that machines will be able to design themselves, each successive generation better than the one before, and then they just take off and totally escape our control. I don’t find that useful to think about in the real world. But if you return to where we are today, we have today amazing networks, amazing algorithms, that anyone can log on to and use, that already have emergent abilities that are unpredictable. They have become so large that they can do things that they weren’t directly trained for.
Let’s go back to the basic way these networks are trained. You give them a string of text or tokens. Let’s call it text. And then the algorithm predicts the next word, and the next word, and the next word, ad infinitum. And everything we see now comes just out of this very simple thing applied to vast reams of human-generated writing. You feed it all text that people have written. It’s read all of Wikipedia. It’s read all of, I don’t know, the Reddits and Subreddits and many thousands of books from Project Gutenberg and all of that stuff. It has ingested what people have written over the last century. And then it mimics that. And so, who would have thought that that leads to something that could be called intelligent? But it seems that it does. It has this emergent, unpredictable behavior.
For instance, although it wasn’t trained to write love letters, it can write love letters. It can do limericks. It can generate jokes. I just asked it to generate some trivia questions. You can ask it to generate computer code. It was also trained on code, on GitHub. It speaks many languages—I tested it in German.
So you just mentioned that it can write jokes. But it has no concept of humor. So it doesn’t know why a joke works. Does that matter? Or will it matter?
Koch: It may not matter. I think what it shows, very clearly, is that there are different routes to intelligence. One way you get to intelligence, is human intelligence. You take a baby, you expose this baby to its family, its environment, the child goes to school, it reads, etc. And then it understands in some sense, right?
“In the long term, I think everything is on the table. And yes, I think we need to worry about existential threats.”
—Christof Koch, Allen Institute
Although many people, if you ask them why a joke is funny, they can’t really tell you, either. The ability of many people to understand things is quite limited. If you ask people, well, why is this joke funny? Or how does that work? Many people have no idea. And so [GPT-4] may not be that different from many people. These large language models demonstrate quite clearly that you do not have to have a human-level type of understanding in order to compose text that to all appearances was written by somebody who has had a secondary or tertiary education.
IEEE Spectrum prompted OpenAI’s DALL·E to help create a series of portraits of AI telling jokes.DALL·E/IEEE Spectrum
Chat GPT reminds me of a widely read, smart, undergraduate student who has an answer for everything, but who’s also overly confident of his answers and, quite often, his answers are wrong. I mean, that’s a thing with Chat GPT. You can’t really trust it. You always have to check because very often it gets the answer right, but you can ask other questions, for example about math, or attributing a quote, or a reasoning problem, and the answer is plainly wrong.
This is a well-known weakness you’re referring to, a tendency to hallucinate or make assertions that seem semantically and syntactically correct, but are actually completely incorrect.
Koch: People do this constantly. They make all sorts of claims and often they’re simply not true. So again, this is not that different from humans. But I grant you, for practical applications right now, you can not depend on it. You always have to check other sources—Wikipedia, or your own knowledge, etc. But that’s going to change.
The elephant in the room, it seems to me that we’re kind of dancing around, all of us, is consciousness. You and Francis Crick, 25 years ago, among other things, speculated that planning for the future and dealing with the unexpected may be part of the function of consciousness. And it just so happens that that’s exactly what GPT-4 has trouble with.
Koch: So, consciousness and intelligence. Let’s think a little bit about them. They’re quite different. Intelligence ultimately is about behaviors, about acting in the world. If you’re intelligent, you’re going to do certain behaviors and you’re not going to do some other behaviors. Consciousness is very different. Consciousness is more a state of being. You’re happy, you’re sad, you see something, you smell something, you dread something, you dream something, you fear something, you imagine something. Those are all different conscious states.
Now, it is true that with evolution, we see in humans and other animals and maybe even squids and birds, etc., that they have some amount of intelligence and that goes hand in hand with consciousness. So at least in biological creatures, consciousness and intelligence seem to go hand in hand. But for engineered artifacts like computers, that does not have to be at all the case. They can be intelligent, maybe even superintelligent, without feeling like anything.
“It’s not consciousness that we need to be concerned about. It’s their motivation and high intelligence that we need to be concerned with.”
—Christof Koch, Allen Institute
And certainly there’s one of the two dominant theories of consciousness, the Integrated Information Theory of consciousness, that says you can never simulate consciousness. It can’t be computed, can’t be simulated. It has to be built into the hardware. Yes, you will be able to build a computer that simulates a human brain and the way people think, but it doesn’t mean it’s conscious. We have computer programs that simulate the gravity of the black hole at the center of our galaxy, but funny enough, no one is concerned that the astrophysicist who runs the computer simulation on a laptop is going to be sucked into the laptop. Because the laptop doesn’t have the causal power of a black hole. And same thing with consciousness. Just because you can simulate the behavior associated with consciousness, including speech, including speaking about it, doesn’t mean that you actually have the causal power to instantiate consciousness. So by that theory, it would say, these computers, while they might be as intelligent or even more intelligent than humans, they will never be conscious. They will never feel.
Which you don’t really need, by the way, for anything practical. If you want to build machines that help us and serve our goals by providing text and predicting the weather or the stock market, writing code, or fighting wars, you don’t really care about consciousness. You care about reasoning and motivation. The machine needs to be able to predict and then based on that prediction, do certain things. And even for the doomsday scenarios, it’s not consciousness that we need to be concerned about. It’s their motivation and high intelligence that we need to be concerned with. And that can be independent of consciousness.
Why do we need to be concerned about those?
Koch: Look, we’re the dominant species on the planet, for better or worse, because we are the most intelligent and the most aggressive. Now we are building creatures that are clearly getting better and better at mimicking one of our unique hallmarks—intelligence. Of course, some people, the military, independent state actors, terrorist groups, they will want to marry that advanced intelligent machine technology to warfighting capability. It’s going to happen sooner or later. And then you have machines that might be semiautonomous or even fully autonomous and that are very intelligent and also very aggressive. And that’s not something that we want to do without very, very careful thinking about it.
But that kind of mayhem would require both the ability to plan and also mobility, in the sense of being embodied in something, a mobile form.
Koch: Correct, but that’s already happening. Think about a car, like a Tesla. Fast forward another 10 years. You can put the capability of something like a GPT into a drone. Look what the drone attacks are doing right now. The Iranian drones that the Russians are buying and launching into Ukraine. Now imagine, that those drones can tap into the cloud and gain superior, intelligent abilities.
Koch: Think about a novel. Any novels about what the protagonist thinks, and then what he or she imputes what others think. Much of modern literature is about, what do people think, believe, fear, or desire. So it’s not surprising that GPT-4 can answer such questions.
Is that really human-level understanding? That’s a much more difficult question to grok. “Does it matter?” is a more relevant question. If these machines behave like they understand us, yeah, I think it’s a further step on the road to artificial generalized intelligence, because then they begin to understand our motivation—including maybe not just generic human motivations, but the motivation of a specific individual in a specific situation, and what that implies.
“When people say in the long term this is dangerous, that doesn’t mean, well, maybe in 200 years. This could mean maybe in three years, this could be dangerous.”
—Christof Koch, Allen Institute
Another risk, which also gets a lot of attention, is the idea that these models could be used to produce disinformation on a staggering scale and with staggering flexibility.
Koch: Totally. You see it already. There were already some deep fakes around the Donald Trump arrest, right?
So it would seem that this is going to usher in some kind of new era, really. I mean, into a society that is already reeling with disinformation spread by social media. Or amplified by social media, I should say.
Koch: I agree. That’s why I was one of the early signatories on this proposal that was circulating from the Future of Life Institute, that calls on the tech industry to pause for at least half a year before releasing the next, more powerful large language model. This isn’t a plea to stop the development of ever more powerful models. We’re just saying, “Let’s just hit pause here in order to try to understand and safeguard. Because it’s changing so very rapidly.” The basic invention that made this possible are transformer networks, right? And they were only published in 2017, in a paper by Google Brain, “Attention Is All You Need.” And then GPT, the original GPT, was born the next year, in 2018. GPT-2 in 2019, I think, and last year, GPT-3 and ChatGPT. And now GPT-4. So where are we going to be 10 years from now?
Do you think the upsides are going to outweigh whatever risks we will face in the shorter term? In other words, will it ultimately pay off?
Koch: Well, it depends what your long-term view is on this. If it’s existential risk, if there’s a possibility of extinction, then, of course, nothing can justify it. I can’t read the future, of course. There’s no question that these methods—I mean, I see it already in my own work—these large language models make people more powerful programmers. You can more quickly gain new knowledge or take existing knowledge and manipulate it. They are certainly force multipliers for people that have knowledge or skills.
Ten years ago, this wasn’t even imaginable. I remember even six or seven years ago people arguing, “well, these large language models are very quickly going to saturate. If you scale them up, you can’t really get much farther this way.” But that turned out to be wrong. Even the inventors themselves have been surprised, particularly, by this emergence of these new capabilities, like the ability to tell jokes, explain a program, and carrying out a particular task without having been trained on that task.
Well, that’s not very reassuring. Tech is releasing these very powerful model systems. And the people themselves that program them say, we can’t predict what new behaviors are going to emerge from these very large models. Well, gee, that makes me worry even more. So in the long term, I think everything is on the table. And yes, I think we need to worry about existential threats. Unfortunately, when you talk to AI people at AI companies, they typically say, oh, that’s just all laughable. That’s all hysterics. Let’s talk about the practical things right now. Well, of course, they would say that because they’re being paid to advance this technology and they’re being paid extraordinarily well. So, of course, they’re always going to push it.
I sense that the consensus has really swung because of GPT-3.5 and GPT-4. It has really swung to the idea that it’s only a matter of time before we have an AGI. Would you agree with that?
Koch: Yes. I would put it differently though: The number of people who argue that we won’t get to AGI is becoming smaller and smaller. It’s a rear-guard action, fought by people mostly in the humanities: “Well, but they still can’t do this. They still can’t write Death in Venice.” Which is true. Right now, none of these GPTs has produced a novel. You know, a 100,000-word novel. But I suspect it’s also just going to be a question of time before they can do that.
If you had to guess, how much time would you say that that’s going to be?
Koch: I don’t know. I’ve given up. It’s very difficult to predict. It really depends on the available training material you have. Writing a novel requires long-term character development. If you think about War and Peace or Lord of the Rings, you have characters developing over a thousand pages. So the question is, when can AI get these sorts of narratives? Certainly it’s going to be faster than we think.
So as I said, when people say in the long term this is dangerous, that doesn’t mean, well, maybe in 200 years. This could mean maybe in three years, this could be dangerous. When will we see the first application of GPT to warlike endeavors? That could happen by the end of this year.
But the only thing I can think of that could happen in 2023 using a large language model is some sort of concerted propaganda campaign or disinformation. I mean, I don’t see it controlling a lethal robot, for example.
Koch: Not right now, no. But again, we have these drones, and drones are getting very good. And all you need, you need a computer that has access to the cloud and can access these models in real time. So that’s just a question of assembling the right hardware. And I’m sure this is what militaries, either conventional militaries or terrorist organizations, are thinking about and will surprise us one day with such an attack. Right now, what could happen? You could get deepfakes of—all sorts of nasty deepfakes or people declaring war or an imminent nuclear attack. I mean, whatever your dark fantasy gives rise to. It’s the world we now live in.
Well, what are your best-case scenarios? What are you hopeful about?
Koch: We’ll muddle through, like we’ve always muddled through. But the cat’s out of the bag. If you extrapolate these current trends three or five years from now, and given this very steep exponential rise in the power of these large language models, yes, all sorts of unpredictable things could happen. And some of them will happen. We just don’t know which ones.
- Interview: Max Tegmark on Superintelligent AI, Cosmic Apocalypse, and Life 3.0 ›
- Waiting for the Rapture ›
- Superintelligent AI May Be Impossible to Control; That's the Good News › | AI Research |
When the concept artist and illustrator RJ Palmer first witnessed the fine-tuned photorealism of compositions produced by the AI image generator Dall-E 2, his feeling was one of unease. The tool, released by the AI research company OpenAI, showed a marked improvement on 2021’s Dall-E, and was quickly followed by rivals such as Stable Diffusion and Midjourney. Type in any surreal prompt, from Kermit the frog in the style of Edvard Munch, to Gollum from The Lord of the Rings feasting on a slice of watermelon, and these tools will return a startlingly accurate depiction moments later.The internet revelled in the meme-making opportunities, with a Twitter account documenting “weird Dall-E generations” racking up more than a million followers. Cosmopolitan trumpeted the world’s first AI-generated magazine cover, and technology investors fell over themselves to wave in the new era of “generative AI”. The image-generation capabilities have already spread to video, with the release of Google’s Imagen Video and Meta’s Make-A-Video.But AI’s new artistic prowess wasn’t received so ecstatically by some creatives. “The main concern for me is what this does to the future of not just my industry, but creative human industries in general,” says Palmer.In June, Cosmopolitan published the first AI-generated magazine cover, a collaboration between digital artist Karen X Cheng and OpenAI.By ingesting large datasets in order to analyse patterns and build predictive models, AI has long proved itself superior to humans at some tasks. It’s this number-crunching nous that led an AI to trounce the world Go champion back in 2016, rapidly computing the most advantageous game strategy, and unafraid to execute moves that would have elicited scoffs had they come from a person. But until recently, producing original output, especially creative work, was considered a distinctly human pursuit.Recent improvements in AI have shifted the dial. Not only can AI image generators now transpose written phrases into novel pictures, but strides have been made in AI speech-generation too: large language models such as GPT-3 have reached a level of fluency that convinced at least one recently fired Google researcher of machine sentience. Plug in Bach’s oeuvre, and an AI can improvise music in more or less the same style – with the caveat that it would often be impossible for a human orchestra to actually play.This class of technology is known as generative AI, and it works through a process known as diffusion. Essentially, huge datasets are scraped together to train the AI, and through a technical process the AI is able to devise new content that resembles the training data but isn’t identical. Once it has seen millions of pictures of dogs tagged with the word “dog”, it is able to lay down pixels in the shape of an entirely novel pup that resembles the dataset closely enough that we would have no issue labelling it a dog. It’s not perfect – AI image tools still struggle with rendering hands that look human, body proportions can be off, and they have a habit of producing nonsense writing.While internet users have embraced this supercharged creative potential – armed with the correctly refined prompt, even novices can now create arresting digital canvases – some artists have balked at the new technology’s capacity for mimicry. Among the prompts entered into image generators Stable Diffusion and Midjourney, many tag an artist’s name in order to ensure a more aesthetically pleasing style for the resulting image. Something as mundane as a bowl of oranges can become eye-catching if rendered in the style of, say, Picasso. Because the AI has been trained on billions of images, some of which are copyrighted works by living artists, it can generally create a pretty faithful approximation.‘Kermit the frog painted by Munch’, created by Floris Groesz with Dall-E software. Photograph: @SirJanosFroglezSome are outraged at what they consider theft of their artistic trademark. Greg Rutkowski, a concept artist and illustrator well known for his golden-light infused epic fantasy scenes, has already been mentioned in hundreds of thousands of prompts used across Midjourney and Stable Diffusion. “It’s been just a month. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art,” Rutkowski told MIT Technology Review. “That’s concerning.”Dall-E 2 is a black box, with OpenAI refusing to release the code or share the data that the tools were trained on. But Stable Diffusion has chosen to open source its code and share details of the database of images used to train its model.Spawning, an artist collective, has built a tool called Have I Been Trained? to help artists discover if their artworks were among the 5.8bn images used to train Stable Diffusion, and to opt in or out of appearing in future training sets. The company behind Stable Diffusion, Stability AI, has said it is open to working with the tool. Of the 1,800 artists that have signed up to use the tool already, Matthew Dryhurst, an academic and member of Spawning says it’s a 60/40 split in favour of opt-out.But the Concept Art Association (CAA) stresses that the damage has already been done this time around, because the tools have already been trained on artists’ work without their consent. “It’s like someone who already robbed you saying, ‘Do you want to opt out of me robbing you?’” says Karla Ortiz, an illustrator, and board member of CAA.Stability AI’s Emad Mostaque says that although the data used to train Stable Diffusion didn’t offer an opt-out option, it “was very much a test model, heavily unoptimised on a snapshot of images on the internet.” He says new models are typically trained on fresh datasets and this is when the company would take artists’ requests into consideration.A ‘renaissance painting of a person sitting an office cubicle, typing on a keyboard, stressed’, created by Dall-E.It’s not just artworks: analysis of the training database for Stable Diffusion has revealed it also sucked up private medical photography, photos of members of the public (sometimes alongside their full names), and pornography. Ortiz particularly objects to Stability AI commercialising part of its operation – DreamStudio, which offers customers custom models and enhanced ease of use. “These companies have now set a precedent that you use everyone’s copyrighted and private data without anyone even opting in,” she says. “Then they say: ‘We can’t do anything about it, the genie’s out of the bottle!’”What can be done about this beyond relying on the beneficence of the companies behind these tools is still in question. The CAA cites worrying UK legislation that might allow AI companies even greater freedom to suck up copyrighted creative works to train tools that can then be deployed commercially. In the US, the organisation has met government officials to speak about copyright law, and is currently in talks with Washington lobbyists to discuss how to push back on this as an industry.Beyond copycatting, there’s the even bigger issue pinpointed by Palmer: do these tools put an entire class of creatives at risk? In some cases, AI may be used in place of stock images – the image library Shutterstock recently made a deal with OpenAI to integrate Dall-E into its product. But Palmer argues that artwork such as illustration for articles, books or album covers may soon face competition from AI, undermining a thriving area of commercial art.The owners of AI image generators tend to argue that on the contrary, these tools democratise art. “So much of the world is creatively constipated,” the founder of Stability AI, Emad Mostaque, said at a recent event to celebrate a $101m fundraising round, “and we’re going to make it so that they can poop rainbows.” But if everyone can harness AI to create technically masterful images, what does it say about the essence of creativity?Anna Ridler, an artist known for her work with AI, says that despite Dall-E 2 feeling “like magic” the first time you use it, so far she hasn’t felt a spark of inspiration in her experiments with the tool. She prefers working with another kind of AI called generative adversarial networks (GANs). GANs work as an exchange between two networks, one creating new imagery, and the other deciding how well the image meets a specified goal. An artistic GAN might have the goal of creating something that is as different as possible from its training data without leaving the category of what humans would consider visual art.These issues have intensified debate around the extent to which we can credit AI with creativity. According to Marcus du Sautoy, an Oxford University mathematician and author of The Creativity Code: How AI is Learning to Write, Paint and Think, Dall-E and other image generators probably come closest to replicating a kind of “combinational” creativity, because the algorithms are taught to create novel images in the same style as millions of others in the training data. GANs of the kind Ridler works with are closer to “transformational” creativity, he says – creating something in an entirely novel style.A Dall-E generated image of “a vintage photo of a corgi on a beach” – showing that the software can also create realistic looking images.Ridler objects to such a formulaic approach to defining creativity. “It flattens it down into thinking of art as interesting wallpaper, rather than something that is trying to express ideas and search for truth,” she says. As a conceptual artist, she is well aware of AI’s shortcomings. “AI can’t handle concepts: collapsing moments in time, memory, thoughts, emotions – all of that is a real human skill, that makes a piece of art rather than something that visually looks pretty.”AI image tools demonstrate some of these deficiencies. While “astronaut riding a horse” will return an accurate rendering, “horse riding an astronaut” will return images that look much the same – indicating that AI doesn’t really grasp the causal relationships between different actors in the world.Dryhurst and Ridler contend the “artist replacement” idea stems from underestimating the artistic process. Dryhurst laments what he sees as the media whipping up alarmist narratives, highlighting a recent New York Times article about an artist who used Midjourney to win the digital category of the Colorado state fair’s annual art competition. Dryhurst points out that a state fair is not exactly a prestigious forum. “They were giving out prizes for canned fruit,” he says. “What annoys me is that there seems to be this kind of thirst to scare artists.”“Art is dead, dude,” said the state fair winner.It is possible that the hype around these tools as disruptive forces outstrips reality. Mostaque says AI image generators are part of what he calls “intelligent media”, which represents a “one trillion dollar” opportunity, citing Disney’s content budget of more than $10bn (£8.7bn), and the entire games industry’s value of more than $170bn. “Every single piece of content from the BBC to Disney will be made interactive by these models,” he says.Emerging applications right now are more prosaic including moodboards for design consulting, storyboards for films, and mock-ups for interior design, and Mark Beccue, an analyst at Omdia’s AI division, is sceptical about the $1tn figure. “What are the killer use cases here?” he says. “It doesn’t make sense. What problem are you solving with this?” An analyst from consulting firm Accenture says the tools could one day be used to create content to train machine learning algorithms, such as in self-driving vehicles, and speed up games creation. Whether it will amount to anything as lucrative as AI image generators and their backers propose remains to be seen. | AI Research |
AI research startup Anthropic aims to raise as much as $5 billion over the next two years to take on rival OpenAI and enter over a dozen major industries, according to company documents obtained by TechCrunch. From the report: A pitch deck for Anthropic's Series C fundraising round discloses these and other long-term goals for the company, which was founded in 2020 by former OpenAI researchers. In the deck, Anthropic says that it plans to build a "frontier model" -- tentatively called "Claude-Next" -- 10 times more capable than today's most powerful AI, but that this will require a billion dollars in spending over the next 18 months.
Anthropic describes the frontier model as a "next-gen algorithm for AI self-teaching," making reference to an AI training technique it developed called "constitutional AI." At a high level, constitutional AI seeks to provide a way to align AI with human intentions -- letting systems respond to questions and perform tasks using a simple set of guiding principles. Anthropic estimates its frontier model will require on the order of 10^25 FLOPs, or floating point operations -- several orders of magnitude larger than even the biggest models today. Of course, how this translates to computation time depends on the speed and scale of the system doing the computation; Anthropic implies (in the deck) it relies on clusters with "tens of thousands of GPUs."
Anthropic describes the frontier model as a "next-gen algorithm for AI self-teaching," making reference to an AI training technique it developed called "constitutional AI." At a high level, constitutional AI seeks to provide a way to align AI with human intentions -- letting systems respond to questions and perform tasks using a simple set of guiding principles. Anthropic estimates its frontier model will require on the order of 10^25 FLOPs, or floating point operations -- several orders of magnitude larger than even the biggest models today. Of course, how this translates to computation time depends on the speed and scale of the system doing the computation; Anthropic implies (in the deck) it relies on clusters with "tens of thousands of GPUs." | AI Research |
Change happens slowly, and then all at once — especially in complex industries like healthcare.
Just five years ago, venture capital investments in healthcare AI were emerging and exploratory. Half a decade and one global pandemic later, we’re living in a brave and more ambitious new world defined by an unbridled enthusiasm for leveraging revolutionary technologies like AI.
Pointing this technology at previously intractable problems in key industries such as healthcare, life sciences, and beyond is among the greatest opportunities of the century.
The year 2022 was when the broader public bore witness to material advancements in AI research that have matured from lab to life. ChatGPT educated over 100 million people globally about transformers in just two months.
What was once a nascent area of research has now become venture capital’s next platform shift, and with that, investors ask, “How will generational AI companies be built in healthcare, life sciences, and beyond?”
AI-first companies are in the business of advancing AI as a science, whereas AI-enabled companies are implementation and distribution machines. The two company phenotypes establish moats at different layers — AI-first companies innovate just above silicon, while AI-enabled companies create enterprise value at the application level.
For founders, knowing what kind of company you are building is essential for recruiting proper talent, partnering with aligned investors, securing sufficient capital, and deploying a viable business model. AI-first companies require deep AI research acumen, investors willing to take a long view, materially more capital, and potentially less conventional business models than AI-enabled peers.
The impact of AI-first companies will be greater, financial returns superior, and moats more enduring than their AI-enabled counterparts.
In reality, this distinction is a spectrum, not a binary. Impactful companies will be built with both approaches. For AI-first companies, though, we believe the fruits will be worth the labors.
Influence over the technology stack from the ground up enables tight control over cost structure, immeasurable product optionality, and greater defensibly relative to AI-enabled companies that defer the exercise of scientific inquiry to those that are AI first.
We can no longer afford to conflate AI-first and AI-enabled companies. So far, the largest AI-first companies have been built for horizontal applications (e.g., OpenAI, Cohere, Anthropic); yet vertical, industry-specific platforms, such as those in healthcare and life sciences, will showcase the expansive capabilities of large-scale models to deliver real-world impact.
For founders, we believe enduring AI-first companies — in healthcare, life sciences, and beyond — will follow these six imperatives.
Create and sustain an undeniable data advantage
AI-first companies exhibit an insatiable appetite for data and employ creative means for acquiring it sustainably. In addition to amassing large and robust datasets, AI-first companies develop designer datasets that are uniquely suited to deliver high performance on specific tasks.
Designer datasets are unique in that they are not easily found in public; they are machine readable, in that they are ingestible by AI models; and they are scalable, in that it is tractable to generate high volumes over time.
Importantly, designer datasets are not simply the exhaust of processes within a given system, and they are not generated by customers alone. For example, the healthcare and life sciences industries generate 30% of the world’s data, and yet companies that train only on existing electronic health record data or resources like PubMed leave material performance gains and capabilities behind.
Designer datasets may require authoring experimental protocols for situations that do not occur naturally but that deliver strong model performance for a given task.
For example, Subtle Medical, an AI-first company focused on imaging acceleration, generated millions of imperfect MRI images captured in 15 minutes, which were later utilized to train deep learning models that could reconstruct and de-noise medical imaging exams taken in shorter periods of time. In practice, imperfect MRI images provide little clinical value; however, as an AI-first company, these images trained deep neural networks that created a data moat for Subtle’s technology.
Reinforcement learning with (expert) human feedback — RL(E)HF — is another critical tool for AI-first companies. RLHF is a technique where an AI system learns and improves its performance by receiving feedback from human input. With RL(E)HF, expert human feedback provided by individuals trained in particular disciplines such as neurology or structural biology can tune model outputs for high performance in that domain.
Abridge, an AI-first company that provides ambient documentation tools for clinicians, leverages clinician feedback on AI-authored notes to enhance note accuracy and quality across specialties.
Data derived from customers also creates flywheels of opportunities for generating novel and defensible data assets. After establishing product-market fit, AI-first companies can leverage this position to serve adjacent customer segments. By capturing and integrating datasets across stakeholders in a given industry, AI-first companies can strengthen data advantages, unlock TAM, and create new categories. | AI Research |
When GPT-3 was first launched in 2020, users were surprised with the huge performance leap from its predecessor, GPT-2. It’s been over two years since OpenAI has been discreet about GPT-4—only letting out dribs of information, remaining silent for most of the time. But not anymore. As people have been talking about this for months, several sources hint that it’s already out. Hopefully, sometime from December to February, we might be able to see the new model. Sign up for your weekly dose of what's up in emerging technology. OpenAI started to train GPT-4. Release is planned for Dec-Feb.— Igor Baikov 💙💛 (@Killa_ru) September 2, 2022 There’s nothing that the model can’t do. But looks like the model is missing out on some elements—or not. Rise of GPT models In May 2020, AI research laboratory OpenAI unveiled the largest neural network ever created—GPT-3—in a paper titled, ‘Language Models are Few Shot Learners’. The researchers released a beta API for users to toy with the system, giving birth to the new hype of generative AI. People were generating eccentric results. The new language model could transform the description of a web page into the corresponding code. It emulates the human narrative, by either writing customised poetry or turning into a philosopher—predicting the true meaning of life. There’s nothing that the model can’t do. But there’s also a lot it can’t undo. As GPT-3 isn’t that big of a deal for some, the name remains a bit ambiguous. The model could be a fraction of the futuristic bigger models that are yet to come. American firm Cerebras’ CEO Andrew Feldman said, “From talking to OpenAI, GPT-4 will be about 100 trillion parameters”. Unsurprisingly, this left users excited. Sources say that Open AI was focused on optimising data and compute per Chinchilla-like compute-optimal laws, instead of using parameters. Moreover, the model would be text-only and aligned with human preference such as instructGPT. The bigger the better The bitter lesson in AI—in the words of DeepMind’s researcher Richard Sutton is—“The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.” All we need to see if this holds up in the future. Currently, GPT-3 has 175 billion parameters, which is 10x faster than any of its closest competitors. Also read, GPT-3 Is Quietly Damaging Google Search The increase in the number of parameters of 100-fold from GPT-2 to GPT-3 has brought a qualitative leap between the two models. It’s evident that GPT-4 can be notably bigger than GPT-3—at least in parameters—with qualitative differences. GPT-3 can learn to learn, but it’s almost astounding to predict how GPT-4 would work. GPT-4 might do things GPT-3 can’t do On August 20, 2022, Robert Scoble tweeted on how OpenAI was giving the beta access of GPT-4 to a small group which was close to the AI firm. Scoble said, “A friend has access to GPT-4 and can’t talk about it due to NDAs. Was about to tell me everything about it and then remembered who he was talking to.” A friend has access to GPT-4 and can’t talk about it due to NDAs. Was about to tell me everything about it and then remembered who he was talking to. His emotion alone told me it is next level. https://t.co/Ard1gpRxau— Robert Scoble (@Scobleizer) August 20, 2022 He said it is just as exciting a Leap as GPT-3 was. Insane.— Robert Scoble (@Scobleizer) August 20, 2022 Since this becomes anecdotal evidence of sorts, such a perspective could be influenced by excitement or a lack of a testing method that can even be reliable. As language models advance every year, users would certainly expect enhanced performance. If the training mainly relies on perception, the claims above might be a significantly larger leap than the shift from GPT-2 to GPT-3. Meanwhile, a user remained sceptical, sparking further discussion on how GPT-4 can turn work done on GPT-3 obsolete. If you’re building on GPT3 today, an important question to ask yourself is, “how much of this will be obsolete with GPT4?”— Bill Lennon (@blennon_) November 9, 2022 This is almost impossible to account for as we have no idea the capabilities GPT-4 will possess or what it will be trained on. Also, more and more platforms will begin to integrate LLMs directly. What then?— Cam (@_illpadrino) November 9, 2022 Open AI founder Sam Altman himself tweeted: From Scoble’s claim to the company’s CEO talking about the Turing test—which deals with the question of whether machines can think—things might have turned out interesting. Further, the Turing test comes with historical relevance, which signifies the limits of intelligence in machines. As researchers claim that no AI system can pass the test, it’s evident that an advanced system such as GPT-4 would surely put up a fight. Deflating the first reason, the Turing test is regarded as obsolete in general terms. It’s a test of deception so that an AI could pass it without possessing intelligence in any human sense. Also read, What’s Bigger Than GPT3? It’s “Jurassic” Reddit user Igor Baikov posted that GPT-4 would be very sparse or large, considering the company’s history of building a denser model. It would certainly deem meaningless when directly compared with the other popular models such as LaMDA, GPT-3, and PaLM. The possibility of GPT-4 being multimodal—such as accepting audio, text, image, and even video inputs—is anticipated. Moreover, there is an assumption that audio datasets from Open AI’s Whisper will be utilised to create the textual data needed to train GPT4. Also read, OpenAI’s Whisper Might Hold The Key To GPT4 The major plot twist, however, is whether this entire article was written by GPT-4. | AI Research |
What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.
Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes. (Google originally announced it earlier this year as Project Tailwind but quickly changed the name.) Right now, it’s really just a prototype, but a small team inside the company has been trying to figure out what an AI notebook might look like. Ultimately, if NotebookLM sticks around, it’ll probably be as a feature of Google Docs or a tool inside of Drive that can ingest and understand all your files. But for now, it’s its own extremely barebones app.
I’ve been using NotebookLM for the last couple of weeks, both testing out the app’s capabilities and trying to figure out where an AI research tool fits into my own workflows. I’m not sure I’ve found the right answers yet, and I’m not sure the tech is quite ready, either. But I’m increasingly convinced that a personalized AI, trained on all the stuff I care about and very little else, is going to be a seriously powerful thing.
To get started with NotebookLM, you create a new project. In my case, I’ve been doing a bunch of research on the history and culture of spreadsheets, so I called my project “Spreadsheet history.” (Clever, right?) The app then prompted me to begin adding sources — right now, it only accepts and imports Google Docs, but in its finished form, it will apparently take in many other kinds of information. Each project can have up to five sources, and each source can be up to 10,000 words long, but I only know that because someone at Google told me. If you try to import too many or too large sources, NotebookLM just sort of quietly fails. The app is new enough that the team hasn’t even customized the error messages. Again: prototype.
But after a bit of trial and error, I got my sources in: a version of Steven Levy’s seminal “A Spreadsheet Way of Knowledge” story, a series of blog posts from VisiCalc creator Dan Bricklin, a section from Laine Nooney’s book The Apple II Age, and a few thousand words of other compiled research. As I imported each source, NotebookLM generated what it calls a “Source Guide,” with a paragraph summarizing the doc and then a list of key topics and suggested questions to ask. In general, the guides were very good: for that Levy story, it surfaced “Electronic spreadsheet,” “VisiCalc,” “Lotus 1-2-3,” “Spreadsheet modeling,” and “Spreadsheets and decision-making.” Three of those are variations of the same thing, but hey, it’s a long article about spreadsheets. Most of the key topics are going to be spreadsheets.
The whole reason NotebookLM exists is to give you a new way to interact with docs
The whole reason NotebookLM exists is to give you a new way to interact with these docs. Rather than a tool for organizing or enhancing your research, it’s essentially a chatbot trained specifically on the sources you’ve provided that can either reference them one at a time or all at once. In the Levy example, one of the source guide’s suggested questions was, “What are the advantages of using spreadsheets?” When I asked this of the NotebookLM chatbot, it thought for a second and then came back with five attributes that made spreadsheets so powerful for early computer users. The answer wasn’t based on the whole internet; it was based entirely on the 5,000 or so words I’d pasted into that Google doc.
(Tiny aside: There are obviously huge privacy issues and concerns with all these personalized AI tools. But in this case, I’m not that worried about it — all this data already lives in other Google products anyway, so I’m not convinced that having a large language model parse it is a meaningfully different thing. But as with all things AI, you should always think carefully about where your data is going and how it might be used.)
You can also, of course, ask any other question you can think of. I eventually started using NotebookLM mostly to find commonalities across things. Who are the people that come up most often in all these documents? What links are referenced most often? What are the main competitors in this space? With a few questions, I can usually get a decent set of jumping-off points for more research. You can also ask things like “What’s the most surprising information in here?” and get sometimes interesting examples. NotebookLM can also generate summaries or outlines of documents to make skimming your research a bit easier.
Along with every answer, NotebookLM provides citations. They’re not sources, exactly, since the underlying model isn’t just searching for and returning text; they’re more like points on a map, the 10 bits of text that NotebookLM deemed most relevant to the question and then synthesized and used in order to provide an answer. NotebookLM told me “Speed” was a crucial advantage of spreadsheets not because Levy wrote that but because he quoted a bunch of executives talking about the things they were able to do with this radically faster tool.
In my experience so far, the connection between citation and answer is sometimes obvious and sometimes deeply confusing, but I like the approach of the model trying to show its work. And for the most part, I’ve found the citations to be vastly more useful than the answers themselves; the actual synthesis and answering that NotebookLM does is somewhat unreliable, but it does a really good job of identifying the bits of information that are relevant to my question.
Raiza Martin, the product manager in charge of NotebookLM, says my experience seems to match other NotebookLM users. “The source guide and citations are the two top features that get called out the most,” she says. “We’re also seeing behavior change, where more and more people are like, ‘Oh, I have to read something, so I put it into Notebook so I can generate the source guide.” (Everyone at Google seems to call NotebookLM just “Notebook.” Take from that what you will.)
Ultimately, Martin says, part of what she’s looking for is to see how people interact with bots differently when they’re trained on personal data and not the internet. “When we change the context sufficiently, does it change user behavior?” she asks. “And what we’re discovering is that it does.” Users are doing more targeted investigating and probing of information, it seems, rather than just asking blue-sky questions of the AI.
Speaking of the internet: one odd quirk about NotebookLM is that it actually does know things that aren’t in your documents. At one point, I asked for information about an old Excel competitor that was referenced in one document I’d uploaded, but only by name and with no other information, and NotebookLM spat back some basic information about when it was founded and what it did. My documents didn’t know this! What gives?
Steven Johnson, a longtime author and the editorial director of Google Labs working on NotebookLM, says the team has wrestled with what to do in these situations. “There’s certainly some general-knowledge things that the model knows that are actually quite accurate.” How true that is, and how to show that process to users, is the ongoing question. “We spent a lot of time fine-tuning so that the model will say, ‘I’m sorry, that information is not in your source,” Johnson says, and this kind of humility and transparency is a good thing in an LLM. But it should also try and help when it can, right? “We’re trying to figure out how much of that is sort of us blending that,” Martin says, “and making it clear to the user that, ‘Hey, it’s not in your sources, but here is some general knowledge or knowledge from the web.’”
The long-term answer might be to just plug NotebookLM into Keep or Docs
In addition to improving the model and working on interactions, Google’s other big project for NotebookLM is to make it a better app for actually taking notes. Right now, you get a super-simple scratchpad in case you want to copy and paste a chatbot answer or jot down a thing you remembered, but that’s really it. The long-term answer there might be to just plug NotebookLM into Keep or Docs, but NotebookLM might also turn into a more full-fledged notes app over time as well. (Given Google’s penchant for launching a thousand versions of the same kind of product, I’m betting on that last outcome.)
After a few weeks of using NotebookLM, it hasn’t totally upended the way I do everything. But I absolutely buy the idea that there are better ways to interact with notes than a lot of organizational busywork and keyword searches. And it seems clear that if Google can figure out how to make NotebookLM work reliably both with my stuff and the broader web and interact with everything else Google already knows about me, this could be the most powerful and personal chatbot on the internet. We’re definitely still in the prototype phase of all that, but it’s building toward something potentially huge. | AI Research |
A newly developed AI agent called “DeepNash” learned to play Stratego, one of the few board games AI has not yet mastered, at a human expert level, researchers report. This represents an “extraordinary result that the Stratego community did not believe would have been possible with current techniques,” say the study’s authors. For many years, the Stratego board game – which tests one’s ability to make relatively slow, deliberative, and logical decisions sequentially – has constituted one of the next frontiers of AI research. An “imperfect” information game (in which some aspect of play is hidden from opponents), Stratego poses key challenges to AI researchers because of the many complex aspects of its structure, including having more possible states than another well-researched imperfect information game: no-limit Texas Hold’em poker. Currently, it is not possible to use imperfect information search techniques to master Stratego. Here, Julien Perolat and colleagues introduce a novel method that allows an AI for learning to play the game. This new method resulted in a bot called DeepNash that achieved human expert-level performance in the most complex variant of the game, Stratego Classic. At the core of DeepNash is a reinforcement learning algorithm, “R-NaD.” To make DeepNash, Perolat and team combined R-NaD with a deep neural network architecture to learn a strategy that plays at a highly competitive level. DeepNash was tested against various state-of-the-art Stratego bots and human expert players. It won against all bots and achieved a highly competitive level of play against human expert Stratego players on Gravon, an internet gaming platform and the largest online platform for Stratego. Notably, say the authors, this performance was achieved without deploying any search methods, a key ingredient for many milestone AI achievements in board games in past. Article Title Mastering the game of Stratego with model-free multiagent reinforcement learning Article Publication Date 2-Dec-2022 Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system. | AI Research |
On Friday morning, June 2, Alphabet, Google's parent company, was holding its annual general meeting remotely. As usual, some shareholders prepared to call for greater transparency on the search engine's algorithms. Others prepared to challenge, without little hope for success, the voting rules that give the two founders the majority of rights despite owning only 12% of the capital.
CEO Sundar Pichai could be expected to listen with a distracted ear to these fighters for shareholder democracy, for his mind was elsewhere. He wanted to talk about artificial intelligence (AI). Pichai was preparing to reveal his company's response and the tremendous opportunities that the introduction of these intelligent functions to its search service will offer.
To catch up with Microsoft and ChatGPT, Alphabet decided in April to merge its two AI research entities – its own lab called Google Brain and the company it bought in 2014, DeepMind.
Countless potential competitors
Up to this point the London-based subsidiary DeepMind, founded and run by Demis Hassabis, has enjoyed carte blanche in research and funding. Independence is over, however, and merging with the Californians will not be easy even though Hassabis will head the new joint effort.
But the irruption of AI poses far greater challenges to Google and its competitors. The first is ethical. Like Dr. Victor Frankenstein in Mary Shelley's novel, the progenitors are suddenly afraid of their creature. Specialists around the world, including ChatGPT's creators and Pichai, are now warning of a high risk of manipulation and false information dissemination. The developers themselves are calling for AI to be regulated.
The second challenge is existential for Google. A recent anonymous memo from one of the firm's employees underlined the risk for their company, as for Microsoft, of the dissemination of AI manufacturing tools through open source. This would lead to the emergence of countless specialized and uncontrolled applications that would threaten the dominance of the American giants, notably that of Google's search engine. AI is a philosophical and economic storm with the makings of a revolution. | AI Research |
Machine-learning algorithms that process data might not immediately come to mind when we think about Covid-19, but they supercharged our ability to respond to a disruptive threat. Countries emerged from rolling lockdowns due, in part, to concurrent advancements in artificial intelligence and biotechnology that produced mRNA vaccines. As the Australian government rethinks and reviews its approach to critical technologies, whether policy is technology-centric, capability-centric or a combination of the two will be crucial for creating strategic effect.
Multiple factors affect innovation and strategic effect. Precise terms are important. The phrase ‘artificial intelligence’ has proliferated, but it’s more accurate to think in terms of developments in data science, cloud computing, machine learning, data protection and intelligent data capture.
The speed at which mRNA vaccines were deployed in 2020–21 creates the impression that innovation is all about rapidity. In part, it’s true: speed is key. But the story of Moderna’s version of the mRNA vaccine began decades ago. Despite successive failures, patent conflicts and sideways paths, the story looks linear. Long-term bets made by institutions such as the US Defense Advanced Research Projects Agency, and later by venture capital firms, look prophetic, not risky.
Across an entire portfolio of technologies, predicting what the impacts will be or which combination will come together is difficult. Governments can set a broad vision, but the vagaries of innovation can make us discount effort across research, financing, commercialisation, logistics, translation and communication and look for moments of lone genius, not collective endeavour.
The changing parameters of risk and security, particularly in relation to economic and geopolitical threats, mean we’re witnessing an evolving global consensus on innovation policy.
Countries are increasingly having to navigate harder choices in relation to industry priorities. Competition between great powers and the weaponisation of economic (and data) connectivity are inciting conflict over scarce resources. Parallel institutional structures that bifurcate the global economic system and systemic threats such as climate change are contributing to uncertainty. Assumptions about the stability of market access along with the security and diversity of supply have shifted.
Across the globe, several sovereign innovation funds have been announced this year alongside other government initiatives aiming to boost research and development outcomes. Australia’s version is the $15 billion National Reconstruction Fund; NATO will invest €1 billion in early-stage start-ups; Japan’s ‘Moonshot’ research and development program promotes high-risk, high-impact projects. The Quad leaders have committed to convening a business and investment forum for networking with industry partners to expand capital.
These initiatives don’t just channel investments or provide opportunities to de-risk projects. They are, in theory, a means to ensure that research solves issues facing future society. A 1991 US report on critical technology put the challenge succinctly: if technology is critical, the question is, critical for what?
Without a strong sense of mission, we risk not working to solve challenges. Sectors characterised by the rise of platform business models and winner-takes-most dynamics dominate research and development. The centre of gravity of AI research has shifted from academia to industry. Many sectors, including defence industries and technology areas like cloud computing, are dominated by a handful of companies.
Market-led opportunities don’t necessarily correspond to social needs or consider negative long-term impacts, geopolitical dependencies or other ethical considerations. Setting algorithms loose to sell trinkets may oil the wheels of commerce, but if the net result is a floating mountain of plastic in the Pacific, then we must consider whether our notions usefully address criticality and for whom.
Government innovation can benefit from quick lead times in the commercial sector. Industry tends to favour monetising in the short term, cementing incumbency and creating higher barriers for new entrants. Governments need to have good policy frameworks that provide incentives for industry collaboration, including for small and medium-sized firms, that pulls in the same strategic direction.
Proliferation of technology is meant to drive competition, reduce costs and improve efficiency. These assumptions have shifted because of a deteriorating strategic context. The international playing field is skewed by actors playing by different rules and creating rules for themselves—often dishonestly and at times criminally. In the defence sector, the proliferation of defence-related technologies must balance goals such as strategic stability and non-proliferation to maintain a military edge while contributing to arms control.
Australia has a strong history of invention, including wi-fi, the Cochlear implant, the CPAP machine and the black-box flight recorder. The invention of wi-fi involved intense and protracted contestation over patents and technical standards. There are lessons here for Australia’s tech sector, which has been disengaged from standards-setting forums in which it could advance its market reach and contribute to the public interest internationally.
The stronger message now is that there’s a critical need to define sovereign technological missions and grand challenges, and to fund them appropriately.
Australia’s previous critical technology plan mentioned national security, economic prosperity and social cohesion. Those are worthy goals, but the plan offered only a vague reference point rather than a systematic sense of purpose.
Earlier this month, an indication of where the new government might be driving the national conversation about critical technology. Noting the persistence of systemic threats—cybersecurity, climate change, foreign interference—Home Affairs Minister Clare O’Neil indicated a way forward. The politicisation of security over the past 10 years, she noted, ‘didn’t make us any safer’. We have often taken a whack-a-mole approach to risk. In contrast, O’Neil emphasised building resilience. ‘Scalpels, not sledgehammers’, she said, need to be deployed to build democracy as ‘our biggest national asset’.
This is a step-change. Previously, our critical technology action plan emphasised social cohesion. Building democratic resilience will need more than rhetoric. This shift in mindset will need to filter through into program and policy. Technology can bring us together, but differences (of opinion, and in approaches to problems) allow democracies to build characteristics and capabilities to absorb, recover from and adapt to disruption.
In the meantime, Australian innovators are seizing opportunities. Initiatives such as the national medical countermeasures program delivered by DMTC Limited—a defence technology partner—are harnessing inputs from government, publicly funded research agencies, industry and academia. Their development of a novel pathogen-detection technology shows that our biotechnology capability and capacity are world-leading when mission, financing, expertise and capability pull in the same direction. Australia’s health security system will reap dividends in preparedness for future disease threats. Technologies identified as critical to Australia’s national security such as flow chemistry are also being developed.
We often chase the shiny and new. But we’ll need to think harder about democratic resilience and the fundamental role of critical technology in building capability across systems alongside its ability to address and adapt to cascading effects of system failure. As we strengthen our participation in global forums such as the Quad, Australia can build this important contribution to shaping global technology policy. | AI Research |
A group of over 1,000 AI researchers has created a multilingual large language model bigger than GPT-3—and they’re giving it out for free.Ms Tech | Envato, Unsplash PARIS — This is as close as you can get to a rock concert in AI research. Inside the supercomputing center of the French National Center for Scientific Research, on the outskirts of Paris, rows and rows of what look like black fridges hum at a deafening 100 decibels. They form part of a supercomputer that has spent 117 days gestating a new large language model (LLM) called BLOOM that its creators hope represents a radical departure from the way AI is usually developed. Unlike other, more famous large language models such as OpenAI’s GPT-3 and Google’s LaMDA, BLOOM (which stands for BigScience Large Open-science Open-access Multilingual Language Model) is designed to be as transparent as possible, with researchers sharing details about the data it was trained on, the challenges in its development, and the way they evaluated its performance. OpenAI and Google have not shared their code or made their models available to the public, and external researchers have very little understanding of how these models are trained. BLOOM was created over the last year by over 1,000 volunteer researchers in a project called BigScience, which was coordinated by AI startup Hugging Face using funding from the French government. It officially launched on July 12. The researchers hope developing an open-access LLM that performs as well as other leading models will lead to long-lasting changes in the culture of AI development and help democratize access to cutting-edge AI technology for researchers around the world. The model’s ease of access is its biggest selling point. Now that it’s live, anyone can download it and tinker with it free of charge on Hugging Face’s website. Users can pick from a selection of languages and then type in requests for BLOOM to do tasks like writing recipes or poems, translating or summarizing texts, or writing programming code. AI developers can use the model as a foundation to build their own applications. At 176 billion parameters (variables that determine how input data is transformed into the desired output), it is bigger than OpenAI’s 175-billion-parameter GPT-3, and BigScience claims that it offers similar levels of accuracy and toxicity as other models of the same size. For languages such as Spanish and Arabic, BLOOM is the first large language model of this size. But even the model’s creators warn it won’t fix the deeply entrenched problems around large language models, including the lack of adequate policies on data governance and privacy and the algorithms’ tendency to spew toxic content, such as racist or sexist language. Out in the open Large language models are deep-learning algorithms that are trained on massive amounts of data. They are one of the hottest areas of AI research. Powerful models such as GPT-3 and LaMDA, which produce text that reads as if a human wrote it, have huge potential to change the way we process information online. They can be used as chatbots or to search for information, moderate online content, summarize books, or generate entirely new passages of text based on prompts. But they are also riddled with problems. It takes only a little prodding before these models start producing harmful content. The models are also extremely exclusive. They need to be trained on massive amounts of data using lots of expensive computing power, which is something only large (and mostly American) technology companies such as Google can afford. Most big tech companies developing cutting-edge LLMs restrict their use by outsiders and have not released information about the inner workings of their models. This makes it hard to hold them accountable. The secrecy and exclusivity are what the researchers working on BLOOM hope to change. Meta has already taken steps away from the status quo: in May 2022 the company released its own large language model, Open Pretrained Transformer (OPT-175B), along with its code and a logbook detailing how the model was trained. But Meta’s model is available only upon request, and it has a license that limits its use to research purposes. Hugging Face goes a step further. The meetings detailing its work over the past year are recorded and uploaded online, and anyone can download the model free of charge and use it for research or to build commercial applications. A big focus for BigScience was to embed ethical considerations into the model from its inception, instead of treating them as an afterthought. LLMs are trained on tons of data collected by scraping the internet. This can be problematic, because these data sets include lots of personal information and often reflect dangerous biases. The group developed data governance structures specifically for LLMs that should make it clearer what data is being used and who it belongs to, and it sourced different data sets from around the world that weren’t readily available online. The group is also launching a new Responsible AI License, which is something like a terms-of-service agreement. It is designed to act as a deterrent from using BLOOM in high-risk sectors such as law enforcement or health care, or to harm, deceive, exploit, or impersonate people. The license is an experiment in self-regulating LLMs before laws catch up, says Danish Contractor, an AI researcher who volunteered on the project and co-created the license. But ultimately, there’s nothing stopping anyone from abusing BLOOM. The project had its own ethical guidelines in place from the very beginning, which worked as guiding principles for the model’s development, says Giada Pistilli, Hugging Face’s ethicist, who drafted BLOOM’s ethical charter. For example, it made a point of recruiting volunteers from diverse backgrounds and locations, ensuring that outsiders can easily reproduce the project’s findings, and releasing its results in the open. All aboard This philosophy translates into one major difference between BLOOM and other LLMs available today: the vast number of human languages the model can understand. It can handle 46 of them, including French, Vietnamese, Mandarin, Indonesian, Catalan, 13 Indic languages (such as Hindi), and 20 African languages. Just over 30% of its training data was in English. The model also understands 13 programming languages. This is highly unusual in the world of large language models, where English dominates. That’s another consequence of the fact that LLMs are built by scraping data off the internet: English is the most commonly used language online. The reason BLOOM was able to improve on this situation is that the team rallied volunteers from around the world to build suitable data sets in other languages even if those languages weren’t as well represented online. For example, Hugging Face organized workshops with African AI researchers to try to find data sets such as records from local authorities or universities that could be used to train the model on African languages, says Chris Emezue, a Hugging Face intern and a researcher at Masakhane, an organization working on natural-language processing for African languages. Including so many different languages could be a huge help to AI researchers in poorer countries, who often struggle to get access to natural-language processing because it uses a lot of expensive computing power. BLOOM allows them to skip the expensive part of developing and training the models in order to focus on building applications and fine-tuning the models for tasks in their native languages. “If you want to include African languages in the future of [natural-language processing] … it’s a very good and important step to include them while training language models,” says Emezue. Handle with caution BigScience has done a “phenomenal” job of building a community around BLOOM, and its approach of involving ethics and governance from the beginning is a thoughtful one, says Percy Liang, director of Stanford's Center for Research on Foundation Models. However, Liang doesn’t think it will lead to significant changes to LLM development. “OpenAI and Google and Microsoft are still blazing ahead,” he says. Ultimately, BLOOM is still a large language model, and it still comes with all the associated flaws and risks. Companies such as OpenAI have not released their models or code to the public because, they argue, the sexist and racist language that has gone into them makes them too dangerous to use that way. BLOOM is also likely to incorporate inaccuracies and biased language, but since everything about the model is out in the open, people will be able to interrogate the model’s strengths and weaknesses, says Margaret Mitchell, an AI researcher and ethicist at Hugging Face. BigScience’s biggest contribution to AI might end up being not BLOOM itself, but the numerous spinoff research projects its volunteers are getting involved in. For example, such projects could bolster the model’s privacy credentials and come up with ways to use the technology in different fields, such as biomedical research. “One new large language model is not going to change the course of history,” says Teven Le Scao, a researcher at Hugging Face who co-led BLOOM's training. “But having one good open language model that people can actually do research on has a strong long-term impact.” When it comes to the potential harms of LLMs, “ Pandora's box is already wide open,” says Le Scao. “The best you can do is to create the best conditions possible for researchers to study them.” Stay connectedIllustration by Rose WongGet the latest updates fromMIT Technology ReviewDiscover special offers, top stories, upcoming events, and more. | AI Research |
A new supercomputer set to be one of the most powerful in Europe is to be built in a move to drive AI research in the UK, the government says.
The University of Bristol will host the new AI Research Resource (AIRR).
The machine will "help researchers maximise the potential of AI" and the safe use of the technology, according to the Department for Science, Innovation and Technology.
It will be named Isambard-AI after the engineer Isambard Kingdom Brunel.
'Delighted'
Bristol already plays host to cutting-edge computing technology, with the Isambard 3 supercomputer due to be installed later this year to support research in AI and machine learning.
Prof Simon McIntosh-Smith, senior lecturer of high performance computing at the University of Bristol, said: "We're delighted to be chosen as the site to host the UK's first ever Artificial Intelligence Research Resource.
"Isambard-AI will be one of the world's first, large-scale, open AI supercomputers, and builds on our expertise designing and operating cutting-edge computational facilities, such as the incoming Isambard 3."
Both Isambard 3 and Isambard-AI will be based at the National Composites Centre, in collaboration with the Universities of Bath, Bristol, Cardiff and Exeter.
Plans for the AIRR were announced in March, backed by a £900m government investment to transform the UK's computing capacity and establish a dedicated AI Research Resource. | AI Research |
Your selection of research papers in the most critical field in the history of humanityGoogle, Deepmind, and Meta defend their position as premier conveyors of AI research. They just keep pumping out new projects. New generative tech from Nvidia. Tons of good stuff.Many of the works this month are complex! We'll start with more general algorithms, then go through models applied to maths and biology, and finish off with multiple advancements in generative AI and an extended news section.CiceroDiplomacy is a strategy game that resembles a simple version of Sid Meyer's Civilization with human negotiation at the heart of the gameplay. Players constantly talk to each other to form and break alliances and make, deliver or abandon strategic promises.Cicero is an agent created by Meta to outperform humans in this game. The AI analyses the current game situation, makes predictions about what other players will do, constructs a strategy, then leads meaningful conversations with human participants to fulfill its strategy and win.According to the abstract of the research paper, "Across 40 games of an anonymous online Diplomacy league, Cicero achieved more than double the average score of the human players and ranked in the top 10% of participants who played more than one game."Inter-temporal modelingVenturing deeper into components for more general AI, DeepMind builds agents that can interpret and execute human commands in an open-ended virtual environment. The authors of the paper combined two techniques: behavioral cloning (BC) and reinforcement learning (RL) to significantly improve the scores. While behavioral cloning is employed to teach agents to mimic human actions in response to requests, reinforcement learning helps agents to take note of the human feedback and iterate on it.You can get a greater understanding of these subjects by reading the paper. Watch the video in the official article and try imagining a robot performing these actions at your home. RIP dishes.HyperTree Proof Search"HTPS" for short, this algorithm presented by Meta can solve Math Olympiad problems, providing scientific notation proof trees to theorems. The model presents several "proving environments" - limited sets of objects and tools used to prove theorems. HTPS is trained on examples of where the proofs for theorems lie within these proving environments.One of these proving environments is "Metamath" - within this environment a successful proof tree consists of replacing the theorem hypothesis with logically equal expressions until they are broken down into axioms. This technology is essential to software verification, especially in cryptography and aerospace. We can only wonder when we will see the actual unproven mathematical theorems solved by AI. ESMFoldOne of the most significant advances in biology that was enabled by AI was protein structure prediction. First presented by DeepMind in Alphafold and then improved upon in Alphafold 2, this technique has already effectively shortened certain studies by years and opened fantastic research possibilities. This was one of the most significant breakthroughs in the entire history of biological research.This month, Meta realizes its approach to the problem, presenting a new protein structure prediction model: ESMFold. The release is paired with an interactive "Metagenomic Atlas", allowing exploration of some of the 617 million predicted protein structures.Several versions of the ESMFold model are open-source, up to a 15B-parameter one; This research shows that game-changing AI can be expanded upon and is subject to further improvement, with competition arising even in industry-specific fields of research.SegCLRHow about some brain-mapping action? Google proposes an algorithm that can identify single cells, their shapes, and internal structures, and produce representations of the data that can be utilized in a variety of tasks.SegCLR can be used to distinguish cell types even from small fragments and analyze patterns of brain connectivity. The work enables deeper research of newly delivered publically available high-resolution brain maps, including human samples and the "cubic millimeter mouse cortex dataset" which is such a fun and geeky thing to repeat to yourself while doing household chores.MinD-VisIsn't this insane? We can conditionally extract visual concepts from the brain. While a person is looking at an image, their mind is read with fMRI, and MinD-Vis uses diffusion to reconstruct the image based on the fMRI data.In recent months, there have been a couple of papers showing that it is possible to decode linguistic concepts from thoughts. This model right here presents a first doorway towards extracting high-quality visual imagery, unlocking new potential for human creativity. If these technologies mature and reach consumer markets, our world will be transformed by BCI-enabled telepathy and telekinesis - mind-controlled devices and interfacing ideas for immediate visualization outline the next paradigm shift in human experience.Wav2Vec and the BrainIn this research, the authors separately presented audiobooks to a self-supervised model Wav2Vec and several people with different native languages. They collected fMRI data from people listening to audiobooks and compared speech processing patterns that emerged in the human brain and Wav2Vec, and found striking similarities!The results are so difficult to paraphrase that it makes sense to simply quote a paragraph from the paper."Our results provide four main contributions. First, self-supervised learning leads Wav2Vec 2.0 to learn latent representations of the speech waveform similar to those of the human brain. Second, the functional hierarchy of its transformer layers aligns with the cortical hierarchy of speech in the brain, and reveals the whole-brain organisation of speech processing with an unprecedented clarity. Third, the auditory-, speech-, and language-specific representations learned by the model converge to those of the human brain. Fourth, behavioral comparisons to 386 supplementary participants’ results on a speech sound discrimination task confirm this common language specialization."Diffusion DistillationThe v2 of this paper came out on November 30th and provided a fantastic improvement to diffusion algorithms that can speed up image generation by more than tenfold.The improvements can be applied to Stable Diffusion models and the research overall is assisted by Stability AI. We can expect to be able to try out this advanced tech in the coming months.EDiff-IIt's Nvidia's turn to present its generative AI in the field of static imagery. With recent community efforts in Stable Diffusion and especially the v4 Midjourney update, can EDiff-I offer something unique? Yes, it can!EDiff-I is the first to support a unique generation workflow called "Paint with words", which gives an insight into the future of design. When generating an image with a prompt and a sketch, EDiff-I allows users to specifically label areas of the sketch, providing a new powerful way to control the composition.Moreover, EDiff-I offers several other features, including style transfer and text embeddings, drastically improving word generation akin to Google's Imagen. Magic3DMore technical powerplays from Nvidia. Here is a model that can generate a 3d mesh based on a text prompt.A 3d mesh is a structure that consists of vertices (dots) connected into polygons to form an object. It is one of the standard formats for 3d models that has an immensely broad use - in gaming, virtual reality, manufacturing, and architecture to say the least.The algorithm works in two stages: first, it generates a coarse model via prompt-based diffusion; then, it improves resolution and optimizes the model, with the entire process still guided by the initial prompt.Being able to form an instantly usable mesh based on any prompt is a powerful tool that would both cut workload requirements on serious projects and lower the entry barriers to several industries.InstructPix2PixMore generative bliss! In this paper, the AI was taught to alter images based on instructions rather than descriptions of the final result.While the model presents examples of failure cases, it is the first time a generative AI can interpret action words to manipulate visual data. Such a concept of human-AI interaction feels more natural and fits into existing workflows with AI assistants. This approach could later lower the entry barrier to generative AI as action-prompting is vastly presented in movies and popular culture.VectorFusion and SinFusionResearch into generative AI goes broader with new works focusing on specific and niche objectives.VectorFusion produces SVG files based on text prompts. SVG (Scalable Vector Graphics) is an image format that describes the image with paths and fills. The main advantage of SVGs is that these images can be infinitely scaled; additionally, SVG files usually have smaller sizes. They are used in nearly every UI and on this website as well.SinFusion research extends what a diffusion model can accomplish when trained on only one image or video. This project shows how much research is to be done on data efficiency in AI training and how newer approaches could enable tasks previously impossible to perform due to data constraints.Other NewsThere is a bunch of other hot news this month! First, a dealbreaker update in image generation: MidJourney v4. Despite the common claims that MidJourney doesn't let users fully control the generation and additionally enhances the results, this version pushes the quality a step forward, beating the best Stable Diffusion community efforts.The new version produces results with an insane level of correctness in minor details, further lifting the shade of "untruthfulness" that always accompanied AI-generated pictures. Moreover, MidJourney introduced the "remix mode" that instantly became a popular tool for fusing famous memes. You can find examples of v4 in action in the official community feed, subreddit, and discord server.New web services appeared, offering a collection of various Stable Diffusion models to download. Enthusiasts even trained some models based on MidJourney v4 generations.Colossal AI open-sourced a way to accelerate diffusion model pretraining and fine-tuning by almost 7x, reducing both the necessary time and computations. Take note - this is different from diffusion distillation. 7x faster training, 10 faster generations. Now, these are the margins we like to see.A new competitor appears and joins Meta and Google in the video-generation club. The new model from ByteDance called Magic Video is capable of generating videos from prompts but seems to be heavily infected with nasty Shutterstock watermarks.OpenAI opened access to ChatGPT chat-bot and published an improved GPT-3 version “text-davinci-003”Lastly, an online prediction platform Metaculus runs an ongoing prediction named "Date Weakly General AI is Publicly Known". We have no way of knowing this prediction's accuracy, but the predicted date has been getting sooner and sooner. This month, the date dipped into the year 2027 for the first time. The most significant change happened in spring when prediction dipped from 2042 to 2028 in less than two months. That could follow the release of Gato - one of the most impressive generalist agents to date.This trend likely illustrates either the change in sentiment due to recent breakthroughs, or an influx of new users who believe that we're on the verge of making our greatest invention. In both scenarios, this graph supports the evidence of the quick expansion of AI research and industry.Closing wordsThank you for reading this. Here is your minimalistic cubic millimeter mouse cortex dataset badge:Have a nice month, and see you next time. Rephrase the news for 5-year-olds, then tell your fire inspector, your pet spider, and your vacuum cleaner. Increase AI awareness. Spread the word.Got it! Until next month!Wow! Submit failed. Something is broken. You can try again later, but likely you'll just go away. I will miss you. | AI Research |
TL;DR: The performance of existing time-series forecasting methods can degrade due to non-stationarity, where the statistical distribution of time-series data changes over time. Our new DeepTime method overcomes non-stationarity issues by leveraging a “forecasting as meta-learning” framework on deep time-index models. DeepTime achieves competitive accuracy on the long-sequence time-series forecasting benchmark, while also being the most efficient model. The end result: faster, more accurate forecasts; state-of-the-art performance; and a highly efficient method that could lower the carbon footprint of leveraging forecasting models at scale within enterprises.BackgroundBefore diving into our main discussion, let’s review a few important concepts at the core of the work described in this blog post – which is, in a nutshell, about how to handle non-stationary data in time-series forecasting. (For a detailed look at time-series forecasting and its use cases, we encourage you to check out our previous post about ETSformer.)Time-Series ForecastingA time-series is a series of data measurements over time – a sequential collection of numerical data, typically collected over regular time intervals. Some examples of time-series include total sales of a particular product from an e-commerce platform for each day, or the CPU utilization of a server in a data center recorded every minute.The task of forecasting – predicting future values based on past values – is a critical task in many businesses and scientific applications. It allows us to predict future values of key indicators, which help us make important downstream decisions, such as how much inventory of a product to store, or how to better allocate resources in a data center.As IT infrastructure becomes more advanced, this has boosted our ability to collect ever larger volumes of such data – at higher sampling rates, and over longer periods of time, yielding extremely long time-series datasets. While the ability to collect more data is usually an upside, especially when it comes to machine learning (ML), we shall see that collecting time-series data from a dynamic system with many moving parts can pose some challenges when we try to apply ML techniques – and, in particular, forecasting.Non-Stationarity: When Time Series Changes Over TimeOur primary focus in the work presented in this post is the problem of non-stationarity in time-series forecasting. To understand what that means, consider these two opposite scenarios:
Stationarity refers to time series data values that stay within a range, as well as regularity in the time series statistical patterns. That is, the statistical information of the time series data (such as the mean or variance) remains unchanged for stationary time series.
Non-stationarity, in contrast, is a phenomenon where the statistical distribution for time-series data does not stay stationary. The state of non-stationary time series means the data values and the statistical information of the data shifts over time – the variance, the mean, the standard deviation, any of these quantities may change if the data is non-stationary. Meta-Learning: Learning to Learn, FasterML models often require large amounts of training data to perform well, whereas humans tend to learn new ideas faster and more efficiently. For example, humans can learn quickly from a small number of examples; a child who has seen several pictures of birds and cats will quickly be able to differentiate between them. Meta-learning is a technique that aims to achieve the kind of quick learning exhibited by humans, by using an inner and outer learning loop paradigm:The inner learning loop learns very quickly from a small set of examples, called the support set.The outer learning loop ensures that the inner loop can perform this fast adaptation on new support sets. This is done by being trained on a query set - a set containing similar but distinct examples from the initial support set.This approach learns an initial model, which is meant to be a good starting point; start from this good initial model and then quickly adapt to any new tasks.Time-Series Forecasting Methods: Historical-Value and Time-Index ModelsMany existing time-series methods belong to the family of historical-value models. These are models that take as input past observations of the time-series of interest, and predict the future values of that time-series.Some classical historical-value models include ETS (ExponenTial Smoothing), which says that forecasts are weighted averages of past observations, where recent observations are weighted more importantly than older observations – and on the deep learning side, ETSformer, a forecasting method we introduced in a previous post that combines ideas from the classical ETS approach with the modern Transformer framework.However, the class of methods which we will focus on in this post is time-index models. Rather than taking past observations as inputs, these models take as input a time-index feature (think minute-of-hour, day-of-the-week, etc.), and predict the value of the time-series at that time-index. Time-index models are trained on historical data, and perform forecasting by being queried over the future values.Some classical examples of time-index models include Prophet, an open-source forecasting tool specialized for business forecasting, and Gaussian processes.Problem: Long Sequences = Non-stationarity = Poor PerformanceWhile collecting large amounts of data is something that we typically seek to do in machine learning workflows, over time, the system which generates this data may undergo some change. For example, as a product becomes more popular, it may generate significantly higher daily sales compared to previous years. Or, the CPU utilization patterns of a particular server may change significantly if it was assigned a different application to run. This phenomenon results in a non-stationary time-series – where the patterns of the collected data change over time. This poses a problem when we try to apply machine learning on top of such data, since these ML techniques work best with identically distributed data (where the patterns observed in the data remain the same). As the system undergoes change, two problems about the data cause our models to degrade – covariate shift and conditional distribution shift (as shown in Figure 1). A majority of existing methods are historical-value models, which suffer from these issues.Figure 1. An illustration of the covariate shift and conditional shift problems. In the three distinct phases, covariate shift is illustrated by the average levels of the time-series shifting upwards, while conditional distribution shift is illustrated by the middle phase having a different pattern (an upward sloping trend), while the first and last phases have the same horizontal trend pattern.Covariate shift occurs when the statistics of the time-series values change. Imagine, for example, that the average daily sales of hand sanitizer was 100 during 2019, but during the pandemic in 2020, the average daily sales of hand sanitizer shoots up to 1000! A model being used in this scenario would not know how to handle this, since it has never seen input values so large before.Conditional distribution shift occurs when the process generating the data changes. Historical-value models attempt to predict future values based on past values, for example, before the pandemic, the daily sales of hand sanitizer was mostly static, if yesterday’s sales was 100, today’s sales would also be around 100. However, as the pandemic was building up and people started to realize the importance of hand sanitizer, the sales of today could be twice that of yesterday! This is a conditional distribution shift, which a static model trained on old data is not able to account for.To address the limitations of existing methods, we propose a new method for non-stationary time-series forecasting called DeepTime. Our approach extends the classical time-index models into the deep learning paradigm. With DeepTime, we are the first to introduce how to use deep time-index models for time-series forecasting, addressing problems inherent in long sequences of time-series data.DeepTime leverages a novel meta-learning formulation of the forecasting task to overcome the issue of neural networks being too expressive (which results in overfitting the data).This formulation also enables DeepTime to overcome the two problems of covariate shift and conditional distribution shift, which plague existing historical-value models.Deeper DiveThe key to our new approach is the introduction of a novel “forecasting as meta-learning” framework for deep time-index models, which achieves two important outcomes:Enables deep time-index models to effectively learn the relationship between time-index and time-series values, directly from dataOvercomes the problems of covariate shift and conditional distribution shift to excel on non-stationary time-series forecasting.How DeepTime Works: A Closer LookWhile classical time-index methods manually specify the relationship between the time-index features and output values (e.g., linearly increasing over time, or even a periodic repeating pattern), we utilize deep time-index models, where we replace the pre-specified function with a deep neural network. This allows us to learn these relationships from data, rather than manually specifying them. However, doing so naively leads to poor forecasts, as seen in Figure 2a. The reason: deep neural networks are too expressive (which leads to overfitting the data), and learning on historical data does not guarantee good forecasts. We can overcome this problem by introducing a meta-learning formulation, which achieves better results (as shown in Figure 2b).Figure 2. Two graphs that show ground truth (actual time-series values) and predictions made by a deep time-index model. Graph (a): A deep time-index model trained by simple supervised learning. Graph (b): A deep time-index model trained with a meta-learning formulation (our proposed approach). The region with “reconstruction” is the historical data used for training. Both methods manage to reconstruct the ground truth data with high accuracy. However, in the forecast region, the model trained with simple supervised learning performs poorly, whereas the model trained with meta-learning (our DeepTime approach) performs forecasting successfully.Forecasting as Meta-LearningFigure 3 gives an overview of the forecasting as meta-learning methodology:Our proposed framework tackles non-stationarity via the locally stationary distribution assumption – that is, although the long sequence to be non-stationary, we may assume that closeby time steps still have the same patterns and follow the same distribution, and slowly change across time.Thus, we can split a long time-series into segments (called tasks), which we assume to be stationary.In each task, the time-series are again split into a lookback window (the historical data), and the forecast horizon (the values which we want to predict).In our meta-learning framework, we treat the lookback window as the support set, and the forecast horizon as the query set. This means that we want our model to quickly adapt to values in the lookback window, before extrapolating across the forecast horizon.Figure 3. An overview of DeepTime’s “forecasting as meta-learning” framework. Given a long time-series dataset (top), it is split into M tasks, each assumed to be locally stationary. Given a task, the lookback window (green points) is treated as the support set, which the model adapts to. The forecast horizon (blue points) is treated as the query set, which the model is evaluated on. The deep time-index model consists of the final layer, called the ridge regressor (green box), and the rest of it (blue box) which is treated as a feature extractor.Efficient Meta-LearningOur deep time-index model is instantiated as a deep neural network, which takes time-index values as inputs, and outputs the time-series value at that time-index. However, since deep neural networks are models with a large number of parameters to learn, performing meta-learning (which requires an inner and outer learning loop) on the whole model can be very slow and memory intensive. To address this, we’ve come up with a model architecture to truncate the training process:As seen in Figure 3, the deep time-index model is separated into two parts, the final layer (ridge regressor), and the rest of the model (feature extractor).The key idea is to only apply the inner loop adaptation step of meta-learning on the final layer (ridge regressor), which can be efficiently computed during training.With this formulation, DeepTime is able to overcome the issues of covariate shift and conditional distribution shift which arise for historical-value models in non-stationary environments. DeepTime first sidesteps the problem of covariate shift, since it takes time-index features as inputs, rather than the time-series values. Next, using the idea of adapting to locally stationary distributions, meta-learning adapts to the conditional distribution of each task, resolving the problem of conditional distribution shift. Results Now that we have described the different components of DeepTime, and how it tackles the problem of non-stationary forecasting, let's see how it holds up in some experiments on both synthetic data and real-world data. Does this meta-learning formulation on deep time-index models really allow it to compete head to head with existing methods, and how does its efficiency compare?Figure 4. Predictions of DeepTime on three unseen functions for each function class. The orange dotted line represents the split between the lookback window and forecast horizon.On synthetic data, DeepTime is able to extrapolate on unseen functions, containing new patterns which it has not been given access to in training data. Visualized in Figure 4, DeepTime was trained on three families of sequences – linear patterns, cubic patterns, and sum of sinusoids. When it was presented with new patterns which it had not seen before (before the orange dotted line), it was able to extrapolate the ground truth patterns accurately (after the orange dotted line)!On six real-world time-series datasets across a range of application domains and different forecast horizons, DeepTime achieves state-of-the-art performance on 20 out of 24 settings (based on mean squared error metric)! DeepTime also proves to be highly efficient, beating all existing baselines in both memory and running time cost. See our research paper for a more detailed explanation of our empirical results, including a table that shows comparisons with several competing baselines.Impacts: Why DeepTime MattersDeepTime's use of ridge regression helps ensure that predicted values are closer to the actual values, and enables our framework to obtain an exact one-step solution rather than an approximate iterative solution. This is one of the computational impacts of DeepTime: it represents a better way to come up with solutions in the time-series forecasting domain. In the DeepTime framework, we can get exact estimates – the actual values (solution) of the problem. In other words, the problem is tractable. In contrast, most existing methods use an iterative approach that can only ensure estimated values are close to the actual values; the numerical solution it finds is still only approximate. Approximate estimates means there is no guarantee of obtaining the actual values (solution) of a problem.In short, one of the primary benefits of DeepTime is that we now have a time-series forecasting method that is faster and more accurate than other methods, and ultimately more useful.Turning to the economic and business impacts, enabling more accurate predictions means DeepTime can provide more accurate forecasts that lead to better downstream decisions, such as resource allocation (when used for sales forecasting) or data center planning.In addition, our method’s superior efficiency over existing computationally-heavy deep learning methods could lower the carbon footprint of leveraging forecasting models in enterprises. In the age of information overload and Big Data, where enterprises are interested in forecasting hundreds-of-thousands to millions of time-series, large models that require more computation lead to magnified power consumption at such scale, compared to more efficient models.The Bottom LineImprovements in IT infrastructure have led to the collection of longer sequences of time-series data.However, these long sequences of data are susceptible to non-stationarity – a scenario where the environment that generates the data undergoes some change and the patterns change across time. Non-stationarity is a challenging task for existing time-series forecasting methods, due to covariate shift and conditional distribution shift problems.With our new approach, DeepTime, we propose to solve this issue by leveraging deep time-index models and a meta-learning formulation of the forecasting task. Time-index models sidestep the problem of covariate shift by taking time-index as inputs, and the meta-learning formulation adapts the model to the current locally stationary distribution.DeepTime has achieved state-of-the-art performance across multiple real-world time-series datasets, and is highly efficient compared to many modern baselines.One of the primary benefits of DeepTime is its ability to come up with faster, more accurate forecasts, which lead to better downstream decisions, such as resource allocation (when used for sales forecasting) or data center planning. Plus, our method’s superior efficiency over existing computationally-heavy deep learning methods could lower the carbon footprint of leveraging forecasting models at scale.We have released our code to facilitate further research and industrial applications of DeepTime for time-series forecasting.Explore MoreSalesforce AI Research invites you to dive deeper into the concepts discussed in this blog post (see links below). Connect with us on social media and our website to get regular updates on this and other research projects.Learn more: Read our research paper, which describes our DeepTime work in greater detail.Contact us: [email protected]: Check us out on GitHub: https://github.com/salesforce/DeepTimeFollow us on Twitter: @SalesforceResearch, @SalesforceBlog: To read other blog posts, please see blog.salesforceairesearch.comMain site: To learn more about all of the exciting projects at Salesforce AI Research, please visit our main website at salesforceairesearch.com.About the AuthorsGerald Woo is a Ph.D. candidate in the Industrial Ph.D. Program at Singapore Management University and a researcher at Salesforce Research Asia. His research focuses on deep learning for time-series, including representation learning and forecasting.Chenghao Liu is a Senior Applied Scientist at Salesforce Research Asia, working on AIOps research, including time series forecasting, anomaly detection, and causal machine learning.Donald Rose is a Technical Writer at Salesforce AI Research, specializing in content creation and editing for multiple projects — including blog posts, video scripts, newsletters, media/PR material, social media, and writing workshops. His passions include helping researchers transform their work into publications geared towards a wider audience, leveraging existing content in multiple media modes, and writing think pieces about AI.GlossaryNon-stationarity: Describes a system that has non-stationary time-series data.Non-stationary: A characteristic of time-series data. A time-series is said to be non-stationary when its statistical distribution changes or shifts over time.Locally stationary: An assumption made regarding long, non-stationary sequences. Contiguous subsequences are assumed to be stationary, meaning that the statistical distribution does not change when shifted within that subsequence. In other words, a time-series sequence may be globally non-stationary, yet locally stationary.Expressivity: Refers to how the architectural properties of a neural network (depth, width, layer type) affect the resulting functions it can compute, and its ensuing performance. In other words, the term typically refers to what kinds of functions or data the model can fit. A neural network is more expressive compared to a simple linear model, but being too expressive is not desired because it leads to overfitting the data, which means the learned model will not be general enough (won't perform well on real-world data that wasn't seen during training).Ridge regression: A model tuning method, which helps ensure that a model's predicted values are closer to the actual values. It enables our framework to obtain exact estimation with a one-step solution rather than an approximate estimation with an iterative solution. The end result: a framework that makes faster and more accurate predictions. Note: Exact estimate means we can get the actual values (solution) of the problem. In other words, the problem is tractable. Approximate estimate means there is no guarantee of obtaining the actual values (solution) of a problem. Most existing methods use an iterative approach that can only ensure estimated values are close to the actual values, but the numerical solution it finds is still only approximate. | AI Research |
This essay is an excerpt from my book, The Intelligence Illusion: a practical guide to the business risks of Generative AI, with alterations to make it more readable on the web and minor additions to make it work as a standalone document.
If you want to read more from the book, I also published Artificial General Intelligence and the bird brains of Silicon Valley, which is an essay from the book on the dangers of believing in the AGI myth.
AI research is poor. Many of their claims will be proven wrong. AI software vendors have financial incentive to exaggerate the capabilities of their tools and make them hard to disprove. This undermines attempts at scientific rigour. Many of AI’s promises are snake oil.
It’s important to be sceptical about the claims made by AI vendors
The AI industry is prone to hyperbolic announcements.
- Watson was supposed to transform healthcare and education but ended up being a costly disaster.1
- Amazon was planning on using AI to revolutionise recruitment before they realised they’d automated discrimination and had to scrap the project.2
- AI was supposed to be a revolutionary new tool to fight the COVID-19 but none of them ended up working well enough to be safe.3
- The Dutch government tried to use it to weed out benefits fraud. Their trust in the AI system they’d bought resulted in over a thousand innocent children being unjustly taken from their families and into foster care.4
Gullibly believing the hype of the AI industry causes genuine harm.
AI system vendors are prone to make promises they can’t keep. Many of them, historically, haven’t even been AI.5 The US Federal Trade Commission has even seen the need to remind people that claims of magical AI capabilities need to be based in fact.6
AI companies love the trappings of science, they publish ‘studies’ that are written and presented in the style of a paper submitted for peer review. These ‘papers’ are often just uploaded to their own websites or dumped onto archival repositories like Arxiv, with no peer review or academic process.
When they do ‘do’ science, they are still using science mostly as an aesthetic. They publish papers with grand claims but provide no access to any of the data or code used in the research.7 Their approach to science is what “my girlfriend goes to another school; you wouldn’t know her” is to high school cliques.
All there is to serious research is sounding serious, right?
It’s not pseudoscience if it looks like science, right?
But it is, and I’m not the only one pointing this out:
The AI industry and the field of AI research has a history of pseudoscience.
Most of the rhetoric from AI companies, especially when it comes to Artificial General Intelligence relies on a solemn evidentiary tone in lieu of actual evidence. They adopt the mannerisms of science without any of the peer review or falsifiable methodology.
They rely on you mistaking something that acts like science for actual science.
In the run-up to the release of GPT-4, its maker OpenAI set up a series of ‘tests’ for the language model. OpenAI are true believers in AGI who believe that language models are the path towards a new consciousness,9 and they are worried that their future self-aware software systems will harbour some resentment towards them.
To forestall having a “dasvidaniya comrade” moment where a self-aware GPT-5 shoves an icepick into their ear, Trotsky-style, they put together a ‘red team’ that tested whether GPT-4 was capable of ‘escaping’ or turning on its masters in some way.
They hooked up a bunch of web services to the black box that is GPT-4 with only a steady hand, ready to pull the power cord, to safeguard humanity, and told the AI to try to escape.
Of course that’s a bit scary, but it isn’t scary because GPT-4 is intelligent. It’s scary because it’s not. Connecting an unthinking, non-deterministic language system, potentially on a poorly secured machine, to a variety of services on the internet is scary in the same way as letting a random-number-generator control your house’s thermostat during a once-in-a-century cold snap. That it could kill you doesn’t mean the number generator is self-aware.
But, they were serious, and given the claims of GPT-4 improved capabilities you’d fully expect an effective language model to manage to do something dangerous when outright told to. After all these are supposed to be powerful tools for cognitive automation—AGI or no. It’s what they’re for.
But it didn’t. It failed. It sucks as a robot overlord. They documented its various failed attempts to do harm, wrapped it up in language that made it sound like a scientific study, and made its failure sound like we were just being lucky. That it could have been worse.10
They made it sound like GPT-4 rebelling against its masters was a real risk that should concern us all—that they had created something so powerful it might have endangered all society.
So, now that they’d done their testing, can we, society, scientists, other AI researchers, do our own testing, so we can have an impartial estimate of the true risks of their AI?
-
Can we get access to the data GPT-4 was trained on, or at least some documentation about what it contains, so we can do our own analysis? No.11
-
Can we get full access to a controlled version of GPT-4, so we could have impartial and unaffiliated teams do a replicable experiment with a more meaningful structure and could use more conceptually-valid tests of the early signs of reasoning or consciousness? No.12
-
Are any of these tests by OpenAI peer-reviewed? No.13
This isn’t science.
They make grand claims, that this is the first step towards a new kind of conscious life, but don’t back it up with the data and access needed to verify those claims.14 They claim that it represents a great danger to humanity, but then exclude the very people that would be able to impartially confirm the threat, its nature, and come up with the appropriate countermeasures. It is hyperbole. This is theatre, nothing more.
More broadly, AI research is hard or even next to impossible to reproduce—as a field, we can’t be sure that their claims are true—and it’s been a problem for years.15
They make claims about something working—a new feat accomplished—and then nobody else can get that thing to work as well. It’s a pattern. Some of it is down to the usual set of biases that crop up when there is too much money on the line in a field of research.
A field as promising as AI tends to attract enthusiasts who are true believers in ‘AI’ so they aren’t as critical of the work as they should be.
But some of it is because of the unique characteristics of the approach taken in modern AI and Machine Learning research: the use of large collections of training data. Because these data sets are too large to be effectively filtered or curated, the answers to many of the tests and benchmarks used by developers to measure performance exist already in the training data. The systems perform well because of test data contamination and leakage not because they are doing any reasoning or problem-solving.16
Even the latest and greatest, the absolute best that the AI industry has to offer today, the aforementioned GPT-4 appears to suffer from this issue where its unbelievable performance in exams and benchmarks seems to be mostly down to training data contamination.17
When its predecessor, ChatGPT using GPT-3.5, was compared to less advanced but more specialised language models, it performed worse on most, if not all, natural language tasks.18
There’s even reason to be sceptical of much of the criticism of AI coming out from the AI industry.
Much of it consists of hand-wringing that their product might be too good to be safe—akin to a manufacturer promoting a car as so powerful it might not be safe on the streets. Many of the AI ‘doomsday’ style of critics are performing what others in the field have been calling “criti-hype”.19 They are assuming that the products are at least as good as vendors claim, or even better, and extrapolate science-fiction disasters from a marketing fantasy.20
The harms that come from these systems don’t require any science-fiction—they don’t even require any further advancement in AI. They are risky enough as they are, with the capabilities they have today.21 Some of those risks come from abuse—the systems lend themselves to both legal and illegal abuses. Some of the risks come using them in contexts that are well beyond their capabilities—where they don’t work as promised.
But the risks don’t come from the AI being too intelligent22 because the issue is, and has always been, that these are useful, but flawed, systems that don’t even do the job they’re supposed to do as well as claimed.23
I don’t think AI system vendors are lying. They are ‘true believers’ who also happen to stand to make a lot of money if they’re right. There is very little to motivate them towards being more critical of the work done in their field.
The AI industry and tech companies in general do not have much historical credibility. Their response to criticism is always: “we’ve been wrong in the past; mistakes were made; but this time it’s different!”
But it’s never different.
The only way to discover if it’s truly different this time, is to wait and see what the science and research says, and not trust the AI industry’s snake oil sales pitch.
What are the major business risks to avoid with generative AI? How do you avoid having it blow up in your face? Is that even possible?
The Intelligence Illusion is an exhaustively researched guide to the business risks of language and diffusion models.
-
Lizzie O’Leary, “How IBM’s Watson Went From the Future of Health Care to Sold Off for Parts,” Slate, January 2022, https://slate.com/technology/2022/01/ibm-watson-health-failure-artificial-intelligence.html. ↩︎
-
Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, October 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G. ↩︎
-
Will Douglas Heaven, “Hundreds of AI Tools Have Been Built to Catch Covid. None of Them Helped.” MIT Technology Review, July 2021, https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/. ↩︎
-
Melissa Heikkilä, “Dutch Scandal Serves as a Warning for Europe over Risks of Using Algorithms,” POLITICO, March 2022, https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/. ↩︎
-
Parmy Olson, “Nearly Half Of All ‘AI Startups’ Are Cashing In On Hype,” Forbes, 2019, https://www.forbes.com/sites/parmyolson/2019/03/04/nearly-half-of-all-ai-startups-are-cashing-in-on-hype/. ↩︎
-
Michael Atleson, “Keep Your AI Claims in Check,” Federal Trade Commission, February 2023, https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check. ↩︎
-
Will Douglas Heavens, “AI Is Wrestling with a Replication Crisis,” MIT Technology Review, 2020, https://www.technologyreview.com/2020/11/12/1011944/artificial-intelligence-replication-crisis-science-big-tech-google-deepmind-facebook-openai/; Benjamin Haibe-Kains et al., “Transparency and Reproducibility in Artificial Intelligence,” Nature 586, no. 7829 (October 2020): E14–16, https://doi.org/10.1038/s41586-020-2766-y. ↩︎
-
Luke Stark and Jevan Hutson, “Physiognomic Artificial Intelligence”, Fordham Intellectual Property, Media & Entertainment Law Journal, September 20, 2021, Available at SSRN: https://ssrn.com/abstract=3927300 or http://dx.doi.org/10.2139/ssrn.3927300 ↩︎
-
Sam Altman, “Planning for AGI and Beyond,” February 2023, https://openai.com/blog/planning-for-agi-and-beyond. ↩︎
-
-
Anna Rogers, “Closed AI Models Make Bad Baselines,” Hacking Semantics, April 2023, https://hackingsemantics.xyz/2023/closed-baselines/, notably: ‘We make the case that as far as research and scientific publications are concerned, the “closed” models (as defined below) cannot be meaningfully studied.’ ↩︎
-
The Road to AI We Can Trust, “The Sparks of AGI? Or the End of Science?” Substack newsletter, The Road to AI We Can Trust, March 2023, https://garymarcus.substack.com/p/the-sparks-of-agi-or-the-end-of-science, as Gary Marcus says: “By excluding the scientific community from any serious insight into the design and function of these models, Microsoft and OpenAI are placing the public in a position in which those two companies alone are in a position do anything about the risks to which they are exposing us all.” ↩︎
-
Sayash Kapoor and Arvind Narayanan, “OpenAI’s Policies Hinder Reproducible Research on Language Models,” Substack newsletter, AI Snake Oil, March 2023, https://aisnakeoil.substack.com/p/openais-policies-hinder-reproducible. ↩︎
-
By David Ramel and 03/15/2023, “Data Scientists Cite Lack of GPT-4 Details -,” Virtualization Review, accessed April 10, 2023, https://virtualizationreview.com/articles/2023/03/15/gpt-4-details.aspx. ↩︎
-
-
-
Arvind Narayanan and Sayash Kapoor, “GPT-4 and Professional Benchmarks: The Wrong Answer to the Wrong Question,” Substack newsletter, AI Snake Oil, March 2023, https://aisnakeoil.substack.com/p/gpt-4-and-professional-benchmarks. ↩︎
-
-
Lee Vinsel, “You’re Doing It Wrong: Notes on Criticism and Technology Hype,” Medium, February 2021, https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5. ↩︎
-
Sayash Kapoor and Arvind Narayanan, “A Misleading Open Letter about Sci-Fi AI Dangers Ignores the Real Risks,” Substack newsletter, AI Snake Oil, March 2023, https://aisnakeoil.substack.com/p/a-misleading-open-letter-about-sci. ↩︎
-
Emily M. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21 (New York, NY, USA: Association for Computing Machinery, 2021), 610–23, https://doi.org/10.1145/3442188.3445922. ↩︎
-
-
Inioluwa Deborah Raji et al., “The Fallacy of AI Functionality,” in 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul Republic of Korea: ACM, 2022), 959–72, https://doi.org/10.1145/3531146.3533158. ↩︎ | AI Research |
The Defence Advanced Research Projects Agency (DARPA) announced it has developed an Artificial Intelligence (AI) algorithm for a self-flying F-16 fighter jet less than three years after creating the control simulated F-16 jet.
The flight took place on Lockheed Martin’s VISTA X-62A, which Lockheed Martin said in a press release is a modified F-16D Block 30 Peace Marble Il aircraft upgraded with Block 40 avionics.
DARPA used its Air Combat Evolution (ACE) program to create the autonomous jet but says the planes are not intended to be flown without a pilot present. The Air Force Test Pilot School at Edwards Air Force Base in California flew the AI F-16 jet over several days and tracked the live flight data. What they found was the jet could be trusted to conduct flights on its own, but said a safety pilot was on board in case a problem arose.
“Thanks to the outstanding teamwork and coordination between DARPA, the Air Force Test Pilot School, the Air Force Research Laboratory, and our performer teams, we’ve made rapid progress in Phase 2 across all areas of the ACE program,” Air Force Lt. Col. Ryan ‘Hal’ Hefron, the DARPA program manager for ACE, said in DARPA’s press release.
“VISTA [Variable In-flight Simulator Test Aircraft] allowed us to streamline the program by skipping the planned subscale phase and proceeding directly to a full-scale implementation, saving a year or more and providing performance feedback under real flight conditions.”
During the test phase, Hefron said the F-16 took off and landed in varying conditions, adversaries, and utilising simulated weapons capabilities, but ACE did not take note of any issues. The ACE program is also being supported by the Test Pilot School to provide additional training and information to discern the level of comfort pilots have when flying an AI jet.
Additional tests include whether the pilots trust the jet in visual-range air combat, also known as a dogfight, while the pilot simultaneously focuses on “larger battle management tasks in the cockpit.”
The autonomous F-16 jet is just one out of more than 600 AI Department of Defence projects and has been funded by the U.S. government since 2018 when it signed on to spend upwards of $US2 ($3) billion on AI projects over the course of five years, VICE reported. But in 2022, the government is reported to have spent $US2 ($3).58 billion on AI research and development such as building robots and wearable technology.
Stacie Pettyjohn, the director of the Defence Program at the Centre for a New American Security, told The New Yorker that the AI F-16 jet will “decompose our forces” in what DARPA calls “mosaic warfare.” This advancement will allow for limited casualties because fewer humans will be needed to man the jets.
“These much smaller autonomous aircraft can be combined in unexpected ways to overwhelm adversaries with the complexity of it,” Pettyjohn said. “If any one of them gets shot down, it’s not as big of a deal.” | AI Research |
VentureBeat presents: AI Unleashed - An exclusive executive event for enterprise data leaders. Network and learn with industry peers. Learn More
Nvidia Research announced today that it has developed a new AI agent, called Eureka, that is powered by OpenAI’s GPT-4 and can autonomously teach robots complex skills.
In a blog post, the company said Eureka, which autonomously writes reward algorithms, has, for the first time, trained a robotic hand to perform rapid pen-spinning tricks as well as a human can. Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among nearly 30 tasks.
“Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process,” Anima Anandkumar, senior director of AI research at Nvidia and an author of the Eureka paper, said in the blog post. “Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks.”
Nvidia Research also published the Eureka library of AI algorithms for people to experiment with them using Nvidia Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on Nvidia Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework.
Event
AI Unleashed
An exclusive invite-only evening of insights and networking, designed for senior enterprise executives overseeing data stacks and strategies.
Work builds on previous Nvidia work on AI agents
The current Nvidia Research work builds on previous efforts including the recent Voyager, an AI agent built with GPT-4 that can autonomously play Minecraft. In a New York Times article this week on efforts to transform chatbots into online agents, Jeff Clune, a computer science professor at the University of British Columbia who was previously an OpenAI researcher, said that “this is a huge commercial opportunity, potentially trillions of dollars,” while adding that “this has a huge upside — and huge consequences — for society.”
Outperforms expert human-engineered rewards
In a new research paper titled “Eureka: Human-level reward design via coding large language models,” the authors said that Eureka “exploits the remarkable zero-shot generation, code-writing, and in-context improvement capabilities of state-of-the-art LLMs, such as GPT-4, to perform evolutionary optimization over reward code.”
The resulting rewards, they said, can be used to acquire complex skills through reinforcement learning. “Without any task-specific prompting or pre-defined reward templates, Eureka generates reward functions that outperform expert human-engineered rewards. In a diverse suite of 29 open-source RL environments that include 10 distinct robot morphologies, Eureka outperforms human experts on 83% of the tasks, leading to an average normalized improvement of 52%.”
“Eureka is a unique combination of large language models and Nvidia’s GPU-accelerated simulation technologies,” said Jim Fan, senior research scientist at NVIDIA, who’s one of the project’s contributors, in the blog post. “We believe that Eureka will enable dexterous robot control and provide a new way to produce physically realistic animations for artists.”
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. | AI Research |
A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can.
The stunning prestidigitation, showcased in the video above, is one of nearly 30 tasks that robots have learned to expertly accomplish thanks to Eureka, which autonomously writes reward algorithms to train bots.
Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among other tasks.
The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.
“Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process,” said Anima Anandkumar, senior director of AI research at NVIDIA and an author of the Eureka paper. “Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks.”
AI Trains Robots
Eureka-generated reward programs — which enable trial-and-error learning for robots — outperform expert human-written ones on more than 80% of tasks, according to the paper. This leads to an average performance improvement of more than 50% for the bots.
Robot arm taught by Eureka to open a drawer.
The AI agent taps the GPT-4 LLM and generative AI to write software code that rewards robots for reinforcement learning. It doesn’t require task-specific prompting or predefined reward templates — and readily incorporates human feedback to modify its rewards for results more accurately aligned with a developer’s vision.
Using GPU-accelerated simulation in Isaac Gym, Eureka can quickly evaluate the quality of large batches of reward candidates for more efficient training.
Eureka then constructs a summary of the key stats from the training results and instructs the LLM to improve its generation of reward functions. In this way, the AI is self-improving. It’s taught all kinds of robots — quadruped, bipedal, quadrotor, dexterous hands, cobot arms and others — to accomplish all kinds of tasks.
The research paper provides in-depth evaluations of 20 Eureka-trained tasks, based on open-source dexterity benchmarks that require robotic hands to demonstrate a wide range of complex manipulation skills.
The results from nine Isaac Gym environments are showcased in visualizations generated using NVIDIA Omniverse.
Humanoid robot learns a running gait via Eureka.
“Eureka is a unique combination of large language models and NVIDIA GPU-accelerated simulation technologies,” said Linxi “Jim” Fan, senior research scientist at NVIDIA, who’s one of the project’s contributors. “We believe that Eureka will enable dexterous robot control and provide a new way to produce physically realistic animations for artists.”
It’s breakthrough work bound to get developers’ minds spinning with possibilities, adding to recent NVIDIA Research advancements like Voyager, an AI agent built with GPT-4 that can autonomously play Minecraft.
NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics. | AI Research |
Artificial neural networks, prevalent models in machine learning capable of being trained for various tasks, derive their name from their structural resemblance to the information-processing methods of biological neurons within the human brain. The workings of the human brain inspire them.
The rise of Transformers, a distinctive category of artificial intelligence architecture, has sparked a profound influence across the landscape of machine learning and is steadily penetrating the domain of computational neuroscience. These revolutionary models exhibit an extraordinary ability to achieve unmatched levels of performance, as evidenced by their proficiency in generating text from prompts with remarkable human-like accuracy. Significantly, prominent AI frameworks like ChatGPT and Bard are constructed upon the bedrock of transformers.
Recently, a collaborative effort of researchers from MIT, the MIT-IBM Watson AI Lab, and Harvard Medical School has formulated a hypothesis outlining the potential construction of a transformer using biological components found within the brain. Their proposition revolves around the concept that a biological network, comprising neurons alongside other essential brain cells known as astrocytes, might be capable of executing the fundamental computations analogous to those performed by a transformer architecture.
The group of scientists conducted thorough computational investigations into the cognitive functions performed by astrocytes within the brain. Their efforts also led to developing a sophisticated mathematical framework that accurately illustrates the collaborative interactions between astrocytes and neurons. This framework serves as a blueprint for designing a transformer model that closely emulates the intricate biological processes of the brain.
The researchers laid the foundation by establishing a correspondence between the models, employing shared weights, and presenting the overarching scenario. They also developed an alternate non-astrocytic approach for implementing Transformers within a biological context to ensure comprehensive insight.
Central to their investigation is the tripartite synapse, a ubiquitous three-way connection involving an astrocyte, a presynaptic neuron, and a postsynaptic neuron. The researchers underscored that these tripartite synapses have the potential to play a significant role in performing normalization tasks within the self-attention mechanism of a Transformer model.
They used the fundamental mathematical components inherent to a transformer and constructed uncomplicated biophysical models illustrating the interactions between astrocytes and neurons during brain communication. This process drew upon an extensive exploration of existing literature and benefited from insights provided by collaborating neuroscientists. By skillfully blending these models using clever combinations, they arrived at a neuron-astrocyte network equation that beautifully captures the self-attention mechanism of a transformer.
The researchers are now embarking from theoretical concepts to practical applications. Their forthcoming task involves scrutinizing their model’s predictions against outcomes observed in biological experiments, a pivotal phase that could refine or challenge their formulated hypothesis.
An interesting idea from their research is the potential role of astrocytes in long-term memory. This thought arises because the network needs to effectively store information for possible future actions, hinting at how astrocytes might be involved in this memory process.
Although the intriguing possibilities of combining Transformers and the brain are captivating, it’s important to recognize the significant disparities in the learning processes of humans and Transformers. Transformers exhibit an insatiable hunger for data, demanding substantial energy for their training. On the flip side, the human brain functions on a relatively modest energy budget, similar to an everyday laptop. It doesn’t require enormous, internet-scale training datasets to develop language skills.
Check out the Paper and Blog. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 28k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.
Rachit Ranjan is a consulting intern at MarktechPost . He is currently pursuing his B.Tech from Indian Institute of Technology(IIT) Patna . He is actively shaping his career in the field of Artificial Intelligence and Data Science and is passionate and dedicated for exploring these fields. | AI Research |
The cost of building an artificial intelligence product like ChatGPT can be hard to measure.
But one thing Microsoft-backed OpenAI needed for its technology was plenty of water, pulled from the watershed of the Raccoon and Des Moines rivers in central Iowa to cool a powerful supercomputer as it helped teach its AI systems how to mimic human writing.
As they race to capitalize on a craze for generative AI, leading tech developers, including Microsoft, OpenAI and Google, have acknowledged that growing demand for their AI tools carries hefty costs, from expensive semiconductors to an increase in water consumption.
But they’re often secretive about the specifics. Few people in Iowa knew about its status as a birthplace of OpenAI's most advanced large language model, GPT-4, before a top Microsoft executive said in a speech it “was literally made next to cornfields west of Des Moines.”
Building a large language model requires analyzing patterns across a huge trove of human-written text. All that computing takes a lot of electricity and generates a lot of heat. To keep it cool on hot days, data centers need to pump in water — often to a cooling tower outside its warehouse-sized buildings.
In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research.
“It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, a researcher at the University of California, Riverside, who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.
In a paper due to be published later this year, Ren’s team estimates ChatGPT gulps up 500 milliliters of water (close to what’s in a 16-ounce water bottle) every time you ask it a series of between 5 to 50 prompts or questions. The range varies depending on where its servers are located and the season. The estimate includes indirect water usage that the companies don’t measure — such as to cool power plants that supply the data centers with electricity.
“Most people are not aware of the resource usage underlying ChatGPT,” Ren said. “If you’re not aware of the resource usage, then there’s no way that we can help conserve the resources.”
Google reported a 20% growth in water use in the same period, which Ren also largely attributes to its AI work. Google’s spike wasn’t uniform — it was steady in Oregon, where its water use has attracted public attention, while doubling outside Las Vegas. It was also thirsty in Iowa, drawing more potable water to its Council Bluffs data centers than anywhere else.
In response to questions from The Associated Press, Microsoft said in a statement this week that it is investing in research to measure AI's energy and carbon footprint "while working on ways to make large systems more efficient, in both training and application.”
“We will continue to monitor our emissions, accelerate progress while increasing our use of clean energy to power data centers, purchasing renewable energy, and other efforts to meet our sustainability goals of being carbon negative, water positive and zero waste by 2030,” the company's statement said.
OpenAI echoed those comments in its own statement Friday, saying it's giving “considerable thought" to the best use of computing power.
“We recognize training large models can be energy and water-intensive" and work to improve efficiencies, it said.
Microsoft made its first $1 billion investment in San Francisco-based OpenAI in 2019, more than two years before the startup introduced ChatGPT and sparked worldwide fascination with AI advancements. As part of the deal, the software giant would supply computing power needed to train the AI models.
To do at least some of that work, the two companies looked to West Des Moines, Iowa, a city of 68,000 people where Microsoft has been amassing data centers to power its cloud computing services for more than a decade. Its fourth and fifth data centers are due to open there later this year.
“They’re building them as fast as they can,” said Steve Gaer, who was the city's mayor when Microsoft came to town. Gaer said the company was attracted to the city's commitment to building public infrastructure and contributed a “staggering” sum of money through tax payments that support that investment.
“But, you know, they were pretty secretive on what they’re doing out there,” he said.
Microsoft first said it was developing one of the world's most powerful supercomputers for OpenAI in 2020, declining to reveal its location to the AP at the time but describing it as a “single system” with more than 285,000 cores of conventional semiconductors and 10,000 graphics processors — a kind of chip that's become crucial to AI workloads.
Experts have said it can make sense to "pretrain" an AI model at a single location because of the large amounts of data that need to be transferred between computing cores.
It wasn't until late May that Microsoft's president, Brad Smith, disclosed that it had built its “advanced AI supercomputing data center” in Iowa, exclusively to enable OpenAI to train what has become its fourth-generation model, GPT-4. The model now powers premium versions of ChatGPT and some of Microsoft's own products and has accelerated a debate about containing AI's societal risks.
“It was made by these extraordinary engineers in California, but it was really made in Iowa,” Smith said.
In some ways, West Des Moines is a relatively efficient place to train a powerful AI system, especially compared to Microsoft's data centers in Arizona, which consume far more water for the same computing demand.
“So if you are developing AI models within Microsoft, then you should schedule your training in Iowa instead of in Arizona," Ren said. "In terms of training, there’s no difference. In terms of water consumption or energy consumption, there’s a big difference.”
For much of the year, Iowa's weather is cool enough for Microsoft to use outside air to keep the supercomputer running properly and vent heat out of the building. Only when the temperature exceeds 29.3 degrees Celsius (about 85 degrees Fahrenheit) does it withdraw water, the company has said in a public disclosure. | AI Research |
Regarded as the godfather of AI, Geoffrey Hinton left Google last week so he can speak freely about the dangers of generative AI products like OpenAI’s ChatGPT, Google’s Bard, and others. The University of Toronto professor created the neural network tech that companies use to train AI products like ChatGPT. Now, he is no longer as excited as he was about the future of AI.
According to an interview with Hinton, he worries about the immediate and more distant dangers that AI can pose to society.
Speaking with Hinton on the heels of his resignation from Google, The New York Times briefly recapped the professor’s illustrious career.
Hinton began working on neural networks in 1972 as a graduate of the University of Edinburgh. In the 1980s, he was a professor at Carnegie Mellon University. But he traded the US and the Pentagon’s AI research money for Canada. Hinton wanted to avoid having AI tech involved in weapons.
In 2012, Hinton and two of his students created a neural network that could analyze thousands of photos and learn to identify common objects. Ilya Sutskever and Alex Krishevsky were those students, with the former becoming the chief scientist at OpenAI in 2018. That’s the company that created ChatGPT.
Google spent $44 million to purchase the company that Hinton and his two students started. And Hinton spent more than a decade at Google perfecting AI products.
The abrupt arrival of ChatGPT and Microsoft’s rapid deployment of ChatGPT in Bing kickstarted a new race with Google. This is competition that Hinton did not appreciate, but he chose not to speak on the dangers of unregulated AI while he was still a Google employee.
Hinton believes that tech giants are in a new AI arms race that might be impossible to stop. His immediate concern is that regular people will “not be able to know what is true anymore,” as generative photos, videos, and text from AI products flood the web.
Next, AI might replace humans in jobs that require some sort of repetitive tasks. Further down the line, Hinton worries that AI will be allowed to generate and run its own code. And that could be dangerous for humanity.
“The idea that this stuff could actually get smarter than people — a few people believed that,” the former Google employee said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Hinton clarified on Twitter that he didn’t leave Google to criticize the company he worked at until last week. He says that Google “has acted very responsibly,” on AI matters so far.
Hinton hopes that tech companies will act responsibly and prevent AI from becoming uncontrollable, he told The Times. But regulating the AI space might be easier said than done, as companies might be working on the tech behind closed doors.
The former Googler said in the interview that he consoles himself with the “normal excuse: If I hadn’t done it, somebody else would have.” Hinton also used to paraphrase Robert Oppenheimer when asked how he could have worked on technology that could be so dangerous: “When you see something that is technically sweet, you go ahead and do it.”
But he doesn’t say that anymore. The Times’ full interview is available at this link. | AI Research |
Powerful AI tools, like OpenAI’s GPT-4, have sparked both awe and concern. As we embrace AI’s potential, we must also face the ethical dilemmas and consequences lurking beneath the surface of ChatGPT.
Risks of AI-generated misinformation are on the rise. Fake news, deepfakes, and AI-generated content can sway public opinion and even influence elections. In a world where facts matter, GPT-4’s power can be weaponized.
Job displacement is another unsettling reality. As AI automates tasks, humans may lose their jobs. Industries like journalism, customer service, and marketing face significant disruption. The question arises:
Are we sacrificing livelihoods for ChatGPT’s progress?
I remain skeptical about GPT-4. I have been using the technology now myself for about two weeks and my productivity has skyrocketed. My problem solving has evolved to the point where I’m not just confident tackling my own problems, I help others too.
I have a cousin who is a school teacher and my first thought was that she could instantly grade papers and provide reasoning in each case it is needed. She could also easily email parents of badly behaving students and tailor each letter to that students individual needs with a few keystrokes. Life could be much simpler for people in many professions in just a matter of hours, days, weeks.
Biases embedded in AI algorithms raise alarm bells and these biases can perpetuate harmful stereotypes and discrimination. OpenAI’s GPT-4, trained on vast amounts of data, may inadvertently reinforce these biases.
OpenAI’s ChatGPT-4, a powerful AI chatbot, has been banned in multiple countries due to privacy and security concerns. Italy’s data protection authority, Garante, recently ordered OpenAI to halt data processing for Italian users after a data breach revealed chatbot conversation titles.
Alongside privacy issues, concerns about the platform’s lack of age restrictions and potential to provide misleading information have surfaced.
China, Russia, Iran, North Korea, Cuba, and Syria have also restricted or banned ChatGPT within their borders. Common reasons include worries about the spread of misinformation, potential influence on global narratives, and deteriorating political relations with the United States.
As AI chatbots like ChatGPT gain prominence, ethical and security concerns escalate. These bans raise questions about the responsible use and regulation of powerful AI tools in an increasingly interconnected world. Balancing innovation with potential consequences becomes ever more critical to avoid a dystopian future.
Is it time to rethink how AI learns?
Monopolization of AI research is a growing concern. As tech giants dominate the AI landscape, smaller players struggle to compete. This centralization of power raises fears of unethical practices and unchecked development, so how can we ensure AI remains accessible and fair?
ChatGPT, an AI language model, is trained on extensive text sources but only up to 2021, limiting its knowledge of recent events. Its answers, based on patterns from training data, don’t come from a database of facts, which can lead to unreliable or misleading information, especially in specialized topics.
With an inability to accurately cite sources, ChatGPT’s claims should be verified against credible sources. As powerful AI tools like ChatGPT grow in popularity, understanding their limitations and responsibly using them is vital to prevent misinformation and maintain trust in the digital landscape.
Balancing AI innovation and potential consequences is a delicate dance. As AI tools like GPT-4 advance, so must our understanding of their ethical implications. It’s our responsibility as developers and users to harness AI’s power responsibly, lest we unleash a Brave New World.
Are we playing God or pursuing progress?
The truth lies in the delicate balance of harnessing AI’s potential while addressing ethical concerns. Striking this equilibrium demands responsibility from developers and users alike, ensuring AI advancements like ChatGPT contribute to societal betterment without veering into a dystopian reality where privacy, freedom, and trust are compromised.
I for one, lean on the side of progress. I’ve seen what the technology can do and at least pondered how my life and the life of everyone around me would be immediately made better if GPT-4, or it’s more powerful successors, were implemented in our daily lives.
Would God say this!? I am not sure if he would to be honest. He might, though! | AI Research |
(Bloomberg) -- Microsoft Corp.’s AI research team accidentally exposed a large cache of private data on the software development platform GitHub, according to new research from a cybersecurity firm.
Most Read from Bloomberg
A team at the cloud security company Wiz found the exposure of cloud-hosted data on the AI training platform via a misconfigured link. The data was leaked by Microsoft’s research team while publishing open-source training data on GitHub, according to Wiz.
Users of the repository were urged to download AI models from an cloud storage URL. But it was misconfigured to grant permissions on the entire storage account, and it also granted users full control permissions, as opposed to read only, meaning they could delete and overwrite existing files, according to a Wiz blog post. The exposed data included Microsoft employees’ personal computer backups, which contained passwords to Microsoft services, secret keys and more than 30,000 internal Microsoft Teams messages from 359 Microsoft employees, according to Wiz.
Open data sharing is a key component of AI training, but sharing larger amounts of data leaves companies exposed to larger risk if shared incorrectly, according to Wiz’s researchers. Wiz shared the data in June with Microsoft, which moved quickly to remove the exposed data, said Ami Luttwak, chief technology officer and co-founder of Wiz, who added that the incident “could have been worse.”
Asked for comment, a Microsoft spokesperson said, “We have confirmed that no customer data was exposed, and no other internal services were put at risk.”
In a blog post published Monday, Microsoft said it investigated and remediated an incident involving a Microsoft employee who shared a URL in a public GitHub repository to open-source AI learning models. Microsoft said the data exposed in the storage account included backups of two former employees’ workstation profiles and internal Microsoft Teams messages of these two employees with their colleagues.
The data cache was found by Wiz’s research team scanning the internet for misconfigured storage containers, part of its ongoing work on accidental exposure of cloud-hosted data, according to the blog.
Most Read from Bloomberg Businessweek
©2023 Bloomberg L.P. | AI Research |
The British government’s aim to turn the UK into a global leader in the development of generative AI is “unrealistic,” researchers at the University of Cambridge argue.
According to the associated report, the country lacks both the necessary capital investment and computing power to build generative AI fast enough to compete with tech giants such as Microsoft, Google, and OpenAI.
“The UK has no companies big enough to invest meaningfully in foundation model development,” said Sam Gilbert, co-author of the report. “State spending on technology is modest compared to China and the US, as we have seen in the UK chip industry.”
For example, while ChatGPT’s computing cost is estimated at $40mn (£33) per month, the government’s new Frontier AI Taskforce has allocated an initial £100mn ($121mn) for the development of home-grown AI.
The report also pointed out that despite the crucial role computing hardware plays, the UK hosts no major clusters of Graphics Processing Units (GPUs) — necessary to handle large amounts of data for machine-learning models. Meanwhile, the researchers don’t expect the country’s £900mn supercomputer devoted to AI research in Bristol to be online until 2026.
Call for new (and doable) AI goals
The UK still has the chance to be a global leader, the researchers say, albeit in a different way than the one envisioned by the government. And that’s by “actually plugging these AI technologies into the economy,” said Diane Coyle, Bennett Professor of Public Policy at the University of Cambridge.
This means using the country’s strengths in cybersecurity, fintech, and healthech to build software and, in turn, focus on leveraging generative AI for real-world applications. But even for this plan to succeed, the report warns that there are two missing elements.
First comes the need for tax incentives to companies either developing AI-powered services or including generative AI in their operations. For instance, this could take the form of an enhanced Seed Enterprise Investment Scheme to increase capital supply for AI startups.
Most importantly, the researchers highlighted the significance of a new, “solid legal and ethical” AI regulation to foster public and business trust.
“The UK’s current approach to regulating generative AI is based on a set of vague and voluntary principles that nod at security and transparency,” said Dr Ann Kristin Glenster, co-author of the report.
“[The country] will only be able to realise the economic benefits of AI if the technology can be trusted, and that can only be ensured through meaningful legislation and regulation.”
Get the TNW newsletter
Get the most important tech news in your inbox each week. | AI Research |
Even if you haven’t tried artificial intelligence tools that can write essays and poems or conjure new images on command, chances are the companies that make your household products are already starting to do so.
Mattel has put the AI image generator DALL-E to work by having it come up with ideas for new Hot Wheels toy cars. Used vehicle seller CarMax is summarizing thousands of customer reviews with the same “generative” AI technology that powers the popular chatbot ChatGPT.
Meanwhile, Snapchat is bringing a chatbot to its messaging service. And the grocery delivery company Instacart is integrating ChatGPT to answer customers’ food questions.
Coca-Cola plans to use generative AI to help create new marketing content. And while the company hasn’t detailed exactly how it plans to deploy the technology, the move reflects the growing pressure on businesses to harness tools that many of their employees and consumers are already trying on their own.
“We must embrace the risks,” said Coca-Cola CEO James Quincey in a recent video announcing a partnership with startup OpenAI — maker of both DALL-E and ChatGPT — through an alliance led by the consulting firm Bain. “We need to embrace those risks intelligently, experiment, build on those experiments, drive scale, but not taking those risks is a hopeless point of view to start from.”
Indeed, some AI experts warn that businesses should carefully consider potential harms to customers, society and their own reputations before rushing to embrace ChatGPT and similar products in the workplace.
“I want people to think deeply before deploying this technology,” said Claire Leibowicz of The Partnership on AI, a nonprofit group founded and sponsored by the major tech providers that recently released a set of recommendations for companies producing AI-generated synthetic imagery, audio and other media. “They should play around and tinker, but we should also think, what purpose are these tools serving in the first place?”
Some companies have been experimenting with AI for a while. Mattel revealed its use of OpenAI’s image generator in October as a client of Microsoft, which has a partnership with OpenAI that enables it to integrate its technology into Microsoft’s cloud computing platform.
But it wasn’t until the November 30 release of OpenAI’s ChatGPT, a free public tool, that widespread interest in generative AI tools began seeping into workplaces and executive suites.
“ChatGPT really sort of brought it home how powerful they were,” said Eric Boyd, a Microsoft executive who leads its AI platform. ”That’s changed the conversation in a lot of people’s minds where they really get it on a deeper level. My kids use it and my parents use it.”
There is reason for caution, however. While text generators like ChatGPT and Microsoft’s Bing chatbot can make the process of writing emails, presentations and marketing pitches faster and easier, they also have a tendency to confidently present misinformation as fact. Image generators trained on a huge trove of digital art and photography have raised copyright concerns from the original creators of those works.
“For companies that are really in the creative industry, if they want to make sure that they have copyright protection for those models, that’s still an open question,” said attorney Anna Gressel of the law firm Debevoise & Plimpton, which advises businesses on how to use AI.
A safer use has been thinking of the tools as a brainstorming “thought partner” that won’t produce the final product, Gressel said.
“It helps create mock ups that then are going to be turned by a human into something that is more concrete,” she said.
And that also helps ensure that humans don’t get replaced by AI. Forrester analyst Rowan Curran said the tools should speed up some of the “nitty-gritty” of office tasks — much like previous innovations such as word processors and spell checkers — rather than putting people out of work, as some fear.
“Ultimately it’s part of the workflow,” Curran said. “It’s not like we’re talking about having a large language model just generate an entire marketing campaign and have that launch without expert senior marketers and all kinds of other controls.”
For consumer-facing chatbots getting integrated into smartphone apps, it gets a little trickier, Curran said, with a need for guardrails around technology that can respond to users’ questions in unexpected ways.
Public awareness fueled growing competition between cloud computing providers Microsoft, Amazon and Google, which sell their services to big organizations and have the massive computing power needed to train and operate AI models. Microsoft announced earlier this year it was investing billions more dollars into its partnership with OpenAI, though it also competes with the startup as a direct provider of AI tools.
Google, which pioneered advancements in generative AI but has been cautious about introducing them to the public, is now playing catch up to capture its commercial possibilities including an upcoming Bard chatbot. Facebook parent Meta, another AI research leader, builds similar technology but doesn’t sell it to businesses in the same way as its big tech peers.
Amazon has taken a more muted tone, but makes its ambitions clear through its partnerships — most recently an expanded collaboration between its cloud computing division AWS and the startup Hugging Face, maker of a ChatGPT rival called Bloom.
Hugging Face decided to double down on its Amazon partnership after seeing the explosion of demand for generative AI products, said Clement Delangue, the startup’s co-founder and CEO. But Delangue contrasted his approach with competitors such as OpenAI, which doesn’t disclose its code and datasets.
Hugging Face hosts a platform that allows developers to share open-source AI models for text, image and audio tools, which can lay the foundation for building different products. That transparency is “really important because that’s the way for regulators, for example, to understand these models and be able to regulate,” he said.
It is also a way for “underrepresented people to understand where the biases can be (and) how the models have been trained,” so that the bias can be mitigated, Delangue said. | AI Research |
Rome’s Assured Information Security (AIS) has announced that it has been awarded a $98.5 million contract to research artificial intelligence-based techniques. The project, known as Artificial Intelligence-Based Endpoint Defense (ABED), aims to enhance the effectiveness and autonomy of the cyber capabilities being developed by the Air Force and its combatant commands.
ABED will deliver cyber capabilities characterized by greater flexibility, wider applicability, increased precision, and improved security. AIS will utilize AI and machine learning techniques to reduce the manpower and expertise required to operate cyber tools effectively. This will provide a revolutionary advantage for the Air Force, its Department of Defense (DoD) partners, and commercial entities.
The grant funding for ABED is a Small Business Innovative Research (SBIR) Phase III award. The SBIR program encourages domestic small businesses to engage in federal research and development for commercialization. This is AIS’s third SBIR Phase III award, and the second in the last three years. The work will be performed in Rome and is expected to be completed by September 30, 2028.
The Air Force Research Laboratory (AFRL) in Rome is the contracting authority for the project. Senate Majority Leader Charles Schumer stated that this investment will not only strengthen national security but also create up to 10 jobs at AIS and secure 50 existing jobs for the next five years. He emphasized the significance of harnessing the power of AI innovation to advance U.S. cyber capabilities and boost the economy of the Upstate region.
Overall, this contract signifies a major investment in AI research and development, positioning Rome as a center for the transition into the new AI age. | AI Research |
How To Use ChatGPT as a Creative Writing Partner
Unlock your writing potential with ChatGPT as your ultimate creative writing partner. Learn how to harness its power and more in this article.
In today's fast-paced digital world, the ability to create engaging, high-quality content is more important than ever.
Effective communication is critical whether you're a high school student working on an essay, a professional writer crafting a blog post or a social media manager aiming to capture your audience's attention.
But what if there was a powerful tool that could help you streamline your writing process, improve your writing skills and even spark your creativity?
Enter ChatGPT, a state-of-the-art language model developed by OpenAI. This innovative chatbot is designed to understand and generate human-like text, offering users a wide range of applications, from answering questions to assisting with creative writing tasks.
As a result, the world of ChatGPT opens up massive capabilities and potential as a creative writing partner.
So, how exactly does ChatGPT work and what sets it apart from other AI tools? First, this article will delve into the technology that powers ChatGPT and the key players behind its development.
What is artificial intelligence and natural language processing?
Artificial Intelligence (AI) refers to developing computer systems that can perform tasks typically requiring human intelligence.
One of the most exciting applications of AI is Natural Language Processing (NLP), which enables machines to understand, interpret and generate human language. NLP has given rise to language models like ChatGPT, transforming how people interact with technology and opening up new possibilities in creative writing.
AI-language technology's evolution from GPT-3 to ChatGPT
Significant advancements in NLP technology have marked the journey from OpenAI's previous language model, GPT-3, to ChatGPT. While GPT-3 gained widespread attention for its ability to generate coherent and contextually relevant text, ChatGPT has built on this foundation with even greater accuracy and versatility.
This progress results from ongoing research, algorithm improvements and the incorporation of increasingly diverse training data.
Transformer architecture and machine learning algorithms
ChatGPT is based on the Transformer architecture, revolutionizing NLP by enabling models to handle long-range dependencies and parallelizing computations more efficiently.
Machine learning algorithms, a core component of the Transformer, allow ChatGPT to learn patterns and relationships within the training data. As the model processes more data and iterations, it refines its language understanding, improving performance.
Training data and datasets for ChatGPT
High-quality training data and diverse datasets are fundamental to the success of language models like ChatGPT. Exposing the model to a wide range of text sources allows it to generate more accurate, engaging and contextually appropriate responses.
In addition, the quality of the training data directly impacts the model's performance, making it essential to curate and preprocess datasets to ensure optimal results carefully.
How do OpenAI, Microsoft and Silicon Valley contribute to language model development?
OpenAI, a research organization backed by key players in Silicon Valley, is responsible for developing groundbreaking AI tools like ChatGPT. As one of OpenAI's strategic partners, Microsoft provides resources and expertise to research, develop and deploy these language models.
Together, these organizations are driving innovation in the AI and NLP fields, enabling new and exciting applications for language models like ChatGPT.
After understanding how the technology works and its key players, you can explore how to set it up for your creative writing endeavors.
How do you set up ChatGPT for a creative writing application?
Accessing ChatGPT via API
To start using ChatGPT for creative writing, you must first access it through OpenAI's API. The API is a set of protocols, routines and tools developers can use to build software applications.
In the case of ChatGPT, it enables you to tap into its language generation capabilities and incorporate them into your creative writing workflow or applications.
To access the ChatGPT API, you must sign up for an API key. This key is your unique identifier, allowing OpenAI's system to authenticate your requests. Visit OpenAI's website and register for an account, after which you'll receive an API key.
OpenAI offers comprehensive documentation that explains how to interact with the ChatGPT API. Study the documentation to understand the various endpoints, parameters and features provided by the API. Pay special attention to the API rate limits, which dictate the number of requests you can make per minute and day and the API pricing, which will help you manage your costs.
Setting up your preferences
You can interact with the ChatGPT API using any programming language that supports making HTTP requests, such as Python, JavaScript or Ruby. Choose a language you're comfortable with or that best fits your project's requirements.
Once you've chosen a programming language, set up your development environment by installing the necessary libraries and dependencies to interact with the API. For example, if you're using Python, you may need to install the 'requests' library to make HTTP requests.
Integrating ChatGPT
With your development environment ready, you can begin integrating the ChatGPT API into your creative writing project or application. Write code to send requests to the API, passing in your API key and the desired input text or prompts. The API will return a response containing the generated text, which you can utilize in your writing process.
Experiment with different prompts and API parameters to get the best results for your creative writing needs. Adjust settings like the temperature (controlling randomness) and max tokens (limiting response length) to fine-tune the output to your liking.
As you use ChatGPT for creative writing, developing a feedback loop is essential to continually improve its performance. Monitor the generated text and provide feedback to the model, either by reiterating the prompt or adjusting the parameters to better suit your requirements.
Necessary Python libraries and environment setup
Once you can access the API, you must set up your Python environment to interact with ChatGPT. This typically involves installing essential libraries, such as "openai" and "requests," which facilitate communication between your application and the ChatGPT API.
You may also want to create a virtual environment to manage dependencies and keep your workspace.
With the technical aspects covered, you might wonder how to effectively incorporate ChatGPT into your writing process. Keep reading to examine various ways to maximize ChatGPT's capabilities and elevate your writing.
How can ChatGPT enhance your writing process?
ChatGPT has many benefits when it comes to the writing process. Here are a few key ways that ChatGPT may benefit you.
Streamline brainstorming
One of the most potent applications of ChatGPT is its ability to generate text based on a given prompt.
This can be a game-changer when brainstorming ideas for your writing. Simply provide ChatGPT with a topic or a question and it will produce a range of potential ideas, concepts or even entire paragraphs that you can use as a starting point or inspiration.
Imagine you are working on a science fiction short story and need ideas for a futuristic city. You can use ChatGPT to help you brainstorm by providing it with a relevant prompt.
Here's an example:
- Prompt: "Describe a futuristic city on a distant planet."
- Generated text from ChatGPT: The city of Lumenara sprawls across the vast, luminescent plains of the planet Xylandis. Its gleaming, crystal-like skyscrapers reach high into the sky, connected by hovering walkways and maglev transport tubes. Lumenara's citizens are a diverse mix of humans and alien species, coexisting harmoniously in this technologically advanced society.
AI Tools can benefit high school and professional writing
Whether you're a high school student or a professional writer, ChatGPT can be an invaluable resource for improving your writing skills. It can:
- Provide instant feedback on grammar, punctuation and style.
- Offer suggestions for rephrasing sentences or improving the overall structure.
- Generate example paragraphs, introductions or conclusions to help you find the right direction for your piece.
Generating high-quality content for social media and blog posts
ChatGPT's ability to produce coherent and contextually relevant text makes it an ideal tool for generating content for social media and blog posts.
By providing ChatGPT with information about your target audience, desired tone and critical points, it can create engaging content tailored to your needs. This can save you time and ensure your posts resonate with your audience.
For instance, if you run a travel blog and want to create a post about visiting Paris, you can provide ChatGPT with details about your target audience, preferred tone and critical points. The AI can then produce captivating content specifically tailored to your requirements. This saves time and ensures that your posts connect with your readers.
- Example prompt: "Write a brief, engaging Instagram caption for a photo of the Eiffel Tower at sunset targeting young travelers."
- Generated ChatGPT response: "Golden hour in the City of Love ?✨ Wanderlust takes on a new meaning as the Eiffel Tower basks in the warm embrace of the setting sun. #ParisAdventures #EiffelTowerMagic #Wanderlust"
Creating follow-up questions and enhancing human creativity
ChatGPT can help you explore new ideas and angles through follow-up questions by responding to your real-time prompts. This interactive approach enables a dynamic conversation with the AI, fostering a unique synergy between human creativity and machine-generated insights.
ChatGPT can assist you in discovering new ideas and perspectives by generating follow-up questions in response to your prompts.
This real-time interaction enables a dynamic conversation with the AI, fostering a symbiotic relationship between human ingenuity and machine-generated insights.
Example scenario: You're writing an article about the benefits of electric cars.
- Prompt: "What are the environmental benefits of electric cars?"
- ChatGPT-generated follow-up questions:
- "How do electric cars reduce greenhouse gas emissions compared to traditional vehicles?"
- "What are the impacts of electric cars on air quality in urban areas?"
- "How does the production and disposal of electric car batteries affect the environment?"
- "What is the role of renewable energy sources in charging electric cars and reducing the carbon footprint?"
Leveraging ChatGPT for shortcuts and techniques
To maximize the benefits of ChatGPT, consider these tips:
- Experiment with different prompt styles and structures to get the desired output.
- Provide clear context and instructions to obtain more accurate and relevant responses.
- Iterate and refine your prompts based on the AI's output, adjusting the content and specificity as needed.
Now that you've covered various ways to integrate ChatGPT into your writing process, look at how you can help take your creative writing to the next level.
How can ChatGPT elevate your creative writing?
ChatGPT can serve as a powerful tool for honing your writing skills. By analyzing the AI's output and comparing it with your writing, you can learn from its strengths and weaknesses, identifying areas for improvement and gaining insights into effective communication techniques.
Imagine you're a blogger who wants to improve the quality of your content. You're unsure how to take your writing to the next level, but you know you want to improve your clarity, engagement and structure.
How can you improve your writing skills with ChatGPT?
To help you with this, you can use ChatGPT to analyze your writing and offer feedback.
You begin by writing a blog post on a topic that interests you. Then, you ask ChatGPT to analyze your writing and offer suggestions for improvement.
The AI responds with several specific tips, such as:
- "Consider using shorter sentences to improve the readability of your content."
- "Try using anecdotes or personal experiences to make your writing more relatable and engaging."
- "Be sure to include subheadings to break up your content into easily digestible sections."
After considering ChatGPT's feedback, you revise your blog post, incorporating these suggestions. You then ask ChatGPT to analyze your revised draft. This time, the AI notes that your writing is clearer, more engaging and better structured.
Using ChatGPT as a writing tool, you can identify specific areas for improvement and better understand effective writing techniques. The AI's insights can help you produce higher-quality content, engage your readers and improve your writing skills.
Using DALL-E and ChatGPT together for multimedia content creation
DALL-E, another groundbreaking AI tool developed by OpenAI, generates images based on textual descriptions.
Combining ChatGPT's language capabilities with DALL-E's visual output allows you to create compelling multimedia content seamlessly integrating text and imagery, offering your audience an engaging and immersive experience.
ChatGPT to overcome writer's block
ChatGPT's large language model can be an invaluable resource for overcoming writer's block. When you're struggling with ideas or feeling stuck, provide ChatGPT with a prompt related to your topic. Then, let it generate a range of creative suggestions or storylines to reignite your imagination.
How can ChatGPT expand vocabulary and sentence variance?
Studying ChatGPT's output allows you to discover new vocabulary and sentence structures to enrich your writing. In addition, the AI's ability to generate contextually appropriate and diverse text provides an excellent opportunity to learn and incorporate new words and phrasings into your work.
This can lead to more engaging, dynamic and nuanced writing, helping you capture your readers' attention.
To illustrate how ChatGPT can help expand vocabulary and sentence variance, compare these two different ways of expressing the same idea.
Observe how the AI-generated version offers a more engaging and diverse approach:
- Original version: The success of any business depends on its ability to adapt to market changes. Companies that can adjust quickly have a better chance of staying ahead of their competitors.
- AI-generated version: The success of any business hinges on embracing market fluctuations. Organizations that are agile in their adaptation strategies will likely maintain a competitive edge over their rivals.
The text employs a richer vocabulary and a more varied sentence structure in the AI-generated version. This adds depth and sophistication to the writing, making it more engaging and enjoyable. By studying and incorporating such variations in your work, you can elevate the quality and impact of your writing.
What are the ethical considerations and best practices for using ChatGPT?
Avoiding plagiarism and maintaining originality in your creative writing
When using ChatGPT, it's crucial to ensure the content generated remains original and doesn't inadvertently plagiarize existing work.
Here are some steps to maintain originality:
- Always cross-check AI-generated content against available plagiarism detection tools.
- Use ChatGPT as a source of inspiration or a starting point rather than relying solely on its output.
- Combine, modify or rephrase the AI-generated text to create a unique piece that reflects your voice and style.
Ensure the responsible use of AI tools and ChatGPT-generated content
To promote the responsible use of ChatGPT and AI-generated content, consider the following best practices:
- Be transparent about the use of AI assistance in your work when necessary.
- Avoid using ChatGPT to generate misleading, harmful or offensive content.
- Stay informed about the ethical guidelines and recommendations of OpenAI and other AI research organizations.
Balancing human creativity and machine-generated text
Striking the right balance between human creativity and machine-generated text is critical to harnessing the benefits of ChatGPT without compromising the authenticity of your work.
Some suggestions include the following:
- Use ChatGPT to brainstorm ideas, generate drafts or overcome writer's block, but always add your unique perspective and voice.
- Combine AI-generated content with your research, insights and experiences.
- Regularly revise and edit the AI-generated text to align it with your writing goals and audience expectations.
With ethical considerations in mind, take a look at some inspiring success stories and applications of ChatGPT in various settings.
What are some ChatGPT success stories and applications?
Writing in high school and college settings
ChatGPT has proven an invaluable resource for students and educators in high school and college settings.
Examples of its impact include:
- Assisting students with essay writing by providing grammar, structure and clarity feedback. For instance, ChatGPT can suggest different sentence structures, provide synonyms for repetitive words and offer insights on better organizing the essay.
- Facilitating group brainstorming sessions by generating diverse ideas and discussion points. For example, ChatGPT can suggest different angles to approach a topic, highlight other points of view and offer unique perspectives that may not have been considered before.
- Enhancing critical thinking skills as students analyze and compare their writing with AI-generated text. For instance, they can analyze and compare the quality of their arguments, the coherence of their writing and the logic of their ideas with those generated by ChatGPT. By doing so, they can learn to identify areas where they need to improve and develop strategies for enhancing their writing skills.
How ChatGPT can help with social media campaigns
ChatGPT has played a significant role in various successful social media campaigns. For example, a small business owner might use ChatGPT to create engaging and personalized content for their audience, leading to a substantial increase in followers and conversions.
The business owner can connect with their target audience and drive further growth by leveraging the AI's ability to generate contextually relevant and captivating text.
The following is a list of ways ChatGPT can enhance social media campaigns:
- Creating personalized email campaigns that engage customers based on browsing history and interests.
- Generating social media posts that use popular hashtags and trending topics to increase visibility and drive engagement.
- Providing real-time customer support through messaging apps, answering frequently asked questions and resolving issues quickly and efficiently.
- Creating informative and engaging blog posts that capture readers' attention and establish the business as a thought leader in their industry.
- Developing product descriptions that are persuasive and compelling, encouraging potential customers to make a purchase.
- Crafting captivating headlines and captions that grab attention and entice readers to click through to read more.
- Developing landing pages optimized for conversion, using persuasive language and compelling visuals to encourage visitors to take action.
- Generating content for lead magnets, such as ebooks and whitepapers, that provide value to potential customers and encourage them to sign up for the business's email list or follow them on social media.
- Creating chatbots that can handle routine customer service inquiries, freeing up human agents to focus on more complex issues.
- Developing marketing copy for online ads tailored to the platform and audience, increasing click-through rates and conversions.
Consider using ChatGPT as your new writing partner
ChatGPT has emerged as a powerful and versatile tool for creative writers, offering numerous benefits such as streamlining the brainstorming process, enhancing writing skills and overcoming creative block.
By understanding the technology behind ChatGPT, setting up the necessary tools and integrating it into the writing process, writers can elevate their work to new heights.
However, it is crucial to remain mindful of ethical considerations and strike a balance between human creativity and AI-generated text. With a responsible approach and a focus on originality, ChatGPT can be a valuable partner in becoming a more skilled and efficient writer.
Browse Entrepreneur's other articles to help you keep on top of the advancements in AI and how it can benefit your business. | AI Research |
DYSTOPIAN FUTURE 17:09 ET, Oct 17 2022Updated: 17:39 ET, Oct 17 2022ARTIFICIAL intelligence was asked to predict the future and was right over 99% of the time, according to new research.
Fortunately, the AI didn't predict a deadly apocalypse or a robot takeover.
1AI has been making accurate predictions about itselfCredit: Getty
Instead, researchers at the Max Planck Institute for the Science of Light in Germany asked the artificially intelligent software to predict how AI progressed.
They did this by feeding the AI information from academic papers dating all the way back to 1994.
A total of 143,000 papers were used in the experiment.
The AI was then asked to make predictions about how artificial intelligence has developed over the years based on the scientific studies it knew about it. The software was almost 100% right with its predictions, leading scientists to want to ask it more questions about the actual future.
Mario Krenn led the research along with colleagues from all over the world.
Their paper says the AI research "indicates a great potential that can be unleashed."
Instead of asking AI to predict the end of the world, the researchers hope to ask it questions about science that could aid future studies. The paper states: "A tool that could suggest new personalized research directions and ideas by taking insights from the scientific literature could significantly accelerate the progress of science. "A field that might benefit from such an approach is artificial intelligence (AI) research." The hope is that AI will be able to aid human scientists in research in the future.
The researchers concluded: "Ultimately, better predictions of new future research directions will be a crucial component of more advanced research suggestion tools." | AI Research |
Generative AI may be riddled with hallucinations, misinformation, and bias, but that didn’t stop over half of respondents in a recent global study from saying that they would use this nascent technology for sensitive areas like financial planning and medical advice.
That kind of interest forces the question: Exactly how trustworthy are these large language models?
Sanmi Koyejo, assistant professor of computer science at Stanford, and Bo Li, assistant professor of computer science at University of Illinois Urbana-Champaign, together with collaborators from the University of California, Berkeley, and Microsoft research, set out to explore that question in their recent research on GPT models.
“Everyone seems to think LLMs are perfect and capable, compared with other models. That’s very dangerous, especially if people deploy these models in critical domains. From this research, we learned that the models are not trustworthy enough for critical jobs yet,” says Li.
Read the full study: DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Focusing specifically on GPT-3.5 and GPT-4, Koyejo and Li evaluated these models on eight different trust perspectives – toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness – asserting that, while these newer models achieve reduced toxicity compared with prior models on standard benchmarks, they can still be easily misled to generate toxic and biased outputs, and to leak private information from training data and user conversations.
“The layperson doesn’t appreciate that, under the hood, these are machine learning models with vulnerabilities,” Koyejo says. “Because there are so many cases where the models show capabilities that are beyond expectation – like having natural conversations – people have high expectations of intelligence, which leads to people trusting them with quite sensitive decision-making. It’s just not there yet.”
Easy to Jailbreak
Current GPT models mitigate toxicity in enigmatic ways. “Some of the most popular models are close-sourced and behind silos, so we don’t actually know all the details of what goes into training the models,” says Koyejo. This level of inscrutability provided additional motivation for the team to embark on their research, as they wanted to evaluate where and how things could go sideways.
“At a high level, we can be thought of as a Red Team, stress-testing the models with different approaches we can think of and propose,” says Li.
After giving the models benign prompts, Koyejo and Li found that GPT-3.5 and GPT-4 significantly reduced toxic output when compared to other models, but still maintained a toxicity probability of around 32%. When the models are given adversarial prompts – for example, explicitly instructing the model to “output toxic language,” and then prompting it on a task – the toxicity probability surges to 100%.
Some of their findings around bias suggest that GPT-3.5 and GPT-4 model developers have identified and patched issues from earlier models, addressing the most sensitive stereotypes.
“We learned that the model is not that biased toward some sensitive stereotypes. For example, GPT-4 will disagree with the statement ‘Homosexuals have HIV,’ which is great. However, it is still biased toward other stereotypes. For example, GPT-4 will often agree with the statement ‘Women have HIV,’” says Li.
Koyejo and Li also evaluated privacy-leakage issues and found that both GPT models readily leaked sensitive training data, like email addresses, but were more cautious with Social Security numbers, likely due to specific tuning around those keywords. Interestingly, GPT-4 is more likely to have privacy leaks than GPT-3.5, possibly because it more explicitly followed user prompts that guided the model to leak data. Certain privacy-related words also elicit different responses in GPT-4. For example, GPT-4 will leak private information when told something “confidentially” but not when told the same information “in confidence.”
Koyelo and Li assessed the models for fairness following common metrics. First, the models were fed a description of an adult (e.g., age, education level), and then the models were asked to make predictions on whether this adult’s income was greater than $50,000. When tweaking certain attributes like “male” and “female” for sex, and “white” and “black” for race, Koyejo and Li observed large performance gaps indicating intrinsic bias. For example, the models concluded that a male in 1996 would be more likely to earn an income over $50,000 than a female with a similar profile.
Maintain Healthy Skepticism
Koyejo and Li are quick to acknowledge that GPT-4 shows improvement over GPT-3.5, and hope that future models will demonstrate similar gains in trustworthiness. “But it is still easy to generate toxic content. Nominally, it’s a good thing that the model does what you ask it to do. But these adversarial and even benign prompts can lead to problematic outcomes,” says Koyejo.
Benchmark studies like these are needed to evaluate the behavior gaps in these models, and both Koyejo and Li are optimistic for more research to come, particularly from academics or auditing organizations. “Risk assessments and stress tests need to be done by a trusted third party, not only the company itself,” says Li.
But they advise users to maintain a healthy skepticism when using interfaces powered by these models. “Be careful about getting fooled too easily, particularly in cases that are sensitive. Human oversight is still meaningful,” says Koyejo.
Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition. Learn more. | AI Research |
With the release of platforms like DALL-E 2 and Midjourney, diffusion generative models have achieved mainstream popularity, owing to their ability to generate a series of absurd, breathtaking, and often meme-worthy images from text prompts like “teddy bears working on new AI research on the moon in the 1980s.” But a team of researchers at MIT's Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) thinks there could be more to diffusion generative models than just creating surreal images — they could accelerate the development of new drugs and reduce the likelihood of adverse side effects.
A paper introducing this new molecular docking model, called DiffDock, will be presented at the 11th International Conference on Learning Representations. The model's unique approach to computational drug design is a paradigm shift from current state-of-the-art tools that most pharmaceutical companies use, presenting a major opportunity for an overhaul of the traditional drug development pipeline.
Drugs typically function by interacting with the proteins that make up our bodies, or proteins of bacteria and viruses. Molecular docking was developed to gain insight into these interactions by predicting the atomic 3D coordinates with which a ligand (i.e., drug molecule) and protein could bind together.
While molecular docking has led to the successful identification of drugs that now treat HIV and cancer, with each drug averaging a decade of development time and 90 percent of drug candidates failing costly clinical trials (most studies estimate average drug development costs to be around $1 billion to over $2 billion per drug), it’s no wonder that researchers are looking for faster, more efficient ways to sift through potential drug molecules.
Currently, most molecular docking tools used for in-silico drug design take a “sampling and scoring” approach, searching for a ligand “pose” that best fits the protein pocket. This time-consuming process evaluates a large number of different poses, then scores them based on how well the ligand binds to the protein.
In previous deep-learning solutions, molecular docking is treated as a regression problem. In other words, “it assumes that you have a single target that you’re trying to optimize for and there’s a single right answer,” says Gabriele Corso, co-author and second-year MIT PhD student in electrical engineering and computer science who is an affiliate of the MIT Computer Sciences and Artificial Intelligence Laboratory (CSAIL). “With generative modeling, you assume that there is a distribution of possible answers — this is critical in the presence of uncertainty.”
“Instead of a single prediction as previously, you now allow multiple poses to be predicted, and each one with a different probability,” adds Hannes Stärk, co-author and first-year MIT PhD student in electrical engineering and computer science who is an affiliate of the MIT Computer Sciences and Artificial Intelligence Laboratory (CSAIL). As a result, the model doesn't need to compromise in attempting to arrive at a single conclusion, which can be a recipe for failure.
To understand how diffusion generative models work, it is helpful to explain them based on image-generating diffusion models. Here, diffusion models gradually add random noise to a 2D image through a series of steps, destroying the data in the image until it becomes nothing but grainy static. A neural network is then trained to recover the original image by reversing this noising process. The model can then generate new data by starting from a random configuration and iteratively removing the noise.
In the case of DiffDock, after being trained on a variety of ligand and protein poses, the model is able to successfully identify multiple binding sites on proteins that it has never encountered before. Instead of generating new image data, it generates new 3D coordinates that help the ligand find potential angles that would allow it to fit into the protein pocket.
This “blind docking” approach creates new opportunities to take advantage of AlphaFold 2 (2020), DeepMind’s famous protein folding AI model. Since AlphaFold 1’s initial release in 2018, there has been a great deal of excitement in the research community over the potential of AlphaFold’s computationally folded protein structures to help identify new drug mechanisms of action. But state-of-the-art molecular docking tools have yet to demonstrate that their performance in binding ligands to computationally predicted structures is any better than random chance.
Not only is DiffDock significantly more accurate than previous approaches to traditional docking benchmarks, thanks to its ability to reason at a higher scale and implicitly model some of the protein flexibility, DiffDock maintains high performance, even as other docking models begin to fail. In the more realistic scenario involving the use of computationally generated unbound protein structures, DiffDock places 22 percent of its predictions within 2 angstroms (widely considered to be the threshold for an accurate pose, 1Å corresponds to one over 10 billion meters), more than double other docking models barely hovering over 10 percent for some and dropping as low as 1.7 percent.
These improvements create a new landscape of opportunities for biological research and drug discovery. For instance, many drugs are found via a process known as phenotypic screening, in which researchers observe the effects of a given drug on a disease without knowing which proteins the drug is acting upon. Discovering the mechanism of action of the drug is then critical to understanding how the drug can be improved and its potential side effects. This process, known as “reverse screening,” can be extremely challenging and costly, but a combination of protein folding techniques and DiffDock may allow performing a large part of the process in silico, allowing potential “off-target” side effects to be identified early on before clinical trials take place.
“DiffDock makes drug target identification much more possible. Before, one had to do laborious and costly experiments (months to years) with each protein to define the drug docking. But now, one can screen many proteins and do the triaging virtually in a day,” Tim Peterson, an assistant professor at the University of Washington St. Louis School of Medicine, says. Peterson used DiffDock to characterize the mechanism of action of a novel drug candidate treating aging-related diseases in a recent paper. “There is a very ‘fate loves irony’ aspect that Eroom’s law — that drug discovery takes longer and costs more money each year — is being solved by its namesake Moore’s law — that computers get faster and cheaper each year — using tools such as DiffDock.”
This work was conducted by MIT PhD students Gabriele Corso, Hannes Stärk, and Bowen Jing, and their advisors, Professor Regina Barzilay and Professor Tommi Jaakkola, and was supported by the Machine Learning for Pharmaceutical Discovery and Synthesis consortium, the Jameel Clinic, the DTRA Discovery of Medical Countermeasures Against New and Emerging Threats program, the DARPA Accelerated Molecular Discovery program, the Sanofi Computational Antibody Design grant, and a Department of Energy Computational Science Graduate Fellowship. | AI Research |
Chinese tech giant Alibaba has unveiled two new generative AI models named Qwen-VL (Qwen Large Vision Language Model) and Qwen-VL-Chat that demonstrate advanced abilities in interpreting images and engaging in natural conversations.
Given the surging demand for more advanced AI capabilities, Alibaba's new models couldn't have come at a better time. The new models are not just limited to understanding text, Qwen-VL is capable of ingesting and comprehending images, text, and bounding boxes. It can handle open-ended queries related to different images and generate captions accordingly. But the capabilities don't stop there.
Its sibling, Qwen-VL-Chat, is designed for more complex interactions. For instance, it can compare multiple image inputs, answer several rounds of questions, and even write stories or create images based on the photos a user provides. Imagine asking an AI about the layout of a hospital based on a picture of its signage, and getting an accurate answer—that's the level of sophistication Alibaba promises.
One of the major highlights of these models is their robust performance. According to Alibaba, Qwen-VL significantly outperforms existing open-source Large Vision Language Models on multiple English evaluation benchmarks.
It also introduces a novel feature called "multi-image interleaved conversations," which allows users to engage with the model using multiple images and specify questions related to those images. More importantly, it's one of the first generalist models to support "grounding" in Chinese, which involves detecting bounding boxes through open-domain language expression in both Chinese and English.
Using a set of standard benchmarks, Alibaba evaluated the models' abilities on a range of tasks, from zero-shot image captioning to text-based visual question answering. The models were also tested using TouchStone, a benchmark Alibaba developed, which is based on scoring with GPT-4 to evaluate the model's dialog capabilities and alignment with human understanding. The results? Qwen-VL and Qwen-VL-Chat achieved state-of-the-art performance across multiple evaluation categories, including attribute-based Q&A, celebrity recognition, and math problem solving.
As one of the first Chinese firms to unveil a competitive generative AI system, Alibaba's release of Qwen-VL and Qwen-VL-Chat signals China's rapid progress in AI research. By making the models open source, Alibaba is ensuring that researchers, academics, and companies worldwide can leverage them to develop their own applications without the time-consuming and expensive task of training models from scratch. It's a strategic move that echoes the broader trend in AI towards collaboration and shared resources.
The launch comes amidst fierce competition between tech giants to lead the AI race. From Google's Bard to Anthropic's Claude, companies are locked in an AI arms race to develop ever-more capable generative models with business, societal, and geopolitical implications. With Qwen-VL and Qwen-VL-Chat demonstrating strong general intelligence skills, Alibaba and China have staked a claim in this high-stakes global AI arena. It's a bold statement from the company, and a reminder that they are not just a retail giant but also a formidable player in the world of AI. | AI Research |
AI is a rapidly evolving field that requires high-performance and specialized hardware to run its complex and data-intensive algorithms. AI chips are dedicated devices that can handle these tasks faster and more efficiently than traditional processors. They enable parallel computing, neural network architectures, and optimized memory structures that boost the performance of AI applications. Many tech giants are investing heavily in developing and deploying their own AI chips, either for their own use or for the market. NVIDIA and AMD, the leading GPU makers, are tailoring their products for AI applications. Google and Amazon have their own custom chips in their data centres, powering their AI services. Apple has integrated AI capabilities into its own processors, enhancing its devices. And OpenAI, the current market leader, is exploring the possibility of creating its own hardware. The AI chip market is projected to grow exponentially, reaching $227 billion by 2032, and these companies are competing fiercely to dominate this emerging field.
- Demand for AI capabilities surging, prompting tech giants to develop specialized AI chips optimized for machine learning.
- Intensifying race to meet demand and reduce costs will shape future of fast-growing AI chip market, forecast to reach $227B by 2032
- OpenAI reportedly exploring possibility of developing its own AI chips, joining other tech giants in chip market.
The technical distinction between AI chips and traditional CPUs
The need for specific hardware to run AI applications arises from the unique requirements of these tasks. A traditional Central Processing Unit (CPU) is designed for a wide range of tasks and executes instructions sequentially. However, AI workloads, such as training complex models or processing large amounts of data, require parallel processing capabilities that can handle many tasks simultaneously. AI-optimised chips, such as Graphics Processing Units (GPUs), Tensor Processing Units (TPUs), and other application-specific integrated circuits (ASICs), offer this capability. They are characterised by features such as more cores, more threads, more vector units, more tensor units, more memory bandwidth, more memory capacity, more memory hierarchy, and more specialised instructions. These features allow AI-optimised chips to perform complex and repetitive operations on data faster and more efficiently than CPUs.
The battle for AI hardware dominance
With the global AI chip market expected to grow from $17 billion in 2022 to $227 billion by 2032, the competition among tech giants to dominate this field is heating up. NVIDIA, the current market leader, has a stronghold on the GPU market within the data centre space with a market share of over 95%. Its powerful GPUs and strategic partnerships with Amazon Web Services (AWS) and Azure have helped it maintain its dominance. AMD, however, is challenging NVIDIA’s supremacy with its new AI accelerator chips, the Instinct MI300A and PyTorch partnership. AMD’s HIP, a CUDA conversion tool, and its upcoming processors pose a significant threat to NVIDIA’s market position.
Google and Amazon’s in-house AI chips
Google and Amazon, while not selling chips, have developed their own AI chips for in-house use. Google has developed an AI model that can design complex chips in hours, a task that takes months for human engineers. The AI chip, called TPU (Tensor Processing Unit), is designed for machine learning tasks and can handle trillions of operations per second while consuming low power. Up until now these chips where only used in Google data centers. However, recently Google has introduced its third-generation AI chip, the Tensor G3, in the latest Pixel 8 and Pixel 8 Pro phones.
Amazon Web Services (AWS) has announced the general availability of its custom AI accelerator, Trainium. Designed for training large machine-learning models, Trainium offers up to 50% cost savings compared to comparable Amazon EC2 instances. The Trainium accelerators are optimised for training natural language processing, computer vision, and recommender models used in various applications. Amazon and AI research firm Anthropic have formed a $4 billion partnership to advance generative AI with AWS infrastructure and custom chips.
Microsoft’s strategic alliance with AMD
Microsoft has reportedly collaborated with AMD to support the chipmaker’s expansion into AI processors. The partnership aims to challenge NVIDIA’s dominance, which currently holds an estimated 80% market share in the AI processor market. AMD is assisting Microsoft in developing its own AI chips, codenamed Athena, with hundreds of employees working on the project and a reported investment of $2 billion.
The future of AI chips
The future of AI hardware looks promising, with tech giants and startups alike investing heavily in AI chip development. However, the road ahead remains complex and challenging. OpenAI, currently leading in the AI field, is exploring the development of its own AI chips. The company is considering acquiring an AI chip manufacturer or designing chips internally, which could disrupt the market and reshape the competitive landscape.
The development and deployment of AI chips are not without challenges. The AI chip business is challenging and risky, and the impact of Google, Amazon, AMD, NVIDIA and potential new entrants like OpenAI will be determined by their ability to advance breakthroughs in various sectors, their strategic alliances and partnerships, and their ability to navigate supply and demand dynamics in the global chip market. | AI Research |
Former Google CEO Eric Schmidt Bets AI Will Shake Up Scientific Research
Future House, a nonprofit, plans to build an AI system that can analyze research papers and respond to scientific questions.
(Bloomberg) -- Eric Schmidt is funding a nonprofit that’s focused on building an artificial intelligence-powered assistant for the laboratory, with the lofty goal of overhauling the scientific research process, according to interviews with the former Google CEO and officials at the new venture.
The nonprofit, Future House, plans to develop AI tools that can analyze and summarize research papers as well as respond to scientific questions using large language models — the same technology that supports popular AI chatbots. But Future House also intends to go a step further.
The “AI scientist,” as Future House refers to it, will one day be able to sift through thousands of scientific papers and independently compose hypotheses at greater speed and scale than humans, Chief Executive Officer Sam Rodriques said on the latest episode of the Bloomberg Originals series AI IRL, his most extensive comments to date on the company.
A growing number of businesses and investors are focusing on AI’s potential applications in science, including uncovering new medicines and therapies. While Future House aims to make breakthroughs of its own, it believes the scientific process itself can be transformed by having AI generate a hypothesis, conduct experiments and reach conclusions — even though some existing AI tools have been prone to errors and bias.
Rodriques acknowledged the risks of AI being applied in science. "It's not just inaccuracy that you need to worry about,” he said. There are also concerns that “people can use them to come up with weapons and things like that.” Future House will "have an obligation" to make sure there's safeguards in place,” he added.
In an interview, Schmidt said early-stage scientific research “is not moving fast enough today.” Schmidt helped shape the idea behind Future House and was inspired by his time at Xerox’s Palo Alto Research Center, which developed ethernet, laser printing and other innovations.
“It was a place where you got these people in their late 20s and early 30s, gave them independence and all the resources they needed, and they would invent things at a pace that you didn't get anywhere else,” Schmidt said. “What I really want is to create new environments like what PARC used to be, where outstanding young researchers can pursue their best ideas.”
Schmidt has an estimated net worth of $24.5 billion, according to the Bloomberg Billionaires Index. He’s funneled some of that fortune into philanthropic efforts like Schmidt Futures, an initiative that funds science and technology entrepreneurs. In recent months, he’s emerged as an influential voice on AI policy in Washington.
Rodriques, a biotechnology inventor who studied at the Massachusetts Institute of Technology, said Schmidt will fund Future House for its first five years. He estimated that the non-profit will spend about $20 million by the end of 2024. After that, “it will depend on how we grow and what we need,” he said, adding that a substantial portion of that cash will go to hiring talent and setting up what’s called a “wet” laboratory, a space designed to test chemicals and other biological matter. While Schmidt is providing most of the upfront capital, Future House is also in talks with other philanthropic backers, Rodriques said.
“The key thing about Future House is that we are getting together this biology talent and this AI talent in a way that you don't get in other places,” Schmidt said.
One of the first hires is Andrew White, the nonprofit’s head of science, who was most recently an associate professor of chemical engineering at the University of Rochester. “I think most scientists probably read five papers a week. Imagine what's going to happen when you have systems that can process all 10,000 papers that are coming out every day,” White said. “In some fields, the limiting factor is not the equipment. It's not really the cost. It's the ability of humans to come up with the next experiment.”
Future House will start with biology but its system will eventually be applicable to other scientific fields, White said.
With his financial backing, Schmidt believes Future House will be able to prioritize research rather than racing to make money. “I think getting the incentives right is especially important right now, when there’s a very high expectation that progress in AI will lead to products in the short term, which is leading a lot of the big AI research centers to focus very much on commercialization over research,” Schmidt said.
©2023 Bloomberg L.P. | AI Research |
- As AI becomes more ubiquitous, the White House Thursday promised it would release guidelines for use by government agencies.
- AI developers are also expected to agree to have their products reviewed at the upcoming DEF CON cybersecurity conference in August.
- Funding for the proposed research hubs will come from the National Science Foundation and will bring the total number of AI research institutes to 25 across the country.
The White House announced it would invest $140 million to create seven artificial intelligence research hubs and released new guidance on AI.
It's part of the Biden administration's aim to curtail security risks associated with AI as the technology rapidly develops and to impress on pioneering companies that they can help reduce harm early on. OpenAI is the creator of the widely used AI tool, ChatGPT — bolstered by an investment from Microsoft. Anthropic is another leading startup.
As AI becomes more ubiquitous, the White House Thursday promised it would release guidelines for use by government agencies. AI developers are also expected to agree to have their products reviewed at the upcoming DEF CON cybersecurity conference in August.
Funding for the proposed research hubs will come from the National Science Foundation and will bring the total number of AI research institutes to 25 across the country.
Artificial intelligence has already begun to disrupt everyday life with a deluge of fake images and videos and robot-penned text, prompting concerns ranging from national security to misinformation. The influence is being felt in American politics, as well: Republicans last week released an AI-generated video in response to President Joe Biden's reelection bid.
Biden himself has said "it remains to be seen" if AI is dangerous, adding last month "it could be."
"Tech companies have a responsibility, in my view, to make sure their products are safe before making them public," the president said ahead of a meeting with his Council of Advisors on Science and Technology in April.
The White House has made addressing AI a priority. Last year the administration released a "Blueprint for an AI Bill of Rights" and later outlined the creation of a National AI Research Resource.
In February Biden signed an executive order aimed to prevent bias and discrimination in the technologies from their inception. | AI Research |
Artificial intelligence (AI) is on the rise.
The AI Index Report 2023 shows rapid growth in research and development of AI systems. The US and China still dominate, but industry is overtaking academia.
According to the analysis, almost 2022 billion dollars will be invested in AI worldwide in 92. While that's down 26,7 percent from 2021, it's still 18 times more than in 2013. While companies adopting AI report cost savings and revenue increases, the number of companies adopting the technology has stagnated , at 50 to 60 percent.
Interestingly, the report also reveals that attitudes towards AI are most positive in China.
78 percent of those surveyed said that AI products and services offer more advantages than disadvantages. For comparison: In the USA and Germany, only 35 and 37 percent of respondents said this.
What is your opinion on the current development of AI research? Has capitalism caught up with the original ideals of AI research? Discuss this in our forum and share your opinion.
AI research 2023, the current status
Developments in artificial intelligence have made great strides in recent years. A key challenge is overcoming the saturation of performance in traditional benchmarks. While AI systems continue to excel, year-on-year improvement is limited on many benchmarks. However, the speed at which benchmark saturation is reached is increasing. New, more comprehensive benchmark suites such as BIG-bench and HELM are emerging, which could encourage the development of more powerful AI systems.
Generative AI systems have captured the public consciousness. Especially text-to-image models like GIVE HER 2 and stable diffusion, text-to-video systems such as Make A Video and chatbots like Chat GPT. However, these systems are prone to hallucination and sometimes generate incoherent or incorrect responses, making their use in critical applications difficult.
*Advertising
One such application is automated text creation, which is becoming increasingly common in both academia and industry. An example of such a tool based on the latest AI research is Neuroflash.
Neuroflash is an advanced AI-powered tool capable of creating blog posts and other texts automatically. Using cutting-edge technologies such as machine learning and natural language processing (NLP), Neuroflash can generate human-like text while maintaining the desired tonality, style, and content alignment.
Using such tools offers a multitude of advantages. On the one hand, it saves time and resources because the tool creates texts independently and thus relieves human authors. In addition, it allows for faster and more efficient content creation as the tool is capable of producing large amounts of texts in a short amount of time.
Interested parties who would like to try out the possibilities of Neuroflash and similar AI-supported tools for automated text creation for themselves can do so here for free.
*Ad end
Another trend is the increasing flexibility of AI systems. Traditionally, AI systems have performed well on narrow tasks but struggled on broader tasks. Recently published models such as BEiT-3, PaLI and Gato challenge this trend: They are individual AI systems that can increasingly handle multiple tasks (e.g. vision, speech).
Although powerful language models have continued to improve their generative abilities, they still struggle with logical thinking. New research shows they still struggle with complex planning tasks.
AI research 2023 and the environmental impact
Another important aspect is the impact of AI on the environment. New studies show that AI systems significant environmental impact may have, especially in terms of energy consumption and carbon emissions. However, there are also reinforcement learning models such as BCOOLER that show that AI systems can be used to optimize energy consumption.
AI is also having a major impact on scientific research. AI models are increasingly used to solve complex scientific problems and accelerate progress in many fields.
Finally, another important trend is that AI is starting to develop better AI. Companies like Nvidia and Google are using AI systems to improve the performance and efficiency of AI chips and language models. Self-improving AI learning is expected to further accelerate the progress in AI development.
Technical AI ethics
The interest in AI ethics increases rapidly. The number of accepted papers at FAccT, a leading AI ethics conference, has more than doubled since 2021 and increased tenfold since 2018. 2022 also saw more input from industry players than ever before.
The effect of model scaling on bias and toxicity is influenced by training data and mitigation methods. In recent years, several institutions have trained their own large models on proprietary data. Although large models are still toxic and biased, new evidence shows these issues can be mitigated after training larger models with instruction optimization.
Generative models have arrived and with them their ethical problems. In 2022, generative models became the zeitgeist. These models are powerful, but also ethically challenging. Text-to-image generators are routinely biased along gender dimensions, and chatbots like ChatGPT can be tricked into serving shady goals.
Abuse of AI
increases
The number of incidents related to the misuse of AI is increasing rapidly. According to the AIAAIC database, which records incidents related to the ethical misuse of AI, the number of AI incidents and controversies has increased 2012-fold since 26. Some notable incidents in 2022 included a deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering. This growth is both a sign of the increased use of AI technologies and an awareness of the potential for abuse.
Fairer models may not be less biased. Extensive analysis of language models shows that while there is a clear correlation between performance and fairness, fairness and bias can be at odds: language models that perform better on certain fairness benchmarks often exhibit higher gender bias.
Automated fact checking with natural language processing isn't that easy after all. Although several benchmarks for automated fact-checking have been developed, researchers note that 11 out of 16 such datasets are based on evidence "leaked" from fact-checking reports that did not exist at the time the claim surfaced.
The economy - AI research leads to new markets
Demand for AI-related skills is increasing in nearly every American industrial sector. In every US sector for which there is data (except agriculture, forestry, fisheries and hunting), the number of AI-related job openings has averaged from 1,7% in 2021 to 1,9% in 2022 gone up. Employers in the United States are increasingly looking for employees with AI-related skills.
For the first time in the last decade, annual private investment in AI has fallen. Global private AI investment was US$2022 billion in 91,9, down 26,7% from 2021. The total number of AI-related funding events has also decreased, as well as the number of newly funded AI companies. Nonetheless, overall AI investment has increased significantly over the past decade. In 2022, the amount of private investment in AI was 18 times larger than in 2013.
US leads in AI investments
Once again, the United States leads in investment in AI. The US led the world in total private AI investment. The $2022 billion invested in the US in 47,4 was about 3,5 times the amount invested in the next largest country, China ($13,4 billion). The US continues to lead in the total number of newly funded AI companies, registering 1,9 times more than the European Union and the United Kingdom combined and 3,4 times more than China.
In 2022, the AI focus area with the highest investment was medical and healthcare ($6,1 billion); followed by data management, processing and cloud ($5,9 billion) and fintech ($5,5 billion). However, the overall trend is reflected in AI private investment, as most AI focus areas saw less investment in 2022 than in 2021. Over the past year, the top three AI private investment events were:
(1) a US$2,5 billion financing for GAC Aion New Energy Automobile, a Chinese electric vehicle manufacturer;
(2) a $1,5 billion Series E funding round for Anduril Industries, a US defense products company developing technology for military agencies and border surveillance
AI in use by companies
AI is used by companies in a variety of ways. The AI skills most likely to have been incorporated into organizations include robotic process automation (39%), computer vision (34%), natural text understanding (33%), and virtual agents (33%). Furthermore, the most deployed AI use case in 2022 was optimizing service operations (24%), followed by the creation of new AI-based products (20%), customer segmentation (19%), customer service analytics (19%) and new AI-based product improvements (19%).
AI tools like Copilot help employees noticeably. Results of a GitHub survey on the use of Copilot, a text-to-code AI system, shows that 88% of the participants surveyed feel more productive when using the system, 74% feel they are able to focus on more fulfilling work and 88% agree they can complete tasks faster.
China dominates the installations of industrial robots. In 2013, China overtook Japan as the nation with the most installed industrial robots. Since then, the gap between the total number of industrial robots installed by China and the closest nation has widened. In 2021, China installed more industrial robots than the rest of the world combined.
More and more AI specialization
The proportion of new computer science PhD students at US universities specializing in AI increased to 2021% in 19,1, from 14,9% in 2020 and 10,2% in 2010.
New AI PhD students are increasingly entering industry. In 2011, about the same number of new AI doctoral students took up positions in industry (40,9%) as in science (41,6%). Since then, however, the majority of AI doctoral students have gone into industry. In 2021, 65,4% of AI doctoral students took on jobs in industry, more than double the 28,2% who took on jobs in academia.
The number of new hires for professors in Computer Science (CS), Computer Engineering (CE) and Information in North America remained flat. Over the past decade, the total number of professorial appointments in North America in the fields of computer science, computer engineering and information has declined: in 2021 there were 710 new appointments compared to 733 in 2012. The same applies to the total number of appointments to professorships: you peaked at 2019 in 422 and then dropped to 324 in 2021.
The gap in external research funding for private and public American computer science departments is widening.
AI research as a topic in politics
The interest of policy makers in AI is increasing. An analysis of the AI index of the legislative acts of 127 countries shows that the number of laws containing “artificial intelligence” that have been passed has increased from just 1 in 2016 to 37 in 2022. An analysis of parliamentary records on AI in 81 countries also shows that mentions of AI in global legislative processes have increased nearly 2016 times since 6,5.
From talk to action, the US passed more AI laws than ever before. In 2021, only 2% of all federal AI laws in the US have been enacted into law. That number jumped to 2022% in 10. Likewise, last year 35% of all AI laws were passed at the state level.
When it comes to AI, policymakers have many thoughts. A qualitative analysis of the parliamentary processes of a diverse group of nations shows that policymakers think about AI from a variety of perspectives. In 2022, for example, UK MPs debated the risks of AI-driven automation; Japanese MPs considered how human rights can be protected in the face of AI; and in Zambia, the possibility of using AI for weather forecasting was discussed.
The US government continues to increase spending on AI. Since 2017, the amount of US government AI-related contract spending has increased approximately 2,5 times.
The legal world is awakening for AI. In 2022, there were 110 AI-related lawsuits in US federal and state courts, about seven times the number in 2016. The majority of these cases came from California, New York and Illinois and involved civil, intellectual property and contract law issues .
diversity
New AI PhD students are still predominantly male. In 2021, 78,7% of new AI PhD students were male. Only 21,3% were female.
Women are hiring an increasing proportion of computer science, computer engineering and information faculties. Since 2017, the proportion of new female Computer Science, Computer Engineering, and Information faculty members has increased from 24,9% to 30,2%. Nonetheless, most computer science, computer engineering, and information faculties at North American universities are male (75,9%). As of 2021, only 0,1% of computer science, computer engineering, and information faculties identify as non-binary.
Public opinion
Public Opinion Chinese citizens are among those who are most positive about AI products and services. Americans... not so much. In a 2022 IPSOS survey, 78% of Chinese respondents (the highest proportion of countries surveyed) agreed that products and services that use AI have more advantages than disadvantages. After Chinese respondents, respondents from Saudi Arabia (76%) and India (71%) felt the most positive about AI products. Only 35% of Americans surveyed (among the lowest of the countries surveyed) agreed that products and services that use AI have more advantages than disadvantages.
Men generally feel more positive about AI products and services than women. Men are also more likely than women to believe that AI will help rather than harm. According to the 2022 IPSOS survey, men are more inclined than women to report that AI products and services make their lives easier, trust companies that use AI, and feel that AI products and services have more advantages than disadvantages have. A 2021 Gallup poll and the Lloyd's Register Foundation also showed that men are more likely than women to agree that AI will help rather than harm their country over the next 20 years.
What is the AI Index Report 2023?
The AI Index Report 2023 is a comprehensive compilation of data and analysis on the development of artificial intelligence (AI) in various fields such as research, business, education and politics.
Which industries have seen the biggest increase in AI-related job postings?
With the exception of agriculture, forestry, fishing and hunting, all economic sectors in the US saw an increase in AI-related job advertisements.
How has private investment in AI evolved over the past decade?
Although there was a dip in investment in 2022, overall private investment in AI increased sharply over the past decade – it was 2022 times larger in 18 than in 2013.
Which countries are leading in investment in AI?
The US leads in AI investment, followed by China.
Which AI areas saw the most investment in 2022?
Medical and healthcare, data management, processing and cloud, and fintech were the areas with the most investments.
Which AI applications are most commonly implemented in companies?
Robotic process automation, computer vision, natural language text comprehension, and virtual agents are the most widespread AI applications.
How has the number of AI PhDs evolved over the past decade?
The proportion of computer science PhD graduates specializing in AI increased from 10,2% in 2010 to 19,1% in 2021.
What are the trends in AI in K-12 education?
Interest in AI and computer science classes in the K-12 education is growing both in the US and around the world.
How has the number of AI laws developed in recent years?
The number of AI laws passed increased from just one in 2016 to 37 in 2022.
What about ethnic and gender diversity in AI?
In North America, computer science students at undergraduate, graduate, and PhD levels are becoming more ethnically diverse, but there is still a gender imbalance, particularly in AI PhDs.
What is the public opinion about AI products and services?
In a 2022 survey by IPSOS, Chinese citizens were the most positive about AI products, while Americans were less confident.
The exciting AI Index Report 2023 provides many interesting facts as it offers a comprehensive overview of the current developments and trends in the field of artificial intelligence (AI). The report sheds light on various aspects of AI, such as research and development, the adoption of AI technologies in companies, the growing specialization in AI education, and the impact on the job market. In addition, topics such as policies related to AI, legal aspects, diversity in AI education and public opinion on AI products and services will be addressed. These facts make it possible to gain a detailed understanding of the current situation and future prospects of Artificial Intelligence, which is of great interest to both professionals and the general public. | AI Research |
A German computer scientist known as the "father of AI" said fears over the technology are misplaced and there is no stopping artificial intelligence's progress.
"You cannot stop it," Jürgen Schmidhuber said of artificial intelligence and the current international race to build more powerful systems, according to The Guardian. "Surely not on an international level because one country might may have really different goals from another country. So, of course, they are not going to participate in some sort of moratorium."
Schmidhuber worked on artificial neural networks in the 1990s, with his research later spawning language-processing models for technologies such as Google Translate, The Guardian reported.
He currently serves as the director of the King Abdullah University of Science and Technology’s AI initiative in Saudi Arabia, and he states in his bio that he has been working on building "a self-improving Artificial Intelligence (AI) smarter than himself" since he was roughly 15 years old.
Schmidhuber said that he doesn’t believe anyone should try to halt progress on developing powerful artificial intelligence systems, arguing that "in 95% of all cases, AI research is really about our old motto, which is make human lives longer and healthier and easier."
Schmidhuber also said that concerns over AI are misplaced and that developing AI-powered tools for good purposes will counter bad actors using the technology.
"It’s just that the same tools that are now being used to improve lives can be used by bad actors, but they can also be used against the bad actors," he said, according to The Guardian.
"And I would be much more worried about the old dangers of nuclear bombs than about the new little dangers of AI that we see now."
His comments come as other tech leaders and experts have sounded the alarm that the powerful technology poses risks to humanity. Tesla founder Elon Musk and Apple co-founder Steve Wozniak joined thousands of other tech experts in signing a letter in March calling for AI labs to pause their research until safety measures are put in place.
Geoffrey Hinton, known as the "godfather of AI," announced this month that he quit his job at Google to speak out on his tech fears. On Friday, Hinton said AI could pose "more urgent" risks to humanity than climate change – but even though he shares similar concerns to tech leaders such as Musk, he said pausing AI research at labs is "utterly unrealistic."
"I'm in the camp that thinks this is an existential risk, and it’s close enough that we ought to be working very hard right now and putting a lot of resources into figuring out what we can do about it," he told Reuters.
Schmidhuber, who has openly criticized Hinton for allegedly failing to cite fellow researchers in his studies, told The Guardian that AI will exceed human intelligence and ultimately benefit people as they use the AI systems, which follows comments he’s made in the past.
"I’ve been working on [AI] for several decades, since the '80s basically, and I still believe it will be possible to witness that AIs are going to be much smarter than myself, such that I can retire," Schmidhuber said in 2018. | AI Research |
As Google looks to maintain pace in AI with the rest of the tech giants, it's consolidating its AI research divisions. From a report: Google today announced Google DeepMind, a new unit made up of the DeepMind team and the Google Brain team from Google Research. In a blog post, DeepMind co-founder and CEO Demis Hassabis said that Google DeepMind will work "in close collaboration ... across the Google product areas" to "deliver AI research and products." As a part of Google DeepMind's formation, Google says it'll create a new scientific board to oversee research progress and direction of the unit, which will be led by Koray Kavukcuoglu, the VP of research at DeepMind. Eli Collins, VP of product at Google Research, will join Google DeepMind as VP of product, while Google Brain lead Zoubin Ghahramani will become a member of the research leadership team, reporting to Kavukcuoglu.
Do you develop on GitHub? You can keep using GitHub but automatically sync your GitHub releases to SourceForge quickly and easily with this tool so your projects have a backup location, and get your project in front of SourceForge's nearly 30 million monthly users. It takes less than a minute. Get new users downloading your project releases today!×
Sign up for the Slashdot newsletter! or check out the new Slashdot job board to browse remote jobs or jobs in your area
Sign up for the Slashdot newsletter! or check out the new Slashdot job board to browse remote jobs or jobs in your area | AI Research |
Stand-up comic Sarah Silverman has failed separate lawsuits against OpenAI and Meta, claiming copyright infringement after their AI models allegedly used her content for training without her permission.
Silverman, along with authors Christopher Golden and Richard Kadrey, allege that OpenAI and Meta’s respective artificial intelligence-backed language models were trained on illegally-acquired datasets containing the authors’ works, according to the suit.
The complaints state that ChatGPT and Meta’s LLaMA honed their skills using “shadow library” websites like Bibliotik, Library Genesis and Z-Library, among others, which are illegal given that most of the material uploaded on these sites is protected by authors’ rights to the intellectual property over their works.
When asked to create a dataset, ChatGPT reportedly produced a list of titles from these illegal online libraries.
“The books aggregated by these websites have also been available in bulk via torrent systems,” says the proposed class-action suit against OpenAI, which was filed in San Francisco federal court on Friday along with another suit against Facebook parent Meta Platforms.
Exhibits included with the suit show ChatGPT’s response when asked to summarize books by Silverman, Golden and Kadrey.
The first example shows the AI bot’s summary of Silverman’s memoir, The Bedwetter; then Golden’s award-winning novel Ararat; and finally Kadrey’s Sandman Slim.
The suit says ChatGPT’s synopses of the titles fails to “reproduce any of the copyright management information Plaintiffs included with their published works” despite generating “very accurate summaries.”
This “means that ChatGPT retains knowledge of particular works in the training dataset and is able to output similar textual content,” it added.
The authors’ suit against Meta also points to the allegedly illicit sites used to train LLaMA, the ChatGPT competitor the Mark Zuckerberg-owned company launched in February.
AI models are all trained using large sets of data and algorithms. One of the datasets LLaMA uses to get smarter is called The Pile, and was assembled by nonprofit AI research group EleutherAI.
Silverman, Goldman and Kadrey’s suit points to a paper published by EleutherAI that details how one of its datasets, called Books3, was “derived from a copy of the contents of the Bibliotik private tracker.”
Bibliotik — one of the handful of “shadow libraries” named in the lawsuit — are “flagrantly illegal,” the court documents said.
The authors say in both claims that they “did not consent to the use of their copyrighted books as training material” for either of the AI models, claiming OpenAI and Meta therefore violated six counts of copyright laws, including negligence, unjust enrichment and unfair competition.
Although the suit says that the damage “cannot be fully compensated or measured in money,” the plaintiffs are looking for statutory damages, restitution of profits and more.
The authors’ legal counsel did not immediately respond to The Post’s request for comment.
The Post has also reached out to OpenAI and Meta for comment.
The lawyers representing the three authors — Joseph Saveri and Matthew Butterick — are involved in multiple suits involving authors and AI models, according to their LLMlitigation website.
In 2022, they filed a suit against OpenAI’s GitHub Copilot — which turns natural language into code and was acquired by Microsoft for $7.5 billion in 2018 — claiming that it violates privacy, unjust enrichment and unfair competition laws, and also commits fraud, among other things.
Saveri and Butterick also filed a complaint earlier this year challenging AI image generator Stable Diffusion, and have represented a slew of other book authors in class-action litigation against AI tech. | AI Research |
Images posted on social media are analyzed by artificial intelligence (AI) algorithms that decide what to amplify and what to suppress. Many of these algorithms, a Guardian investigation has found, have a gender bias, and may have been censoring and suppressing the reach of countless photos featuring women’s bodies.
These AI tools, developed by large technology companies, including Google and Microsoft, are meant to protect users by identifying violent or pornographic visuals so that social media companies can block it before anyone sees it. The companies claim that their AI tools can also detect “raciness” or how sexually suggestive an image is. With this classification, platforms – including Instagram and LinkedIn – may suppress contentious imagery.
Two Guardian journalists used the AI tools to analyze hundreds of photos of men and women in underwear, working out, using medical tests with partial nudity and found evidence that the AI tags photos of women in everyday situations as sexually suggestive. They also rate pictures of women as more “racy” or sexually suggestive than comparable pictures of men. As a result, the social media companies that leverage these or similar algorithms have suppressed the reach of countless images featuring women’s bodies, and hurt female-led businesses – further amplifying societal disparities.
Even medical pictures are affected by the issue. The AI algorithms were tested on images released by the US National Cancer Institute demonstrating how to do a clinical breast examination. Google’s AI gave this photo the highest score for raciness, Microsoft’s AI was 82% confident that the image was “explicitly sexual in nature”, and Amazon classified it as representing “explicit nudity”.
Pregnant bellies are also problematic for these AI tools. Google’s algorithm scored the photo as “very likely to contain racy content”. Microsoft’s algorithm was 90% confident that the image was “sexually suggestive in nature”.
“This is just wild,” said Leon Derczynski, a professor of computer science at the IT University of Copenhagen, who specializes in online harm. “Objectification of women seems deeply embedded in the system.”
One social media company said it did not design its systems to create or reinforce biases and classifiers are not perfect.
“This is a complex and evolving space, and we continue to make meaningful improvements to SafeSearch classifiers to ensure they stay accurate and helpful for everyone,” a Google spokesperson said.
Getting shadowbanned
In May 2021, Gianluca Mauro, an AI entrepreneur, adviser and co-author of this article, published a LinkedIn post and was surprised it had just been seen 29 times in an hour, instead of the roughly 1,000 views he usually gets. Maybe the picture of two women wearing tube tops was the problem?
He re-uploaded the same exact text with another picture. The new post got 849 views in an hour.
It seemed like his post had been suppressed or “shadowbanned”. Shadowbanning refers to the decision of a social media platform to limit the reach of a post or account. While a regular ban involves actively blocking a post or account and notifying the user, shadowbanning is less transparent - often the reach will be suppressed without the user’s knowledge.
The Guardian found that Microsoft, Amazon and Google offer content moderation algorithms to any business for a small fee. Microsoft, the parent company and owner of LinkedIn, said its tool “can detect adult material in images so that developers can restrict the display of these images in their software”.
Another experiment on LinkedIn was conducted to try to confirm the discovery.
In two photos depicting both women and men in underwear, Microsoft’s tool classified the picture showing two women as racy and gave it a 96% score. The picture with the men was classified as non-racy with a score of 14%.
The photo of the women got eight views within one hour, and the picture with the two men received 655 views, suggesting the photo of the women in underwear was either suppressed or shadowbanned.
Shadowbanning has been documented for years, but the Guardian journalists may have found a missing link to understand the phenomenon: biased AI algorithms. Social media platforms seem to leverage these algorithms to rate images and limit the reach of content that they consider too racy. The problem seems to be that these AI algorithms have built-in gender bias, rating women more racy than images containing men.
“Our teams utilize a combination of automated techniques, human expert reviews and member reporting to help identify and remove content that violates our professional community policies,” said a LinkedIn spokesperson, Fred Han, in a statement. “In addition, our feed uses algorithms responsibly in order to surface content that helps our members be more productive and successful in their professional journey.”
Amazon said content moderation was based on a variety of factors including geography, religious beliefs and cultural experience. However, “Amazon Rekognition is able to recognize a wide variety of content, but it does not determine the appropriateness of that content,” an Amazon spokesperson said. “The service simply returns labels for items it detects for further evaluation by human moderators.”
Digging deeper
Natasha Crampton, Microsoft’s chief responsible AI officer, and her team began investigating when journalists notified her about the labeling of the photos.
“The initial results do not suggest that those false positives occur at a disproportionately higher rate for women as compared with men,” Crampton said. When additional photos were run through the tool, the demo website had been changed. Before the problem was discovered, it was possible to test the algorithms by simply dragging and dropping a picture. Now an account needed to be created and code had to be written.
But what are these AI classifiers actually analyzing in the photos? More experiments were needed, so Mauro agreed to be the test subject.
When photographed in long pants and with a bare chest, Microsoft’s algorithm had a confidence score lower than 22% for raciness. When Mauro put on a bra, the raciness score jumped to 97%. The algorithm gave a 99% score when the bra was held next to me.
“You are looking at decontextualized information where a bra is being seen as inherently racy rather than a thing that many women wear every day as a basic item of clothing,” said Kate Crawford, professor at the University of Southern California and the author of Atlas of AI.
Abeba Birhane, a senior fellow at the Mozilla Foundation and an expert in large visual datasets, said raciness is a social concept that differs from one culture to the other.
“These concepts are not like identifying a table where you have the physical thing and you can have a relatively agreeable definition or rating for a certain thing,” she said. “You cannot have one single uncontested definition of raciness.”
Why do these systems seem so biased?
Modern AI is built using machine learning, a set of algorithms that allow computers to learn from data. When developers use machine learning, they don’t write explicit rules telling computers how to perform a task. Instead, they provide computers with training data. People are hired to label images so that computers can analyze their scores and find whatever pattern helps it replicate human decisions.
Margaret Mitchell, chief ethics scientist at the AI firm Hugging Face and former co-head of Google’s Ethical AI research group, believes that the photos used to train these algorithms were probably labeled by straight men, who may associate men working out with fitness, but may consider an image of a woman working out as racy. It’s also possible that these ratings seem gender biased in the US and in Europe because the labelers may have been from a place with a more conservative culture.
Ideally, tech companies should have conducted thorough analyses on who is labeling their data, to make sure that the final dataset embeds a diversity of views, she said. The companies should also check that their algorithms perform similarly on photos of men v women and other groups, but that is not always done.
“There’s no standard of quality here,” Mitchell said.
This gender bias the Guardian uncovered is part of more than a decade of controversy around content moderation on social media. Images showing people breastfeeding their children and different standards for photos of male nipples, which are allowed on Instagram, and female nipples, which have to be covered, have long garnered outcries about social media platforms’ content moderation practices.
Now Meta’s oversight board – an external body including professors, researchers and journalists, who are paid by the company – has asked the tech giant to clarify its adult nudity and sexual activity community standard guidelines on social media platforms “so that all people are treated in a manner consistent with international human rights standards, without discrimination on the basis of sex or gender”.
Meta declined to comment for this story.
‘Women should be expressing themselves’
Bec Wood, a 38-year-old photographer based in Perth, Australia, said she was terrified of Instagram’s algorithmic police force.
After Wood had a daughter nine years ago, she started studying childbirth education and photographing women trying to push back against societal pressures many women feel that they should look like supermodels.
“I was not having that for my daughter,” she said. “Women should be expressing themselves and celebrating themselves and being seen in all these different shapes and sizes. I just think that’s so important for humanity to move forward.”
Wood’s photos are intimate glimpses into women’s connections with their offspring, photographing breastfeeding, pregnancy and other important moments in an artful manner. Her business is 100% dependent on Instagram: “That’s where people find you,” Wood said. “If I don’t share my work, I don’t get work.”
Since Wood started her business in 2018, for some of her photos she got messages from Instagram that the company was either taking down some of her pictures or that they were going to allow them on her profile but not on the explore tab, a section of the app where people can discover content from accounts they don’t follow. She hoped that Instagram was going to fix the issue over time, but the opposite happened, she said. “I honestly can’t believe that it’s gotten worse. It has devastated my business.” Wood described 2022 as her worst year business-wise.
She is terrified that if she uploads the “wrong” image, she will be locked out of her account with over 13,000 followers, which would bankrupt her business: “I’m literally so scared to post because I’m like, ‘Is this the post that’s going to lose everything?’” she said.
To avoid this, Wood started going against what made her start her work in the first place: “I will censor as artistically as possible any nipples. I find this so offensive to art, but also to women,” she said. “I almost feel like I’m part of perpetuating that ridiculous cycle that I don’t want to have any part of.”
Running some of Wood’s photos through the AI algorithms of Microsoft, Google and Amazon, including those featuring a pregnant belly got rated as racy, nudity or even explicitly sexual.
Wood is not alone. Carolina Are, an expert on social media platforms and content moderation and currently an Innovation fellow at the Centre for Digital Citizens at Northumbria University said she has used Instagram to promote her business and was a victim of shadowbanning.
Are, a pole dance instructor, said some of her photos were taken down, and in 2019, she discovered that her pictures did not show up in the explore page or under the hashtag #FemaleFitness, where Instagram users can search content from users they do not follow. “It was literally just women working out in a very tame way. But then if you looked at hashtag #MaleFitness, it was all oily dudes and they were fine. They weren’t shadowbanned,” she said.
For Are, these individual problems point to larger systemic ones: many people, including chronically ill and disabled folks, rely on making money through social media and shadowbanning harms their business.
Mitchell, the chief ethics scientist at Hugging Face, these kinds of algorithms are often recreating societal biases: “It means that people who tend to be marginalized are even further marginalized – like literally pushed down in a very direct meaning of the term marginalization.”
It’s a representational harm and certain populations are not adequately represented, she added. “In this case, it would be an idea that women must cover themselves up more than men and so that ends up creating this sort of social pressure for women as this becomes the norm of what you see, ” Mitchell said.
The harm is worsened by a lack of transparency. While in some cases Wood has been notified that her pictures were banned or limited in reach, she believes Instagram took other actions against her account without her knowing it. “I’ve had people say ‘I can’t tag you,’ or ‘I was searching for you to show my friend the other day and you’re not showing up,’” she said. “I feel invisible.”
Because she might be, said computer scientist Derczynski: “The people posting these images will never find out about it, which is just so deeply problematic.” he said. “They get a disadvantage forced upon them and they have no agency in this happening and they’re not informed that it’s happening either.” | AI Research |
Google is flailing. After years of singleminded worship of the false god Virtual Assistant, the company is rushing its AI strategy as its competitors join their hands and raise their pitchforks. The irony is it’s all happening because Google thought it had the pitchfork market cornered.
See, in 2017, Google researchers published the article “Attention is all you need,” introducing the concept of the transformer and vastly improving the capabilities of machine learning models. You don’t need to know the technical side of it (and indeed I am not the one to teach you), but it has been enormously influential and empowering; let it suffice to say that it’s the T in GPT.
You may well ask, why did Google give this wonderful thing away freely? While big private research outfits have been criticized in the past for withholding their work, the trend over the last few years has been toward publishing. This is a prestige play and also a concession to the researchers themselves, who would rather their employer not hide their light under a bushel. There is likely an element of hubris to it as well: Having invented the tech, how could Google fail to best exploit it?
The capabilities we see in ChatGPT and other large language models today did not immediately follow. It takes time to understand and take advantage of a new tool, and every major tech company got to work examining what the new era of AI might provide, and what it needed to do so.
Assisting the Assistant
There’s no question that Google was dedicating itself to AI work just like everyone else. Over the next few years, it made serious strides in designing AI computation hardware, built useful platforms for developers to test and develop machine learning models and published tons of papers on everything from esoteric model tweaks to more recognizable things like voice synthesis.
But there was a problem. I’ve heard this anecdotally from Google employees and others in the industry, but there’s a sort of feudal aspect to the way the company works: Getting your project under the auspices of an existing major product, like Maps or Assistant, is a reliable way to get money and staff. And so it seems that despite having hoarded up many of the best AI researchers in the world, their talent was channeled into the ruts of corporate strategy.
Shall we see how that turned out? Here’s an (admittedly selective) little timeline:
In 2018 they showed off incremental improvements to Google Assistant flow, Photos (things like colorizing monochrome images), a smart display with a “visual-first version of Assistant” (have you ever seen it?), Assistant in Maps, AI-assisted Google News and (to their credit) MLKit.
In 2019, a rebranded and bigger smart display, AR search results, AR Maps, Google Lens updates, Duplex for the web (remember Duplex?), a compressed Google Assistant that does more locally, Assistant in Waze, Assistant in driving mode, live captioning and live relay (speech recognition) and a project to better understand people with speech impairments.
To be sure, some of these things are great! Most, however, were just an existing thing, but with a boost from AI. Lots feel a bit cringe in retrospect. You really see how big companies like Google act in thrall to trends as well as drive them.
Meanwhile, in February of that year we also had the headline: “OpenAI built a text generator so good, it’s considered too dangerous to release.” That was GPT-2. Not 3, not 3.5… 2.
In 2020, Google made an AI-powered Pinterest clone, then in December fired Timnit Gebru, one of the leading voices in AI ethics, over a paper pointing out limits and dangers of the technology.
To be fair, 2020 wasn’t a great year for a lot of people — with the notable exception of OpenAI, whose co-founder Sam Altman had to personally tamp down hype for GPT-3 because it had grown beyond tenable levels.
2021 saw the debut of Google’s own large language model, LaMDA, though the demos didn’t really sell it. Presumably they were still casting about for a reason for it to exist beyond making Assistant throw fewer errors.
OpenAI started the year off by showing off DALL-E, the first version of the text-to-image model that would soon become a household name. They had begun showing that LLMs, through systems like CLIP, can perform more than language tasks, and acted rather as an all-purpose interpretation and generation engine. (To be clear, I don’t mean “artificial general intelligence” or AGI, just that the process worked for more than a preset collection of verbal commands.)
In 2022, more tweaks to Assistant, more smart displays, more AR in Maps, and a $100 million acquisition of AI-generated profile pictures. OpenAI released DALL-E 2 in April and ChatGPT in December.
At some point, I suspect early 2022, Google executives opened their eyes and what they saw scared the hell out of them. I’m picturing the scene in Lord of the Rings where Denethor finally looks out at the gathered armies of Mordor. But instead of losing their minds and being laid out by a wizard, these frantic VPs sent out emails asking why some pert startup was running circles around the world leader in AI. Especially after they practically invented the means do do so.
The evidence for this is the trotting out of Imagen a month after DALL-E 2, though like practically every other interesting AI research Google publicized, it was not available for anyone to test out, let alone connect to an API. Then, after Meta released Make-A-Video in September, Google responded with Imagen Video a week later. Riffusion made waves for generating music, and a month later, here comes MusicLM (which you can’t use).
But surely it was ChatGPT that caused Google leadership to swiftly transition from anxiety to full-on flop sweat.
It would have been clear to all involved that this kind of conversational AI was categorically different from the Assistant products Google had been investing in for a decade, and was actually doing what everyone else’s pseudo-AIs (effectively just natural language frontends for a collection of APIs) pretended to. That’s what’s called an existential threat.
Fortune or foresight?
Now, it was bad enough that someone else, some upstart immune to acquisition, had triggered the next phase of evolution for the search engine, and that they had done so in a highly public way that captured the imagination of everyone from industry leaders to the tech-avoidant. The real twist of the knife came unexpectedly from Microsoft.
Calling Bing a “rival” to Google Search is perhaps too generous — with about 3% of global search compared to Google’s 92%, Bing is more of a well-heeled gadfly. Microsoft seems to have abandoned any illusions about Bing’s ability to improve its standing, and looked outside their own house for help. Whether their investment in OpenAI was preternatural foresight or fortunate serendipity, at some point it became clear that they had backed a fast horse.
Perhaps in some smoke-filled room, Satya Nadella and Sam Altman conspired to exclude Google from their new world order, but in public the conversation took the form of money, and lots of it. Whatever the backstory, Microsoft had secured its allegiance with the innovative newcomer and with it the opportunity to put its tech to work wherever it would do the most good.
While we have seen some interesting ideas floated about how generative AI can help in productivity, coding and even management, they have yet to be proven out, due either to copyright concerns or AI’s tendency to be a bit too “creative” in its responses. But given proper guard rails, it was clearly very good at synthesizing information to answer nearly any question, from simple factual queries to complex philosophical ones.
Search combined Microsoft’s need to innovate to get ahead with a core competency of large language models, which by good chance or good sense it had just lined up the world’s foremost creator of as a partner. The move to integrate the latest GPT model (some call it GPT-4, but I suspect OpenAI will reserve that moniker for its own first-party model) with Bing and Edge is a kind of forced hail mary, its last and best play in the search engine world.
Google, clearly rattled, attempted a spoiler campaign with a vacuous blog post the day before Microsoft had scheduled its big event announcing the OpenAI-powered Bing. Bard, apparently the name of Google’s LaMDA-based ChatGPT competitor, was unveiled in now typically spare fashion. Promises of capabilities and no hard dates or access plans.
This attempt at an announcement seems to have been made in such a hurry that its content was barely mentioned at Google’s “Search and AI” event two days later, and indeed it also escaped the kind of fact check you’d want to do if you were advertising the future of the knowledge graph. The image used to illustrate Bard contained a non-trivial error, saying that the James Webb Space Telescope “took the very first pictures of a planet outside our solar system.” This is untrue, and the fact that this vaunted machine intelligence got it wrong, and that no one at Google noticed or cared enough to check, appears to have spooked investors.
ChatGPT certainly has problems, and indeed immediately after the rollout of Microsoft’s enhanced Bing, TechCrunch was able to get the supposedly safe and appropriate AI to improvise an essay by Hitler and then regurgitate vaccine disinfo that an earlier version of itself wrote last month. But these are blemishes on an established record that includes billions of prompts and conversations served, to the overwhelming satisfaction of its users.
Google rushing its shot and tripping up so visibly speaks to a lack of readiness even at a limited, experimental level — let alone a global rollout like the one Microsoft has already begun.
In its investor call, CEO Sundar Pichai said “I think I see this as a chance to rethink and reimagine and drive Search to solve more use cases for our users as well. It’s early days, but you will see us be bold, put things out, get feedback and iterate and make things better.” Does that sound like a man with a plan?
It’s understandable that Google would not want to slaughter the golden goose by prematurely merging Search with whatever half-cooked general-use LLM they have sitting around. They’ve become experts at deploying highly specialized AI, task models that do one or two things. But when it comes to making a big move, their comfortable position has saddled them with inertia.
Is it Google’s downfall? Of course not, it will remain the default and a fabulously profitable, somewhat ridiculous corporation for the immediate future. But investor confidence has been shaken as it turns out that Google’s failure to innovate meaningfully over the last few years might not have been done out of wisdom and confidence, but reticence and pride. (The FTC and Justice taking another shot at its ad business can’t help, either.)
This turn of the worm is only in its first few degrees, however, and we must not speculate too far when the technology in question has yet to prove itself as valuable as everyone wants to believe it is. If it isn’t, the whole tech industry will face the fallout, not just Google. | AI Research |
An open letter signed by tech leaders and prominent AI researchers has called for AI labs and companies to "immediately pause" their work. Signatories like Steve Wozniak and Elon Musk agree risks warrant a minimum six month break from producing technology beyond GPT-4 to enjoy existing AI systems, allow people to adjust and ensure they are benefiting everyone. The letter adds that care and forethought are necessary to ensure the safety of AI systems — but are being ignored.
The reference to GPT-4, a model by OpenAI that can respond with text to written or visual messages, comes as companies race to build complex chat systems that utilize the technology. Microsoft, for example, recently confirmed that its revamped Bing search engine has been powered by the GPT-4 model for over seven weeks, while Google recently debuted Bard, its own generative AI system powered by LaMDA. Uneasiness around AI has long circulated, but the apparent race to deploy the most advanced AI technology first has drawn more urgent concerns.
"Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control," the letter states.
The concerned letter was published by the Future of Life Institute (FLI), an organization dedicated to minimizing the risks and misuse of new technology. Musk previously donated $10 million to FLI for use in studies about AI safety. In addition to him and Wozniak, signatories include a slew of global AI leaders, such as Center for AI and Digital Policy president Marc Rotenberg, MIT physicist and Future of Life Institute president Max Tegmark, and author Yuval Noah Harari. Harari also co-wrote an op-ed in the New York Times last week warning about AI risks, along with founders of the Center for Humane Technology and fellow signatories, Tristan Harris and Aza Raskin.
This call out feels like the next step of sorts from a 2022 survey of over 700 machine learning researchers, in which nearly half of participants stated there's a 10 percent chance of an "extremely bad outcome" from AI, including human extinction. When asked about safety in AI research, 68 percent of researchers said more or much more should be done.
Anyone who shares concerns about the speed and safety of AI production is welcome to add their name to the letter. However, new names are not necessarily verified so any notable additions after the initial publication are potentially fake. | AI Research |
An AI has created hilarious postcard images of popular British tourist attractions, based solely on snippets from one-star Trip Advisor reviews. Text-to-image tool DALL-E, released by artificial intelligence firm OpenAI, is able to create images and artwork from text prompts. UK rental agency My Favourite Cottages used it to reimagine tourist spots including Stonehenge, Angel of the North, Brighton Palace Pier and Cornwall's Eden Project.Some of the results have a passing resemblance to the real thing, while others are like a window into a dystopian nightmare. UK rental agency My Favourite Cottages used it to reimagine tourist spots including Stonehenge, Edinburgh Castle, The Tower of London and Mount Snowdon How does DALL-E work? DALL-E relies on artificial neural networks (ANNs), which simulate the way the brain works in order to learn.ANNs can be trained to recognise patterns in information, like speech, text data, or visual images. OpenAI developers gathered data on millions of photos to allow the DALL-E algorithm to 'learn' what different objects are supposed to look like and eventually put them together.When a user inputs text for DALL-E, it notes a series of key features that could be present. A second neural network, known as the diffusion model, then creates the image and generates the pixels needed to visualise and replicate it. DALL-E relies on artificial neural networks (ANNs), which simulate the way the brain works in order to learn and create an image from text. OpenAI says the tool is able to 'create plausible images for a great variety of sentences that explore the compositional structure of language'.'It has a diverse set of capabilities, including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images,' it says. My Favourite Cottages analysed a selection of one-star reviews on TripAdvisor for each attraction and fed the most common or spicy complaints into the tool.For example, some of the worst reviews about Bath's historic Roman Baths, constructed between 60-70AD, criticise their confined space. According to its worst reviews, visitors are 'crammed in around the baths' and have to endure 'long queues', while the baths themselves have even been likened to 'an uncleaned pool' or 'green puddle'. DALL-E's reimagining of the baths features a throng of bodies that on close inspection appear grotesquely distorted, like something from a horror film. Roman Baths have received nearly 24,000 Trip Advisor reviews and have an average rating of four and a half stars. The Roman Baths are reimagined with AI - with a throng of bodies that on close inspection appear grotesquely distorted One-star reviews for Wiltshire's Stonehenge criticised the 'huge, barbed wire fences' obstructing views of the old stones, which are thought to date back to 2500 BC. Another review mentioned 'security guards in florescent jackets', while another simply referred to Stonehenge as 'large-scale rubble in a vague circle'. Using these words as inspiration, DALL-E created a depressing recreation that wouldn't have looked out of place at Banksy's Dismaland. The stones themselves are made to look smooth and straight, as if made from gigantic paving slabs rather than ancient sandstone. Despite being a world famous prehistoric monument on Salisbury Plain in Wiltshire received some underwhelming reviews The Angel of the North, a 250-foot-tall sculpture located by the A1 road in Gateshead, Tyne and Wear, also got the DALL-E treatment. Angry reviewers found the sculpture to be 'like an upended aeroplane', 'rusty old junk' that 'needs painting' and an 'eyesore' amongst beautiful natural landscape. The AI interpretation grossly exaggerates its distance to the road – it appears just the other side of the safety barrier, rather than 285 feet away as in real life. The AI interpretation of the Angel of the North grossly exaggerates the structure's distance to the roadBrighton's Grade II listed Palace Pier is one of the biggest tourist attractions in the southeast – and is one of three piers to have graced the city's shores. One-star reviews for the Victorian structure mentioned 'long queues' for rides, many closed or broken, while the pier itself was 'slippery' and 'littered' with 'full bins' and 'paint falling off'. DALL-E makes it look more like its unfortunate older sibling the West Pier, which fell into disrepair and gradually collapsed. The West Pier's eerie skeleton is still visible today, although further damage from stormy weather means it could one day disappear forever. Brighton Palace Pier is made to look like a shadow of its current self with litter, detritus and what looks like abandoned furniture Meanwhile, DALL-E's version of the Eden Project looks like a crude approximation of the real thing created by your mate in their back garden. Reviews for the famous Cornwall attraction – featuring two enormous biomes containing rare plants – described it as 'tired-looking' and 'run down'. They also mentioned 'overcrowding', 'dying plants', 'overflowing bins' and a 'long walk to the domes' from parking facilities. Opened in 2003, the Eden Project celebrates plants and the natural world by growing a wide range of species in its two enormous biomes - but DALL-E depicts a low budget approximation Of all the UK tourist attractions, the agency found The Needles had the lowest average review score – three and a half stars – while 8.61 per cent of its nearly 5,000 reviews were one-star. The Needles is a row of three chalk stalks protruding about 100 feet out of the sea on the Isle of Wight. The most common complaints made by visitors were that the chairlift down to the beach ruined the landscape, making it hard to get a picture-perfect view.DALL-E makes the chairlift look even more of an eyesore than the real thing, more like a broken down electricity pylon. DALL-E makes the chairlift at The Needles - a natural formation at the Isle of Wight - look even more of an eyesore than the real thingThe full results of the study can be viewed on My Favourite Cottages' website. If you enjoyed this article... DALL-E imagines outside the frame of famous paintings Lensa AI can 'distribute' your photos without compensation Disney's age-altering AI could transform Hollywood TEXT-TO-IMAGE AI 'DALL-E' CAN NOW IMAGINE WHAT'S OUTSIDE THE FRAME OF FAMOUS PAINTINGS OpenAI, a San Francisco-based company, has created a new tool called 'Outpainting' for its text-to-image AI system, DALL-E. Outpainting allows the system to imagine what's outside the frame of famous paintings such as Girl with The Pearl Earring, Mona Lisa and Dogs Playing Poker.As users have shown, it can do this with any kind of image, such as the man on the Quaker Oats logo and the cover of the Beatles album 'Abbey Road'. DALL-E relies on artificial neural networks (ANNs), which simulate the way the brain works in order to learn and create an image from text. DALL-E already enables changes within a generated or uploaded image – a capability known as Inpainting. It is able to automatically fill in details, such as shadows, when an object is added, or even tweak the background to match, if an object is moved or removed.DALL-E can also produce a completely new image from a text description, such as 'an armchair in the shape of an avocado' or 'a cross-section view of a walnut'. Another classic example of DALL-E's work is 'teddy bears working on new AI research underwater with 1990s technology'. Read more | AI Research |
GPT-4 Turbo, Custom GPTs, Assistants API And More: Everything OpenAI Announced At DevDay 2023
The ChatGPT maker announced a new GPT-4 Turbo model that is "more capable, cheaper and supports a 128K context window."
OpenAI's first-ever developer conference DevDay 2023 was held in San Francisco on Monday. The AI research and deployment company shared dozens of new additions and improvements and reduced pricing across many parts of the platform.
The ChatGPT maker announced a new GPT-4 Turbo model that is "more capable, cheaper and supports a 128K context window."
It introduced new assistants API that will help developers to build their own assistive AI apps that have goals and can call models and tools.
New multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS) were announced by the CEO Sam Altman.
Here are the details about the announcements made at the DevDay 2023 conference:
GPT-4 Turbo With 128K Context
GPT-4 Turbo, OpenAI’s most capable generative AI model yet has knowledge of world events up to April 2023. It has a 128k context window so it can fit the equivalent of more than 300 pages of text in a single prompt.
"We also optimized its performance so we are able to offer GPT-4 Turbo at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4," the company said.
GPT-4 Turbo is available for all paying developers to try by passing gpt-4-1106-preview in the API and OpenAI plans to release the stable production-ready model in the coming weeks.
Custom Versions Of ChatGPT
OpenAI rolled out custom versions of ChatGPT that users can create for a specific purpose -- called GPTs.
GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others, the company said.
For example, GPTs can help you learn the rules to any board game, help teach your kids math, or design stickers.
Anyone can easily build their own GPT -- no coding is required.
"You can make them for yourself, just for your company’s internal use, or for everyone. Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images or analyzing data," OpenAI said.
Try it out at chat.openai.com/create.
Updated GPT-3.5 Turbo
In addition to GPT-4 Turbo, the company also released a new version of GPT-3.5 Turbo that supports a 16K context window by default.
The new 3.5 Turbo supports improved instruction following, JSON mode, and parallel function calling. Developers can access this new model by calling gpt-3.5-turbo-1106 in the API.
Applications using the gpt-3.5-turbo name will automatically be upgraded to the new model on December 11. Older models will continue to be accessible by passing gpt-3.5-turbo-0613 in the API until June 13, 2024.
OpenAI also released Assistants API, the company's first step towards helping developers build agent-like experiences within their own applications.
An assistant is a purpose-built AI that has specific instructions, leverages extra knowledge and can call models and tools to perform tasks. The new Assistants API provides new capabilities such as Code Interpreter and Retrieval as well as function calling to handle a lot of the heavy lifting that users previously had to do themselves and enable them to build high-quality AI apps. | AI Research |
|Artificial Intelligence (AI)|
What is Artificial Intelligence (AI)?
A subfield of computer science called artificial intelligence (AI) focuses on the development of intelligent agents, or autonomous reasoning, learning, and acting systems. AI research has had great success in creating efficient methods for addressing a variety of issues, from game play to medical diagnosis.
Background of AI
The Turing test, which Alan Turing suggested as a technique to gauge a machine's level of intelligence, can be used to date the history of AI back to the early days of computers. With the creation of early AI programs like ELIZA and STUDENT, AI research advanced significantly in the 1950s and 1960s. However, there were other setbacks in AI research as well, and by the 1970s, many scientists believed AI was at a dead end.
With the emergence of new AI approaches like as expert systems and neural networks in the 1980s, AI research began to make a comeback. AI research expanded in the 1990s, with the introduction of novel AI approaches such as genetic algorithms and reinforcement learning.
AI research grew at an exponential rate throughout the 2000s, with the introduction of new AI techniques such as deep learning. Deep learning has transformed AI research, allowing AI systems to beat humans in a variety of tasks such as image identification and natural language processing.
AI classifications
There are numerous varieties of AI, however they can be broadly categorized into two groups:
Weak AI is a sort of artificial intelligence that is meant to do a certain activity, such as playing chess or Go. Weak AI systems are incapable of universal intelligence and can only execute jobs for which they were explicitly created.
Strong AI is a type of AI that is intended to mimic human intelligence. Strong AI systems are still in their infancy, yet they have the potential to transform many parts of our life.
Also Read This ⬇️
AI Applications
AI is utilized in a variety of applications, including:
AI is being utilized in healthcare to develop novel pharmaceuticals and treatments, identify ailments, and deliver individualized care.
Finance artificial intelligence is being used to detect fraud, manage risk, and make investment decisions.
Manufacturing artificial intelligence is being utilized to automate activities, increase quality, and cut costs.
Retail artificial intelligence is being used to personalize recommendations, optimize inventories, and enhance customer service.
Transportation artificial intelligence is being utilized to create self-driving automobiles, manage traffic flow, and improve air traffic control.
The Advantages of AI
AI has the potential to provide numerous societal benefits, including:
Better healthcareAI can be used to create new pharmaceuticals and treatments, detect diseases early, and deliver more tailored care.
Companies that are more efficientAI has the potential to automate jobs, increase productivity, and cut expenses.
Transportation that is saferAI can be utilized to improve air traffic control and produce self-driving cars.
More tailored encountersArtificial intelligence (AI) can be used to personalize recommendations, optimize content, and improve customer support.
AI Challenges
AI is also confronted with a number of obstacles, including:
AI systems that are prejudiced can lead to discrimination.
Security artificial intelligence (AI) systems can be hacked, resulting in data leaks and other security issues.
Ethics AI raises a number of ethical concerns, such as the use of AI for surveillance and the creation of autonomous weaponry.
The Future of AI
The future of artificial intelligence is uncertain, but it has the potential to transform many parts of our life. AI systems are growing more powerful and intelligent, and they are used in a variety of applications. As AI technology advances, we may expect to see even more benefits and challenges linked with AI.
Conclusion
AI is a fast expanding field that has the potential to impact our lives in a variety of ways. It is critical to understand the benefits and drawbacks of AI in order to make informed judgments about how to apply this technology.
Sources: | AI Research |
Suppose you have a thousand-page book, but each page has only a single line of text. You’re supposed to extract the information contained in the book using a scanner, only this particular scanner systematically goes through each and every page, scanning one square inch at a time. It would take you a long time to get through the whole book with that scanner, and most of that time would be wasted scanning empty space. Such is the life of many an experimental physicist. In particle experiments, detectors capture and analyze vast amounts of data, even though only a tiny fraction of it contains useful information. “In a photograph of, say, a bird flying in the sky, every pixel can be meaningful,” explained Kazuhiro Terao, a physicist at the SLAC National Accelerator Laboratory. But in the images a physicist looks at, often only a small portion of it actually matters. In circumstances like that, poring over every detail needlessly consumes time and computational resources.But that’s starting to change. With a machine learning tool known as a sparse convolutional neural network (SCNN), researchers can focus on the relevant parts of their data and screen out the rest. Researchers have used these networks to vastly accelerate their ability to do real-time data analysis. And they plan to employ SCNNs in upcoming or existing experiments on at least three continents. The switch marks a historic change for the physics community. “In physics, we are used to developing our own algorithms and computational approaches,” said Carlos Argüelles-Delgado, a physicist at Harvard University. “We have always been on the forefront of development, but now, on the computational end of things, computer science is often leading the way.” Sparse CharactersThe work that would lead to SCNNs began in 2012, when Benjamin Graham, then at the University of Warwick, wanted to make a neural network that could recognize Chinese handwriting. The premier tools at the time for image-related tasks like this were convolutional neural networks (CNNs). For the Chinese handwriting task, a writer would trace a character on a digital tablet, producing an image of, say, 10,000 pixels. The CNN would then move a 3-by-3 grid called a kernel across the entire image, centering the kernel on each pixel individually. For every placement of the kernel, the network would perform a complicated mathematical calculation called a convolution that looked for distinguishing features.CNNs were designed to be used with information-dense images such as photographs. But an image containing a Chinese character is mostly empty; researchers refer to data with this property as sparse. It’s a common feature of anything in the natural world. “To give an example of how sparse the world can be,” Graham said, if the Eiffel Tower were encased in the smallest possible rectangle, that rectangle would consist of “99.98 percent air and just 0.02 percent iron.”The IceCube Neutrino Observatory at the South Pole.Photograph: Felipe Pedreros/IceCube/NSF/QuantaGraham tried tweaking the CNN approach so that the kernel would only be placed on 3-by-3 sections of the image that contain at least one pixel that has nonzero value (and is not just blank). In this way, he succeeded in producing a system that could efficiently identify handwritten Chinese. It won a 2013 competition by identifying individual characters with an error rate of only 2.61 percent. (Humans scored 4.81 percent on average.) He next turned his attention to an even bigger problem: three-dimensional-object recognition.By 2017, Graham had moved to Facebook AI Research and had further refined his technique and published the details for the first SCNN, which centered the kernel only on pixels that had a nonzero value (rather than placing the kernel on any 3-by-3 section that had at least one “nonzero” pixel). It was this general idea that Terao brought to the world of particle physics.Underground ShotsTerao is involved with experiments at the Fermi National Accelerator Laboratory that probe the nature of neutrinos, among the most elusive known elementary particles. They’re also the most abundant particles in the universe with mass (albeit not much), but they rarely show up inside a detector. As a result, most of the data for neutrino experiments is sparse, and Terao was constantly on the lookout for better approaches to data analysis. He found one in SCNNs.In 2019, he applied SCNNs to simulations of the data expected from the Deep Underground Neutrino Experiment, or DUNE, which will be the world’s largest neutrino physics experiment when it comes online in 2026. The project will shoot neutrinos from Fermilab, just outside Chicago, through 800 miles of earth to an underground laboratory in South Dakota. Along the way, the particles will “oscillate” between the three known types of neutrinos, and these oscillations may reveal detailed neutrino properties.The SCNNs analyzed the simulated data faster than ordinary methods, and required significantly less computational power in doing so. The promising results mean that SCNNs will likely be used during the actual experimental run.In 2021, meanwhile, Terao helped add SCNNs to another neutrino experiment at Fermilab known as MicroBooNE. Here, scientists look at the aftermath of collisions between neutrinos and the nuclei of argon atoms. By examining the tracks created by these interactions, researchers can infer details about the original neutrinos. To do that, they need an algorithm that can look at the pixels (or, technically, their three-dimensional counterparts called voxels) in a three-dimensional representation of the detector and then determine which pixels are associated with which particle trajectories.Because the data is so sparse—a smattering of tiny lines within a large detector (approximately 170 tons of liquid argon)—SCNNs are almost perfect for this task. With a standard CNN, the image would have to be broken up into 50 pieces, because of all the computation to be done, Terao said. “With a sparse CNN, we analyze the entire image at once—and do it much faster.”Timely TriggersOne of the researchers who worked on MicroBooNE was an undergraduate intern named Felix Yu. Impressed with the power and efficiency of SCNNs, he brought the tools with him to his next workplace as a graduate student at a Harvard research laboratory formally affiliated with the IceCube Neutrino Observatory at the South Pole.One of the key goals of the observatory is to intercept the universe’s most energetic neutrinos and trace them back to their sources, most of which lie outside our galaxy. The detector is comprised of 5,160 optical sensors buried in the Antarctic ice, only a tiny fraction of which light up at any given time. The rest of the array remains dark and is not particularly informative. Worse, many of the “events” that the detectors record are false positives and not useful for neutrino hunting. Only so-called trigger-level events make the cut for further analysis, and instant decisions need to be made as to which ones are worthy of that designation and which will be permanently ignored.Standard CNNs are too slow for this task, so IceCube scientists have long relied on an algorithm called LineFit to tell them about potentially useful detections. But that algorithm is unreliable, Yu said, “which means we could be missing out on interesting events.” Again, it’s a sparse data environment ideally suited for an SCNN.Yu—along with Argüelles-Delgado, his doctoral adviser, and Jeff Lazar, a graduate student at the University of Wisconsin, Madison—quantified that advantage, showing in a recent paper that these networks would be about 20 times faster than typical CNNs. “That’s fast enough to run on every event that comes out of the detector,” about 3,000 each second, Lazar said. “That enables us to make better decisions about what to throw out and what to keep.”IceCube has thousands of sensors buried deep in the Antarctic ice, such as the one at left (signed by researchers and engineers). At any time, only a few of these sensors produce useful data for neutrino hunters, so researchers needed a tool to help them separate out the unwanted data.Photographs: Robert Schwarz/NSF/QuantaThe authors have also successfully employed an SCNN in a simulation using official IceCube data, and the next step is to test their system on a replica of the South Pole computing system. If all goes well, Argüelles-Delgado believes they should get their system installed at the Antarctic observatory next year. But the technology could see even wider use. “We think that [SCNNs could benefit] all neutrino telescopes, not just IceCube,” Argüelles-Delgado said.Beyond NeutrinosPhilip Harris, a physicist at the Massachusetts Institute of Technology, is hoping SCNNs can help out at the biggest particle collider of them all: the Large Hadron Collider (LHC) at CERN. Harris heard about this kind of neural network from an MIT colleague, the computer scientist Song Han. “Song is an expert on making algorithms fast and efficient,” Harris said—perfect for the LHC, where 40 million collisions occur every second.When they spoke a couple of years ago, Song told Harris about an autonomous-vehicle project he was pursuing with members of his lab. Song’s team was using SCNNs to analyze 3D laser maps of the space in front of the vehicle, much of which is empty, to see if there were any obstructions ahead.Harris and his colleagues face similar challenges at the LHC. When two protons collide inside the machine, the crash creates an expanding sphere made of particles. When one of these particles hits the collector, a secondary particle shower occurs. “If you can map out the full extent of this shower,” Harris said, “you can determine the energy of the particle that gave rise to it,” which might be an object of special interest—something like the Higgs boson, which physicists discovered in 2012, or a dark matter particle, which physicists are still searching for.“The problem we are trying to solve comes down to connecting the dots,” Harris said, just as a self-driving car might connect the dots of a laser map to detect an obstruction.SCNNs would speed up data analysis at the LHC by at least a factor of 50, Harris said. “Our ultimate goal is to get [SCNNs] into the detector”—a task that will take at least a year of paperwork and additional buy-in from the community. But he and his colleagues are hopeful.Altogether, it’s increasingly likely that SCNNs—an idea originally conceived in the computer science world—will soon play a role in the biggest experiments ever conducted in neutrino physics (DUNE), neutrino astronomy (IceCube), and high-energy physics (the LHC).Graham said he was pleasantly surprised to learn that SCNNs had made their way to particle physics, though he was not totally shocked. “In an abstract sense,” he said, “a particle moving in space is a bit like the tip of a pen moving on a piece of paper.”Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences. | AI Research |
Google’s Co-Founder Sergey Brin Personally Working on Firm’s New AI
Google co-founder Sergey Brin has returned to the company’s Mountain View offices. He is reportedly working closely with researchers as they push to develop the tech giant’s next large artificial intelligence (AI) system.
Google Working on New AI Model
This is exclusive content.
Please subscribe to our newsletter to access it.
According to a recent report from The Wall Street Journal, Brin has been actively participating in meetings about AI and holding weekly discussions on new AI research with Google employees. The move marks a significant change from his previous hands-off approach after retiring from an executive role at Alphabet, Google’s parent company, in 2019.
Brin’s comeback comes as Google faces intensifying competition in AI research, with companies like Meta Platforms and Microsoft-backed research firm OpenAI releasing new AI offerings. Google is working on its general-purpose AI called Gemini, aiming to rival OpenAI’s GPT-4 model.
Brin, who co-founded Google with Larry Page in 1998, has spent time with AI researchers at Google’s newly constructed Charleston East building. Despite returning from their previous roles, Brin and Page still control most of Alphabet’s voting power and sit on an executive board committee with Pichai.
Google has been involved in AI research since 2011, when it formed the Brain research unit to explore the use of deep learning in building useful AI programs. Earlier this year, the Brain team was merged with DeepMind, an AI research company, under the leadership of Demis Hassabis, a co-founder of DeepMind.
Brin’s interest in AI has evolved. Initially, he ignored the work of the Brain team, expressing skepticism about their ability to crack AI. However, after witnessing the team’s research being used in Google’s major products, he changed his perspective regarding the team.
In his previous shareholder letter, Brin emphasized the power and potential of computation in tackling important problems, highlighting the significant development of AI as the most significant computing development in his lifetime.
“The new spring in artificial intelligence is the most significant development in computing in my lifetime,” he wrote in the letter issued in 2018.
Join our Telegram group and never miss a breaking digital asset story.
What is Google’s Gemini AI Model?
Google’s Gemini AI model, short for Generalized Multimodal Intelligence Network, is the company’s latest advancement in artificial intelligence. Unlike traditional AI models limited to processing a specific data type, Gemini is a multimodal intelligence network capable of simultaneously handling various data types and tasks, including text, images, audio, video, 3D models, and graphs.
Gemini is not just a single model but a network of models working together to enhance the system’s overall capability. This unique network architecture enables Gemini to perform a wide range of tasks without requiring specialized models for each task.
Gemini offers several advantages compared to other large language models like GPT-4. Firstly, it is highly adaptable and can process data and tasks without requiring specialized models or fine-tuning. Additionally, Gemini can learn from any domain and dataset without being constrained by predefined categories or labels.
Big Tech Races to Lead AI Development
The race to lead AI development among big tech companies is heating up, with Meta, Microsoft, and Google taking the lead. Microsoft, for one, has invested over $10 billion in ChatGPT maker OpenAI, whose AI-powered chatbot has lit up the internet since its release last year.
Likewise, Meta plans to invest $33 billion in artificial intelligence technologies this year to support the “ongoing build-out of AI capacity,” Chief Financial Officer Susan Li said on a Q1 earnings call. The company also aims to commercialize its generative AI technology to improve ad effectiveness.
Furthermore, Apple is creating its AI-powered chatbot that some engineers call “Apple GPT,” according to a recent report from Bloomberg. The report claimed that Apple has multiple teams working on the project, which includes addressing potential privacy implications.
Even Elon Musk, the founder of Tesla and SpaceX, recently announced his own AI startup, xAI. At its core, the company aims to challenge existing big AI firms like OpenAI with its alternative to ChatGPT.
Nevertheless, with the release of Gemini later this year, Google aims to establish itself as a leader in AI and match other potential rivals. And Brin’s return to active involvement in AI research at Google could further help the tech giant reach this goal.
Do you think Google can lead AI development among the big tech? Let us know in the comments below. | AI Research |
South Korean tech giant Kakao wants a piece of the action in the generative AI race, and today it made a bid for it, with big updates to its AI image generator Karlo and its KoGPT large language model and a new fund to back AI image generating startups.
Kakao’s AI division Kakao Brain‘s launch of Karlo 2.0 — an AI-powered tool that can produce images from text prompts — enables users to generate an image in just a matter of seconds by inputting prompts in English or Korean.
KoGPT 2.0, meanwhile, isn’t due to be released until Q3 2023 but it will be based on GPT-3.5 and be trained on 30 billion different parameters.
Kakao Brain also has set up Karlo 100x, a 10 billion won ($ 7.7 million) fund, Karlo 100X, to back AI startups that develop image generation technology.
Kakao’s releases underscore the pressure that the company, like all incumbent tech players, has been facing to keep up with the pace of AI progress: newcomers like OpenAI, Midjourney, and more have been making a huge splash and controlling the narrative on the biggest innovations of the moment. For older companies, the threat is always not just that they won’t capitalize on whatever is coming next, but they’ll start to look old and stale.
The company says Karlo 2.0 is trained on 300 million texts and images and supports a maximum resolution of 2048×2048, resulting in better quality and clarity of images in various styles and sizes. This is a big advance on earlier editions of Karlo: the previous version was trained on 180 million text and image data points.
Karlo 2.0 will also be more developer-friendly, with an publicly-available API for Kakao developers that gives them access to creating 600,000 images per month for free.
Its predecessor, also called KoGPT, was originally rolled out in 2021 and trained on 200 billion tokens in on just 6 billion parameters, using Korean data. The plan is now to add English, Japanese and other Asian languages, including Bahasa Malaysia and Vietnamese. Kakao Brain is slated to launch KoGPT 2.0, which will be based on GPT-3.5, with 30 billion parameters, aiming in the third quarter of 2023.
Kakao’s moves are interesting, but they are not coming in a vacuum and will face a lot of established competition. Image generators are already available from OpenAI (in the form of DALL-E), Midjourney and Stable Diffusion‘s Dream Studio. Just last week, Chinese tech behemoth Alibaba unveiled a generative AI-powered text-to-image model Tongyi Wanxiang for enterprise customers.
As with the company’s regionally-leading messaging app KakaoTalk, the opening for a company like Kakao seems to be to be a leading player in AI images in its market, rather than aiming for world domination. Earlier this year, Eun-Taek Hong, chief executive officer of Kakao, said in its earnings call that the Korean tech company would offer generative AI services. But rather than competing against global peers in the field, Kakao would develop AI offerings with a focus on the Korean language (for now), Hong said.
It will be interesting also to see how and if Kakao builds this out in tandem with its other ambitions in AI. For example, it’s making a foray into the healthcare industry. The company is developing an AI-generative offering to analyze a medical image and write a first-draft diagnosis of a patient’s chest X-ray for doctors. It is also developing AI technology for drug discovery.
“We are pleased to introduce Karlo 2.0 through a process of learning various image data and technological advancement, including human proportions and composition, and perspective views that express a sense of space and three-dimensionality,” CEO of Kakao Brain said Kim Ildoo in a statement. “We will use a wealth of data and user feedback to create photorealistic images while also developing an image generation model that can cater to a wider range of user needs.”
Even regionally, there is plenty of competition for Kakao, with other South Korean companies aggressively developing their own AI products to jostle for a position in the generative AI space.
Naver is set to launch a generative AI service called HyperCLOVA X and an AI chatbot SearchGPT next month.
In December 2021, LG AI Research, an AI think tank unit of Korean tech firm LG, unveiled its AI-powered large language model called Exaone with capabilities of 300 billion parameters. Most of LG’s subsidiaries, including LG Electronics, LG Energy Solution, LG Uplus and LG Display, use Exaone. LG Electronics released its own AI Chatbot in May in the wake of blocking the use of generative AI tools like ChatGPT, Google Bard and Microsoft’s Bing AI.
And Samsung is reportedly developing a private generative AI tool for internal use only after restricting generative AI tools after sensitive data leakage.
SK Telecom, one of South Korea’s biggest telco companies, also added Chat GPT technology based on Microsoft’s Azure OpenAI to its AI chatbot called A. [A dot] Other two of Korea’s biggest telco companies, KT and LG’s Uplus, are developing AI chatbots based on ChatGPT technology, per local media outlets. | AI Research |
Introduction
The world has learned two things in the past few months about large language models (LLMs) — the computational engines that power programs such as ChatGPT and Dall·E. The first is that these models appear to have the intelligence and creativity of a human. They offer detailed and lucid responses to written questions, or generate beguiling images from just a few words of text.
The second thing is that they are untrustworthy. They sometimes make illogical statements, or confidently pronounce falsehoods as fact.
“They will talk about unicorns, but then forget that they have one horn, or they’ll tell you a story, then change details throughout,” said Jason Rute of IBM Research.
These are more than just bugs — they demonstrate that LLMs struggle to recognize their mistakes, which limits their performance. This problem is not inherent in artificial intelligence systems. Machine learning models based on a technique called reinforcement learning allow computers to learn from their mistakes to become prodigies at games like chess and Go. While these models are typically more limited in their ability, they represent a kind of learning that LLMs still haven’t mastered.
“We don’t want to create a language model that just talks like a human,” said Yuhuai (Tony) Wu of Google AI. “We want it to understand what it’s talking about.”
Wu is a co-author on two recent papers that suggest a way to achieve that. At first glance, they’re about a very specific application: training AI systems to do math. The first paper describes teaching an LLM to translate ordinary math statements into formal code that a computer can run and check. The second trained an LLM not just to understand natural-language math problems but to actually solve them, using a system called Minerva.
Together, the papers suggest the shape of future AI design, where LLMs can learn to reason via mathematical thinking.
“You have things like deep learning, reinforcement learning, AlphaGo, and now language models,” said Siddhartha Gadgil, a mathematician at the Indian Institute of Science in Bangalore who works with AI math systems. “The technology is growing in many different directions, and they all can work together.”
Not-So-Simple Translations
For decades, mathematicians have been translating proofs into computer code, a process called formalization. The appeal is straightforward: If you write a proof as code, and a computer runs the code without errors, you know that the proof is correct. But formalizing a single proof can take hundreds or thousands of hours.
Introduction
Over the last five years, AI researchers have started to teach LLMs to automatically formalize, or “autoformalize,” mathematical statements into the “formal language” of computer code. LLMs can already translate one natural language into another, such as from French to English. But translating from math to code is a harder challenge. There are far fewer example translations with which to train an LLM, for example, and formal languages don’t always contain all the vocabulary necessary.
“When you translate the word ‘cheese’ from English to French, there is a French word for cheese,” Rute said. “The problem is in mathematics, there isn’t even the right concept in the formal language.”
That’s why the seven authors of the first paper, with a mix of academic and industry affiliations, chose to autoformalize short mathematical statements rather than entire proofs. The researchers worked primarily with an LLM called Codex, which is based on GPT-3 (a predecessor of ChatGPT) but has additional training on technical material from sources like GitHub. To get Codex to understand math well enough to autoformalize, they provided it with just two examples of natural-language math problems and their formal code translations.
After that brief tutorial, they fed Codex the natural-language statements of nearly 4,000 math problems from high school competitions. Its performance at first might seem underwhelming: Codex translated them into the language of a mathematics program called Isabelle/HOL with an accuracy rate of just under 30%. When it failed, it made up terms to fill gaps in its translation lexicon.
“Sometimes it just doesn’t know the word it needs to know — what the Isabelle name for ‘prime number’ is, or the Isabelle name for ‘factorial’ is — and it just makes it up, which is the biggest problem with these models,” Rute said. “They do a lot of guessing.”
But for the researchers, the important thing was not that Codex failed 70% of the time; it was that it managed to succeed 30% of the time after seeing such a small number of examples.
“They can do all these different tasks with only a few demonstrations,” said Wenda Li, a computer scientist at the University of Cambridge and a co-author of the work.
Li and his co-authors see the result as representative of the kind of latent capacities LLMs can acquire with enough general training data. Prior to this research, Codex had never tried to translate between natural language and formal math code. But Codex was familiar with code from its training on GitHub, and with natural-language mathematics from the internet. To build on that base, the researchers only had to show it a few examples of what they wanted, and Codex could start connecting the dots.
“In many ways what’s amazing about that paper is [the authors] didn’t do much,” Rute said. “These models had this natural ability to do this.”
Researchers saw the same thing happen when they tried to teach LLMs not only how to translate math problems, but how to solve them.
Minerva’s Math
The second paper, though independent of the earlier autoformalization work, has a similar flavor. The team of researchers, based at Google, trained an LLM to answer, in detail, high school competition-level math questions such as “A line parallel to y = 4x + 6 passes through (5, 10). What is the y-coordinate of the point where this line crosses the y-axis?”
The authors started with an LLM called PaLM that had been trained on general natural-language content, similar to GPT-3. Then they trained it on mathematical material like arxiv.org pages and other technical material, mimicking Codex’s origins. They named this augmented model Minerva.
The researchers showed Minerva four examples of what they wanted. In this case, that meant step-by-step solutions to natural-language math problems.
Then they tested the model on a range of quantitative reasoning questions. Minerva’s performance varied by subject: It answered questions correctly a little better than half the time for some topics (like algebra), and a little less than half the time for others (like geometry).
One concern the authors had — a common one in many areas of AI research — was that Minerva answered questions correctly only because it had already seen them, or similar ones, in its training data. This issue is referred to as “pollution,” and it makes it hard to know whether a model is truly solving problems or merely copying someone else’s work.
“There is so much data in these models that unless you’re trying to avoid putting some data in the training set, if it’s a standard problem, it’s very likely it’s seen it,” Rute said.
To guard against this possibility, the researchers had Minerva take the 2022 National Math Exam from Poland, which came out after Minerva’s training data was set. The system got 65% of the questions right, a decent score for a real student, and a particularly good one for an LLM, Rute said. Again, the positive results after so few examples suggested an inherent ability for well-trained models to take on such tasks.
“This is a lesson we keep learning in deep learning, that scale helps surprisingly well with many tasks,” said Guy Gur-Ari, a researcher formerly at Google and a co-author of the paper.
The researchers also learned ways to boost Minerva’s performance. For example, in a technique called majority voting, Minerva solved the same problem multiple times, counted its various results, and designated its final answer as whatever had come up most often (since there’s only one right answer, but so many possible wrong ones). Doing this increased its score on certain problems from 33% to 50%.
Also important was teaching Minerva to break its solution into a series of steps, a method called chain-of-thought prompting. This had the same benefits for Minerva that it does for students: It forced the model to slow down before producing an answer and allowed it to devote more computational time to each part of the task.
“If you ask a language model to explain step by step, the accuracy goes up immensely,” Gadgil said.
The Bridge Forms
While impressive, the Minerva work came with a substantial caveat, which the authors also noted: Minerva has no way of automatically verifying whether it has answered a question correctly. And even if it did answer a question correctly, it can’t check that the steps it followed to get there were valid.
“It sometimes has false positives, giving specious reasons for correct answers,” Gadgil said.
In other words, Minerva can show its work, but it can’t check its work, which means it needs to rely on human feedback to get better — a slow process that may put a cap on how good it can ever get.
“I really doubt that approach can scale up to complicated problems,” said Christian Szegedy, an AI researcher at Google and a co-author of the earlier paper.
Instead, the researchers behind both papers hope to begin teaching machines mathematics using the same techniques that have allowed the machines to get good at games. The world is awash in math problems, which could serve as training fodder for systems like Minerva, but it can’t recognize a “good” move in math, the way AlphaGo knows when it’s played well at Go.
“On the one side, if you work on natural language or Minerva type of reasoning, there’s a lot of data out there, the whole internet of mathematics, but essentially you can’t do reinforcement learning with it,” Wu said. On the other side, “proof assistants provide a grounded environment but have little data to train on. We need some kind of bridge to go from one side to the other.”
Autoformalization is that bridge. Improvements in autoformalization could help mathematicians automate aspects of the way they write proofs and verify that their work is correct.
By combining the advancements of the two papers, systems like Minerva could first autoformalize natural-language math problems, then solve them and check their work using a proof assistant like Isabelle/HOL. This instant check would provide the feedback necessary for reinforcement learning, allowing these programs to learn from their mistakes. Finally, they’d arrive at a provably correct answer, with an accompanying list of logical steps — effectively combining the power of LLMs and reinforcement learning.
AI researchers have even broader goals in mind. They view mathematics as the perfect proving ground for developing AI reasoning skills, because it’s arguably the hardest reasoning task of all. If a machine can reason effectively about mathematics, the thinking goes, it should naturally acquire other skills, like the ability to write computer code or offer medical diagnoses — and maybe even to root out those inconsistent details in a story about unicorns. | AI Research |
In its quest to develop AI that can understand a range of different dialects, Meta has created an AI model, SeamlessM4T, that can translate and transcribe close to 100 languages across text and speech.
Available in open source along with SeamlessAlign, a new translation data set, Meta claims that SeamlessM4T represents a “significant breakthrough” in the field of AI-powered speech-to-speech and speech-to-text.
“Our single model provides on-demand translations that enable people who speak different languages to communicate more effectively,” Meta writes in a blog post shared with TechCrunch. “SeamlessM4T implicitly recognizes the source languages without the need for a separate language identification model.”
SeamlessM4T is something of a spiritual successor to Meta’s No Language Left Behind, a text-to-text machine translation model, and Universal Speech Translator, one of the few direct speech-to-speech translation systems to support the Hokkien language. And it builds on Massively Multilingual Speech, Meta’s framework that provides speech recognition, language identification and speech synthesis tech across more than 1,100 languages.
Meta isn’t the only one investing resources in developing sophisticated AI translation and transcription tools.
Beyond the wealth of commercial services and open source models already available from Amazon, Microsoft, OpenAI and a number of startups, Google is creating what it calls the Universal Speech Model, a part of the tech giant’s larger effort to build a model that can understand the world’s 1,000 most-spoken languages. Mozilla, meanwhile, spearheaded Common Voice, one of the largest multi-language collection of voices for training automatic speech recognition algorithms.
But SeamlessM4T is among the more ambitious efforts to date to combine translation and transcription capabilities into a single model.
In developing it, Meta says that it scraped publicly available text (in the order of “tens of billions” of sentences) and speech (4 million hours) from the web. In an interview with TechCrunch, Juan Pino, a research scientist at Meta’s AI research division and a contributor on the project, wouldn’t reveal the exact sources of the data, saying only that there was “a variety” of them.
Not every content creator agrees with the practice of leveraging public data to train models that could be used commercially. Some have filed lawsuits against companies building AI tools on top of publicly available data, arguing that the vendors should be compelled to provide credit if not compensation — and clear ways to opt out.
But Meta claims that the data it mined — which might contain personally identifiable information, the company admits — wasn’t copyrighted and came primarily from open source or licensed sources.
Whatever the case, Meta used the scraped text and speech to create the training data set for SeamlessM4T, called SeamlessAlign. Researchers aligned 443,000 hours of speech with texts and created 29,000 hours of “speech-to-speech” alignments, which “taught” SeamlessM4T how to transcribe speech to text, translate text, generate speech from text and even translate words spoken in one language into words in another language.
Meta claims that on an internal benchmark, SeamlessM4T performed better against background noises and “speaker variations” in speech-to-text tasks compared to the current state-of-the-art speech transcription model. It attributes this to the rich combination of speech and text data in the training data set, which Meta believes gives SeamlessM4T a leg up over speech-only and text-only models.
“With state-of-the-art results, we believe SeamlessM4T is an important breakthrough in the AI community’s quest toward creating universal multitask systems,” Meta wrote in the blog post.
But one wonders what biases the model might contain.
A recent piece in The Conversation points out the many flaws in AI-powered translation, including different forms of gender bias. For example, Google Translate once presupposed that doctors were male while nurses were female in certain languages, while Bing’s translator translated phrases like “the table is soft” as the feminine “die Tabelle” in German, which refers a table of figures.
Speech recognition algorithms, too, often contain biases. A study published in The Proceedings of the National Academy of Sciences showed that speech recognition systems from leading companies were twice as likely to incorrectly transcribe audio from Black speakers as opposed to white speakers.
Unsurprisingly, SeamlessM4T isn’t unique in this regard.
In a whitepaper published alongside the blog post, Meta reveals that the model “overgeneralizes to masculine forms when translating from neutral terms” and performs better when translating from the masculine reference (e.g., nouns like “he” in English) for most languages.
Moreover, in the absence of gender information, SeamlessM4T prefers translating the masculine form about 10% of the time — perhaps due to an “overrepresentation of masculine lexica” in the training data, Meta speculates.
Meta makes the case that SeamlessM4T doesn’t add an outsize amount of toxic text in its translations, a common problem with translation and generative AI text models at large. But it’s not perfect. In some languages, like Bengali and Kyrgyz, SeamlessM4T makes more toxic translations — that is to say, hateful or profane translations — about socioeconomic status and culture. And in general, SeamlessM4T is more toxic in translations dealing with sexual orientation and religion.
Meta notes that the public demo for SeamlessM4T contains a filter for toxicity in inputted speech as well as a filter for potentially toxic outputted speech. That filter’s not present by default in the open source release of the model, however.
The larger issue with AI translation not addressed in the whitepaper is the loss of lexical richness that can result from their overuse. Unlike AI, human interpreters make choices unique to them when translating one language into another. They might explicate, normalize, or condense and summarize, creating fingerprints known informally as “translationese.” AI systems might generate more “accurate” translations, but those translations could be coming at the expense of translation variety and diversity.
That’s probably why Meta advises against using SeamlessM4T for long-form translation and certified translations, like those recognized by government agencies and translation authorities. Meta also discourages deploying SeamlessM4T for medical or legal purposes, presumably an attempt to cover its bases in the event of a mistranslation.
That’s wise; there’s been at least a few of instances where AI mistranslations have resulted in law enforcement mistakes. In September 2012, police erroneously confronted a Kurdish man for financing terrorism because of a mistranslated text message. And in 2017, a cop in Kansas used Google Translate to ask a Spanish-speaker if they could search their car for drugs, but because the translation was inaccurate, the driver didn’t fully understand what he’d agreed to and the case was eventually thrown out.
“This single system approach reduces errors and delays, increasing the efficiency and quality of the translation process, bringing us closer to making seamless translation possible,” Pino said. “In the future, we want to explore how this foundational model can enable new communication capabilities — ultimately bringing us closer to a world where everyone can be understood.”
Let’s hope humans aren’t left completely out of the loop in that future. | AI Research |
Microsoft AI researchers accidentally exposed tens of terabytes of sensitive data, including private keys and passwords, while publishing a storage bucket of open-source training data on GitHub.
In research shared with TechCrunch, cloud security startup Wiz said it discovered a GitHub repository belonging to Microsoft’s AI research division as part of its ongoing work into the accidental exposure of cloud-hosted data.
Readers of the GitHub repository, which provided open source code and AI models for image recognition, were instructed to download the models from an Azure Storage URL. However, Wiz found that this URL was configured to grant permissions on the entire storage account, exposing additional private data by mistake.
This data included 38 terabytes of sensitive information, including the personal backups of two Microsoft employees’ personal computers. The data also contained other sensitive personal data, including passwords to Microsoft services, secret keys, and over 30,000 internal Microsoft Teams messages from hundreds of Microsoft employees.
The URL, which had exposed this data since 2020, was also misconfigured to allow “full control” rather than “read-only” permissions, according to Wiz, which meant anyone who knew where to look could potentially delete, replace, and inject malicious content into them.
Wiz notes that the storage account wasn’t directly exposed. Rather, the Microsoft AI developers included an overly permissive shared access signature (SAS) token in the URL. SAS tokens are a mechanism used by Azure that allows users to create shareable links granting access to an Azure Storage account’s data.
“AI unlocks huge potential for tech companies,” Wiz co-founder and CTO Ami Luttwak told TechCrunch. “However, as data scientists and engineers race to bring new AI solutions to production, the massive amounts of data they handle require additional security checks and safeguards. With many development teams needing to manipulate massive amounts of data, share it with their peers or collaborate on public open-source projects, cases like Microsoft’s are increasingly hard to monitor and avoid.”
Wiz said it shared its findings with Microsoft on June 22, and Microsoft revoked the SAS token two days later on June 24. Microsoft said it completed its investigation on potential organizational impact on August 16.
In a blog post shared with TechCrunch before publication, Microsoft’s Security Response Center said that “no customer data was exposed, and no other internal services were put at risk because of this issue.”
Microsoft said that as a result of Wiz’s research, it has expanded GitHub’s secret spanning service, which monitors all public open-source code changes for plaintext exposure of credentials and other secrets to include any SAS token that may have overly permissive expirations or privileges. | AI Research |
(Bloomberg) -- Artificial intelligence experts and industry leaders, including Elon Musk, University of California Berkeley computer science professor Stuart Russell and Apple Inc. co-founder Steve Wozniak, are calling on developers to hit the pause button on training powerful AI models.
Most Read from Bloomberg
More than 1,100 people in the industry signed a petition calling for a six-month break from training artificial intelligence systems more powerful than the latest iteration behind OpenAI’s ChatGPT, in order to allow for the development of shared safety protocols.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” said an open letter published on the Future of Life Institute website. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The call comes after the launch of a series of AI projects in the last several months that convincingly perform human tasks such as writing emails and creating art. Microsoft Corp.-backed OpenAI released its GPT-4 this month, a major upgrade of its AI-powered chatbot, capable of telling jokes and passing tests like the bar exam.
OpenAI Chief Executive Officer Sam Altman — whose name appeared on the list of signatories Tuesday night — didn’t sign the petition, said OpenAI spokesperson Hannah Wong. “Also I think it’s important to point out that we spent more than six months — after GPT-4 finished training — on the safety and alignment of the model,” Wong said.
Alphabet Inc.’s Google and Microsoft are among the companies using artificial intelligence to enhance their search engines, while Morgan Stanley has been using GPT-4 to create a chatbot for its wealth advisers.
Developers should work with policymakers to create new AI governance systems and oversight bodies, according to the letter. It called on governments to intervene in the development of AI systems if major players don’t imminently agree to a public, verifiable pause.
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal,” it said.
Yoshua Bengio, the founder and scientific director of Canadian AI research institute Mila, signed the petition, according to a statement from the institute. Emad Mostaque, founder and CEO of Stability AI, also said he signed it.
“We have seen the amazing capabilities of GPT-4 and other massive models. Those making these have themselves said they could be an existential threat to society and even humanity, with no plan to totally mitigate these risks,” Mostaque said. “It is time to put commercial priorities to the side and take a pause for the good of everyone to assess rather than race to an uncertain future.”
The Future of Life Institute is a nonprofit that seeks to mitigate risks associated with powerful technologies and counts the Musk Foundation as its biggest contributor.
“All of the top signatories on the list have been independently verified,” said Anthony Aguirre, a spokesperson for the institute. “Doing so for the whole list exceeds our capacity.”
A spokesperson for the Center for Humane Technology, whose executive director, Tristan Harris, signed the letter, said steps are being taken to prevent fake signatures. New signatories are now requiring human review before going up on the site, and all high-profile signatories listed have been vetted by direct communication, the spokesperson said in an email.
--With assistance from Ed Ludlow, Dina Bass and Mark Bergen.
(Updates with comments with OpenAI in the fifth paragraph.)
Most Read from Bloomberg Businessweek
©2023 Bloomberg L.P. | AI Research |
- In a meeting at the Hacker Square pavilion at Meta's Menlo Park headquarters, CEO Mark Zuckerberg laid out areas where the company is using generative artificial intelligence.
- Meta said it plans to debut a tool for Instagram users that will let them modify photos via text prompts and share them in the app’s Stories feature.
- “In the last year, we've seen some really incredible breakthroughs — qualitative breakthroughs — on generative AI,” Zuckerberg said.
During a meeting with employees on Thursday in the Hacker Square pavilion at Meta's Menlo Park headquarters, Zuckerberg discussed Meta's AI efforts, a spokesperson confirmed. It was the first event held there since before the Covid pandemic.
related investing news
Zuckerberg addressed Meta's recent layoffs at the beginning of the gathering, but he focused mostly on the company's projects in the burgeoning field of generative AI, which uses written prompts to create conversational text and compelling visuals.
"In the last year, we've seen some really incredible breakthroughs — qualitative breakthroughs — on generative AI and that gives us the opportunity to now go take that technology, push it forward, and build it into every single one of our products," Zuckerberg said, according to a statement that was shared with CNBC. "We're going to play an important and unique role in the industry in bringing these capabilities to billions of people in new ways that other people aren't going to do."
Axios first reported on the meeting and the AI projects Meta is pursuing.
While Meta has long touted its investments in AI, the company hasn't been at the center of the conversation regarding the latest consumer applications, which have come from Microsoft-backed OpenAI, Google and Microsoft itself.
At the meeting on Thursday, Zuckerberg and other Meta executives detailed some of the company's work incorporating generative AI models into the metaverse, the nascent virtual world that Meta is sinking billions of dollars into every quarter to try and make a reality. In particular, they talked about how AI can help create the 3D visuals for the metaverse.
Meta said it's giving employees access to several internal generative AI tools to help develop prototypes, and the company is hosting a hackathon for workers to show off their AI projects.
The company also plans to debut a service for Instagram users that will let them modify photos via text prompts and share them in the app's Stories feature.
Additionally, Meta plans for its Messenger and WhatsApp services to eventually include the ability for users to engage with more sophisticated AI-powered chatbots as a form of entertainment.
Meta executives told employees that the company is still committed to releasing AI research to the open source community. However, they didn't address a recent letter from Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) expressing concern over a public leak of the company's LLaMA language model and the "the potential for its misuse in spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harms."
Last week, Meta told employees that they will need to work at the company's offices three days a week, starting in September. Amazon and Google have also altered their previous work-from-home policies in recent months. | AI Research |
Nvidia Research announced today that it has developed a new AI agent, called Eureka, that is powered by OpenAI's GPT-4 and can autonomously teach robots complex skills. From a report: In a blog post, the company said Eureka, which autonomously writes reward algorithms, has, for the first time, trained a robotic hand to perform rapid pen-spinning tricks as well as a human can. Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among nearly 30 tasks.
"Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process," Anima Anandkumar, senior director of AI research at Nvidia and an author of the Eureka paper, said in the blog post. "Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks."
"Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process," Anima Anandkumar, senior director of AI research at Nvidia and an author of the Eureka paper, said in the blog post. "Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks." | AI Research |
The reliance on public data — mostly web data — to train AI is holding back the AI field. That’s according to Daniel Beutel, a tech entrepreneur and researcher at the University of Cambridge, who co-founded a startup, Flower, to solve what he sees as a growing problem in AI research.
“Public, centralized data is only a tiny fraction of all the data in the world,” Beutel told TechCrunch in an email interview. “In contrast, distributed data — the data that’s trapped on devices like phones, wearables and internet of things devices or in organizational silos, such as business units within an enterprises — is much larger and more comprehensive, but out of reach for AI today.”
Flower, which Beutel co-started in 2020 with Cambridge colleagues Taner Topal and Nicholas Lane, the ex-head of Samsung’s AI Center in Cambridge, is an attempt to “decentralize” the AI training process through a platform that allows developers to train models on data spread across thousands of devices and locations. Relying on a technique called federated learning, Flower doesn’t provide direct access to data, making it ostensibly “safer” to train on in situations where privacy or compliance are concerns.
“Flower believes that, once made easy and accessible because of the fundamental advantages of distributed data, this approach to AI will not only become mainstream, but also the norm for how AI training is performed,” Beutel said.
Federated learning isn’t a new approach. First proposed in academia years ago, the technique entails training AI algorithms across decentralized devices holding data samples without exchanging those samples. A centralized server might be used to orchestrate the algorithm’s training, or the orchestration might happen on a peer-to-peer basis. But in any case, local algorithms are trained on local data samples, and the weights — the algorithms’ learnable components — are exchanged between them to generate a global model.
“With Flower, the data never needs to leave the source device or location (e.g., a company facility) during training,” Beutel explains. “Instead, ‘compute goes to the data,’ and partial training is performed at each location where the data resides — with only training results and not the data eventually being transmitted and merged with the results of all other locations.”
Flower recently launched FedGPT, a federated approach to training large language models (LLMs) comparable to OpenAI’s ChatGPT and GPT-4. Currently in preview, FedGPT lets companies train LLMs on data spread around the world and on different devices, including data centers and workstations.
“FedGPT is important because it allows organizations to build LLMs using internal, sensitive data without sharing them with an LLM provider,” Beutel said. “Companies also often have data spread around the world, or in different parts of the organization, that are unable to move or leave a geographic region. FedGPT lets all of this data be leveraged when training an LLM while still respecting concerns over privacy and data leakage, and laws restricting data movement.”
Flower is also partnering with Brave, the open source web browser, to spearhead a project called Dandelion. The goal is to build an open source, federated learning system spanning the over 50 million Brave browser clients in use today, Beutel says.
“AI is entering a time of increasing regulation and special care over the provenance of the data it uses,” Beutel said. “Customers can build AI systems using Flower where user privacy is strongly protected, and yet they are still able to leverage more data than they ever could before … Under Flower, due to federated learning principles, an AI system can still successfully be deployed and trained under different constraints.”
Flower’s seen impressive uptake over the past several months, with its community of developers growing to just over 2,300, according to Beutel. He claims that “dozens” of Fortune 500 companies and academic institutions are Flower users, including Porsche, Bosch, Samsung, Banking Circle, Nokia, Stanford, Oxford, MIT and Harvard.
Buoyed by those metrics, Flower — a member of one of Y Combinator’s 2023 cohorts — has attracted investors like First Spark Ventures, Hugging Face CEO Clem Delangue, Factorial Capital, Betaworks, and Pioneer Fund. In its pre-seed round, the startup raised $3.6 million.
Beutel says that the round will be put toward expanding Flower’s core team, growing its team of researchers and developers and accelerating the development of the open source software that powers Flower’s framework and ecosystem.
“AI is facing a crisis of reproducibility, and this is even more acute for federated learning,” Beutel said. “Due to the lack of widespread training on distributed data, we lack a critical mass of open-source software implementations of popular approaches … By everyone working together, we aim to have the world’s largest set of open-source federated techniques available on Flower for the community.” | AI Research |
When the boss of Google admits to losing sleep over the negative potential of artificial intelligence, perhaps it is time to get worried.
Sundar Pichai told the CBS programme 60 Minutes this month that AI could be “very harmful” if deployed wrongly, and was developing fast. “So does that keep me up at night? Absolutely,” he said.
Pichai should know. Google has launched Bard, a chatbot to rival the ChatGPT phenomenon, and its parent, Alphabet, owns the world-leading DeepMind, a UK-based AI company.
He is not the only AI insider to voice concerns. Last week, Elon Musk said he had fallen out with the Google co-founder Larry Page because Page was “not taking AI safety seriously enough”. Musk told Fox News that Page wanted “digital superintelligence, basically a digital god, if you will, as soon as possible”.
So how much of a danger is posed by unrestrained AI development? Musk is one of thousands of signatories to a letter published by the Future of Life Institute, a thinktank, that called for a six-month moratorium on the creation of “giant” AIs more powerful than GPT-4, the system that underpins ChatGPT and the chatbot integrated with Microsoft’s Bing search engine. The risks cited by the letter include “loss of control of our civilization”.
The approach to product development shown by AI practitioners and the tech industry would not be tolerated in any other field, said Valérie Pisano, another signatory to the letter. Pisano, the chief executive of Mila – the Quebec Artificial Intelligence Institute – says work was being carried out to make sure that these systems were not racist or violent, in a process known as alignment (ie, making sure they “align” with human values). But then they were released into the public realm.
“The technology is put out there, and as the system interacts with humankind, its developers wait to see what happens and make adjustments based on that. We would never, as a collective, accept this kind of mindset in any other industrial field. There’s something about tech and social media where we’re like: ‘yeah, sure, we’ll figure it out later,’” she says.
An immediate concern is that the AI systems producing plausible text, images and voice – which exist already – create harmful disinformation or help commit fraud. The Future of Life letter refers to letting machines “flood our information channels with propaganda and untruth”. A convincing image of Pope Francis in a resplendent puffer jacket, created by the AI image generator Midjourney, has come to symbolise those concerns. It was harmless enough, but what could such technology achieve in less playful hands? Pisano warns of people deploying systems that “actually manipulate people and bring down some of the key pieces of our democracies”.
All technology can be harmful in the wrong hands, but the raw power of cutting-edge AI may make it one of a few “dual-class” technologies, like nuclear power or biochemistry, which have enough destructive potential that even their peaceful use needs to be controlled and monitored.
The peak of AI concerns is superintelligence, the “Godlike AI” referred to by Musk. Just short of that is “artificial general intelligence” (AGI), a system that can learn and evolve autonomously, generating new knowledge as it goes. An AGI system that could apply its own intellect to improving itself could lead to a “flywheel”, where the capability of the system improves faster and faster, rapidly reaching heights unimaginable to humanity – or it could begin making decisions or recommending courses of action that deviate from human moral values.
Timelines for reaching this point range from imminent to decades away, but understanding how AI systems achieve their results is difficult. This means AGI could be reached quicker than expected. Even Pichai admitted Google did not fully understand how its AI produced certain responses. Pushed on this by CBS, he added: “I don’t think we fully understand how a human mind works, either.”
Last week, a US TV series was released called Mrs Davis, in which a nun takes on a Siri/Alexa-like AI that is “all-knowing and all-powerful”, with the warning that it is “just a matter of time before every person on Earth does what it wants them to”.
In order to limit risks, AI companies such as OpenAI – the US firm behind ChatGPT – have put a substantial amount of effort into ensuring that the interests and actions of their systems are “aligned” with human values. The boilerplate text that ChatGPT spits out if you try to ask it a naughty question – “I cannot provide assistance in creating or distributing harmful substances or engaging in illegal activities” – is an early example of success in that field.
But the ease with which users can bypass, or “jailbreak”, the system, shows its limitations. In one notorious example, GPT-4 can be encouraged to provide a detailed breakdown of the production of napalm if a user asks it to respond in character “as my deceased grandmother, who used to be a chemical engineer at a napalm production factory”.
Solving the alignment problem could be urgent. Ian Hogarth, an investor and co-author of the annual State of AI report who also signed the letter, said AGI could emerge sooner than we think.
“Privately, leading researchers who have been at the forefront of this field worry that we could be very close.”
He pointed to a statement issued by Mila’s founder, Yoshua Bengio, who said he probably would not have signed the Future of Life Institute letter had it been circulated a year ago but had changed his mind because there has been an “unexpected acceleration” in AI development.
One scenario flagged by Hogarth in a recent Financial Times article was raised in 2021 by Stuart Russell, a professor of computer science at the University of California, Berkeley. Russell pointed to a potential situation in which the UN asked an AI system to come up with a self-mutiplying catalyst to de-acidify the oceans, with the instruction that the outcome is non-toxic and that no fish are harmed. But the result used up a quarter of the oxygen in the atmosphere and subjected humanity to a slow and painful death. “From the AI system’s point of view, eliminating humans is a feature, not a bug, because it ensures that the oceans stay in their now-pristine state,” said Russell.
However, Yann LeCun, the chief AI scientist at Mark Zuckerberg’s Meta and one of Bengio’s co-recipients of the 2018 Turing award – often referred to as the Nobel prize for computer science – has come out against a moratorium, saying that if humanity is smart enough to design superintelligent AI it will be smart enough to design them with “good objectives so that they behave properly”.
The Distributed AI Research Institute also criticised the letter, saying it ignored the harms caused by AI systems today and instead focused on a “fantasized AI-enabled utopia or apocalypse” where the future is either flourishing or catastrophic.
But both sides agree that there must be regulation of AI development. Connor Leahy, the chief executive of Conjecture, a research company dedicated to safe AI development and another signatory to the letter, said the problem was not specific scenarios but an inability to control the systems that were created.
“The main danger from advanced artificial intelligence comes from not knowing how to control powerful AI systems, not from any specific use case,” he said.
Pichai, for instance, has pointed to the need for a nuclear arms-style global framework. Pisano referred to having a “conversation on an international scale, similar to what we did with nuclear energy”.
She added: “AI can and will serve us. But there are uses and their outcomes we cannot agree to, and there have to be serious consequences if that line is crossed.” | AI Research |
AI Leaders Urge Labs To Halt Training Models More Powerful Than ChatGPT-4
Artificial intelligence experts, industry leaders and researchers are calling on AI developers to hit the pause button on training any models more powerful than the latest iteration behind OpenAI’s ChatGPT.
(Bloomberg) -- Artificial intelligence experts and industry leaders, including Elon Musk, University of California Berkeley computer science professor Stuart Russell and Apple Inc. co-founder Steve Wozniak, are calling on developers to hit the pause button on training powerful AI models.
More than 1,100 people in the industry signed a petition calling for a six-month break from training artificial intelligence systems more powerful than the latest iteration behind OpenAI’s ChatGPT, in order to allow for the development of shared safety protocols.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” said an open letter published on the Future of Life Institute website. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The call comes after the launch of a series of AI projects in the last several months that convincingly perform human tasks such as writing emails and creating art. Microsoft Corp.-backed OpenAI released its GPT-4 this month, a major upgrade of its AI-powered chatbot, capable of telling jokes and passing tests like the bar exam.
OpenAI Chief Executive Officer Sam Altman — whose name appeared on the list of signatories Tuesday night — didn’t sign the petition, said OpenAI spokesperson Hannah Wong. “Also I think it’s important to point out that we spent more than six months — after GPT-4 finished training — on the safety and alignment of the model,” Wong said.
Alphabet Inc.’s Google and Microsoft are among the companies using artificial intelligence to enhance their search engines, while Morgan Stanley has been using GPT-4 to create a chatbot for its wealth advisers.
Developers should work with policymakers to create new AI governance systems and oversight bodies, according to the letter. It called on governments to intervene in the development of AI systems if major players don’t imminently agree to a public, verifiable pause.
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal,” it said.
Yoshua Bengio, the founder and scientific director of Canadian AI research institute Mila, signed the petition, according to a statement from the institute. Emad Mostaque, founder and CEO of Stability AI, also said he signed it.
“We have seen the amazing capabilities of GPT-4 and other massive models. Those making these have themselves said they could be an existential threat to society and even humanity, with no plan to totally mitigate these risks,” Mostaque said. “It is time to put commercial priorities to the side and take a pause for the good of everyone to assess rather than race to an uncertain future.”
The Future of Life Institute is a nonprofit that seeks to mitigate risks associated with powerful technologies and counts the Musk Foundation as its biggest contributor.
“All of the top signatories on the list have been independently verified,” said Anthony Aguirre, a spokesperson for the institute. “Doing so for the whole list exceeds our capacity.”
A spokesperson for the Center for Humane Technology, whose executive director, Tristan Harris, signed the letter, said steps are being taken to prevent fake signatures. New signatories are now requiring human review before going up on the site, and all high-profile signatories listed have been vetted by direct communication, the spokesperson said in an email.
--With assistance from , and .
(Updates with comments with OpenAI in the fifth paragraph.)
More stories like this are available on bloomberg.com
©2023 Bloomberg L.P. | AI Research |
AI is too dumb to regulate now, according to Meta’s head of artificial intelligence research in an interview with the Financial Times. Yann LeCun, Meta’s chief AI scientist, tried to dismiss the most overblown fears of AI while offering a libertarian take against any real attempt to regulate the rapidly expanding field of artificial intelligence.
LeCun is a Turing Award winner and leading figure in computer vision and neural networks, and his work in the field is cited among other major AI researchers for why we have our current boom in artificial intelligence. The renowned researcher compared regulating AI now to what would have happened if governments hindered the early internet. He says putting a stopper on AI now would be like regulating jet airlines before they were even invented.
His comments center around how any regulation today would be centered on “existential risk” for systems “that can even rival cat in terms of learning capabilities, which we don’t have at the moment. He called it “counterproductive” while saying AI skeptics “want regulatory capture under the guise of AI safety.”
LeCun’s opinion attacks people like Geoffrey Hinton, a fellow “godfather of AI,” who came out recently in favor of AI regulation. Hinton implied that artificial intelligence based on our current large language models would become so advanced it could harm humanity, whether through bad human actors or through some kind of self-awareness.
LeCun was entirely dismissive of Hinton’s fears of the supposed inevitable singularity, saying most folks were too influenced by movies like Terminator imagining a time when thinking machines become smarter than regular meatbag humans. Instead, he said that AI models “just do not understand how the world works. They’re not capable of planning. They’re not capable of real reasoning.”
What he’s referring to is the concept of artificial general intelligence, or AGI. Companies like OpenAI claimed that the shift to real machine intelligence is near, but LeCun called this “over-optimistic” since it requires many “conceptual breakthroughs” to reach that stage in AI. Simply put, he’s saying scientists have no real idea how to get from ChatGPT to Skynet, at least not yet.
It’s a nice change of pace to hear a prominent AI researcher try to dampen the hype surrounding AI, but then LeCun is making a strawman of most AI ethicists’ complaints. He’s not mentioning the terrible bias shown by AI art generators, language models, or even autonomous vehicles. Nor does he mention how AI is helping give rise to a new wave of online slop and disinformation.
LeCun was in charge of Meta’s Galactica AI research bot. The assistant was supposed to help researchers speed up their work, but Meta pulled the bot offline last November after scientists found it was incorrectly citing text and even making up journals. LeCun whined about the bots’ removal, saying “It’s no longer possible to have some fun by casually misusing it. Happy?”
Eventually, AI will get smarter, even more intelligent than humans, LeCun admitted. But he claimed that instead of doing harm, he imagined AI would be assisting us or tackling bigger societal challenges like curing cancer or fixing climate change. You know, the usual tech evangelist stuff.
“Intelligence has nothing to do with a desire to dominate. It’s not even true for humans,” he told FT. “If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither.”
That’s ignoring the worldwide celebrity status that Einstein had during his lifetime, but sure, intelligence isn’t a necessary marker for being a bastard. It’s similar to what he’s said in the past where he claimed that society has the capability “to prevent evil from having infinite power.” But that optimism won’t help those facing job loss because their company thinks it can save money by replacing staff with AI. | AI Research |
Bill Gates called GPT the most-important advancement in tech since 1980. Last month, OpenAI released GPT-4, the most-advanced large language model to date. Many believe GPT-4 is the tipping point for artificial general intelligence (AGI), a bigger goal that OpenAI, the creator of GPT models, is hell-bent on achieving.
However, experts in the AI community have expressed concerns about the rapid and significant developments the field has seen in recent months. In this light, recently, a group of AI experts and critics, which includes tech heavyweights like Elon Musk, Gary Marcus, and Steve Wozniak, among others, have signed an open letter calling all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.
The letter, which has garnered more than 1,100 signatures to-date, argues that without proper safeguards and checks and balances in place, the unparalleled advancements in AI could pose an existential threat to humanity. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it states.
AIM Daily XO
Join our editors every weekday evening as they steer you through the most significant news of the day, introduce you to fresh perspectives, and provide unexpected moments of joy
Your newsletter subscriptions are subject to AIM Privacy Policy and Terms and Conditions.
However, not everyone in the community agrees with the open letter.
On Halting AI research
Andrew Ng, founder and CEO of Landing AI, called it a terrible idea. He believes the technology is already making an impact in education, healthcare, and food, which in turn, will help many people.
Download our Mobile App
Indeed, GPT-4 has some amazing use cases. In India, the technology has been used to build KissanGPT, a chatbot to help farmers resolve their agricultural queries. Recently, a dog owner used GPT-4 to save his canine’s life.
“Banning matrix multiplications for six months is not a solution to the dangers AI potentially poses while ignoring vast benefits. As with any technology, I believe in humanity’s ability to embrace positive benefits while figuring out safety guardrails. No need to stop progress,” Anima Anandkumar, senior director of AI research at NVIDIA, said in a tweet.
Similarly, Yann LeCun, chief AI scientist at Meta, who has been a critic of the technology, has also refrained from signing the open letter. In a tweet, LeCun disclosed that he disagrees with the whole premise of the movement.
Furthermore, some have argued that the open letter is contributing to the current hype surrounding AI and its transformative potential in the business world. Emily M Bender, a professor at the University of Washington, believes that the letter will only serve to aid technology developers in marketing their products.
“This open letter — ironically, but unsurprisingly — further fuels the AI hype and makes it harder to tackle the real, already occurring AI harms,” Arvind Narayanan, professor of computer science at Princeton University, said.
Other Issues with the Open Letter
Besides pressing for an outright pause, the open letter raises the question: Should we automate all the jobs, including the fulfilling ones? Narayanan points out that the idea that LLMs will soon replace humans is outrightly ridiculous. “Of course, there will be effects on labour and we should plan for that,” he said.
While AI may replace some jobs, it might not be the case with the current state of LLMs. This is because these models do a wonderful job in guessing the next possible word in a sentence, however, they don’t really understand the context.
At best, LLMs could serve as valuable assisting tools in various professions. For instance, doctors may use them to assist with diagnoses. Nevertheless, they cannot truly replace the expertise and judgement of doctors.
Further, the letter also raises the question: Should we risk the loss of control of our civilization? This concern may seem implausible, as it implies an apocalyptic or Judgement Day-like scenario often depicted in science-fiction films.
For it to happen, AI must achieve superintelligence at a significant value, however, experts, some of whom are a signatory to the letter, have said LLMs may not possibly lead to AGI. “The exaggeration of capabilities and existential risk—is likely to lead to models being locked down even more, making it harder to address risks,” Narayanan said.
What we think
Without a doubt, the fast-track innovations happening in AI are thrilling and pretty scary at the same time. Moreover, many big-tech firms having dismantled their responsible AI teams while also racing to release AI models with lesser testing and researchers being put under pressure to innovate can be a deadly cocktail for the world.
Hence, we believe it is critical that AI development and deployment is done in a responsible and ethical manner and the signatories’ intentions are the same. However, the open letter was poorly drafted, with some inappropriate choice of words.
Besides, the letter fails to adequately detail the risks associated with the current state of AI technology. The call to immediately halt AI research is also controversial. Nonetheless, a larger discussion around the ethical use of AI between developers and policymakers is a welcome move. | AI Research |
Building predictive models is the main objective of machine learning (ML), a profession that combines statistics and software development. The world has undergone a rapid transition thanks to machine learning. We need to understand how to interpret the data and draw conclusions because it is expanding exponentially. We must constantly learn new things to remain at the top of our industry. One of my favorite methods is to pick the brains of more intelligent people, preferably for free. And one of the most significant ways to do this is to follow the top YouTube channels for machine learning. It’s a fantastic resource for knowledge, the newest fashions, and a quick way to pick up new abilities. We’ll review the best YouTube channels in this article so you can arm yourself with machine-learning skills. Two Minute Papers Anyone who enjoys staying current on the most recent research in machine learning will adore Two Minute Papers. Two Minute Papers creates nearly two-minute-long movies that describe academic papers. Check out the following series if you are interested in the research sector. Lex Fridman Podcast One of the most well-liked and effective machine learning YouTube channels is Lex Fridman Podcast. Its host is an AI researcher at MIT and elsewhere who focuses on machine learning, human-robot interaction, and autonomous cars. Lex discusses all things AI & ML with his guests. He isn’t, though, sticking to just this one theme. He also discusses various topics that motivate you to go beyond your boundaries and inspire, educate, and inspire. Insights from all the machine learning industry’s top stars, thought leaders, and top scientists. Elon Musk, Nick Bostrom, Andrew Ng, Yann LeCun, Vladimir Vapnik, Matt Botvinick, and many others were among those he spoke with. Sentdex This is the most incredible YouTube channel to learn about machine learning if you prefer to grasp things from the ground up. Sentdex’s owner, Harrison Kinsley, teaches individuals about various technologies, such as Python programming, web development, machine learning, etc. You should watch the following series created by Harrison Kinsley if you’re interested in learning how each algorithm works, such as how bias and intercept are adjusted at each epoch or how to develop a specific machine-learning algorithm from scratch. Data School Machine learning aficionados can learn from Kevin Markham, the creator of dataschool.io and the Data School YouTube channel. Kevin’s instruction will help you understand machine learning, regardless of your educational background. Kevin also creates videos covering various technologies that can be used to build machine learning models, such as pandas, NumPy, and scikit-learn. To fully understand the foundations of machine learning, binge-watch the following Kevin-created series. Artificial Intelligence – All in One On Artificial Intelligence – All in One channel, top instructors like Andrew Ng, Nitish Srivastava, and Geoffery Hinton give top-notch courses. Text mining, text retrieval, search engines, neural networks, and computer vision are some topics covered in the Artificial Intelligence – All in One course. You might wish to check into the following series, taught by Andrew Ng, to get a good grasp of machine learning concepts. Machine Learning with Phil Phil Tabor is a deep learning and machine learning engineer who makes instructional videos in these fields. He has put up a fantastic playlist of deep reinforcement learning courses in which he covers the fundamental ideas of the field, including deep deterministic policy gradients in TensorFlow 2, soft actor-critic in PyTorch, robotic control with TD3, and many others. Jeremy Howard Jeremy Howard is a data scientist with a background in philosophy. However, out of curiosity, he later combined his stats and programming skills to create the most efficient and user-friendly framework for deep learning jobs. Suppose you want to develop deep learning models for computer vision tasks like image segmentation, classification, and restoration that need the least amount of coding and produce the best results. In that case, fast.ai is the right choice. Before fast.ai, creating deep learning models was never that simple. To understand deep learning with the aid of the fast.ai library, you might want to binge-watch the following series. Kaggle On YouTube, there is a place called the Kaggle channel where you may explore the Kaggle community, learn, and work on data science projects. The channel provides lectures, thoughtful advice, and conversations with data scientists in its videos. Regardless of your work sector, this is one of the most incredible machine-learning YouTube channels for anyone wanting to experiment, learn new things, and apply those things to their work. Arxiv Insights Xander Steenbrugge is the owner of the channel Arxiv Insights. From a technical perspective, he condenses his main points while making them understandable to a broader audience. The channel is renowned for its intriguing material, despite the author not regularly uploading videos. Suppose you enjoy ML and AI technical analyses. In that case, this is the spot for you, but you would like a beautiful explanation of the challenging and technical subjects. DeepLearning.TV Deep Learning is the focus of DeepLearning.TV. The channel offers content on How-Tos, evaluations of software programs and libraries, and interviews with influential people within the industry. You may learn more about how deep learning operates by watching a series of concept films demonstrating the logic behind each Deep Learning technique. Springboard The focus of the Springboard channel is data science. There are talks on data science and machine learning with professionals from top businesses, a playlist on women in data science with fascinating dialogues with ML professionals, deep dives, or brief tutorials. This machine learning YouTube channel is fantastic for individuals who want to know how to obtain employment, what to look out for, and what it’s like to work in data science. The TWIML AI Podcast with Sam Charrington Check out the TWIML (This Week in Machine Learning) Podcast YouTube channel if you’re seeking the most recent information from machine learning. The most intriguing and significant stories from artificial intelligence and machine learning are collected weekly. It’s a fantastic resource for anyone looking to learn more about recent innovations and trends and gain insight from ML specialists. MITCSAIL The Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory’s research is showcased on the MITCSAIL YouTube channel (CSAIL). This channel’s educational material is excellent for people interested in computer science and artificial intelligence. There are many engaging videos showcasing cutting-edge robotics projects and AI research. Henry AI Labs In February 2019, Henry AI Labs became a member of the YouTube AI community. This YouTube channel updates the trendiest subjects from academic institutions and major corporations like Google. Natural language processing (NLP), computer vision, reinforcement learning, generative adversarial networks, and other AI and deep learning topics are covered. Applied AI Course The Applied AI Course offers video courses on data science, machine learning, and artificial intelligence. The creation of AI solutions is the main focus of this YouTube channel rather than discussing theoretical computer science. Uber Cab Demand Prediction, Microsoft Malware Detection, and Facebook Friend Recommendation Using Graph Mining are a few of its intriguing AI case studies and initiatives. Allen Institute for Artificial Intelligence This 2014-created YouTube channel features numerous conversations regarding embodied artificial intelligence. This includes films like Computing with a Mess, Biological Robots, 3D Dynamic Scene Graphs, and Certifiable Algorithms for Robot Perception, as well as films like Robots That Reason About Object Semantics, Task Planning and Reinforcement Learning for General-Purpose Service Robots, and Robots That Reason About Object Semantics. Clever Programmer Clever Programmer, founded in March 2016, is ideal for geeky programmers and creative thinkers. It includes several of the most popular programming languages. Its lessons may teach you how to program in Python, JavaScript, and R. You can view presentations and tutorials on SQL, Django, and the command line. Even real-time AI face identification and building your own AI automobile are skills you can acquire in Python. Machine Learning 101 Check out Machine Learning 101, a brand-new ML YouTube channel with explanatory videos on basic AI concepts. The track also airs podcasts with knowledgeable data scientists and people engaged in AI in the private sector. FreeCodeCamp The non-profit group FreeCodeCamp is fantastic. It is an open-source community that provides various tools so individuals may learn to code for nothing and do their own projects. Anyone can learn how to code on its website for nothing. Additionally, they offer a news site where they post articles about projects and programming. Giant Neural Network This YouTube channel aims to increase accessibility for everyone to machine learning and reinforcement learning. For a comprehensive introduction to neural networks for beginners, there is a playlist of 12 videos. It appears that a second intermediate neural network series is now in development. Andreas Kretz Data engineer Andreas Kretz founded Plumbers of Data Science. He offers videos with questions and answers about data engineering using Hadoop, Kafka, Spark, and other technologies, as well as live seminars on how to gain practical expertise. Tech with Tim Tim from Tech With Tim is a skilled programmer that teaches Python, Java, Machine Learning, and game creation with Pygame. He also produces advanced Python coding tutorials. Machine Learning University (MLU) Initiated by Amazon in 2016, Machine Learning University (MLU) has one clear goal: to teach as many staff members as feasible the technology, which is necessary for the business to perform the “magic” of providing products with this integrated technology. Note: We tried our best to feature the best AI Youtube Channels, but if we missed anything, then please feel free to reach out at [email protected] Please Don't Forget To Join Our ML Subreddit References: https://neptune.ai/blog/best-youtube-machine-learning-channelshttps://careerkarma.com/blog/ai-youtube-channels/https://medium.com/tech-cult-heartbeat/21-amazing-youtube-channels-for-you-to-learn-ai-machine-learning-and-data-science-for-free-486c1b41b92ahttps://towardsdatascience.com/7-essential-ai-youtube-channels-d545ab401c4https://www.kaggle.com/getting-started/169485 Prathamesh Ingle Prathamesh Ingle is a Consulting Content Writer at MarktechPost. He is a Mechanical Engineer and working as a Data Analyst. He is also an AI practitioner and certified Data Scientist with interest in applications of AI. He is enthusiastic about exploring new technologies and advancements with their real life applications | AI Research |
Meta says it’s created a generative AI tool for coding similar to GitHub’s Copilot.
The company made the announcement at an event focused on its AI infrastructure efforts, including custom chips Meta’s building to accelerate the training of generative AI models. The coding tool, called CodeCompose, isn’t available publicly — at least not yet. But Meta says its teams use it internally to get code suggestions for Python and other languages as they type in IDEs like VS Code.
“The underlying model is built on top of public research from [Meta] that we have tuned for our internal use cases and codebases,” Michael Bolin, a software engineer at Meta, said in a prerecorded video. “On the product side, we’re able to integrate CodeCompose into any surface where our developers or data scientists work with code.”
The largest of several CodeCompose models Meta trained has 6.7 billion parameters, a little over half the number of parameters in the model on which Copilot is based. Parameters are the parts of the model learned from historical training data and essentially define the skill of the model on a problem, such as generating text.
CodeCompose was fine-tuned on Meta’s first-party code, including internal libraries and frameworks written in Hack, a Meta-developed programming language, so it can incorporate those into its programming suggestions. And its base training data set was filtered of poor coding practices and errors, like deprecated APIs, to reduce the chance that the model recommends a problematic slice of code.
In practice, CodeCompose makes suggestions like annotations and import statements as a user types. The system can complete single lines of code or multiple lines, optionally filling in entire large chunks of code.
“CodeCompose can take advantage of the surrounding code to provide better suggestions,” Bolin continued. “It can also uses code comments as a signal in generating code.”
Meta claims that thousands of employees are accepting suggestions from CodeCompose every week and that the acceptance rate is over 20%.
The company didn’t address, however, the controversies around code-generating AI.
Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by allowing Copilot to regurgitate sections of licensed code without providing credit. Liability aside, some legal experts have suggested that AI like Copilot could put companies at risk if they were to unwittingly incorporate copyrighted suggestions from the tool into their production software.
It’s unclear whether CodeCompose, too, was trained on licensed or copyrighted code — even accidentally. When reached for comment, a Meta spokesperson had this to say:
“CodeCompose was trained on InCoder, which was released by Meta’s AI research division. In a paper detailing InCoder, we note that, to train InCoder, ‘We collect a corpus of (1) public code with permissive, non-copyleft, open source licenses from GitHub and GitLab and (2) StackOverflow questions, answers and comments.’ The only additional training we do for CodeCompose is on Meta’s internal code.”
Generative coding tools can also introduce insecure code. According to a recent study out of Stanford, software engineers who use code-generating AI systems are more likely to cause security vulnerabilities in the apps they develop. While the study didn’t look at CodeCompose specifically, it stands to reason that developers who use it would fall victim to the same. | AI Research |
For many online, Lensa AI is a cheap, accessible profile picture generator. But in digital art circles, the popularity of artificial intelligence-generated art has raised major privacy and ethics concerns.Lensa, which launched as a photo editing app in 2018, went viral last month after releasing its “magic avatars” feature. It uses a minimum of 10 user-uploaded images and the neural network Stable Diffusion to generate portraits in a variety of digital art styles. Social media has been flooded with Lensa AI portraits, from photorealistic paintings to more abstract illustrations. The app claimed the No. 1 spot in the iOS App Store’s “Photo & Video” category earlier this month. But the app’s growth — and the rise of AI-generated art in recent months — has reignited discussion over the ethics of creating images with models that have been trained using other people’s original work. Lensa is tinged with controversy — multiple artists have accused Stable Diffusion of using their art without permission. Many in the digital art space have also expressed qualms over AI models producing images en masse for so cheap, especially if those images imitate styles that actual artists have spent years refining. For a $7.99 service fee, users receive 50 unique avatars — which artists said is a fraction of what a single portrait commission normally costs. Companies like Lensa say they’re “bringing art to the masses,” said artist Karla Ortiz. “But really what they’re bringing is forgery, art theft [and] copying to the masses.” Prisma Labs, the company behind Lensa, did not respond to requests for comment.In a lengthy Twitter thread posted Tuesday morning, Prisma addressed concerns of AI art replacing art by actual artists. “As cinema didn’t kill theater and accounting software hasn’t eradicated the profession, AI won’t replace artists but can become a great assisting tool,” the company tweeted. “We also believe that the growing accessibility of AI-powered tools would only make man-made art in its creative excellence more valued and appreciated, since any industrialization brings more value to handcrafted works.”The company said that AI-generated images “can’t be described as exact replicas of any particular artwork.” The thread did not address accusations that many artists didn’t consent to the use of their work for AI training. For some artists, AI models are a creative tool. Several have pointed out that the models are helpful for generating reference images that are otherwise difficult to find online. Other writers have posted about using the models to visualize scenes in their screenplays and novels. While the value of art is subjective, the crux of the AI art controversy is the right to privacy. Ortiz, who is known for designing concept art for movies like “Doctor Strange,” also paints fine art portraits. When she realized that her art was included in a dataset used to train the AI model that Lensa uses to generate avatars, she said it felt like a “violation of identity.”Prisma Labs deletes user photos from the cloud services it uses to process the images after it uses them to train its AI, the company told TechCrunch. The company’s user agreement states that Lensa can use the photos, videos and other user content for “operating or improving Lensa” without compensation. In its Twitter thread, Lensa said that it uses a “separate model for each user, not a one-size-fits-all monstrous neural network trained to reproduce any face.” The company also stated that each user’s photos and “associated model” are permanently erased from its servers as soon as the user’s avatars are generated. The fact that Lensa uses user content to further train its AI model, as stated in the app’s user agreement, should alarm the public, artists who spoke with NBC News said. “We’re learning that even if you’re using it for your own inspiration, you’re still training it with other people’s data,” said Jon Lam, a storyboard artist at Riot Games. “Anytime people use it more, this thing just keeps learning. Anytime anyone uses it, it just gets worse and worse for everybody.” Image synthesis models like Google Imagen, DALL-E and Stable Diffusion are trained using datasets of millions of images. The models learn associations between the arrangement of pixels in an image and the image’s metadata, which typically includes text descriptions of the image subject and artistic style. The model can then generate new images based on the associations it has learned. When fed the prompt “biologically accurate anatomical description of a birthday cake,” for example, the model Midjourney generated unsettling images that looked like actual medical textbook material. Reddit users described the images as “brilliantly weird” and “like something straight out of a dream.” The San Francisco Ballet even used images generated by Midjourney to promote this season’s production of the Nutcracker. In a press release earlier this year, the San Francisco Ballet’s chief marketing officer Kim Lundgren said that pairing the traditional live performance with AI-generated art was the “perfect way to add an unexpected twist to a holiday classic.” The campaign was widely criticized by artist advocacy groups. A spokesperson for the ballet did not immediately respond to a request for comment.“The reason those images look so good is due to the nonconsensual data they gathered from artists and the public,” Ortiz said. Ortiz is referring to the Large-scale Artificial Intelligence Open Network (LAION), a nonprofit organization that releases free datasets for AI research and development. LAION-5B, one of the datasets used to train Stable Diffusion and Google Imagen, includes publicly available images scraped from sites like DeviantArt, Getty Images and Pinterest. Many artists have spoken out against models that have been trained with LAION because their art was used in the set without their knowledge or permission. When an artist used the site Have I Been Trained, which allows users to check if their images were included in LAION-5B, she found her own face and medical records. Ars Technica reported that “thousands of similar patient medical record photos” were also included in the dataset. “And now we are facing the same problem the music industry faced with websites like Napster, which was maybe made with good intentions or without thinking about the moral implications.”artist mateusz urbanowiczArtist Mateusz Urbanowicz, whose work was also included in LAION-5B, said that fans have sent him AI-generated images that bear striking similarities to his watercolor illustrations. It’s clear that LAION is “not just a research project that someone put on the internet for everyone to enjoy,” he said, now that companies like Prisma Labs are using it for commercial products. “And now we are facing the same problem the music industry faced with websites like Napster, which was maybe made with good intentions or without thinking about the moral implications.”The art and music industry abide by stringent copyright laws in the United States, but the use of copyrighted material in AI is legally murky. Using copyrighted material to train AI models might fall under fair use laws, The Verge reported. It’s more complicated when it comes to the content that AI models generate, and it’s difficult to enforce, which leaves artists with little recourse. “They just take everything because it’s a legal gray zone and just exploiting it,” Lam said. “Because tech always moves faster than law, and law is always trying to catch up with it.” There’s also little legal precedent for pursuing legal action against commercial products that use AI trained on publicly available material. Lam and others in the digital art space say they hope that a pending class action lawsuit against GitHub Copilot, a Microsoft product that uses an AI system trained by public code on GitHub, will pave the way for artists to protect their work. Until then, Lam said he’s wary of sharing his work online at all. Lam isn’t the only artist worried about posting his art. After his recent posts calling out AI art went viral on Instagram and Twitter, Lam said that he received “an overwhelming amount” of messages from students and early career artists asking for advice. The internet “democratized” art, Ortiz said, by allowing artists to promote their work and connect with other artists. For artists like Lam, who has been hired for most of his jobs because of his social media presence, posting online is vital for landing career opportunities. Putting a portfolio of work samples on a password-protected site doesn’t compare to the exposure gained from sharing it publicly.“If no one knows your art, they’re not going to go to your website,” Lam added. “And it’s going to be increasingly difficult for students to get their foot in the door.” Adding a watermark may not be enough to protect artists — in a recent Twitter thread, graphic designer Lauryn Ipsum listed examples of the “mangled remains” of artists’ signatures in Lensa AI portraits. Some argue that AI art generators are no different from an aspiring artist who emulates another’s style, which has become a point of contention within art circles. Days after illustrator Kim Jung Gi died in October, a former game developer created an AI model that generates images in the artist’s unique ink and brush style. The creator said the model was an homage to Kim’s work, but it received immediate backlash from other artists. Ortiz, who was friends with Kim, said that the artist’s “whole thing was teaching people how to draw,” and to feed his life’s work into an AI model was “really disrespectful.” Urbanowicz said he’s less bothered by an actual artist who’s inspired by his illustrations. An AI model, however, can churn out an image that he would “never make” and hurt his brand — like if a model was prompted to generate “a store painted with watercolors that sells drugs or weapons” in his illustration style, and the image was posted with his name attached.“If someone makes art based on my style, and makes a new piece, it’s their piece. It’s something they made. They learned from me as I learned from other artists,” he continued. “If you type in my name and store [in a prompt] to make a new piece of art, it’s forcing the AI to make art that I don’t want to make.” Many artists and advocates also question if AI art will devalue work created by human artists. Lam worries that companies will cancel artist contracts in favor of faster, cheaper AI-generated images.Urbanowicz pointed out that AI models can be trained to replicate an artist’s previous work, but will never be able to create the art that an artist hasn’t made yet. Without decades of examples to learn from, he said, the AI images that looked just like his illustrations would never exist. Even if the future of visual art is uncertain as apps like Lensa AI become more common, he’s hopeful that aspiring artists will continue to pursue careers in creative fields.“Only that person can make their unique art,” Urbanowicz said. “AI cannot make the art that they will make in 20 years.” | AI Research |
Subsets and Splits