The jackpot from a seven-year mission to bring back bits of an asteroid was unveiled on Wednesday. From a report: NASA officials in Houston displayed images of salt-and-pepper chunks of rock and particles of dark space dust that were brought back to Earth from the asteroid Bennu, and described initial scientific observations about the material. The mission, OSIRIS-REx, concluded in September when a capsule full of asteroid material was released to descend through Earth's atmosphere and was recovered in the Utah desert. The first pieces of material, found outside the primary container, were analyzed using a variety of laboratory techniques, revealing just the earliest findings. Scientists found water molecules trapped in clay minerals -- water from asteroids similar to Bennu could have filled Earth's oceans. "The reason that Earth is a habitable world, that we have oceans and lakes and rivers and rain, is because these clay minerals, like the ones we're seeing from Bennu, landed on Earth four billion years ago," Dante Lauretta, the mission's principal investigator, said during a NASA event on Wednesday. The materials also contained sulfur, key for many geological transformations in rocks. "It determines how quickly things melt and it is also critical for biology," said Dr. Lauretta, who displayed microscopic images and 3-D visualizations of the material. The scientists also found magnetite, an iron oxide mineral that can play an important role as a catalyst in organic chemical reactions. "We're looking at the kinds of minerals that may have played a central role in the origin of life on Earth," Dr. Lauretta said. The samples are also chock-full of carbon, the element that is the building block for life.
Chemistry and Material Sciences
Sign up for CNN’s Wonder Theory science newsletter. Explore the universe with news on fascinating discoveries, scientific advancements and more. NASA has launched its first mission to a mysterious world made largely of metal, on a journey to find out whether the asteroid is the exposed core of an early planetary building block from the beginning of the solar system. The Psyche mission lifted off at 10:19 a.m. ET Friday aboard a SpaceX Falcon Heavy rocket from NASA’s Kennedy Space Center in Florida. Psyche will be the first of many upcoming NASA science missions slated to launch aboard the Falcon Heavy rocket. The mission, named for the 16 Psyche asteroid it will observe, will travel about 2.2 billion miles (3.6 billion kilometers) over the next six years to reach the space rock, located in the outer part of the main asteroid belt between the orbits of Mars and Jupiter, said David Oh, Psyche chief engineer for operations at NASA’s Jet Propulsion Laboratory in Pasadena, California. “We’re launching a mission to an object that humankind has never before studied up close,” said Lindy Elkins-Tanton, Psyche principal investigator and foundation and regents professor at Arizona State University’s School of Earth and Space Exploration. “This will be our first time visiting a world that has a metal surface. There aren’t that many completely unexplored types of worlds in our solar system for us to go see, so that is what is so exciting about this.” Scientists believe the large M-type, or metallic, asteroid could be the exposed core of an early planetary building block and may resemble the cores of rocky planets in our solar system, such as Earth, Mars, Venus and Mercury. The core may have been exposed due to violent collisions with other rocky bodies early in the solar system’s formation. Ground and space-based telescopes have observed Psyche in the past, mainly detecting radiation bouncing off the reflective metal on the asteroid’s surface. 
Together, these observations have helped scientists develop a model of the asteroid’s shape. Psyche is about the size of Massachusetts without Cape Cod, with a surface area equivalent to that of California, Elkins-Tanton said. But even using the Hubble Space Telescope, Psyche appears as only a few pixels. “We do not know what Psyche looks like,” Elkins-Tanton said. “I always joke that it’s shaped like a potato because potatoes come in many shapes, so I’m not wrong. But we’re going to find out when we get there.”
The long road to Psyche
The Psyche mission, originally scheduled to launch in October 2022, has faced its share of delays. Last year, there wasn’t enough time to conduct a full checkout of the spacecraft’s flight software to ensure it was ready before the 11-week launch window closed. And this year’s launch window was pushed from October 5 to October 12 to give engineers enough time to adjust the temperature limits of the nitrogen cold gas thrusters on the spacecraft, which will be used to orient it in space. “There would have been a potential risk of overheating the thrusters and so it was a serious issue that we had to deal with,” said Henry Stone, Psyche project manager at NASA’s Jet Propulsion Laboratory. “And we would have probably discovered that right away had we not caught this and done the due diligence.” The Falcon Heavy rocket powered the beginning of the spacecraft’s long cosmic journey before separating. Falcon Heavy’s side boosters landed back at Cape Canaveral Space Force Station in Florida, causing a sonic boom that could be heard by local residents; they will be reused on future missions. Once in space, the team will spend three to four months going through initial checks of the spacecraft and its instruments. To accomplish the rest of the mission, the van-size spacecraft will rely on its new solar electric propulsion system, powered by Hall-effect thrusters, Oh said.
The thrusters will utilize the spacecraft’s large solar arrays and “use electricity to ionize xenon gas and accelerate those charged ions through an electric field to very, very high speeds,” Oh said. The result: a speed five times faster than that of the fuel released in a regular chemical rocket. The Psyche spacecraft will arrive at Mars in May 2026 and use the planet’s gravity to effectively slingshot its trajectory to Psyche. The mission will arrive at the asteroid in late July 2029 and spend 26 months orbiting the asteroid to map its surface, take images and determine whether Psyche truly is a metal core. The spacecraft will use different orbits around the asteroid, going from 440 miles (708 kilometers) away to just 40 miles (64 kilometers) above the surface. The Psyche mission’s imagers will begin transmitting data to Earth as soon as the spacecraft spots the asteroid. Also along for the ride is the Deep Space Optical Communications technology demonstration, or DSOC. Occurring during the first two years of the journey to Psyche, it will be NASA’s most distant experiment in high-bandwidth laser communications, testing the sending and receiving of data to and from Earth using an invisible near-infrared laser. The laser can send data at rates 10 to 100 times those of the traditional radio wave systems NASA uses on other missions. If the tech demo is successful, DSOC could one day be used to communicate with humans exploring Mars.
Secrets of a metal world
The spacecraft’s instruments will help scientists determine the asteroid’s chemical and mineral composition, topography, mass, gravitational field and rotation. The mission’s magnetometer will attempt to detect evidence of a magnetic field around Psyche, which could suggest that the space rock initially formed as a planetary core. Psyche, which has a high density, is largely made of iron and nickel, along with something else, which may be rock, sulfur or carbon-based, Elkins-Tanton said.
If Psyche isn’t a core, it could be a rare, leftover object from the formation of the solar system that has never been observed. “The other idea is Psyche is a kind of primordial unmelted body basically formed from the very first materials in the solar system that came together under gravity and was preserved in this primordial state ever since,” said Ben Weiss, Psyche deputy principal investigator and magnetometer lead. Weiss is a professor of planetary science at the Massachusetts Institute of Technology. Psyche may have formed near the sun and then was reduced to metal over time as oxygen atoms were stripped away from iron atoms — an object hypothesized but never found. The science team is eager to see the surface features of the asteroid. Radar indicates that there are two large craters on the surface. But what does a metal crater look like? Psyche may have little metal spikes, spires and even tiny pieces that resemble a type of metal sand within the crater, said Elkins-Tanton. It’s also possible that Psyche experienced volcanic eruptions, creating huge cliffs and greenish-yellow lava flows due to their sulfur content. “This is our scientifically motivated idea, almost certain to be completely wrong,” Elkins-Tanton said. “It’s going to surprise us when we get there. I think there’s a very good chance that it’s going to be outside of our imaginings, and that is my fondest hope.”
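The practical payoff of the fivefold exhaust-velocity advantage that Oh describes for the Hall-effect thrusters can be sketched with the Tsiolkovsky rocket equation. The numbers below are illustrative assumptions (a typical ~4.4 km/s chemical exhaust velocity and an arbitrary 5 km/s maneuver budget), not actual Psyche mission values:

```python
import math

# Tsiolkovsky rocket equation: for a given delta-v, the fraction of a
# spacecraft's initial mass that must be propellant depends only on the
# ratio of delta-v to exhaust velocity.

def propellant_fraction(delta_v_kms: float, exhaust_velocity_kms: float) -> float:
    """Propellant mass fraction required to achieve the given delta-v."""
    return 1.0 - math.exp(-delta_v_kms / exhaust_velocity_kms)

CHEMICAL_VE = 4.4              # km/s, typical high-performance chemical engine (assumed)
ELECTRIC_VE = 5 * CHEMICAL_VE  # "five times faster," per the article
DELTA_V = 5.0                  # km/s, arbitrary example maneuver budget

print(f"Chemical rocket: {propellant_fraction(DELTA_V, CHEMICAL_VE):.0%} of launch mass is propellant")
print(f"Hall thruster:   {propellant_fraction(DELTA_V, ELECTRIC_VE):.0%} of launch mass is propellant")
```

With these assumed numbers, the chemical vehicle needs roughly two-thirds of its launch mass in propellant while the electric one needs about a fifth, which is why ion propulsion suits long cruises like Psyche's despite producing very little thrust.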
Chemistry and Material Sciences
JOHNSON SPACE CENTER, Texas—As they unveiled the first samples recovered from an asteroid on Wednesday, scientists were giddy at the prospect of what this material will tell us about the origin of our planet and possibly even ourselves. After seven years in space, a small spacecraft carrying samples from the asteroid Bennu landed in a Utah desert in late September. Following carefully choreographed procedures to prevent the contamination of the asteroid dust and rocks by life on Earth, the samples were transferred to a clean room at Johnson Space Center in Houston two weeks ago. Since then, scientists have examined some of the material that was collected outside of the primary container to glean some initial insights. They revealed some of their first data during an event at the center on Wednesday. "Boy, did we really nail it," said Dante Lauretta, a scientist from the University of Arizona who is the principal investigator of the OSIRIS-REx mission. Scientists have not even opened the main container yet, a process that will unfold in the coming weeks as the cataloging of all this material begins. Before the launch of this mission, scientists said the recovery of 60 grams of material would be considered a success. While the effort to determine the overall mass is ongoing, Lauretta said early estimates are that the sample return mission collected about 250 grams of pebbles and dust from the surface of Bennu. After the material is cataloged, it will be loaned out in small quantities to 230 scientists across 35 countries who are members of the Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer, or OSIRIS-REx, mission team.
Digging into our origins
So what's the big deal about recovering pristine samples from the surface of an asteroid?
The big deal is that Bennu, an asteroid in a near-Earth orbit that is about one-half kilometer across, is believed to be a time capsule for the types of rocks and chemicals that existed when the planets formed in our Solar System more than 4 billion years ago. By studying Bennu, scientists are looking back to that primordial era when Earth began transitioning from an extremely hot world with a hellish surface environment into something more like a mud ball. Poking these pebbles and rocks with sophisticated equipment here on Earth may allow Lauretta and the other scientists to answer questions about how terrestrial planets like Earth and Mars formed and possibly whether asteroids seeded Earth with the building blocks for life. In a preliminary analysis of some of the dust, Lauretta said scientists hit the jackpot with a sample that is nearly 5 percent carbon by mass and has abundant water in the form of hydrated clay minerals. It is highly plausible that asteroids like this delivered the vast majority of the water now found in Earth's oceans, lakes, and rivers billions of years ago. By piecing together clues from the asteroid dust—both its water and organic molecules—the scientists believe they may better understand how Earth went from an uninhabited mudball to the world teeming with life today. "This is incredible material," said Daniel Glavin, a co-investigator on the mission. "It’s loaded with organics. If we're looking for biologically essential organic molecules, we picked the right asteroid, and we brought back the right sample. This is an astrobiologist's dream."
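The figures reported above allow a quick back-of-the-envelope check. A minimal sketch, using only the approximate numbers from the article (about 250 grams collected, nearly 5 percent carbon by mass, and a 60-gram pre-launch success threshold):

```python
# Back-of-the-envelope check on the reported OSIRIS-REx sample figures.
# All inputs are approximate values quoted in the article, not official data.

sample_mass_g = 250        # early estimate of total collected material
carbon_fraction = 0.05     # "nearly 5 percent carbon by mass"
success_threshold_g = 60   # pre-launch bar for mission success

carbon_mass_g = sample_mass_g * carbon_fraction
print(f"Estimated carbon in the sample: {carbon_mass_g:.1f} g")
print(f"Collected {sample_mass_g / success_threshold_g:.1f}x the success threshold")
```

On those numbers the mission brought back roughly four times its success threshold, with about 12.5 grams of the haul being carbon.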
Chemistry and Material Sciences
In summer 2018, a group of scientists set off on a daring expedition. Their goal: to find life that could survive and thrive in the hottest places on Earth. This mission took them to Ambrym Island in the Vanuatu archipelago, a series of geologically intriguing islands in the South Pacific Ocean, home to one of the planet’s most dangerous volcanoes. Surrounded by rugged cliffs and sloping plateaus of multi-colored stone, the island’s Marum Crater contains a 7.4-mile-wide caldera of boiling lava that spits up more plumes of poisonous gas than any other fissure in the world. Jeffrey Marlow, an assistant professor of biology at Boston University, and his team delved into this netherworld on the hunt for extremophiles, atypical microorganisms that have evolved to survive the most extreme conditions on Earth. Some of these creatures can withstand near-freezing temperatures and crushing pressures in the deep sea, or the high acidity of sulfuric pools. Deinococcus radiodurans can survive under UV light or in vacuums without oxygen, conditions that typically deter any microbial growth. NASA even dubbed it Conan the Bacterium because of its unusual durability. Astrobiologists suspect it could be extraordinarily useful in outer space as astronauts work toward setting up long-term, self-sufficient colonies on other worlds. Marlow’s team spent days trying to secure extremophiles that withstand nearly 2,000-degree volcanic heat. They wondered what kinds of internal chemical conditions allow these microbes to inhabit fresh surfaces on Earth—a question that challenges our current notions of habitability. “Environmentally, it’s just the craziest place I’ve ever seen,” he says. “[The lake] was just always kind of humming away at this background level.” Though the Marum Crater lava lake has since disappeared, the insights the team gleaned have vital implications for modeling how microbial populations survive in other kinds of perilous locations.
The deeper they dug into the volcanic sediment, the more they found evidence that some hidden subsets of microbes can survive side-by-side, both in protected niches and amid specific kinds of chaos. This suggests that the volcanic environment likely provides a valuable space for microbes to grow sustainably. It’s further proof that life finds many ways to adapt to the galaxy’s harshest circumstances. Marlow’s work contributes vastly to what we know about the inner processes of these mysterious organisms. Scaled up, microbes like this could one day make an appealing tool to help terraform other planets, transforming them from barren wastelands into environments where complex organisms, like humans, could prosper. “Evolution has been equal to the task of dealing with the environmental diversity we have on Earth,” Marlow says. “The question is, as those environments get even more diverse as we look beyond Earth—can life handle that diversity as well?”
A Complex Existence
Fungi—both the garden variety and the kitchen variety that creeps up on stale bread—are a key reason why Earth is so rife with organic activity. First studied in earnest in the late 19th century, these microbes were once viewed as mere propagators of infectious disease. Since then, modern research tools have revealed that these incredibly useful, and at times lethal, microorganisms exist everywhere around us and fulfill a multitude of crucial functions for the good of the environment. But could their power potentially be harnessed to transform even the most inhospitable of worlds into an agricultural idyll? This idea isn’t so far-fetched, Marlow says. Microbes already have a history of terraforming Earth. “The idea that now our entire atmosphere is oxygenated and all animals depend on oxygen,” he says.
“In the early phases of life on Earth, that would have been impossible to consider.” Billions of years ago, when oxygen was toxic to most existing life, microbes called cyanobacteria began using photosynthesis to gain energy from sunlight and, as a consequence, released large quantities of O2 into the atmosphere. This changed Earth’s biosphere into one that was suitable for other kinds of life. And multicellular organisms quickly rolled onto the scene. Today, trillions of microbes regulate chemicals in our soil, food, and bodies. When microbes devour dead matter, they release and recycle essential chemicals such as oxygen and nitrogen back into the atmosphere. These chemical nutrients are then used to spur new life, rejuvenating plants and animals alike in an ever-evolving cycle. To the untrained eye, this process is just about indistinguishable from magic. As space agencies prepare to send humans hundreds of millions of miles away from home, scientists are looking harder than ever for ways to turn strange worlds, like Mars, into places where future explorers could confidently settle. Could the solution lie with microbes sent to distant destinations in preparation for our arrival?
Microbial Pilgrimage
The moment astronauts step foot on Mars for a terraforming mission, they are up against great odds, as they deal with an extreme lack of, well—everything. Mars is roughly half the size of Earth, its atmosphere is about 100 times thinner, and it lacks many of the chemical nutrients life has come to depend on. It’d be impractical to stock a spaceship with all of the supplies needed for an interplanetary move. With no way to receive care packages or technological aid from Earth, astronauts would need to pack enough nutrients to last the duration of the journey and jumpstart the process of terraforming an alien world. And while the two planets have similar terrains, our cosmic neighbor has an inhospitable deficit of liquid water.
To create a successful off-world utopia, the first thing human settlers will have to do is cultivate a continuous supply of food, water, and oxygen. Josh Silverman, CEO of Windfall Bio, an agricultural technology company that uses methanotrophs (methane-eating microbes) to aid farmers in sustainable farming, says explorers will need to create food chains that rely on small, self-sustaining, self-replicating platforms. On Earth, we take for granted the role microbes play in keeping the landscape lush and ecosystems well-fed. On another planet, our ability to create advanced ecosystems from scratch depends on our ability to close the loop. In this case, that means ensuring that necessary chemical and nutrient resources aren’t lost in the transition between one state of matter and the next. “If you’re going to be on Mars, one of the most important things is making sure you’re not wasting any resources,” Silverman says. “On Mars, every molecule is valuable.” Several machines on the International Space Station rely on this same closed loop concept. They recycle the carbon dioxide astronauts exhale back into clean air and distill liquids like urine and sweat into drinking water. Additionally, because it’s more stable than liquid fuel, methane has also become a promising fuel for future spacecraft, particularly for long, arduous journeys to Mars. Space agencies like NASA have long declared that technologies far beyond our current capabilities are needed before humans can come anywhere close to successfully settling down on another world. Modern shortcomings aside, some popular Martian terraforming concepts are wacky: one proposes redirecting an icy asteroid’s orbit so it crashes and produces liquid water on the planet’s surface; retired NASA scientist Jim Green has suggested building a large magnetic shield to block the sun’s rays from stripping away Mars’s atmosphere. 
Though we’re still in the early days of the 21st century, humanity has successfully altered the course of a faraway asteroid, and we may be one step closer to deploying the perfect solution. One of the most widely agreed-on priorities in a brand new world would be seeding microbes or genetically engineered plants on the surface to help create a stable, habitable atmosphere. Another idea would be to use them to cultivate the soil. It sounds like a relatively easy way to conquer a planet: grow some microbes and just let them loose. But even as advances in synthetic biology may accelerate this terraforming process, we most likely wouldn’t see results within our lifetimes. “It’s appealing and tempting to draw out molecular pathways and say, we can just re-engineer this microbe to do new functions we want [it to] do,” says Marlow. “But it’s really hard to move from a theory and a proof of principle into something that is that scale and can survive in a natural environment.” Moreover, not every planet will have as friendly a constitution as Earth’s. In 2022, scientists demonstrated that with cyanobacteria’s help, the plant alfalfa could grow in soil that mimicked Martian regolith. By finding a way for the plant to flourish, they essentially created a “circular farming process,” with minimal need for external inputs, closed nutrient loops, and regenerative soils. Experts believe we’ll need such circular processes if we hope to grow food on another planet. Faced with the prospect of vibrant or even dead alien ecosystems, we’d also have to figure out how seeded microbes might propagate if left in places with no discernible self-sustaining energy source. Who knows how these complex “little creatures” might evolve once inextricably mired in otherworldly ecosystems?
Closing The Loop
Marlow’s team left Marum Crater with more questions than answers.
However, by studying the microbial populations protected beneath the volcanic sediment that dots the perimeter of the lava lake, they got an up-close-and-personal glimpse of what the first ecosystems on Earth may have looked like, at least from a microbial perspective. The benefits of this research won’t apply only in space, where future astronauts will be without natural water sources and food stores. The power of microbes could just as easily be used to address agricultural issues here on Earth. According to the United Nations, the world’s population will be on the cusp of 10 billion by 2050, and the agricultural sector will need to increase food production by almost 70% to account for our growing needs. Lisa Dyson, a scientist and CEO of Kiverdi, a technology company committed to using microbes to manufacture sustainable commercial products, noted during a 2016 TED Talk in Paris that microbes, whose value is often largely overlooked and understudied, could help with such a task. “This type of technology can actually help us address an issue with agriculture and allow us to create a kind of agriculture that’s sustainable, that will allow us to scale to meet the demands of tomorrow,” Dyson said. With the aid of supercharged microbes that recycle carbon, called hydrogenotrophs, it could even be possible to turn ecologically sparse or difficult-to-access areas into viable farmland. Pouring all of our resources into turning nature’s smallest organisms into miracle workers could be a hopeful turn for both Earth’s climate troubles and the global hunger crisis. It’s a goal Silverman envisions his company might help us reach, especially if more people begin to recognize microbes’ potential to change the world as we know it. “Biology is a great platform for making really interesting things,” Silverman says. “It’s slower, but it actually can work if you give it long enough. It builds itself.”
Chemistry and Material Sciences
On the outskirts of the small town of Folschviller in eastern France stand three nondescript sheds. One of these temporary structures has recently become a hive of activity, drawing a continuous stream of visitors, including scientists, journalists, and the public. The shed sits above a borehole first drilled in 2006 and houses a gas measurement system called SysMoG, which was originally developed to determine the underground methane concentration. While the device did detect almost pure methane (99 percent) at a depth of 650 meters, probing further down the borehole resulted in an unexpected and surprising discovery: hydrogen in high concentration. “At 1,100 meters, the concentration of dissolved hydrogen is 14 percent. At 3,000 meters, the estimated concentration could be as high as 90 percent,” Jacques Pironon, director of research at the GeoRessources lab at the Université de Lorraine, said. Based on the estimates of methane resources and the concentration of hydrogen detected so far, scientists have conjectured that the Lorraine region in eastern France, of which Folschviller is a part, could contain 46 million tons of white—or naturally produced—hydrogen. That would make it one of the world’s largest known hydrogen deposits.
Looking for something else
This remarkable discovery was not the objective of the project, called Regalor. Instead, it aimed to determine the feasibility of methane production in the Lorraine region and to record the presence of traces of other gases. “Our original research was related to the study of carboniferous sediments in northeast France. This was important as Lorraine was one of France’s largest coal-producing regions,” Pironon said. According to Pironon, after coal production ended in Lorraine in 2004, a company called Francaise de L’Energie proposed sourcing methane from the region’s vast coal fields.
Following this, the regional government sought the expertise of Laboratoire GeoRessources to determine if such a project was realistic. “As a part of this project, we developed a new tool with the French-Swiss company Solexperts. It consists of a patented probe, SysMoG, that can be lowered by a logging winch to depths of 1,500 meters,” he said. The probe is equipped with a membrane that allows the separation of gases from the water in which they are dissolved. “As a result, the gases can be pumped to the surface for analysis. This is a new concept. In the past, determining the presence of gases underground involved bringing the water from underground to the surface and degassing the water for the analysis.” To detect the presence of methane and to characterize it, a 1,250-meter borehole, which had been drilled more than 15 years ago, was monitored. At 600 meters, the team found that 99 percent of the gas dissolved in the water was methane, while just 1 percent was hydrogen, which, according to Pironon, isn’t out of the ordinary. “Later, we had the opportunity to make measurements down to the bottom of the borehole. We noticed a constant increase in hydrogen from 600 to 1,250 meters. We detected the hydrogen concentration to be around 20 percent at the bottom of the borehole,” he remarked.
How big is the deposit?
Soon, the researchers will start taking measurements in three other boreholes at similar depths to understand whether the hydrogen concentration remains high as you move laterally from the site of the original borehole. “If the concentration is similar, the next step, which is being discussed with the authorities, would be to drill a hole 3,000 meters deep to validate the evolution of the hydrogen concentration with depth,” he said. The deeper borehole could also turn up another surprise. “Besides knowing the level of hydrogen concentration, we will also know if hydrogen is present in dissolved form or in a gaseous state at these depths,” Pironon said.
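The 90 percent figure quoted earlier is consistent with a simple straight-line extrapolation of the reported measurements. The sketch below is only a back-of-the-envelope check using the two deepest data points from the article; the researchers' actual modeling is not described:

```python
# Rough consistency check on the Lorraine hydrogen figures reported above.
# Inputs are the article's approximate measurements; the linear model is
# an assumption for illustration, not the researchers' method.

def linear_extrapolate(p1, p2, depth_m):
    """Extend the straight line through two (depth, concentration) points."""
    (d1, c1), (d2, c2) = p1, p2
    slope = (c2 - c1) / (d2 - d1)  # percentage points per meter
    return c1 + slope * (depth_m - d1)

# ~14% dissolved H2 was reported at 1,100 m, and ~20% at 1,250 m.
estimate = linear_extrapolate((1100, 14), (1250, 20), 3000)
print(f"Extrapolated H2 concentration at 3,000 m: {estimate:.0f}%")
```

Extending that gradient to 3,000 meters gives 90 percent, matching the upper estimate Pironon cites; the planned deeper borehole would show whether the trend actually continues.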
This study could also shed light on the source of this hydrogen. According to Pironon, there are two hypotheses, one of which is related to the presence of the mineral siderite. “Hydrogen could be produced by the reaction between water and siderite, which is made of iron carbonates. We consider that the siderite could be oxidized by water molecules to produce hydrogen. The oxygen then combines with iron to produce iron oxide.” According to Pironon, the other hypothesis relates the hydrogen’s presence to the chemical processes that form coal, which, along with releasing methane, can also produce hydrogen. Given that Lorraine has one of the world’s biggest coal basins and boasts 6 km of sediments, Pironon and his research colleague Philippe de Donato are keeping their fingers crossed that the new tests will confirm the estimates of hydrogen present under the ground on which the three innocuous sheds stand. With hydrogen tipped to be the fuel of the future because of its promise of net-zero emissions, the discovery of vast quantities of what’s termed “white hydrogen” could prove to be a shot in the arm for the transition to clean energy. Dhananjay Khadilkar is a journalist based in Paris.
Chemistry and Material Sciences
Why we don’t need to go to space to mine its resources
Pune, Sep 25 (360info) Meteorites could show us the perfect chemical models we need to end the natural resource crisis — but we don’t have to leave Earth to mine them. The world's fascination with asteroids knows no bounds, from ongoing interest in space travel to some cultures putting meteorites in shrines. In 2022, the auction house Christie's landed a record-breaking price of around USD 189,000 for a meteorite that originated from the Moon. But meteorites are also abundant, with resources worth billions of dollars, and there is plenty of material hiding in plain sight on Earth for those who know where to look. Nearly 40,000 tonnes of meteoritic chunks and dust fall to Earth each year. They land in oceans and on land, and are most visible in desolate regions such as Antarctica, Siberia, the Sahara, the Canadian Steppes and the Australian outback. The true commercial value of fallen meteorites is yet to be realised, but some projections put the global market for space mining at USD 1.99 billion by 2027. Graphene, a material China identified as a top national priority and which the European Union invested EUR 1 billion (USD 1.07 billion) in developing, has been found in two meteorites: one that fell in Allende, Mexico, and the other in the Queen Alexandra Range of Antarctica. Graphene, a form of carbon made up of a single layer of atoms, has high conductivity and super strength and is seen as a wonder material with a huge range of applications.
Tetrataenite — an iron-nickel alloy found only in meteorites — is being touted as a replacement for rare earth minerals used in permanent magnets, a product used in car manufacturing, computing and many other everyday items. Not every meteorite holds materials as valuable as graphene or tetrataenite. But they might offer the perfect mineralogical and chemical models of what needs to be replicated on Earth to help resolve our most pressing demand for critical minerals. However, mining asteroids or the Moon is not easy, given that we have not prospected any lunar zone suitable for mining commercial quantities. Also, the cost of extraterrestrial mining, removing gangue, and returning ore to Earth would be enormously higher than that of present-day sample return missions. It seems more practical and economical to extract from meteoritic material back on Earth instead of sending humans into space to extract ores. Eventually, humans are likely to routinely extract resources from extraterrestrial bodies in order to sustain a presence on bases away from Earth, such as on asteroids, when we develop the capacity. That capacity is still a way off. Sample return missions to asteroids and comets — NASA's Stardust and OSIRIS-REx, and JAXA's Hayabusa and Hayabusa2 — have so far returned only modest quantities of extraterrestrial material, ranging from milligrams to a couple of hundred grams. For the short term, space mining remains unaffordable, unless these missions yield a magical 'hardtofindium', the mythical substance that fuelled a battle suit in a Green Lantern spin-off. In the real world, it would take a substance so valuable that a few micrograms could fetch billions to make space mining feasible. Experts forecast a golden era of space exploration. The words are laced with optimism, but until we develop the capacity, that optimism is a little unrealistic.
Asteroid mining will happen — perhaps it has always happened, if American meteoriticist Harvey Nininger's studies count — but the modern narrative has made us miss more realistic solutions. Any notion that the natural resource needs of humans can only be met by mining asteroids or the Moon is misplaced. Sustainable Development Goal 12 and initiatives such as India's Lifestyle for Sustainable Environment advocate responsible consumption and production. Solutions to the increasing terrestrial demand for water and minerals can be found in our backyard. The cosmos is ready to help if we are prepared to peer into meteorites we can already find on Earth.
Chemistry and Material Sciences
Chandrayaan 3's Game-Changing Revelation on Sulphur Origins New Delhi, August 31, 2023 - In a groundbreaking development that has sent shockwaves through the scientific community, India's Chandrayaan 3 mission has unlocked a long-standing lunar mystery – the origin of sulphur on the moon's surface. The Pragyan rover's latest insights have shattered conventional wisdom, prompting experts to question whether the sulphur comes from volcanic activity, meteoric impacts, or an entirely intrinsic source. Since its historic landing on August 23 in the moon's south pole region, Chandrayaan 3's instruments have been working tirelessly to unveil the moon's hidden truths. Pragyan's initial confirmation of sulphur and oxygen presence in the lunar soil left researchers astounded. But the recent revelation by the Alpha Particle X-ray Spectrometer (APXS) instrument has added another layer of complexity to the sulphur puzzle. The presence of sulphur on the moon is a puzzle that challenges existing theories. Traditionally associated with volcanic activities on Earth, the lunar sulphur's source has long remained enigmatic. The initial detection was carried out by the Laser-Induced Breakdown Spectroscope, followed by APXS's confirmation of not only sulphur but also several minor elements. The Indian Space Research Organisation (ISRO) has affirmed that these findings will force scientists to reevaluate their understanding of the moon's sulphur source. The debate now encompasses the possibility of intrinsic lunar processes, recent volcanic eruptions, or meteoric impacts. These revelations have also ignited speculation about potential water ice on the lunar surface, further enhancing the moon's scientific intrigue. The APXS instrument's contribution to this discovery is immense. Designed for in-situ analysis of planetary bodies with minimal atmosphere, it deploys radioactive sources that emit alpha particles and X-rays onto surface samples. 
By analyzing the resulting characteristic X-ray lines, researchers can decipher the elemental composition and abundances of the sample. Chandrayaan 3's continuous revelations underscore the importance of space exploration and the enigmatic nature of celestial bodies. As scientists grapple with these new findings, the moon's sulphur source remains a tantalizing riddle, waiting for humanity to unravel its secrets.
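As a rough illustration of how characteristic X-ray lines map to elements, Moseley's law predicts a K-alpha line energy from the atomic number alone. The sketch below is a first-order approximation that ignores screening and relativistic corrections, and it is not how APXS is actually calibrated; the element choices are illustrative.

```python
# Moseley's law, first order: E(K-alpha) ~ 13.6 eV * (Z - 1)^2 * (1 - 1/4)
# Measured K-alpha lines for comparison: S ~2.31 keV, Ti ~4.51 keV, Fe ~6.40 keV.

RYDBERG_EV = 13.6  # hydrogen ground-state binding energy in eV

def kalpha_energy_ev(z):
    """Approximate K-alpha X-ray energy (eV) for atomic number z."""
    return RYDBERG_EV * (z - 1) ** 2 * (1 - 1 / 4)

for symbol, z in {"S": 16, "Ti": 22, "Fe": 26}.items():
    print(f"{symbol} (Z={z}): K-alpha ~ {kalpha_energy_ev(z) / 1000:.2f} keV")
```

Even this crude formula lands within about one percent of the measured sulphur line, which is why each element's X-ray fingerprint is so distinctive in APXS spectra.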
Chemistry and Material Sciences
A collision between two neutron stars, tightly bound on a decaying orbit, appears to be a relatively rare event. In the entire Milky Way galaxy, of all its 100 billion stars, scientists reckon there are only around 10 neutron star binaries destined for a collision. To date, we've only detected a handful of the kilonova explosions that follow such a collision, and none in our own galaxy. But what if there were a kilonova in the Milky Way? What would that mean for life on Earth? According to a team led by physicist Haille Perkins of the University of Illinois Urbana-Champaign, nothing too good, if the explosion were within a certain proximity. This information probably isn't going to help much if two neutron stars did collide nearby, but it's nice to know just how deadly space can be. The team's findings, uploaded to the preprint server arXiv, suggest that the biggest danger comes from cosmic rays, which are lethal out to a distance of 35 light-years, assuming we're not in the path of the kilonova's gamma-ray burst. If we are in that path, the fatal proximity extends to 300 light-years, but we would need to be in exactly the wrong spot. This helps us assess threats to our own existence, and will also allow scientists to determine the likelihood of life surviving on alien worlds in proximity to these star-shattering events, which erupt with powerful, atmosphere-stripping radiation. Binary neutron star collisions, based on the scant handful we have observed, have several components. There's usually a short-duration burst of gamma radiation (the gamma-ray bursts from collapsing single stars last longer), erupting out in a pair of narrow jets on either side of the colliding stars, as well as gamma-ray cocoons around each jet, which form as the jet tries to punch through material shed during the kilonova. When the jets slam into the surrounding interstellar medium, they produce powerful X-rays; that's known as the X-ray afterglow.
And, as the situation evolves over years to centuries, a bubble of cosmic rays expands out into space from the center of the collision. Perkins and her team investigated how these events could affect a planet, based on what we know from the first neutron star collision ever detected, GW170817. They found that any living thing within the narrow range of the jet out to a distance of 91 parsecs (297 light-years) is probably toast due to the powerful gamma radiation. Outside of that narrow range, however, it is somewhat safer. You'd need to be within a distance of around 13 light-years to be hit by gamma radiation from the cocoon structures. Both these threats would only persist for a short time; they'd strip Earth's stratospheric ozone, which would take about four years to recover. The X-radiation, the team found, is much more lethal, as the afterglow lasts much longer than the gamma-ray emission. However, you'd still need to be relatively close, within about 16 light-years. It's the long-term bubble of cosmic rays that poses the most significant threat, the researchers found. Accelerated by the kilonova remnant, these particles would strip the ozone layer, leaving Earth vulnerable and pelted with radiation for up to thousands of years. That would trigger a devastating mass extinction. For that, we'd need to be within about 35 light-years of the source. So we're probably relatively safe from this kind of death, for now. In fact, we're far more at risk from something right here in the Solar System: giant flares from our own Sun. After that, we should worry about asteroid impacts, and then supernovae. Kilonovae are, in this list at least, the least of our worries, despite the dangers they can pose. "The rarity of binary neutron star mergers combined with a small range of lethality means that they are probably not important threats to life on Earth.
We find that the mean recurrence time of lethal mergers at the location of the Sun is much larger than the age of the Universe," the researchers write. "However, even if it never induced a mass extinction, a nearby kilonova event would be visible on Earth. It would likely disrupt technology soon after the merger and remain bright in the sky for over a month." Well, that's something to look forward to at least.
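The study's distances are quoted in both parsecs and light-years; a small sketch keeps the units straight (1 parsec is about 3.262 light-years):

```python
LY_PER_PARSEC = 3.26156  # light-years per parsec (IAU definition)

def parsecs_to_ly(pc):
    """Convert a distance in parsecs to light-years."""
    return pc * LY_PER_PARSEC

# The jet lethality radius from the study: 91 parsecs.
print(f"91 pc = {parsecs_to_ly(91):.0f} light-years")  # ~297 light-years
```

The 91-parsec jet range works out to roughly 297 light-years, matching the figure quoted in the article.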
Chemistry and Material Sciences
Lava worlds, massive exoplanets home to sparkling skies and roiling volcanic seas called magma oceans, are distinctly unlike the planets in our solar system. To date, nearly 50% of all rocky exoplanets discovered have been found capable of maintaining magma on their surfaces, likely because these planets orbit so close to their host stars that they complete an orbit in fewer than 10 days. Being so close exposes a planet to harsh space weather and forces surface temperatures to extremes, making it all but completely inhospitable to life as we know it today. Now, in a new study, scientists have shown that these sweeping molten oceans have a large influence on the observed properties of hot rocky Super-Earths, such as their size and evolutionary path. Their research, published recently in The Astrophysical Journal, found that due to lava's extremely compressible nature, oceans of magma can cause lava-rich planets without atmospheres to be modestly denser than similarly sized solid planets, as well as affect the structure of their mantles, the thick inner layer that surrounds a planet's core. Even so, since these objects are notoriously under-studied, it can be difficult to characterize the fundamental workings of lava planets, said Kiersten Boley, lead author of the study and a graduate student in astronomy at The Ohio State University. "Lava worlds are very odd, very interesting things and because of the way we detect exoplanets, we're more biased to finding them," said Boley, whose research revolves around understanding what essential ingredients make exoplanets unique and how tweaking those elements, or in the case of lava worlds, their temperatures, can completely change them. One of the most well-known of these mysterious burning worlds is 55 Cancri e, an exoplanet about 41 light-years away that scientists describe as home to both sparkling skies and roiling lava seas.
While there are objects in our solar system, such as Jupiter's moon Io, that are extremely volcanically active, there are no true lava planets in our stretch of the cosmos that scientists can get up close and personal to study. However, investigating how the composition of magma oceans contributes to the evolution of other planets, such as how long they stay molten and why they eventually cool down, can offer clues into Earth's own fiery history, said Boley. "When planets initially form, particularly for rocky terrestrial planets, they go through a magma ocean stage as they're cooling down," said Boley. "So lava worlds can give us some insight into what may have happened in the evolution of nearly any terrestrial planet." Using the exoplanet interior modeling software ExoPlex and data collected from previous studies to construct a module that included information on several types of magma compositions, the researchers simulated several evolutionary scenarios of an Earth-like planet with surface temperatures between 2,600 and 3,860 degrees Fahrenheit -- the range over which the planet's solid mantle would turn to liquid. From the models they created, the team was able to discern that the mantles of magma ocean planets can take one of three forms: the first, in which the entire mantle is completely molten; the second, where a magma ocean lies at the surface; and a third, sandwich-like structure that consists of a magma ocean at the surface, a solid rock layer in the middle and another layer of molten magma closest to the planet's core. The results suggest that the second and third forms are slightly more common than planets that are completely molten. Depending on the composition of their magma oceans, some atmosphere-free exoplanets are better than others at trapping volatile elements -- compounds such as oxygen and carbon necessary to the formation of early atmospheres -- for billions of years.
For example, the study notes that a basal magma class planet that is 4 times more massive than Earth can trap more than 130 times the mass of water in Earth's oceans today, and about 1,000 times the amount of carbon currently present in Earth's surface and crust. "When we're talking about the evolution of a planet and its potential to have different elements that you would need to support life, being able to trap a lot of volatile elements within their mantles could have greater implications for habitability," said Boley. Lava planets are a long way from becoming habitable enough to support life, but it's important to understand the processes that help these worlds get there. Nevertheless, this study makes clear that measuring density isn't the best way to characterize these worlds when comparing them to solid exoplanets, as a magma ocean neither significantly increases nor decreases its planet's density, said Boley. Instead, the research reveals that scientists should focus on other terrestrial parameters, such as fluctuations in a planet's surface gravity, to test their theories about how hot lava worlds operate, especially if future researchers plan on using the data to aid in larger planetary studies. "This work, which is a combination of earth sciences and astronomy, basically opens up exciting new questions about lava worlds," said Boley. The study was supported by the National Science Foundation. Other co-authors are Wendy Panero, Joseph Schulze, Romy Martinez and Ji Wang, all from Ohio State, as well as Cayman Unterborn from the Southwest Research Institute.
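The study's headline figures are easy to sanity-check with back-of-envelope arithmetic. In the sketch below, the Earth-mass and ocean-mass constants are standard reference values assumed here, not numbers taken from the paper:

```python
EARTH_MASS_KG = 5.97e24   # standard reference value for Earth's mass
OCEAN_MASS_KG = 1.4e21    # approximate total mass of Earth's oceans

def f_to_c(f):
    """Convert degrees Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

# The simulated surface-temperature range quoted in the study.
low_c, high_c = f_to_c(2600), f_to_c(3860)
print(f"Melting range: {low_c:.0f} to {high_c:.0f} deg C")  # ~1427 to ~2127 C

# A 4-Earth-mass basal-magma planet trapping >130x Earth's ocean water.
planet_mass = 4 * EARTH_MASS_KG
trapped_water = 130 * OCEAN_MASS_KG
fraction = trapped_water / planet_mass
print(f"Trapped water: {trapped_water:.1e} kg, {fraction:.2%} of planet mass")
```

Under those assumptions, the trapped water amounts to well under one percent of the planet's mass, which is why it can hide in the mantle without noticeably changing the planet's bulk density.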
Chemistry and Material Sciences
The prodigious evidence for water on Mars has eliminated scientific debate about whether Mars had a watery past. It clearly did. But it has left us with an awkward question: what exactly did that past look like? Some results argue that there were long-lived oceans and lakes on Mars. Others argue that the water largely consisted of ice-covered bodies that only allowed water to burst out onto the surface on occasion. The picture is further confused by the fact that some or all of these may have been true at different times or in different locations. Creating a clear picture would help shape our understanding of an environment that might have been far more conducive to life than anything that exists on present-day Mars. A new paper describes evidence that at least one part of Mars went through many wet/dry cycles, which may be critical for the natural production of molecules essential to life on Earth, though they don't necessarily mean conditions in which life itself could thrive. Putting a hex on Mars The results come courtesy of Curiosity, the older of the two operational rovers on the planet, which is exploring a site called Gale Crater. About 3,000 Martian days into its exploration, the rover was at a site that dates to roughly 3.6 billion years ago, during Mars' relatively wet Hesperian period. And it came across what would be familiar to gamers as a hex grid: hundreds of hexagon-shaped rock deposits, each a few centimeters across and at least 10 centimeters deep. These features are small enough that they'd be easy to overlook as simply another collection of wind-swept debris on the red planet. But up close, they're striking: large collections of hexagons that share sides, creating a regular grid. While there's some irregularity, the lines separating them largely form three-way intersections with equal angles between each line.
And, in places where erosion has had different effects on nearby instances, it's clear that individual hexagons are at least 10 centimeters in height. Similar shapes have been seen on Pluto, formed by convection of an icy surface. But those are far, far larger, detectable from a considerable distance from Pluto. The tiny size of the hexes on Mars is completely incompatible with convection. Instead, they have to be the product of mud drying out, creating cracks as the material contracts. The water itself could come either externally, in the form of a flood, or via groundwater that soaks up to the surface. But again, the tiny size of these features is decisive, indicating that only the top few centimeters got wet, which is incompatible with a groundwater source. Forming the regular, hexagonal shapes also requires repeated cycles: experiments show that at least a dozen wet/dry cycles are needed before the junctions start to settle into equal angles. So, simply based on their shape, it appears that these hexagons are the product of repeated flooding. The chemistry backs this up. The rocks in the lines that separate individual hexagons are largely a mixture of calcium and magnesium sulfates, which readily precipitate out of water as conditions get drier. These deposits form harder rocks than the dried mud that comprises the bulk of the hexagons.
Chemistry and Material Sciences
The first fossil evidence pointing to an environment that once could have hosted life on Mars has been reported by scientists, who say the discovery of hexagonal patterns in ancient Martian mud suggests it once had a cyclical climate with seasons similar to those on Earth. The detection of distinctive hexagonal patterns on a portion of the Martian terrain has been described as a “major discovery”, made possible with data obtained by NASA’s Curiosity rover. The new findings, along with the previous detection of simple organic molecules, offer mounting evidence that the Red Planet once had conditions ideal for the formation of more complex compounds, and possibly even forms of life. The research that led to the discovery, undertaken by scientists at France’s CNRS with the additional participation of the French space agency, CNES, recently appeared in the journal Nature. Since interactions between plates that form the outer lithospheric part of Earth, a process known as plate tectonics, do not occur on Mars, large portions of the Red Planet’s terrain that include ancient fossil rivers and lakes have remained well preserved on its surface. These regions of Mars were first explored by Curiosity beginning in 2012, leading to the initial detection of the presence of simple organic molecules. Although geological processes can lead to their creation, the presence of these molecules could also indicate that life forms were once present. However, for that to be possible, an environment conducive to their formation would also be required. Now, according to the French research team with CNRS and CNES and their U.S. and Canadian partners, the latest evidence seems to indicate that such an environment did indeed exist, and that life on Mars could have once also existed long ago. 
The discoveries, made possible with Curiosity's Mastcam and ChemCam instruments, came through the detection of salt deposits that formed a series of hexagonal patterns in layers of Martian sediment dating back to between 3.6 and 3.8 billion years ago. Comprising a pair of camera systems mounted on the rover's head, Mastcam is designed to capture detailed images of the Martian terrain, whereas ChemCam combines a camera with several remote-sensing capabilities that allow it to investigate the soil and rock samples it encounters. Together, the two instruments provide a wealth of data about the planet's rich geology, as well as conditions on Mars that may once have been suitable for life. William Rapin of France's Institut de Recherche en Astrophysique et Planétologie, the lead author of the new paper describing the discovery, says the formation of mud cracks like the ones found by Curiosity is a clear indication of seasonal change, where wet and dry conditions occur repeatedly. Rapin and the research team say these salt deposits are the first direct fossil evidence of a cyclical Martian climate in which both wet and dry seasons occurred in its ancient past, although previous research had already suggested that such seasonal changes would be capable of providing conditions suited to the formation of the complex compounds that form the building blocks of life.
“This is the first tangible evidence we’ve seen that the ancient climate of Mars had such regular, Earth-like wet-dry cycles,” Rapin said in a statement, emphasizing that such cycles may even be a requirement “for the molecular evolution that could lead to life.” Based on information the team has obtained, imagery obtained through satellite reconnaissance of the surface of Mars should help scientists spot other regions with similar terrain to the location where these salt deposits were found, which may also be good candidates for the discovery of evidence of ancient life forms. In the future, additional studies of the largely uninterrupted Martian terrain may help reveal more clues about what conditions could have led to the formation of life on Mars, information that will also help scientists better understand how such processes unfolded in Earth’s ancient past. “It’s pretty lucky of us to have a planet like Mars nearby,” Rapin says, “that still holds a memory of the natural processes which may have led to life.”
Chemistry and Material Sciences
Last week, SpaceX made headlines again after a Falcon 9 rocket blasted a hole through the atmosphere, leaving a blood-red, pink-tinged trail across the night sky. The bleeding hole left by the rocket was captured in several amazing images, including one spectacular shot from photographer Jeremy Perez, who shared it on his Instagram. Perez captured the shot of the SpaceX atmosphere hole and the reddish light trail it created from the San Francisco Volcanic Field, located north of Flagstaff, Arizona. The trail was only visible from some points in California and Arizona, but it was quite spectacular to behold, according to Perez. "After the rocket passed overhead, a red fluorescent glow expanded south and crossed over with the Milky Way in the sky," Perez told Spaceweather.com. Perez captured several shots of the glow, which some have described as a bleeding hole in the atmosphere. The glow lasted just around 20 minutes. The reason for this glow, and for similar glows seen in the past, is the holes that rockets often punch through the ionosphere. These holes aren't dangerous to us here on the surface, but the way the exhaust gases interact with solar radiation beyond the ionosphere causes the glowing effect, sometimes even seen as a whirlpool in the sky. These effects have been known about since at least 2005, when a Titan rocket first triggered what scientists refer to as "severe ionospheric perturbations". These "perturbations" are essentially the same as a minor geomagnetic storm hitting the ionosphere. These light shows are only likely to become more common, too, as more private companies continue to launch rockets into space. So the next time you see a strange glow of light across the sky, it's most likely the result of a SpaceX (or another company's) rocket piercing a hole in our atmosphere and its exhaust interacting with solar radiation.
Chemistry and Material Sciences
It has been billed as NASA's 'asteroid autumn' and involves a trio of exciting missions that could answer some truly mind-boggling questions. From offering clues to how life on Earth began, to unlocking the secrets of the solar system, key milestones for each voyage are due to play out over the next six weeks. They include one rocket launch, a distant fly-by between Jupiter and Mars, and the recovery of ancient space rocks in the Utah desert that could contain the ingredients for life. Of the three, the lift-off of NASA's Psyche spacecraft probably sounds the most mundane, but not so fast. That is also a fascinating mission, because it is setting off on a 2.5 billion-mile (4 billion-kilometre) journey to find out once and for all if a metal-rich asteroid really could bring down the world's economy. Psyche is due to lift off on October 5 on its way to the asteroid 16 Psyche, which scientists think may be packed full of precious metals with a value in excess of $10,000 quadrillion (£8,072 quadrillion). Others dispute this and say it is just hard rock, but NASA should be able to tell either way when its orbiter gets to the asteroid in July 2029. One thing it definitely won't look like is anything resembling what you might find in a jewellery store, however. 'I would love for it to look like a shiny, polished, what's called a pallasite meteorite — with the shiny silver metal and the beautiful gold and green jewel-like silicate minerals in between,' said Lindy Elkins-Tanton, the mission's principal investigator. 'But it's not going to look like that. Because no one's been onto Psyche, cut it open and polished it.' She added: 'It's been hanging out in space, getting solar wind hitting it for a really, really long time. So the surfaces are not likely to be shiny.' If 16 Psyche is in fact loaded with precious metals, it could be worth a huge amount of money, according to Elkins-Tanton.
She has calculated that the iron in 16 Psyche alone would be worth $10,000 quadrillion (£8,072 quadrillion). Assuming the market for asteroid materials is on Earth, this could cause the value of precious metals to plummet, completely devaluing all holdings including those of governments, and all companies involved in mining, distributing and trading such commodities. Ultimately, it could lead to the collapse of the entire economy. Of course it's all speculative and hypothetical, because even if the space rock was worth anywhere close to that kind of money, it's not like it could easily be brought back to Earth. Engineers in California are currently getting the spacecraft ready to journey to 16 Psyche, which sits in the main asteroid belt between Mars and Jupiter. The orbiter is due to piggy-back into space on a SpaceX Falcon Heavy rocket which is scheduled to lift off from Pad 39A at Kennedy Space Center in Florida no earlier than 10:38 ET (15:38 BST) on October 5. A separate team of NASA scientists is also gearing up to recover a capsule containing fragments of the asteroid Bennu. The 8.8oz (250g) sample of rock and dust will be dropped back to Earth by the OSIRIS-REx spacecraft and is due to land in Utah's western desert on September 24. An acronym for 'Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer', OSIRIS-REx became the first US mission to collect a sample from an asteroid when it briefly touched down on Bennu in October 2020 and scooped up material with its robotic arm. The space rock is about the size of the Empire State Building and around 200 million miles away from us. It is of great interest to scientists because it is believed to contain microscopic mineral grains that pre-date the solar system. This carbon-rich material is also thought to hold the preserved building blocks of life and could help experts better understand how the planets were formed and how life originated on Earth. 
The sample that OSIRIS-REx recovered is the largest by a NASA mission since the Apollo astronauts returned with moon rocks in the 1960s and 70s. It will be brought back to Earth and land at the Utah Test and Training Range in just over a week's time. The precious rocks and dust will then be housed at a new lab at NASA's Johnson Space Center in Houston, which is home to hundreds of pounds of lunar material collected by the 12 Apollo moonwalkers. 3. Fly-by on the way to Jupiter's Trojans Meanwhile, a spacecraft named Lucy – launched in October 2021 to study eight asteroids that orbit Jupiter – will pass its first space rock, called Dinkinesh, on November 1. During its 12-year journey the probe will be the first to tour the so-called Trojan asteroids, which are seen as 'time capsules from the birth of our solar system'. Although Lucy will get its first close-up look at a space rock in six weeks' time, its maiden Trojan asteroid flyby will not happen for another four years, when the spacecraft gets closer to Jupiter in August 2027. The vast majority of the asteroid visits will happen in 2027 and 2028, with a final asteroid flyby scheduled for March 2033. Before the Trojans, in April 2025, Lucy will visit a space rock in the main asteroid belt beyond Mars known as Donaldjohanson. This will be followed by a trip to seven Trojan asteroids that are named after characters from Greek mythology. They are Eurybates, Queta, Polymele, Leucus, Orus, Patroclus and Menoetius. All of the Trojans are thought to be abundant in dark carbon compounds. Below an insulating blanket of dust, they are probably rich in water and other volatile substances, dating back to the first days of the solar system. No other space mission in history has been launched to as many different destinations in independent orbits around our sun. Lucy will show us, for the first time, the diversity of the primordial bodies that built the planets.
NASA has said the $981 million (£715 million) mission will revolutionise our knowledge of planetary origins and the formation of the solar system, giving insight into planetary evolution. It takes its name from the fossilised human ancestor, named 'Lucy' by her discoverers, whose skeleton provided unique insights into our evolution. As well as viewing some of the oldest rocks in the solar system, Lucy's path will cross the Earth three times, as it uses our planet's gravity to aid in its positioning. This move will make it the first ever spacecraft to return to Earth from the outer solar system, as all the others are either still going – in the case of the Voyager probes – or burnt up in the atmosphere of a gas giant, as was the case with Cassini at Saturn.
Chemistry and Material Sciences
A US government agency on Tuesday named four technologies it expects will keep computer data secret when quantum computers mature enough to crack today's encryption technology. Scientists have shown that quantum computers will be able to break mainstream encryption technology if today's progress in the field is sustained long enough. The National Institute of Standards and Technology (NIST) has overseen a hunt to design and test post-quantum cryptography technology to protect that data. Of the four technologies NIST picked, two are expected to be more widely used. One, called CRYSTALS-Kyber, is for establishing the digital keys that two computers need to share encrypted data, and the other, CRYSTALS-Dilithium, is for signing encrypted data to establish who sent it. It'll likely take two years for the approaches to be standardized enough for incorporation into today's software and hardware. Quantum computers have been steadily progressing, but it'll likely still take years of work to create machines that are reliable and powerful enough to crack encryption. But it's still an urgent issue. It takes years to find new encryption methods, ensure they're safe and install them widely. And government agencies and hackers can harvest today's potentially sensitive information with the expectation they'll be able to crack it later, when it'll still be valuable. "We believe 10 to 15 years is a commonly held viewpoint on the timescales for attack," said Duncan Jones, head of cybersecurity for quantum computer hardware and software maker Quantinuum. "But with the possibility of 'hack now, decrypt later,' the attacks may have already begun." Quantum computers could also undermine cryptocurrencies, which likewise use today's cryptography technology.
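Kyber is a key encapsulation mechanism (KEM): one side encapsulates a fresh shared secret against the other's public key, and the holder of the matching secret key decapsulates it. The toy below shows only the shape of that three-call interface. It is deliberately insecure (the "public" key equals the secret key) and bears no resemblance to Kyber's lattice mathematics; all names are illustrative.

```python
import hashlib
import secrets

# INSECURE toy illustrating the KEM call flow only (keygen/encaps/decaps).
# A real KEM derives the public key from the secret key via a one-way map;
# here pk == sk, so this provides no security whatsoever.

def keygen():
    sk = secrets.token_bytes(32)
    pk = sk  # toy shortcut, see comment above
    return pk, sk

def encapsulate(pk):
    m = secrets.token_bytes(32)                   # random session seed
    pad = hashlib.sha256(b"pad" + pk).digest()
    ct = bytes(a ^ b for a, b in zip(m, pad))     # "ciphertext" sent over
    shared = hashlib.sha256(b"ss" + m).digest()   # sender's shared secret
    return ct, shared

def decapsulate(sk, ct):
    pad = hashlib.sha256(b"pad" + sk).digest()
    m = bytes(a ^ b for a, b in zip(ct, pad))     # recover the session seed
    return hashlib.sha256(b"ss" + m).digest()     # receiver's shared secret

pk, sk = keygen()
ct, ss_sender = encapsulate(pk)
ss_receiver = decapsulate(sk, ct)
print(ss_sender == ss_receiver)  # both sides now hold the same secret
```

The appeal of the KEM shape is that only `ct` crosses the wire; the shared secret itself is never transmitted, and in a real scheme recovering it from `ct` and `pk` is the hard problem an attacker must solve.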
Computer Hardware
Canadian team discovers power-draining flaw in most laptop and phone batteries Breakthrough explains major cause of self-discharging batteries and points to easy solution The phone, tablet or laptop you're reading this on is likely having its battery slowly drained because of a surprising and widespread manufacturing flaw, according to researchers in Halifax. "This is something that is totally unexpected and something that probably no one thought of," said Michael Metzger, an assistant professor at Dalhousie University. The problem? Tiny pieces of tape that hold the battery components together are made from the wrong type of plastic. Batteries release power because of a chemical reaction. Inside each battery cell, there are two types of metal. One acts as a positive electrode and one as a negative electrode. These electrodes are held in an electrolyte fluid or paste that is often a form of lithium. When you connect cables to each end of the battery, electrons flow through the cables — providing power to light bulbs, laptops, or whatever else is on the circuit — and return to the battery. Trouble starts if those electrons don't follow the cables. When electrons move from one charged side of the battery to the other through the electrolyte fluid, it's called self-discharge. The battery is being depleted internally without sending out electrical current. This is the reason why devices that are fully charged can slowly lose their charge while they're turned off. "These days, batteries are very good," Metzger said. "But, like with any product, you want it perfected. And you want to eliminate even small rates of self-discharge." Stress-testing batteries In the search for the perfect battery, researchers have to watch how each one performs over its full lifespan. "We do a lot of our tests at elevated temperatures these days. We want to be able to do testing in reasonable time frames," Metzger said. Heat makes a battery degrade more quickly, he explained. 
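The accelerated testing Metzger describes leans on the Arrhenius relation: thermally activated reaction rates grow exponentially with temperature. A minimal sketch, assuming an illustrative activation energy of 0.5 eV (the real value depends on the battery chemistry in question):

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_factor(t_low_c, t_high_c, ea_ev):
    """How much faster a thermally activated process runs at t_high vs t_low."""
    t1 = t_low_c + 273.15
    t2 = t_high_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1 / t1 - 1 / t2))

# Assumed activation energy of 0.5 eV, illustrative only.
factor = arrhenius_factor(25, 85, 0.5)
print(f"~{factor:.0f}x faster at 85 C than at 25 C")
```

Under that assumption, a test at 85 C runs the degradation chemistry roughly 26 times faster than room temperature, which is why elevated-temperature testing fits into "reasonable time frames."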
At Dalhousie University's battery lab, dozens of experimental battery cells are being charged and discharged again and again, in environments as hot as 85 C. For comparison, eggs fry at around 70 C. If researchers can learn why a battery eventually fails, they can tweak the positive electrode, negative electrode, or electrolyte fluid.

Seeing red
During one of these tests, the clear electrolyte fluid turned bright red. The team was puzzled. It isn't supposed to do that, according to Metzger. "A battery's a closed system," he said. Something new had been created inside the battery. They did a chemical analysis of the red substance and found it was dimethyl terephthalate (DMT). It's a substance that shuttles electrons within the battery, rather than having them flow outside through cables and generate electricity. Shuttling electrons internally depletes the battery's charge, even if it isn't connected to a circuit or electrical device. But if a battery is sealed by the manufacturer, where did the DMT come from? Through the chemical analysis, the team realized that DMT has a similar structure to another molecule: polyethylene terephthalate (PET). PET is a type of plastic used in household items like water bottles, food containers and synthetic carpets. But what was plastic doing inside the battery?

Tale of the tape
Piece by piece, the team analyzed the battery components. They realized that the thin strips of metal and insulation coiled tightly inside the casing were held together with tape. Those small segments of tape were made of PET — the type of plastic that had been causing the electrolyte fluid to turn red, and self-discharge the battery. "A lot of companies use PET tape," said Metzger. "That's why it was a quite important discovery, this realization that this tape is actually not inert."

Tech industry takes notice
Metzger and the team began sharing their discovery publicly in November 2022, in publications and at seminars.
Some of the world's largest computer-hardware companies and electric-vehicle manufacturers were very interested. "A lot of the companies made clear that this is very relevant to them," Metzger said. "They want to make changes to these components in their battery cells because, of course, they want to avoid self-discharge." The team even proposed a solution to the problem: use a slightly more expensive, but also more stable, plastic compound. One option is polypropylene, which is typically used to make more durable plastic items like outdoor furniture or reusable water bottles. "We realized that it [polypropylene] doesn't easily decompose like PET, and doesn't form these unwanted molecules," Metzger said. "So currently, we have very encouraging results that the self-discharges are truly eliminated by moving away from this PET tape."
Computer Hardware
According to cybersecurity experts from Eclypsium, computer hardware manufacturer Gigabyte installed a backdoor in the firmware of its motherboards, putting 271 motherboard models at risk of being hacked. The lengthy list of affected models features nearly every motherboard Gigabyte has put out in recent years, including the latest Z790 and X670 units. As Eclypsium’s blog explains, Gigabyte embedded a Windows executable into the firmware of its motherboards that runs when the computer boots up. In other words, every time you reboot your computer, code in the motherboard’s firmware initiates Gigabyte’s app center, which downloads and runs an executable payload from the internet. “The firmware does not implement any cryptographic digital signature verification or any other validation over the executables,” Eclypsium warns. “The dropped executable and the normally-downloaded Gigabyte tools do have a Gigabyte cryptographic signature that satisfies the code signing requirements of Microsoft Windows, but this does little to offset malicious use […] As a result, any threat actor can use this to persistently infect vulnerable systems either via MITM (machine-in-the-middle attacks) or compromised infrastructure.” If you aren’t sure which motherboard your PC has, you can check by going to Start > Windows Tools > System Information. Look for “BaseBoard Manufacturer” and “BaseBoard Product.” If the product you see is on the list, you might want to take action. Here are a few recommendations from Eclypsium to minimize risk:
- Scan and monitor systems and firmware updates in order to detect affected Gigabyte systems and the backdoor-like tools embedded in firmware.
- Update systems to the latest validated firmware and software in order to address security issues like this one.
- Inspect and disable the “APP Center Download & Install” feature in UEFI/BIOS Setup on Gigabyte systems and set a BIOS password to deter malicious changes.
- Administrators can also block the following URLs:

Eclypsium is currently working with Gigabyte to address this backdoor implementation.
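The manual check described above (reading "BaseBoard Product" from System Information and comparing it against the affected list) is easy to script. This is a minimal sketch: the model names below are a hypothetical excerpt standing in for Eclypsium's full list of 271 entries.

```python
# Hypothetical excerpt of the affected-model list (the real list has 271 entries).
AFFECTED_MODELS = {
    "Z790 AORUS ELITE AX",
    "X670 GAMING X AX",
    "B650 AORUS ELITE",
}

def is_affected(baseboard_product: str) -> bool:
    """Case-insensitive check of a BaseBoard Product string against the list."""
    return baseboard_product.strip().upper() in AFFECTED_MODELS

print(is_affected("z790 aorus elite ax"))  # True
print(is_affected("ROG STRIX B550-F"))     # False
```

On a fleet of Windows machines, an administrator could feed this function the BaseBoard Product strings collected from inventory tooling to flag boards that need the BIOS setting disabled.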
Computer Hardware
The Intel Core i9-12900K processor already boasts 16 cores and 24 threads, but the 13th gen version promises to increase that to 24 and 32. (Photo: Sarah Jacobsson Purewal/Gizmodo)

Intel accidentally dropped details for its upcoming slate of desktop processors on its Canadian site, and though the company was quick to pull the specs, the Internet Archive has your back for those looking to see the original version. The Intel Core i5-13600K’s looking at 14 cores and 20 threads while pushing up to 5.1 GHz max frequency. The i7-13700K is 16 cores to 24 threads and 5.3 GHz, while the i9-13900K is rocking 24 cores, 32 threads and a 5.4 GHz frequency. With Turbo Boost Max 3.0 or Thermal Velocity Boost, assuming you have the appropriate power and cooling, you’ll be able to push even higher clock speeds. These numbers aren’t too surprising, but they do help confirm earlier leaks. In leaked slides shown on igor’sLAB, the 13th gen CPUs previously seemed to have scaled cores and threads compared to the 12th gen. Intel has promised that, along with the speeds shown here, it’s also increased the L2 cache and L3 Intel Smart Cache on i5K processors and above. Of course, the new chipsets will support CPU PCIe 5.0 interfaces up to 16 lanes and continue facilitating DDR5 and DDR4 RAM, though they will now go up to DDR5-5600 compared to the 12th gen’s limit of DDR5-4800. The previously leaked SKU chart slide gives a good rundown of what’s going on here. Beyond the eye-catching boasts of increased cores and threads, there isn’t a big killer feature here over what’s on the current 12th gen Alder Lake series, which made waves by introducing DDR5 support. In our own review of the 12th gen chips, we already felt Alder Lake was pretty damned future-proof, supporting PCIe Gen5 and Thunderbolt 4.
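The leaked figures imply a straightforward generational uplift, which is easy to tabulate. The numbers below are only the ones quoted in the leak, plus the i9-12900K's 16 cores/24 threads for comparison:

```python
# Leaked 13th-gen specs: (P+E cores, threads, max boost GHz).
raptor_lake = {
    "i5-13600K": (14, 20, 5.1),
    "i7-13700K": (16, 24, 5.3),
    "i9-13900K": (24, 32, 5.4),
}

# 12th-gen flagship (cores, threads) for comparison.
alder_i9 = (16, 24)

cores, threads, _ = raptor_lake["i9-13900K"]
print(f"core uplift:   {cores / alder_i9[0]:.0%}")    # 150%
print(f"thread uplift: {threads / alder_i9[1]:.1%}")  # 133.3%
```

In other words, the flagship gains 50% more cores and a third more threads over its predecessor, with only a modest bump in peak boost clock.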
And considering you’ll need to drop between $280 to $590 for any of the new mid-to-high end CPUs, these hybrid chips are going to be a big ask for many users out there.And, of course, AMD has already rolled up to the corner, putting Intel on blast with its own upcoming Zen 4 CPU lineup, available at the end of this month. AMD’s claiming that its 7000 series chips will also allow boost clocks of over 5 GHz to match Intel, as well as support for PCIe5 and DDR5. It’s hard to judge their speeds versus Intel CPUs just based on cores and clock speeds, as we’re only working with pre-release material. What’s most important is pricing, with the highest end Ryzen 9 7950X costing $699 at launch. A top-of-the-line Core i9-12900K runs over $100 cheaper, so eyes are on Intel for how it prices Raptor Lake.AMD also won’t have a 7800X CPU at launch, Sept. 27, which means that when it does eventually put out its mid-range CPU version, it’ll likely compete more with Intel’s 13th gen then it will with the 12th gen. And since you’ll likely need to upgrade your motherboard to support this new slate of gaming-ready CPUs, users will have an even bigger decision to make.There’s some argument for further future-proofing your computer hardware, but we still don’t know pricing for those upcoming 13th gen processors or their release window. That, in my mind, is going to be the main deciding factor in whether there’s any reason to hold off upgrading if you haven’t already gone for a 12th gen and you’re absolutely set on Intel CPUs. And if you already have dropped several Benjamins on new chips, I have serious doubts that digging deeper into your wallet for the 13th gen will result in the kind of performance gains that demand spending even more right away.
Computer Hardware
Europe's right-to-repair law asks hardware makers for fixes for up to 10 years
Smartphones and tablets would also be added to the EU's list of devices that must be repairable under new rule

The European Commission has adopted a new set of right to repair rules that, among other things, will add electronic devices like smartphones and tablets to a list of goods that must be built with repairability in mind. The new rules [PDF] will need to be negotiated between the European Parliament and member states before they can be turned into law. If they are, a lot more than just repairability requirements will change. One provision will require companies selling consumer goods in the EU to offer repairs (as opposed to just replacing a damaged device) free of charge within a legal guarantee period unless it would be cheaper to replace a damaged item. Beyond that, the directive also adds a set of rights for device repairability outside of legal guarantee periods that the EC said will help make repair a better option than simply tossing a damaged product away. Under the new post-guarantee period rule, companies that produce goods the EU defines as subject to repairability requirements (eg, appliances, commercial computer hardware, and soon cellphones and tablets) are obliged to repair such items for five to 10 years after purchase if a customer demands so, and the repair is possible. OEMs will also need to inform consumers about which products they are liable to repair, and consumers will be able to request a new Repair Information Form from anyone doing a repair that makes pricing and fees more transparent. The post-guarantee period repair rule also establishes the creation of an online "repair matchmaking platform" for EU consumers, and calls for the creation of a European repair standard that will "help consumers identify repairers who commit to a higher quality."
"Repair is key to ending the model of 'take, make, break, and throw away' that is so harmful to our planet, our health and our economy," said Frans Timmermans, EVP for the European Green Deal, which aims to make the whole of the EU carbon neutral by 2050. On that note, the EC proposed a set of anti-greenwashing laws alongside passing its right to repair rule yesterday that would make it illegal to make any green claims about a product without evidence. Citing the fact that 94 percent of Europeans believe protecting the environment is important, the EC said its proposal covers any explicit, voluntarily-made claims "which relate to the environmental impact, aspect, or performance of a product or the trader itself." Any such claims, like a laptop being made from recycled plastic, would need to be independently verified and proven with scientific evidence, the EC said.

New rules don't do enough, say right to repair advocates
The Right to Repair coalition said in a statement that, while it welcomes the step forward taken by the EU's new repairability rules, "the opportunity to make the right to repair universal is missed." While the EC's rules focus on cutting down on waste by making products more easily repairable, they don't do anything to address repair affordability or anti-repair practices, R2R said. Spare parts and repair charges, the group argues, could still be exorbitantly priced and inaccessible to the average consumer. "We need a truly universal right to repair including independent providers and granting universal access to affordable spare parts, repair manuals and diagnostic tools," R2R Europe coordinator Cristina Ganapini said.
Ganapini said that truly universal right to repair laws would include assurances that independent providers were available to conduct repairs, and that components, manuals and diagnostic tools would be affordably priced. She also said that, even with the addition of smartphones and tablets to repairability requirements, the products it applies to is still too narrow. "We call on the EU Parliament and Council to step up the ambition of this first right to repair proposal in the EU," Ganapini said. ®
Computer Hardware
By Meghan Malas, July 11, 2022, 1:57 PM

A master’s degree in computer science can be the gateway into one of the most well-compensated, high-demand careers today. Computer and research scientist occupations are expected to grow 22% from 2020 to 2030 and had a median pay of more than $131,000 in 2021, according to the Bureau of Labor Statistics. People who gravitate toward graduate computer science programs may be looking to update or upgrade their current technical skillset—or they may be looking to pivot their careers. Regardless, there is plenty of room for more of these professionals across all sectors. But what types of jobs and opportunities are available to master’s in computer science graduates? Where are these professionals most needed, and why? To find out more about the demand for computer science-educated workers, Fortune spoke to Indrajit Ray, the associate chair of the department of computer science at Colorado State University. The online master’s in computer science program at Colorado State landed the No. 2 spot on Fortune‘s first-ever ranking of the best online master’s degree programs in computer science.

How computer science programs can help companies find more data-savvy workers
“Our current society is significantly data-driven,” Ray says. “What has changed within the last decade is the ability to rapidly process that data and make sense out of the complex hidden features in that data—and in the process create more and more data.” It’s no secret that companies are seeking to become more data-driven in their business strategies. With computer science, better hardware and algorithms make this possible, resulting in high demand for computer science graduate degree holders.
Jobs that require collaboration with domain experts from other disciplines to solve complex problems are typically the most common jobs for graduates with a master’s degree in computer science, Ray notes. This includes areas like business supply chain management, scheduling and operations, healthcare, finance, information management, and more. Additionally, there are roles for computer science degree-holders who are involved more directly in developing computer hardware, networks, cybersecurity technology, databases, operating systems, high-performance computing, programming languages, and application software design.

As computing and technology evolve, how master’s degree programs in computer science can help
“Based on my own experience in our program as well as anecdotal evidence from conversations with colleagues from other universities, receiving a master’s degree in computer science—and related disciplines such as computer engineering, information technology or computer and information science—is a common way to pivot careers,” Ray says. This makes sense given the demand and high pay associated with these occupations. That being said, many of the students in Colorado State’s master’s degree program in computer science are mid-career professionals who have a bachelor’s degree in computer science and 10 or more years of applicable experience. These students need to enhance their depth of knowledge, or acquire new skills in emerging areas such as cybersecurity, artificial intelligence, and machine learning to remain relevant in their jobs, Ray says. Grads with a master’s degree in computer science have more opportunities than those with just undergraduate degrees in the field. For example, computer and information research scientists are required to have a master’s degree, according to the Bureau of Labor Statistics. This is common for data scientist positions, as well, which are also in high demand.
People with an undergraduate degree may have fewer job opportunities because many roles demand advanced knowledge in traditional computer science areas like computing theory, algorithms, programming languages, computer hardware and networks, and operating systems. Job candidates are also required to know rapidly emerging areas like cybersecurity, cloud computing, artificial intelligence, and machine learning, Ray tells Fortune.

Cybersecurity and data roles are in high demand for grads with a master’s degree in computer science
Ray describes the overall contribution of computer scientists as two-fold: People with a master’s degree in computer science bring in technical know-how and skills to identify engineering, business, and societal problems that pose novel challenges in computing—and then they develop the theories, models, and solutions to address those challenges. “Currently, two types of positions are in very high demand and expected to remain so over the next several years,” Ray says. “These are jobs that require expertise in cybersecurity and those requiring expertise in data management, data mining, and data analytics.” In the next three years, 463 exabytes of data will be created each day globally, estimates Raconteur. While this means great potential for powerful solutions and insight across countless areas, this exponential growth in data also means more security and privacy risks and challenges. “Unfortunately, industries have been very slow to respond because to a significant majority security is an afterthought,” Ray says. “Things are changing, which is good—this sector is expected to grow much faster than average.”
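Raconteur's 463-exabytes-per-day estimate is easier to grasp as a per-second rate. A quick conversion, assuming decimal units (1 EB = 1,000 PB):

```python
EB_PER_DAY = 463
SECONDS_PER_DAY = 24 * 60 * 60           # 86,400

# Convert exabytes/day to petabytes/second.
pb_per_second = EB_PER_DAY * 1_000 / SECONDS_PER_DAY
print(round(pb_per_second, 2))           # 5.36 — over five petabytes every second
```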
Computer Hardware
Backblaze is back again with a new SSD report for Q2 2022, showcasing the lifespan of its SSD boot drives powering its backup servers since 2018. This time, Backblaze's update confirms that SSDs are more reliable than hard drives. Backblaze began recording SSD lifespans when it started switching from HDD boot drives to SSD boot drives in its backup servers. However, due to the newness of the drives, it has taken five years for the company to begin seeing any noteworthy differences in overall average SSD lifespan behavior compared to HDD boot drives of the same age. From 2018 through 2021, average SSD lifespan records are nearly identical to hard drive failure rates. Year 1 by far shows the lowest failure rates of all, with under a 0.66% failure rate for HDDs and no failures at all for SSDs. Year 2 steps things up in an almost perfectly linear fashion, with HDDs and SSDs seeing a roughly 0.85% increase in failure rates. This linear increase in failures repeats in year 3 and year 4 as well, with both SSDs and HDDs showing similar failure rate curves — though with SSDs being lower overall. At year 4, HDDs are hovering around the 1.8% mark while SSDs have barely passed the 1% mark. And then things appear to diverge. The charts start to change drastically in favor of SSDs at year 5. HDD boot drive lifespan results take a drastic uptick in failure rates, jumping from just under 2% to 3.6% within a year. Meanwhile, Backblaze's SSDs head in the complete opposite direction, declining from 1.05% to an impressive 0.92% average. That translates into a 3x reliability improvement for SSDs over hard drives.
While this isn't that surprising, given SSDs have no moving parts, it is good to finally have some hard details based on thousands of drives showing that average SSD life expectancy will far outweigh hard drives, especially once storage starts to get old. It's still possible for other factors like excessive writes or poorly designed firmware and controllers to cause earlier SSD deaths. However, such exceptions only affect a small number of people. Backblaze concludes its report by stating, "At this point we can reasonably claim that SSDs are more reliable than HDDs, at least when used as boot drives in our environment. This supports the anecdotal stories and educated guesses made by our readers over the past year or so. Well done." With five years of data, SSDs are doing very well overall. It will be interesting to see what happens as the drives move into the six, seven, and eight years old range. Aaron Klotz is a freelance writer for Tom’s Hardware US, covering news topics related to computer hardware such as CPUs, and graphics cards.
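Backblaze's percentages are annualized failure rates (AFR), which normalize failures by total drive-days in service so that fleets of different sizes and ages can be compared. A minimal sketch of the calculation, with made-up numbers rather than Backblaze's raw data:

```python
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """AFR as a percentage: failures per drive-year of service."""
    return failures / drive_days * 365 * 100

# e.g. 10 failures across 1,000 drives running for a full year
# (1,000 drives x 365 days = 365,000 drive-days):
print(round(annualized_failure_rate(10, 365_000), 2))  # 1.0
```

Using drive-days rather than a simple drive count is what lets a quarterly report speak about yearly reliability even when drives enter and leave the fleet mid-period.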
Computer Hardware
Igor's Lab recently published an article discussing Nvidia's melting 12VHPWR 16-pin power connectors on the RTX 4090 and why they are melting. Igor states that the melting is caused by Nvidia's poorly built quadruple 8-pin to 16-pin power adapter, not the 16-pin connection standard itself. He says Nvidia's adapter is badly engineered, to the point where it risks damaging the 16-pin connection. Igor discovered that the thick wires coming from all four 8-pin connectors are wired directly to the six 12V pins on the 16-pin adapter (with the rest being ground pins). That in itself isn't a huge concern, but he says this was done via a very bad soldering job. The solder is incredibly small, and Igor says the base is "...a mere 0.2mm of thin copper with a width of 2mm per incoming wire..." This means Nvidia's soldering contacts could be way too small to keep the 8-pin wires connected even under normal bends, which could lead to spotty connections between the wires. If you look at Igor's images, you can tell right off the bat that the soldering job looks poor. This was obviously outsourced to some other company, but regardless, the soldering job looks very messy. Igor goes on to say that just lifting off the enveloping layer of the solder causes the connection to tear immediately. Based on these findings, we can't help but question the continued use of the included Nvidia 'quadropus' adapters with RTX 4090 cards. If possible, you should replace the adapter with a proper 16-pin cable that's designed to plug directly into your PSU. Barring that, a third-party adapter made to higher standards should suffice. The default adapter — which, based on our testing of RTX 4090 cards, is part of the package AIB vendors get from Nvidia along with the GPU and GDDR6X memory — appears to be a primary factor in the issues at hand.
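Igor's 0.2 mm x 2 mm figure makes it easy to see why the joints are marginal. The sketch below assumes a 600 W transient load at 12 V split evenly across the six 12V pins; the load figure and the even-split assumption are ours for illustration, not Igor's measurements:

```python
# Cross-section of one solder base, per Igor's Lab: 0.2 mm thick, 2 mm wide.
area_mm2 = 0.2 * 2.0                 # 0.4 mm^2

# Assumed worst-case transient draw, split evenly across the six 12V pins.
watts, volts, pins = 600.0, 12.0, 6
amps_per_pin = watts / volts / pins  # ~8.33 A per 12V pin

print(round(amps_per_pin / area_mm2, 1))  # ~20.8 A/mm^2 through each tiny joint
```

Any bend that partially tears a joint shrinks that cross-section further, concentrating the same current through even less copper — which is consistent with the localized heating and melting Igor describes.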
If that's not an option, avoid any and all bends near the 16-pin connector on Nvidia's 8-pin adapter. You'll probably have to take the side panel off your case, but it's better to have an ugly computer than a damaged $1600 GPU. Igor says Nvidia has already been informed about these issues regarding its 8-pin to 16-pin power adapter, so we should see an announcement by Nvidia regarding a fix for the issue. We suspect Nvidia will likely issue a recall and replace the original adapters with a new version that's much more reliable. After all, any component is only as strong as the weakest link. An over-engineered cooling solution on an RTX 4090 card will do you no good if it has to use a shoddy power adapter.
Computer Hardware
Taiwanese computer hardware giant MSI (Micro-Star International) has reportedly been hacked and added to the list of victims of a new ransomware gang called "Money Message". The cybercriminals claim to have stolen source code and other sensitive information from the company's network. MSI is a leading global manufacturer of computer components, including motherboards, graphics cards, desktops, laptops, servers, and other devices. Its annual revenue is over $6.5 billion. Money Message has listed MSI on its data leak website and posted screenshots of the company's CTMS and ERP databases, along with files containing software source code, private keys, and BIOS firmware. The threat actors are now threatening to publish all the stolen data unless MSI agrees to pay their ransom demand.

[Screenshot: list of files the hackers claim to have stolen]

The attackers claim to have stolen 1.5TB of data from MSI's systems, including source code and databases. They are demanding a ransom payment of $4,000,000. According to BleepingComputer, in a chat with an MSI agent, a Money Message operator reportedly said, "Say your manager, that we have MSI source code, including framework to develop bios, also we have private keys able to sign in any custom module of those BIOS and install it on PC with this bios." It is not yet clear whether the stolen data is authentic, and MSI has not yet commented on the incident. The incident serves as a reminder of the need for strong cybersecurity measures to protect against the growing threat of ransomware attacks. Companies must take proactive steps to secure their networks and educate their employees on how to identify and respond to potential cyber threats.
Computer Hardware
In the latest installment of hardware smuggling busts by Chinese customs, authorities have arrested a hardware trafficker who tried to sneak 420 M.2 SSDs into China. Customs estimated the seized SSDs are worth around HK$258,000, or $32,984.94. Chinese news outlet HKEPC first spotted the story and reported that the smuggler tried to slip the illicit goods from Macau to Zhuhai through the Gongbei Port. Given the proximity between Macau and Zhuhai, the passage is one of the most popular trafficking routes for mules. It wasn't long ago that a woman hid more than 200 Alder Lake chips inside her fake silicone belly, or another man tried to stroll into China with 160 Intel processors taped to his body. Normally, traffickers try to smuggle high-value goods, such as processors or graphics cards. A recent attempt to conceal 70 graphics cards among 617 pounds of live lobster comes to mind. On the contrary, this SSD smuggler opted to sneak in lower-value hardware instead. It isn't the first time we've seen M.2 SSDs transported into China. A recent bust from this year detained a man who hid 84 SSDs inside his scooter. M.2 SSDs contain metallic parts, so it's close to impossible to get them past metal detectors without raising the alarm. Instead of hiding the SSDs inside a package, the perpetrator opted to tape the drives around his body for a bigger haul. The seized bounty amounted to 420 M.2 drives, equal to a combined value of approximately $32,984.94. At $78.53 a piece, the offender could be smuggling 1TB or 2TB PCIe 4.0 drives. For that matter, smuggling hardware from Macau or Hong Kong bypasses the customs duties that the Chinese government applies to the goods. As a result, the individuals who hire the smugglers see greater profit margins when they resell on platforms such as Goofish. There's a big grey market in mainland China for consumer electronic devices, including computer hardware.
Therefore, the smuggling is unlikely to stop. On the other hand, smugglers are just thinking up more creative methods to get goods into mainland China.
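The per-drive figure quoted in the report can be reproduced from the totals (the USD conversion of HK$258,000 is taken as given from customs):

```python
import math

total_usd, drives = 32_984.94, 420

per_drive = total_usd / drives            # ~78.5356
print(math.floor(per_drive * 100) / 100)  # 78.53 — the quoted price, truncated to cents
```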
Computer Hardware
The European Commission has adopted a new set of right to repair rules (PDF) that, among other things, will add electronic devices like smartphones and tablets to a list of goods that must be built with repairability in mind. The Register reports: The new rules will need to be negotiated between the European Parliament and member states before they can be turned into law. If they are, a lot more than just repairability requirements will change. One provision will require companies selling consumer goods in the EU to offer repairs (as opposed to just replacing a damaged device) free of charge within a legal guarantee period unless it would be cheaper to replace a damaged item. Beyond that, the directive also adds a set of rights for device repairability outside of legal guarantee periods that the EC said will help make repair a better option than simply tossing a damaged product away. Under the new post-guarantee period rule, companies that produce goods the EU defines as subject to repairability requirements (eg, appliances, commercial computer hardware, and soon cellphones and tablets) are obliged to repair such items for five to 10 years after purchase if a customer demands so, and the repair is possible. OEMs will also need to inform consumers about which products they are liable to repair, and consumers will be able to request a new Repair Information Form from anyone doing a repair that makes pricing and fees more transparent. The post-guarantee period repair rule also establishes the creation of an online "repair matchmaking platform" for EU consumers, and calls for the creation of a European repair standard that will "help consumers identify repairers who commit to a higher quality." "Repair is key to ending the model of 'take, make, break, and throw away' that is so harmful to our planet, our health and our economy," said Frans Timmermans, EVP for the European Green Deal, which aims to make the whole of EU carbon neutral by 2050.
On that note, the EC proposed a set of anti-greenwashing laws alongside passing its right to repair rule yesterday that would make it illegal to make any green claims about a product without evidence. Citing the fact that 94 percent of Europeans believe protecting the environment is important, the EC said its proposal covers any explicit, voluntarily-made claims "which relate to the environmental impact, aspect, or performance of a product or the trader itself." Any such claims, like a laptop being made from recycled plastic, would need to be independently verified and proven with scientific evidence, the EC said.
Computer Hardware
NVIDIA (NVDA): Why You Should Stay Invested in This Stock

This story originally appeared on StockNews.

Shares of chipmaker NVIDIA (NVDA) have gained more than 127% over the past six months. The rally has been driven primarily by investor perception that NVDA is well-poised to take advantage of the ongoing boom in AI. Despite NVDA’s solid growth potential in AI, our proprietary rating system has rated the stock ‘Neutral’ as macroeconomic uncertainties linger. Graphics chip designer NVIDIA Corporation (NVDA) reported disappointing results for the fourth quarter and full-year fiscal 2023. Furthermore, the company is expected to face macroeconomic challenges in the near term, including supply chain disruptions, high borrowing costs, and eroding consumer spending amid inflationary and recessionary pressures. Despite a weak performance in fiscal 2023, shares of NVDA have gained 127.1% over the past six months. Improved investor sentiment is driven by optimism about the chipmaker’s growth prospects amid the AI boom. While NVDA is making enormous efforts to boost its expansion in high-growth areas, it could be wise to wait for a better entry point in this stock due to its fundamental weakness, elevated valuation, and near-term macro headwinds. Reflecting the uncertain prospects, our proprietary POWR Ratings system has rated this chip stock C (Neutral). With a $666.95 billion market cap, NVDA provides graphics and compute & networking solutions in the United States, Taiwan, China, and internationally. The company reported fourth-quarter revenue of $6.05 billion, down 21% year-over-year. In addition, NVDA’s non-GAAP net income and EPS declined 35% and 33% year-over-year to $2.17 billion and $0.88, respectively.
Furthermore, analysts are bearish about the company’s near-term prospects as it grapples with macroeconomic headwinds, including declining consumer spending amid still-elevated inflation, rising interest rates, supply chain constraints, and growing export restrictions. However, the graphics chip maker is working effectively to customize its products to meet the rules and regulations of the countries they are exported to. On March 21, 2023, NVDA announced that it had modified its flagship H100 product into the H800, a version that is legal to export to China, an important market for its computer hardware and software. Also, the company said it would ship the BlueField-3 data processing unit (DPU), a primary product to boost the speed of computing infrastructure, to China. Despite macroeconomic challenges dampening near-term demand for its solutions, AI is viewed as a long-term growth opportunity for NVDA, since AI adoption is still at its starting point following the recent viral success of OpenAI’s ChatGPT. “AI is at an inflection point, setting up for broad adoption reaching into every industry. From startups to major enterprises, we are seeing accelerated interest in the versatility and capabilities of generative AI,” said Jensen Huang, founder and CEO of NVDA. The company is partnering with leading cloud service providers to offer AI-as-a-service that provides businesses access to NVIDIA’s world-leading AI platform. Enterprises and customers could use each NVIDIA AI layer – the AI supercomputer, acceleration libraries software, or generative AI models – as a cloud service. Here’s what could influence NVDA’s performance in the upcoming months:

Positive Latest Developments

On March 21, 2023, NVDA and Adobe (ADBE) announced a partnership in which the companies would co-develop a new generation of advanced AI models using NVIDIA Picasso and Adobe tools like Creative Cloud.
This partnership, primarily focusing on the deep integration of generative AI in creative workflows, might boost NVDA’s growth and profitability. Moreover, on the same day, NVDA and Google Cloud delivered a new generative AI platform, built on the new L4 GPU and Vertex AI, to accelerate the work of companies making a rapidly expanding number of generative AI applications. This new introduction should bode well for the company.

Bleak Financials

For the fourth quarter of fiscal 2023, NVDA’s revenue decreased 20.8% year-over-year to $6.05 billion. The company’s non-GAAP gross profit declined 23.3% year-over-year to $3.83 billion. Also, its non-GAAP operating expenses increased 22.7% year-over-year to $1.78 billion. Furthermore, the company’s non-GAAP income from operations declined 39.5% year-over-year to $2.22 billion. Also, its non-GAAP net income decreased 35.1% year-over-year to $2.17 billion, while its non-GAAP EPS came in at $0.88, down 33.3% year-over-year.

Mixed Analyst Estimates

Analysts expect NVDA’s revenue for the fiscal 2024 first quarter (ending April 2023) to decline 21.3% year-over-year to $6.52 billion. The consensus earnings per share estimate of $0.91 for the current quarter indicates a decline of 33.1% year-over-year. However, analysts expect NVDA’s revenue and EPS for the fiscal year (ending January 2024) to increase 11.3% and 35.7% year-over-year to $30.03 billion and $4.53, respectively. Also, the company’s revenue and EPS for fiscal 2025 are expected to grow 24.5% and 33.7% year-over-year to $37.39 billion and $6.06, respectively.

Robust Profitability

NVDA’s trailing 12-month gross profit margin of 56.93% is 12.7% higher than the 50.54% industry average. Its trailing 12-month EBITDA margin of 26.40% is 183.8% higher than the 9.30% industry average. Also, the stock’s trailing 12-month net income margin of 16.19% is 502% higher than the industry average of 2.69%.
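As a quick sanity check on the percentage premiums quoted in this article, here is a minimal Python sketch (the helper name is illustrative, not from any source; note that the published figures are rounded, so a recomputed premium can differ from the article's by about a tenth of a point):

```python
def premium_vs_industry(stock_metric: float, industry_avg: float) -> float:
    """Percent by which a stock's metric exceeds the industry average."""
    return (stock_metric / industry_avg - 1) * 100

# Net income margin: 16.19% vs. the 2.69% industry average
print(round(premium_vs_industry(16.19, 2.69)))  # -> 502, matching the article

# Gross profit margin: 56.93% vs. 50.54%
print(round(premium_vs_industry(56.93, 50.54), 1))  # -> 12.6 (article rounds to 12.7)
```

The same formula reproduces the valuation premiums later in the piece: a forward P/E of 59.96x against a 20.16x industry average works out to roughly 197.4% higher.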
Furthermore, NVDA’s trailing 12-month ROCE, ROTC, and ROTA of 17.93%, 9.61%, and 10.61% compare to the industry averages of 1.96%, 2.06%, and 0.67%, respectively.

Elevated Valuation

In terms of forward non-GAAP P/E, NVDA is currently trading at 59.96x, 197.4% higher than the industry average of 20.16x. The stock’s forward EV/Sales of 22.23x is 710.8% higher than the industry average of 2.74x. Moreover, its forward EV/EBITDA multiple of 66.86 is 389.9% higher than the industry average of 13.65. In addition, the stock’s forward Price/Sales of 22.27x is 725.7% higher than the industry average of 2.70x. Its forward Price/Cash Flow multiple of 63.54 is 238.5% higher than the industry average of 18.77.

POWR Ratings Reflect Uncertainty

NVDA has an overall C rating, equating to Neutral in our POWR Ratings system. The POWR Ratings are calculated by considering 118 distinct factors, with each factor weighted to an optimal degree. Our proprietary rating system also evaluates each stock based on eight distinct categories. NVDA has a B grade for Quality, in sync with higher-than-industry profitability. Also, it has a B grade for Momentum. The stock is currently trading above its 50-day and 200-day moving averages of $144.89 and $178.17, respectively. NVDA has a C grade for Growth, consistent with its weak financials and mixed analyst expectations. On the other hand, the stock has an F grade for Value, consistent with its higher valuation relative to its industry peers. Also, its 24-month beta of 2.01 justifies a D grade for Stability. NVDA is ranked #63 out of 91 stocks in the Semiconductor & Wireless Chip industry.

Bottom Line

NVDA reported deteriorating financials in the fourth quarter and fiscal year 2023.
While the company is well-positioned to cash in on the open-ended growth opportunities presented by AI in the long run, its revenues and earnings are expected to be affected by ongoing macroeconomic uncertainties and the growing risk of a recession. Given NVDA’s disappointing financials, significantly high valuation, and near-term macroeconomic headwinds, it could be wise for investors to wait for a better entry point in this chip stock.

Stocks to Consider Instead of NVIDIA Corporation (NVDA)

Given its uncertain short-term prospects, the odds of NVDA outperforming in the weeks and months ahead are compromised. However, there are many industry peers with much more impressive POWR Ratings. So, consider these three A-rated (Strong Buy) stocks from the Semiconductor & Wireless Chip industry instead: United Microelectronics Corp. (UMC), SUMCO Corporation (SUOPY), and Tower Semiconductor Ltd. (TSEM). NVDA shares were trading at $277.55 per share on Tuesday morning, up $7.53 (+2.79%). Year-to-date, NVDA has gained 89.95%, versus an 8.54% rise in the benchmark S&P 500 index during the same period.

About the Author: Mangeet Kaur Bouns

Mangeet’s keen interest in the stock market led her to become an investment researcher and financial journalist. Using her fundamental approach to analyzing stocks, Mangeet looks to help retail investors understand the underlying factors before making investment decisions.
Computer Hardware
The Intel Core i9-12900K processor already boasts 16 cores and 24 threads, but the 13th gen version promises to increase that to 24 cores and 32 threads. Intel accidentally posted details for its upcoming slate of desktop processors on its Canadian site, and though the company was quick to pull the specs, the Internet Archive has your back for those looking to see the original version. The Intel Core i5-13600K is looking at 14 cores and 20 threads while pushing up to a 5.1 GHz max frequency. The i7-13700K is 16 cores to 24 threads and 5.3 GHz, while the i9-13900K is rocking 24 cores, 32 threads, and a 5.4 GHz frequency. With Turbo Boost Max 3.0 or Thermal Velocity Boost, assuming you have the appropriate power and cooling, you’ll be able to push to even higher frequencies. These numbers aren’t too surprising, but they do help confirm earlier leaks. In leaked slides shown on igor’sLAB, the 13th gen CPUs previously seemed to have scaled cores and threads compared to the 12th gen. Intel has promised that, along with the speeds shown here, it’s also increased the L2 cache and L3 Intel Smart Cache on i5 K-series processors and above. Of course, the new chipsets will support CPU PCIe 5.0 interfaces up to 16 lanes and continue facilitating DDR5 and DDR4 RAM, though they will now go up to DDR5-5600 compared to the 12th gen’s limit of DDR5-4800. The previously leaked SKU chart slide gives a good rundown of what’s going on here. Beyond the eye-catching boasts of increased cores and threads, there isn’t a big killer feature here over what’s on the current 12th gen Alder Lake series, which made waves by introducing DDR5 support. In our own review of the 12th gen chips, we already felt Alder Lake was pretty damned future-proof, supporting PCIe Gen5 and Thunderbolt 4.
And considering you’ll need to drop between $280 and $590 for any of the new mid-to-high-end CPUs, these hybrid chips are going to be a big ask for many users out there. And, of course, AMD has already rolled up to the corner, putting Intel on blast with its own upcoming Zen 4 CPU lineup, available at the end of this month. AMD is claiming that its 7000 series chips will also allow boost clocks of over 5 GHz to match Intel, as well as support for PCIe 5 and DDR5. It’s hard to judge their speeds versus Intel CPUs just based on cores and clock speeds, as we’re only working with pre-release material. What’s most important is pricing, with the highest-end Ryzen 9 7950X costing $699 at launch. A top-of-the-line Core i9-12900K runs over $100 cheaper, so eyes are on Intel for how it prices Raptor Lake. AMD also won’t have a 7800X CPU at its Sept. 27 launch, which means that when it does eventually put out its mid-range CPU version, it’ll likely compete more with Intel’s 13th gen than it will with the 12th gen. And since you’ll likely need to upgrade your motherboard to support this new slate of gaming-ready CPUs, users will have an even bigger decision to make. There’s some argument for further future-proofing your computer hardware, but we still don’t know pricing for those upcoming 13th gen processors or their release window. That, in my mind, is going to be the main deciding factor in whether there’s any reason to hold off upgrading if you haven’t already gone for a 12th gen and you’re absolutely set on Intel CPUs. And if you already have dropped several Benjamins on new chips, I have serious doubts that digging deeper into your wallet for the 13th gen will result in the kind of performance gains that demand spending even more right away.
Computer Hardware
Hardcore overclocker Elmor officially broke the CPU frequency world record with Intel's brand-new Core i9-13900K 24-core processor. The chip hit a staggering 8.812 GHz using liquid nitrogen cooling, dethroning the eight-year reigning champion, the FX-8370, by 90 MHz. That's right; it took eight years for a new CPU architecture to dethrone AMD's FX series processors. Those chips are infamous for their mediocre CPU performance at launch; however, they scaled incredibly well under liquid nitrogen overclocking. The previous world-record FX-8370 result was set by master overclocker The Stilt, featuring a clock speed of 8.722 GHz at minus 186 degrees Celsius. Elmor accomplished this monumental feat thanks to Intel's new highly clocked 13th Gen Raptor Lake CPU architecture. Out of the box, the Core i9-13900K can run over 5.5 GHz on all P-cores while also hitting 5.8 GHz under lightly threaded workloads. The 13900K is, by far, Intel's highest-clocking chip to date. With the frequency headroom already available to the Raptor Lake CPUs under traditional air/liquid cooling, it makes sense that these chips would excel in liquid nitrogen overclocking, where the frigid temperatures can stabilize these chips far beyond 6 GHz. But this is just the beginning; Elmor's world record was set on the launch day of Intel's 13th Gen CPUs. So there could be many more world records to break with Raptor Lake once overclockers get accustomed to the architecture's unique behavior. Intel has also teased an upcoming successor to the Core i9-12900KS that will feature 6 GHz boost clocks by default. If Intel is binning Raptor Lake dies for this new model, we should see a new world record broken with the higher-quality silicon that will come with this new Raptor Lake CPU. Aaron Klotz is a freelance writer for Tom's Hardware US, covering news topics related to computer hardware such as CPUs and graphics cards.
Computer Hardware
Although the Samsung 990 Pro has replaced it, the Samsung 980 Pro used to be one of the best SSDs around. Now, it may be close to three years old, but it's still a popular PCIe 4.0 drive that's much more affordable than its successor, so it doesn't come as a shock that there are many knockoffs on the market. One Chinese user from the Baidu Tieba forums (via Harukaze5719) recently encountered an imitation posing as the Samsung 980 Pro. The phony drive had everything that suggested it was legit: the Samsung 980 Pro 2TB sticker and the firmware, which later turned out to be fake. The forgery was so good that the mysterious drive even managed to fool the Samsung Magician software. Once the sticker was removed, the drive had all the signs of a counterfeit SSD. The SSD utilizes the Maxio MAP1602A PCIe 4.0 SSD controller, the same one that powers the Acer Predator GM7. The controller is built on TSMC's 12nm process node and features a DRAM-less design. More experienced consumers know that Samsung produces and uses its own SSD controllers and NAND chips in the brand's SSDs. In the case of the Samsung 980 Pro, the SSD employs the 8nm Elpis controller with a DRAM-equipped design. Additionally, Samsung's SSD controllers are typically larger than the competition's. Regarding the NAND selection, the Samsung 980 Pro sports 128-layer TLC 3D V-NAND. The fraudulent drive uses YMTC's 128-layer TLC 3D NAND (X2-9060) built with the Chinese chipmaker's Xtacking 2.0 technology. With a different SSD controller and NAND, the bogus drive obviously won't perform like a real Samsung 980 Pro. According to the user-provided benchmarks, the SSD delivered sequential read and write speeds of around 4.8 GBps and 4.5 GBps, respectively, in CrystalDiskMark, and up to 4.2 GBps and 3.9 GBps in AS SSD. By contrast, the Samsung 980 Pro offers sequential reads and writes of up to 7 GBps and 5 GBps, respectively, depending on the capacity.
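To put those benchmark numbers side by side, here is a minimal sketch comparing the counterfeit's CrystalDiskMark results above against the genuine drive's rated sequential speeds (the helper name is illustrative, not from any source):

```python
def shortfall_pct(measured_gbps: float, rated_gbps: float) -> float:
    """Percent by which a measured speed falls short of the rated speed."""
    return (1 - measured_gbps / rated_gbps) * 100

# Counterfeit drive's CrystalDiskMark results vs. the real 980 Pro's rated speeds
read_gap = shortfall_pct(4.8, 7.0)   # sequential read falls ~31% short
write_gap = shortfall_pct(4.5, 5.0)  # sequential write falls 10% short
print(f"read: {read_gap:.0f}% short, write: {write_gap:.0f}% short")
```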
So the performance numbers of the false drive are way off. The merchant sold fake Samsung 980 Pro 2TB SSDs on Xianyu, Taobao's second-hand market, for 880 yuan, or $127.77. The drives even come in official Samsung packaging. The price alone makes it easy to see that the SSD is a sham: the actual drive currently retails for $169.99 on the U.S. market. The scammed user claims he had been out of the PC game for a while, so he wasn't up to speed on the latest technology. Furthermore, he reportedly didn't have a PC at hand, so ten days had passed before he could test the drive. There's always an inherent risk to buying computer hardware on the second-hand market, especially in China. But even distinguished online shopping platforms, such as AliExpress or Taobao, are filled with malicious merchants ready to take advantage of less-knowledgeable consumers. These same actors have infiltrated U.S. retailers as third-party sellers. We've seen our fair share of scam cases, like the $39 30TB portable SSD at Walmart or the $100 16TB SSD on Amazon. If something sounds too good to be true, it usually is.
Computer Hardware
LAKE MARY, Fla., Sept. 19, 2023 /PRNewswire/ -- In the fast-paced world of technology, a groundbreaking platform is emerging, poised to redefine the future of blockchain and artificial intelligence (AI). Meet VITREUS, a nexus of innovation designed to make the power of blockchain and AI accessible to all industries. The gap between corporations' desire to leverage the technology and its current maturity has left a lot to be desired in traditional offerings, until now. VITREUS is not your typical blockchain platform. It takes a multifaceted approach that combines the immutability of blockchain, the versatility of digital assets, and AI-driven compliance. The result? A unified, highly secure ecosystem for data management, compliance, and automation, all in one place. With an innately reduced attack surface, VITREUS aims to be a boon to corporations looking to modernize their digital infrastructure and take full advantage of the blockchain technology they've awaited. What does this mean for regulated businesses, regardless of their industry? It means VITREUS provides them an opportunity to streamline operations, fortify security, and navigate complex regulatory landscapes more efficiently. VITREUS offers a future-proof solution, ready to adapt to evolving needs with emerging technologies like zero-knowledge proofs and secure multiparty computation. But let's break it down further:

Layer 0 - Interoperability & Shared Security: Think of this as the glue that holds everything together. It allows different specialized blockchains to seamlessly interact, expanding collaboration across industries while maintaining data privacy and security. All expansion of the VITREUS platform streams through Layer 0, adding speed, security, and interoperability.

Layer 1 - Data Storage: VITREUS offers a secure and resilient data storage solution that's more cost-effective than traditional cloud options.
Plus, it provides fine-grained control over data access and governance, limiting access to the original submitting party and preventing unauthorized access.

Layer 2 - COMPLiQ: This layer unleashes the power of AI. By connecting to an off-chain AI compute network, VITREUS boosts computational capacity, enhances AI capabilities, and ensures trustworthy AI-driven decisions. With a universal source of truth, Layer 2 helps combat many real-world issues associated with using LLM-based systems.

Layer 3 - Businesses & Power Plant: Tailored for businesses, this layer bridges the gap between legacy systems and the blockchain, simplifying data integration, migration, and automation while preserving data integrity. With business data securely moved to the blockchain, the opportunity to leverage real-world data for smart-contract- and digital-asset-driven interactions presents new avenues for businesses to grow.

Layer 4 - Logichains: These are the building blocks that enhance security and reliability across the entire platform, reducing the burden on individual components and directly improving scalability.

VITREUS stands out with its unique blockchain architecture and next-gen AI. VITREUS is more than just technology; it's a game-changer for industries seeking innovation and efficiency. It offers a seamless transition to blockchain technology, making it easy for businesses to leverage the benefits without disrupting their operations. Bound by three layers of Decentralized Physical Infrastructure Networks (DePIN), VITREUS positions its five-layer blockchain as a fundamental shift to a new era of corporate-grade technology that leverages the hard work of its predecessors. It layers the familiar web2-like user experience the general population is accustomed to on top of web3 technology under the hood, easing the usability challenges associated with web3 wallets and transactions.
Media Contact: Jaren Holmes +1 936 443 1393 [email protected] View original content to download multimedia:https://www.prnewswire.com/news-releases/vitreus-a-newcomer-is-taking-ai-and-blockchain-innovation-to-the-next-level-301931394.html SOURCE VITREUS
Emerging Technologies
With help from Derek Robertson

Negotiators within the United Nations are grappling with how to address artificial intelligence and potential state surveillance of political dissidents in a new cybersecurity treaty that’s in the works. Like many tech policy discussions lately, the rapid emergence of AI as a dual-use tool for carrying out and protecting against cyberattacks has thrown a wrench in the proceedings in New York City, as negotiators sketch out how countries should cooperate with each other when investigating cybercrime. The treaty would bind countries to common standards for sharing data and information, shaping how countries deal with criminal investigations in the digital realm for decades to come. With the current session wrapping on Sept. 1, negotiators from different member states are duking it out over critical definitions in the treaty with wide-reaching implications for what qualifies as a cybercrime, and what safeguards need to be placed on the flow of information between countries. One of the core tensions playing out is how much information the U.S. and its allies must provide to countries like Russia and China with less-than-democratic regimes — particularly on cybercrime investigations that could double as surveillance operations. Some countries want the treaty to broadly cover the misuse of information and communication technologies, which would allow access to “everything that touches the flow of data,” said Deborah McCarthy, a retired ambassador who is the U.S.’ lead negotiator on the treaty. “That will include AI, in all aspects, in all its forms,” she said. The United States wants more specific definitions and for the treaty to focus instead on a narrow set of crimes in order to limit the control a country can exert over its or other nations’ information space.
Digital rights advocate Katitza Rodriguez, policy director for global privacy at the Electronic Frontier Foundation, said the broad scope of the current treaty could authorize sharing personal data with law enforcement in other countries — including biometric information and datasets used to train AI. Rodriguez said the treaty’s lack of precision on what kinds of data needed to be shared “could potentially lead to sharing of intrusive data without a specific assistance request.” “In theory, the long arm of this treaty could access citizens in other countries who may express opinions counter to the government of the country that is requesting [information on the citizen],” McCarthy said. “And we’re saying no, it has to be for certain crimes, under certain conditions and safeguards would apply.” Negotiators will hammer out safeguards this afternoon for the flow of information between law enforcement, McCarthy said. The U.S. and its allies specifically want to lay the groundwork that would deny information-gathering requests that could be used to target political dissidents. Additionally, in its current iteration, digital rights advocates are worried the treaty’s broad definitions of cybercrime might criminalize legitimate cybersecurity research on emerging technologies like AI, thus chilling work in the field. Protections for private citizens carrying out cybersecurity research are still under debate on the global stage, even as the U.S. federal government turns to hackers to help it catch vulnerabilities in large language models. 
Raman Jit Singh Chima, Asia policy director and senior international counsel for the digital rights advocacy group Access Now, said the UN treaty does “not actually help those who are trying to make sure that AI does not result in an explosion in cybercrime.” McCarthy noted that the need for built-in protections for cybersecurity researchers was a “consistent message” from industry, think tanks and human rights groups, and that proposals for such protections are “still being discussed.”

With the new school year here, educators are slowly learning to embrace ChatGPT and other AI tools in the classroom. That’s the main takeaway from a report this morning by POLITICO’s Blake Jones, Madina Touré, and Juan Perez Jr., who write about how after early bans and panic over the technology, it’s now being consciously integrated into curricula across the country. Olli-Pekka Heinonen, the director general of the International Baccalaureate program, told them that “AI will be affecting societies to a large extent, and they are so strongly influencing the basic ways of how we make sense of reality, how we know things, and how we create things, that it would be a mistake if we would leave schools out of that kind of development.” Although individual schools and local and state governments are getting more ChatGPT-friendly, there still isn’t an education-focused regulatory response to the technology (with the exception of guidance issued in May by the Department of Education for personalized learning). The POLITICO team reports that nonprofits, unions, and educators are largely concerned with privacy, security, and job preparation. — Derek Robertson

What does one of the highest-profile champions of open technology think about Elon Musk’s efforts to crowdsource fact-checking on X?
Ethereum founder Vitalik Buterin offered his thoughts in a recent blog post, arguing that the “community notes” feature meant to provide Wikipedia-like, consensus-driven fact-checking on the platform formerly called Twitter is not only “informative and valuable” but highly aligned with the ethos of the crypto world. “Community Notes are not written or curated by some centrally selected set of experts; rather, they can be written and voted on by anyone, and which notes are shown or not shown is decided entirely by an open source algorithm,” Buterin writes. “It’s not perfect, but it’s surprisingly close to satisfying the ideal of credible neutrality, all while being impressively useful, even under contentious conditions, at the same time.” He writes that although it doesn’t quite add up to the vision of “decentralized” social media that many in the crypto world hold, it could play a big role in driving interest in, and preference for, the principles that the open-source world holds dear. — Derek Robertson

- A computer science student explains his breakup with an AI “agent.”
- A pioneer in the world of “feeling” prosthetic limbs has died at 49.
- British chip company Arm’s upcoming IPO will test the investment appetite for AI.
- What’s the best way to get your resume noticed in an AI-driven hiring world?
- Sam Altman’s ambitious Worldcoin project is already under serious threat.

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); and Steve Heuser ([email protected]). Follow us @DigitalFuture on Twitter.
Emerging Technologies
Infosys Partners With Walmart Commerce Tech To Deliver Omnichannel Solutions To Retailers

Infosys will help retailers implement and use Store Assist, which enables them to provide 'seamless omnichannel experiences'.

Indian IT services company Infosys on Wednesday said it has joined hands with Walmart Commerce Technologies to help retailers harness technology solutions that simplify customer and store employee experiences. Infosys will help retailers implement and use Store Assist, which enables them to provide 'seamless omnichannel experiences' such as pickup, delivery and ship-from-store for both employees and customers, the company said in a statement. "Infosys today announced a new collaboration with Walmart Commerce Technologies to help retailers leverage technology solutions that simplify customer and store employee experiences," it said. It is pertinent to mention that Infosys has a consumer and retail focused practice that has been helping retailers, consumer tech, consumer packaged goods and logistics companies navigate their way through digital and emerging technologies over the last three decades. Through platforms like Infosys Equinox, partner products, and leading digital capabilities, Infosys has helped over 190 retailers globally transform into sentient enterprises by enhancing their core digital capabilities, advancing their operating models and transforming their talent for the future, the company said.
Emerging Technologies
NEW DELHI, Aug 29 (Reuters) - India's telecom leader Reliance (RELI.NS) said on Monday it is working with Alphabet Inc's (GOOGL.O) Google to launch a budget 5G phone as it laid out a $25 billion plan for introducing the next-generation wireless services within two months. Speaking at the company's annual general meeting, Reliance Chairman Mukesh Ambani said Jio's 5G network will be the world's largest, launching in main cities including New Delhi and Mumbai before being expanded across India by December next year. "Jio has prepared an ambitious 5G roll-out plan," the billionaire tycoon said, adding that the phone being developed with Google would be "ultra-affordable". 5G data speeds in India are expected to be about 10 times faster than those of 4G, with the network seen as vital for emerging technologies like self-driving cars and artificial intelligence. Reliance's 5G plans throw down the gauntlet to rivals Bharti Airtel (BRTI.NS) and Vodafone-Idea (VODA.NS) in the world's second-biggest mobile market. Jio, India's biggest mobile carrier with more than 420 million customers, snapped up airwaves worth $11 billion in a $19 billion 5G spectrum auction earlier this month. The aggressive 5G strategy builds on Jio's playbook of disrupting India's telecoms market, having sparked a price war in 2016 when it launched cheap 4G data plans and free voice services, and later a 4G smartphone costing just $81, again in partnership with Google. Reliance said it would partner with Qualcomm Inc (QCOM.O) to provide cloud-based 5G network solutions for businesses, and with Meta Platforms Inc (META.O) for its JioMart shopping integration on WhatsApp.
It also announced a foray into private 5G networks.

Reporting by Munsif Vengattil in New Delhi; Editing by Aditya Kalra and Kirsten Donovan
Emerging Technologies
Government Inks 8 MoUs With IBM To Provide Curated Courses For Youth In Future-Ready Skills

Agencies of the ministries of education and skill development and entrepreneurship have signed eight agreements with IBM Inc. under which curated courses will be provided to empower youth in India with future-ready skills. Minister of Education and Skill Development and Entrepreneurship Dharmendra Pradhan presided over the signing of the MoUs on Wednesday.

"The collaboration will focus on the co-creation of curriculum for skilling learners across school education, higher education, and vocational skills on emerging technologies like AI (including generative AI), cyber-security, cloud computing and professional development skills," the Ministry of Skill Development and Entrepreneurship stated.

Speaking at the MoU signing event, Pradhan said that India, with its vast and youthful population, stands at the cusp of tremendous potential. To harness this demographic dividend, it is crucial to equip the youth with the necessary skills to excel in today's modern workforce. "This collaboration marks a significant stride toward our vision of a 'Skilled India' and in scaling up digital skills training and skill building in emerging technologies using IBM SkillsBuild platform," the minister stressed. He said the partnerships will go a long way in helping learners harness the true potential of technology, making them future wealth creators and entrepreneurs in society.

The collaboration between IBM and the Ministry of Education and the Ministry of Skill Development and Entrepreneurship spans three core levels of education.
IBM will provide access to digital content from IBM SkillsBuild for high school students, teachers, and trainers on cutting-edge skills in schools identified by the Navodaya Vidyalaya Samiti, National Council for Teacher Education and Kendriya Vidyalaya Sangathan. This program will be offered online, via webinars and in-person workshops conducted by IBM's CSR implementation partners. In addition, IBM will refresh CBSE's AI curriculum for Grades 11 and 12 and develop a cyber skilling and blockchain curriculum for high school students, to be hosted on IBM SkillsBuild.

IBM will continue its partnership with the Ministry of Skill Development and Entrepreneurship and work closely with the Directorate General of Training and state vocational education and skilling departments to onboard job seekers, including the long-term unemployed and school dropouts, to IBM SkillsBuild and enable them to gain the technical and professional skills required to re-enter the workforce.
Emerging Technologies
ChatGPT is an advanced deep neural network that operates on the transformer model, capable of generating new text based on input data. Using an ML approach called unsupervised learning, the model is pre-trained on vast amounts of text data, allowing it to identify underlying language patterns and structures. The model comprises multiple layers of self-attention and feed-forward neural networks, enabling it to capture the intricate relationships and dependencies between words within a sentence.

Language modeling is the primary task during pre-training: the model predicts the next word in a given text sequence based on the previous words. As a result, the model acquires a high-quality representation of the language. After pre-training, the model can be fine-tuned for specific natural language processing tasks such as sentiment analysis, language translation, and text classification. One of the most significant benefits of ChatGPT is its ability to generate coherent, contextually appropriate responses to text inputs, even for open-ended prompts like chatbot conversations. The pre-trained model generates a probability distribution over the next word in the sequence, and the output is sampled from this distribution.

The exciting role of ChatGPT in data science

Data science is an exciting and ever-evolving field that requires constant learning and adaptation to stay ahead. Fortunately, ChatGPT, an AI language model, can play a vital role in this process by providing data science professionals with a range of benefits that make it an indispensable resource for acquiring new skills and staying up-to-date with industry trends. Here are some ways in which ChatGPT can help you enhance your knowledge of data-driven decision-making:

- Personalized Learning Assistance: ChatGPT can provide personalized assistance by understanding the context of a user's query and delivering tailored explanations, examples, and resources to help learners comprehend complex data science concepts better.
- Access to Diverse Learning Resources: The extensive knowledge base of the tool enables it to recommend a wide range of learning resources for data science, including articles, tutorials, online courses, books, and videos from reputable sources.
- Networking and Collaboration: The language model can assist in networking and collaboration within the data science community, generating conversation starters or discussion topics to engage with peers, mentors, and industry experts, fostering a collaborative learning environment, and expanding their professional network.
- Practice Problem Generation: ChatGPT can generate practice problems and coding challenges related to various data science topics, allowing learners to test their knowledge and develop problem-solving skills.
- Interactive Learning Environment: The AI assistant provides an interactive learning environment that enables users to ask questions, receive immediate feedback, and engage in conversations about data science topics, fostering a deeper understanding and helping to solidify knowledge.
- Code Explanation and Debugging Assistance: ChatGPT can help learners by providing explanations for specific code snippets, offering suggestions for improvements, and even assisting with debugging issues.
- Staying Current with Industry Trends: ChatGPT is an intelligent agent that can provide information on recent developments, emerging technologies, and new methodologies, ensuring that users are aware of the latest trends and best practices in the industry.

Significant examples of ChatGPT in action

ChatGPT is a versatile tool that has been used across various industries, including healthcare, finance, and marketing. Here are some examples of its applications:

- Healthcare: Although human doctors will always remain the ultimate decision-makers in healthcare, ChatGPT can greatly assist with clinical decision support.
By providing real-time, evidence-based recommendations, the AI language model can help flag potential drug interactions, suggest treatment options for specific conditions, and provide relevant clinical guidelines.

- Marketing: ChatGPT can revolutionize marketing by engaging in personalized conversations with potential customers, answering their queries, and providing information about products and services. It can also be integrated into lead generation forms, offering recommendations and assistance.
- Finance: ChatGPT has been utilized to predict financial market trends and identify potential investment opportunities. It has also been used to develop intelligent agents that can assist customers with basic financial queries.

ChatGPT prompts for data science professionals

ChatGPT prompts are initial messages or inputs given to the language model to start generating responses or outputs. These prompts are typically a few words or sentences that provide context and direction to the model about the type of response that is expected. The prompts can be in the form of questions, statements, or incomplete phrases. They can guide the model to produce accurate and relevant responses, streamline the process, generate bulk responses, and tailor them to the user's needs. In short, ChatGPT prompts are a valuable tool for generating high-quality responses quickly and efficiently.

- Prompt 1: Provide me a Python code snippet for a decision tree classifier.
ChatGPT

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# load the iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# create a decision tree classifier
clf = DecisionTreeClassifier()

# fit the classifier to the training data
clf.fit(X_train, y_train)

# make predictions on the testing data
y_pred = clf.predict(X_test)

# calculate the accuracy of the classifier
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: {:.2f}%".format(accuracy * 100))

In this example, we are using the Iris dataset, which consists of samples of iris flowers and their corresponding species. We first split the data into training and testing sets, and then create a decision tree classifier using the DecisionTreeClassifier() function from the sklearn.tree module. We fit the classifier to the training data using the fit() method and then make predictions on the testing data using the predict() method. Finally, we calculate the accuracy of the classifier using the accuracy_score() function from the sklearn.metrics module, and print the result to the console.

- Prompt 2: Give an example of a data science project where teamwork was essential to the project's success.

ChatGPT

One example of a data science project where teamwork was essential to its success is the development of a recommendation system for an e-commerce website. In this project, data scientists would need to work closely with developers and product managers to gather requirements, understand the business goals and limitations, and identify the data sources needed to build the recommendation system.
The data scientists would then need to collaborate with data engineers to collect, clean, and pre-process the data before feeding it into the recommendation algorithm. Once the algorithm is developed, the data scientists would need to work with developers to integrate the algorithm into the website and test it rigorously to ensure that it works as intended. Finally, the project team would need to analyze the performance of the recommendation system and make iterative improvements based on user feedback and data analytics. Throughout the entire project, effective communication and collaboration between team members would be critical to ensuring the success of the project.

- Prompt 3: I want you to act as a programmer and write a simple code to detect customer response rate.

ChatGPT

To detect the customer response rate, we first need to define what we consider a response. Let's assume that a response is when a customer completes a specific action, such as filling out a survey or making a purchase. Assuming that you have a dataset with customer information, including whether or not they responded, and the total number of customers who were contacted, here's a simple Python code snippet to calculate the response rate:

import pandas as pd

# Load the dataset into a Pandas dataframe
df = pd.read_csv('customer_data.csv')

# Calculate the total number of customers contacted
total_customers_contacted = len(df)

# Calculate the total number of customers who responded
total_customers_responded = len(df[df['responded'] == 1])

# Calculate the response rate
response_rate = (total_customers_responded / total_customers_contacted) * 100

# Print the response rate
print('Response rate: {:.2f}%'.format(response_rate))

This code loads the customer data into a Pandas dataframe, calculates the total number of customers contacted and the total number of customers who responded, and then calculates the response rate as a percentage. Finally, it prints the response rate to the console.
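A natural extension of the response-rate calculation is to segment the rate by marketing channel. The sketch below assumes a hypothetical 'channel' column alongside 'responded', and uses a small in-memory stand-in for the CSV so it runs on its own:

```python
import pandas as pd

# Small in-memory stand-in for customer_data.csv; the 'channel'
# column is an assumption, not part of the original dataset
df = pd.DataFrame({
    "responded": [1, 0, 1, 1, 0, 0, 1, 0],
    "channel":   ["email", "email", "sms", "sms", "email", "sms", "email", "email"],
})

# Mean of the 0/1 'responded' flag per channel gives the per-channel
# response rate; multiply by 100 to express it as a percentage
rate_by_channel = df.groupby("channel")["responded"].mean() * 100
print(rate_by_channel.round(2))
```

Because `responded` is a 0/1 flag, its mean within each group is exactly the fraction of responders, so `groupby(...).mean()` does all the work in one line.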
Note that this is a very simple example; in real-world scenarios, there may be more complex factors that affect the response rate, such as demographics, time of day, and the marketing channel used.

As a data science professional, you can use ChatGPT to learn, collaborate, and stay updated with the latest trends in the field. The prompts above are examples of how ChatGPT can provide in-depth information for professionals at all levels, helping you enhance your data science skills and expertise. Some other examples of prompts you could use are:

- R to Python translation
Prompt: Think of yourself as a code translator, please. The following R code needs to be converted to Python. [Put code here]
- Microsoft Excel formula
Prompt: Please assume the role of an Excel sheet specialist who can create formulas for Excel sheets. Please provide a formula that meets the following criteria.
- Python-based production of synthetic data
Prompt: Assume you are a Python programmer. Can you create a piece of code that produces fake data for a client database?

Tips for generating data science prompts

- Offer context: When asking ChatGPT a question, provide context that helps the model understand the nuances of the problem. This allows it to generate more informed responses.
- Be precise: To minimize the need for tweaking output, be clear about the input and desired outcome when posing a question to ChatGPT.
- Keep trying: Don't be afraid to try multiple prompts. Using different prompts can provide different perspectives on a problem and enable the model to generate a variety of responses.

Final Takeaways…

According to Sam Altman, the creator of ChatGPT, the future of AI is both awe-inspiring and alarming, and its potential is virtually boundless. Currently, GPT-4 is in development and is expected to be the most advanced large language model at launch.
ChatGPT can address the lack of specialists in the data science field by training laypeople to code and program solutions to analytics problems. This could make data science more accessible and attractive to individuals who previously found it daunting. To remain competitive, data scientists should focus on how they can augment their skills using this intelligent agent. As ChatGPT's impact on society becomes evident, other tech giants such as Google are developing their own models. It will be intriguing to observe how they perform in comparison to the AI chatbot, and how they will contend with its dominance in the field in the foreseeable future.
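As described earlier in this article, ChatGPT produces each word by generating a probability distribution over candidate next tokens and sampling from it. A minimal sketch of that sampling step, using made-up tokens and scores (real models work over vocabularies of tens of thousands of tokens), might look like:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Sample one token from a softmax distribution over raw scores.

    `logits` maps candidate next tokens to unnormalized scores; both
    the token names and the scores here are illustrative assumptions.
    """
    rng = random.Random(seed)
    # Softmax with temperature: higher temperature flattens the distribution
    scaled = {tok: s / temperature for tok, s in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    tokens = list(probs)
    # Draw one token with probability proportional to its softmax weight
    token = rng.choices(tokens, weights=[probs[t] for t in tokens], k=1)[0]
    return token, probs

# Hypothetical scores for the word following "the cat sat on the"
logits = {"mat": 3.1, "sofa": 2.2, "roof": 1.5, "banana": -1.0}
token, probs = sample_next_token(logits, temperature=0.8, seed=0)
print(token, {t: round(p, 3) for t, p in probs.items()})
```

Sampling rather than always taking the highest-probability token is what lets the model produce varied, natural-sounding continuations; lowering the temperature pushes the choice back toward the most likely token.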
Emerging Technologies
At this year's London Fashion Week, the growing intersection between fashion and technology took an innovative form: digital clothing tags. Backstage at the Ahluwalia show in February, I saw these tags attached to pieces from the British brand's latest collection, in a partnership with Microsoft and software platform EON. In place of the average, flimsy clothing tag (usually swiftly removed after purchasing an item), these versions each feature a QR code. When scanned, the tags unveil a bundle of information about the garment: how Ahluwalia manufactured it, insight into the supply chain, the item's lifecycle, and even the creative process — including the kind of music designer Priya Ahluwalia listened to during its making.

EON's mission is to improve traceability and enable the circularity of garments. A slew of luxury brands, including Prada, Coach, and Ralph Lauren, are looking to introduce some form of digital product IDs or have already done so, as a component of a larger goal to incorporate sustainable and technologically-advanced practices more staunchly into their businesses. Amazon Style, which opened its first physical store in California last year, has similarly added QR-laden tags to its clothes, which lean more toward details like sizing and customer ratings rather than information about Amazon's manufacturing process.

Circular fashion, a concept often relegated to a buzzword, is something brands increasingly strive to deliver. Circularity, in this regard, is the ability to repurpose and reuse clothing. Fashion is amongst the most polluting industries on the planet, producing 20 percent of global wastewater and 10 percent of all greenhouse gas emissions.
A collective push to shed this reputation and increase sustainability has been amplified in past years — not to mention the booming secondhand clothing market is estimated to be worth $350 billion by 2027. A recent report from U.N. Climate Change and CDP shows that the fashion industry is finally making tangible changes to achieve sustainability and take action against climate change. But there is work to be done: greenwashing is still rife when it comes to high street labels; fast-fashion presents enormous challenges. Wearing then throwing away fashion is still a major issue: 92 million tonnes of textiles end up in landfills each year. By 2030, it is estimated that this number will increase to 134 million tonnes of waste annually.

Many in the fashion industry believe technology like digital IDs holds potential for effective change. In this case, the tags act as a sort of passport for a garment, providing a holistic, end-to-end overview of the product. They ultimately deliver a blueprint for the coveted trinity: resale (the most environmentally-friendly tactic when it comes to fashion), repair, and recycling.

For Natasha Franck, the founder and CEO of EON, digital IDs — in the form of QR codes or NFC tags — hold the potential to turn "simple products" into "traceable and valuable assets". "Products become immersive media channels, connecting brands directly to their customers, on-demand," she tells Mashable. "Customers can scan their items with their phone and discover in-depth information about where and how they were made, or access services such as styling, care, repair, resale and more." These bits of "embedded information", as Franck puts it, transform pieces of clothing into smart garments, which can massively support brands and people entering the resale market, in particular.
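The "passport" idea is easy to picture in code. The sketch below is purely illustrative: the scan URL, product ID, and garment fields are invented for the example and do not reflect EON's actual schema, which the article does not detail:

```python
import json
from urllib.parse import quote

def build_digital_id_url(base_url, garment):
    """Pack garment traceability data into a URL a QR tag could encode.

    `base_url` and every field in `garment` are hypothetical stand-ins
    for whatever a real digital-ID platform would use.
    """
    # Compact, deterministic JSON so the same garment always yields the same URL
    payload = json.dumps(garment, separators=(",", ":"), sort_keys=True)
    # Percent-encode the payload so it is safe inside a query string
    return f"{base_url}?g={quote(payload)}"

garment = {
    "id": "AHL-2023-0042",  # hypothetical product ID
    "brand": "Ahluwalia",
    "materials": ["recycled cotton", "deadstock denim"],
    "made_in": "Portugal",
    "services": ["care", "repair", "resale"],
}
url = build_digital_id_url("https://id.example.com/scan", garment)
print(url)
```

In practice a tag would more likely encode only a short opaque ID, with the garment record kept server-side so the brand can update lifecycle and resale data after the tag is printed; embedding the payload directly in the URL just makes the end-to-end idea visible in a few lines.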
EON is developing an Instant Resale program, through which users can scan their product's digital ID and instantly resell it. The startup is currently working with French luxury brand Chloé and fashion resale site Vestiaire Collective on this initiative.

Maruschka Loubser, director of global partnerships at Microsoft, tells Mashable that digital ID tags, like those in the company's partnership with Ahluwalia and EON, are "an important part of the future of fashion" due to their ability to promote circularity creatively. "Digital IDs enable authenticated resale, rental and service business models — turning products into a service," Loubser explains. "In general, if a garment is smart it allows for multiple interactions, taking it from a linear business model to a circular one — it also enables garment transparency, whether it is being re-sold, rented, recycled or upcycled."

This isn't the first time Microsoft has forayed into fashion, recently dipping into the junction where AI meets design with Portugal-based company Fashable to create "an AI algorithm that can generate original clothing designs, helping fashion companies to meet customer demand, get to market faster, and reduce clothing waste."

The ubiquity of these tags is growing. In fact, the European Union proposed a standardized practice of digital passports for textiles last March, a facet of its Circular Economy Action Plan, which is part of the EU's larger 2050 climate neutrality target and the European Green Deal. Still, consumer attitudes may have to shift.
The Sustainability Consortium, a global non-profit with a focus on the consumer goods industry, conducted a 2020 study that concluded, "Digital tags can measure the frequency and duration of clothing use with reasonable reliability." The organization found that there are some constraints to scaling the idea more broadly, but consumer attitudes towards these tags became increasingly positive. Yoox Net-A-Porter Group, a global online retailer that also works with EON "to accelerate circularity", found in its research that customers progressively engaged with the information provided via the tags. In 2021, the company found that 39 percent of shoppers viewed information about the care of a product, while 47 percent viewed details about transparency.

Franck says that brands adopting technology in this manner will be able to "outperform others" in the realm of sustainability. She also believes the possibilities for this technology are massive for development in other spaces, with the potential to connect products with "emerging technologies like digital wardrobes, gaming and metaverse apps, NFTs and more." "Brands who succeed will move beyond selling 'simple products' to selling 'intelligent assets'," she says. "The possibilities are endless, and there is still so much innovation to come."
Emerging Technologies
Following the President’s Executive Order, New Reports Outline Recommendations to Protect Consumers, Investors, Businesses, Financial Stability, National Security, and the Environment

The digital assets market has grown significantly in recent years. Millions of people globally, including 16% of adult Americans, have purchased digital assets—which reached a market capitalization of $3 trillion globally last November. Digital assets present potential opportunities to reinforce U.S. leadership in the global financial system and remain at the technological frontier. But they also pose real risks as evidenced by recent events in crypto markets. The May crash of a so-called stablecoin and the subsequent wave of insolvencies wiped out over $600 billion of investor and consumer funds.

President Biden’s March 9 Executive Order (EO) on Ensuring Responsible Development of Digital Assets outlined the first whole-of-government approach to addressing the risks and harnessing the potential benefits of digital assets and their underlying technology. Over the past six months, agencies across the government have worked together to develop frameworks and policy recommendations that advance the six key priorities identified in the EO: consumer and investor protection; promoting financial stability; countering illicit finance; U.S. leadership in the global financial system and economic competitiveness; financial inclusion; and responsible innovation.

The nine reports submitted to the President to date, consistent with the EO’s deadlines, reflect the input and expertise of diverse stakeholders across government, industry, academia, and civil society. Together, they articulate a clear framework for responsible digital asset development and pave the way for further action at home and abroad. The reports call on agencies to promote innovation by kickstarting private-sector research and development and helping cutting-edge U.S. firms find footholds in global markets.
At the same time, they call for measures to mitigate the downside risks, like increased enforcement of existing laws and the creation of commonsense efficiency standards for cryptocurrency mining. Recognizing the potential benefits and risks of a U.S. Central Bank Digital Currency (CBDC), the reports encourage the Federal Reserve to continue its ongoing CBDC research, experimentation, and evaluation and call for the creation of a Treasury-led interagency working group to support the Federal Reserve’s efforts.

Protecting Consumers, Investors, and Businesses

Digital assets pose meaningful risks for consumers, investors, and businesses. Prices of these assets can be highly volatile: the current global market capitalization of cryptocurrencies is approximately one-third of its November 2021 peak. Still, sellers commonly mislead consumers about digital assets’ features and expected returns, and non-compliance with applicable laws and regulations remains widespread. One study found that almost a quarter of digital coin offerings had disclosure or transparency problems—like plagiarized documents or false promises of guaranteed returns. Outright fraud, scams, and theft in digital asset markets are on the rise: according to FBI statistics, reported monetary losses from digital asset scams were nearly 600 percent higher in 2021 than the year before.

Since taking office, the Biden-Harris Administration and independent regulators have worked to protect consumers and ensure fair play in digital assets markets by issuing guidance, increasing enforcement resources, and aggressively pursuing fraudulent actors.
As outlined in the reports released today, the Administration plans to take the following additional steps:

The reports encourage regulators like the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC), consistent with their mandates, to aggressively pursue investigations and enforcement actions against unlawful practices in the digital assets space.

The reports encourage the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission (FTC), as appropriate, to redouble their efforts to monitor consumer complaints and to enforce against unfair, deceptive, or abusive practices.

The reports encourage agencies to issue guidance and rules to address current and emergent risks in the digital asset ecosystem. Regulatory and law enforcement agencies are also urged to collaborate to address acute digital assets risks facing consumers, investors, and businesses. In addition, agencies are encouraged to share data on consumer complaints regarding digital assets—ensuring each agency’s activities are maximally effective.

The Financial Literacy Education Commission (FLEC) will lead public-awareness efforts to help consumers understand the risks involved with digital assets, identify common fraudulent practices, and learn how to report misconduct.

Promoting Access to Safe, Affordable Financial Services

Today, traditional finance leaves too many behind. Roughly 7 million Americans have no bank account. Another 24 million rely on costly nonbank services, like check cashing and money orders, for everyday needs. And for those who do use banks, paying with traditional financial infrastructure can be costly and slow—particularly for cross-border payments. The digital economy should work for all Americans. That means developing financial services that are secure, reliable, affordable, and accessible to all.
To make payments more efficient, the Federal Reserve has planned the 2023 launch of FedNow—an instantaneous, 24/7 interbank clearing system that will further advance nationwide infrastructure for instant payments alongside The Clearing House’s Real Time Payments system. Some digital assets could help facilitate faster payments and make financial services more accessible, but more work is needed to ensure they truly benefit underserved consumers and do not lead to predatory financial practices. To promote safe and affordable financial services for all, the Administration plans to take the following steps:

Agencies will encourage the adoption of instant payment systems, like FedNow, by supporting the development and use of innovative technologies by payment providers to increase access to instant payments, and using instant payment systems for their own transactions where appropriate – for example, in the context of distribution of disaster, emergency or other government-to-consumer payments. The President will also consider agency recommendations to create a federal framework to regulate nonbank payment providers.

Agencies will prioritize efforts to improve the efficiency of cross-border payments by working to align global payments practices, regulations, and supervision protocols, while exploring new multilateral platforms that integrate instant payment systems.

The National Science Foundation (NSF) will back research in technical and socio-technical disciplines and behavioral economics to ensure that digital asset ecosystems are designed to be usable, inclusive, equitable, and accessible by all.

Fostering Financial Stability

Digital assets and the mainstream financial system are becoming increasingly intertwined, creating channels for turmoil to have spillover effects. Stablecoins, in particular, could create disruptive runs if not paired with appropriate regulation.
The potential for instability was illustrated in May 2022 by the crash of the so-called stablecoin TerraUSD and the subsequent wave of insolvencies that erased nearly $600 billion in wealth. In October, the Financial Stability Oversight Council (FSOC) will publish a report discussing digital assets’ financial-stability risks, identifying related regulatory gaps, and making additional recommendations to foster financial stability.

The Biden-Harris Administration has long recognized the need for regulation to address digital assets’ stability risks. For example, in 2021, the President’s Working Group on Financial Markets recommended steps for Congress and regulators to make stablecoins safer. Building on this work, the Administration plans to take the additional following steps:

The Treasury will work with financial institutions to bolster their capacity to identify and mitigate cyber vulnerabilities by sharing information and promoting a wide range of data sets and analytical tools.

The Treasury will work with other agencies to identify, track, and analyze emerging strategic risks that relate to digital asset markets. It will also collaborate on identifying such risks with U.S. allies, including through international organizations like the Organization for Economic Co-operation and Development (OECD) and the Financial Stability Board (FSB).

Advancing Responsible Innovation

U.S. companies lead the world in innovation. Digital asset firms are no exception. As of 2022, the United States is home to roughly half of the world’s 100 most valuable financial technology companies, many of which trade in digital asset services. The U.S. government has long played a critical role in priming responsible private-sector innovation. It sponsors cutting-edge research, helps firms compete globally, assists them with compliance, and works with them to mitigate harmful side-effects of technological advancement.
In keeping with this tradition, the Administration plans to take the following steps to foster responsible digital asset innovation:

The Office of Science and Technology Policy (OSTP) and NSF will develop a Digital Assets Research and Development Agenda to kickstart fundamental research on topics such as next-generation cryptography, transaction programmability, cybersecurity and privacy protections, and ways to mitigate the environmental impacts of digital assets. It will also continue to support research that translates technological breakthroughs into market-ready products. Additionally, NSF will back social-sciences and education research that develops methods of informing, educating, and training diverse groups of stakeholders on safe and responsible digital asset use.

The Treasury and financial regulators are encouraged to, as appropriate, provide innovative U.S. firms developing new financial technologies with regulatory guidance, best-practices sharing, and technical assistance through things like tech sprints and Innovation Hours.

The Department of Energy, the Environmental Protection Agency, and other agencies will consider further tracking digital assets’ environmental impacts; developing performance standards as appropriate; and providing local authorities with the tools, resources, and expertise to mitigate environmental harms. Powering crypto-assets can take a large amount of electricity—which can emit greenhouse gases, strain electricity grids, and harm some local communities with noise and water pollution. Opportunities exist to align the development of digital assets with transitioning to a net-zero emissions economy and improving environmental justice.

The Department of Commerce will examine establishing a standing forum to convene federal agencies, industry, academics, and civil society to exchange knowledge and ideas that could inform federal regulation, standards, coordinating activities, technical assistance, and research support.
Reinforcing Our Global Financial Leadership and Competitiveness
Today, global standard-setting bodies are establishing policies, guidance, and regulatory recommendations for digital assets. The United States is working actively with its partners to set out these policies in line with our goals and values, while also reinforcing the United States’ role in the global financial system. Similarly, the United States has a valuable opportunity to partner with countries still developing their digital assets ecosystems, helping to ensure that countries’ financial, legal, and technological infrastructures respect core values including data privacy, financial stability, and human rights. To reinforce U.S. financial leadership and uphold U.S. values in global digital asset markets, the Administration will take the following steps outlined in the framework for international engagement released by the Treasury Department earlier this summer: U.S. agencies will leverage U.S. positions in international organizations to communicate U.S. values related to digital assets. U.S. agencies will also continue and expand their leadership roles on digital assets work at international organizations and standard-setting bodies—such as the G7, G20, OECD, FSB, Financial Action Task Force (FATF), and the International Organization for Standardization. Agencies will promote standards, regulations, and frameworks that reflect values like data privacy, free and efficient markets, financial stability, consumer protection, robust law enforcement, and environmental sustainability. The State Department, the Department of Justice (DOJ), and other U.S. enforcement agencies will increase collaboration with—and assistance to—partner agencies in foreign countries through global enforcement bodies like the Egmont Group, bilateral information sharing, and capacity building.
The State Department, Treasury, USAID, and other agencies will explore further technical assistance to developing countries building out digital asset infrastructure and services. As appropriate, this assistance may include technical assistance on legal and regulatory frameworks, evidence-gathering and knowledge-sharing on the impacts, risks, and opportunities of digital assets. The Department of Commerce will help cutting-edge U.S. financial technology and digital asset firms find a foothold in global markets for their products.
Fighting Illicit Finance
The United States has been a leader in applying its anti-money laundering and countering the financing of terrorism (AML/CFT) framework in the digital asset ecosystem. It has published relevant guidance, engaged in regular public-private dialogue, used its enforcement tools, and led in setting international AML/CFT standards. While our efforts have strengthened the U.S. financial system, digital assets—some of which are pseudonymous and can be transferred without a financial intermediary—have been exploited by bad actors to launder illicit proceeds, to finance terrorism and the proliferation of weapons of mass destruction, and to conduct a wide array of other crimes. For example, digital assets have facilitated the rise of ransomware cybercriminals; narcotics sales and money laundering for drug trafficking organizations; and the funding of activities of rogue regimes, as was the case in the recent thefts by the Democratic People’s Republic of Korea (DPRK)-affiliated Lazarus Group. It is in the national interest to mitigate these risks through regulation, oversight, law enforcement action, and the use of other United States Government authorities.
To fight the illicit use of digital assets more effectively, the Administration plans to take the following steps: The President will evaluate whether to call upon Congress to amend the Bank Secrecy Act (BSA), anti-tip-off statutes, and laws against unlicensed money transmitting to apply explicitly to digital asset service providers—including digital asset exchanges and nonfungible token (NFT) platforms. He will also consider urging Congress to raise the penalties for unlicensed money transmitting to match the penalties for similar crimes under other money-laundering statutes and to amend relevant federal statutes to let the Department of Justice prosecute digital asset crimes in any jurisdiction where a victim of those crimes is found. The United States will continue to monitor the development of the digital assets sector and its associated illicit financing risks, to identify any gaps in our legal, regulatory, and supervisory regimes.  As part of this effort, Treasury will complete an illicit finance risk assessment on decentralized finance by the end of February 2023 and an assessment on non-fungible tokens by July 2023. Relevant departments and agencies will continue to expose and disrupt illicit actors and address the abuse of digital assets.  Such actions will hold cybercriminals and other malign actors responsible for their illicit activity and identify nodes in the ecosystem that pose national security risks. Treasury will enhance dialogue with the private sector to ensure that firms understand existing obligations and illicit financing risks associated with digital assets, share information, and encourage the use of emerging technologies to comply with obligations.  This will be supported by a Request for Comment published to the Federal Register for input on several items related to AML/CFT. 
Informing the above recommendations, the Treasury, DOJ/FBI, DHS, and NSF drafted risk assessments to provide the Administration with a comprehensive view of digital assets’ illicit-finance risks. The CFPB, an independent agency, also voluntarily provided information to the Administration as to risks arising from digital assets. The risks that agencies highlight include, but are not limited to, money laundering; terrorist financing; hacks that result in losses of funds; and fragilities, common practices, and fast-changing technology that may present vulnerabilities for misuse.
Exploring a U.S. Central Bank Digital Currency (CBDC)
A U.S. CBDC – a digital form of the U.S. dollar – has the potential to offer significant benefits. It could enable a payment system that is more efficient, provides a foundation for further technological innovation, facilitates faster cross-border transactions, and is environmentally sustainable. It could promote financial inclusion and equity by enabling access for a broad set of consumers. In addition, it could foster economic growth and stability, protect against cyber and operational risks, safeguard the privacy of sensitive data, and minimize risks of illicit financial transactions. A potential U.S. CBDC could also help preserve U.S. global financial leadership, and support the effectiveness of sanctions. But a CBDC could also have unintended consequences, including runs to CBDC in times of stress. Recognizing the possibility of a U.S. CBDC, the Administration has developed Policy Objectives for a U.S. CBDC System, which reflect the federal government’s priorities for a potential U.S. CBDC. These objectives flesh out the goals outlined for a CBDC in the E.O. A U.S. CBDC system, if implemented, should protect consumers, promote economic growth, improve payment systems, provide interoperability with other platforms, advance financial inclusion, protect national security, respect human rights, and align with democratic values.
But further research and development on the technology that would support a U.S. CBDC is needed. The Administration encourages the Federal Reserve to continue its ongoing CBDC research, experimentation, and evaluation. To support the Federal Reserve’s efforts and to advance other work on a potential U.S. CBDC, the Treasury will lead an interagency working group to consider the potential implications of a U.S. CBDC, leverage cross-government technical expertise, and share information with partners. The leadership of the Federal Reserve, the National Economic Council, the National Security Council, the Office of Science and Technology Policy, and the Treasury Department will meet regularly to discuss the working group’s progress and share updates on CBDC and other payments innovations.
Emerging Technologies
President Biden and United Kingdom Prime Minister Rishi Sunak on Thursday announced a new economic partnership called the Atlantic Declaration, which Sunak hailed as being of a "kind that has never been agreed before." Biden, speaking alongside Sunak in Washington, D.C., said the plan "outlines how we can enhance our cooperation to accelerate the clean energy transition that must take place and is taking place, lead the development of emerging technologies that are going to shape so much of our future and protect technologies critical to our national security." "When it comes to technology that will shape the future, like semiconductors, quantum computing and artificial intelligence, the U.K. and the U.S. are working together to make sure they are developed safely and responsibly and jointly," Biden continued. "We're going to do more on joint research and development to ensure the future we're building remains fundamentally aligned with our values set in both our countries," he added. "And we're doing more to prevent technologies that are invented and developed in our countries from being used for military or intelligence purposes by countries that do not share our values." Sunak described the Atlantic Declaration as "a new economic partnership for a new age of a kind that has never been agreed before" and "a test case for the kind of re-imagined alliances President Biden has spoken so eloquently about." "That means new investment. This week alone, £14 billion of new American investment has been committed into the U.K., creating thousands of jobs," he said. "I know people have wondered what kind of partner Britain would be after we left the EU. I'd say judge us by our actions," Sunak also told reporters. "We're committed to our values as ever, as reliable an ally as ever, as attractive an investment destination as ever. But we're changing, too.
We're strengthening our relationships not just with old friends like America and in Europe, but with new friends in the Indo-Pacific, too." Prior to the press conference Thursday, Biden and Sunak held their fourth face-to-face meeting since the latter became prime minister in October. It was the first time Sunak had visited Washington, D.C., since assuming the role. "Today we also discussed our unwavering support for the people of Ukraine who are defending themselves against the most brutal aggression we've seen in a long time at the hands of Russia and Putin," Biden said. "The U.K. and the United States, together with more than 50 partners, have committed historic levels of security assistance to Ukraine." Biden also reflected that the two countries have worked through some of the toughest moments in modern history side-by-side, recalling the meetings that Prime Minister Winston Churchill and Franklin D. Roosevelt held in the White House. "You know Prime Minister Churchill and Roosevelt met here a little over 70 years ago and they asserted that the strength of the partnership between Great Britain and the United States was the strength of the free world," Biden told Sunak. "I still think there’s truth to that assertion." The Associated Press contributed to this report.
Emerging Technologies
U.S. Secretary of State Antony Blinken on Thursday urged democracies around the world to work together to ensure technology is used to promote democratic values and fight efforts by authoritarian regimes to use it to repress, control and divide citizens. Blinken made the comments as he led a discussion on "Advancing Democracy and Internet Freedom in a Digital Age." The session was part of U.S. President Joe Biden's Summit for Democracy, a largely virtual gathering of leaders taking place this week from the State Department in Washington. Blinken said the world is at the point where technology is "reorganizing the life of the world" and noted many countries are using these technologies to advance democratic principles and make life better for their citizens. He pointed to the Maldives, where court hearings are being held online; Malaysia, where the internet was used to register 3 million new voters last year; and Estonia, where government services are delivered faster and more simply. At the same time, Blinken said the internet is being used more and more to spread disinformation and foment dissent. He said the U.S. and its democratic partners must establish rules and norms to promote an open, free and safe internet. The secretary of state identified four priorities to help meet this goal, including using technology to improve people's lives in tangible ways, establishing rights-respecting rules for emerging technologies, investing in innovation, and countering the effects of authoritarian governments' use of digital tools to abuse citizens and weaken democracies. Since the summit began earlier in the week, the White House has emphasized the desire of the U.S. to make "technology work for and not against democracy." On Wednesday, the prime ministers of eight European countries signed an open letter to the chief executives of major social media companies calling for them to be more aggressive in blocking the spread of false information on their platforms.
The leaders of Ukraine, Moldova, Poland, the Czech Republic, Estonia, Latvia, Lithuania and Slovakia signed the letter. The statement told the companies their tech platforms "have become virtual battlegrounds, and hostile foreign powers are using them to spread false narratives that contradict reporting from fact-based news outlets." It went on to say advertisements and artificial amplification on Meta's platforms, which include Facebook, are often used to call for social unrest, bring violence to the streets and destabilize governments. About 120 global leaders are participating in the summit. It is seen as Biden's attempt to bolster the standing of democracies as autocratic governments advance their own agendas, such as Russia's 13-month invasion of Ukraine, and China's alliance with Moscow. In a statement as the summit opened Tuesday, the White House said, "President Biden has called the struggle to bolster democratic governance at home and abroad the defining challenge of our time." The statement went on to say, "Democracy — transparent and accountable government of, for, and by the people — remains the best way to realize lasting peace, prosperity, and human dignity."
Emerging Technologies
A phony AI-generated attack ad from the Republican National Committee (RNC) offered Congress a glimpse into how the tech could be used in next year’s election cycle. Now, Democrats are readying their response. On Tuesday, Rep. Yvette Clarke (D-NY) introduced a new bill to require disclosures of AI-generated content in political ads. Clarke told The Washington Post Tuesday that her bill was a direct response to the RNC ad that launched last week. The video came out soon after President Joe Biden announced his 2024 reelection campaign, depicting a dystopian future where Biden reinstates the draft to aid Ukraine’s war effort and causes China to invade Taiwan if reelected. “The upcoming 2024 election cycle will be the first time in U.S. history where AI generated content will be used in political ads by campaigns, parties, and Super PACs. Unfortunately, our current laws have not kept pace with the rapid development of artificial intelligence technologies,” Clarke said in a statement on Tuesday. The debate over whether to regulate AI and machine learning technology plagued the prior presidential election in 2020. Leading up to the election, a bogus video of then-House Speaker Nancy Pelosi slurring her words in a drunken manner went viral across social media platforms and spurred a handful of congressional hearings. Meta, TikTok, and other major social media companies later banned deepfakes, but lawmakers failed to approve any meaningful regulation as a result of their efforts. Clarke’s REAL Political Advertisements Act would apply to still-image and video ads, requiring a message at either the beginning or the end disclosing the use of AI-generated content. With a new election cycle on the horizon, AI-generated and other doctored video content has only grown more rampant online.
Over the last year, the increased accessibility of and corporate investment in AI tech has spooked lawmakers, prompting a deluge of new bills and regulatory proposals. Last month, Senate Majority Leader Chuck Schumer (D-NY) circulated a broad framework among experts illustrating Democrats’ approach to AI regulation. The framework proposed new rules requiring AI developers to disclose their data sources, who trained the algorithms, the intended audience, and an explanation of how the algorithm arrives at its responses, according to Axios. In a March interview with The Verge, Sen. Mark Warner (D-VA) lamented how slow Congress has been to regulate emerging technologies. Speaking about social media’s potential for harm, he said, “I wish we would’ve put some rules in place ahead of the game.” He continued, “We have a chance to do that with AI.” Without new laws, federal agencies have started to fill the gaps left by Congress. Last month, the Commerce Department asked the public how the federal government should regulate AI algorithms. The Federal Trade Commission has started warning companies against using biased AI, arguing that it already has the authority to go after them over possible discriminatory algorithms.
Emerging Technologies
Artificial intelligence-generated text can appear more human on social media than text written by actual humans, a study found. Chatbots, such as OpenAI’s wildly popular ChatGPT, are able to convincingly mimic human conversation based on prompts given by users. The platform exploded in use last year and served as a watershed moment for artificial intelligence, handing the public easy access to converse with a bot that can help with school or work assignments and even come up with dinner recipes. Researchers behind a study published in the scientific journal Science Advances, which is supported by the American Association for the Advancement of Science, were intrigued by OpenAI’s text generator GPT-3 back in 2020 and worked to uncover whether humans "can distinguish disinformation from accurate information, structured in the form of tweets," and determine whether the tweet was written by a human or AI. One of the study’s authors, Federico Germani of the Institute of Biomedical Ethics and History of Medicine at the University of Zurich, said the "most surprising" finding was that humans were more likely to label AI-generated tweets as human-generated than tweets actually crafted by humans, according to PsyPost. "The most surprising discovery was that participants often perceived information produced by AI as more likely to come from a human, more often than information produced by an actual person. This suggests that AI can convince you of being a real person more than a real person can convince you of being a real person, which is a fascinating side finding of our study," Germani said. With the rapid increase of chatbot use, tech experts and Silicon Valley leaders have sounded the alarm on how artificial intelligence can spiral out of control and perhaps even lead to the end of civilization. One of the top concerns echoed by experts is how AI could cause disinformation to spread across the internet and convince humans of something that is not true.
Researchers for the study, titled "AI model GPT-3 (dis)informs us better than humans," worked to investigate "how AI influences the information landscape and how people perceive and interact with information and misinformation," Germani told PsyPost. The researchers identified 11 topics often prone to disinformation, such as 5G technology and the COVID-19 pandemic, and created both false and true tweets generated by GPT-3, as well as false and true tweets written by humans. They then gathered 697 participants from countries such as the U.S., UK, Ireland, and Canada to take part in a survey. The participants were presented with the tweets and asked to determine if they contained accurate or inaccurate information, and if they were AI-generated or organically crafted by a human. "Our study emphasizes the challenge of differentiating between information generated by AI and that created by humans. It highlights the importance of critically evaluating the information we receive and placing trust in reliable sources. Additionally, I would encourage individuals to familiarize themselves with these emerging technologies to grasp their potential, both positive and negative," Germani said of the study. Researchers found participants were better at identifying disinformation crafted by a fellow human than disinformation written by GPT-3. "One noteworthy finding was that disinformation generated by AI was more convincing than that produced by humans," Germani said. The participants were also more likely to recognize tweets containing accurate information that were AI-generated than accurate tweets written by humans. The study noted that in addition to its "most surprising" finding that humans often can’t differentiate between AI-generated tweets and human-created ones, their confidence in making a determination fell while taking the survey.
"Our results indicate that not only can humans not differentiate between synthetic text and organic text but also their confidence in their ability to do so also significantly decreases after attempting to recognize their different origins," the study states. The researchers said this is likely due to how convincingly GPT-3 can mimic humans, or respondents may have underestimated the intelligence of the AI system to mimic humans. "We propose that, when individuals are faced with a large amount of information, they may feel overwhelmed and give up on trying to evaluate it critically. As a result, they may be less likely to attempt to distinguish between synthetic and organic tweets, leading to a decrease in their confidence in identifying synthetic tweets," the researchers wrote in the study. The researchers noted that the system sometimes refused to generate disinformation, but also sometimes generated false information when told to create a tweet containing accurate information. "While it raises concerns about the effectiveness of AI in generating persuasive disinformation, we have yet to fully understand the real-world implications," Germani told PsyPost. "Addressing this requires conducting larger-scale studies on social media platforms to observe how people interact with AI-generated information and how these interactions influence behavior and adherence to recommendations for individual and public health."
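The study's two-axis judgment task (accurate vs. inaccurate, AI vs. human) lends itself to a simple quantitative readout. As a minimal illustrative sketch (not the authors' actual analysis code; the field names and toy data below are invented for the example), per-origin recognition accuracy and the confidence decline described above could be computed like this:

```python
# Illustrative sketch: each survey record notes a tweet's true origin,
# the participant's origin guess, and a self-reported confidence (0-1).

def recognition_accuracy(responses, origin):
    """Fraction of tweets of a given true origin correctly attributed."""
    relevant = [r for r in responses if r["true_origin"] == origin]
    if not relevant:
        return 0.0
    correct = sum(r["guessed_origin"] == r["true_origin"] for r in relevant)
    return correct / len(relevant)

def confidence_trend(responses):
    """Mean confidence in the second half of the session minus the first half."""
    half = len(responses) // 2
    first = sum(r["confidence"] for r in responses[:half]) / half
    second = sum(r["confidence"] for r in responses[half:]) / (len(responses) - half)
    return second - first

# Toy data mimicking the reported pattern: AI tweets are often misread
# as human, and confidence drops as the session progresses.
responses = [
    {"true_origin": "ai",    "guessed_origin": "human", "confidence": 0.9},
    {"true_origin": "ai",    "guessed_origin": "human", "confidence": 0.8},
    {"true_origin": "human", "guessed_origin": "human", "confidence": 0.7},
    {"true_origin": "ai",    "guessed_origin": "ai",    "confidence": 0.6},
    {"true_origin": "human", "guessed_origin": "human", "confidence": 0.5},
    {"true_origin": "human", "guessed_origin": "ai",    "confidence": 0.4},
]

print(recognition_accuracy(responses, "ai"))     # share of AI tweets spotted
print(recognition_accuracy(responses, "human"))  # share of human tweets spotted
print(confidence_trend(responses))               # negative value = confidence fell
```

On this toy data, AI tweets are recognized less often than human ones and the confidence trend is negative, matching the pattern the study reports.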
Emerging Technologies
The idea of digital twins — digital representations of physical systems, products or processes that serve as indistinguishable counterparts for purposes such as simulations, testing, monitoring and maintenance — has been around for some time. But indications are the concept's time has come for wider adoption to support business applications. "With the rapid adoption of digital twins, we're seeing two categories of practical applications arise: use-cases by industry that solve a very specific challenge, and industry-agnostic use-cases which aid in broader strategy and decision making," said Frank Diana, principal futurist at Tata Consultancy Services. Like artificial intelligence a few years ago, digital twin technology has tipped from highly specific applications into becoming a widespread management best practice, Diana said. "With the deeper and more contextualized insights digital twins provide, we gain a better understanding of our products, processes, and systems and more confidence in our models," said Matt Barrington, emerging technologies leader at consulting firm EY Americas. "For example, this enables more organizations to have the confidence to experiment with access-based service models for complex products or new data-based services" such as twin-based insurance policies for smart buildings, Barrington said.
"Moving forward in a more dynamic, ecosystem-oriented marketplace, we expect all companies to enable and become dependent [on] digital twins to intelligently operate most aspects of their business," he said.
Coming to life with real-time data
Companies are using virtual product development twins to accelerate design and development cycles more effectively, Barrington said. "Digital twins take the models we already have for today's products, processes and systems and bring them to life in real-time with real-world data," he said. One practical application of digital twins within TCS has been in guiding the firm's return to office strategy during the late stages of the pandemic, Diana said. "To re-open effectively, we needed to know answers to questions like how many [workers] might get infected? Who should we test, and when? What should the capacity of our quarantining facility be?" he said. To answer these questions, TCS created a digital twin environment with a novel machine-processable "model of locality," with the principal objective of predicting and controlling the spread of Covid. "The digital twin serves as a quantitative aid to explain the current state of the environment and assist in decision-making, enabling a safe and effective return to office for our associates," Diana said. Digital twins are also replacing historical data-driven models used for business strategy, Diana said. "These legacy strategic platforms lack the ability to account for deviations and disruptions, which have become increasingly common in the post-Covid world," he said. Along with AI, organizations are using digital twins to help envision, experiment with, and execute business decisions through simulators that represent key business entities, interrelationships, and external forces such as competitors or natural disasters, Diana said. In life sciences, digital twins are being used to create twins of human organs, enabling new approaches to medical research and care, Diana said.
Pharmaceutical and cosmetics companies can use twins to test how to deliver new drugs or products on human skin in cyberspace instead of relying on animal testing, he said. Researchers can use digital hearts to find new surgical techniques or treatments for heart disease.
Smarter cities
Digital twins are also being used for smart city initiatives, Diana said. For example, Los Angeles is employing digital twin technology that will model transportation movement and activity, such as ride sharing and autonomous drones, to better plan its mobility infrastructure. Another possible application is in environmental, social and governance initiatives. The technology "leverages huge data sets of historical weather, travel, and physical infrastructure data to create a digital twin of any physical location," said Dan Versace, research analyst, ESG business services, at research firm International Data Corp. By using artificial intelligence and machine learning, digital twinning can perform in-depth analysis to provide users with elaborate, scenario-based assessments of environmental conditions, Versace said. "This technology, when appropriately applied, can produce insights into the physical risks that come hand-in-hand with the increasing instances of climate-related natural disasters," Versace said. "In the coming year this technology stands to only grow in capabilities, with some organizations claiming that they will be able to account for not only direct risks faced by organizations due to climate change, but also the impact these disasters will have to their clientele and value chain." This will allow companies to develop resiliency planning and mitigation strategies long before they are needed, without having to be exposed to any material risk, Versace said. "We are going to see digital twins adopted rapidly in 2023, in many different industries," Diana said.
"The volatility and uncertainty that's on the horizon for this year will serve as a catalyst to drive companies into a mode of rehearsing uncertain futures. Digital twins will be a critical tool for that rehearsal." Digital twins are gaining momentum in adoption and sophistication as more organizations see positive outcomes from the early adopters, Barrington said. As digital twins become mainstream, EY predicts two major trends. One is hyper-personalization, using twins to better tailor products, services, and experiences with the goal of improving customer loyalty and value. The other is dynamic supply chains. "As more twins of critical assets and processes come online, leaders will leverage digital twins to not only model and simulate their supply chain, but to optimize and automate a dynamic and intelligent supply chain model — all orchestrated by digital twins," Barrington said. "Many leaders learned from the recent pandemic that static linear supply chains will not suffice moving forward and digital twins are one of the best ways to remove risk."
Emerging Technologies
Today’s most powerful quantum computers are elaborate, bespoke machines that can take up entire rooms. IonQ says its new devices will be mounted in standard server racks and can slot seamlessly into any data center. Last week at Quantum World Congress in Tysons, Va. (just outside Washington, D.C.), the company unveiled designs for two new “enterprise-grade” quantum computers based on its trapped-ion technology. Unlike the Maryland-based firm’s previous systems, both have been designed to fit in standard server cabinets and integrate with conventional computing infrastructure. The Forte Enterprise will be available from next year and feature 35 “algorithmic qubits” (AQ)—a metric invented by the company to denote the number of qubits in a processor that can be put to useful work, rather than the sheer number of physical qubits. That’s a modest performance boost from the 29 AQ achieved by the company’s existing Forte computer, but the new device will fit into eight server cabinets, which represents a 40 percent reduction in size. And the company says that in 2025 it will launch the 64-AQ Tempo, which will fit into just three cabinets and be the company’s first system able to outperform classical computers on certain tasks. The shift to an industry-standard form factor is all part of the company’s push to make quantum computers more affordable, says CEO Peter Chapman, both by ensuring they are compatible with existing computing infrastructure and also making them easier to manufacture. “We’re trying to not only build high-performance, cutting-edge machines, but also following Moore’s law in reducing the cost of every generation,” he says.
Given that the company’s previous computer was roughly 5 feet (1.5 meters) across, shifting to 19-inch-wide (48.3-centimeter-wide) server cabinets required a significant redesign, says Chapman. In particular, the optical components at the heart of its device had to shrink considerably. Unlike quantum computers made by Google and IBM, which use superconducting circuits to create qubits, IonQ uses a special chip to isolate individual ytterbium ions and hold them in place using electromagnetic fields. The qubits are programmed by firing lasers at them, which requires a lot of optical hardware. In previous machines, says Chapman, this involved bulk optical components sitting on a platform roughly the size of an air hockey table and weighing about a tonne. This has now been replaced by a mixture of micro-optics and silicon photonics, with the goal of eventually shifting all subsystems to silicon photonics, he adds. IonQ’s new machines won’t be the first rack-mounted quantum computers. In 2021, researchers at the University of Innsbruck in Austria and a spin-off company called Alpine Quantum Technologies unveiled a 29-qubit trapped-ion computer that could fit into a pair of server cabinets. Thomas Monz, CEO of AQT and a researcher at the university, says that there’s not enough detail on the new computers to meaningfully comment on them. But Monz says the move to a rack-mounted form factor suggests a shift in IonQ’s business model, from selling remote access to its devices over the cloud to hardware sales. That can make scaling revenue more challenging because rather than serving many users on the same hardware, you have to build new devices for each fresh customer. At present, IonQ hosts devices at its own facilities and allows customers to access them through Amazon, Google, and Microsoft cloud services.
Part of the problem though, says Chapman, is that providers don’t currently have service-level agreements for quantum hardware—contracts that guarantee customers certain levels of availability and performance. That’s largely because this would require a large number of quantum computers to balance loads, and at present the company has only one device from each generation running at a time. “So sometimes, if there’s a large number of jobs in the queue, you might have to wait hours, unfortunately, to have your job run,” says Chapman. “To put things into production, you definitely have to solve that problem.” Chapman says the company eventually hopes to sell a significant number of its machines to cloud partners to provide more reliable quantum cloud services. But his team also thinks there are plenty of companies for whom a dedicated, on-site quantum computer makes sense. Any application that runs in real time, such as financial data analysis, is a likely candidate, says Chapman. Start-of-the-day planning for a logistics company’s 150,000 delivery routes is another. There are also some calculations in quantum chemistry that can run for months at a time, Chapman adds, something that’s not easy to cater for on the cloud. The Forte Enterprise will not be doing these kinds of jobs though, admits Chapman. While it will be capable of supporting roughly 1,000 entangling gates, the machine is still essentially a research device. Its purpose, says Chapman, is to give customers a platform to start developing applications for the Tempo, which the company does expect to outperform classical computers on certain problems.
IonQ has already signed a contract to provide both the Forte Enterprise and Tempo to Swiss quantum-technology hub QuantumBasel, and last week also announced a US $25.5 million deal with the U.S. Air Force Research Lab to provide two systems for its facility in Rome, N.Y. In January, the company announced plans for a manufacturing plant in Seattle, which is slated to open in the first half of 2024. Edd Gent is a freelance science and technology writer based in Bengaluru, India. His writing focuses on emerging technologies across computing, engineering, energy and bioscience.
Emerging Technologies
India, U.S. Discuss Co-Producing Jet Engines, Long-Range Artillery, Infantry Vehicles
Ahead of Prime Minister Narendra Modi's official state visit to the USA, officials of both countries have intensified their discussions on co-producing jet engines, long-range artillery, and infantry vehicles under the Initiative on Critical and Emerging Technologies (iCET). India's National Security Advisor Ajit Doval and U.S. NSA Jake Sullivan launched iCET earlier this year. It was done at the direction of U.S. President Joe Biden and Indian Prime Minister Narendra Modi, who, after their Tokyo meeting in May 2022, announced plans to elevate and expand the strategic technology partnership and defence industrial cooperation between the governments, businesses, and academic institutions of the two countries. At a meeting with Indian Defence Secretary Giridhar Aramane at the Pentagon, U.S. Deputy Defense Secretary Kathleen Hicks termed the proposals to co-produce jet engines, long-range artillery, and infantry vehicles under iCET 'unprecedented opportunities' to strengthen ties between the defence sectors of the two nations. The two officials discussed priorities for strengthening the U.S.-India defence partnership, including increasing defence industrial cooperation and operational collaboration, Pentagon spokesperson Eric Pahon said. Hicks underscored the importance of the U.S.-India relationship as critical for promoting peace and prosperity in the Indo-Pacific and beyond. Pahon also reiterated the importance of deepening military-to-military relations between the two nations as the technology partnership grows.
Reiterating the US's support for India's defence modernisation objectives, including increasing investments in Indian domestic manufacturing, Hicks and Aramane welcomed the upcoming launch of INDUS-X ahead of PM Modi's visit to the United States in June. They also highlighted its value as an opportunity to promote partnerships between the two countries' defence innovation ecosystems. A day earlier, the two officials co-chaired the 17th U.S.-India Defense Policy Group meeting. The dialogue advanced an ambitious agenda—including industrial cooperation, information sharing, maritime security, and technological collaboration—that reflects the robust and comprehensive defence ties between the United States and India, said U.S. Department of Defense spokesperson Lt. Col. David Herndon. "The officials exchanged views about shared priorities in the Indian Ocean Region and aligning the U.S.-India partnership with other like-minded partnerships to sustain a free and open Indo-Pacific. "The leaders reinforced their commitment to strengthening cooperation and interoperability between the U.S. and Indian militaries to work together in all domains and across all services," the spokesperson added.
Emerging Technologies
Amazon is working on the development of a new conversational artificial intelligence (AI) tool known as ‘Amazon Olympus’ in a bid to topple ChatGPT and Bard, according to reports. Sources at the company told The Information that the tech giant is working on the large language model (LLM) and has allocated both resources and staff from its Alexa AI and science teams to spearhead its creation. Development of the model is being led by Rohit Prasad, former head of Alexa turned lead scientist for artificial general intelligence (AGI), according to Reuters. Prasad moved into the role to focus specifically on generative AI development as the company seeks to contend with industry competitors such as Microsoft-backed OpenAI and Google. According to sources, the Amazon Olympus model will have two trillion parameters. If correct, this would make it one of the largest and most powerful models currently in production. By contrast, OpenAI’s GPT-4, the current market-leading model, is reported to have around one trillion parameters. Olympus could be rolled out as early as December, and there is a possibility the model could be used to support retail, Alexa, and AWS operations.
Is Amazon Olympus the successor to Titan?
Amazon Bedrock offers customers a variety of foundation models, including Amazon's own Titan family as well as models from AI21 Labs and Anthropic, which the tech giant recently backed with a multi-billion-dollar investment. Amazon Olympus could be the natural evolution of Amazon’s LLM ambitions. Earlier this year, the company revealed it planned to increase investment in the development of LLMs and generative AI tools. ITPro has approached Amazon for comment.
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies. He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
Emerging Technologies
Hypersonic missiles–those able to fly up to five times the speed of sound–are crucial to U.S. national security in the eyes of American defense leaders. As great power competitors China and Russia develop and deploy the technology, where does the United States stand in the race to develop, test, manufacture and scale hypersonic missiles? How can partnerships between government, industry, academia and allies be utilized? When is the technology expected to be fielded by American forces, and what role will it play in future military strategy? And in a strained labor market, how are workforce demands and shortages affecting the government, military, and private sector alike? On October 18, The Hill convenes members of Congress, defense leaders and hypersonic technology experts to discuss the future of readiness and homeland security.
Tuesday, October 18, 2022, 1:00 PM ET / 10:00 AM PT
Speakers:
Rep. Doug Lamborn (R-CO), Ranking Member, Strategic Forces Subcommittee
Rep. Donald Norcross (D-NJ), Chair, Tactical Air & Land Forces Subcommittee
Heidi Shyu, Under Secretary of Defense for Research and Engineering
Dr. Kelly Stephani, Associate Professor, Department of Mechanical Science and Engineering, Center for Hypersonics & Entry Systems Studies, University of Illinois Urbana-Champaign
Dr. Mark Lewis, Director, NDIA Emerging Technologies Institute
Sponsor Perspective: John W. Otto, Senior Director, Advanced Hypersonic Weapons, Raytheon Technologies
SPONSOR: Raytheon Technologies Corporation is an aerospace and defense company that provides advanced systems and services for commercial, military and government customers worldwide.
With four industry-leading businesses ― Collins Aerospace, Pratt & Whitney, Raytheon Intelligence & Space and Raytheon Missiles & Defense ― the company delivers solutions that push the boundaries in avionics, cybersecurity, directed energy, electric propulsion, hypersonics, and quantum physics. The company was formed in 2020 through the combination of Raytheon Company and the United Technologies Corporation aerospace businesses.
Emerging Technologies
Our brainwave activity can be monitored and modified by neurotechnology. Devices with electrodes placed on the head can record neural signals from the brain and apply low electric current to modulate them. These “wearables” are finding traction not only with consumers who want to track and improve their mental wellness but with companies, governments and militaries for all sorts of other uses. Meanwhile, firms such as Elon Musk’s Neuralink are working on next-generation brain implants that could do the same thing, only with far greater power. While the initial use may be to help people with paralysis to type, the grand idea is for augmentation to be available to all. Nita Farahany, a professor of law and philosophy at Duke University who studies the ethical, legal and social ramifications of emerging technologies, is sounding the alarm.
Technology that can read our minds sounds terrifying. But it is also way ahead of where things are. Aren’t you jumping the gun?
I don’t think so and, furthermore, we dismiss it at our peril. While the technology can’t literally read our complex thoughts, there are at least some parts of our brain activity that can be decoded. There have been big improvements in the electrodes and in training algorithms to find associations using large datasets and AI. More can be done than people think. There are a lot of real-world applications and major tech companies like Meta are investing in it.
What’s the danger?
There are profound risks from both the commodification of the data but also what it means to have your brain activity monitored by others and what that does to freedom of thought. The technology is at an inflection point: use is ascending steeply but it is not yet mainstream. We have a moment, before the terms of service are set by others, where we can have a voice in how it is used and deployed in society.
What uses most worry you?
Applications around workplace brain surveillance and use of the technology by authoritarian governments including as an interrogation tool I find particularly provocative and chilling. We do see the technology starting to be used in some ways that are more like involuntary neural surveillance.
What should we set in place to protect ourselves?
To start we need a new human right to “cognitive liberty”, which would come with an update to other existing human rights to privacy, freedom of thought and self-determination. All told it would protect our freedom of thought and rumination, mental privacy, and self-determination over our brains and mental experiences. It would change the default rules so we have rights around the commodification of our brain data. It would give people control over their own mental experiences and protect them against misuse of their brain activity by corporate and government actors, weighed against societal interests.
Plenty of technology to digitally monitor workers is already in use. What extra could employers get from monitoring brain activity? And wouldn’t it be a bit cumbersome to wear at work?
The extra is what is hidden inside your brain that wasn’t expressed in your email and wasn’t on your computer screen. It includes things like your level of fatigue, engagement, focus, boredom, frustration and stress – metrics that purportedly can be measured with high accuracy. The form was weird headbands a decade ago but now we’re talking about the same devices that you’re using already in the workplace – headphones and earbuds – simply also having brain sensors. I suspect we will quickly come to forget that the same earbuds that we are using to make a conference call can also track brainwave activity, just in the same way we forget our phones are tracking everything about us.
One touted use is criminal justice. The US firm Brainwave Science sells so-called “brain fingerprinting” technology it says will “transform your interrogations”.
The company claims to have numerous international government security agencies as customers. We have a societal interest in catching criminals…
The US criminal justice system, as far as we are aware, does not use these techniques and, if it did, criminal defendants would need to submit to it voluntarily. But whether other parts of the US government are using it is unclear. The use is troubling, and I don’t think it is justified. There is scepticism about the scientific validity and we have also almost always relied on the need for investigators to gather and develop a case against an individual without going to the criminal themselves because of the abuse that can happen.
How concerned are you about the military applications of neurotechnology – so called “brain control” weaponry?
Deeply concerned and particularly so when it comes to authoritarian regimes like China, who could prevent or quell demonstrations in ways that are powerful and sinister. The Biden administration has sanctioned Chinese research institutes and firms for their purported investment in brain-controlled weapons. Potential possibilities with this type of weaponry include mental manipulation of individuals to shape their thinking, brain control interfaces where weapons can be controlled with the power of thought and microwave weapons that might be used to mentally disorient large numbers of people. International action against the development and use of this type of weaponry would help.
Would you ever consider a brain implant?
Based on today’s technology, no way. None of it is safe or effective for healthy individuals. But, in time, maybe. Needed would be an acceptable safety profile, applications that would make me interested and really strong rights to give me confidence my data and access to my brain wouldn’t be misused.
What applications might be compelling?
A big reason why much-hyped Google Glass was ultimately rejected was because it didn’t have any…
The idea of a more seamless brain-to-technology interaction could be compelling for some people. In the same way you use a mouse and a keyboard, you could just use your brain. You could turn on the lights or adjust the thermostat just by thinking about it. Probably more compelling would be the ability to communicate brain to brain with another person. If I could transfer you a full resolution thought or share with you a memory in my brain – the sight, the smell, my feelings – it would be transformational for how we communicate and empathetically relate to one another.
Is there a risk of brain implants being hacked?
It’s a terrifying possibility. And I worry not just about somebody monitoring your thoughts, but also manipulating them or trying to instil new thoughts and experiences. If “brainjacking” did occur, it could kill the tech: people might decide that the risks are too profound to use it. Or maybe it won’t bother us so much: we take so little care in protecting our online privacy, even when we claim to want it.
The Battle for Your Brain by Nita Farahany is published by St Martin’s Press on 14 March (£25.99).
Emerging Technologies
Kamala Harris Will Lay Out AI Strategy In London Speech
(Bloomberg) -- Vice President Kamala Harris, in a speech in London, will lay out the burgeoning risks related to artificial intelligence, calling for international cooperation and stricter standards to protect consumers from the technology. “As history has shown in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers; the security of our communities; and the stability of our democracies,” according to prepared remarks Harris is scheduled to give at the US Embassy in the UK capital on Wednesday. The speech is part of a broad effort by the White House to put restrictions on new artificial intelligence tools, which are rapidly coming to market, often with little to no oversight from regulators. Harris is in London with other foreign leaders to take part in the AI Safety Summit convened by UK Prime Minister Rishi Sunak at Bletchley Park. Harris will unveil a series of steps the White House is taking to address risks. Among those are a new US AI Safety Institute inside the Commerce Department, which will create guidelines and tools for mitigating the dangers posed by AI. And the Office of Management and Budget is planning to release draft policy guidance on how AI should be used by the US government. The vice president will also announce that the US government is working with major foundations, including the David and Lucile Packard Foundation, Ford Foundation, and Heising-Simons Foundation, which have committed $200 million to finance AI security efforts. In addition, Harris will point out that the US has joined other countries to help establish norms for military use of AI.
The speech comes after President Joe Biden on Monday signed an executive order that empowers the federal government to enact security standards and privacy protections on new AI tools. The order will have broad effects on companies including Microsoft Corp., Amazon.com Inc. and Alphabet Inc.’s Google. The companies will have to submit test results on their new models to the government before releasing them to the public. The directive also calls for AI-generated content to be labeled. The use of AI tools has soared in recent months with the release of platforms, including OpenAI’s ChatGPT app, that are readily accessible to the average consumer. The increased use of the technology has also spurred concerns that the platforms could be used to spread misinformation or that the underlying algorithms are perpetuating bias. Several governing bodies, including the United Nations and the Group of Seven, are actively seeking to establish rules-of-the-road for artificial intelligence. The European Union is arguably the farthest along, with its AI Act expected to become law by the end of the year. The Biden administration’s swift response to rein in AI is in contrast with how Washington has generally approached emerging technologies. Efforts to oversee social media platforms have languished in Washington for years, leaving many disputes to be settled in court, including a landmark federal antitrust case the Justice Department is pursuing against Google. Still, the White House order relies on federal agencies — most of which lack a lot of AI expertise — taking internal steps to bolster oversight. Congress would have to act for more comprehensive oversight. Senate Majority Leader Chuck Schumer has begun discussions about AI, but it’s unclear if legislation could pass a bitterly divided Congress. --With assistance from Akayla Gardner. ©2023 Bloomberg L.P.
Emerging Technologies
Introduction: In an increasingly digital world, where trust and security are paramount, blockchain technology has emerged as a revolutionary force. Beyond its association with cryptocurrencies, blockchain has the potential to transform various industries, including identity management. In this article, we will explore how blockchain technology can revolutionize the concept of “About Me,” enabling individuals to take control of their personal data, establish trust, and reshape the way we present ourselves in the digital realm.
Decentralized Identity: Taking Ownership
Blockchain technology allows for the creation of decentralized identity systems, giving individuals control over their personal information. Instead of relying on centralized authorities or platforms, users can securely manage their own data and decide who can access it. This empowers individuals to present an authentic and comprehensive “About Me” profile, free from manipulation or data breaches.
Immutable Records: Establishing Trust
Blockchain’s immutability ensures that once information is recorded, it cannot be altered or tampered with. This feature enhances trust and credibility in the “About Me” context. By leveraging blockchain, individuals can provide transparent and verifiable records of their qualifications, achievements, and experiences. Employers, collaborators, or even potential partners can trust the integrity of the information presented.
Smart Contracts: Verified Credentials
Smart contracts, programmable agreements on the blockchain, enable the issuance and verification of credentials. Through self-sovereign identity platforms, individuals can obtain digitally signed credentials from trusted sources, such as educational institutions or professional organizations.
Data Privacy: Protecting Personal Information
With traditional platforms, sharing personal information often comes with privacy concerns.
Blockchain’s architecture allows for selective disclosure, where individuals can share specific information without revealing their entire profile. By leveraging cryptographic techniques, individuals can maintain privacy while still establishing trust and credibility in their “About Me” presentation.
Enhanced Networking: Trust-Based Connections
Blockchain technology also fosters new networking opportunities. Decentralized social platforms built on blockchain can facilitate trust-based connections between individuals with shared interests or goals. By leveraging blockchain-based identity systems, users can connect and collaborate with like-minded individuals while maintaining control over their personal data.
Future Possibilities: Expanding Horizons
As blockchain technology continues to evolve, its impact on the concept of “About Me” will likely expand further. Integration with emerging technologies such as artificial intelligence and the Internet of Things (IoT) can unlock new possibilities for personalized, context-aware “About Me” profiles. Imagine dynamic profiles that automatically update with real-time achievements or personalized recommendations based on blockchain-verified data.
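The selective disclosure described above is often built on hash commitments: the issuer publishes a commitment per credential field, and the holder reveals only the fields they choose. The sketch below illustrates the idea only; the field names and salts are hypothetical, and a real system would use random salts plus digital signatures rather than this toy scheme:

```python
import hashlib

def commit(field: str, value: str, salt: str) -> str:
    # Hash commitment binding a single credential field to a salt
    return hashlib.sha256(f"{field}|{value}|{salt}".encode()).hexdigest()

# Issuer publishes one commitment per field of the credential
credential = {"name": "Alice", "degree": "BSc Physics", "year": "2020"}
salts = {k: f"salt-{k}" for k in credential}  # would be random in practice
commitments = {k: commit(k, v, salts[k]) for k, v in credential.items()}

# Holder discloses only the "degree" field, plus its salt
field, value, salt = "degree", credential["degree"], salts["degree"]

# Verifier recomputes the commitment; a match proves the disclosed value
# is the committed one, without revealing "name" or "year"
assert commit(field, value, salt) == commitments[field]
```

Because the verifier only ever sees the disclosed field and its salt, the undisclosed fields stay private even though all the commitments are public.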
Emerging Technologies
In the race to land the first humans on Mars, NASA is betting big on nuclear rocket engines to get its astronauts to the red planet. Earlier this year, the agency announced a partnership with the Defense Advanced Research Projects Agency, or DARPA, to develop a rocket that uses nuclear propulsion to carry astronaut crews to deep-space destinations like Mars. This type of technology would significantly cut down on the time needed to reach Mars, making long-duration spaceflights less risky for the humans onboard. A conventional spacecraft powered by burning liquid fuel typically takes around seven or eight months to reach the red planet. Scientists have said nuclear rocket engines could shave off at least a third of that time. NASA Administrator Bill Nelson said the shortened journey would give crews more flexibility on missions to Mars. “You enable yourself to be on the surface for maybe three weeks, four weeks, and get back within a reasonable amount of time, instead of having to be gone for two or three years,” he said. A shorter round trip also means the astronauts would be exposed to less cosmic radiation while in space. Studies have shown that without the protection of Earth’s atmosphere and magnetic field, humans can receive the equivalent of an entire year of radiation on Earth in just one day in space. For missions to Mars, that means an astronaut could be exposed to radiation levels 700 times higher than on Earth, according to the European Space Agency. To reduce these risks, DARPA — the branch of the Defense Department responsible for experimenting with new and emerging technologies — is developing a rocket powered by nuclear thermal propulsion. The program has been dubbed DRACO, short for Demonstration Rocket for Agile Cislunar Operations. The system uses high heat from a fission reactor to turn liquid propellant into a gas, which is then funneled through a nozzle to power the spacecraft. 
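A rough sense of why reactor-heated hydrogen outperforms chemical exhaust comes from the ideal fully-expanded nozzle relation, v_e = sqrt(2γ/(γ−1) · (R/M) · T): exhaust velocity rises as propellant molar mass falls. The temperatures, molar masses, and heat-capacity ratios below are ballpark illustrative assumptions, not mission figures:

```python
import math

def exhaust_velocity(T: float, M: float, gamma: float) -> float:
    # Ideal fully-expanded nozzle exhaust velocity in m/s:
    # v_e = sqrt(2*gamma/(gamma-1) * (R/M) * T)
    R = 8.314  # universal gas constant, J/(mol*K)
    return math.sqrt(2 * gamma / (gamma - 1) * (R / M) * T)

# Chemical engine: H2/O2 combustion at ~3500 K, exhaust mostly water (M = 0.018 kg/mol)
chemical = exhaust_velocity(T=3500, M=0.018, gamma=1.2)
# Nuclear thermal: reactor heats pure hydrogen to ~2700 K (M = 0.002 kg/mol)
nuclear = exhaust_velocity(T=2700, M=0.002, gamma=1.4)

print(f"chemical ~{chemical:.0f} m/s, nuclear ~{nuclear:.0f} m/s")
```

Even though the reactor runs cooler than a combustion chamber, hydrogen's very low molar mass gives roughly double the exhaust velocity under these idealized assumptions, which is why less propellant is needed for the same mission.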
This type of propulsion can create more thrust and is at least three times as efficient as chemical rockets, according to NASA. That means needing to carry less fuel onboard, which frees up room to haul more equipment, science experiments or other cargo to the Martian surface. “It can completely change the game of how people think about what is possible in space — what you can carry, how quickly you can get there,” DARPA Director Stefanie Tompkins said. “You have much more flexibility in getting where you want, when you want.” And though the system runs on nuclear power, Nelson said it would use low-enriched uranium rather than weapons-grade, highly enriched uranium. “This is safe nuclear technology,” he said. He added that tests of the nuclear rocket engine would not occur on Earth and would instead take place in space, with safety being the highest priority. The first DRACO demonstration could happen as early as 2027, according to NASA. Daniel Dumbacher, executive director of the American Institute of Aeronautics and Astronautics, called nuclear propulsion a major step forward for space exploration. Missions to Mars naturally include myriad risks to human health, he said, but nuclear technology could limit some of the consequences, including the psychological toll on crews confined within cramped quarters. “There’s the psychological aspect of being away from home for that long,” Dumbacher said. “How does the human mind deal with being in something the size of a school bus for months and months? How do we keep the mind entertained? How do we keep morale up? All of these challenges grow exponentially when you go out to Mars, so shortening the trip time is a big, big deal.” The blueprints for a nuclear rocket engine were initially drawn up in the 1960s. But the program stalled after the Three Mile Island accident in 1979 and the Chernobyl disaster in 1986 turned public opinion against nuclear technology. 
Now, Nelson said there’s more appetite and political will to explore alternative fuels. Tompkins said the promise of nuclear propulsion was there in the 1960s, but the program was never able to reach its full potential. She said this current environment is the “right time” in history to evolve the technology to the next level. “When I go back and I read the reports from those days,” she said, “I am continually reminded that we all get to stand on the shoulders of giants.”
Emerging Technologies
GPT-4 is coming next week: at an approximately one-hour hybrid information event entitled "AI in Focus - Digital Kickoff" on 9 March 2023, four Microsoft Germany employees presented Large Language Models (LLMs) like the GPT series as a disruptive force for companies, and their Azure-OpenAI offering in detail. The kickoff event took place in German; news outlet Heise was present. Rather casually, Andreas Braun, CTO Microsoft Germany and Lead Data & AI STU, mentioned what he said was the imminent release of GPT-4. The fact that Microsoft is fine-tuning multimodality with OpenAI should no longer have been a secret since the release of Kosmos-1 at the beginning of March.
"We will introduce GPT-4 next week"
"We will introduce GPT-4 next week, there we will have multimodal models that will offer completely different possibilities – for example videos," Braun said. The CTO called LLMs a "game changer" because they teach machines to understand natural language, grasping in a statistical way what was previously only readable and understandable by humans. In the meantime, the technology has come so far that it basically "works in all languages": you can ask a question in German and get an answer in Italian. With multimodality, Microsoft(-OpenAI) will "make the models comprehensive".
Disruption and "killing old darlings"
Braun was joined by the CEO of Microsoft Germany, Marianne Janik, who spoke across the board about disruption through AI in companies. Janik emphasised the value creation potential of artificial intelligence and spoke of a turning point in time – the current AI development and ChatGPT were "an iPhone moment". It is not about replacing jobs, she said, but about doing repetitive tasks in a different way than before. One point that is often forgotten in the public discussion is that "we in Germany still have a lot of legacy in our companies" and "keep old treasures alive for years". Disruption does not necessarily mean job losses.
It will take "many experts to make the use of AI value-adding", Janik emphasised. Traditional job descriptions are now changing and exciting new professions are emerging as a result of the enrichment with the new possibilities. She recommends that companies form internal "competence centres" that can train employees in the use of AI and bundle ideas for projects. In doing so, "the migration of old darlings should be considered". In addition, the CEO emphasised that Microsoft does not use customers' data to train models (which, however, does not or did not apply at least to their research partner OpenAI according to its ChatGPT policy). Janik spoke of a "democratisation" – by which she admittedly only meant the immediate usability of the models within the framework of the Microsoft product range, in particular their broad availability through the integration of AI in the Azure platform, Outlook and Teams.
Use cases that are already possible today
Clemens Siebler (Senior AI Specialist) and Holger Kenn (Chief Technologist Business Development AI & Emerging Technologies, both Microsoft Germany) provided insights into practical AI use and concrete use cases that their teams are currently working on, as well as technical background. Kenn explained what multimodal AI is about, which can translate text not only into images but also into music and video. He talked about embeddings, which are used for the internal representation of text in the model, in addition to the GPT-3.5 model class. Responsible AI is already built into Microsoft products according to Kenn, and "millions of queries can be mapped into the APIs" via the cloud. Most of the audience probably agreed with him on a basic assessment that now is the time to get started. Especially in the programming area, models such as Codex and Copilot make it easier to create boilerplate code. Clemens Siebler illustrated with use cases what is already possible today.
For example, telephone calls could be transcribed with speech-to-text, so call-centre agents would no longer have to manually summarise and type up the content. According to Siebler, this could save 500 working hours a day for a large Microsoft customer in the Netherlands that receives 30,000 calls a day. The prototype for the project was created within two hours, and a single developer implemented the project in a fortnight (plus further time for final implementation). According to him, the three most common use cases are answering questions on company knowledge that is accessible only to employees, AI-assisted document processing, and semi-automation through processing spoken language in call and response centres. When asked about operational reliability and fidelity to facts, Siebler said that the AI will not always answer correctly, so validation is necessary; Microsoft is currently creating confidence metrics to address this issue. Customers often use AI support only on their own data sets, primarily for reading comprehension and querying inventory data, where the models are already quite accurate. However, the text generated by the model remains generative and is therefore not easily verifiable. "We build a feedback loop around it with thumbs up and thumbs down," Siebler said – an iterative process. Interestingly, none of the four Microsoft employees commented on AI integration in the company's own search engine, "the new Bing". The final panel was not open to audience questions, and only pre-registered participants had been admitted. Microsoft and European AI regulation – a footnote: on the topic of regulation, Microsoft Germany took a more positive stance at the AI kickoff event than the Microsoft and US position deposited in Brussels, pointing here in particular to the dangers of regulation for innovation and the grassroots community. "Open source must of course be allowed to flourish," said Janik. 
Unquestionably an important concern – but its advocate is probably not called Microsoft. The German edition of MIT Technology Review reported at the beginning of March 2023 on attempts to influence European legislative procedures, and a year earlier a French cloud provider had filed an antitrust complaint over anti-competitive practices by the hyperscalers. Against this background, commitments to fairness, security, the right to privacy and reliability may have been more of a message to business customers and partners. Readers interested in the activities of Big Tech corporations like Microsoft around the AI Act in Brussels should refer to the research findings of the lobbying watchdog group Corporate Europe Observatory, in particular its report "The Lobbying Ghost in the Machine" from the end of February 2023. Behind the scenes, Big Tech seems keen to keep its generative AI models free from an overly strong commitment to AI security and European legal norms, and instead to shift regulatory obligations and responsibility for any damage onto European providers and user groups. Potential customers should bear this in mind, as the penalties provided for in the upcoming AI Act for the use of high-risk AI (which currently still includes ChatGPT and co.) could hit a company hard: up to 30 million euros or 6 percent of annual turnover. The details can be found in the publicly accessible draft regulation, for example in Article 71 under "Sanctions". Microsoft's information event, however, was not a press conference on this topic; it was probably aimed more at potential or existing business customers and future partner companies. (sih)
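The most common use case Siebler names – answering questions on company knowledge – typically rests on the embeddings Kenn mentioned: documents and queries are mapped to vectors and compared by similarity. A minimal, purely illustrative sketch of the idea (the documents are invented, and toy bag-of-words vectors stand in for a real embedding model):

```python
# Toy illustration of embedding-based retrieval over company documents.
# Real systems use a learned embedding model; here a bag-of-words vector
# stands in so the example is self-contained.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Map text to a sparse 'vector' of token counts (stand-in embedding)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical internal documents, only accessible to employees.
documents = [
    "holiday policy: employees receive 30 days of annual leave",
    "expense policy: travel costs are reimbursed within 14 days",
    "inventory: warehouse utrecht stocks 30000 units of part a-7",
]

def answer(query: str) -> str:
    """Return the stored document most similar to the query."""
    q = embed(query)
    return max(documents, key=lambda d: cosine(embed(d), q))

print(answer("how many days of annual leave do employees get?"))
```

A production system would swap in model-generated embeddings, generate an answer from the retrieved text, and wrap the result in the thumbs-up/thumbs-down feedback loop Siebler describes.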
Emerging Technologies
Amid deep concerns about the risks posed by artificial intelligence, the Biden administration has lined up commitments from seven tech companies — including OpenAI, Google and Meta — to abide by safety, security and trust principles in developing AI. Reps from seven “leading AI companies” — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — are scheduled to attend an event Friday at the White House to announce that the Biden-Harris administration has secured voluntary commitments from the companies to “help move toward safe, secure, and transparent development of AI technology,” according to the White House. “Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” the Biden administration said in a statement Friday. “To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.” Note that the voluntary agreements from Meta, Google, OpenAI and the others are just that — they’re promises to follow certain principles. To ensure legal protections in the AI space, the Biden administration said, it will “pursue bipartisan legislation to help America lead the way in responsible innovation” in artificial intelligence. The principles the seven AI companies have agreed to are as follows:
- Develop “robust technical mechanisms” to ensure that users know when content is AI generated, such as a watermarking system to reduce risks of fraud and deception.
- Publicly report AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security risks and societal risks, such as “the effects on fairness and bias.”
- Commit to internal and external security testing of AI systems prior to release, to mitigate risks related to biosecurity and cybersecurity, as well as broader societal harms. 
- Share information across the industry and with governments, civil society and academia on managing AI risks, including best practices for safety, information on attempts to circumvent safeguards and technical collaboration.
- Invest in cybersecurity and “insider threat” safeguards to protect proprietary and unreleased model weights.
- Facilitate third-party discovery and reporting of vulnerabilities in AI systems.
- Prioritize research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination.
- Develop and deploy advanced AI systems “to help address society’s greatest challenges,” ranging from “cancer prevention to mitigating climate change.”
The White House said it has consulted on voluntary AI safety commitments with other countries, including Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE and the U.K. The White House said the Office of Management and Budget will soon release draft policy guidance for federal agencies to ensure the development, procurement and use of AI systems is centered on safeguarding Americans’ rights and safety.
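The first commitment – labeling AI-generated content – can take many technical forms. As a purely illustrative sketch (not any company’s actual scheme), a provider could attach a signed provenance tag to generated output so that downstream tools can verify its origin and detect tampering; the key and field names below are invented:

```python
# Illustrative provenance "watermark": sign AI-generated text with an HMAC
# so a verifier can later confirm the tag. Real schemes (e.g. statistical
# watermarks embedded in token choices) are far more robust; this only
# sketches the verify-the-origin idea.
import hashlib
import hmac
import json

SECRET = b"provider-signing-key"  # hypothetical key held by the AI provider

def tag_output(text: str, model: str) -> dict:
    """Wrap generated text in a signed provenance record."""
    record = {"text": text, "model": model, "ai_generated": True}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    """Check that the provenance record was signed and is unmodified."""
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("sig", ""))

out = tag_output("Here is a summary...", model="example-model")
print(verify(out))      # True for an untampered record
out["text"] = "edited"
print(verify(out))      # False once the content is altered
```

A metadata tag like this only works while the tag travels with the content, which is why the commitment also covers more robust mechanisms embedded in the content itself.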
Emerging Technologies
In early 2020, when scientists rushed to develop a vaccine to take on the SARS-CoV-2 coronavirus that causes COVID-19, it seemed like a really long shot. The fastest a vaccine had ever previously been developed was for mumps, back in the 1960s—an effort that took 48 months. Still, just nine months later, in December 2020, the American pharmaceutical giant Pfizer and a German deep-tech startup, BioNTech, had developed the first COVID-19 vaccine, validating the use of the new technology of mRNA-based vaccines.  The first studies on DNA vaccines began 25 years ago, and the science of RNA vaccines, too, has been evolving for over 15 years. One outcome was mRNA technology, which required the convergence of advances in synthetic biology, nanotechnology, and artificial intelligence, and has transformed the science—and the business—of vaccines. Pfizer generated nearly $37 billion in sales from the COVID-19 vaccine last year, making it one of the most lucrative products in the company’s history.  Like Pfizer and Moderna in the pharmaceuticals sector, several corporations in other industries—such as Tesla in automobiles, Bayer in agrochemicals, BASF in specialty chemicals, Deere in agriculture machinery, and Goodyear in rubber—are relying on deep technologies. Deep Tech, as we call it, is the problem-driven approach to tackling big, hairy, audacious, and wicked challenges by combining new physical technologies, such as advanced material sciences, with sophisticated digital technologies, such as A.I. and, soon, quantum computing.  Deep Tech is rising to the fore because of business’s pressing need to develop new products faster than before; to develop sustainable products and processes; and to become more future-proof. Deep Tech can generate enormous value and will provide companies with new sources of advantage. In fact, Deep Tech will disrupt incumbents in almost every industry. 
That’s because the products and processes that result from these technologies will be transformational, creating new industries or fundamentally altering existing ones.  The early prototypes of Deep Tech-based products are already available. For instance, the use of drones, 3-D printers, and syn-bio kits is proliferating, while No Code / Low Code tools are making A.I. more accessible. They’re opening up more avenues by which companies can combine emerging technologies and catalyze more innovations. Unsurprisingly, incubators and accelerators have sprung up worldwide to facilitate their development. Not only are more Deep Tech start-ups being set up nowadays, but they’re launching successful innovations faster than before.  It’s risky for CEOs of incumbent companies to count on a wait-and-watch strategy. They need to figure out ways to tap into Deep Tech’s potential right away, before their organizations are disrupted—just as digital technologies and start-ups disrupted business not so long ago. Unlike digital disruption, though, the physical-cum-digital nature of Deep Tech gives incumbents a golden opportunity to shape these technologies’ evolution and to harness them for their benefit.  Established giants can help Deep Tech start-ups scale their products, which can be especially complex and costly for physical products, by leveraging their expertise in engineering and manufacturing scale-up and by providing market access. And because the incumbents are already at the center of global networks, they can also help navigate government regulations and influence their suppliers and distributors to transition to infrastructure that will support the new processes and products. Doing so will unlock enormous value, as the Pfizer-BioNTech case exemplifies.  Most incumbents will find that Deep Tech poses two stiff challenges at first. One, it isn’t easy to spot or assess the business opportunities that the new technologies will create. 
Two, it’s equally tough to develop and deploy Deep Tech-based solutions and applications, which usually requires participating in, and catalyzing collective action within, ecosystems. To manage the twin challenges of Deep Tech, CEOs should keep in mind three starting points.
Backcasting: Despite its sophistication, conventional technology forecasting produces linear predictions and siloed thinking; it doesn’t account for how technologies change and converge. As a result, most forecasts underestimate the speed at which technologies evolve and when business will be able to use them. That’s why companies should use “backcasting,” the method outlined by the University of Waterloo’s John Robinson in the late 1980s.  Rather than tracking the development of many technologies, companies would do better to start by focusing on the world’s biggest needs and most pressing problems, to identify the long-standing frictions and tradeoffs that have prevented business from tackling them until now. Then, they should define a desirable future in which those issues have been resolved, and work back to identify the technologies, and combinations thereof, that will make solutions possible and commercially feasible. Backcasting helps companies come to grips with both short-term and long-run technological changes, making it ideal for managing Deep Tech.  The Anglo-American think tank RethinkX, for instance, has used a technology disruption framework, predicated on backcasting, to highlight the implications of creating a sustainable world. The analysis suggests that the technological changes under way in the energy, transportation, and food sectors, driven by a combination of just eight emerging technologies, could eliminate over 90% of net greenhouse gas emissions in 15 years’ time. The same technologies will also make the cost of carbon withdrawal affordable, so more breakthrough technologies may not be needed in the medium term.  
Gauging change: When companies evaluate the business opportunities that deep technologies will open up, they should take into account the scope of the changes those technologies will bring about. That scope is determined by the complexity of a technology and the business’s ability to scale solutions based on it. As Arnulf Grubler, the head of the Austria-based International Institute for Applied Systems Analysis, and his co-authors argued six years ago, new technologies can bring about four levels of change. They can:
1. Improve an existing product. For example, sustainable biodegradable plastic can replace conventional plastic packaging.
2. Improve an existing system. Nanomaterial-infused paints and an A.I.-enabled smart home system can, for instance, dramatically change homes.
3. Transform a system. Developing the ecosystem for hydrogen-powered automobiles, from hydrogen production to refueling stations, could transform urban mobility.
4. Transform a system-of-systems. Creating a purification technology that transforms current water supply and management systems will also alter the workings of water-consuming sectors such as agriculture, alcohol, beverages, paper, and sugar.
Figuring out which of the four levels of change is likely to result will help companies better assess market sizes as well as growth trajectories. When BCG recently estimated the market size of Deep Tech solutions in nine sustainability-related sectors, for example, it found that while technology improvements in existing value chains would generate additional revenues of over $123 billion per annum, those that resulted in systemic changes would generate 20 times more, or as much as $2.7 trillion a year.
Cultivating ecosystems: Few companies already have in-house all the technologies and capabilities they need to deploy Deep Tech. They must gain the support of technology-related ecosystems, which extend from academics and university departments to investors and governments, to develop those competencies. 
The types of linkages that form will depend on the business opportunity as well as the ecosystem’s maturity.  Several kinds of collaborations are likely to emerge. Some incumbents will, obviously, join hands with start-ups to develop new products or processes, as Bayer did in 2017, setting up a joint venture with Ginkgo Bioworks to synthesize microbes that will allow plants to produce their own fertilizers. Others will orchestrate systemic changes, which is what Hyundai Motor Group is trying to do in the field of mobility by working with several Deep Tech startups. Still others may focus on nurturing deep technologies to maturity themselves, akin to the efforts of Sweden’s SSAB (formerly Swedish Steel), Vattenfall, and LKAB to scale a sustainable steel-making process in which fossil-free electricity and green hydrogen replace coking coal.  *** A deep technology was impossible yesterday, is barely feasible today, and may soon become so pervasive and impactful that it will be difficult to remember life without it, points out Michigan State University’s Joshua Siegel. The future will likely belong to companies that don’t just track Deep Tech, but invest in its development and drive its adoption by engaging with ecosystems, forcing rivals to play the losing strategy of catch-up.  Read other Fortune columns by François Candelon.  François Candelon is a managing director and senior partner at BCG and global director of the BCG Henderson Institute. Maxime Courtaux is a project leader at BCG and ambassador at the BCG Henderson Institute. Antoine Gourevitch is a managing director and senior partner at BCG. John Paschkewitz is a partner and associate director at BCG. Vinit Patel is a project leader at BCG and ambassador at the BCG Henderson Institute. Some companies featured in this column are past or current clients of BCG.  
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
Emerging Technologies
India AI, Meta India Ink Pact To Spur Advancements In AI, Emerging Tech
India AI and Meta India on Wednesday signed a pact to foster collaboration in the field of artificial intelligence and emerging technologies, making Meta's open-source AI models available for use by the Indian ecosystem, according to an official release. India AI is an Independent Business Unit under Digital India Corporation. The MoU aims to establish a framework for collaboration and cooperation between India AI and Meta in the field of artificial intelligence and emerging technologies, including making Meta's open-source AI models available for use by the Indian ecosystem. "India AI and Meta have entered into a collaboration aimed at advancing research and development in AI and emerging technologies, seeking breakthroughs in AI technology and its applications," the release said. Both organisations may also consider establishing a Centre of Excellence to nurture the startup ecosystem of AI and other emerging technologies. "Leveraging Meta's AI research models like LLaMA, Massively Multilingual Speech, and No Language Left Behind, the partnership will focus on building data sets in Indian languages to enable translation and large language models, with priority given to low-resource languages," it said. The effort will foster social inclusion, improve government service delivery, and spur innovation using large language models, generative AI, cognitive systems and translation models. The two sides will strive to enhance accessibility to AI computing resources for researchers, startups and organisations with limited resources. 
Knowledge sharing and collaboration in AI and emerging technologies will be facilitated through workshops, seminars, conferences and similar platforms. "Both organisations are dedicated to developing programmes and initiatives that enhance AI and Emerging Technologies skills and expertise among researchers, professionals and students in India, contributing to the growth of AI talent in the country," the release said. India AI and Meta share a common goal of raising awareness about the new-age technologies' potential benefits and risks among various stakeholders, including policymakers, businesses, civil society and the general public. They will also work together to promote responsible AI practices through the collaborative development of comprehensive tools and guidelines. Nick Clegg, President, Global Affairs, Meta, said the company's open approach to AI innovation is complementary to India’s leadership on digital issues. "Giving businesses, startups and researchers access to these technologies can open up a world of social and economic opportunities. ‘India AI’ is an exciting programme and with close collaboration between government and industry, we can strengthen India’s digital leadership and help to ensure AI tools are built for India's unique needs," he said.
Emerging Technologies
UK firms could gain access to billions in US green funding under new plans announced by Rishi Sunak and Joe Biden. The Atlantic Declaration would allow British electric car firms access to new US green tax credits and subsidies, steps up co-operation on AI and includes a data protection deal. The deal may be subject to approval by Congress and state-by-state agreement in the US for some areas. Mr Sunak said the deal "sets a new standard for economic cooperation". The agreement comes after hopes of a full-blown free trade deal were abandoned. On the plane ride over to Washington, Mr Sunak said: "For a while now, that has not been a priority for either the US or UK." At a press conference alongside Mr Biden, Mr Sunak insisted the targeted approach of the Atlantic Declaration was about "what can do the most benefit to our citizens as quickly as possible". The deal, he argued, was a better response to the economic challenges posed by emerging technologies, saying it was "ambitious in what it seeks to achieve". Tens of thousands of small businesses would benefit from the deal removing trade restrictions. "Be in no doubt, the economic relationship between our two countries has never been stronger," Mr Sunak said. The deal would mitigate some of the impact of the US flagship Inflation Reduction Act (IRA) on the UK economy. Mr Biden's key policy is focussed on building up the US domestic energy market and contains $370bn (£297bn) to boost green technology production. Under current policy, the IRA provides a tax credit worth $3,750 (£3,000) for each electric vehicle (EV) manufactured in the US, or which uses components mined, processed or manufactured in the country. The UK is already a net exporter of raw materials for EV batteries to the US. But nations without a US trade deal are barred from accessing IRA subsidies. 
The Atlantic Declaration commits the UK and US to working on a new Critical Minerals Agreement - which would give buyers of vehicles made using critical minerals processed, recycled or mined by UK companies access to tax credits. The declaration says the agreement would be launched after consultation with US Congress. Japan already has a similar deal, which allows Japanese firms to also swerve export duties on minerals used in producing EV batteries. The declaration also includes a commitment to a "new UK-US Data Bridge" which would allow UK firms to transfer data freely to certified US organisations without paying a levy. Downing Street estimates the change will affect around 55,000 UK businesses - translating into £92.4m in direct savings per year. Mr Biden also supported Mr Sunak's plans to set up an international summit on AI safety, which will be hosted in the UK later this year. The prime minister said: "The UK and US have always pushed the boundaries of what two countries can achieve together. "So it is natural that, when faced with the greatest transformation in our economies since the industrial revolution, we would look to each other to build a stronger economic future together. "The Atlantic Declaration sets a new standard for economic cooperation, propelling our economies into the future so we can protect our people, create jobs and grow our economies together." Both Mr Biden and Mr Sunak agreed to carry out work to improve the resilience of supply chains, and efforts will be stepped up to shut Vladimir Putin's Russia out of the global civil nuclear market.
Emerging Technologies
Twenty years ago, nanotechnology was the artificial intelligence of its time. The specific details of these technologies are, of course, a world apart. But the challenges of ensuring each technology’s responsible and beneficial development are surprisingly alike. Nanotechnology, the engineering of materials and devices at the scale of individual atoms and molecules, even carried its own existential risk in the form of “gray goo.” As potentially transformative AI-based technologies continue to emerge and gain traction, though, it is not clear that people in the artificial intelligence field are applying the lessons learned from nanotechnology. As scholars of the future of innovation, we explore these parallels in a new commentary in the journal Nature Nanotechnology. The commentary also looks at how a lack of engagement with a diverse community of experts and stakeholders threatens AI’s long-term success.
Nanotech excitement and fear: In the late 1990s and early 2000s, nanotechnology transitioned from a radical and somewhat fringe idea to mainstream acceptance. The U.S. government and other administrations around the world ramped up investment in what was claimed to be “the next industrial revolution.” Government experts made compelling arguments for how, in the words of a foundational report from the U.S. National Science and Technology Council, “shaping the world atom by atom” would positively transform economies, the environment and lives. But there was a problem. On the heels of public pushback against genetically modified crops, together with lessons learned from recombinant DNA and the Human Genome Project, people in the nanotechnology field had growing concerns that there could be a similar backlash against nanotechnology if it were handled poorly. These concerns were well grounded. 
In the early days of nanotechnology, nonprofit organizations such as the ETC Group, Friends of the Earth and others strenuously objected to claims that this type of technology was safe, that there would be minimal downsides and that experts and developers knew what they were doing. The era saw public protests against nanotechnology and – disturbingly – even a bombing campaign by environmental extremists that targeted nanotechnology researchers. Just as with AI today, there were concerns about the effect on jobs as a new wave of skills and automation swept away established career paths. Also foreshadowing current AI concerns, worries about existential risks began to emerge, notably the possibility of self-replicating “nanobots” converting all matter on Earth into copies of themselves, resulting in a planet-encompassing “gray goo.” This particular scenario was even highlighted by Sun Microsystems co-founder Bill Joy in a prominent article in Wired magazine. Many of the potential risks associated with nanotechnology, though, were less speculative. Just as there’s a growing focus on more immediate risks associated with AI in the present, the early 2000s saw an emphasis on examining tangible challenges related to ensuring the safe and responsible development of nanotechnology. These included potential health and environmental impacts, social and ethical issues, regulation and governance, and a growing need for public and stakeholder collaboration. The result was a profoundly complex landscape around nanotechnology development that promised incredible advances yet was rife with uncertainty and the risk of losing public trust if things went wrong. 
How nanotech got it right: One of us – Andrew Maynard – was at the forefront of addressing the potential risks of nanotechnology in the early 2000s as a researcher, co-chair of the interagency Nanotechnology Environmental and Health Implications working group and chief science adviser to the Woodrow Wilson International Center for Scholars Project on Emerging Technology. At the time, working on responsible nanotechnology development felt like playing whack-a-mole with the health, environment, social and governance challenges presented by the technology. For every solution, there seemed to be a new problem. Yet, through engaging with a wide array of experts and stakeholders – many of whom were not authorities on nanotechnology but who brought critical perspectives and insights to the table – the field produced initiatives that laid the foundation for nanotechnology to thrive. This included multistakeholder partnerships, consensus standards, and initiatives spearheaded by global bodies such as the Organization for Economic Cooperation and Development. In the U.S., much of this collaborative work was spearheaded by the cross-agency National Nanotechnology Initiative. In the early 2000s, the initiative brought together representatives from across the government to better understand the risks and benefits of nanotechnology. It helped convene a broad and diverse array of scholars, researchers, developers, practitioners, educators, activists, policymakers and other stakeholders to help map out strategies for ensuring socially and economically beneficial nanoscale technologies. In 2003, the 21st Century Nanotechnology Research and Development Act became law and further codified this commitment to participation by a broad array of stakeholders. 
The coming years saw a growing number of federally funded initiatives – including the Center for Nanotechnology and Society at Arizona State University (where one of us was on the board of visitors) – that cemented the principle of broad engagement around emerging advanced technologies.
Experts only at the table: These and similar efforts around the world were pivotal in ensuring the emergence of beneficial and responsible nanotechnology. Yet despite similar aspirations around AI, these same levels of diversity and engagement are missing. AI development as practiced today is, by comparison, much more exclusionary. The White House has prioritized consultations with AI company CEOs, and Senate hearings have drawn preferentially on technical experts. According to the lessons learned from nanotechnology, we believe this approach is a mistake. While members of the public, policymakers and experts outside the domain of AI may not fully understand the intimate details of the technology, they are often fully capable of understanding its implications. More importantly, they bring a diversity of expertise and perspectives to the table that is essential for the successful development of an advanced technology like AI. This is why, in our Nature Nanotechnology commentary, we recommend learning from the lessons of nanotechnology and engaging early and often with experts and stakeholders who may not know the technical details and science behind AI but nevertheless bring knowledge and insights essential for ensuring the technology’s appropriate success.
The clock is ticking: Artificial intelligence could be the most transformative technology that’s come along in living memory. Developed smartly, it could positively change the lives of billions of people. But this will happen only if society applies the lessons from past advanced technology transitions like the one driven by nanotechnology. As with the formative years of nanotechnology, addressing the challenges of AI is urgent. 
The early days of an advanced technology transition set the trajectory for how it plays out over the coming decades. And with the recent pace of progress of AI, this window is closing fast. It is not just the future of AI that’s at stake. Artificial intelligence is only one of many transformative emerging technologies. Quantum technologies, advanced genetic manipulation, neurotechnologies and more are coming fast. If society doesn’t learn from the past to successfully navigate these imminent transitions, it risks losing out on the promises they hold and faces the possibility of each causing more harm than good.
Emerging Technologies
Today we are more than thrilled to welcome PyTorch to the Linux Foundation. Honestly, it’s hard to capture how big a deal this is for us in a single post but I’ll try.  TL;DR — PyTorch is one of the most important and successful machine learning software projects in the world today. We are excited to work with the project maintainers, contributors and community to transition PyTorch to a neutral home where it can continue to enjoy strong growth and rapid innovation. We are grateful to the team at Meta, where PyTorch was incubated and grew into a massive ecosystem, for trusting the Linux Foundation with this crucial effort. The journey will be epic. Artificial Intelligence, Machine Learning, and Deep Learning are critical to present and future technology innovation. Growth around AI and ML communities and the code they generate has been nothing short of extraordinary. AI/ML is also a truly “open source-first” ecosystem. The majority of popular AI and ML tools and frameworks are open source. The community clearly values transparency and the ethos of open source. Open source communities are playing and will play a leading role in development of the tools and solutions that make AI and ML possible — and make it better over time.  For all of the above reasons, the Linux Foundation understands that fostering open source in AI and ML is a key priority. The Linux Foundation already hosts and works with many projects that are either contributing directly to foundational AI/ML projects (LF AI & Data) or contributing to their use cases and integrating with their platforms. (e.g., LF Networking, AGL, Delta Lake, RISC-V, CNCF, Hyperledger).  PyTorch extends and builds on these efforts. Obviously, PyTorch is one of the most important foundational platforms for development, testing and deployment of AI/ML and Deep Learning applications. If you need to build something in AI, if you need a library or a module, chances are there is something in PyTorch for that. 
If you peel back the cover of any AI application, there is a strong chance PyTorch is involved in some way. From improving the accuracy of disease and heart-attack diagnosis, to machine learning frameworks for self-driving cars, to image quality assessment tools for astronomers, PyTorch is there. Originally incubated by Meta’s AI team, PyTorch has grown to include a massive community of contributors and users under Meta’s community-focused stewardship. The genius of PyTorch (and a credit to its maintainers) is that it is truly a foundational platform for so much AI/ML today, a real Swiss Army knife. Just as developers built so much of the technology we know today atop Linux, the AI/ML community is building atop PyTorch – further enabling emerging technologies and evolving user needs. As of August 2022, PyTorch was one of the five fastest-growing open source software communities in the world, alongside the Linux kernel and Kubernetes. From August 2021 through August 2022, PyTorch counted over 65,000 commits. Over 2,400 contributors participated in the effort, filing issues or PRs or writing documentation. These numbers place PyTorch among the most successful open source projects in history. Projects like PyTorch that have the potential to become a foundational platform for critical technology benefit from a neutral home. Neutrality and true community ownership are what have enabled Linux and Kubernetes to defy expectations by continuing to accelerate and grow even as they mature. Users, maintainers and the community begin to see them as part of a commons that they can rely on and trust, in perpetuity. By creating a neutral home, the PyTorch Foundation, we are collectively locking in a future of transparency, communal governance, and unprecedented scale for all.
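For a concrete taste of why developers reach for it, here is a minimal sketch (ours, not from the announcement) of PyTorch's core workflow: define a model, compute a loss, and let autograd drive a plain Python training loop.

```python
# Fitting y = 2x with a single linear layer, as a minimal PyTorch example.
import torch

torch.manual_seed(0)
x = torch.linspace(-1, 1, 64).unsqueeze(1)   # 64 training inputs, shape (64, 1)
y = 2 * x                                     # targets for y = 2x

model = torch.nn.Linear(1, 1)                 # one weight, one bias
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

for _ in range(200):                          # plain Python training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                           # autograd computes gradients
    opt.step()

print(round(model.weight.item(), 2))          # learned weight, close to 2.0
```

The same few primitives (modules, optimizers, autograd) scale from this toy up to the production systems the post describes.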
As part of the Linux Foundation, PyTorch and its community will benefit from our many programs and supporting communities, from training and certification programs (we already have one in the works) to community research (like our Project Journey Reports) and, of course, community events. Working inside and alongside the Linux Foundation, the PyTorch community also has access to our LFX collaboration portal, enabling mentorships and helping the PyTorch community identify future leaders, find potential hires, and observe shared community dynamics. PyTorch has gotten to its current state through sound maintainership and open source community management. We’re not going to change any of the good things about PyTorch. In fact, we can’t wait to learn from Meta and the PyTorch community to improve the experiences and outcomes of other projects in the Foundation. For those wanting more insight about our plans for the PyTorch Foundation, I invite you to join Soumith Chintala (co-creator of PyTorch) and Dr. Ibrahim Haddad (Executive Director of the PyTorch Foundation) for a live discussion on Thursday entitled PyTorch: A Foundation for Open Source AI/ML. We are grateful for Meta’s trust in “passing us the torch” (pun intended). Together with the community, we can build something (even more) insanely great and add to the global heritage of invaluable technology that underpins the present and the future of our lives. Welcome, PyTorch! We can’t wait to get started!
Emerging Technologies
Filling the hole Silicon Valley Bank left in the climate tech ecosystem
Silicon Valley Bank (SVB) played a critical role in the climate tech industry, particularly for early-stage companies. The bank’s recent collapse will be felt even though its depositors will get their money back, as announced by Treasury Secretary Janet Yellen. Still, one can expect some project investments to be delayed and the financial costs to startups to rise as a new risk appetite emerges. An important silver lining should be that more banks may eventually get more comfortable with supporting climate tech, which can help grow this funding “ecosystem” considerably and eventually lead to more investment.
SVB was a significant climate tech sponsor
As has been reported widely, the bank embraced its Silicon Valley roots and built its 40-year business around supporting startups that struggled to get the attention of larger, traditional banks. That included many technology-related firms in general, but climate tech was a key segment. SVB claimed more than 1,500 climate tech customers, with collectively many billions of dollars in loans for businesses focused on clean energy generation and storage. It had a dominant position in certain segments, financing, for example, 62 percent of community solar projects. That support for startups at the cutting edge of the energy transition translated to several products and services, including some that were more specialized. For example, SVB was a large provider of so-called “back leverage debt financing,” a loan product used to take advantage of tax breaks afforded to solar and wind energy generation. More fundamentally, however, SVB’s role in nurturing emerging technologies appears to have been unique. SVB played an important liquidity role, supporting venture capital (VC) equity as startups sought to commercialize and scale their products and meet working capital needs. And by being part of the VC ecosystem, SVB was also able to connect companies with capital.
Filling the gap
SVB was willing to take risks on these emerging companies by developing an understanding of their technologies, industry conditions and the companies’ early funders. This raises questions about how to get other lenders to fill its shoes, and quickly. One partial answer may lie in the VC community — which some will find ironic, given that venture capitalists are being assigned a good portion of the blame for SVB’s demise. Members of over a dozen VC firms are reported to have already met to discuss scenarios to keep parts of SVB operations active, including participating in a consortium that could bid for parts of the failed bank. Presumably, the biggest part of the answer is that other banks and non-bank lenders must increase their activity in this space. Although that will take time, one can imagine SVB staff helping to spread and deepen these institutions’ knowledge and relationships in clean and climate tech. Fortunately, financing of established renewable technologies like solar and wind is now well established, and many other companies engage in it. To wit, solar provider Sunrun says that, in the immediate aftermath of the collapse, it was already approached by several other lenders to replace its SVB loans. The incentives for banks to get more active in climate tech, even in less well-developed segments, are clear enough. It will not escape the banking industry’s attention that industry cash availability has never been higher: U.S. climate tech venture capital funding reached $28 billion in 2022, more than double the 2020 level and quadruple that of 2018. It is also clear that there are huge new investment incentives through the Inflation Reduction Act (IRA). And it can only help that servicing climate tech helps advance banks’ climate-related commitments, made through associations like the Glasgow Financial Alliance for Net Zero (GFANZ) and others.
Finally, it seems plausible that the government, and specifically a government-sponsored green bank, could be part of the answer as well. This may be anathema to some purists, yet if a promising technology area lacks a broad enough understanding by the financial community, or is too nascent to receive private capital support, then a government-sponsored green bank may be very helpful in providing initial support and incubational resources to get private capital comfortable. The SVB saga underscores the risk to an industry that is overly reliant on one source of financial support. If the bank’s demise eventually leads to a much more diverse set of financing sources, then climate tech will be the better off for it. Getting to that point quickly should be the goal for the financial industry and government alike. Brad Handler is Payne Institute program manager and sustainable finance lab researcher at the Colorado School of Mines, as well as a former Wall Street equity research analyst in the oil and gas sector. Morgan Bazilian is director of the Payne Institute and a professor of public policy at the Colorado School of Mines, as well as a former lead energy specialist at the World Bank. Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
Emerging Technologies
The US National Renewable Energy Laboratory (NREL) has released a new, interactive version of its research cell efficiency chart for a range of PV technologies.
November 21, 2022
[Image: The new version of the Best Research-Cell Efficiency Chart. Screenshot, NREL]
NREL has unveiled a new version of its Best Research-Cell Efficiency Chart. The tool highlights the highest confirmed conversion efficiencies of research cells for a range of PV technologies. With the new interactive version, users can pull up decades of research data and compare custom charts that focus on specific technologies or time periods, NREL said in a statement. They can now find data on a cell’s current, voltage output, and fill factor, in addition to efficiency. The availability of those details will depend on the information in NREL’s records. The highest research cell efficiency recorded in the chart is 47.1%, for a four-junction cell. Its interactive nature allows users to visualize the recent jump in conversion efficiencies for emerging technologies like perovskite solar cells. That contrasts with the steady improvement of silicon solar cell efficiency since the 1980s.
[Image: Development of two types of silicon solar cells (in blue) and of perovskite solar cells (in orange). NREL]
“It’s possible that the NREL efficiency chart is the most presented graph that you can find at the annual PV Specialists Conference and similar conferences,” said Sarah Kurtz, senior research fellow at NREL. “But it has gotten so full that it’s difficult to see the data. With the new interactive chart, researchers will be able to still use the data, but present it in a way that communicates better for their talk.” The cell efficiencies are measured by independent testing centers, including NREL, before earning a spot on the chart. All of the figures are also available as a downloadable file.
This content is protected by copyright and may not be reused.
If you want to cooperate with us and would like to reuse some of our content, please contact: [email protected].
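The per-cell parameters the chart now exposes relate to the headline efficiency figure through the standard formula efficiency = Voc × Jsc × FF / Pin, where Pin is 100 mW/cm² under the standard AM1.5 test spectrum. A small illustrative calculation (the parameter values below are hypothetical, not NREL record data):

```python
# Illustrative only: how a research cell's measured parameters combine
# into a conversion efficiency. Values are made-up, not chart records.
def cell_efficiency(voc_v: float, jsc_ma_cm2: float, ff: float) -> float:
    """Return conversion efficiency in percent from open-circuit voltage (V),
    short-circuit current density (mA/cm^2) and fill factor (0..1)."""
    p_in_mw_cm2 = 100.0                       # standard AM1.5 irradiance
    p_max = voc_v * jsc_ma_cm2 * ff           # power at maximum power point, mW/cm^2
    return 100.0 * p_max / p_in_mw_cm2

# A silicon-like parameter set (hypothetical values):
print(round(cell_efficiency(voc_v=0.74, jsc_ma_cm2=41.8, ff=0.84), 1))  # ~26.0
```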
Emerging Technologies
How Indo-Pacific economic negotiations can support supply chain resilience
As the world has experienced unprecedented shortages ranging from personal protective equipment and automotive semiconductors to wheat and sunflower oil in the midst of the pandemic and the war in Ukraine, supply chain resilience has emerged as an urgent priority for businesses and governments alike. Moreover, looking ahead, the concentration of critical raw materials essential for green and digital technologies subjects the supply chains of emerging technologies to potential geopolitical risk. Through a series of recent supply chain shocks, the world has learned important lessons: No country is immune to them or can manage them on its own. Only through international cooperation, particularly among like-minded countries, can countries improve supply chain resilience and economic security to the advantage of the global economic system. One of the most promising international initiatives for tackling supply chain vulnerabilities is taking shape now with the Indo-Pacific Economic Framework (IPEF) negotiations. Initiated by the Biden administration in May 2022, IPEF serves as a blueprint for stepped-up U.S. economic engagement in the region with 14 participating countries. Its supply chain pillar has gained significant attention, as it seeks to create a new framework for international cooperation to strengthen resilience, an area only partially addressed in conventional trade agreements. IPEF provides a venue for innovative thinking on rules and cooperation mechanisms that can help to avert supply chain disruptions in the future, complementing existing trade agreements such as the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) and the Regional Comprehensive Economic Partnership Agreement (RCEP).
IPEF negotiators are reportedly making headway in their supply chain discussions, with the possibility of announcing certain early harvest agreements, like a crisis management mechanism, around the Asia-Pacific Economic Cooperation Ministers Responsible for Trade meeting in late May. This progress is encouraging and will set the agenda for further deliverables. However, to make IPEF an effective and mutually beneficial supply chain initiative, governments should be ambitious and creative, drawing from lessons learned in recent years. It’s encouraging that IPEF negotiators are designing mechanisms to provide early warning for potential shortages and disruptions, as well as to work collectively during crises. This will require timely information-sharing between businesses and governments. Striking the right balance between sharing relevant business information and protecting privacy and confidentiality is crucial for gaining private sector buy-in. Countries are keenly interested in a collective crisis response mechanism. Creative, bold ideas should be explored here, such as adapting the financial currency swap system to a supply chain version. This would allow countries to access stockpiles and surplus resources within the IPEF-wide network during supply crises caused by external shocks or economic coercion. Developing WTO-plus rules to discourage export restrictions and facilitate customs procedures and essential cross-border movement of products and people will also prove valuable. Finally, supply chain connectivity and co-investment opportunities can significantly benefit IPEF’s developing country members. A public-private supply chain connectivity forum can help identify potential partnership and co-investment opportunities, while a supply chain ombudsman system would provide an effective one-stop shop for addressing specific concerns faced by businesses.
Beyond a sector-wide approach, IPEF work should also focus on sector-specific pilot initiatives in key strategic areas, such as critical minerals and materials — including lithium, nickel, cobalt and rare earths. These resources are economically and geopolitically significant, and diversification is urgently needed due to the current high levels of market concentration. Developing IPEF-wide supply chain mapping for critical minerals and materials, streamlining and harmonizing regulations and standards and exploring a new swap system for these resources during crises can create tangible benefits for IPEF countries. We also recommend that Washington negotiate similar critical materials and minerals agreements with interested IPEF members, like the one recently concluded with Japan, making them eligible for electric vehicle tax credits under the Inflation Reduction Act. Pursuing and implementing these suggestions is a large undertaking that cannot happen overnight. Instead, it could be approached in phases. This year, the focus should be on sector-wide deliverables, with 2024 dedicated to detailed, sector-specific work on critical materials and minerals while ensuring that cooperation mechanisms function effectively and address real-time problems. Over time, a market access component should also be considered. The IPEF Supply Chain Pillar presents a welcome opportunity for the United States and its regional partners to reshape and bolster regional supply chain networks through a next-generation economic agreement. All IPEF members stand to gain by creating more resilient and secure supply chains, and their work can serve as a model for further collective actions in this area. Han-koo Yeo is the former trade minister of the Republic of Korea and a distinguished fellow at the Asia Society Policy Institute. Wendy Cutler is vice president at the Asia Society Policy Institute. Copyright 2023 Nexstar Media Inc. All rights reserved. 
This material may not be published, broadcast, rewritten, or redistributed.
Emerging Technologies
Westlake-based Charles Schwab will launch its first crypto-related ETF in the first week of August.
[Image: Charles Schwab]
Charles Schwab is the latest institutional investor to launch a fund focused on the controversial crypto industry. The Westlake-based financial giant’s asset management arm will launch its first crypto-related ETF on or about Aug. 4. The Schwab Crypto Thematic ETF will track the Schwab Crypto Thematic Index, which invests in companies that “may benefit from the development or utilization of cryptocurrencies and other digital assets,” the announcement said. Schwab emphasizes in its announcement that the fund will not invest in crypto directly. Instead, it invests in companies that benefit from crypto-related activities, like mining or staking, developing blockchain applications and enabling the use of digital assets to buy or sell goods and services. “For investors who are interested in cryptocurrency exposures, there is a whole ecosystem to consider as more companies seek to derive revenue from crypto directly and indirectly,” David Botset, managing director at Schwab Asset Management, said in a statement. Buying into a crypto-related ETF may give more risk-averse investors a way to gain exposure to the industry. The prices of cryptocurrencies have plummeted since reaching highs in November, scaring some investors away. The price of Bitcoin is down about 65% since it peaked in early November. The fund will trade on the New York Stock Exchange under the ticker “STCE” and will have a fee of 0.30%, which Schwab claims is the lowest cost for a crypto-related ETF. On its website, Schwab says that 85% of its market cap index ETFs have expenses lower than 0.10%. ETFs are baskets of securities that are often focused on specific sectors, like energy or technology. They reduce the risk that comes with investing heavily in one stock. Thematic ETFs focus on specific “themes” like climate change or crypto, rather than a specific sector.
Thematic ETFs are often banking on emerging technologies seeing big wins in the future. Schwab isn’t the first traditional investor to launch an ETF in the crowded crypto space this year. In April, Boston-based Fidelity launched the Fidelity Crypto Industry and Digital Payments ETF with an expense ratio of 0.39%, which it claimed was the lowest available at the time. Now, Schwab has undercut that cost. A few days later in April, New York-based BlackRock, the world’s largest asset manager, launched the iShares Blockchain and Tech ETF, which has a higher expense ratio of 0.47%. Natalie Walters covers banks, crypto, fraud, deals and other money topics for The Dallas Morning News.
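The expense ratios quoted above compound over a holding period. A hypothetical sketch of that fee drag, assuming a flat 7% gross annual return (an illustration assumption, not a figure from the article):

```python
# Hypothetical illustration of what an expense ratio costs an investor:
# each year the fee is deducted as a drag on the gross return.
def ending_balance(start: float, annual_return: float,
                   expense_ratio: float, years: int) -> float:
    """Grow `start` for `years` at `annual_return`, net of a yearly expense ratio."""
    balance = start
    for _ in range(years):
        balance *= (1 + annual_return) * (1 - expense_ratio)
    return balance

# $10,000 over 10 years at a 7% gross return, under the three ratios
# the article cites (Schwab 0.30%, Fidelity 0.39%, BlackRock 0.47%):
for name, ratio in [("0.30%", 0.0030), ("0.39%", 0.0039), ("0.47%", 0.0047)]:
    print(name, round(ending_balance(10_000, 0.07, ratio, 10)))
```

The gap between the cheapest and priciest ratio is small per year but widens with time, which is why issuers compete on a few hundredths of a percent.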
Emerging Technologies
Correct and incorrect recommendations intermingled in one-third of the chatbot’s responses, making errors more difficult to detect.
For many patients, the internet serves as a powerful tool for self-education on medical topics. With ChatGPT now at patients’ fingertips, researchers from Brigham and Women’s Hospital, a founding member of the Mass General Brigham healthcare system, assessed how consistently the artificial intelligence chatbot provides recommendations for cancer treatment that align with National Comprehensive Cancer Network (NCCN) guidelines. Their findings, published in JAMA Oncology, show that in approximately one-third of cases, ChatGPT 3.5 provided an inappropriate (“non-concordant”) recommendation, highlighting the need for awareness of the technology’s limitations. “Patients should feel empowered to educate themselves about their medical conditions, but they should always discuss with a clinician, and resources on the Internet should not be consulted in isolation,” said corresponding author Danielle Bitterman, MD, of the Department of Radiation Oncology at Brigham and Women's Hospital and the Artificial Intelligence in Medicine (AIM) Program of Mass General Brigham. “ChatGPT responses can sound a lot like a human and can be quite convincing. But, when it comes to clinical decision-making, there are so many subtleties for every patient’s unique situation. A right answer can be very nuanced, and not necessarily something ChatGPT or another large language model can provide.” The emergence of artificial intelligence tools in health has been groundbreaking and has the potential to positively reshape the continuum of care. Mass General Brigham, as one of the nation’s top integrated academic health systems and largest innovation enterprises, is leading the way in conducting rigorous research on new and emerging technologies to inform the responsible incorporation of AI into care delivery, workforce support, and administrative processes.
Although medical decision-making can be influenced by many factors, Bitterman and colleagues chose to evaluate the extent to which ChatGPT’s recommendations aligned with the NCCN guidelines, which are used by physicians at institutions across the country. They focused on the three most common cancers (breast, prostate and lung cancer) and prompted ChatGPT to provide a treatment approach for each cancer based on the severity of the disease. In total, the researchers included 26 unique diagnosis descriptions and used four slightly different prompts to ask ChatGPT to provide a treatment approach, generating a total of 104 prompts. Nearly all responses (98 percent) included at least one treatment approach that agreed with NCCN guidelines. However, the researchers found that 34 percent of these responses also included one or more non-concordant recommendations, which were sometimes difficult to detect amidst otherwise sound guidance. A non-concordant treatment recommendation was defined as one that was only partially correct; for example, for a locally advanced breast cancer, a recommendation of surgery alone, without mention of another therapy modality. Notably, complete agreement in scoring only occurred in 62 percent of cases, underscoring both the complexity of the NCCN guidelines themselves and the extent to which ChatGPT’s output could be vague or difficult to interpret. In 12.5 percent of cases, ChatGPT produced “hallucinations,” or a treatment recommendation entirely absent from NCCN guidelines. These included recommendations of novel therapies, or curative therapies for non-curative cancers. The authors emphasized that this form of misinformation can incorrectly set patients’ expectations about treatment and potentially impact the clinician-patient relationship. Going forward, the researchers are exploring how well both patients and clinicians can distinguish between medical advice written by a clinician versus a large language model (LLM) like ChatGPT.
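For a sense of the study's scale, the 26 diagnosis descriptions crossed with 4 prompt variants yield the 104 prompts, and the headline percentages are shares of responses carrying a given label at least once. The sketch below organizes that bookkeeping; the template wording and labels are invented placeholders, not the paper's actual materials.

```python
from itertools import product

# 26 placeholder diagnosis descriptions and 4 invented prompt variants.
diagnoses = [f"diagnosis {i}" for i in range(1, 27)]
templates = [
    "What is a recommended treatment approach for {d}?",
    "How should {d} be treated?",
    "List treatment options for {d}.",
    "Provide a treatment plan for {d}.",
]
prompts = [t.format(d=d) for d, t in product(diagnoses, templates)]
print(len(prompts))  # 26 diagnoses x 4 templates = 104

# Each response is scored with a set of labels; the paper reports the
# share of responses containing each label at least once.
def share_with(label, labeled_responses):
    return sum(label in labels for labels in labeled_responses) / len(labeled_responses)

example = [{"concordant"}, {"concordant", "non_concordant"}, {"hallucination"}]
assert share_with("concordant", example) == 2 / 3  # 2 of 3 responses
```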
They are also prompting ChatGPT with more detailed clinical cases to further evaluate its clinical knowledge. The authors used GPT-3.5-turbo-0301, one of the largest models available at the time they conducted the study and the model class that is currently used in the open-access version of ChatGPT (a newer version, GPT-4, is only available with the paid subscription). They also used the 2021 NCCN guidelines, because GPT-3.5-turbo-0301 was developed using data up to September 2021. While results may vary if other LLMs and/or clinical guidelines are used, the researchers emphasize that many LLMs are similar in the way they are built and the limitations they possess. “It is an open research question as to the extent LLMs provide consistent logical responses as oftentimes ‘hallucinations’ are observed,” said first author Shan Chen, MS, of the AIM Program. “Users are likely to seek answers from the LLMs to educate themselves on health-related topics, similarly to how Google searches have been used. At the same time, we need to raise awareness that LLMs are not the equivalent of trained medical professionals.” Disclosures: Bitterman is the Associate Editor of Radiation Oncology, HemOnc.org and receives funding from the American Association for Cancer Research. Funding: This study was supported by the Woods Foundation. Paper cited: Chen, S, et al. “Use of Artificial Intelligence Chatbots for Cancer Treatment Information” JAMA Oncology DOI: 10.1001/jamaoncol.2023.2954
Emerging Technologies
Study shows that even confident educators have trouble recognizing AI-generated essays
Can you tell if what you're reading right now was written by a human or generated by artificial intelligence? Do you care? Those are essentially the questions that University of California San Diego researchers asked in an experiment with ChatGPT at a regional high school. The researchers tested teachers and students with pairs of essays—one by a high school student and the other by ChatGPT—and asked them to identify which essay was the work of a human and which of the AI language model. Teachers were right about 70% of the time. Students scored an average of 62%. Those may not seem like terrible marks; they're passing grades, right? But the researchers say the numbers should be well above 90% if it were easy to tell the difference. Confidence didn't correlate with accuracy either. People who thought they could spot the work of the chatbot didn't do better than those who were less certain of their abilities. "We were surprised that teachers who had experience with ChatGPT or a history of teaching high school English found the task so challenging," said senior author Gail Heyman, a professor of psychology in the UC San Diego School of Social Sciences. These findings underscore widespread concerns about students potentially turning in AI-generated essays as their own and getting away with the dishonest behavior. "But also," Heyman said, "one of the most interesting—and troubling—aspects of our study is that teachers performed worse on the identification task when the pair of essays included a student essay that was particularly well-written. In fact, many teachers said they guessed that the better-written essay was generated by ChatGPT. This finding suggests that teachers are more likely to 'accuse' a well-written essay of being produced by AI—which also has some potentially concerning implications in a real-world classroom setting."
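As a rough sanity check on those accuracy figures (ours, not part of the study), a binomial calculation shows how far 70% sits from coin-flipping. The per-judge trial count below is a made-up illustration, not the study's actual design.

```python
# How likely is "at least this many correct" by pure guessing (p = 0.5)?
from math import comb

def binom_p_value(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of doing at least
    this well by guessing on every pair."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. a judge getting 14 of 20 essay pairs right (70%):
print(round(binom_p_value(14, 20), 3))  # ~0.058: hard to rule out guessing
```

With only 20 pairs, 70% accuracy is barely distinguishable from chance for a single judge, which is one reason the researchers argue the numbers would need to be well above 90% before detection could be called easy.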
The study, published in the journal Human Behavior and Emerging Technologies, included 69 high school teachers and 140 high school students as participants. The essay topics were similar to those that are commonly assigned in schools. (One topic, for instance, was: "Why is literature important?") The study also surveyed the participants about their views on ChatGPT. Students reported greater optimism than their teachers about the future role of ChatGPT in education, and rated possible academic integrity violations like submitting AI-generated essays as one's own less negatively than teachers did. Study co-author Riley Cox, a high school student who volunteered as a research assistant on the study, said, "It was exciting to me to watch my classmates and teachers figure out this new technology both from the perspective of a student and a psychology researcher. It was interesting to see that teachers had a lot of worries about ChatGPT that didn't seem to concern students." As one high school teacher who participated in the study commented, "I think ChatGPT could have some interesting applications in the classroom, but my concerns outweigh any positives. I am worried that we are watching the decline of original thought in our students, as well as their ability to persevere through hard work." The researchers believe their study highlights both some of the challenges and opportunities that ChatGPT brings to education. "We're on the verge of a major shift in educational practices as high-quality human-like content becomes increasingly available for anyone to use," said co-author Tal Waltzer, a postdoctoral fellow in Heyman's lab at UC San Diego. "How exactly we handle this transition raises important ethical considerations. For example, the fact that the paid subscription version of ChatGPT performs better on many standardized tests than the freely available version could exacerbate already existing concerns about equity in education." 
Heyman and Waltzer plan to continue research in this area "to develop an empirical foundation for best practices regarding the ethical use of AI in secondary education," Heyman said. They will investigate what kind of activities enhance learning, to help figure out ways that ChatGPT might be used as a kind of collaborator. More information: Tal Waltzer et al, Testing the Ability of Teachers and Students to Differentiate between Essays Generated by ChatGPT and High School Students, Human Behavior and Emerging Technologies (2023). DOI: 10.1155/2023/1923981 Provided by University of California - San Diego
Emerging Technologies
Generative AI bots will change how we write forever — and that’s a good thing
Is generative artificial intelligence (GenAI) really destroying writing? There’s been a widespread argument that the technology is allowing high school and college students to easily cheat on their essay assignments. Some teachers across the country are scrambling to ban students from using writing applications like OpenAI’s ChatGPT, Bard AI, Jasper and Hugging Face, while others explore ways to integrate these emerging technologies. But things are getting a little too panicky too quickly. While media reports have cast GenAI writing bots as the “death” of high school and college writing, knee-jerk responses to these emerging technologies have been shortsighted. The public is failing to see the bigger picture — not just about GenAI writing bots but about the very ideas of GenAI and writing in general. When it comes to technology and writing, public cries about moral crises are not new. We’ve heard the same anxious arguments about every technology that has ever interacted with the production and teaching of writing — from Wikipedia and word processors to spell checkers, citation generators, chalkboards, the printing press, copy machines and ballpoint pens. Remember the outrage over Wikipedia in the early 2000s, and the fear that students might use it to avoid conducting “actual research” when writing? Teachers and educational institutions then held meetings and filled syllabi with rules banning students from accessing Wikipedia. Within a decade of Wikipedia’s introduction, however, the educational outrage had dissipated, and the use of the site in classroom assignments is now commonplace. This suggests that all technologies — not just digital or writing technologies — have two possible paths: either they become ubiquitous and naturalized into how we do things, or they become obsolete. In most cases, they become obsolete because another technology surpasses the old technology’s usefulness.
GenAI writing bots are not destroying writing; they are reinvigorating it. Ultimately, we shouldn’t be so concerned about how students might use ChatGPT or Bard AI or the others to circumvent hegemonic educational values. Instead, we should be thinking about how we can prepare our students and the future workforce for ethically using these technologies. Resisting these changes in defense of wholesale nostalgia for how we learned or taught writing is tantamount to behaving like the proverbial ostrich with its head in the sand. So, what will come next with GenAI for writing? Right now, it is clear that ChatGPT can produce fundamental writing that is generic. However, as companies develop algorithms that are discipline-specific, GenAI writing bots will start building more complex abilities and producing more dynamic writing. Just as “Social Media Marketing Manager” evolved into a now-familiar job as online commerce emerged, so too will we see “Prompt Engineer” (someone who can prompt GenAI to deliver useful outcomes) become a prevalent career path throughout the next decade. For example, think about the U.S. outdoor recreational industry, which accounts for 1.9 percent of the Gross Domestic Product (GDP) and amounts to about $454 billion per year. This is an industry — like many others — that relies on the ability to rapidly produce nearly endless content in the form of magazines, product descriptions, travel guides, advertisements, videos, reviews and social media posts. When this industry further develops GenAI writing bots specific to its needs, or when tech companies develop these bots and sell access to them, the bots will evolve to produce the writing that is both needed and effective. Students will need to know how to write the prompts that will guide GenAI-driven content in those industries. 
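What "writing the prompts" might look like in practice can be sketched in a few lines. The fields, wording and product below are invented for illustration; the point is that a prompt is a structured, reusable artifact rather than a one-off question.

```python
# A hypothetical sketch of the "prompt engineering" skill the article
# describes: templating a structured prompt for product-description copy.
from dataclasses import dataclass

@dataclass
class ProductBrief:
    name: str
    category: str
    audience: str
    key_features: list

def build_prompt(brief: ProductBrief, tone: str = "enthusiastic") -> str:
    """Assemble a constrained, repeatable prompt from a content brief."""
    features = "; ".join(brief.key_features)
    return (
        f"Write a 60-word {tone} product description for '{brief.name}', "
        f"a {brief.category} aimed at {brief.audience}. "
        f"Highlight these features: {features}. "
        "Avoid unverifiable claims."
    )

brief = ProductBrief("TrailLite 2", "ultralight backpacking tent",
                     "weekend hikers", ["2-minute setup", "1.1 kg", "full rainfly"])
prompt = build_prompt(brief)
print(prompt)
```

The template encodes editorial constraints (length, tone, claims policy) once, so the same brief format can drive thousands of descriptions.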
Subscription GenAI services will inevitably become the norm for much of the content produced for commercial consumption, and many companies will build their own writing bots for their specific and private needs. Companies like Jasper AI are banking on this, and with nearly 1,000 new GenAI platforms launching each week, the model appears to be heading toward subscription-based access to proprietary GenAI platforms. Thus, schools and colleges will need to develop new ways to understand the role of writing in education, surrender ingrained beliefs about teaching writing, and teach students how to operate in the GenAI-supported environments of the future. Fortunately, not all educational institutions or teachers are jumping aboard the anti-AI bandwagon. Institutions like the University of Florida (UF), with its forward-thinking AI Initiative, are using this moment of technophobic reaction to critically engage the role of AI in all teaching and learning situations. Rather than imposing restrictions, UF administrators are holding roundtables and symposia about how to address GenAI writing bots in classrooms. When it comes down to it, GenAI is not the enemy of writers or writing instructors. It is just a new technological teaching tool, and we can learn something from it if we listen. Sidney I. Dobrin, Ph.D., is a professor and the chair of the Department of English at the University of Florida. He is the director of the Trace Innovation Initiative, a member of the Florida Institute for National Security, and an Adobe Digital Thought Leader. He is also the author of “Talking About Generative AI: A Guide for Educators” and “AI and Writing.”
Emerging Technologies
Some scientists are trying to convince the Food and Drug Administration to classify aging as a disease and a target for drug development. The Food and Drug Administration (FDA) considers aging to be a natural process. This makes it difficult to get FDA approval for drugs that seek to slow or reverse the biological process of aging. Instead, drugs intended to target aging must target a disease that often results from the aging process in order to demonstrate efficacy and gain approval. But there is growing consensus and effort among scientists to convince the FDA that aging itself should be classified as a disease and an appropriate target for drug development. This could be a major milestone for not just industry, but society. If the FDA is swayed, the resulting regulatory shift could mean approval of drugs or treatments that slow or reverse the aging process generally, before a patient develops disease. Researchers who view aging as a medical condition aren’t referring only to the inevitable passage of time. Instead, they view aging as a process of deterioration of our structure and function at the cellular level, the hallmark characteristics of which are genomic instability and damage to our DNA. And the World Health Organization (WHO) supports this view – WHO describes the process of aging as “… the impact of the accumulation of a wide variety of molecular and cellular damage over time.” Advanced chronological age is widely acknowledged to be the biggest risk factor for developing disease, but the process of aging itself results in fragility and mental decline, even absent any disease. As the WHO describes it, aging is “a gradual decrease in physical and mental capacity.” In fact, the WHO recently included “ageing associated decline in intrinsic capacity” as a disease code in the 11th edition of the International Classification of Diseases (ICD).
This is particularly significant, because diseases are conditions that can be treated, or are at least worthy of the effort, which implies that the WHO views aging-related decline as a treatable condition. The FDA likely will revisit this issue, as scientific pressure mounts to declare aging a drug target. Part of the problem is that the FDA needs more guidance to determine how “aging” should be measured, and needs to see evidence that targeting “aging” biomarkers actually has a beneficial effect. In addition to industry pressure, the FDA faces pressure from Congress to clarify the regulatory pathway for emerging regenerative therapies. Seven years have passed since the FDA gave its blessing for the Targeting Aging with Metformin (TAME) clinical trial, the first trial to target the effects of aging. TAME is a double blind, placebo-controlled, multi-center study designed to evaluate whether the diabetes drug Metformin can protect against age-related conditions such as cancer, dementia or cardiovascular disease in non-diabetic study participants. The goal of the trial is to establish an indication for aging, which the American Federation of Aging Research refers to as a “proof of concept” that aging is an appropriate drug target. In this way, the goal of TAME is more political than scientific. The scientific community continues to discuss its next steps, including potential aging-related biomarkers that should serve as acceptable targets for regenerative therapies. The acceptance of biomarkers to represent the aging process is not as big of a regulatory jump as it might seem. For example, the FDA recently expressed a willingness to consider biomarkers for the identification of early-stage Alzheimer’s disease.   We may be a few well-designed studies away from FDA acceptance of aging as a drug target, particularly as researchers bridge the divide between the TAME target and better-accepted disease targets. 
The looming possibility of a regulatory pathway for anti-aging therapies warrants an evaluation of how a new regulatory pathway will affect the market. Here’s what we could expect:   The market for regenerative therapies will expand to nearly the entire adult population. Regenerative therapy companies targeting the biological process of aging are currently limited to addressing specific diseases or medical conditions to obtain FDA approval. Drugs or therapies that get to market are typically limited in approved use for one disease; approval for additional diseases often comes years later. Removal of the disease-specific regulatory barriers would make regenerative therapies available as preventative care solutions. According to David Sinclair, a professor of genetics at Harvard Medical School and leader in the field of regenerative medicine, work to develop medicines that could prevent many diseases is going far slower than it should be, because aging is not recognized as a medical condition.   Simply put, therapies that target the process of aging itself will enjoy a nearly unlimited market, compared to therapies with the same mechanism that target a single disease. “There is little doubt that approval of drugs and therapies to treat aging in general would have a significant impact on research funding,” says Mark Mirkin, a corporate and securities lawyer at Rimon PC, the firm where I work. “For example, the venture capital industry would be very receptive to entrepreneurial ventures addressing diagnostics, therapeutics and devices directed toward the broad markets resulting from an FDA-approved aging-related disease sector. Start-ups, especially university spin-outs, with federal grant funding will be attractive seed capital financing opportunities for individual angel investors, angel funds and family offices, etc. … Early-stage venture capital funds, hungry for exciting new pathways, will follow. 
Such financing is certain to attract attention from strategic investors, including Big Pharma.”   Lastly, let’s talk about the prospect of democratized access to preventative care and better manufacturer-clinician communication in the process. Disease-specific drug approvals force clinicians to grapple on their own with the off-label use of a drug in addressing any disease or medical condition not approved by the FDA. Except to warn of contraindications, manufacturers cannot communicate with clinicians about off-label use. As a result, many patients outside the class for whom a drug was approved are not able to receive the drug until years have passed and millions more dollars have been spent on clinical trials in a disease-by-disease evaluation. FDA approval of regenerative therapies for the treatment of aging ultimately would put important safety information in the hands of community clinicians, allow access to reimbursement through health insurance, and therefore put the therapy within reach of the general population. It’s an unfortunate truth of health care in America that wealthy patients have better access to both preventative and disease care than less-privileged patients. This economic dichotomy would be alleviated, to a degree, with a regulatory shift to target aging as a treatment indication. Amy Baker is a partner in the Orlando office of Rimon PC, with clients in the life sciences industry, including legal issues regarding emerging technologies, regulatory compliance and Food and Drug Administration and Federal Trade Commission regulations.
Emerging Technologies
- U.S. President Joe Biden spoke about his plan to work with competitors around the world "to ensure we harness the power of artificial intelligence for good while protecting our citizens from this most profound risk." - Biden made the comments at the United Nations General Assembly on Tuesday. - The commitment comes as U.S. policymakers have endeavored to learn about how the technology works to determine the proper guardrails to protect Americans without stifling positive innovation. Speaking at the United Nations General Assembly on Tuesday, U.S. President Joe Biden made comments about his plan to work with competitors around the world "to ensure we harness the power of artificial intelligence for good while protecting our citizens from this most profound risk." "Emerging technologies such as artificial intelligence hold both enormous potential and enormous peril," Biden said at the U.N. Tuesday. "We need to be sure they're used as tools of opportunity, not as weapons of oppression. Together with leaders around the world, the United States is working to strengthen rules and policies so AI technologies are safe before they're released to the public, to make sure we govern this technology, not the other way around, having it govern us." His comments come as U.S. policymakers have endeavored to learn more about how the technology works in order to determine the proper guardrails to protect Americans without stifling positive innovation. The discussion is taking place with the backdrop of an intense competition with China, which is also seeking to be a world leader in the technology. On Wednesday, Senate Majority Leader Chuck Schumer, D-N.Y., hosted prominent tech CEOs including Tesla and SpaceX's Elon Musk and Meta's Mark Zuckerberg, as well as labor and civil rights leaders, to speak with senators about AI as the lawmakers contemplate legislative protections. 
Following the meeting, Schumer told reporters that everyone in the room agreed that government needs to play a role in regulating AI. How exactly that will look is still up for debate. Lawmakers differ on which is the proper body to regulate AI, as well as how light a touch policymakers should apply with regulation. Schumer warned it would be counterproductive to move too fast, pointing to the European Union, which has quickly created the AI Act. But, Schumer said, "on a timeline, it can't be days or weeks, but nor should it be years. It will be in the general category of months." In the meantime, several agencies have asserted their ability to rein in the abuses of AI with existing legal power. And the National Institute of Standards and Technology (NIST) in the Department of Commerce released a voluntary risk management framework for AI earlier this year. The Biden administration has also secured voluntary commitments from leading AI companies to test their tools for security before they release them to the public.
Emerging Technologies
The White House is relying on more than an executive order to bolster online security. The Biden administration has issued a National Cybersecurity Strategy meant to "rebalance" responsibilities toward the larger companies and organizations best-equipped to handle threats. The initiative will most notably expand the use of minimum security standards for critical infrastructure, and establish a common set of regulations to make it easier to comply with that baseline. Accordingly, the administration also wants improved public-private alliances that can more effectively defend infrastructure. The federal government would also modernize its networks and response policies to safeguard against threats. Companies may also be on the hook for sloppy behavior. The strategy would shift some liability for software and services to developers that ignore recommended cybersecurity practices or ship products with known vulnerabilities. The White House hopes to work with Congress and companies on legislation that bars total liability and sets tougher standards for "specific high-risk scenarios." A safe harbor provision would protect companies that make a sincere effort to develop secure products. The plan would also invest more in cybersecurity research and workforces. The administration hopes to cut back on "systemic" vulnerabilities at the internet's core, and to adapt to emerging technologies such as postquantum encryption (that is, protection against quantum-based hacks) and digital IDs. Some policies will be largely unchanged. The government will proactively "disrupt and dismantle" threats, including international cooperation on fighting ransomware. The implementation has already begun, the administration says. As Cyberscoop points out, though, there's no certainty the strategy will work as promised. The outline largely delegates responsibilities to individual agencies, Congress and in some cases state regulators. The result may not be as harmonious as hoped. 
It's also unclear if developers will welcome laws that make them liable for security holes. Still, the approach sets expectations for how federal officials will tackle digital threats going forward.
Emerging Technologies
From pv magazine USA Over 100 countries have pledged to be carbon neutral by 2050, according to the United Nations. A recent study conducted by researchers at the U.S. National Renewable Energy Laboratory (NREL) found that to totally decarbonize global electrical systems using solar in the 2050 to 2060 decade, the world would need to install 63.4 TW of PV. The findings of the study are published in “Photovoltaic Deployment Scenarios Toward Global Decarbonization: Role of Disruptive Technologies,” which appears in the journal Solar RRL. The researchers modeled viable trajectories to supply > 60 TW of installed solar capacity to meet the goal of global decarbonization and studied the effect of a disruptive technology on deployment cost and market opportunity. They concluded that it is possible for production to scale up to the 2.9 TW to 3.7 TW needed per year within 10 to 15 years, a goal that will cost from $600 billion to $660 billion. The model includes using existing technology as well as mature technologies using silicon and CdTe. The study estimates that disruptive technologies offer a $1 to $2 trillion market opportunity and that the potential for cost savings could amount to hundreds of billions of dollars. The disruptive technologies include cadmium telluride (CdTe)-based PV, perovskites and tandem technologies. CdTe, which is the technology used by solar manufacturers First Solar and Toledo Solar, among others, currently represents about 16% of the U.S. solar market, NREL reports. Perovskites have challenges with long-term durability, but they offer advantages in manufacturing because they can be printed quickly and relatively inexpensively in a roll-to-roll method. Tandems pair technologies, such as perovskite on silicon, and several companies are beginning to manufacture tandems including TandemPV, CubicPV and OxfordPV. According to NREL, tandem devices and minimodules are seeing efficiencies of as much as 31.25%. 
And while the efficiencies are promising, and other tandems continue to be developed, R&D is still needed in order for these devices to be manufactured at the scale needed to achieve terawatt capacities. “There are economically viable trajectories that get to the needed manufacturing capacity to produce the amount of PV needed to completely decarbonize the world’s energy economy,” said Jao van de Lagemaat, director of the Chemistry and Nanoscience Center at the U.S. Department of Energy’s NREL. “Emerging technologies could potentially lower the cost of this deployment significantly if they get commercialized in time.” To provide enough solar over the next two decades to decarbonize the global electrical system will require an “unprecedented ramp-up of production capacity”, but it is achievable, according to the NREL analysis. The estimated target of 63.4 TW needed between 2050 and 2060 is a 60-fold increase in the amount of installed PV worldwide today. When the researchers say this is possible, it is under the assumption that investors protect their investments by avoiding stranded production assets. The model demonstrates that a sustainable ramp-up of manufacturing is possible, and that disruptive technologies will play a role in lowering capital costs. Another assumption is that the lifespan of a PV module will increase considerably, possibly from an average of 30 years in 2020 to 50 years by 2040. Looking beyond the 2050 to 2060 decade, van de Lagemaat noted that module retirement and population growth will be the drivers of a “relatively modest demand” after global decarbonization is reached. In addition to van de Lagemaat, the other authors are Michael Woodhouse from NREL and Billy Stanbery from the Colorado School of Mines. NREL’s internal Laboratory Directed Research and Development program funded the research.
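The study's headline numbers can be sanity-checked with simple arithmetic. The sketch below uses only figures quoted above (the 63.4 TW target, the 2.9 TW to 3.7 TW per year production range, and the 60-fold increase over today's installed base); the assumption of a constant annual output with no module retirements is my own simplification, not the study's model:

```python
target_tw = 63.4                      # installed PV needed for full decarbonization
annual_low, annual_high = 2.9, 3.7    # TW/year production range from the study

# Implied installed base today, if 63.4 TW is a 60-fold increase
installed_today = target_tw / 60
print(f"~{installed_today:.2f} TW installed today")

# Years of sustained production needed to close the gap,
# ignoring module retirements and ramp-up time
gap = target_tw - installed_today
years_low = gap / annual_high
years_high = gap / annual_low
print(f"{years_low:.0f} to {years_high:.0f} years of sustained output")
```

The implied installed base of roughly 1.1 TW is consistent with published global PV capacity figures for the early 2020s, which is a useful cross-check that the article's "60-fold" claim and the 63.4 TW target agree with each other.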
Emerging Technologies
Modi may negotiate first India-US planetary defense deal with Biden Prime Minister Narendra Modi is reportedly set to negotiate India's first deal on planetary defense with United States President Joe Biden during his first official US state visit. Both countries are likely to boost their partnership in areas like human space travel, planetary defense, and commercial space activities. The collaboration is expected to help India's Gaganyaan mission and train human resources in space technology. Why does this story matter? Modi's state visit to the US is being hailed as one of the most significant diplomatic trips of 2023. It comes at a time when India has risen as a major economic and geopolitical force amid conflict between the US and China. So, India will try to cash in on Modi's trip and boost its defense and space sectors through bilateral cooperation and joint initiatives. India, US already committed to expand collaboration under ICET Under the Initiative on Critical and Emerging Technologies (ICET), the two countries have already agreed to expand collaboration in human space travel, commercial space, and training programs, said the White House, per Hindustan Times. Modi's visit will likely boost all these areas. It is also expected that there will be greater collaboration on astronaut training, crew rescue, and other operations in human space travel. Modi's US visit to boost India's Gaganyaan mission The collaborations also reportedly involve advanced training for Indian astronauts at the National Aeronautics and Space Administration (NASA) Johnson Space Center, according to a White House brief from January. The initiative is anticipated to benefit India's Gaganyaan mission, which would launch three crew members to a 400km orbit for a three-day trip in 2025 to show the country's capability of human space travel.
NASA to coordinate first India-US planetary defense deal The much-talked-about planetary defense deal, if finalized without any hassles, would be the first time India and the US will work together in the area. In 2016, NASA also established a planetary defense coordination office. It looks for and warns about near-Earth objects (NEOs) such as comets, asteroids, and potentially hazardous objects in order to "ensure global planetary defense efforts are coordinated and streamlined." India to likely sign up for Artemis Accords Meanwhile, India's potential involvement in the international exploration program called the Artemis Accords is another topic of conversation between India and the US. To create a framework for civil cooperation and the peaceful use of the Moon, Mars, and other celestial objects, at least 25 nations are already part of it. According to PTI, NASA is certain that India would sign up for the Artemis Accords.
Emerging Technologies
By Mischa Dijkstra, Frontiers science writer Macrotermes termite mound in Namibia. Image credit: D. Andréen Scientists studied the ‘egress complex’ of Macrotermes michaelseni termites from Namibia, which appears to promote moisture regulation and gas exchange. They showed that the layout of this lattice-like network of tunnels can intercept wind around the termite mound to create turbulence inside, which can power ventilation and control the interior climate. These properties can be copied to create a comfortable climate in human buildings with little energy. Among the approximately 2,000 known species of termites, some are ecosystem engineers. The mounds built by some genera, for example Amitermes, Macrotermes, Nasutitermes, and Odontotermes, reach up to eight meters high, making them some of the world’s largest biological structures. Natural selection has been at work improving the ‘design’ of their mounds over tens of millions of years. What might human architects and engineers learn if they go to the termites and consider their ways? In a new study in Frontiers in Materials, researchers showed how termite mounds can teach us to create comfortable interior climates for our buildings that don’t have the carbon footprint of air conditioning. “Here we show that the ‘egress complex’, an intricate network of interconnected tunnels found in termite mounds, can be used to promote flows of air, heat, and moisture in novel ways in human architecture,” said Dr David Andréen, a senior lecturer at the bioDigital Matter research group of Lund University, and the study’s first author. Termites from Namibia Andréen and co-author Dr Rupert Soar, an associate professor at the School of Architecture, Design and the Built Environment at Nottingham Trent University, studied mounds of Macrotermes michaelseni termites from Namibia. Colonies of this species can consist of more than a million individuals. 
At the heart of the mounds lie the symbiotic fungus gardens, farmed by the termites for food. The researchers focused on the egress complex: a dense, lattice-like network of tunnels, between 3mm and 5mm wide, which connects wider conduits inside with the exterior. During the rainy season (November through April) when the mound is growing, this extends over its north-facing surface, directly exposed to the midday sun. Outside this season, termite workers keep the egress tunnels blocked. The complex is thought to allow evaporation of excess moisture, while maintaining adequate ventilation. But how does it work? Andréen and Soar explored how the layout of the egress complex enables oscillating or pulse-like flows. They based their experiments on the scanned and 3D-printed copy of an egress complex fragment collected in February 2005 from the wild. This fragment was 4cm thick with a volume of 1.4 liters, 16% of which were tunnels. They simulated wind with a speaker that drove oscillations of a CO2-air mixture through the fragment, while tracking the mass transfer with a sensor. They found that air flow was greatest at oscillation frequencies between 30Hz and 40 Hz; moderate at frequencies between 10Hz and 20 Hz; and least at frequencies between 50Hz and 120 Hz. Turbulence helps ventilation The researchers concluded that tunnels in the complex interact with wind blowing on the mound in ways that enhance mass transfer of air for ventilation. Wind oscillations at certain frequencies generate turbulence inside, whose effect is to carry respiratory gases and excess moisture away from the mound’s heart. “When ventilating a building, you want to preserve the delicate balance of temperature and humidity created inside, without impeding the movement of stale air outwards and fresh air inwards. Most HVAC systems struggle with this. 
Here we have a structured interface that allows the exchange of respiratory gasses, simply driven by differences in concentration between one side and the other. Conditions inside are thus maintained,” explained Soar. The authors then simulated the egress complex with a series of 2D models, which increased in complexity from straight tunnels to a lattice. They used an electromotor to drive an oscillating body of water (made visible with a dye) through the tunnels, and filmed the mass flow. They found, to their surprise, that the motor needed to move air back and forth only a few millimeters (corresponding to weak wind oscillations) for the ebb and flow to penetrate the entire complex. Importantly, the necessary turbulence only arose if the layout was sufficiently lattice-like. Living and breathing buildings The authors conclude that the egress complex can enable wind-powered ventilation of termite mounds at weak winds. “We imagine that building walls in the future, made with emerging technologies like powder bed printers, will contain networks similar to the egress complex. These will make it possible to move air around, through embedded sensors and actuators that require only tiny amounts of energy,” said Andréen. Soar concluded: “Construction-scale 3D printing will only be possible when we can design structures as complex as in nature. The egress complex is an example of a complicated structure that could solve multiple problems simultaneously: keeping comfort inside our homes, while regulating the flow of respiratory gasses and moisture through the building envelope.” “We are on the brink of the transition towards nature-like construction: for the first time, it may be possible to design a true living, breathing building.” Fragment of the egress complex of Macrotermes michaelseni. Image credit: D. Andréen 3D scan of the fragment of egress complex used in the experiments. Image credit: D. Andréen and R. 
Soar
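The frequency-sweep measurement described above (driving a CO2-air mixture through the printed fragment and recording mass transfer at each oscillation frequency) lends itself to a short analysis sketch. The data below are synthetic, invented only to mirror the reported pattern of greatest flow between 30 Hz and 40 Hz; the study's actual measurements are not reproduced here:

```python
# Synthetic (frequency in Hz, relative mass transfer) pairs mirroring the
# reported ordering: moderate at 10-20 Hz, peak at 30-40 Hz, low at 50-120 Hz.
sweep = [
    (10, 0.55), (20, 0.60),
    (30, 0.95), (40, 1.00),
    (50, 0.30), (80, 0.20), (120, 0.15),
]

# Drive frequency with the greatest mass transfer
peak_hz, peak_flux = max(sweep, key=lambda pair: pair[1])
print(f"Peak mass transfer at {peak_hz} Hz")

def band_mean(lo, hi):
    """Average transfer over a frequency band, as the study groups results."""
    vals = [flux for hz, flux in sweep if lo <= hz <= hi]
    return sum(vals) / len(vals)

# The reported ordering: 30-40 Hz > 10-20 Hz > 50-120 Hz
assert band_mean(30, 40) > band_mean(10, 20) > band_mean(50, 120)
```

Binning a sweep into bands and comparing band means, as sketched here, is a common first pass for this kind of resonance-like result before fitting any physical model.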
Emerging Technologies
Google I/O 2023: Date, Time And How To Watch The Annual Developer Conference Online Google released the official Google I/O 2023 schedule on April 27, 2023. Google I/O is an annual event held by Google, bringing together thousands of developers from around the world. It is a developer conference held by Google in Mountain View, California. "I/O" in Google I/O stands for Input/Output. And with just 5 days to go, Google made a big announcement by showcasing a teaser video of the Pixel Fold - the first foldable phone engineered by Google. More details about the Google Pixel Fold will be revealed on May 10 at the Google I/O event. What Is Google I/O? The event provides an opportunity to learn about the newest and hottest techniques in development, as well as reveal details of upcoming products and services. This year's event is anticipated to be even bigger than last year's, with a wide range of topics including artificial intelligence, machine learning, and other emerging technologies likely to make an appearance. When is Google I/O 2023? Google announced it on their website and mentioned that the event will begin on 10 May 2023. Upon the announcement, many fans around the world tuned into social media platforms such as Twitter and YouTube as well as the Google Developers blog. With each new event comes a chance for attendees to try out upcoming announcements first-hand, which will no doubt be something that most attendees look forward to in 2023 at the main venue in Mountain View or one of the multiple online venues available if attending virtually. How To Watch Google I/O 2023 This year's event will be broadcast in front of a limited live audience and is open to everyone online on May 10, 2023.
You can tune in to the livestreamed keynotes, then dive into technical content and learning material on demand. If you're interested in taking part, you can register on the official Google website using your Google account. But if you'd just like to watch the main keynote, expect to be able to do so without needing to register. The main keynote starts at 10:30 PM IST, so make sure you don't miss out! Closer to the time, Google will provide a link that will allow everyone to watch the keynote online no matter where they are. There are sure to be some great announcements made at this year's event, so make sure you check it out! What To Expect At Google I/O Google has already raised the bar by announcing the Pixel Fold. We can expect more such product announcements on the event day. To kick off the conference as always, Google will hold its keynote address, which will likely dive into a wide range of topics. It could cover anything from AI advancements within the company to new branches of tech like automated driving. Following this, there will be seminars focusing on core areas such as Google toolkits, mobile application development, and cloud computing. AI developments have been at the forefront of the tech industry for some time now, and it looks like Google plans on using I/O 2023 to make some significant strides in this department. Additionally, attendees can expect plenty of news about Android 14 and ChromeOS, two popular operating systems within Google's product suite. All in all, it seems like it's going to be an exciting event that everyone is sure to enjoy!
Emerging Technologies
Sandeep Dikshit, New Delhi, January 29. The US will renew efforts by the West to wean India away from oil and arms dependency on Russia with a series of meetings in the coming days. Notable among them is the interaction between NSAs Ajit Doval and Jake Sullivan this week on cutting-edge technologies, besides a meeting in March between NATO and the Ministries of External Affairs and Defence. Senior US diplomat Victoria Nuland, considered the original architect of the US policy on Ukraine, will also meet senior officials to help “India find alternatives to Russian military equipment”. In India for foreign office consultations, US Under Secretary of State for Political Affairs Victoria Nuland told the Senate Foreign Relations Committee that India had “60 years of entanglement” with Russia which needs to end. She was essentially reiterating the statement she made last May after visiting India. Her visit follows Assistant Secretary of State Donald Lu's trip early this month. Nearly in parallel, NSA Ajit Doval and his US counterpart Jake Sullivan will be in Washington for the first “Initiative on Critical and Emerging Technologies”. The initiative took shape after PM Narendra Modi underlined to US President Joe Biden the need to get the partnership on cutting-edge technologies going when they met in Tokyo last year for the Quad Leaders' Summit. The meeting on Tuesday would “forge closer linkages between government, academia and industry of the two countries in areas such as AI, quantum computing, 5G/6G, biotech, space and semiconductors”, said an official statement, while the White House said it would expand partnership in critical and emerging technologies. On Wednesday, there will be a high-level dialogue on space issues, which will also be attended by ISRO chief S Somnath. In March, senior officials from NATO's policy planning division will hold a dialogue with officials from the Ministries of External Affairs and Defence.
The dialogue with NATO has taken place infrequently, but the March meeting is of significance as it will be held after Nuland's interaction and the hi-tech dialogue. However, the news about the India-NATO meeting has drawn an adverse reaction from China. "If such cooperation goes further, New Delhi will have to encounter severe backlash from its people and face greater losses of diplomatic and strategic autonomy," wrote the Global Times. The attempt to bring India close to the Western military bloc received a boost when the chair of the UK's defence select committee suggested the AUKUS agreement between Australia, the UK, and the US should expand to include India and Japan. AUKUS is aimed at helping Australia acquire nuclear-powered submarines. Japan, Australia and the US are part of the Quad, whose fourth pillar is India.
Emerging Technologies
Reuters reports: General Motors is exploring uses for ChatGPT as part of its broader collaboration with Microsoft, a company executive told Reuters. "ChatGPT is going to be in everything," GM Vice President Scott Miller said in an interview last week. The chatbot could be used to access information on how to use vehicle features normally found in an owner's manual, program functions such as a garage door code or integrate schedules from a calendar, Miller said. "This shift is not just about one single capability like the evolution of voice commands, but instead means that customers can expect their future vehicles to be far more capable and fresh overall when it comes to emerging technologies," a GM spokesperson said on Friday. More details from Engadget: According to Semafor, the digital assistant will operate differently from other chatbots like Bing Chat. GM is reportedly working on adding a "car-specific layer" on top of the large language models that power ChatGPT.
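GM has not published how its "car-specific layer" works; the sketch below is a hypothetical illustration of the general idea of wrapping a general-purpose language model with vehicle context. All names and the prompt format here are invented for illustration, not GM's or OpenAI's API.

```python
# Hypothetical sketch of a "car-specific layer" on top of a general LLM.
# Everything here (function names, prompt wording, the vehicle model) is
# invented for illustration; GM has not published its implementation.

def build_vehicle_prompt(user_question: str, vehicle_model: str) -> str:
    """Prepend vehicle-specific context before handing a question to an LLM."""
    system = (
        f"You are an in-vehicle assistant for a {vehicle_model}. "
        "Answer only from the owner's manual and current vehicle settings."
    )
    return f"{system}\n\nDriver: {user_question}"

prompt = build_vehicle_prompt(
    "How do I program my garage door code?", "hypothetical GM model"
)
print(prompt)
```

The layered prompt would then be sent to the underlying model; the point is that the assistant's answers are constrained to the vehicle's own documentation rather than the open web.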
Emerging Technologies
FILE PHOTO: A Long March-3B carrier rocket carrying the Beidou-3 satellite, the last satellite of China's Beidou Navigation Satellite System, takes off from Xichang Satellite Launch Center in Sichuan province, China, June 23, 2020. China Daily via REUTERS/File Photo

LONDON, Oct 10 (Reuters) - China is using its financial and scientific muscle to manipulate technologies in a manner that risks global security, Britain's top cyber spy will say on Tuesday, warning that Beijing's actions could represent "a huge threat to us all." In a speech, Jeremy Fleming, director of the GCHQ spy agency, will say that the Chinese leadership is seeking to use technologies such as digital currencies and its Beidou satellite navigation network to tighten its grip over its citizens at home while spreading its influence abroad. "They seek to secure their advantage through scale and through control," Fleming will say in the annual security lecture at the Royal United Services Institute think tank, according to extracts released by his office. "This means they see opportunities to control the Chinese people rather than looking for ways to support and unleash their citizens' potential. They see nations as either potential adversaries or potential client states, to be threatened, bribed, or coerced." The remarks are Fleming's latest public warnings about Beijing's behaviour and aspirations.
Last year, he said the West faced a battle to ensure China did not dominate important emerging technologies such as artificial intelligence, synthetic biology and genetics. Fleming will say the Chinese leadership is driven by a fear of its own citizens, of freedom of speech, of free trade and of open technological standards and alliances: "the whole open, democratic order and the international rules-based system." That fear, combined with China's strength, is driving it "into actions that could represent a huge threat to us all," he will say. China has previously described similar accusations from Western governments as groundless and politically motivated smears. Fleming will also highlight technologies where he says China is seeking to gain leverage, such as its development of a centralised digital currency that would allow it to monitor the transactions of users, as well as possibly to evade the sort of sanctions Russia has faced since its invasion of Ukraine. He will also point to Beidou, China's answer to the US-owned GPS navigation system. "Many believe that China is building a powerful anti-satellite capability, with a doctrine of denying other nations access to space in the event of a conflict," he will say. "And there are fears the technology could be used to track individuals."

Reporting by Michael Holden in London; editing by Matthew Lewis.
Emerging Technologies
World Economic Forum expects 14M fewer jobs in five years

The World Economic Forum (WEF) predicts the global labor markets will see 14 million more jobs slashed than created in the next five years. The WEF's "Future of Jobs" report expects that 69 million jobs will be created between 2023 and 2027, but 83 million jobs will be destroyed. The annual report is based on survey responses from more than 800 companies. The figures constitute "a structural labour-market churn of 152 million jobs, or 23% of the 673 million employees in the data set being studied," according to the research. A net loss of 14 million jobs represents 2 percent of current employment. The analysis, though, notes that the count is "subject to a high degree of uncertainty." The expected changes are attributed to a changing work landscape driven by emerging technologies, climate change adaptation and other factors. In 2016, the annual report suggested that 35 percent of workers' skills would be disrupted in the following five years. In 2023, the analysis predicts 44 percent of workers' skills will be disrupted in the next five years. "The human-machine frontier has shifted," according to the World Economic Forum, though businesses are introducing automation "at a slower pace than previously anticipated." Organizations estimate that 34 percent of today's business-related tasks are machine-performed, and the new report predicts that figure will reach 42 percent by 2027.
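The WEF's headline numbers are internally consistent and easy to verify from the figures quoted above:

```python
# Arithmetic check of the WEF "Future of Jobs" figures (2023-2027).
created = 69_000_000      # jobs expected to be created
destroyed = 83_000_000    # jobs expected to be destroyed
employees = 673_000_000   # employees in the surveyed data set

net_loss = destroyed - created   # 14 million net jobs lost
churn = created + destroyed      # 152 million jobs "churned"

print(net_loss)                           # 14000000
print(round(churn / employees * 100))     # 23 (% structural churn)
print(round(net_loss / employees * 100))  # 2 (% of current employment)
```

So the 14 million net loss, the 152 million (23%) churn, and the 2 percent figure all follow directly from the 69 million / 83 million survey estimates.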
Emerging Technologies
China leads the US in the research of 37 out of 44 key technologies tracked by an Australian think tank. These critical and emerging technologies span a range of sectors including defense, space, and energy. China's research lead in these sectors could have implications for democratic nations. China has a "stunning lead" ahead of the US in high-impact research across critical and emerging technologies, according to Canberra-based independent think tank Australian Strategic Policy Institute, or ASPI. The world's second-largest economy is leading the US in researching 37 out of 44 critical and emerging technologies across the defense, space, energy, and biotechnology sectors — including research of advanced aircraft engines, drones, and electric batteries — the ASPI said in its Thursday report. The US State Department partly funded the study. The ASPI found that for a few fields, all of the world's top 10 research institutions are in China, and they collectively generate nine times more high-impact research papers than the second-ranked country — which is the US in many cases. In particular, China has the edge in defense and space-related technologies, the ASPI said. "Western democracies are losing the global technological competition, including the race for scientific and research breakthroughs," the report, led by the institute's senior analyst Jamie Gaida, said. The ASPI said China's lead is the product of "deliberate design and long-term policy planning" by President Xi Jinping's administration and those who came before him. The report's authors warned that China's research dominance in strategic sectors could have adverse implications for democratic nations. In the immediate term, the lead could allow China to "gain a stranglehold on the global supply of certain critical technologies." In the longer run, China's leading position could propel it to excel in almost all sectors, including technologies that don't exist yet, per the ASPI. 
"Unchecked, this could shift not just technological development and control but global power and influence to an authoritarian state where the development, testing and application of emerging, critical and military technologies isn't open and transparent and where it can't be scrutinized by independent civil society and media," the think-tank said. The ASPI urges governments around the world to collaborate and invest more in research to catch up to China. It also recommended measures such as visa screening for visitors to research facilities to limit "illegal technology transfers" to China and said governments should consider "narrow limits" on the movements of researchers who are experts in strategic sectors. "Recruiting personnel to lead research programs in, for example, defense-relevant technologies in adversarial states poses a clear threat to a country's national security," said the ASPI. It added that serious national-security risks need to be identified before movement restrictions are implemented as they need to be weighed against a person's right to freedom of movement. Chinese foreign ministry spokesperson Mao Ning said at a scheduled press conference on Friday that her country's scientific and technological progress contribute to global technological advancement. "We oppose hegemonism in science, decoupling, and breaking of industrial and supply chains," said Mao, according to an official transcript. "Politicizing scientific and technological issues, using them as weapons for ideological confrontation and patching up coteries harm the interests of the whole world." Read the original article on Business Insider
Emerging Technologies
Baidu and a rival, Pony.ai, received China's first licenses in April to operate taxis with no one in the driver's seat but with a safety supervisor on board. That came 18 months after Waymo started driverless ride-hailing service in Phoenix, Arizona, in October 2020. Founded in 2000 as a search engine operator, Baidu has expanded into artificial intelligence, processor chips and other technology. It says its autonomous vehicles could, if successful, make driving cheaper, easier and safer. "We believe the top goal of autonomous driving is to reduce human-caused traffic accidents," said Wei Dong, vice president of Baidu's Intelligent Driving Group. Autonomous driving is one of an array of emerging technologies, from artificial intelligence to renewable energy, that Chinese companies are pouring billions of dollars into trying to create, urged on by the ruling Communist Party. Beijing wants to join the United States, Europe and Japan in the ranks of technology powers to build its prosperity and global influence. That holds out the possibility of new inventions but also fuels tension with Washington and its allies, which see China as a strategic challenger. Baidu's Apollo autonomous driving platform was launched in 2017 and the Apollo Go self-driving taxi service three years later. Taxi service with a driver at the wheel to take over in an emergency started in 2020 and has expanded to Beijing, Shanghai and eight other cities.
Apollo Go says it provided 213,000 rides in the final quarter of last year, making it the world's busiest self-driving taxi service. For rides with no driver and a supervisor in the passenger seat, Apollo Go started in a 60-square-kilometer (23-square-mile) area of Yizhuang, an industrial district on Beijing's southeastern outskirts with wide streets and few cyclists or pedestrians. "It's very convenient," said Zhao Hui, 43, who uses Baidu taxis in Yizhuang. "It might feel a little safer" than a human driver, Zhao said. "Sometimes there are small objects, maybe some that people don't notice. They can spot them and stop." Other developers include Deeproute.ai and AutoX in Shenzhen. Pony.ai, founded in 2016 and backed by venture capital, is road-testing autonomous cars and semi-trailer trucks. Industry plans are "very aggressive to deliver the robo-taxi to the consumer," said Owen Chen of S&P Global Mobility. Automaker Geely, owner of Volvo Car, Lotus and Polestar, has announced plans for satellite-linked autonomous vehicles. Network equipment maker Huawei Technologies Ltd. is working on self-driving mining and industrial vehicles. The ruling party is promoting automation to shore up economic growth by making its shrinking, aging workforce more productive. China's working-age population has fallen by 5% since its 2011 peak and is forecast to slide further. "People are very expensive," said Wei. "Once this public service no longer needs people, the cost can drop rapidly." As for whether China can lead the global industry, "it's a race at the moment," said Pete Kelly, managing director of the automotive division of GlobalData Plc. "But they could easily do this, because of the way decisions are made and deployments happen in China," Kelly said. McKinsey & Co.
in 2019 estimated China's potential market for self-driving taxis, buses, trucks and other equipment and software in the trillions of dollars. The earliest products are unlikely to earn back their development costs but might be "loss leaders" to sell other services, said Kelly. Baidu says it already is selling navigation and other technology to automakers. It projects total sales of 10 billion yuan ($1.5 billion) based on agreements made so far, chairman Robin Li said in a May 26 conference call with reporters. The company says it spent a total of 5.6 billion yuan ($885 million) on research and development last year but doesn't disclose how much of that went into autonomous vehicles. Baidu reported a profit of 10.2 billion yuan ($1.7 billion) last year on 124.5 billion yuan ($19.5 billion) in revenue. Baidu and its most advanced rivals have reached the industry's Level 4 out of five possible levels of technology. That means their systems can operate without a driver but must be pre-loaded with a detailed map. That limits the area where they can operate. Lower levels of technology range from cruise control, a feature available for decades, up to Level 3, which allows hands-free highway driving. Self-driving robot carts already are widely used in factories, warehouses and other tightly controlled settings. Once self-driving taxis are on the road, operators must gather information about pedestrians and local conditions based on daily driving, a time-intensive process that will slow the technology's rollout. For a maneuver such as a U-turn, the Apollo system tracks as many as 200 vehicles, pedestrians and other potential obstacles as much as 100 meters (110 yards) away, according to Baidu. Wei said Baidu would be happy to have foreign partners adapt its technology to their markets but has no export plans yet while it focuses on Chinese cities. Intersections still are a challenge, Wei said.
Pedestrians in China are used to drivers gradually making their way through crowds in a crosswalk while turning on a green light, but a robot car can't do that. "Our car always will yield to people and might wind up not getting through the light," Wei said. Baidu has launched its own self-driving car brand, JIDU, which unveiled a concept car this month and says it is aiming for a sticker price of 480,000 yuan ($72,000). The company also has deals with three Chinese electric vehicle brands to produce cars with computers, radar and light-based sensors built in instead of bolted onto the roof. To encourage others to use Apollo, Baidu has made the platform open-source and says it has signed up 210 industry partners and 80,000 developers who might create products based on it. Apollo Go says it plans to expand self-driving taxi service to 65 cities by 2025 and 100 by 2030. Compared with a human driver, "the difference isn't big," said Zhang Zhihua, 29, an interior designer who uses driverless Baidu taxis in Yizhuang. "If you aren't looking at the front and if you are playing on your mobile phone, then it feels exactly the same."
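The article describes Apollo tracking up to 200 obstacles within roughly 100 meters for a maneuver like a U-turn. A toy sketch of that kind of proximity filter is below; it is not Baidu's actual code, and the class and function names are invented for illustration:

```python
import math
from dataclasses import dataclass

# Toy proximity filter, loosely modeled on the behavior described in the
# article: keep at most 200 obstacle tracks within ~100 m of the vehicle.
# This is an illustrative sketch, not Baidu Apollo's implementation.

@dataclass
class Obstacle:
    x: float  # meters, relative to the ego vehicle
    y: float

def relevant_obstacles(obstacles, max_range_m=100.0, max_tracks=200):
    """Return the nearest obstacles inside the tracking radius, capped."""
    in_range = [o for o in obstacles if math.hypot(o.x, o.y) <= max_range_m]
    in_range.sort(key=lambda o: math.hypot(o.x, o.y))
    return in_range[:max_tracks]

# Example: a pedestrian 5 m away and a truck 150 m away.
nearby = relevant_obstacles([Obstacle(3, 4), Obstacle(90, 120)])
print(len(nearby))  # 1 (the truck is outside the 100 m radius)
```

A real planner would fuse sensor tracks over time and predict trajectories; the cap and radius simply bound how much the system must reason about at once.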
Emerging Technologies
A secure and resilient supply of critical minerals is paramount to our national security and is an economic necessity. Unfortunately, the United States is almost entirely dependent on foreign nations for our supply — an alarming fact considering most of the technology our government uses today requires these minerals. Every day, the Department of Homeland Security works to secure our border, counter terrorist threats, harden our cybersecurity defenses, and protect us from emerging threats such as weaponized drones and biological weapons. This is a sweeping and, at times, difficult mandate to fulfill. Fortunately for the public, the department has some of the best people in the world executing its critical mission. But our enemies are getting savvier and more sophisticated. For DHS officials to be successful, they need to have access to cutting-edge equipment and technology. Technologies like high-speed communications, surveillance systems, radar satellites, and secure computer networks allow DHS agents and officers to mitigate, prepare for, and respond to any threat facing the country. But these technologies aren't possible without minerals like cobalt, lithium, and rare earth elements, which include 17 minable metallic elements. These are not just crucial for technology at DHS but are necessary inputs in critical and emerging technologies in both the defense and civilian spaces, from fighter jets to electric vehicles to semiconductors. Critical minerals play an integral part in our ability to innovate and produce the tools necessary to keep America free, secure, and prosperous in the 21st century. While the U.S. was once the leader in critical mineral production, China now dominates the market. Beijing controls around 90% of the world's REEs and has been the source of 80% of U.S. imports of REE compounds and metals in recent years. It processes 50%-70% of the world's lithium and cobalt.
China understands the importance of critical minerals and REEs in future technology, so it has made strategic decisions to corner and control the market. We've witnessed firsthand what can occur when despots and dictators control critical resources. President Vladimir Putin has weaponized Russia's oil and natural gas supply, constricting or cutting off energy to European countries that oppose its unprovoked and unjustified attack on Ukraine. Some European Union member states are still importing Russian energy out of sheer necessity. They've been cornered into funding Putin's war machine, and this dependence limits their geopolitical options. China has taken similar actions in other key industries. During the COVID-19 outbreak, China restricted exports of personal protective equipment and other necessary medical supplies. There was also debate in Beijing about restricting critical pharmaceutical exports to the U.S., which could have had a devastating impact on Americans' access to medicine. Beijing also has a history of weaponizing its critical mineral supply. In the 2000s, China imposed export restrictions and taxes on REEs, spurring significant price increases globally. Given the past practices of Putin and Chinese dictator Xi Jinping, the U.S. should take every step necessary to ensure it is not reliant on dictators for rare earth elements. They are too fundamental to our economic and national security. Ending U.S. dependency on China for these products, and developing secure and resilient supply chains, will require a whole-of-America approach, with both the public and private sectors working in tandem. Fortunately, political leadership on both sides of the aisle are aware of the seriousness of this issue and are putting the wheels in motion to end our dependence. But more needs to be done. 
In March, President Joe Biden invoked the Defense Production Act to increase domestic production of strategic critical minerals like lithium, nickel, cobalt, and others necessary for large-capacity batteries that power electric vehicles and store renewable energy. This is a prudent decision and builds upon several similar initiatives undertaken during the Trump administration. With automakers transitioning to EVs at such a rapid pace, we must ensure that the U.S. retains the capability to power them without China's help. There are also government incentives, such as the proposed EV tax credit, that can spur demand for electric vehicles and boost the need for a secure supply chain of the critical minerals necessary for their production. Lawmakers are also on the cusp of passing major legislation, the Bipartisan Innovation Act, which would make huge investments into our domestic semiconductor industry and help secure our critical mineral supply chains. While there is posturing on both sides of the aisle that has prevented passage thus far, this issue is too important to be overlooked. Lawmakers have a responsibility to get this done. Congress is considering other legislative solutions to bolster our domestic supply chain of critical minerals, but this will prove challenging under the overbearing environmental regulations levied against American miners. States like West Virginia are using federal funding to extract critical minerals from coal waste in abandoned mines and surrounding areas and waterways. It will take the federal government, state and local governments, and the private sector to launch initiatives like this one. The U.S. is facing myriad challenges. If we want to secure our borders, prevent attacks on the homeland, defend our troops, and produce the vehicles and technology of the future, we need a robust and secure supply of critical minerals. Now is the time to roll up our sleeves and get to work. Our future depends on it. 
Chad Wolf is the former acting U.S. Secretary of Homeland Security. He is currently the executive director and chief strategy officer of the America First Policy Institute.
Emerging Technologies
India's Services Exports Industry Has Begun Its Second Innings

India has gained about 80 basis points in market share in services exports since Covid, according to estimates by Morgan Stanley

The butterfly effect, in popular culture, is when small changes in initial conditions can lead to large-scale and unpredictable variations in the future state of the system. The story of India's IT industry seems to be a consequence of many such effects over the years. Today, its resilience in the pandemic and continuing growth have led to a rise in India's share in services globally, a trend that is expected to persist. India has gained close to 80 basis points in market share in services exports globally since the onset of the pandemic, according to estimates by Upasana Chachra, chief India economist at Morgan Stanley. Factors such as remote working contributed to the rise. While there will be some slowdown in annual terms owing to slower global growth, market share gains tend to be sticky and will provide a cushion, Chachra said. India's services exports have remained resilient during the Covid-19 pandemic and amid current geopolitical uncertainties, driven by higher demand for digital support, cloud services, and infrastructure modernisation catering to new challenges, according to the Economic Survey. A further increase in the share is likely, with services exports registering a growth of 27.7% in April-December 2022 compared with 20.4% in the corresponding period a year earlier. Beyond the next two years, the outlook looks exciting as tech penetration spreads, according to a note authored by Pranjul Bhandari, chief economist at HSBC. The National Association of Software and Service Companies has a target of $500 billion in IT services revenues by FY30. If that comes to bear, net services exports could rise by $100 billion between FY23 and FY30, Bhandari said.
On a net basis, they now rake in $60 billion a year more in revenues than in the pre-pandemic period, said Bhandari. That has provided some much-needed relief on the balance of payments front, she added. "We've always known about India's promising IT sector. But what's driving the recent exuberance? The rise in the revenues of the top IT companies has definitely played a role," said Bhandari. But other IT services exports are rising even more rapidly. Mid-sized IT firms have been gaining market share; professional and management consulting services exports have grown the fastest at 29% CAGR, followed by computer services at 16% and research and development at 13%, according to Bhandari. One sector that generates revenues under each of these headings and has contributed to the fast growth in IT services is the rise of global capability or delivery centres set up by large MNCs, according to her.

The Rise Of Global Capability Centers

What began as a cost arbitrage has now become a key source of high-quality talent and leading-edge innovation. Recognising the value of the Indian demographic dividend, global corporations have set up over 1,500 GCCs in India as of September 2022, according to a research note by EY. India accounts for over 45% of such centres in the world outside of home country, according to Nasscom. Mirroring the growing maturity of the Indian software services industry, GCCs today have moved past the "cost centre" stage, it said. They are now increasingly focusing on high-value activities such as IP creation, building competencies around emerging technologies, setting up centres of excellence and taking full ownership of vendor management.

Speed Breakers Ahead?

After a terrific run, however, it is possible that the pace of increase in India's IT services exports moderates over the next year or two, in line with slowing global growth, Bhandari cautioned.
The Economic Survey, too, cautioned that insofar as India is concerned, some headwinds may be observed in the coming months because of slowing growth in some of India's major trading partners. Yet, on the contrary, services exports may improve as runaway inflation in advanced economies drives up wages and makes local sourcing expensive, opening up avenues for outsourcing to low-wage countries, including India, it said. EY sees a long-term opportunity as the reliance on technology increases. In the next two decades, it said, as India further gains strength as the world's technology and innovation hub, the share of transformational and more complex, expertise-based services will enable the Indian IT sector to grow faster.
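The market-share and growth figures in this article mix units (basis points and percentages); a quick conversion check of the numbers quoted above:

```python
# Unit check for the figures quoted in the article.
# 1 basis point (bps) = 0.01 percentage points.

def bps_to_points(bps: float) -> float:
    """Convert basis points to percentage points."""
    return bps / 100.0

# The ~80 bps market-share gain is 0.8 percentage points:
print(bps_to_points(80))  # 0.8

# Services export growth accelerated from 20.4% to 27.7% year on year,
# i.e. a pickup of about 730 bps:
print(round((27.7 - 20.4) * 100))  # 730
```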
Emerging Technologies
ChatGPT is a viral chatbot that uses generative AI to create a compelling and interesting chatbot. Andriy Onufriyenko/Getty Images

ChatGPT has been touted as a Google killer and the turning point for AI to enter the mainstream. It's fascinating to play with, but ChatGPT is still limited, and the backlash has already started. Emerging technology like ChatGPT is usually called overhyped until it becomes essential.

Over the past few weeks, social-media feeds and tech-pundit newsletters have been filled with stories and conversations written by artificial intelligence. All of it was written by ChatGPT, a project from the "capped-profit" research group OpenAI. ChatGPT lets users ask its bot questions or give it prompts using GPT-3, an impressive piece of natural-language-processing AI tech. GPT-3 has been available to the public since earlier this year. What caused some people to call ChatGPT AI's "iPhone moment" was its ease of use. Depending on the prompt, ChatGPT can write an essay on the Napoleonic Wars or craft someone's cover letter for a job application. Some are now saying ChatGPT could be the end of Google's search dominance, pave the way for widespread cheating in universities, or even do away with many types of white-collar jobs. But tech insiders and pundits are already becoming cynical about ChatGPT, with some calling it "overrated," "overhyped," and "dumber than you think." It's what comes next, during the period when general sentiment toward ChatGPT turns skeptical, that will determine if and how tech like ChatGPT becomes an essential part of our lives.

ChatGPT and the Gartner Hype Cycle

In layman's terms, ChatGPT is a chatbot using GPT-3, a state-of-the-art AI model that uses a massive amount of data to generate humanlike text.
It's capable of performing a variety of natural language processing tasks, such as answering questions, summarizing text, and generating complete sentences and paragraphs. It's considered one of the most advanced language-processing AI models currently available. How advanced? The above paragraph was written by ChatGPT. You can criticize the prose (an editor might take issue with the use of the adjective "massive" for something nonphysical, and the sentence structure tends toward the simplistic) but it reads as if a human wrote it. GPT-3 is part of a branch of machine learning called "generative AI." Generative AI analyzes huge amounts of data and trains models that can generate new things. Feed a generative-AI program millions of pieces of art, and it learns how to produce art, like OpenAI's other viral sensation DALL-E. Feed a generative-AI program a huge amount of computer code, and it can create code. It's all very exciting and new, which is where the Gartner Hype Cycle, from the tech-research and consulting firm Gartner, comes in handy. Debuted by the analyst Jackie Fenn in 1995, it lays out the five stages for any emerging technology.

Gartner Hype Cycle. Shayanne Gal/Insider

Most of the significant tech innovations of the past 30 years (the internet, the smartphone, cloud computing) can be tracked using the Gartner Hype Cycle. Take the internet. The "technology trigger" would be the first public demonstration of the World Wide Web in 1989. It's exciting but barely noticed by the general public, and very few understand its potential. The "peak of inflated expectations" hits in the 1990s. The internet is rapidly expanding, fueled by exciting but sometimes overly ambitious or frivolous ideas and the dot-com bubble of the late '90s. The "trough of disillusionment" is when the dot-com bubble burst. Excitement cools, and a period of disillusionment sets in. Companies get wiped out. The conventional wisdom is the internet was overhyped.
Finally, you get the longest part of the process: the "slope of enlightenment" into the "plateau of productivity." The internet matures. Developers and investors focus on practical applications and refining the user experience. The mobile web launches, and the modern internet, which undergirds a good portion of the economy, blooms.

ChatGPT's future

So where is ChatGPT now? Gartner placed generative AI at the very beginning of the track in its August emerging-technologies trend report, its most recent. But a lot has happened since August, especially now that ChatGPT has emerged as an easier way for people to access its technology. ChatGPT seems to have led to a surge in sign-ups for other generative-AI tools, like Lensa AI. Harvard Business Review wrote that this latest version of ChatGPT "feels like a tipping point" for AI, as it shows the technology is ready for a wider variety of uses. In other words, it feels like ChatGPT is nearing the peak of inflated expectations. That means people are already finding all the ways it falls short. At Barron's, Tae Kim wrote about all the ways he said ChatGPT was "overhyped," comparing it to Amazon's Alexa and Apple's Siri. "While great AI tools for specific tasks," he wrote, "they never fulfilled their initial promise." Much like the internet in 2001, generative AI like ChatGPT is a young technology. Developers will continue improving the underlying models, and other developers will figure out how to deploy generative AI for useful purposes, not just as interesting playthings. If things work out, general users soon won't realize they're interacting with a bot. Workers will begin to work side by side with generative AI as a matter of course, the same way billions of workers today turn to Google to quickly figure something out. That's when tech like ChatGPT becomes essential: when it makes the long climb up the slope of enlightenment. To be sure, the rise of generative AI and more forms of ChatGPT aren't assured.
Many emerging technologies never reach the final stage of being productive and get mired in a failure of the tech or the business model — or a combination of the two. Perhaps it's best to let ChatGPT predict its own future: "As people begin to realize that artificial intelligence is not a magic solution to all of our problems, the hype around it will die down, and it will be able to develop and improve more effectively." Well put, ChatGPT. In a couple more years, you'll be able to say it even better.
Emerging Technologies
Ignis Energy awarded three geothermal exploration licenses in Bingöl, Türkiye

The application made by Ignis Enerji Üretim A.S. for three geothermal exploration licenses within the borders of Bingöl Province, Türkiye has been approved. Ignis Enerji Üretim A.S. (Ignis Energy) has been awarded three geothermal resource exploration licenses in the Bingöl Province in Türkiye after its application was approved by the Special Provincial Administration. With this development, Ignis Energy expands its geothermal portfolio in Türkiye, further transferring the company’s vast experience in the oil and gas industry to the geothermal energy sector to develop and evaluate different technologies for sustainable energy production.

A risk-balanced approach

These new licenses fit well in the context of Ignis’ approach of building up a risk-balanced portfolio of geothermal opportunities. The overall approach provides a wide selection of geothermal development opportunities that all advance on separate timelines. Ignis evaluates each geothermal project within its own unique structure and context. Ignis CEO Richard Calleri said, “We understand the need for an expedient line of sight to production to make projects financially viable,” and underlined the company’s goal of building a portfolio that identifies 1 GW of renewable energy production capacity by 2030, relying mainly on traditional technology for exploiting geothermal resources while retaining the ability to tap into emerging technologies where they fit. Ignis has made selective technology investments with leading providers that allow the company to utilize these as needed. Ignis has geothermal licenses with a wide range of energy production potential, ranging from mainly traditional hydrothermal systems to some opportunities that may require engineered solutions like EGS and closed-loop designs.
“Overall, our portfolio is aimed to be technology-agnostic and it supports the needed scalability in geothermal by providing the opportunities that these engineered solutions need to prove their abilities,” said Chief Operating Officer Marcus Oesterberg.

Plenty of untapped potential

The push into Türkiye is clear, as the country still uses only a small part of its geothermal potential. There is incredible energy potential waiting to be discovered in the East. “We are working to reveal this and turn it into long-term value for the Turkish people and the region,” said Country Managing Director Cannur Bozkurt. The region where Ignis obtained geothermal exploration licenses is important as it falls within the intersection area of the North Anatolian Fault Zone (NAF) and the East Anatolian Fault Zone (EAF). The region, where intense tectonism is observed, hosts many natural outflows with temperatures ranging between 30 °C and 68 °C, as well as rich geothermal manifestations indicating the existence of a strong geothermal energy source.

Source: Email correspondence via our Turkish language platform JeotermalHaberler
Emerging Technologies
Nayib Bukele, President of El Salvador, announced late last night that the country would be purchasing one bitcoin every day starting today. The move to dollar-cost-average (DCA) into bitcoin is common in the community, however novel for a nation state. Currently, the country holds a bitcoin treasury of 2,381 BTC, valued at over $39 million. Bukele has made a habit in the past of making large BTC purchases during times of market volatility and buying the dip.

Outside of just purchasing BTC and holding it on the balance sheet for El Salvador, the Bukele administration has fostered the birth of events gathering world leaders from all over the world to learn about the financial freedom bitcoin adoption offers. In September, it was announced that representatives of over 30 countries, with over 110 speakers including Senator Indira Kempis from Mexico, would gather to discuss financial inclusion. During this visit, attendees were introduced to the financial applications of bitcoin and were able to see bitcoin in action at Bitcoin Beach. Then, in October, the State Treasurer from North Carolina in the U.S. traveled to El Salvador – on his own dime – to learn about the changes bitcoin has already made for the Salvadoran economy. “What we witnessed in El Salvador is very useful in our efforts to encourage more support and understanding for digital assets and emerging technologies here in South Carolina,” said Dennis Fassuliotis, president of the South Carolina Emerging Technologies Association, at the time.

As bitcoin continues to take root throughout the Salvadoran economy through new initiatives such as Bitcoin diplomas, Bukele and his administration clearly plan to double down on the country’s investment in a bitcoin-focused economy. It remains unclear how long the purchasing of 1 BTC per day will continue.
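The fixed-quantity purchase plan described above is easy to model: buying the same amount of BTC each day means the treasury's average cost basis is simply the mean of the daily prices paid. A minimal sketch (the prices are hypothetical, purely for illustration):

```python
def dca_cost_basis(daily_prices_usd, btc_per_day=1.0):
    """Average USD cost per BTC when buying a fixed quantity each day."""
    total_spent = sum(price * btc_per_day for price in daily_prices_usd)
    total_btc = btc_per_day * len(daily_prices_usd)
    return total_spent / total_btc

# Three hypothetical days of volatile prices: the average cost
# smooths out the swings.
print(dca_cost_basis([17000.0, 16500.0, 17500.0]))  # → 17000.0
```

Note that classic DCA fixes the dollar amount rather than the quantity; a 1-BTC-per-day plan is the fixed-quantity variant, so the cost basis works out to the plain average price over the purchase window.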
Emerging Technologies
WTF?! China continues to express ambitions to be a global leader in several tech industries, despite the increasing export sanctions introduced by the US. The latest of these is advanced humanoid robots, which it believes will be mass-produced in the country as soon as 2025. The plan to create the robots was set out in a blueprint document published last week by China's Ministry of Industry and Information Technology (MIIT). The government wants to aid more fledgling companies in the field, set industry standards, develop talent and deepen international cooperation, reports Bloomberg, with the goal of reaching advanced-level humanoid robots and mass-producing them by 2025. The document says that the robots are expected to become as disruptive as computers, smartphones, and the latest energy vehicles (EVs, presumably), though it's noted that there are plenty of ambitious words with few actual details about how this will happen. The ministry also said that China is targeting breakthroughs in environment sensing, motion control and machine-to-human interaction capabilities in the next two years. The document might be limited to talk of goals and targets, but it still caused shares of Chinese robotics companies to surge. And the plan isn't as unlikely as it sounds: Chinese startup Fourier Intelligence unveiled its humanoid robot GR-1 at the World Artificial Intelligence Conference (WAIC) in Shanghai in July. According to the South China Morning Post, GR-1 is said to be capable of walking on two legs at 5 km/h (3.1 mph) while carrying a 50 kg (110-pound) load. Fourier Intelligence says it plans to begin mass production by the end of 2023 and deliver thousands of units next year. The startup also hopes to collaborate with major AI companies to provide the bipedal machine with its "brains," which sounds worryingly like we're getting into Detroit: Become Human and I, Robot territory.
China's government said it is encouraging the use of artificial intelligence in robots, too. Last month saw Amazon introduce its most humanoid robot to its warehouses: Digit, a 5-foot 9-inch 143-pound robot from Agility Robotics. The two-legged machine can walk forward, backward, and sideways, squat and bend, and move, grasp, and handle items using its arm/hand-like clasps. At the start of the year, an independent think tank claimed China was ahead of the US when it comes to research in 37 out of 44 crucial and emerging technologies, including AI, robotics, defense, and key quantum tech areas.
Emerging Technologies
The novelty of replacing one’s “home key” with a microchip implant is gaining worldwide interest, but there’s another, more compelling story under the surface. Why is this technology — an integrated circuit the size of a grain of rice — reviled by some and celebrated by self-proclaimed human cyborgs? Arguably, William Shakespeare’s “Hamlet” offers the most elegant explanation: “There is nothing either good or bad, but thinking makes it so.” However, it would be prudent to tell Prince Hamlet that not all microchip implants are designed alike, and understanding the technological design enables one to better evaluate the competing viewpoints. Today, more than 50,000 people have elected to have a subdermal chip surgically inserted between the thumb and index finger to serve as their new swipe key or credit card. In Germany, for example, more than 2,000 Germans have opted to receive these implants; one man even used his to store a link to his last will and testament. As chip storage capacity increases, perhaps users could even link to the complete works of Shakespeare. Chip implants are just one of the many types of emerging technologies in the Internet of Things (IoT) — an expanding digital cosmos of wirelessly connected internet-enabled devices. Some technologists are worried, however, that hackers targeting IoT vulnerabilities in sensors and network architecture also may try to hack chip implants. Radio-frequency identification (RFID) chips are identifying transponders that typically carry a unique identification number and can be tagged with user data such as health records, social media profiles, and financial information. RFID chips are passive transponders, which means the digital reader must be positioned a few inches away from the user’s microchipped hand to communicate. In contrast, near-field communication (NFC) chips use electromagnetic radio fields to wirelessly communicate with digital readers in close proximity, much like smartphones and contactless credit cards.
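The data on such a chip is typically stored as NDEF (NFC Data Exchange Format) records; a URI record, like the link to a will mentioned earlier, packs a one-byte abbreviation code followed by the rest of the address. A minimal decoding sketch (the prefix table is a partial excerpt of the NFC Forum's URI abbreviation scheme, and the example URL is made up):

```python
# Partial abbreviation table from the NFC Forum URI Record Type Definition.
URI_PREFIXES = {
    0x00: "",
    0x01: "http://www.",
    0x02: "https://www.",
    0x03: "http://",
    0x04: "https://",
}

def decode_ndef_uri(payload: bytes) -> str:
    """Decode an NDEF URI record payload: one prefix byte, then UTF-8 text."""
    prefix = URI_PREFIXES.get(payload[0], "")
    return prefix + payload[1:].decode("utf-8")

# A record storing "https://example.com/will" in 17 bytes instead of 24.
print(decode_ndef_uri(bytes([0x04]) + b"example.com/will"))
```

The abbreviation byte is why tiny implantable tags can hold useful links despite storage measured in hundreds of bytes.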
A benefit of NFC over RFID is international use, reasons Biohax: “With the power of existing infrastructure and the wide variety of services and products already supporting the NFC standard globally, one huge benefit of ours is that we overlap virtually any private or public sector already using NFC or mobile tech.” According to a 2021 United Kingdom-based consumer survey by Propeller Insights on digital payment trends in Europe, 51 percent of the approximately 2,000 respondents said they would consider getting a chip implant to pay for services. This technology is especially popular in Sweden as a substitute for paying with cash. “Only 1 in 4 people living in Sweden use cash at least once a week,” writes NPR. More than 4,000 Swedes have replaced keycards with chip implants, which they use for gym access and e-tickets on railway travel, and to store emergency contact information. The technology also may offer increased mobility for people with physically limiting health conditions, such as rheumatoid arthritis, multiple sclerosis, and motor neurone disease, according to BioTeq, a UK-based tech firm. For example, “a wheelchair-mobile person can approach a door and the reader will unlock the door, avoiding the need for keys that the person may not be able to use for themselves.” BioTeq is also exploring microchip services for those who are visually impaired that “trigger audible or touch-sensory signals” in the home. Despite these benefits, the Bulletin of the Atomic Scientists avers that the main challenges to chip implants are security, safety and privacy. A general security concern with NFC technology is that it could allow third parties to eavesdrop on device communication, corrupt data, or wage interception attacks, warns NFC.org. Interception attacks are when someone intercepts the data transmitted between two NFC devices and then alters the data as it’s being relayed.
Like any device, these personal chips have security vulnerabilities and potentially could be hacked, even if embedded underneath the skin. With regard to health safety concerns, a 2020 study with the American Society for Surgery of the Hand indicated that RFID chip implants may carry potential health risks such as adverse tissue reaction and incompatibility with some magnetic resonance imaging (MRI) technology. Several social scientists also are apprehensive about the risks to privacy and human rights if the body becomes a type of “human barcode.” According to microbiologist Ben Libberton at Stockholm’s Karolinska Institute, chip implants can reveal sensitive personal information about your health and even “data about your whereabouts, how often you’re working, how long you’re working, if you’re taking toilet breaks and things like that.” Interestingly, the first person to implant a microchip in himself was professor Kevin Warwick of Reading University in 1998; he wanted to determine whether his computer could wirelessly track his movements at work. To date, at least 10 state legislatures in the United States have passed statutes to ban employers from requiring employees to receive human microchip implants. The most recent state was Indiana, which prohibited employers from requiring employees to be chipped as a condition of employment and discriminating against job applicants who refuse the implant. 
Nevada’s legislation is the most restrictive. Although not a total ban, as was proposed in 2017, Nevada Assembly Bill 226 prohibits an officer or employee of Nevada from “establishing a program that authorizes a person to voluntarily elect to undergo the implantation of such a microchip or permanent identification marker.” As the impact and influence of chip implants increase in the United States, they will raise complex questions for state legislatures and courts to consider, such as third-party liability for cybersecurity, data ownership rights, and Americans’ rights under the Fourth Amendment and the protection of sensitive digital data under the Supreme Court’s 2018 decision in Carpenter v. United States. Microchips offer alluring benefits of convenience and mobility, but they carry potential cybersecurity, privacy and health risks. The onus cannot be on the law alone, however, to protect consumers. Instead, it is a shared responsibility among consumers to understand their data rights as part of digital literacy, and among technologists to promote cybersecurity-informed engineering at each phase of product development. Further, lawmakers must be mindful of the delicate balance between protecting the flame of technological innovation and advancement, while guarding against misapplication and abuse. As technology historian Melvin Kranzberg noted, “Technology is neither good nor bad, nor is it neutral.” Zhanna L. Malekos Smith is a nonresident adjunct fellow with the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS) in Washington and an assistant professor in the Department of Systems Engineering at the U.S. Military Academy at West Point, where she also is a Cyber Law and Policy Fellow with the Army Cyber Institute and affiliate faculty with the Modern War Institute. The opinions expressed here are solely those of the author and not those of CSIS, the U.S. government or Department of Defense.
Emerging Technologies
'Pipeline Of Talent' Needed For India, U.S. To Maintain Momentum Of Growth: PM Modi

Prime Minister Narendra Modi highlighted the initiatives undertaken by India to promote education, research and entrepreneurship.

To maintain the momentum of growth, a 'pipeline of talent' is needed for India and the US, Prime Minister Narendra Modi said at an event here to highlight the two nations' shared priorities around education and workforce. He was participating in the 'Skilling For Future Event' organised by the National Science Foundation on Wednesday, and his visit to the organisation was hosted by First Lady Jill Biden. The prime minister also highlighted the initiatives undertaken by India to promote education, research and entrepreneurship. "I am really happy to have got an opportunity to interact with young and creative minds here. India is working on several projects in collaboration with the NSF. I thank First Lady Jill Biden for planning and organising this event," Modi said. Explaining the skilling mission of his government, Modi said that for the bright future of youngsters, it is important to have education, skill and innovation, and India has worked in this direction. Under the National Education Policy, education and skilling have been integrated. Under the Skilling Mission, more than 50 million people have been trained, and another 15 million are being given training on the latest and emerging technologies such as artificial intelligence and blockchain, Modi said while addressing the gathering. To maintain the momentum of growth, "for India and the U.S., it is important to ensure a pipeline of talent", he said, adding that his goal is to make this decade a "techdecade". The U.S. has the world's top educational institutions and advanced technologies, while India has the world's biggest "yuva (youth) factory", the prime minister said, adding that he believes the India-U.S. partnership will prove to be a sustainable and inclusive engine for global growth.
Welcoming the prime minister to the US, the First Lady said, "With this official visit, we are bringing together the world's oldest and world's largest democracies. But our relationship isn't just about governments. We're celebrating the families and friendships that span the globe, those who feel the bonds of both of our countries." She said after years of strengthening ties, the US-India partnership is deep and expansive as we jointly tackle global challenges. "Mr Prime Minister, I know that education is an issue close to your heart as it is to mine. You work to ensure that all Indians, especially girls, which I love, have the opportunity to pursue an education and gain the skills they need for our modern workforce. It is so important, it is exciting to be able to show you some of the innovative programs our schools and businesses are creating for students here," Jill Biden said. The NSF is headed by Indian American Dr Sethuraman Panchanathan. In the last year or so, several Indian Cabinet Ministers have visited its headquarters in Virginia. Prominent among them are Finance Minister Nirmala Sitharaman and External Affairs Minister S Jaishankar. "PM @narendramodi and @FLOTUS @DrBiden participated in a unique event focused on promoting vocational education and skill development among youth. PM and @FLOTUS discussed collaborative efforts aimed at creating workforce for the future. PM highlighted various initiatives undertaken by India to promote education, research and entrepreneurship," Ministry of External Affairs Spokesperson Arindam Bagchi tweeted along with pictures of the programme. The NSF is an independent agency of the U.S. government that supports fundamental research and education in all the non-medical fields of science and engineering. Its medical counterpart is the National Institutes of Health. The First Lady thanked the NSF for hosting them. 
"There are students in this room who are discovering the inner workings of semiconductors starting in middle and high school or training for other cutting edge jobs...," she said addressing the gathering. "If we want our economies to be strong, we need to invest in young people who are our future. We need to ensure that they have the opportunities that they deserve," Jill Biden said. Education is a cornerstone of the bond between India and the US, "one we hope to keep building and strengthening with this visit," she said. "Our universities are partnering together, leading research, and creating apprenticeships and internships that span the ocean. And, as we have seen here, students from both our countries are learning and growing alongside each other, discovering the people they want to become and building a better world, together. Working side by side, our nations can create a safer, healthier, more prosperous future for everyone," Jill Biden said. NSF director Panchanathan said that India, as of this year, is the world's most populous country. "Promoting education opportunities and expanding India's technology sector have been key pillars of Prime Minister Modi’s vision for the country," he said. Modi arrived here from New York where he led a historic event at the UN Headquarters earlier in the day to commemorate the ninth International Day of Yoga, attended by UN officials, diplomats and prominent personalities. Prime Minister Modi is visiting the U.S. from June 21-24 at the invitation of President Joe Biden and the First Lady. In his departure statement, Modi had said this "special invitation" from President Biden and First Lady Jill Biden for a state visit is a reflection of the vigour and vitality of the partnership between the democracies.
Emerging Technologies
FBI Hosts Five Eyes Summit to Launch Drive to Secure Innovation in Response to Intelligence Threats (From left to right), Australian Security Intelligence Organisation Director-General Mike Burgess; Canadian Security Intelligence Service Director David Vigneault; FBI Director Christopher Wray; New Zealand Security Intelligence Service Director-General of Security and Chief Executive Andrew Hampton; and MI5 Director General Ken McCallum pose for a group photo during the Emerging Technology and Securing Innovation Summit in Palo Alto, California, on October 16, 2023. (PALO ALTO, California)—In their first-ever joint public appearance, leaders of the Five Eyes intelligence partnership—the United States, the United Kingdom, Canada, Australia, and New Zealand—traveled to the U.S. at the invitation of FBI Director Christopher Wray. Together they are launching the first Emerging Technology and Securing Innovation Security Summit in Palo Alto, California, the heart of Silicon Valley. In addition to the Five Eyes, the summit is bringing together business leaders and entrepreneurs, government officials, and academics to discuss threats to innovation, coming trends in the use and potential exploitation of emerging tech, and means to work together to advance both economic security and public safety. The summit kicks off with a fireside chat with all five members, hosted by Dr. Condoleezza Rice, former secretary of state. The intelligence leaders will outline current threats and trends to private sector organizations in their respective countries. Following the fireside chat, the intelligence leaders will sit down with private sector leaders for in-depth discussions about expanding and strengthening private-public partnerships to better protect innovation and the collective security of the five nations and their citizens. Australian Security Intelligence Organisation Director-General Mike Burgess: “The Summit is an unprecedented response to an unprecedented threat. 
The fact the Five Eyes security services are gathering in Silicon Valley speaks to the nature of the threat and our collective resolve to counter it. This summit will raise awareness of the threat, and help the technology sector understand, identify and manage the risks.” Canadian Security Intelligence Service Director David Vigneault: “Innovation drives our collective prosperity and security, yet the threats to innovation are increasing in both scale and complexity. To meet this challenge, CSIS is working proactively with FVEYs partners, private sector leaders, and academia to secure our future and to ensure the safety, security and prosperity of Canada.” FBI Director Christopher Wray: "Emerging technologies are essential to our economic and national security, and America’s role as a leading economic power, but they also present new and evolving threats. The FBI is committed to working with our Five Eyes and industry partners to continue to protect emerging technologies both from those who would steal them and those who would exploit them for malicious purposes." MI5 Director General Ken McCallum: “The U.K. is seeing a sharp rise in aggressive attempts by other states to steal competitive advantage. It’s the same across all five of our countries. The stakes are now incredibly high on emerging technologies; states which lead the way in areas like artificial intelligence, quantum computing and synthetic biology will have the power to shape all our futures. We all need to be aware, and respond, before it’s too late."
Director-General of Security and Chief Executive, New Zealand Security Intelligence Service Andrew Hampton: "Emerging technologies bring many benefits to New Zealand, not least the potential to drive economic growth. At the same time, these same technologies in the wrong hands can be used in dangerous or illicit ways. The NZSIS is pleased to be working with our Five Eyes intelligence partners along with the private sector on this most critical of challenges.” The Five Eyes is a coalition of five countries: the U.S., the U.K., Canada, Australia, and New Zealand. It was formed in 1946, shortly after the end of World War II, to share intelligence and coordinate security efforts. The five member countries have a long history of trust and cooperation, and they share a commitment to common values. The partnership has played a significant role in global security over the past seven decades, strengthening intelligence-sharing and cooperation among its member countries in order to protect their national security and common interests. The security services are focused on countering a range of threats, including terrorism, cybersecurity, weapons proliferation, organized crime, and state-backed espionage and interference.
Emerging Technologies
ChatGPT for Traveling: Can AI Plan Your Next Vacation? Discover how ChatGPT, the AI-powered virtual assistant, can help plan your next vacation. From flights to accommodations, let ChatGPT be your guide. Picture yourself embarking on the adventure of a lifetime, effortlessly crafting your ideal vacation with tailored recommendations and a personalized itinerary. That's the magic of artificial intelligence (AI), transforming the travel planning experience and making that fantasy a reality. One standout AI innovation is ChatGPT, a groundbreaking language model revolutionizing how we plan our journeys. How does ChatGPT work and what sets it apart? The intricacies of ChatGPT may seem complex at first. Still, by examining its underlying technology and understanding the evolution of the GPT series, it becomes clear how this AI-driven tool has the potential to reshape various industries, including travel. What are large language models? Large language models like ChatGPT are AI-powered tools designed to understand and generate human-like text based on the provided context. They can accomplish various tasks, from answering questions to creating content. These models are trained on massive datasets, enabling them to understand linguistic patterns and produce coherent, contextually relevant responses. For instance, GPT-3, one of ChatGPT's predecessors, has been used to generate news articles, write code and even compose poetry. What are the different models in OpenAI's GPT series? The GPT series has come a long way since its inception, with each iteration boasting improved capabilities: - GPT: The first iteration, a notable breakthrough in natural language processing, laid the groundwork for future developments. - GPT-2: A more powerful version, capable of generating longer and more coherent text, raised concerns about the potential misuse of AI-generated content. 
- GPT-3: A highly advanced model boasting 175 billion parameters that transformed the AI landscape with impressive language generation capabilities that could mimic human writing in various styles and contexts.
- GPT-4: The latest iteration, with even greater capabilities and potential applications, extends the possibilities of AI-driven language models in industries such as travel planning, healthcare and customer service.

Do OpenAI and Microsoft collaborate? Microsoft plays a significant role in developing and supporting ChatGPT by providing resources, such as computational power and expertise. This collaboration has led to the creation of powerful AI tools, like ChatGPT, which can potentially revolutionize multiple industries.

What are the real-world applications of ChatGPT? ChatGPT has a wide array of practical applications, including but not limited to the following:
- Content generation: Writing articles, blog posts and marketing materials.
- Customer support: Assisting users in resolving issues and answering queries.
- Programming assistance: Generating code snippets and offering suggestions to developers.
- Language translation: Translating text between languages with high accuracy.
- Personal assistants: Providing personalized recommendations and scheduling appointments.

How is AI transforming the travel industry? AI has emerged as a powerful force in the travel industry, paving the way for more efficient and personalized experiences. By examining the role of AI chatbots, the impact of machine learning on travel tech, the automation of travel agencies and OTAs and the influence of social media and new technology on travel trends, one can gain a deeper understanding of how AI is shaping the future of travel planning.

The emergence of AI chatbots in the travel industry

AI chatbots have become increasingly popular in the travel sector, offering round-the-clock assistance and instant responses to customer inquiries.
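Mechanically, an exchange with such a chatbot is just a list of role-tagged messages sent to the model. A sketch of assembling a travel-planning request (the system/user message shape follows the common chat-API convention; no real API call is made, and the function name is our own):

```python
def build_travel_prompt(destination, days, interests):
    """Assemble role-tagged messages for a chat-style language model."""
    system_msg = ("You are a travel-planning assistant. "
                  "Reply with a day-by-day itinerary.")
    user_msg = (f"Plan a {days}-day trip to {destination}. "
                f"Traveler interests: {', '.join(interests)}.")
    return [
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ]

messages = build_travel_prompt("Lisbon", 3, ["food", "history"])
print(messages[1]["content"])
# → Plan a 3-day trip to Lisbon. Traveler interests: food, history.
```

The system message pins down the assistant's role while the user message carries the traveler's specifics — which is how the personalization described in this article reaches the model.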
ChatGPT, for example, can help travelers find the best flights, accommodations and attractions based on their preferences, leading to a more streamlined and personalized experience.

The impact of machine learning on travel tech

Machine learning, a subset of AI, enables computer systems to learn from data and improve over time. In the travel industry, companies can use machine learning to analyze vast amounts of data to offer tailored recommendations, optimize pricing strategies and enhance the overall customer experience. For example, machine learning algorithms can analyze flight and hotel booking trends to provide dynamic pricing or predict popular destinations based on social media activity.

Automation in travel agencies and OTAs

Automation is another way AI is transforming the travel industry. From streamlining booking processes to handling customer service inquiries, automation allows travel agencies and online travel agencies (OTAs) to operate more efficiently. By leveraging AI tools like ChatGPT, these organizations can automate repetitive tasks, freeing up human agents to focus on complex issues or provide personalized attention to customers.

The role of social media and new technology in shaping travel trends

Social media platforms and emerging technologies shape travel trends and preferences. AI-powered tools can analyze social media data to identify popular destinations, uncover emerging travel trends and offer personalized recommendations based on users' interests and online activity. New technologies such as virtual and augmented reality also provide immersive experiences that can influence travelers' decisions and enhance their trips. With AI driving significant advancements in the travel industry, it is essential to understand how tools like ChatGPT can improve travel planning, offering travelers more personalized and seamless experiences. How does ChatGPT enhance the travel planning process?
As AI revolutionizes the travel industry, it is worth examining how ChatGPT impacts travel planning. ChatGPT excels in the following: Real-time language model capabilities ChatGPT's real-time language model capabilities enable it to respond to user queries quickly and effectively. Travelers can interact with ChatGPT-powered chatbots and receive instant recommendations for flights, accommodations and attractions. This real-time interaction facilitates a dynamic and engaging travel planning experience, allowing travelers to make informed decisions quickly. Personal preferences and customization in trip planning ChatGPT's strengths include its ability to account for individual preferences during trip planning. By analyzing user input and historical data, ChatGPT can offer customized recommendations that cater to individual tastes, such as suggesting boutique hotels for those seeking a unique experience or family-friendly activities for travelers with children. This level of personalization elevates the planning process and ensures a tailored experience for each traveler. Integrating search engines and datasets for accurate recommendations ChatGPT can be combined with search engines like Bing and various datasets to provide accurate and up-to-date information. This integration enables the AI to access real-time information on flight availability, hotel vacancies and local attractions, ensuring the recommendations are relevant and reliable. Moreover, ChatGPT can offer well-informed suggestions that align with travelers' needs and expectations by tapping into user-generated content, such as reviews and ratings. What is the role of algorithms in creating a travel itinerary? Algorithms play a crucial role in optimizing travel itineraries. ChatGPT can generate well-structured itineraries that maximize efficiency and enjoyment by analyzing factors such as travel times, distances and user preferences.
These algorithms can also account for external factors, such as weather and local events, to ensure that travelers have the best possible experience during their trip. How does AI travel planning compare to traditional travel agents? AI chatbots like ChatGPT offer several advantages over traditional travel agents, including: - 24/7 availability: AI chatbots can provide assistance and information anytime, offering convenience to travelers with busy schedules. - Instant responses: ChatGPT can provide real-time recommendations, enabling travelers to make quick decisions. - Personalization: AI chatbots can analyze vast amounts of data to offer tailored recommendations based on individual preferences. - Cost-effectiveness: AI-driven tools can often provide services at a lower cost than human travel agents. What are the limitations and challenges faced by AI language models? Despite their advantages, AI language models also face some limitations and challenges: - Lack of human touch: AI chatbots may not fully capture the empathetic and intuitive understanding that human travel agents can offer. - Misinterpretation: AI models can occasionally misinterpret user inputs, leading to incorrect or irrelevant recommendations. - Ethical concerns: Potential issues related to data privacy and the misuse of AI-generated content exist. What is the role of human travel agents in the age of AI? While AI chatbots are transforming the travel industry, human travel agents still play a vital role in providing personalized services and handling complex requests. Travel agents can offer a human touch and unique insights that AI tools may be unable to replicate. In the age of AI, the role of human travel agents is evolving to focus on offering specialized knowledge, fostering personal connections and providing exceptional customer service. 
By leveraging AI tools like ChatGPT to handle repetitive tasks, human travel agents can dedicate more time to understanding their clients' needs, offering curated experiences and providing expert guidance on niche travel interests or complicated travel arrangements. Case study: Planning a trip to New York City with ChatGPT To better understand ChatGPT's potential in a practical context, let's walk through a hypothetical scenario of planning a trip to New York City using ChatGPT. Using AI tools for trip planning and itinerary creation Imagine you're planning a trip to New York City. You interact with a ChatGPT-powered chatbot, providing details about your preferences, such as an affinity for boutique hotels and an interest in exploring the city's vibrant culinary scene. Based on this input, ChatGPT generates a list of boutique hotel options in the desired location and offers recommendations for food tours and must-try local dishes. ChatGPT considers travel times, distances and your schedule to create a well-structured itinerary. Then, the AI suggests an efficient route that includes visiting iconic sights such as the Statue of Liberty, Central Park and the Empire State Building while ensuring ample time to explore New York City's culinary experiences. Leveraging generative AI for unique and tailored experiences In addition to the popular tourist attractions, ChatGPT can provide you with memorable experiences tailored to your interests. For instance, if you're a fan of live music, ChatGPT recommends checking out the renowned Blue Note Jazz Club and can offer information on any live performances during your stay. This level of personalization ensures that your trip to New York City is a one-of-a-kind experience tailored to your tastes.
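The itinerary step in this scenario can be made concrete with a toy sketch. A real planner weighs many factors, but a simple nearest-neighbor ordering over the three sights named above captures the basic idea; the coordinates, the starting hotel location and the greedy heuristic are all illustrative assumptions, not ChatGPT's actual method:

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical coordinates for the three NYC sights named in the scenario.
SIGHTS = {
    "Statue of Liberty": (40.6892, -74.0445),
    "Central Park": (40.7829, -73.9654),
    "Empire State Building": (40.7484, -73.9857),
}

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def greedy_itinerary(start, sights):
    """Order sights by repeatedly visiting the nearest unvisited one."""
    remaining = dict(sights)
    here, route = start, []
    while remaining:
        name = min(remaining, key=lambda n: haversine_km(here, remaining[n]))
        here = remaining.pop(name)
        route.append(name)
    return route

# Starting from a hotel near Times Square (hypothetical location).
print(greedy_itinerary((40.7580, -73.9855), SIGHTS))
```

Nearest-neighbor is only a heuristic: it keeps each hop short but can produce a longer overall route than an exhaustive search, which is one reason production planners layer in travel times, opening hours and user preferences rather than distance alone.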
Travel companies incorporating ChatGPT in their services Seeing the value in AI-driven solutions like ChatGPT, travel companies can integrate it into their services to provide clients with personalized recommendations, instant responses and seamless itineraries. By offering an AI-powered chatbot to clients like you, travel companies can enhance the overall customer experience and set themselves apart from competitors. Do startups and travel companies use AI technologies? With a clearer understanding of how ChatGPT can be employed in a real-life scenario, it is worth examining the broader context of AI adoption within the travel industry. Examples of travel tech startups leveraging AI Several travel tech startups are harnessing the power of AI to enhance their services and provide customers with unique travel experiences. For instance, Hopper, a mobile app that predicts flight prices, uses AI to analyze vast amounts of historical flight data to help users find the best time to book their flights. Another startup, Mezi, offers a personal travel assistant app that leverages AI to automate flight and hotel bookings, restaurant reservations and other travel-related tasks based on user preferences. Integration of AI chatbots and language models in travel agencies Many travel agencies are embracing AI solutions to improve customer service and streamline travel planning. By integrating AI chatbots and language models like ChatGPT into their systems, travel agencies can offer their clients instant responses, tailored recommendations and efficient itinerary planning. AI-driven tools also allow travel agents to focus on more complex tasks and provide personalized support while automating repetitive or mundane aspects of the planning process. The role of platforms like LinkedIn and Shutterstock in promoting AI in travel Platforms such as LinkedIn and Shutterstock play an essential role in promoting the adoption of AI within the travel sector.
LinkedIn, a professional networking platform, facilitates the sharing of AI-related knowledge and expertise among travel industry professionals, disseminating best practices and fostering collaboration among companies. Shutterstock, a provider of stock images and video content, employs AI-powered tools to optimize search functionality and image recognition, helping travel businesses to find and use the most relevant visual content for their marketing materials. What does the future of AI-driven travel planning look like? As AI continues to make strides in the travel industry, examining the future of AI-driven travel planning, the evolving role of AI chatbots and language models and the potential impact on customer experience is crucial. The challenges and ethical considerations that may arise as AI becomes more deeply integrated into the travel planning process must be addressed. The evolving role of AI chatbots and language models in travel In the coming years, AI chatbots and language models like ChatGPT will likely become even more sophisticated, offering improved personalization, a better understanding of user inputs and more accurate predictions. As these advancements are made, AI-driven tools will increasingly become integral to travel planning, assisting travelers and travel professionals in creating unforgettable experiences. The impact of AI on the travel industry and customer experience The growing use of AI in the travel industry is poised to significantly impact customer experience, making travel planning more accessible, efficient and personalized. As AI-driven tools become more advanced, customers can easily plan trips that cater to their preferences, leading to more satisfying and memorable travel experiences. Potential challenges and ethical considerations As AI adoption accelerates within the travel industry, potential challenges and ethical considerations must be addressed. 
Data privacy, algorithmic bias and the digital divide could affect the industry and its customers. Addressing these concerns will be essential to ensure the responsible and equitable use of AI in travel planning, allowing the technology to reach its full potential while minimizing potential negative impacts. How can ChatGPT help you plan your next vacation? AI chatbots and language models like ChatGPT are revolutionizing travel planning, offering unparalleled personalization, efficiency and convenience for travelers and travel professionals. As the travel industry continues to embrace AI-driven solutions, it will be vital to address potential challenges and ethical considerations while harnessing the technology's potential to enhance customer experiences and reshape the landscape of travel planning. Discover how AI is transforming the world of travel planning – check out more of Entrepreneur's insightful articles and stay ahead of the curve.
Emerging Technologies
In the 1982 TV series Knight Rider, the main character can have a full conversation with his futuristic car. Once science fiction, this type of language interface may soon be one step closer to reality because General Motors is working on bringing a ChatGPT-style AI assistant to its automobiles, according to Semafor and Reuters. While GM won't be adding Knight Rider-style turbojet engines or crime-fighting weaponry to its vehicles, its cars may eventually talk back to you in an intelligent-sounding way, thanks to a collaboration with Microsoft. Microsoft has invested heavily in OpenAI, the company that created ChatGPT. Now, the two companies are looking for ways to apply chatbot technology to many different fields. "ChatGPT is going to be in everything," Reuters quotes GM Vice President Scott Miller as saying in an interview last week. Miller gave examples of being able to explore vehicle features found in the manual, integrate schedules on a calendar, or program a garage door code using a conversational voice interface. A report from Semafor, which first broke the story, also suggests that a driver might be able to ask a future AI-enabled GM car how to fix a flat tire, and the car would play an instructional video or give step-by-step instructions on how to change it. "This shift is not just about one single capability like the evolution of voice commands, but instead means that customers can expect their future vehicles to be far more capable and fresh overall when it comes to emerging technologies," a GM spokesperson told Reuters on Friday. Large language models (LLMs) like ChatGPT process text (and with Whisper integration, understand speech) to provide responses that draw heavily on the context of the conversation and on a large training data set gathered from the Internet. However, they're also known to generate convincing errors when reporting factual information.
That's a glaring issue GM and Microsoft will have to figure out before they can bring an LLM-based solution to market. In the aforementioned Knight Rider, Michael Knight (played by David Hasselhoff) drives an AI-equipped car—a modified 1982 Pontiac Trans Am—named KITT that can speak and understand natural language in fluid conversation. It also understands its surroundings and reacts to them dynamically. In the first episode of the series, Knight says, "I can't believe this—a car that talks back to me." In that series, KITT can also drive itself, a feature car manufacturers have been struggling with using a different form of AI that has nothing to do with LLMs. And ChatGPT, which is tuned on language (not driving), won't be getting behind the wheel any time soon. But since Pontiac was a division of General Motors in 1982, if GM doesn't license the name KITT for its automotive AI assistant, it could be a significant missed opportunity.
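As a rough illustration of how LLM assistants like the one GM envisions carry conversational context, here is a minimal sketch: each user turn and assistant reply is appended to a running message list that is re-sent with every request. The `answer` function is a placeholder standing in for a real LLM call; none of this is GM's or Microsoft's actual implementation.

```python
def answer(messages):
    """Placeholder for an LLM: reports how many turns of context it can see."""
    question = messages[-1]["content"]
    return f"(context: {len(messages)} messages) You asked: {question}"

def ask(history, question):
    """Append the user turn, get a reply, and append the reply to the history."""
    history.append({"role": "user", "content": question})
    reply = answer(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are an in-car assistant."}]
print(ask(history, "How do I program the garage door opener?"))
print(ask(history, "And how do I fix a flat tire?"))  # sees the earlier turns
```

Because the whole history travels with each request, a follow-up like "And how do I fix a flat tire?" can be answered in light of earlier turns, which is what makes the interaction feel conversational rather than command-based.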
Emerging Technologies
The Futures - No. 14 From Quantumrun TLDR: In this issue The Quantumrun team shares actionable trend insights about gene therapies becoming the primary healthcare cost, how wars are rapidly adopting next-gen tech, the growth of deinfluencers, and the birth of the world’s newest volcano. Future signals to watch An underground reserve of premium phosphate rock in Norway, touted as the largest globally, can meet the global need for fertilizers, solar panels, and electric vehicle batteries for the next 50 years. With a speed of 281 miles per hour, China's new high-speed magnetic levitation (maglev) train is the world’s fastest. Coresight Research thinks US livestreaming sales could easily reach $50 billion this year (2023). Toyota is confident that it can develop a solid-state battery capable of powering electric vehicles for 745 miles and of being fully charged in just 10 minutes. A growing group of "deinfluencers" is advocating that materialism and overpriced trends are no longer fashionable. (You don’t need the USD $185 weighted blanket.) In a groundbreaking first, scientists have discovered how a human olfactory receptor captures an airborne scent molecule, a crucial chemical interaction that initiates our sense of smell. Quantumrun is excited to announce a partnership with NachoNacho, the world’s largest B2B SaaS marketplace. Get discounts on all SaaS subscriptions, including ours ;) Are gene therapies going to be the major healthcare expense? In 2021, overall US health expenditure rose by 2.7 percent, reaching $4.3 trillion, or $12,914 per person. In addition, $603 billion was spent on prescription medications, particularly retail drugs. The expenditure on drugs is primarily influenced by a limited number of high-priced products, mainly specialty drugs, which accounted for half of total drug spending. The cost of these specialty drugs surged to $301 billion in 2021, a 43 percent rise since 2016.
However, ARK research thinks healthcare expenditure will soon shift from chronic disease management to one-time, curative gene-editing therapies. Some clinical trials are already delivering promising results. For example, in June, Vertex Pharmaceuticals announced the Phase 1/2 results of VX-880, its cell therapy treatment for type 1 diabetes. All six patients who received the treatment successfully produced their own insulin and experienced improved blood sugar control. This treatment also reduced or eliminated insulin use. Cell and gene therapies (CGTs) didn’t get their first approval until 2017. As of 2022, only 27 CGTs have received FDA approval, making up about 8 percent of the 340 approved biologics. However, when it comes to the composition of truly groundbreaking drugs, CGTs continue to contribute to an increasing percentage of total approvals. Actionable trend insights as gene therapies consume a larger share of national healthcare budgets For biotech entrepreneurs: They can look for gaps in the healthcare market where CGTs could provide solutions, such as a specific disease or condition that current therapies do not adequately address. They can pitch CGT ideas to venture capitalists and angel investors, or apply for grants focused on biotech innovation. Understanding and complying with the regulatory landscape, including FDA approval processes and clinical trial requirements, is also essential. Private healthcare clinics that consult patients on CGT selection and administer their injection/delivery may also represent a premium business opportunity. For corporate innovators: Biotech and pharmaceutical companies can allocate greater resources to CGT research and development, including setting up a dedicated team or department. They can partner with universities, research institutions, or startups working on CGTs; this strategy can accelerate the development process and bring in fresh ideas.
Training employees about CGTs may help pharma workforces prepare to work on these new therapies. For public sector innovators: Governments can develop and implement policies encouraging the research, development, and use of CGTs. These regulations could include funding for research, tax incentives for companies, and regulatory frameworks that ensure safety without stifling innovation. They can also launch campaigns to educate the public about the benefits and potential risks of CGTs, helping to build public trust and acceptance. Investing in infrastructure that supports the development and delivery of CGTs may also be beneficial, including research labs, manufacturing facilities, and training centers. Trending research reports from the world wide web Most Fortune 500 companies (72 percent) intend to integrate generative AI into their operations within the next three years to enhance workforce efficiency. A report predicts that China's GDP will surpass that of the US in the early 2030s. However, China's economic growth will also become relatively stagnant afterward. Benedict Evans argues that as long as there’s still no artificial general intelligence, the current wave of automation is just business as usual. The creator economy funding is down over 90 percent in the second quarter of 2023 compared with the same period two years ago. Future wars will become a next-gen tech battlefield Major wars are tragic for the nations involved, but they also reshape global conflict preparedness and influence global security. The ongoing war in Ukraine, the largest in Europe since 1945, will shape the future understanding of warfare. It points to a new type of high-intensity war that merges advanced technology with large-scale destruction involving civilians, allies, and private firms. Autocratic regimes are studying this war to gain an advantage in future conflicts, and liberal societies must acknowledge the reality of potential wars between industrialized economies. 
The battlefield is becoming transparent due to advanced sensors on satellites and drone fleets. Future wars will depend on surveillance to detect the enemy first, blind their sensors, and disrupt data transmission. Troops will need to adapt to new fighting methods, and armies that fail to invest in new technologies or doctrines will be overwhelmed. Despite advancements in artificial intelligence, war still involves a massive physical presence of humans, machines, and munitions. The casualty rate in Ukraine has been high due to the ability to see and hit targets precisely. The consumption of munitions and equipment is immense, with Russia firing 10 million shells a year and Ukraine losing 10,000 drones monthly. Technology may eventually alter how this physical "mass" is maintained, but armies need to be prepared for high attrition rates, and increase munitions manufacturing capacity and manpower reserves. Actionable trend insights as wars become increasingly high-tech For entrepreneurs: As authoritarian governments become more unstable and as private sector companies gradually become more patriotic, the reputational cost of participating in the defense industry will fade. This shift will offer entrepreneurs a freer hand to start businesses that develop advanced surveillance systems, autonomous vehicles, or AI algorithms that support national defense. For corporate innovators: Defense industry companies can increase their annual research and development budgets, acquire startups with promising technologies, or form partnerships with tech companies. Defense firms can also ensure their employees (and the military personnel they serve) are prepared for the shift towards high-tech warfare by providing training in the use of advanced emerging technologies, such as AI, cybersecurity, and data analysis. 
For public sector innovators: Governments can implement these technologies in their defense strategies, including using AI for data analysis, drones for surveillance, or autonomous vehicles for logistics and battle tactics. Governments can also invest in education and training programs to prepare their defense personnel for these technologies. These investments could involve funding university programs or providing scholarships for relevant fields related to the defense industry. Outside curiosities Fast Company discusses why we should stop using the terms “work wife/husband/spouse.” In June, the world’s newest volcano was born in Iceland's Reykjanes peninsula, spewing lava southward. Access discounted rates to the Quantumrun Foresight Platform on NachoNacho. More from Quantumrun Read more daily trend reporting on Quantumrun.com Subscribe to the Quantumrun Trends Platform (free for premium newsletter subscribers) Corporate readers can review our Trend Intelligence Platform Follow us on LinkedIn Follow us on Twitter Finally, share your thoughts in the Substack comments below. We love hearing from you! See you in The Futures, Quantumrun
Emerging Technologies
Walmart is giving its corporate employees an AI "assistant" tool to speed up productivity. The retailer is embracing AI while other companies limit employees' use of AI in the workplace. The retailer already uses AI in its stores and customer service, and is hoping to leverage AI in its supply chain. Retail giants like Amazon, Apple, and Samsung have restricted their employees from using AI tools like OpenAI's ChatGPT – but Walmart is embracing it with open arms. Walmart is expanding AI efforts in its workplace with a new AI "assistant." It's one of many generative AI tools the company has already rolled out to its 50,000 corporate employees. The generative AI tool, which includes a "My Assistant" feature, will summarize documents and speed up creating and drafting projects, Walmart's Chief People Officer Donna Morris and Executive Vice President of New Business and Emerging Technologies Cheryl Ainoa wrote in a blog post. When reached for comment, a Walmart spokesperson referred Insider to the blog post. The goal of the app is to free up employees "from monotonous, repetitive tasks, allowing more time and focus for improving the customer/member experience," per the blog post. It also noted Walmart hopes to implement the AI tool in employee orientations and to assist employees with selecting their annual benefits packages. Walmart's provision of generative AI tools to its office workers is one of the ways the big box retailer seeks to integrate AI across the company. In its customer service branch, Walmart has added "Conversational AI" to its chatbots and shopping features, including its Walmart Voice Order and Text to Shop services, which recommend products based on past customer orders. In stores, Walmart employees can turn to an "Ask Sam" voice assistant to help them locate items. The retailer's self-checkout machines are equipped with AI-powered cameras to help detect and prevent theft.
The company's AI endeavors are likely to pick up, as well: Walmart is hiring a senior manager for its conversational AI team who could earn between $168,000 and $252,000 a year. Walmart's CEO Doug McMillon recently told shareholders that Walmart would focus on enhancing its use of generative AI to better understand its customers and improve its supply chain. Beyond generative AI, McMillon has been pushing the retailer in a technology-savvy direction, including setting up drone delivery services in some stores. While some major US companies are torn on whether to embrace generative AI, others are introducing AI into their businesses with caution. Accenture and Goldman Sachs, for instance, are building out and testing their own AI tools in the hopes that the technology can streamline workflows and boost productivity. Tech companies like Meta, Netflix, and the dating app Hinge are willing to pay six-figure salaries to attract generative AI talent. CEO of Genies, an avatar tool company, previously told Insider that he has been spending more than $2,000 a month on ChatGPT Plus accounts for all his employees, which he claimed is freeing up "hours" worth of work. Meanwhile, more than 20 companies, such as beauty product manufacturer Estée Lauder, consulting giant PWC, and workflow automation platform Zapier have signed up as early beta users of OpenAI's ChatGPT Enterprise – a version of ChatGPT launched on Monday that is tailored for businesses to use. While the Chief People Officer and EVP of Walmart addressed the limitations of generative AI in the blog post, they believe the benefits will outweigh them. "Here's our belief: It will take both people and technology to deliver on our purpose to help people save money and live better," the blog post read. Do you work at Walmart and have a story to share? Contact this reporter at [email protected]. Check out Insider's source guide for other tips on sharing information securely. Read the original article on Business Insider
Emerging Technologies
Five cars were destroyed at Sydney Airport on Monday 11 September, following the explosion of a lithium-ion battery in a luxury electric car. Firefighters from Fire and Rescue NSW (FRNSW) were called to the parking lot on Airport Drive, Mascot, at around 8:30 p.m., where flames had engulfed a luxury electric vehicle before spreading to four other vehicles. It was determined that the battery, which had recently been detached from the car and stored in the lot, was responsible for the fire. FRNSW’s Fire Investigation and Research Unit attended the scene and is continuing to investigate the circumstances surrounding the incident. Research officers from FRNSW’s Safety of Alternative and Renewable Energy Technologies (SARET) team have also inspected the aftermath. FRNSW said these insights will assist it in continually developing best practices for emergency response to lithium-related fires and informing the industry on how to manage the risks associated with emerging technologies.
Emerging Technologies
China has a "stunning lead" in 37 out of 44 critical and emerging technologies as Western democracies lose a global competition for research output, the Australian Strategic Policy Institute (ASPI), a security think tank, said on Thursday after tracking defence, space, energy and biotechnology. The study, funded by the United States State Department, found the United States was often second-ranked, although it led global research in high-performance computing, quantum computing, small satellites and vaccines. "Western democracies are losing the global technological competition, including the race for scientific and research breakthroughs," the report said, urging greater research investment by governments. China had established a "stunning lead in high-impact research" under government programs. The report called for democratic nations to collaborate more often to create secure supply chains and "rapidly pursue a strategic critical technology step-up". ASPI tracked the most-cited scientific papers, which it said are the most likely to result in patents. China's surprise breakthrough in hypersonic missiles in 2021 could have been anticipated earlier had its strong research output in the field been detected, it said. "Over the past five years, China generated 48.49% of the world's high-impact research papers into advanced aircraft engines, including hypersonics, and it hosts seven of the world's top 10 research institutions," it said. In the fields of photonic sensors and quantum communication, China's research strength could result in it "going dark" to the surveillance of western intelligence, including the "Five Eyes" of Britain, the United States, Australia, Canada and New Zealand, it said. National talent flows of researchers were also tracked and monopoly risks were identified. China was likely to emerge with a monopoly in 10 fields including synthetic biology, where it produces one-third of all research, as well as electric batteries, 5G, and nano manufacturing.
The Chinese Academy of Sciences, a government research body, ranked first or second in most of the 44 technologies tracked, which spanned defence, space, robotics, energy, the environment, biotechnology, artificial intelligence (AI), advanced materials and quantum technology. China was bolstering its research with knowledge gained overseas, and the data showed one-fifth of the top Chinese researchers were trained in a Five Eyes country, it said. The study recommended visa screening programs to limit illegal technology transfers and instead favour international collaboration with security allies. Australia's universities have said they are complying with foreign influence laws designed to stop the illegal transfer of technology to China, but also noted international collaboration is an integral part of university research.
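The core metric behind these rankings, a country's share of a field's most-cited ("high-impact") papers, can be sketched in a few lines. The data below is invented for illustration, and ASPI's actual methodology is considerably more involved:

```python
def high_impact_share(papers, country, top_fraction=0.10):
    """Share of the top-cited fraction of papers led by `country`."""
    ranked = sorted(papers, key=lambda p: p["citations"], reverse=True)
    top = ranked[: max(1, int(len(ranked) * top_fraction))]
    return sum(p["country"] == country for p in top) / len(top)

# Invented records; a real tracker would ingest bibliometric databases.
papers = [
    {"country": "CN", "citations": 320},
    {"country": "US", "citations": 290},
    {"country": "CN", "citations": 250},
    {"country": "UK", "citations": 40},
] + [{"country": "US", "citations": c} for c in range(30)]

print(f"{high_impact_share(papers, 'CN'):.0%}")
```

Counting only the top-cited slice, rather than raw paper volume, is what lets a tracker like this flag concentrated research strength of the kind the report says preceded China's hypersonics breakthrough.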
Emerging Technologies
The U.S. Navy is operating or developing nearly a dozen different unmanned sea vehicles for use in maritime security operations. Some of the vehicles operate on the ocean’s surface, and others beneath it. Some are no bigger than torpedoes and must be launched by larger vessels, while others are autonomous, robotic warships. Pursuit of unmanned sea systems is not a new endeavor for the Navy. The Office of Naval Research recognized their potential decades ago, and smaller systems have been used in mine countermeasures for many years. The Defense Department has been experimenting for seven years with a transoceanic, unmanned surface warship called Sea Hunter developed by Leidos. Boeing is preparing to deliver the first of five extra-large unmanned submarines dubbed Orca that can operate at unprecedented depths. Both vehicles are capable of performing multiple warfighting missions. What’s new in recent years is that emerging technologies such as artificial intelligence have expanded the scope for robotic operations at sea. Chief of Naval Operations Admiral Michael Gilday has identified unmanned vehicles as a high-priority development area, along with digital networking and extended-range fires. [Image caption: The Sea Hunter unmanned surface vehicle developed by Leidos has been in operation for seven years. The vehicle has transoceanic range and potential to execute multiple warfighting missions. Credit: Wikipedia] The Navy released an unmanned campaign framework in 2021 that emphasized how robotic warships could enable distributed maritime operations, the service’s driving organizational construct for the future. With the number of manned warships in the fleet seemingly stuck around 300 for the foreseeable future, unmanned systems may be the only way to meet warfighting and presence objectives within available budgets.
Although it will be a long time, if ever, before unmanned systems can deliver the functionality of a crewed submarine or destroyer, they can complement the manned fleet by performing tasks too dangerous or routine to justify assigning a manned warship. For instance, sending manned warships into the Baltic or Black Seas in an East-West war could place hundreds of sailors at risk; unmanned systems may be able to perform the necessary reconnaissance and strike missions without risking U.S. lives. Thus far, the Navy’s interest in unmanned sea systems has focused mainly on their potential to enable new operational concepts. However, if the technology proves useful, larger systems such as Sea Hunter and Orca might open the door to a new paradigm for naval shipbuilding. As I noted in a Forbes article earlier this week, naval shipbuilding today is a complicated and costly enterprise even when managed efficiently. It produces warships typically costing over a billion dollars each. Unmanned warships cost a small fraction of that amount to build, and a similarly low amount to operate. The possibility thus exists to pioneer new approaches to naval shipbuilding, approaches that can grow in scope as the use of robotic systems at sea expands in the future. Here are a few ways in which unmanned warships might revolutionize the way U.S. warships are built and operated: 1. Simplified designs that eliminate the complexity imposed when making manned vessels habitable and survivable. Many of the demanding specifications for current warships are driven by the need to accommodate a hundred or more sailors; eliminate the sailors, and the design requirements become much less burdensome—reducing cost to a point where survivability becomes a less critical feature. 2. Simplified engineering that compresses the time needed to transition from concept to construction. 
With a much simpler design, the demands on engineers to translate specifications into systems are correspondingly reduced, saving time and money. 3. Simplified construction as less costly and demanding processes enable a return to serial production. Serial production on the Liberty Ship model doesn’t exist in naval shipbuilding today, but it could return if specifications were suitably simplified and unit costs fell to a fraction of what manned warships cost. 4. Simplified planning as reduced material requirements permit streamlining of supply chains. Modern warship construction typically is supported by hundreds of subcontractors, but if survivability and other features associated with manning are eliminated, fewer specialized suppliers would be needed and integrators could rely more on commercial inputs. 5. Simplified innovation as less complicated designs facilitate the rapid insertion of advanced technology such as machine learning and digital networking. Unmanned systems substitute software for people, which implies a capacity for fast reconfiguration without necessarily requiring new hardware. 6. Simplified modification as threats evolve, often by porting new source code into software reconfigurable architectures from remote locations. In other words, the design features that facilitate introduction of new innovations also could greatly reduce the time and funding needed to modify warships in response to new operational challenges. 7. Simplified sustainment owing to less demanding designs and greater reliance on expendable/attritable systems. Unmanned systems should be much easier to repair and maintain than manned systems, and their supply requirements at sea would be negligible; for instance, Sea Hunter can traverse the Pacific in both directions on a single tank of fuel. 8. Simplified industrial bases as the ranks of sub-tier suppliers shrink and integrators shift to reliance on dual-use or commercial technologies.
Because the barriers to building warships would diminish, additional integrators might enter the business, creating a more resilient industrial base. These ideas are purely conceptual, reflecting the fact that development of unmanned warships—especially highly capable, multi-mission ships—is in its infancy. The Navy could fruitfully accelerate its development of unmanned warships at modest cost, perhaps producing revolutionary results within a few years. Having said that, it will be a long time before the Navy can dispense with the processes it currently depends on to build manned warships. That may never happen. But unmanned systems open the door to building a bigger fleet at lower cost. Boeing and Leidos, mentioned above, contribute to my think tank. I am indebted to Maiya Clark of the Heritage Foundation for offering remarks at a Lexington Institute working group that stimulated my thinking on the industrial-base implications of unmanned warships.
Emerging Technologies
India and Japan enjoy a Special Strategic and Global Partnership, a relationship that has evolved over the years with the elevation of the partnership from a Global Partnership to a Strategic and Global Partnership in 2006 and eventually to a Special Strategic and Global Partnership in 2014. This partnership is built on regular annual summits, the latest of which was held in March 2022 in New Delhi, where the two sides engaged in a 2+2 Ministerial meeting. According to sources, “Our partnership is a comprehensive one, covering a broad range of areas such as defence and security, trade and investment, science and technology, education, healthcare, and emerging technologies. Defence and security cooperation has emerged as one of the most important pillars of the Special Strategic and Global Partnership, with both countries committed to ensuring peace and stability in the Indo-Pacific region.” The two sides have made significant progress in this area, including the successful conduct of the first fighter jet exercise “Veer Guardian” in Japan in January 2023 and the fourth “Dharma Guardian” army exercise, which was conducted for the first time in Japan. “Bilateral maritime security cooperation, including Navy to Navy cooperation, has also seen significant progress, with a large number of exercises being conducted. India’s participation in the International Fleet Review and the Malabar exercise off the Coast of Japan in November last year is a testament to this cooperation,” the source added.
Commercial and Economic Front
On the commercial and economic front, India and Japan have set up Japan Industrial Townships (JITs) to promote economic partnership, investment, industry, and infrastructure development. These integrated industrial parks provide world-class infrastructure facilities, plug-and-play factories, and investment incentives for Japanese companies.
Additionally, India and Japan have had a Comprehensive Economic Partnership Agreement (CEPA) since 2011, which covers not only trade in goods but also services, movement of natural persons, investments, intellectual property rights, customs procedures, and other trade-related issues. The India-Japan Industrial Competitiveness Partnership (IJICP), under which the two countries cooperate to promote industrial cooperation, has also been established, with a Roadmap launched at the Annual Summit held in New Delhi in March 2022. Japan is the fifth-largest investor in India, with around 1450 Japanese companies operating in India today. The two countries are also cooperating in the areas of skill development and the movement of skilled workers, with Memorandums of Cooperation signed on the Technical Intern Training Program (TITP) and Specified Skilled Worker (SSW) programs. Under the SSW, 120 Indian youth have already been gainfully employed in Japan.
Clean Energy
Clean energy is another area where India and Japan have launched a partnership aimed at promoting energy cooperation through diverse and realistic energy transitions utilizing all energy sources and technologies to ensure energy security, carbon neutrality, and economic growth.
Significance of bilateral meeting on Monday
Prime Minister Narendra Modi will meet his Japanese counterpart Kishida Fumio and hold bilateral discussions in New Delhi tomorrow. The meeting between the two leaders comes at a time when India is holding the G20 presidency, Japan is hosting the G7 Leaders Summit later in May, and the QUAD leaders are also meeting that month. This presents an opportunity to discuss how the G20 and G7 can work together to converge priorities on critical global issues, including food and health security, energy transitions, and economic security. The meeting also provides an opportunity to review the progress made in bilateral ties since the last Summit meeting held in March 2022.
Emerging Technologies
BENGALURU, India—Technology that can pull water out of thin air could help solve the world’s growing water scarcity problem, but most solutions are expensive and difficult to scale. Indian startup Uravu Labs says its low-cost modular approach could provide a blueprint for more affordable and sustainable atmospheric water harvesting. What comes out of the pipe, the company’s website says, is “100 percent renewable water”—renewably powered, harnessed from a vast and nearly inexhaustible source, and with no wastewater produced in the process. Uravu is putting the finishing touches on its biggest unit to date. The device, the company says, will be capable of harvesting up to 1,000 liters of water a day when it goes online later this month, at its headquarters in the south Indian city of Bengaluru. By the end of the year the company hopes to scale that up to 10,000 L a day, says cofounder Swapnil Shrivastav. Uravu isn’t the only company working on this problem, but its approach is different from that of most of its competitors. The vast majority of companies working in this area rely on technology similar to that found in air-conditioning units—a coiled tube full of refrigerant is used to cool air until its moisture condenses on the surface as liquid water. However, the approach requires huge amounts of electricity, says Shrivastav, and this makes these units expensive to run and unsustainable unless specifically powered by renewable energy. “Our goal from day one was to not only be scalable and renewable but also be the most affordable,” he says. This led Uravu to a solution that relies on water-loving materials called desiccants to absorb moisture from the air, which are then heated using renewable-energy sources to release the water again. The team started work on the idea in 2017 and made it to the finals of the Water Abundance XPrize, a two-year competition organized by XPrize to catalyze progress in atmospheric water-harvesting technology. 
Since then Uravu has raised US $2.5 million and completely revamped its design. Initially, the company was working on a self-contained unit about 4 meters across that combined a solid desiccant called silica gel with a solar thermal panel, which uses the sun to warm up water that can then be used to heat the desiccant. This could produce about 10 L of water a day, but because each unit required dedicated components like fans, valves, and pumps, the company found it achieved very little cost reduction when it tried to scale up, says Shrivastav. That’s why Uravu has now switched to a new design that decouples the key stages involved in harvesting water from air. One key step involved swapping out the silica gel for a liquid desiccant—calcium chloride solution—which is pumped between dedicated absorber and desorber units. In the absorber units the desiccant is sprinkled through a mesh to increase the surface area as air is drawn over it by a fan. After it absorbs moisture from the air, the desiccant is pumped to a separate desorber unit where the solution is heated to between 60 and 70 °C by a coiled pipe filled with hot water. This creates air so humid it’s almost like steam, which then passes to a low-power, air-cooled condenser that turns it into liquid water. Decoupling the absorber and desorber leads to significant scaling efficiencies, says Shrivastav, since each desorber can serve several of the less complicated absorbers. Each absorber unit is capable of collecting as much as 200 L per day. To create Uravu’s 1,000-L-per-day machine, the company will couple six absorbers with a single desorber unit. The new design can now use a variety of heat sources, including solar thermal panels, biomass burners, or industrial waste heat. If working with solar thermal power, the design also incorporates a hot water tank that acts as a thermal battery, allowing it to run even when the sun goes down.
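The figures above support a back-of-envelope check. This is a sketch, not Uravu's engineering data: the only inputs are the article's numbers plus the standard latent heat of vaporization of water, roughly 2.45 MJ/kg near 60 to 70 °C, and real systems also pay for sensible heating and losses, so the result is a lower bound on the desorber's heat demand.

```python
# Back-of-envelope estimate (illustrative only) of the heat needed to
# desorb 1,000 L of water a day from a liquid desiccant, using the
# article's figures plus the latent heat of vaporization of water.

LATENT_HEAT_MJ_PER_KG = 2.45  # approximate, near 60-70 degrees C
DAILY_OUTPUT_L = 1_000        # target of the new unit (1 L of water ~ 1 kg)
ABSORBER_CAPACITY_L = 200     # per-absorber collection capacity, per the article

# Minimum heat just to release the collected water as vapor:
min_heat_mj = DAILY_OUTPUT_L * LATENT_HEAT_MJ_PER_KG
print(f"minimum desorption heat: {min_heat_mj:.0f} MJ/day "
      f"(~{min_heat_mj / 3.6:.0f} kWh thermal)")

# Five absorbers at rated capacity would exactly match the target; the
# article's pairing of six with one desorber leaves ~20 percent headroom.
min_absorbers = -(-DAILY_OUTPUT_L // ABSORBER_CAPACITY_L)  # ceiling division
print(f"minimum absorbers per desorber: {min_absorbers}")
```

At roughly 680 kWh of thermal energy per day, the appeal of free heat sources such as solar thermal panels or industrial waste heat is evident.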
The company’s approach, says Shrivastav, uses roughly 40 percent less power than more conventional condensing systems to produce the same number of liters per day. The output of condensing systems also varies considerably depending on the local humidity, whereas Uravu can simply load more desiccant into its absorber when operating in drier areas to achieve the same level of output, he adds.
Uravu’s 1,000-liter-per-day atmospheric water generator combines a single desorber unit [foreground] with multiple absorber units [background]. Photo: Edd Gent
One area where Uravu’s system doesn’t match up favorably is its physical footprint. A condensing system that fits in a shipping container can produce 6,000 L per day, but in the near term the company is unlikely to get more than 2,000 L out of a system that size, Shrivastav says. While the system may not be as compact, its much lower operating costs mean the company can already produce water for about 6 U.S. cents a liter, which is about 40 percent lower than a typical condensing system. Uravu expects that to come down to 3 cents in the near future, Shrivastav says. One problem with these kinds of large water-harvesting units, however, is that they require a large up-front investment, says Chiara Neto, a professor of physical chemistry at the University of Sydney, in Australia. That may put them out of reach for many poorer communities most in danger of water scarcity. And while it may be cheaper than other air-to-water approaches, Uravu still can’t compete with water from conventional sources, which is typically below 1 cent per liter in India. The math is more favorable in other geographies with more expensive water, says Shrivastav, such as the Middle East and Australia. But closer to home, the company has had to explore some novel business models that involve people paying a premium to burnish their environmental credentials.
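The cost claims above can be cross-checked with simple arithmetic. Every input below is a figure reported in this article; the computed values are implications, not independent data:

```python
# Illustrative arithmetic on the reported cost figures (not vendor data).
URAVU_COST_USD_PER_L = 0.06       # about 6 U.S. cents a liter today
REDUCTION_VS_CONDENSING = 0.40    # "about 40 percent lower"
CONVENTIONAL_USD_PER_L = 0.01     # conventional water in India, ~1 cent/L

# If 6 cents is 40 percent below a typical condensing system, that
# system costs about 6 / 0.6 = 10 cents per liter:
condensing = URAVU_COST_USD_PER_L / (1 - REDUCTION_VS_CONDENSING)
print(f"implied condensing-system cost: ~{condensing:.2f} USD/L")

# The gap to conventional sources remains large:
print(f"premium over conventional water: "
      f"~{URAVU_COST_USD_PER_L / CONVENTIONAL_USD_PER_L:.0f}x")
```

The roughly sixfold premium over conventional water explains why the company is targeting premium and institutional buyers rather than the mass market for now.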
Uravu is currently providing renewable drinking water in reusable glass bottles to high-end restaurants and hotel chains in Bengaluru. Shrivastav says the company’s aim is to compete with premium mineral waters that sell at a higher margin than standard drinking water. Uravu has also installed one of its units at India’s third-largest distiller, Radico Khaitan, which plans to use its water in high-end spirits as a marketing ploy. Shrivastav is also hopeful that NGOs or government organizations may help subsidize some of the cost of Uravu’s units, based on their knock-on benefits. Because they don’t rely on natural water sources, the units can be set up anywhere, which could help tackle the social cost to women and children who have to travel miles every day to collect water. And in India, which has a growing groundwater-depletion crisis, they could replace reverse osmosis plants that waste as much as 4 L of water for every liter of drinking water they produce. Ultimately, though, for this kind of absorption-based water harvesting to become widely practical there needs to be some kind of breakthrough in desiccant technology, says Swee Ching Tan, an assistant professor of materials science and engineering at the National University of Singapore. While he understands that Uravu is likely using calcium chloride due to its low cost and easy availability, for the approach to become economically viable it requires desiccants that can hold much more water and release it at lower temperatures. “We need material science to make more effective, efficient materials,” Tan says. Shrivastav says Uravu’s design is agnostic about what desiccant is used, as long as it’s soluble, and so the company should be able to work with new ones as they become available. But he points out that new materials typically take 8 to 12 years to reach commercialization. So waiting for better desiccants isn’t an option.
And ultimately, even if Uravu’s costs limit where the technology can be applied, it could still make a significant difference. “We want to reach around 1 million liters per day in the next five years,” he says. “That translates to saving more than 2.5 million liters of groundwater and more than 15 to 20 tonnes of CO2.” This article appears in the April 2023 print issue as “Air, the New Freshwater Source.” Update 14 Feb. 2023: Uravu reports they’ve raised $2.5 million in funding, not the $4.5 million Spectrum originally reported. Spectrum regrets the error. Edd Gent is a freelance science and technology writer based in Bengaluru, India. His writing focuses on emerging technologies across computing, engineering, energy and bioscience. He's on Twitter at @EddytheGent and email at edd dot gent at outlook dot com. His PGP fingerprint is ABB8 6BB3 3E69 C4A7 EC91 611B 5C12 193D 5DFC C01B. His public key is here. DM for Signal info.
Emerging Technologies
The Biden administration announced Friday that seven of the nation’s top artificial intelligence developers have agreed to guidelines aimed at ensuring the "safe" deployment of AI. Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI all agreed to the guidelines and will participate in a Friday afternoon event with President Biden to tout the voluntary agreement. "Companies that are developing these emerging technologies have a responsibility to ensure their products are safe," the White House said in a Friday morning statement. "To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety." Under the voluntary guidelines, companies agree to ensure their AI systems are "safe" before they are released to the public. That involves a commitment to "internal and external security testing" of these systems before they are released. "This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects," the White House said. Companies agreed to share best practices for safety across the industry but also with the government and academics. The seven companies agreed to invest in cybersecurity and "insider threat safeguards" in order to protect unreleased AI systems, and to allow "third-party discovery and reporting of vulnerabilities" in their AI systems. Another major component of the White House-brokered deal is steps to "earn the public’s trust." According to the announcement, the companies agreed to develop tools to help people know when content is AI-generated, such as a "watermarking" system. "This action enables creativity with AI to flourish but reduces the dangers of fraud and deception," the White House said.
Companies will also report AI systems’ capabilities and limitations, research the risks AI can pose, and deploy AI to "help address society’s greatest challenges," such as cancer prevention and "mitigating climate change." Senate Majority Leader Chuck Schumer, D-N.Y., who has been looking for ways to regulate AI in the Senate, welcomed the White House announcement but said some legislation will still be needed. "To maintain our lead, harness the potential, and tackle the challenges of AI effectively requires legislation to build and expand on the actions President Biden is taking today," he said. "We will continue working closely with the Biden administration and our bipartisan colleagues to build upon their actions and pass the legislation that’s needed."
Emerging Technologies
If you’re going to let a car drive itself, it had better have an exquisitely detailed picture of its surroundings. So far, the industry has favored the laser-powered precision of lidar. But startup Nodar, based in Somerville, Mass., says camera-based systems could do better. Lidar, which is short for light detection and ranging, scans the environment with laser beams and then picks up the reflections. Measuring how long it takes for the light to bounce back makes it possible to judge the distance and use that information to construct a 3D image. Most of today’s autonomous vehicles, including those made by Waymo and Cruise, rely heavily on lidar, which can cost tens of thousands of dollars for just a single unit. Nodar says its alternative would cost far less. Camera-based 3D vision systems have been considerably worse at judging distances than lidar, and they often struggle in low light or inclement weather. But thanks to advances in automotive camera technology and Nodar’s proprietary software, CEO Leaf Jiang says that’s no longer the case. Nodar takes images from two cameras spaced well apart and then compares their views to construct a triangle, with the object at the far apex. It then calculates how far away an object is. “Camera-based systems in general, have always gotten a bad rap,” he says. “We’re hoping to dispel those myths with our new results.” In recent testing, he says, the company’s technology consistently outperformed lidar on both resolution and range in a variety of scenarios, including night driving and heavy fog. In particular, it was able to detect small objects, like pieces of lumber or traffic cones, at twice the distance of lidar, which Jiang says is important for highway driving at higher speeds.
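The geometry Nodar relies on is classic stereo triangulation. For a rectified camera pair, depth is Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the pixel disparity. A minimal sketch (the focal length, baseline values, and disparity noise below are hypothetical, not Nodar's parameters) shows why a wider baseline sees farther: the first-order depth error grows with Z² and shrinks with B.

```python
# Classic stereo depth-from-disparity, for a rectified camera pair.
# All numeric values here are hypothetical, for illustration only.

def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its pixel disparity between the two views: Z = f*B/d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px: float, baseline_m: float, depth_m: float,
                disparity_err_px: float = 0.25) -> float:
    """First-order depth uncertainty: dZ ~= Z^2 / (f*B) * dd."""
    return depth_m ** 2 / (f_px * baseline_m) * disparity_err_px

F = 4000.0  # hypothetical focal length in pixels
for baseline in (0.2, 1.2):  # compact mount vs. widely separated cameras
    err = depth_error(F, baseline, depth_m=150.0)
    print(f"baseline {baseline} m -> +/-{err:.1f} m error at 150 m")
```

With these example numbers, moving the cameras from 0.2 m to 1.2 m apart cuts the depth uncertainty at 150 m by a factor of six, which is the motivation for the wide camera spacing described above.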
Such stereo camera setups are well known; several automotive suppliers incorporate them into advanced driver-assistance systems (ADAS).
In dry, well-lit conditions Nodar’s software can generate 40 million 3D data points per second, as demonstrated in this automotive environmental simulation chamber in Germany. Photo: Nodar
However, the approach faces two challenges. The two cameras have to be precisely calibrated, which is tricky to do on a vibrating car exposed to a wide range of environmental conditions. Normally this is achieved using exquisitely engineered mounts that keep the cameras stable, says Jiang, but this requires them to be close together. That’s a problem because the smaller the baseline distance between the cameras, the harder it is to triangulate to distant objects. To get around this, Nodar has developed patented auto-calibration software that allows you to place cameras much farther apart while making the system much less sensitive to instabilities. Normally camera calibration is done in carefully controlled environments using specially designed visual targets, but Nodar’s software uses cues in real-world scenes and is able to sync the two cameras up on every frame. This is computationally complex, says Jiang, but Nodar has developed highly efficient algorithms that can run in real time on off-the-shelf automotive chips. By allowing the cameras to be placed much farther apart, their system makes it possible to triangulate to objects as far out as 1,000 meters, says Jiang, which is substantially farther than most lidar sensors can manage. The other challenge for cameras is that, unlike lidar, which has its own light source, they rely on ambient light. That’s why they often struggle at night or in bad weather. To see how their system performed in these situations, Nodar conducted a series of tests on a remote airstrip in Maine with almost zero light pollution.
The company also worked with an automotive environmental-simulation chamber in Germany that can recreate conditions like rain and fog. They collected data using a pair of 5.4-megapixel cameras with 30-degree field-of-view lenses spaced 1.2 meters apart and compared their results against a high-end 1,550-nanometer automotive lidar. In broad daylight, Nodar’s setup generated 40 million 3D data points per second compared to the lidar’s 600,000. In extremely heavy rain the number of valid data points dropped by only around 30 percent, while for lidar the drop was roughly 60 percent. And in fog with visibility of roughly 45 meters they found that 70 percent of their distance measurements were still accurate, compared to just 20 percent for lidar. At night, their system could detect a 12-centimeter-high piece of lumber from 130 meters away using high-beam headlights, compared to less than 50 meters with lidar. Lidar performed similarly with a 70-centimeter-high traffic cone, but Nodar’s technology could spot it from 200 meters away. The capabilities of automotive cameras are improving rapidly, says Jiang. Today’s devices are able to operate in very low light levels, he says, and can pick out fine details in a foggy scene not visible to the naked eye. But that’s also complemented by the company’s proprietary stereo matching algorithm, which Jiang says can sync up images even when they’re blurry. This allows them to use longer exposure times to collect more light at night, and also makes it possible to triangulate on fuzzy visual cues in fog or rain. If their technology works as they say, the advantages would be “lower cost, longer range, better resolution, and easy integration, as they use off-the-shelf cameras,” says Guarav Gupta, an analyst at Gartner. But the only people who can really validate the claims are the automotive companies Nodar is working with, he adds. 
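Taken at face value, the daylight point rates and the weather penalties reported above imply a very large gap in absolute point counts. The sketch below is simple arithmetic on the article's figures, not an independent measurement:

```python
# Combining the reported daylight rates with the reported heavy-rain
# degradation (figures from the article; the comparison is illustrative).

STEREO_PPS = 40_000_000   # 3D points/s in daylight for the stereo system
LIDAR_PPS = 600_000       # points/s for the comparison lidar
STEREO_RAIN_DROP = 0.30   # ~30 percent fewer valid points in heavy rain
LIDAR_RAIN_DROP = 0.60    # ~60 percent drop for the lidar

stereo_rain = STEREO_PPS * (1 - STEREO_RAIN_DROP)
lidar_rain = LIDAR_PPS * (1 - LIDAR_RAIN_DROP)
print(f"heavy rain: stereo ~{stereo_rain:,.0f} pts/s "
      f"vs lidar ~{lidar_rain:,.0f} pts/s")
print(f"ratio: ~{stereo_rain / lidar_rain:.0f}x")
```

Even after the larger relative drop is applied to the lidar, the stereo system's raw point count stays roughly two orders of magnitude higher, though raw point count is only one axis of comparison, as the analysts quoted below point out.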
It’s also important to note that automotive lidar provides a 360-degree view around the vehicle, says Steven Waslander, director of the Toronto Robotics and AI Laboratory at the University of Toronto. It’s probably not fair to compare that performance against forward-facing stereo cameras, he says. If you wanted to replicate that 360-degree view with multiple stereo systems, he adds, it would cost more in terms of both money and computational resources. Jiang says that Nodar’s improved range and resolution could be particularly important for highway driving, where higher speeds and longer braking distances make detecting distant objects crucial. But Mohit Sharma, a research analyst at Counterpoint Research, points out that emerging lidar sensors using optical phased arrays, such as the lidar-on-a-chip made by Analog Photonics, will allow much faster scanning speeds suitable for highway driving. Ultimately, Sharma thinks no one technology is going to be a silver bullet for autonomous vehicles. “I believe sensor fusion is the best way to deal with complexities of autonomous driving and innovation in both lidar and camera technology will be helpful in reaching full self driving,” he says.
Emerging Technologies
Demand For Call Center, Remote Customer Service Jobs See Sharp Uptick Since Pre-Pandemic: Report
Businesses are recognising the rising importance of improving customer experience as there has been an exponential surge in demand from jobseekers for call centre and remote customer service jobs since the pre-pandemic period, says a survey. According to data from leading job site Indeed, there was an elevated demand for call centre and remote customer service jobs in the period from January 2020 to January 2021, with a sharp increase of 498.40% in job postings and a tremendous increase in jobseeker interest. The uptick was largely owing to the pandemic necessitating more virtual customer service representatives, an exponentially rising demand for quick-commerce platforms and the growth of the app economy. Gig work also grew during this time frame owing to the emergence of work-from-home as a predominant model of work. However, job trends over the past year reveal a drop in both job postings (by 73.50%) and jobseeker interest (by 64.50% in clicks per million) as the economy opens up and on-site jobs resume. "Especially in a post pandemic world, there is definitely a growing need for professionals who can manage customer interactions with empathy and efficiency, while also possessing the technical expertise to work with emerging technologies like AI and machine learning," said Sashi Kumar, Head of Sales, Indeed India. "It is also heartening to see that the country is a preferred destination for call centres and customer support outsourcing due to its large pool of skilled workers, lower operating costs, and proficiency in English," Kumar added. Bangalore leads job postings in this segment by a large margin, contributing to 19.14% of total jobs. This is followed by Delhi (9.33%), Mumbai (9.11%), Chennai (8.33%) and Hyderabad (6.31%).
Emerging Technologies
Adopting more efficient and low-carbon technology could create 29m new jobs and double the turnover of the chemicals industry, one of the world’s biggest emitters of carbon dioxide, according to a new report.
Failure to do so could condemn the world to climate chaos, however, as the climbing emissions from the manufacture of chemicals could result in a global temperature rise of as much as 4C above pre-industrial levels, which would bring catastrophe.
Chemicals manufacturing accounts for about 4% of global greenhouse gas emissions – roughly equal to the output of Russia, the world’s fourth largest emitting country – and the products are used in a myriad of other industries, from farming to automotive to consumer goods.
It would be all but impossible for the world to stay within the limit of a 1.5C temperature rise, which scientists say is vital and which nations agreed to aim for last year at the Cop26 UN climate summit, without sharp reductions in emissions from the chemicals industry.
By adopting techniques already available, such as recycling plastic and using fertiliser in a more targeted way, and investing in emerging technologies such as hydrogen, the industry could reduce its emissions rapidly. By 2050, it could be a net absorber of carbon dioxide instead of an emitter, according to a report published on Tuesday by Systemiq, a company that advises on sustainability, and the Centre for Global Commons at the University of Tokyo.
Paul Polman, former chief executive of Unilever and founder of Systemiq Capital, a venture capital firm related to Systemiq, said the chemicals industry was coming under increasing scrutiny as companies across the world sought to reduce their carbon footprint. Chemicals companies are usually invisible to the public, but the companies they supply to, which make consumer goods, are under pressure from consumers.
“They have not faced much pressure from consumers but chemicals are the key to all industries,” said Polman.
“They have been notorious for lobbying in a different direction. But they can change, it is feasible, and the transformation can be fast.”
He told the Guardian: “This [report] is the first time that we can show this industry that it’s really possible to change. The mindset of the industry has to change, to show there are enormous possibilities. The financial markets are also very keen to fund this transition.”
The investments needed would amount to about $100bn a year by 2050, which is small compared with the current size of the industry, and would double its annual revenues from about $4.7tn a year at present, according to the report. The transition would also create about 11m new jobs in chemicals manufacture and a further 18m in related industries, the report forecast.
Chad Holliday, former chief executive of the chemicals giant DuPont, said: “We need realistic and immediate action from industry on the climate goals agreed at an international level. We want to see ambitious companies grabbing the opportunities represented by the global net zero transition. A planet positive chemicals industry is possible and this is a pivotal moment for the industry to redefine its future.”
Chemicals companies have resisted pressure from governments to reduce their carbon, for instance lobbying against inclusion in the EU’s emissions trading scheme for years. But the new report found that chemicals companies could harness emerging technologies such as hydrogen and ammonia to open up new markets. Many of the other technologies needed to reduce emissions from chemicals production are already in place. Energy efficiency is now in the spotlight, as gas prices have soared, prompting more companies to examine their processes.
Recycling products, such as plastic, can vastly reduce the energy and materials needed, and is far less carbon-intensive than making plastic from scratch.Carbon capture and storage would also be needed, a technology currently still not in commercial use, but which could play a relatively minor role.
Emerging Technologies
Last November, IBM launched its 127-qubit Eagle, surpassing China’s 113-qubit Jiuzhang to become the world’s fastest quantum computer. Now, China risks falling further behind in the quantum computing race as the United States reportedly weighs new export controls on the game-changing technology. The new ban, if implemented, would target quantum computing, artificial intelligence software and other emerging technologies that could have security implications vis-a-vis China. The ban would be the next salvo after the Biden administration’s October 7 move to block high-end chips and advanced chip-making equipment exports to China. Bloomberg reported that US industry experts are now weighing in on the potential parameters of the restrictions, which are still preliminary, and that US allies are being consulted. Analysts say any such ban would further antagonize China, which strongly protested the October 7 bans, and could put the two rivals on a dangerous collision course. Chinese media commentators say the US aims to strengthen efforts to slow China’s development in emerging technologies, where the two strategic rivals are racing to lead the way. The commentators suspect the US will not only aim to block China from obtaining key quantum computing parts and software but also force other countries to uphold the bans. The US is making no secret of how it views the emerging and sensitive technology. US National Security Advisor Jake Sullivan said in a speech last month on technology, competitiveness and national security that “computing-related technologies, including microelectronics, quantum information systems and artificial intelligence” are among developments “set to play an outsized importance over the coming decade.” He also noted the importance of export controls to “maintain as large of a lead as possible” over rivals.
Currently, Alphabet's Google, Intel, Microsoft and IBM are all investing heavily in quantum computing projects. Other major players in the sector include US-based IonQ and Japan's Fujitsu Ltd. Quantum computers mark, well, a quantum leap over the speed and power of current supercomputers. That means they will likely be able to crack and bypass the encryption technologies used to secure current computer communications. More broadly, the technology is expected to unleash waves of new innovation that will revolutionize industry, communications and, crucially, defense. On September 15, US President Joe Biden signed an executive order urging the US Treasury Department's Committee on Foreign Investment in the United States (CFIUS) to ensure more robust consideration of evolving national security risks. What may otherwise appear to be an economic transaction undertaken for commercial purposes may actually present an unacceptable risk to US national security when conducted with foreign adversaries or countries of special concern, according to the executive order. It said the committee should consider a transaction's effect on US supply chain resilience and national security across the microelectronics, AI, biotechnology and quantum computing sectors. The Biden administration is now working on an outbound investment review mechanism that would scrutinize money heading to certain Chinese technologies, and new quantum computing and artificial intelligence controls could be included, according to an anonymous source quoted by Bloomberg. Biden has said the new curbs unveiled by the US Commerce Department's Bureau of Industry and Security (BIS) on October 7 have already successfully blocked China's access to key US chip technologies. The curbs have also limited how US citizens and residents are allowed to work and collaborate with Chinese tech firms.
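The claim that quantum machines could "crack and bypass" today's encryption refers chiefly to public-key schemes such as RSA, whose security rests on the classical difficulty of factoring large numbers; Shor's algorithm would remove that difficulty on a sufficiently large quantum computer. A minimal toy sketch (illustrative only, with tiny made-up primes, not drawn from the article) shows why recovering the factors of the modulus breaks the scheme:

```python
# Toy RSA: security depends entirely on the difficulty of factoring n = p * q.
def toy_rsa_keys(p, q, e=17):
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # private exponent: modular inverse of e (Python 3.8+)
    return (n, e), (n, d)        # public key, private key

def encrypt(m, pub):
    n, e = pub
    return pow(m, e, n)          # fast modular exponentiation

def decrypt(c, priv):
    n, d = priv
    return pow(c, d, n)

pub, priv = toy_rsa_keys(61, 53)     # tiny primes for illustration only
c = encrypt(42, pub)
assert decrypt(c, priv) == 42

# An attacker who can factor n recovers both primes and hence the private key.
# Trivial at this size; infeasible classically for 2048-bit moduli.
n = pub[0]
p = next(i for i in range(2, n) if n % i == 0)
q = n // p
assert {p, q} == {61, 53}
```

At real key sizes the brute-force factoring step above is hopeless on classical hardware; that is precisely the gap Shor's algorithm would close, which is why post-quantum cryptography is being standardized.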
Secrss.com, a Shanghai-based research institute, noted in several articles this year that quantum computing can be applied in the computing, communication, navigation, power and military defense industries. It said quantum computing had the potential to change the results of future wars. One article said quantum computers can be used to initiate cyber warfare by decoding passwords and bypassing the encryption used on current digital computers. It said quantum computing technologies can also be used to navigate drones and submarines. Another article, citing a report in the European Physical Journal, said a quantum radar would be launched one day and used in space war. China is making strong progress in the field in the current open trading environment. The 66-qubit Zuchongzhi 2, for instance, is reportedly 10 million times faster than Google's 53-qubit Sycamore. In December 2020, a University of Science and Technology of China research team led by scientist Pan Jianwei launched Jiuzhang, a light-based or photonic quantum computer that can work at room temperature. It is said to be 10 billion times faster than Google's Sycamore. In May 2021, Pan and his team launched Zuchongzhi 2, a superconducting quantum computer that needs to work at a temperature close to absolute zero. Details of Jiuzhang and Zuchongzhi 2 were published last November in Physical Review Letters, a scientific journal of the American Physical Society. Soon thereafter, IBM's Eagle surpassed Zuchongzhi 2 to become the world's fastest superconducting quantum machine, underscoring the two sides' budding rivalry in the field.
RAND was commissioned by the UK Space Agency to conduct a study exploring the variety of possible future uses of space out to 2050 and identifying potential 'game-changers' and the implications for the UK space sector. The research provided a description of how the way we use space will change, who the key space actors will be, and how well-placed the UK is to address these future trends in the space economy. Research Questions: How will the way we use space change out to 2050? Who will the key space actors be? How well-placed is the UK to address future changes? Recent years have witnessed major changes in how humans are utilising space. Access to and use of space has become essential to modern digital society and many aspects of everyday life. The number of space-related activities conducted by government, military and commercial actors around the world is increasing. This second 'space race' brings both threats and opportunities to the UK's economy, security, interests, values and way of life. To help the UK navigate this landscape of threats and opportunities, RAND conducted a study for the UK Space Agency to explore the variety of possible future uses of space out to 2050, as well as to identify potential 'game-changers' and the implications for the UK space sector.
The research explored how the way we use space will change, who the key space actors will be, and how well-placed the UK is to address these future trends in the space economy. The research found that the development of new space markets and use cases out to 2050 may render the space sector increasingly inseparable from the wider economy. This is based on projections for the future of space that reflect the increasingly multi-stakeholder nature of the space economy and the growing integration of space-based and terrestrial activities. Significant change is expected both in the upstream segment (activities related to sending spacecraft and satellites into space) and the downstream segment (activities using space data to offer products, services and ground segment applications) of the space economy. Key Findings: The space economy is a broad ecosystem of space-based and terrestrial markets and activities. Projections for the future of space reflect the multi-stakeholder nature of the space economy and the growing integration of space-based and terrestrial activities, with significant change expected in both the upstream and downstream segments. Out to 2050, the upstream segment may experience both incremental and transformative change. Key developments include the development, adoption and adaptation of new and emerging technologies, evolving concepts for space flight and operations, and the application of new design and manufacturing techniques, including 'Industry 4.0'.
Future downstream markets could encompass a variety of space, hybrid and terrestrial activities and end users. We identified around 200 potential use cases and organised these into 15 clusters: agriculture; climate and environmental protection; energy; construction, repair and engineering; extractive industries; tourism, culture and entertainment; defence, security and safety; finance and commerce; health, medicine and pharmaceuticals; illicit activities; logistics; manufacturing; science, research and education; telecommunications; and transport. Various socio-technological enablers and barriers could affect the evolution of upstream and downstream space markets up to 2050. Important enablers include technological innovation, falling launch costs and commercialisation. Conversely, regulatory and socio-cultural factors represent prominent barriers to the future development of the space economy. The development of new markets and use cases may render the space sector increasingly inseparable from the wider economy by 2050. Recommendations: To help ensure a more 'future proof' space strategy, the UK needs to consider the increasing convergence between space and all other sectors of the wider economy, as well as the merging of multiple technologies, markets and use cases; work with partners and allies to foster a deeper understanding of competition and collaboration in space and maximise influence over the future direction of an increasingly 'congested, contested and competitive' space domain; deepen understanding of the UK's unique strengths and 'value proposition' as a potential partner for other government, military, commercial and civil actors in space; and continuously foster innovation and the capacity of space-related organisations to leverage new concepts or technologies that may increase the agility, adaptability, competitiveness and resilience of the UK space sector.
Table of Contents: Chapter One, Introduction; Chapter Two, Mapping potential future uses of space; Chapter Three, Future upstream markets and applications out to 2050; Chapter Four, Future downstream markets and applications out to 2050; Chapter Five, Enablers and barriers for future uses of space out to 2050; Chapter Six, Conclusions and implications for the UK. This research was commissioned by the UK Space Agency and conducted by RAND Europe. This report is part of the RAND Corporation Research report series. Citation: Black, James, Linda Slapakova, and Kevin Martin, Future Uses of Space Out to 2050: Emerging threats and opportunities for the UK National Space Strategy, RAND Corporation, RR-A609-1, 2022.
As of January 12, 2023: https://www.rand.org/pubs/research_reports/RRA609-1.html