article_text | topic
---|---
Starbucks is today officially introducing Starbucks Odyssey, launching later this year — the coffee chain’s first foray into building with web3 technology. The new experience combines the company’s successful Starbucks Rewards loyalty program with an NFT platform, allowing its customers to both earn and purchase digital assets that unlock exclusive experiences and rewards.
The company had earlier teased its web3 plans to investors, saying it believed this new experience would build on the current Starbucks Rewards model where customers today earn “stars” which can be exchanged for perks, like free drinks. It envisions Starbucks Odyssey as a way for its most loyal customers to earn a broader set of rewards while also building community.
To develop the project, Starbucks brought in Adam Brotman, the architect of its Mobile Order & Pay system and the Starbucks app, to serve as a special advisor. Now the co-founder of Forum3, a web3 loyalty startup, Brotman and his team worked on Starbucks Odyssey alongside the Seattle coffee chain’s own marketing, loyalty and technology teams.
While Starbucks had been investigating blockchain technologies for a couple of years, it has only been involved in this particular project for around six months, Starbucks CMO Brady Brewer told TechCrunch. He says the company wanted to invest in this area, but not as a “stunt” side project, as many companies are doing. Rather, it wanted to find a way to use the technology to enhance its business and expand its existing loyalty program.
It opted to make NFTs the passes that allow access to this digital community, but it’s intentionally obscuring the nature of the technology underpinning the experience in order to bring in more consumers — including non-technical people — to the web3 platform.
“It happens to be built on blockchain and web3 technologies, but the customer — to be honest — may very well not even know that what they’re doing is interacting with blockchain technology. It’s just the enabler,” Brewer explains.
To engage with the Starbucks Odyssey experience, Starbucks Rewards members will log in to the web app using their existing loyalty program credentials.
Once there, they’ll be able to engage with various activities, which Starbucks calls “journeys” — like playing interactive games or taking on challenges designed to deepen their knowledge of the Starbucks brand or coffee in general. As they complete these journeys, members can collect digital collectibles in the form of NFTs (non-fungible tokens). Starbucks Odyssey, however, does away with the tech lingo and calls these NFT collectibles “journey stamps” instead.
Additionally, a set of limited-edition NFTs will be available to purchase in the Starbucks Odyssey web app, which also works on mobile devices. Though hosted on the Polygon blockchain, these NFTs will be bought using a credit or debit card — a crypto wallet is not required. The company believes this will make it easier for consumers to engage with the web3 experience by lowering the barrier to entry. It also won’t complicate consumers’ transactions with things like “gas fees,” preferring to offer a bundled price.
The company is not yet ready to share what its NFTs will cost or how many will be available at launch, saying these are decisions that are still being ironed out.
However, the various “stamps” (NFTs) will include a point value based on their rarity and can be bought or sold among Starbucks Odyssey members in the marketplace, with the ownership secured on the blockchain. The artwork on the NFTs is being co-created by Starbucks and outside artists, and a portion of the proceeds from the sale of the limited-edition collectibles will be donated to support causes chosen by Starbucks employees and customers.
By collecting the stamps, members will gain points that can unlock exclusive benefits.
These perks go beyond those you can earn with a traditional Starbucks Rewards account and its “stars.” While today, members can earn things like free coffee, free food or select merchandise, the points earned in Starbucks Odyssey will translate into experiences and other benefits. On the lower end, that could be a virtual espresso martini-making class or access to unique merchandise and artist collaborations. As you gain more points, you may earn invites to special events hosted at Starbucks Reserve Roasteries, or even earn a trip to the Starbucks Hacienda Alsacia coffee farm in Costa Rica. It’s expected the very largest perks will be reserved for those who purchase NFTs, though lesser versions may be offered to those who earn their way up.
For instance, a paid NFT could offer the full travel package and farm tour, while an earned NFT could offer the tour alone with flights and hotels left up to the user. Starbucks hasn’t made any formal decisions on this front, however.
But what the company can say is that it wants to deeply integrate the program with its existing loyalty rewards, beyond simply using the same user account credentials for both programs.
Brewer says Starbucks is already imagining how some of the activities that earn NFTs will be connected to real-world Starbucks purchases, for instance.
In Odyssey, users earn NFTs by completing challenges, which might also include a real-world activity like “try three things on the espresso menu.” This would require the user to show their barcode at checkout — as they would if earning stars — to have their transaction counted toward the Starbucks Odyssey challenge. The company is still determining what mix of games, challenges and quests it will include at launch.
“But we’ll have experiences that do link directly to customers’ behavior in our stores,” Brewer stresses. Most importantly, the company wants to make gaining NFTs something anyone can do — not just those with money to blow on digital collectibles, as is often the case with current NFT communities, which price out the average user.
“There will be a lot of ways for people to earn [rewards] without having to spend a lot of money,” says Brewer. “We want to make this super easy and accessible. There will be plenty of everyday experiences customers can earn, like virtual classes or access to limited-edition merchandise, for instance. The range of experiences will be quite vast and very accessible,” he adds.
Starbucks says it explored all the different blockchains for the project but landed on the “proof-of-stake” blockchain technology built by Polygon for this effort because it uses less energy than first-generation “proof-of-work” blockchains, which is more in line with its conservation goals. The idea to enter into the world of web3 makes sense for a company known for taking advantage of emerging technologies and making them more approachable and easy for consumers to access. In years past, Starbucks introduced Wi-Fi in its stores to encourage customers to spend more time during visits. It also pushed the idea of mobile wallets long before Apple Pay became ubiquitous. And it made mobile ordering the norm well ahead of the COVID pandemic, when other restaurant chains picked it up.
But one criticism leveled against many traditional businesses when they enter the web3 market is that they’re approaching it as a marketing stunt, not a real endeavor. Starbucks, of course, argues that’s not the case here — but only time will tell how serious its interest may be.
“We’re bullish on the future of these technologies enabling experiences that were not possible before,” Brewer claims. The intention is to be flexible and move with the customers as the web3 market changes, he explains. “It’s really important that we’re looking at it for the long-term,” he continues. “But, given that we’re plugging it into our industry-leading, massive scale rewards program — we’re committed,” he says.
The company says its web3 platform will open its waitlist (waitlist.starbucks.com) on September 12 and will launch later in the year. It will remove the waitlist and open the platform more broadly sometime next year. | Emerging Technologies |
Cryomotive is commercializing cryo-compressed hydrogen (CcH2) storage tanks and refueling stations (HRS) for trucks and commercial vehicles. Photo Credit: Cryomotive

Image above title: CcH2 offers higher storage density for longer range. Photo Credit: Cryomotive

Hydrogen is recognized as a key part of the energy transition required to reduce CO2 emissions and address the growing climate crisis. In the September 2022 report “Hydrogen Insights 2022,” the Hydrogen Council highlighted 680 global large-scale projects that are investing $240 billion in hydrogen up to 2030 — an increase of 50% since November 2021. The report also quoted Tom Linebarger, executive chairman at engine manufacturer Cummins (Columbus, Ind., U.S.): “To move to a zero emissions future, we must have multiple solutions available for our customers who require vastly different applications around the world and hydrogen will play a critically important role.”

Multiple solutions will also be required within the hydrogen market, to meet the different requirements for storing and refueling passenger cars versus heavy trucks, for example, and larger versus smaller aircraft. I have written about Type IV compressed hydrogen gas (CGH2) tanks via feature articles in 2020 and 2021, while more recent articles discuss liquid hydrogen (LH2) tanks for heavy trucks and aviation (see “Demonstrating composite LH2 tanks for commercial aircraft” and “ZeroAvia advances … with LH2 in 2027”). Cryo-compressed hydrogen (CcH2) offers a third option for onboard storage tanks in transport/mobility applications.

Hydrogen storage tank types. Photo Credit: Slide 61, “Carbon Fiber Composites and the Hydrogen Economy: Opportunities and Challenges,” by Mike Favaloro, Ginger Gardiner and Jeff Sloan, Carbon Fiber 2022 conference.

CcH2 tanks at BMW

BMW CcH2 storage, cryogenic gas denser than LH2. Photo Credit: Slide 8, “Cryo-compressed Hydrogen Storage,” BMW Group presentation by Klaas Kunze and Oliver Kircher, Cryogenic Cluster Day, Oxford, U.K. Sep. 28, 2012.

BMW CcH2 storage tank. Photo Credit: Slide 9, “Cryo-compressed Hydrogen Storage,” BMW Group presentation by Tobias Brunner, Hydrogen Storage Workshop, Washington, D.C., U.S. Feb. 15, 2011.

As shown in the graphic above, CcH2 tanks offer a hybrid solution between LH2 and CGH2 storage. By using cold temperatures (e.g., 40-80 K/-233°C to -193°C) and medium pressure (e.g., 350 bar), BMW eliminated LH2’s boil-off issues — LH2 boils above -253°C — and achieved storage densities much higher than CGH2 and LH2. In presentations from 2010-2013, BMW described a prototype CcH2 system for a car that enabled <5-minute refueling and >500-kilometer range. The system used a 235-liter composite overwrapped pressure vessel (COPV) as an inner tank and cryoinsulation between this inner tank and a metal outer tank/jacket. This CcH2 tank reportedly stored 7.1 kilograms of hydrogen at 350 bar — versus 2.5 and 4.6 kilograms of hydrogen in standard 350-bar and 700-bar CGH2 tanks, respectively — with a gravimetric density of 5.4 weight% and a boil-off rate of <1% per year. The tank was elongated to fit along the car’s central tunnel.

One of the key researchers involved in that work was Dr. Tobias Brunner, who left the company in 2015. He co-founded Cryomotive (Grasbrunn, Germany) in 2020, having acquired key BMW patents to adapt its CcH2 technology for trucks, commercial vehicles and aircraft. Cryomotive has already built full-scale demonstrator tanks for trucks.
Once commercialized, these tanks will range from 600 to 700 millimeters in diameter and 2,350 to 2,650 millimeters in length to hold 75 to 115 kilograms of CcH2 gas in two- to four-tank system configurations. They will feature a Type III (aluminum liner overwrapped with carbon fiber/epoxy composite) inner pressure vessel encased in and separated from an aluminum outer jacket by multi-layer insulation (MLI) in a vacuum. MLI comprises multiple layers of aluminum foil and glass fiber fleece to prevent heat transfer by radiation. Nonconductive composite suspension/supports maintain the inner tank position within the outer tank.

In September 2022, Cryomotive announced it had commissioned an automated winding machine from Mikrosam (Prilep, Macedonia). “We’ve built up manufacturing capabilities because no one in the world had that available for this size of overwrapped tanks for trucks,” says Brunner. “We’ve also developed a new HRS concept with new pumps and a new nozzle that we developed with a partner. It's the highest capacity nozzle in the world at 15 kilograms/minute and yet quite compact.”

Cryomotive is targeting first commercial applications in heavy trucks by 2025, scaling to produce hundreds of CcH2 tanks by 2026 and thousands of tanks by 2027. “The technology could also be a perfect solution for smaller aircraft and small ships,” notes Brunner.

CRYOGAS can be used to fill an onboard CcH2 storage tank by either cryo-compressing liquid hydrogen (LH2) or cryo-cooling gaseous hydrogen (GH2). Photo Credit: Cryomotive.

CRYOGAS: Between the two extremes

Brunner explains that a cryo-compressed hydrogen storage system “is an insulated pressure vessel that you overfill with cold H2 gas — what we call CRYOGAS — that has 80% higher density than ambient temperature H2 gas at 700 bar, up to 80 grams/liter.” Higher density enables storing more H2 fuel in the tank for longer range. Brunner says CcH2 provides a solution between the two extremes of CGH2 at ambient temperature and high pressure (700 bar) and LH2 below its boiling point of -253°C/20 K at ambient pressure (see “BMW CcH2 storage, cryogenic gas denser than LH2” graph above).

Highest density, less effort for hydrogen refueling stations (HRS). Cryomotive asserts that CcH2 not only offers the highest density for H2 in the vehicle storage tank but also requires less conditioning effort and corresponding equipment for refueling, regardless of whether the HRS is starting with compressed gas H2 (top) or liquid H2 (bottom) as a fuel. Photo Credit: Cryomotive and BMW

High-pressure gas, says Brunner, “demands compressed gas in the vehicle tank. To refuel, this requires a series of compressors plus a lot of high-pressure buffer tanks, as well as pre-cooling down to -40°C.” [See top image at right]. Pre-cooling is necessary because Type IV storage tanks must not exceed 85°C to prevent degradation of the plastic tank liner and sealings. Fast refueling increases the temperature of both the hydrogen and the liner. Thus, regulations demand CGH2 must be cooled to -40°C before it is dispensed into 700-bar tanks and to -20°C before refueling 350-bar tanks.

To give an example of the cost this entails, Brunner describes a hydrogen refueling station (HRS) concept proposed but quickly abandoned by H2 fuel cell truck manufacturer Nikola (Phoenix, Ariz., U.S.). “They proposed a solar-powered station with a refueling capacity of 8 tons of H2 gas per day,” he says.
“This would have required 4,000 kilograms of high-pressure buffer tanks at a cost of $1,500/kilogram and multiple compressors at $1 million each — because no single compressor can provide direct feed at the required 5 to 8 kilograms per minute for fast refueling of heavy trucks. It also needed chillers to pre-cool the H2 gas. That CAPEX and the massive amount of energy required was a showstopper.”

Brunner adds that CGH2 at 700 bar also demands a lot of carbon fiber to contain that pressure in the vehicle tanks. This is a huge issue because not enough carbon fiber is being produced currently with the high strength required — e.g., 4.9 gigapascals, the standard established by Toray’s (Tokyo, Japan) T700 carbon fiber. “Toray fibers are literally not available these days,” he says, “and the cost has exploded. For example, for a single 700-bar tank for trucks, cost can reach $20,000 to $35,000 just for the carbon fiber.”

Cryogenic LH2, on the other hand, does lower HRS cost, says Brunner, “because you can store LH2 in large metal tanks and pump directly into the vehicle, but you need to cool down all the lines or you will have losses along the way and also in the vehicle tank.” The losses come from venting required to relieve pressure built as the LH2 boils into vapor. Onboard the vehicle, LH2 is also stored in stainless steel or aluminum tanks which must be kept at cryotemperatures. Although they don’t require active cooling (e.g., powered chillers) or carbon fiber, they are not cheap. Brunner notes their heat leak must be 5 watts per hour or less. This mandates a dewar construction of inner and outer tanks — both typically metal — separated by MLI in a high-quality vacuum with pressure below 10⁻⁴ pascals. This vacuum generation requires up to two weeks of heating, says Brunner, to achieve the necessary vacuum inside, as well as multiple pipes and systems to control the flow and pressure balance of liquid to gas. Such LH2 tanks are also heavy, with past construction requiring an inner vessel wall of 3- to 4-millimeter-thick stainless steel or aluminum.

Note that Daimler Truck, hydrogen supplier Linde and Salzburger Aluminium Group (SAG, Lend, Austria) are developing LH2 tanks which they claim are thinner and more economical using subcooled LH2 (sLH2). “This is a concept,” explains Brunner, “where they fuel at a supercritical pressure up to 16 bar.” Supercritical means the pressure is above the critical point, the highest temperature along the pressure-temperature curve for liquid-to-gas phase change (see the purple region in the chart below). “Thus, they compress LH2 up to almost the cryo-compressed regime — but not to the high pressures that we use — and go into the supercritical region in order to avoid evaporation of sLH2 on the way to the vehicle. They let the pressure be reduced in the tank to 5-6 bars in a two-phase state — i.e., containing both liquid and gas — during operation.”

CRYOGAS offers a solution between LH2 (far left) and CGH2 (far right). Subcooled LH2 is shown at left in purple. Photo Credit: Cryomotive

Brunner adds he does not favor this approach, since evaporation losses at the station and onboard the vehicle can only be avoided in an optimal case of pre-conditioned tanks and station. “I developed LH2 tanks for years,” he explains. “That was my first job at BMW. And we gave it up after 35 years of research because refueling cannot be done without losses.
You not only have to cool down all of the lines, but you normally need to depressurize the tank before it can take fresh LH2. Within the fleet that BMW operated, there was a significant amount of LH2 used just to cool down the system before it could be refueled.”

Why CRYOGAS is a solution

After his work on LH2 tanks, Brunner worked on developing CcH2 at BMW for more than five years. “We found that if you compress LH2 into a cryogenic gas at 30 or 40 megapascals [300 or 400 bar], you can basically increase its density,” he explains. “That was the first hypothesis that no one believed, but we built a pump with Linde and showed we could produce a high-pressure CRYOGAS at 30 megapascals with a density of 80 grams/liter compared to between 65 and 70 grams/liter for LH2.”

The graph above reflects even further development by Cryomotive for trucks. Its CcH2 system still offers higher density than LH2 while maintaining the benefits of a gas. “We don’t have a liquid that can evaporate, but instead a gas, which is inherently thermally robust,” says Brunner. “For example, when you put CRYOGAS into a warm line or tank, it just loses density and expands a little, but there is no large density change. Thus, we eliminate boil-off issues and can fuel the tank regardless of whether it is warm or cold. That concept is revolutionary and was proven to work at BMW.”

Brunner explains further, “You can basically pump fresh LH2 into the CRYOGAS and you will never lose any hydrogen. Alternatively, you can overfill it with very cold H2 gas. And when you run the vehicle, and say you empty the tank down from the triple overfill to the density of a 700-bar CGH2 tank, then you are in the gaseous region again. So, we are a gas tank all the time — just an overfilled gas tank, if you like, with very cold gas which converges to a warm gas when the tank is emptied down.”

Thus, a CRYOGAS tank is filled with CcH2 at up to 400 bar pressure, which then lowers as the vehicle drives and CcH2 is discharged. CRYOGAS tanks will deliver cold GH2 as fuel in a well-defined temperature range with an adjustable pressure up to 3 MPa (30 bar) to supply a fuel cell or H2 combustion engine, says Brunner. Similar to LH2 tanks, CcH2 tanks have an internal heat exchanger. “We can freely control the pressure level in our tanks by the internal heat exchanger,” he adds, “but decided to not go higher than 30 bar, since that would result in a warmer tank and lower average density (capacity) after refueling.”

Cryomotive refueling development

Brunner reiterates that CcH2 enables reduced HRS cost, which is a significant benefit. “The key point is our technology requires no buffers, no heat exchangers, no pre-cooling and no communication — but instead direct fueling using a reciprocating piston pump at very low cost.” To reiterate, refueling using LH2 requires only a lower-cost pump and dispenser. However, Cryomotive also wants to offer refueling of CcH2 tanks without LH2 being available, which means using CGH2. This option does require using a CRYOGAS compressor and expander — this will likely be a single, connected device at the station — but will still not require pre-cooling, heat exchangers nor communication.

No communication? “The onboard CGH2 tank needs to have communication with the HRS to avoid overheating,” Brunner explains. “A pressure stroke at the beginning of gaseous refueling identifies the tank pressure. The system must also know the temperature outside and in the tank.
A lookup table then provides the right pressure ramp and that defines the speed of refueling.” As he explained above, refueling too quickly can result in exceeding 85°C in the tank, which is not allowed. “Because that tank can be warm or cold and the ambient temperature outside can be warm or cold, communication between the station and the vehicle tank is needed to tell the station how quickly it is allowed to fill.”

“That’s not necessary for CRYOGAS because we cannot overheat,” says Brunner. “We can never end with 85°C in the tank because we operate at cryogenic temperatures, where heating due to compression is negligible due to the thermodynamic properties of hydrogen [see chart above].”

Energy consumption for compression at the HRS. Photo Credit: Slide 21, “Cryo-compressed Hydrogen Storage,” BMW Group presentation by Klaas Kunze and Oliver Kircher, Cryogenic Cluster Day, Oxford, U.K. Sep. 28, 2012.

“We can even pump at 1,000 kilograms/hour,” he continues. “We are developing such a pump with a simple design jointly with a partner. We can forecast with confidence that it can pump 500 kilograms/hour — 8 kilograms/minute — and it’s enormously cost effective — below $250,000.”

Cryomotive’s CRYOGAS refueling stations will use the Fives Cryomec Hy-Filling reciprocating pump to enable unlimited back-to-back refueling of trucks, buses and other heavy-duty vehicles with 80 kilograms of CcH2 for a 1,000-kilometer driving range. Photo Credit: Cryomotive, Fives Cryomec

Cryomotive is working with Fives Cryomec (Allschwil, Switzerland) to develop and validate its Cryomec Hy-Filling reciprocating pump for CRYOGAS refueling stations. “As a worldwide leader in cryogenics, Fives has been at the forefront of hydrogen for decades,” says Xavier Nicolas, CEO of Fives Cryomec. “We have manufactured more than 8,000 pumps ... and are looking forward to developing this new model with Cryomotive to boost green mobility for trucks and heavy-duty vehicles.”

“You need only one such pump in each CRYOGAS refueling station,” he adds, “plus one liquid bulk storage and the dispenser. You can build a CRYOGAS station for less than $1 million and it can fill one truck after the other, endless back-to-back refueling. That’s why we think the technology is so vital. It has a lot of advantages onboard the vehicle, but even more advantages for the HRS.”

Cost for a 4 ton H2/day HRS for heavy-duty trucks. Cryomotive asserts that CcH2 refueling station costs are 1/5 that of 700 bar CGH2 stations. Photo Credit: Cryomotive

“We’ve done the cost analysis — including total cost of ownership (TCO) — CAPEX, OPEX and overhead and profit,” says Brunner. “We have 1/5 the cost versus 700-bar CGH2 delivery. So, if we need a million euros, they need five million to achieve the same capacity of 4 tons H2 per day. Even compared to LH2, which is very economical, we are still less expensive, but we don’t have their issue of H2 losses.”

FAQs

Why did BMW give it up? “Hydrogen was seen as a niche market compared to batteries for electric passenger cars,” says Brunner. “And 700-bar gas tanks were seen as sufficient. No one at that time was asking for a higher capacity storage like they are now in trucks. For BMW, the amount of carbon fiber needed for a passenger car tank was not so high because you only need to store a few kilograms of H2 gas. You still have the effort required for refueling, but compared to a truck, you only need a few buffers, a smaller compressor and a smaller pre-cooler. So, 700-bar CGH2 tanks became the mainstream.”

Isn’t the CcH2 system complex and expensive? “It is a pressure vessel with super insulation,” concedes Brunner, “but that insulation is much simpler than required for an LH2 tank. The vacuum is an order of magnitude lower so you don’t need two weeks to heat it in an oven to attain the necessary quality, and the MLI can be prefabricated and does not need to be wound around the vessel in a clean environment. We wind the composite over the liner for the inner tank, install the pre-fabricated MLI, weld the outer jacket closed and then apply the heat to attain the vacuum, which we can do fairly quickly. It’s also a more robust system. Our tanks can afford some convection and a few pieces in contact. LH2 tanks cannot. In fact, we can allow up to 10 times more heat leak than LH2 tanks.”

Is active cooling required? “No,” says Brunner. “The insulation we use is enough to keep the system cold. When you drive the truck, you discharge cold gas from the insulated tank which cools down the tank by itself — this is simply thermodynamics. And even if you make a warm filling, you drive again, and it is cooled down again and gets back into the high-density regions of the operating range. So, we never need to actively cool, but instead the system cools itself by being used and by discharging hydrogen.”

What happens if the truck sits idle? “First, the CcH2 system can absorb much more heat than a LH2 system, and a CcH2 tank can actually stand idle much longer if it is half-empty. For example, at half-full it can spend two weeks idle before venting is needed. Anyway, commercial vehicles are operated constantly, and they need a lot of energy, so that’s the perfect application for our technology.”

Are there issues in maintaining the vacuum in the tank insulation, and does it need to be rechecked? “We do need to guarantee that the vacuum is not degrading too much, but the tanks can hold the required vacuum quality long enough. Once you would have a normal service check, where you connect the tank to a vacuum gauge or pump, read the vacuum level and reapply vacuum pressure if necessary. It is also easy to monitor the pressure in situ using very simple pressure transducers. If the pressure increases, meaning the vacuum quality is getting worse, then you re-apply vacuum pressure.”

Why is this shorter than during tank manufacture? “The first vacuum drawing must withdraw moisture and other contamination; with this done, redrawing vacuum will be quicker.”

Commercialization timeline

Cryomotive is working with heavy truck manufacturer MAN (Munich, Germany) as well as Clean Logistics (Winsen, Germany). “MAN also owns Navistar,” notes Brunner, “while Clean Logistics announced an order of 5,000 trucks from GP Joule in Germany, but they have also bought a truck manufacturer from the Netherlands, GINAF Nederland, to have their own platform and they are retrofitting/converting trucks as well. For both of them, we are building the same system — one tank on the left and right of the frame. This fits into any truck, because that's where you have the LNG [liquid natural gas] or diesel tanks today in the European and Asian trucks.”

Cryomotive’s target, says Brunner, “is to have these first trucks with our systems running in early 2025. And we’re working to be fully validated by then. We have done lots of cycle testing and other work toward certification. And we have also built the first stations jointly with our partner and seed investor Chart Industries [Ball Ground, Ga., U.S.] for the same timeline.” Chart is well-established in the H2 refueling and HRS market and will offer both LH2 and CcH2 refueling and storage.

“We decided that our strategy had to include both trucks and stations,” says Brunner, “because if we just build the truck’s onboard storage tanks, and no one builds stations that work with these tanks, then we lose. So, this is why we’ve had to develop very good concepts for the refueling pump and nozzle. And now we're moving to build our first station.”

Carbon fiber/epoxy overwrapped aluminum liner. Cryomotive CcH2 tanks feature a Type III inner pressure vessel and outer aluminum jacket (not shown). Here, an aluminum liner (top) is wrapped with carbon fiber/epoxy using a Mikrosam system. Photo Credit: Cryomotive

After demonstration of the first CRYOGAS station and trucks with CcH2 tanks, Brunner anticipates small series production — hundreds of systems — from mid-2025 towards 2026. “And thousands of tanks in 2027-2028,” he adds. “That’s when Daimler Trucks and all the big truck manufacturers aim to sell large numbers of H2/fuel cell trucks. Our timeline fits that pretty well. And we have the production capabilities ourselves. We know how to do it and we’re establishing how to add lines and scale up. It’s a huge investment to do all of the core component manufacturing in-house, but we don’t want to be dependent on anyone in the value chain that can slow us down.”

CRYOGAS for aviation

As stated earlier, Cryomotive’s targets include not only trucks and commercial vehicles, but also aviation.
Currently, however, almost all tank development in that industry centers around LH2. Why? “Because it's lighter,” says Brunner. “We can get to 8-10 weight % [wt% = kg H2/kg tank, or storage efficiency], which is much better than the typical 5-7 wt% for a 70 MPa CGH2 tank. But really large LH2 tanks can get to 30 or 40 weight%.” That’s a big advantage for aviation where weight reduction commands a premium.

Why such high efficiency for larger tanks? Because LH2 tanks at ambient pressure increase their volume-to-surface area (V/SA) ratio with increased size. For example, as spherical tank diameter increases from 1 to 6 meters, the V/SA increases from 0.7 to 2. Scaling cylindrical tanks from 1 × 2 meters (dia. x height) to 2 × 8 meters increases V/SA from 6 to 100. Thus, more H2 can be stored versus the material needed to encapsulate it. Maximizing V/SA also minimizes heat transfer which causes boil-off.

Brunner continues, “For smaller aircraft, the tank size is smaller — holding 20 to 40 kilograms — and then CcH2 and LH2 both have 7-10 weight%. But if we talk about hundreds or thousands of kilograms, then LH2 is unbeaten in terms of storage efficiency. However, they still have all the problems onboard and during refueling in terms of boil-off and losses.”

Another disadvantage is that sLH2 tanks are best-suited for supplying into low-pressure H2 power devices, such as fuel cells at 4 to 5 bar. In order to supply into supercharged H2 combustion engines or gas turbines at 8 to 100 bar, additional compression equipment is required. “If there is only 5 bar of pressure in the tank,” he explains, “then you will need all these cryopumps onboard to increase the pressure for the engine, and they need to be redundant because there can be no failure. Our tanks, however, can provide sufficient pressure just from the tank. We don’t need a pump or active cooling. We have no losses during refueling but we are a little too heavy for large-scale aviation.”

For small aircraft, Brunner believes that CcH2 could be a perfect solution. “We can fill very quickly; we are as safe as a LH2 system and we also have the compactness because we offer higher density in the tank. We are a good fit for aviation and we have that plan.” Cryomotive is interested in discussions with small aircraft companies, but in the meantime, it remains very focused on its timeline to commercialize its CRYOGAS tanks and stations for trucks and other commercial vehicles. | Emerging Technologies
Today, the U.S. Government unveiled its framework for a “Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy” at the Summit on Responsible AI in the Military Domain (REAIM 2023). The aim of the Declaration is to build international consensus around how militaries can responsibly incorporate AI and autonomy into their operations, and to help guide states’ development, deployment, and use of this technology for defense purposes to ensure it promotes respect for international law, security, and stability.
The Declaration consists of a series of non-legally binding guidelines describing best practices for responsible use of AI in a defense context. These include ensuring that military AI systems are auditable, have explicit and well-defined uses, are subject to rigorous testing and evaluation across their lifecycle, and that high-consequence applications undergo senior-level review and are capable of being deactivated if they demonstrate unintended behavior. We believe that this Declaration can serve as a foundation for the international community on the principles and practices that are necessary to ensure the responsible military uses of AI and autonomy.
We commend the Netherlands and the Republic of Korea for taking the initiative to co-host the REAIM Summit and for launching a timely discussion on this topic. We view the need to ensure militaries use emerging technologies such as AI responsibly as a shared challenge. We look forward to engaging with other likeminded stakeholders to build a consensus around this proposed Declaration and develop strong international norms of responsible behavior. | Emerging Technologies |
Google is testing a tool that uses AI to write news stories and has started pitching it to publications, according to a new report from The New York Times. The tech giant has pitched the AI tool to The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp.
The tool, internally codenamed “Genesis,” can take in information and then generate news copy. Google reportedly believes that the tool can serve as a personal assistant for journalists by automating some tasks in order to free up time for others. The tech giant sees the tool as a form of “responsible technology.”
The New York Times reports that some executives who were pitched on the tool saw it as “unsettling,” noting that it seemed to disregard the effort that went into producing accurate news stories.
“In partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide AI-enabled tools to help journalists with their work,” a Google spokesperson said in a statement to TechCrunch.
“For instance, AI-enabled tools could assist journalists with options for headlines or different writing styles,” the spokesperson added. “Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity, just like we’re making assistive tools available for people in Gmail and in Google Docs. Quite simply these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles.”
Some news organizations, including The Associated Press, have long used AI to generate stories for things like corporate earnings, but these news stories represent a small fraction of the organization’s articles overall, which are written by journalists.
Google’s new tool will likely spur anxiety, as AI-generated articles that aren’t fact-checked or thoroughly edited have the potential to spread misinformation.
Earlier this year, American media website CNET quietly began producing articles using generative AI, in a move that ended up backfiring for the company. CNET ended up having to issue corrections on more than half of the articles generated by AI. Some of the articles contained factual errors, while others may have contained plagiarized material. Some of the website’s articles now have an editor’s note reading, “An earlier version of this article was assisted by an AI engine. This version has been substantially updated by a staff writer.” | Emerging Technologies |
Building ever larger language models has led to groundbreaking jumps in performance. But it’s also pushing state-of-the-art AI beyond the reach of all but the most well-resourced AI labs. That makes efforts to shrink models down to more manageable sizes more important than ever, say researchers.
In 2020, researchers at OpenAI proposed AI scaling laws that suggested increasing model size led to reliable and predictable improvements in capability. But this trend is quickly putting the cutting-edge of AI research out of reach for all but a handful of private labs. While the company has remained tight-lipped on the matter, there is speculation that its latest GPT-4 large language model (LLM) has as many as a trillion parameters, far more than most companies or research groups have the computing resources to train or run. As a result, the only way most people can access the most powerful models is through the APIs of industry leaders.
“We won’t be able to make models bigger forever. There comes a point where even with hardware improvements, given the pace that we’re increasing the model size, we just can’t.”
—Dylan Patel, SemiAnalysis
That’s a problem, says Dylan Patel, chief analyst at consultancy SemiAnalysis, because it makes it more or less impossible for others to reproduce these models. That means external researchers aren’t able to probe these models for potential safety concerns and that companies looking to deploy LLMs are “tied to the hip” of OpenAI’s dataset and model design choices.
There are more practical concerns too. The pace of innovation in the GPU chips used to run AI is lagging behind model size, meaning that pretty soon we could face a “brick wall” beyond which scaling cannot plausibly go. “We won’t be able to make models bigger forever,” he says. “There comes a point where even with hardware improvements, given the pace that we’re increasing the model size, we just can’t.”
How large do large language models need to be?
Efforts to push back against the logic of scaling are underway though. Last year, researchers at DeepMind showed that training smaller models on far more data could significantly boost performance. DeepMind’s 70 billion parameter Chinchilla model outperformed the 175 billion parameter GPT-3 by training on nearly five times more data. This February, Meta used the same approach to train much smaller models that could still go toe-to-toe with the biggest LLMs. Its resulting LLaMa model came in a variety of sizes between 7 and 65 billion parameters, with the 13-billion parameter version outperforming GPT-3 on most benchmarks.
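To make that trade-off concrete, the Chinchilla work fit a loss model of roughly the following form; the functional form follows DeepMind's paper, but the constants quoted here are approximate and best treated as illustrative rather than definitive:

```latex
% Approximate Chinchilla-style fit: loss as a function of parameters N and training tokens D
% (exponents are approximate values reported by DeepMind; illustrative only)
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad \alpha \approx 0.34, \quad \beta \approx 0.28
% For a fixed compute budget C \approx 6ND, minimizing L grows N and D together,
% which works out to roughly 20 training tokens per parameter:
N_{\mathrm{opt}} \propto \sqrt{C}, \qquad D_{\mathrm{opt}} \propto \sqrt{C}, \qquad D_{\mathrm{opt}} \approx 20\, N_{\mathrm{opt}}
```

On that accounting, GPT-3's 175 billion parameters were paired with far fewer tokens than the rule of thumb suggests, which is why a 70-billion-parameter model trained on much more data could overtake it.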
The company’s stated goal was to make such LLMs more accessible, and so Meta offered the trained model to any researchers who asked for it. This experiment in accessibility quickly got out of control though, after the model was leaked online. And earlier this month, researchers at Stanford University pushed things further by taking the 7 billion parameter version of LLaMa and fine-tuning it on 52,000 query responses from GPT-3.5, the model that originally powered ChatGPT and (as of press time) still powers OpenAI’s free version. The resulting model called Alpaca was able to replicate much of the behavior of the OpenAI model, according to the researchers, who released their data and training recipe so others could replicate it.
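For readers curious what that fine-tuning step involves in practice, the sketch below shows a minimal supervised instruction-tuning loop in the Alpaca style, using the Hugging Face transformers and datasets libraries. It is an illustration only: the base-model name, data file and hyperparameters are placeholders, not the Stanford team's published recipe.

```python
# Minimal sketch of Alpaca-style supervised instruction tuning (illustrative only).
# Assumes a JSON file of {"instruction": ..., "output": ...} pairs; names and paths are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "decapoda-research/llama-7b-hf"  # placeholder; substitute whatever base weights you have access to
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

def format_example(example):
    # Concatenate prompt and response into a single causal-LM training string
    text = f"### Instruction:\n{example['instruction']}\n\n### Response:\n{example['output']}"
    return tokenizer(text, truncation=True, max_length=512)

dataset = (load_dataset("json", data_files="instruction_data.json")["train"]
           .map(format_example, remove_columns=["instruction", "output"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="alpaca-sketch", per_device_train_batch_size=4,
                           num_train_epochs=3, learning_rate=2e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The key point is how little machinery is involved: the expensive pretraining is already done, and the fine-tuning pass only nudges an existing model toward instruction-following behavior.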
“Increasingly, we were finding that there was a gap in the qualitative behavior of models available to the research community and the closed-source models being served by leading LLM providers,” says Tatsunori Hashimoto, an assistant professor at Stanford who led the research. “Our view was that having a capable and accessible model was important to have the academic community engage in analyzing and solving the many deficiencies of instruction following LLMs.”
Since then, hackers and hobbyists have run with the idea, using the LLaMa weights and the Alpaca training scheme to run their own LLMs on PCs, phones and even a Raspberry Pi single-board computer. Hashimoto says it’s great to see more people engaging with LLMs, and he’s been surprised at the efficiency people have squeezed out of these models. But he stresses that Alpaca is still very much a research model not suitable for widespread use and that broad accessibility to LLMs also carries risks.
“If we can take advantage of the knowledge already frozen in these models, we should.”
—Jim Fan, Nvidia
Patel says there are question marks around the way the Stanford researchers evaluated their model, and it’s not clear the performance is as good as that of larger models. But there are plenty of other approaches for boosting efficiency that are making progress too. One promising technique is known as “mixture of experts,” he says, which involves training multiple smaller sub-models specialized to specific tasks rather than a single large model to solve all of them. The MoE approach makes a lot of sense, says Patel. Our brains follow a similar pattern, with different regions specialized for different tasks.
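A hedged sketch of the core idea: in a mixture-of-experts layer, a small "router" network picks which expert sub-network handles each input, so only a fraction of the total parameters are active per token. The PyTorch code below is a minimal, self-contained illustration (top-1 routing, no load balancing), not any particular production implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopOneMoELayer(nn.Module):
    """Minimal mixture-of-experts layer: a router picks one expert per token."""
    def __init__(self, d_model: int, d_hidden: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                                # x: (tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)         # routing probabilities per token
        expert_idx = gate.argmax(dim=-1)                 # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                # Scale by the gate value so routing stays differentiable
                out[mask] = expert(x[mask]) * gate[mask, i].unsqueeze(-1)
        return out

layer = TopOneMoELayer(d_model=64, d_hidden=256, num_experts=4)
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

Production systems typically add top-2 routing and auxiliary load-balancing losses so no single expert is starved of tokens; those details are omitted here for brevity.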
Nvidia recently used the approach to build a vision-language model called Prismer designed to answer questions about images or provide captions. They showed it could match the performance of models trained on 10 to 20 times more data. “There are tons of high-quality pre-trained models for various tasks like depth estimation, object segmentation, 3D understanding,” says Jim Fan, AI research scientist at Nvidia. “If we can take advantage of the knowledge already frozen in these models, we should.”
Turning LLM sparsity into opportunity
Another attractive approach for boosting model efficiency is to exploit a property known as “sparsity,” says Patel. A surprisingly large number of weights in LLMs are set to zero, and performing operations on these values is a waste of computation. Finding ways to remove these zeros could help shrink the size of models and reduce computational costs, says Patel.
Sparsity is one of the most promising future directions for compressing models, says Sara Hooker, who leads the research lab Cohere For AI, but current hardware is not well-suited to exploit it. Patterns of sparsity typically don’t have any obvious structure, but today’s GPUs are specialized for processing data in well-defined matrices. This means that even when a weight is zero it still needs to be represented in the matrix, which takes up memory and adds computational overhead. While enforcing structured patterns of sparsity is a partial workaround, the chips can’t take full advantage and further hardware innovation is probably needed, Hooker says. “The interesting challenge is how do you represent the absence of something without actually representing it?” she says.
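To make the "absence of something" problem concrete, the sketch below measures how many weights in a tensor are zero and applies two simple pruning schemes: unstructured magnitude pruning, and the kind of fixed 2:4 structured pattern that some recent GPUs can accelerate. It illustrates the idea only and is not a recipe for pruning a real LLM.

```python
import torch

def sparsity(weight: torch.Tensor, tol: float = 0.0) -> float:
    """Fraction of weights whose magnitude is at or below tol."""
    return (weight.abs() <= tol).float().mean().item()

def magnitude_prune(weight: torch.Tensor, fraction: float) -> torch.Tensor:
    """Unstructured pruning: zero out the smallest-magnitude weights."""
    k = int(weight.numel() * fraction)
    threshold = weight.abs().flatten().kthvalue(k).values
    return torch.where(weight.abs() <= threshold, torch.zeros_like(weight), weight)

def prune_2_to_4(weight: torch.Tensor) -> torch.Tensor:
    """Structured '2:4' pruning: keep the 2 largest of every 4 consecutive weights."""
    groups = weight.reshape(-1, 4)
    keep = torch.zeros_like(groups)
    top2 = groups.abs().topk(2, dim=-1).indices
    keep.scatter_(1, top2, groups.gather(1, top2))
    return keep.reshape(weight.shape)

w = torch.randn(256, 256)
print(f"dense sparsity:       {sparsity(w):.2f}")
print(f"after 50% magnitude:  {sparsity(magnitude_prune(w, 0.5)):.2f}")
print(f"after 2:4 structured: {sparsity(prune_2_to_4(w)):.2f}")
```

The catch Hooker describes is that zeroing a weight does not by itself save anything: unless the hardware and storage format can skip those entries, the zeros still occupy memory and still get multiplied.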
Many of the techniques that are effective at compressing smaller AI models also don’t appear to translate well to LLMs, says Hooker. One popular approach is known as quantization, which reduces data requirements by representing weights using fewer bits, for instance using 8-bit floating-point numbers rather than 32-bit. Another is knowledge distillation, in which a large teacher model is used to train a smaller one. So far though, these techniques have had little success when applied to models above 6 billion parameters, says Hooker.
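The sketch below shows what post-training quantization means at its simplest: mapping 32-bit floating-point weights onto 8-bit integers with a single per-tensor scale, then measuring the rounding error. Real LLM quantization schemes (per-channel scales, outlier handling, 4-bit formats) are considerably more involved; this is only meant to make the basic idea tangible.

```python
import torch

def quantize_int8(weight: torch.Tensor):
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = weight.abs().max() / 127.0                       # map the largest magnitude to 127
    q = torch.clamp((weight / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

w = torch.randn(4096, 4096)
q, scale = quantize_int8(w)
error = (w - dequantize(q, scale)).abs().mean().item()

print(f"storage: {w.numel() * 4 / 1e6:.1f} MB as float32 -> {q.numel() / 1e6:.1f} MB as int8")
print(f"mean absolute rounding error: {error:.5f}")
```

Storing the same layer in int8 cuts memory by roughly four times; the open question Hooker raises is how much accuracy survives when tricks like this are pushed to models with tens of billions of parameters.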
Fighting against AI scaling laws also faces more prosaic challenges, says Patel. Part of the reason why they’ve proved so enduring is that it’s often easier to throw computing power at a well-understood model architecture than fine-tune new techniques. “If I have 1,000 GPUs for three months, what’s the best model I can make?” he says. “A lot of times, the answer is unfortunately that you really can’t get these new architectures to run efficiently.”
That’s not to say that efforts to shrink larger models are a waste of time, says Patel. However, he adds, scaling is likely to continue to be important for setting new states of the art. “The max size is going to continue to grow, and the quality at small sizes is going to continue to grow,” he says. “I think there’s two divergent paths and you’re kind of following both.”
Edd Gent is a freelance science and technology writer based in Bengaluru, India. His writing focuses on emerging technologies across computing, engineering, energy and bioscience. He's on Twitter at @EddytheGent and email at edd dot gent at outlook dot com. | Emerging Technologies
Artificial intelligence could be used to generate "unprecedented quantities" of realistic child sexual abuse material, an online safety group has warned.
The Internet Watch Foundation (IWF) said it was already finding "astoundingly realistic" AI-made images that many people would find "indistinguishable" from real ones.
Web pages the group investigated, some of which were reported by the public, featured children as young as three.
The IWF, which is responsible for finding and removing child sexual abuse material on the internet, warned they were realistic enough that it may become harder to spot when real children are in danger.
IWF chief executive Susie Hargreaves called on Prime Minister Rishi Sunak to treat the issue as a "top priority" when Britain hosts a global AI summit later this year.
She said: "We are not currently seeing these images in huge numbers, but it is clear to us the potential exists for criminals to produce unprecedented quantities of life-like child sexual abuse imagery.
"This would be potentially devastating for internet safety and for the safety of children online."
While AI-generated images of this nature are illegal in the UK, the IWF said the technology's rapid advances and increased accessibility meant the scale of the problem could soon make it hard for the law to keep up.
The National Crime Agency (NCA) said the risk is "increasing" and being taken "extremely seriously".
Chris Farrimond, the NCA's director of threat leadership, said: "There is a very real possibility that if the volume of AI-generated material increases, this could greatly impact on law enforcement resources, increasing the time it takes for us to identify real children in need of protection".
Mr Sunak has said the upcoming global summit, expected in the autumn, will debate the regulatory "guardrails" that could mitigate future risks posed by AI.
He has already met with major players in the industry, including figures from Google as well as ChatGPT maker OpenAI.
A government spokesperson told Sky News: "AI-generated child sexual exploitation and abuse content is illegal, regardless of whether it depicts a real child or not, meaning tech companies will be required to proactively identify content and remove it under the Online Safety Bill, which is designed to keep pace with emerging technologies like AI.
"The Online Safety Bill will require companies to take proactive action in tackling all forms of online child sexual abuse including grooming, live-streaming, child sexual abuse material and prohibited images of children - or face huge fines."
Offenders helping each other use AI
The IWF said it has also found an online "manual" written by offenders to help others use AI to produce even more lifelike abuse images, circumventing safety measures that image generators have put in place.
Like text-based generative AI such as ChatGPT, image tools like DALL-E 2 and Midjourney are trained on data from across the internet to understand prompts and provide appropriate results.
DALL-E 2, a popular image generator from ChatGPT creator OpenAI, and Midjourney both say they limit their software's training data to restrict its ability to make certain content, and block some text inputs.
OpenAI also uses automated and human monitoring systems to guard against misuse.
Ms Hargreaves said AI companies must adapt to ensure their platforms are not exploited.
"The continued abuse of this technology could have profoundly dark consequences - and could see more and more people exposed to this harmful content," she said. | Emerging Technologies |
SoftBank Begins Making Investments Again ‘Timidly With Fear’
SoftBank’s Vision Fund unit swung to a profit of ¥61 billion in the June quarter from a ¥2.3 trillion loss a year ago.
(Bloomberg) -- SoftBank Group Corp.’s Vision Fund eked out its first profit in more than a year and said it’s cautiously resuming investments to capitalize on the opportunities in artificial intelligence and other emerging technologies.
The Vision Fund invested $1.6 billion in the June quarter after coming to a virtual halt as investors soured on money-losing startups. But that’s still just a fraction of the fund’s early spending pace.
“We are investing timidly, with fear in our hearts,” SoftBank Chief Financial Officer Yoshimitsu Goto said during an earnings call. “But for an investor to play it safe is the same as not doing work.”
Under founder Masayoshi Son, SoftBank invested more than $140 billion in unprofitable startups from 2017, inflating valuations worldwide before they were punctured by the Covid pandemic, China’s tech crackdown and the US Federal Reserve’s rate hikes. Last year, the Vision Fund lost a record $30 billion.
But a year of restraint has helped SoftBank regain its financial footing. The company has accumulated a cash pile of almost ¥6 trillion ($42 billion), which Goto said he believes is the highest ever for the company. SoftBank’s loan-to-value ratio, or the ratio of its net debt against the equity value of its holdings, fell to 8% at the end of June, its lowest ever, he said.
SoftBank’s Vision Fund unit swung to a profit of ¥61 billion in the June quarter from a ¥2.3 trillion loss a year ago. That helped SoftBank report a smaller group-wide net loss, although the Japanese company remained underwater with earnings dragged down by paper losses on its stakes in Alibaba Group Holding Ltd., Deutsche Telekom AG and T-Mobile US Inc.
“Shifting to offense mode,” read one slide in his presentation. “But from the CFO’s perspective, I’d like to see us do that carefully,” said Goto.
How aggressively Son will be able to chase new deals hinges on the planned initial public offering of SoftBank’s Arm Ltd. The chip designer is seeking to raise as much as $10 billion in a market debut as soon as September at a valuation of between $60 billion and $70 billion. If successful, that would make Arm the largest tech debut on record after Alibaba Group Holding Ltd. and Meta Platforms Inc.
But Arm stumbled financially in the latest quarter. It logged a quarterly loss of ¥9.5 billion on an 11% decline in sales in dollar terms because of a slowdown in smartphone sales and buildup of inventory in the electronics market. Goto declined to go into detail on the planned offering.
“Arm’s IPO plan is going very smoothly,” he said. Son handed over the earnings presentations to Goto in November because he said he wanted to focus on Arm.
The Nasdaq 100 index, a proxy for tech stock performance, rallied 15% during the June quarter, capping its best ever first-half of a year. Hype over artificial intelligence and easing concern over higher interest rates have bolstered SoftBank’s investments in companies including Grab Holdings Ltd., Coupang Inc. and Roivant Sciences Ltd.
Tomoaki Kawasaki, a senior analyst at Iwai Cosmo Securities Co., said the rally in tech stocks should benefit SoftBank both in the Arm IPO and in the value of its portfolio companies.
“Once Arm goes public, investors would be able to bet on two profit streams: one from Arm and other AI- and chip-related investments; and another from the expansion of Vision Fund investments,” he said.
SoftBank has cut deeply into the staffing at the Vision Fund unit, in part because it has backed off new investments. The division reduced headcount by about 30% in the last fiscal year and began another round of layoffs this year, Bloomberg reported in June.
Navneet Govil, executive committee member for the Vision Funds, confirmed there was another round of cuts in the June quarter, but wouldn’t elaborate on how many jobs had been eliminated.
“We now believe we have right-sized the organization for the investment opportunities that we see ahead,” Govil said on the earnings call.
SoftBank has spent a large percentage of the $166 billion it raised across investment funds. But it still has more cash to invest than most venture firms.
“We have over $8 billion in available capital in Vision Fund 2 to invest,” Govil said. “At the same time, the bar for investment is very high.”
--With assistance from Vlad Savov, Edwin Chan and Ritsuko Ando.
| Emerging Technologies
Dec 5 (Reuters) - The use of artificial intelligence (AI) in emerging technologies continues to advance rapidly. San Francisco-based OpenAI made its latest creation, the ChatGPT chatbot, available for free public testing on Nov. 30. A chatbot is a software application designed to mimic human-like conversation based on user prompts. Within a week of ChatGPT being unveiled, over a million users had tried to make the tool talk, according to Sam Altman, co-founder and CEO of OpenAI.

WHO OWNS OPENAI AND IS ELON MUSK INVOLVED?

OpenAI, a research and development firm, was founded as a nonprofit in 2015 by Silicon Valley investor Sam Altman and billionaire Elon Musk and attracted funding from several others, including venture capitalist Peter Thiel. In 2019, the group created a related for-profit entity to take in outside investment. Musk, who remains engulfed in his overhaul of social networking firm Twitter, left OpenAI's board in 2018, but chimed in with his take on the viral phenomenon, calling it "scary good". Musk later tweeted that he was pausing OpenAI's access to Twitter's database after learning that the firm was using it to "train" the tool.

HOW OPENAI WORKS

OpenAI states that their ChatGPT model, trained using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF), can simulate dialogue, answer follow-up questions, admit mistakes, challenge incorrect premises and reject inappropriate requests. Initial development involved human AI trainers providing the model with conversations in which they played both sides – the user and an AI assistant. The version of the bot available for public testing attempts to understand questions posed by users and responds with in-depth answers resembling human-written text in a conversational format.

WHAT COULD IT BE USED FOR?

A tool like ChatGPT could be used in real-world applications such as digital marketing, online content creation, answering customer service queries or, as some users have found, even to help debug code. The bot can respond to a large range of questions while imitating human speaking styles.

IS IT PROBLEMATIC?

As with many AI-driven innovations, ChatGPT does not come without misgivings. OpenAI has acknowledged the tool's tendency to respond with "plausible-sounding but incorrect or nonsensical answers," an issue it considers challenging to fix. AI technology can also perpetuate societal biases like those around race, gender and culture. Tech giants including Alphabet Inc's (GOOGL.O) Google and Amazon.com (AMZN.O) have previously acknowledged that some of their projects that experimented with AI were "ethically dicey" and had limitations. At several companies, humans had to step in and fix AI havoc. Despite these concerns, AI research remains attractive. Venture capital investment in AI development and operations companies rose last year to nearly $13 billion, and $6 billion had poured in through October this year, according to data from PitchBook, a Seattle company tracking financings.

Reporting by Siddharth K in Bengaluru; Editing by Lisa Shumaker | Emerging Technologies
Cabinet Approves Rs 7,210 Crore For Phase 3 Of E-Courts Mission
Traffic challan proceedings will transition to a fully virtual format.
The Union Cabinet granted approval on Wednesday for the eCourts Project Phase III with an allocation of Rs 7,210 crore.
In a cabinet briefing, Union Minister Anurag Thakur said this phase aimed to establish online and paperless court systems.
Thakur highlighted the objectives of this phase, including the universalisation of e-filing and e-payments as well as the digitisation of legacy records. Critical components like case records, software applications, electronic evidence, and live-streaming data will be securely stored on the cloud.
Court complexes are set to have 4,400 e-Seva centres to facilitate efficient services.
Traffic challan proceedings will transition to a fully virtual format, eliminating the need for litigants and lawyers to be physically present in court. These virtual traffic courts will operate 24/7 across the country.
Phase III of the eCourts Mission is expected to span four years, indicating a comprehensive and sustained effort to modernise India's judicial processes.
"The impetus given to a tech-led judiciary will shape large-scale reforms to make the judiciary universally accessible with the help of emerging technologies," the Ministry of Law and Justice said in a post on social media platform X. | Emerging Technologies |
Join the audience for a Quantum Week live webinar at 9 a.m. GMT on 1 November 2022 exploring the forthcoming quantum technology revolution. Want to take part in this webinar?
(Courtesy: iStock/agsandrew)
Quantum science and technology is advancing and evolving rapidly and, in the last decade, has shifted from foundational scientific exploration to adoption by commercial and government organizations. It is essential that scrutiny and guidance is applied to this quantum revolution to bring other societal stakeholders onboard and ensure that the benefits can be maximized for all society.
What considerations exist for quantum technologies? How should we engage as a society in the future, as promised and created by this emerging sector? We will discuss some key questions that will shape the forthcoming quantum technology revolution.
Left to right: Rob Thew, Ana Belén Sainz, Zeki C Seskir, Alexander Holleitner, Mehul Malik, Vivek Krishnamurthy, Tara Roberson
Rob Thew is a senior researcher and group leader in the Quantum Technologies group at the University of Geneva. His research covers fundamental to applied topics in quantum communication and sensing. He is executive director of the Geneva Quantum Centre, chair of the Strategic Research Agenda Work Group for the European Quantum Flagship and founding editor-in-chief for the IOP journal, Quantum Science and Technology.
Ana Belén Sainz is a group leader in the Foundational Underpinnings of Quantum Technologies group at the International Centre for Theory of Quantum Technologies, University of Gdańsk, Poland. Her research on foundations of quantum theory focuses on understanding the nonclassical phenomena featured in Nature, and how to harness their power to enable new forms of information processing.
Zeki C Seskir is a doctoral researcher at Karlsruhe Institute of Technology (KIT) – Institute for Technology Assessment and Systems Analysis (ITAS) and co-ordinator of the project QuTec: Quantum Technology Innovations for Society. He conducts landscaping studies on quantum technologies to be utilized in technology assessment capabilities. His interests cover the emerging innovation and governance ecosystems of QT, together with their ethical, legal, societal and economic impacts, and potential futures.
Alex Holleitner is professor of physics at the Technical University of Munich working on the fundamental aspects of optics and electronics of quantum matter. Alex helped to establish a master programme on “quantum science and technology” within the Munich Center for Quantum Science and Technology and has initiated further programmes on quantum education e.g., to train experts from industry, and internships of MSc students at local quantum technology companies.
Mehul Malik is a professor of physics at Heriot-Watt University, Edinburgh, where he leads the Beyond Binary Quantum Information Laboratory. His research interests include quantum information processing and communication, fundamental studies of entanglement, and complex scattering media. He currently leads a QuantERA consortium studying quantum phenomena with complex media. Mehul is passionate about science communication and issues of gender diversity and researcher mobility in academia.
Vivek Krishnamurthy is the Samuelson-Glushko professor of law at the University of Ottawa and director of the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic. His work focuses on the regulatory and human-rights-related challenges that arise in cyberspace, advising on the impacts of new technologies. Vivek is a faculty associate of Harvard’s Berkman Klein Center for Internet & Society, senior associate of the Human Rights Initiative at the Center for Strategic & International Studies, and member of the Global Network Initiative’s Board of Directors.
Tara Roberson is a science communicator whose work focuses on responsible development and deployment of emerging technologies. As a postdoctoral researcher at the ARC Centre of Excellence for Engineered Quantum Systems, she works with quantum physicists to understand the implications of emerging technologies. Tara also works in industry on activities that address ethics, law and assurance for robotics, autonomous systems and artificial intelligence. | Emerging Technologies |
- Ford Motor's largest competition in electric vehicles isn't U.S. leader Tesla or crosstown rival General Motors; it's Chinese automakers, CEO Jim Farley said Thursday.
- Farley used Warren Buffett-backed BYD as the prime example of a Chinese automaker that has successfully developed and sold EVs.
- BYD has grown its sales in China from 445,000 units in 2015 to nearly 2 million last year, making it one of the top five automakers by sales in China, according to LMC Automotive.
Farley said Chinese companies such as Warren Buffett-backed BYD are ahead of the large U.S. automakers and startups on electric vehicles, specifically battery chemistry and other emerging technologies.
"We see the Chinese as the main competitor, not GM or Toyota," Farley said during the Morgan Stanley Sustainable Finance Summit.
He used BYD as the prime example of a Chinese automaker that has successfully developed and sold EVs — first in China, and now Europe.
"I like BYD. Totally vertically integrated, aggressive … very, very impressive company. And they were always committed to electric," Farley said when asked which company is doing EVs right.
BYD has grown its sales in China from 445,000 units in 2015 to nearly 2 million last year, making it one of the top five automakers by sales in China, according to LMC Automotive.
Farley's comments echo those of industry experts and investors regarding the growth of BYD and other Chinese automakers, which have government backing in China.
"BYD has a huge place, both from the electric vehicle perspective and also through the battery production side," Philip Ripman, portfolio manager at Storebrand Asset Management, told CNBC Pro Talks last week.
Ripman, who manages the $1 billion Storebrand Global Solutions sustainable fund, highlighted BYD's developments in cheaper, sodium-ion battery technology, which could potentially replace lithium batteries. He noted that these could become prevalent in BYD's more affordable EVs and help increase profit margins for the automaker.
Farley also noted BYD's battery advantages compared to the current U.S. industry standard of lithium-ion focused batteries.
Ford earlier this year announced a new collaboration with China's Contemporary Amperex Technology Co., or CATL, for a new $3.5 billion plant to build cheaper batteries in Michigan.
The facility will produce new lithium iron phosphate batteries, or LFP, as opposed to pricier nickel cobalt manganese batteries with lithium, which the company is currently using. It is expected to open in 2026 and employ about 2,500 people, according to the Detroit automaker.
Farley touted BYD's role in building out that technology.
"BYD's scale is way bigger than Tesla now, and they developed the LFP technology, which is a better battery," Farley said.
The Ford-CATL deal has been criticized amid tensions between the U.S. and China. Specifically, U.S. Sen. Marco Rubio asked the Biden administration to review the deal, which includes Ford licensing CATL's technologies. The Detroit automaker will own the new facility through a wholly owned subsidiary instead of operating it as a joint venture with CATL.
Farley said if politics get in the way of allowing cheaper EV technologies in the U.S., the consumer is going to be "screwed" with higher prices.
"We have to work through that in our country. And I think they're really interesting companies," Farley said. | Emerging Technologies |
Main
The rapid design and development of two COVID-19 mRNA vaccines marked the advent of a new biotechnology platform for immunization against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and, potentially, a wide spectrum of microbial pathogens and cancers1,2,3,4. The remarkably short timeframe from target identification to phase 1 clinical studies—and the convincing safety profile of mRNA vaccines after billions of administered doses—underscore the potential of a new generation of mRNA therapeutics that lies beyond vaccines and other agents that rely on the ability of mRNA and lipid nanoparticles (LNPs) to stimulate immune responses.
The pathway for the development of mRNA therapeutics presents additional challenges compared to those of mRNA vaccines (Fig. 1). Immunization requires only a minimal amount of protein production, as the immune system can markedly amplify the antigenic signal through cell-mediated and antibody-mediated immunity. In contrast, mRNA therapeutics require as much as a 1,000-fold-higher level of protein to reach a therapeutic threshold (Supplementary Table 1). In many cases, it will be necessary for mRNA therapeutics to engage a particular target pathway, cell, tissue or organ. This requirement places greater importance on the efficiency of uptake at the target cell, which drives the duration and level of expression. The tissue bioavailability, circulatory half-life and efficiency of the lipid-based carrier to deliver to the tissue of interest can be strictly rate limiting. Aside from the liver, which is readily targeted by intravenous (i.v.) delivery, efficient delivery to solid organs remains challenging. Another major hurdle is repeated dosing, which is often required in the treatment of chronic diseases. Even with optimized mRNA chemical modifications and advanced LNPs, chronic dosing eventually activates innate immunity, with concomitant attenuation of therapeutic protein expression5,6. Despite these remaining challenges, a host of emerging technologies is under development to systematically address them2,7,8,9,10,11.
Fig. 1: Comparative roadmap for the development of mRNA vaccines versus therapeutics. The clinical development pathway for mRNA vaccines and therapeutics differs in several important respects.
This review surveys the most promising of these new technologies. The first section discusses approaches for designing and purifying the mRNA cargo to enhance the duration and amplitude of protein production in vivo. These approaches include advances in the design of the primary chemical structure of the mRNA, novel forms of circular and self-amplifying mRNA and improved purification strategies. The second section explores improved mRNA packaging systems to enhance delivery of mRNA cargo, including ionizable LNPs, cells and cell-based extracellular vesicles. The third section discusses emerging approaches for targeting mRNA therapeutics to specific tissues, such as percutaneous catheters for delivery to the heart, pancreas and kidney, and the engineering of packaging systems with tissue-specific tropism. The fourth section considers strategies for allowing repeated dosing for the treatment of chronic conditions. The fifth section provides a comprehensive table and summary of current clinical trends in mRNA therapeutics.
Finally, the sixth section considers the scope of mRNA therapeutics and guiding principles for near-term and longer-term clinical development of this novel therapeutic modality.Enhancing protein yieldThe inherent immunogenicity of mRNA, although enhancing its efficacy as a vaccine, hinders its use as a therapeutic, which requires a much higher level of protein expression (Fig. 1). Mouse models in applications such as enzyme replacement, localized regenerative therapeutics and oncology typically require a 50–1,000-fold-higher mRNA dosage as compared to mRNA vaccines (Supplementary Table 1)7,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31. The need for high levels of protein expression has led to multiple strategies for optimizing the mRNA cargo to minimize innate immune responses, enhance mRNA stability and maximize translation (Fig. 2). However, for any given indication, the properties of the mRNA cargo must be considered in relation to the efficiency of the delivery system—for example, direct versus systemic injection—and the modality of action of the protein of interest.Fig. 2: Modifications of mRNA to increase protein expression efficiency.Schematic drawing of different modifications of mRNA that are currently used in clinical application or are being investigated to increase protein expression efficiency.Full size imagemRNA cargoEach component of an individual mRNA—the cap, 5′ and 3′ untranslated regions (UTRs), open reading frame (ORF) and polyadenylated (poly(A)) tail—can be optimized to enhance protein expression (Fig. 2). 5′ cap analogs and 3′ poly(A) have been designed to maximize mRNA stability and translational efficiency through exonuclease protection and enhanced catalysis to the ribosomal complex32,33,34,35,36. Optimization of the poly(A) tail length (100–300 nucleotides) has proven critical in balancing the synthetic capability of a given mRNA34,36. Similarly, improved 5′ cap analogs not only increase translational capacity but also enhance capping efficiency, from 70% to 95%, greatly improving the in vitro transcription process32,37. The composition of the 3′ and 5′ UTRs can also be customized for the target cell of interest, increasing the efficiency and tissue specificity of translation35,38,39.At present, most mRNA products contain a synthetic UTR sequence from α-globin or β-globin38,39,40, but UTR optimization can further improve protein expression by a few fold41,42. Careful screening and customization to the target of interest could conceivably offer a wide range of improvements in future UTR sequences, allowing each mRNA to be tailored to the targeted cell and disease-induced microenvironment to maximize protein synthesis per mRNA transcript41,42,43,44.Perhaps the most critical advances in mRNA vaccines and therapeutics lie in the discovery that the inclusion of chemically modified nucleosides, particularly in uridine moieties, can markedly increase protein expression after in vitro or in vivo transfection. The chemical modifications of most new RNA formulations to date have been central in intellectual property claims45,46. Thus far, over 130 different naturally occurring chemical modifications of RNA have been reported47,48. The interest in methylpseudouridine and other modified nucleosides centers on their capacity to greatly reduce (up to 100-fold) detection by the Toll-like receptors of the innate immune system, resulting in an increase in protein expression in vivo compared to unmodified mRNA40,49,50,51,52,53. 
Combinations of different types of chemical modifications, carriers, methods of in vivo delivery and mRNA purity reveal a surprisingly diverse set of effects, suggesting that there may be additional room for optimization37,40,49,50,54,55. Furthermore, the properties and effects of many other RNA chemical modifications remain to be explored48. In addition to chemical modifications, further improvements may be possible by shifting from total to partial nucleoside substitution, as naturally synthesized mammalian mRNAs are typically only partially and heterogeneously chemically modified47,52,54.An mRNA’s uridine content alone has a large influence on the activation of innate immunity. Using this insight, clinically effective unmodified mRNA vaccines have been generated that display immune-cloaking effects and enhanced translational efficiency in vivo similar to those of chemically modified mRNA vaccines37,54. Molecular understanding of optimal codon compositions is likely to be captured in codon-optimization algorithms that facilitate the generation of clinically effective, unmodified therapeutic mRNAs in the near future56,57,58.In addition to the amplitude of protein expression, a key limitation of mRNA therapeutics for chronic diseases is the relatively short duration of protein production, which necessitates repeated administration. In parallel to the above-mentioned mRNA structural optimizations, which tackle immune stimulation and protein expression level, several approaches in development aim to enhance the duration of protein expression (Fig. 2). Self-amplifying mRNAs (saRNAs) use the self-replication basis of an RNA alphavirus, which can amplify RNA transcripts in the cytoplasm, but replace the viral structural coded genes with the gene of interest59,60,61,62. Because saRNA transcript replication extends expression kinetics, it would be favorable for enzyme replacement therapy by decreasing the frequency of delivery60. This approach also increases protein expression, requiring ~10-fold less RNA for a similar amplitude of protein expression compared to linear modified mRNA, and is now being tested as an in vivo, scalable process for vaccine production60,61,62. Additionally, saRNAs can be delivered as two separate transcripts (trans-amplifying mRNA), which helps reduce the mRNAsʼ overall size (Fig. 2)63,64.Another alternative to linear mRNA is circular mRNA (circRNA) (Fig. 2). The back-folding of the RNA’s loose ends during processing shields circRNA from exonuclease activity, which extends the RNA lifespan by two-fold in transfected HEK293 cells65,66,67. This extended longevity increases the total protein yield without increasing the amplitude of protein expression compared to linear modified mRNA65,66,67. Importantly, circRNA circumvents the need for costly 5′ capping and cumbersome 3′ poly(A) tail by introducing internal ribosomal entry site (IRES) sequences66,67. Moreover, circularization strongly reduces RIG-1 and Toll-like receptor recognition without using chemical substitution65,66. Inversely, total replacement of uridine by methylpseudouridine completely abrogated the translation of circRNA66.RNA purificationStandardization of mRNA quality is crucial when comparing publications and preclinical data in vitro and in vivo. Many studies show conflicting results regarding protein expression capacity or the level of immune stimulation with a variety of modified, unmodified, purified and unpurified mRNAs37,40,49,50,54,55. 
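A back-of-the-envelope calculation helps make the circRNA claim above concrete: if one assumes simple first-order decay of the transcript and a translation rate proportional to the remaining mRNA (an idealization, not a model taken from the review), then extending the transcript's half-life raises cumulative protein output in direct proportion while leaving the peak synthesis rate unchanged.

```latex
% Idealized model: first-order mRNA decay, translation rate proportional to
% the remaining transcript (all constants illustrative).
\begin{align*}
m(t) &= m_0\, e^{-kt}, \qquad k = \frac{\ln 2}{t_{1/2}} \\
P_{\text{total}} &\propto \int_0^{\infty} m(t)\, \mathrm{d}t
  = \frac{m_0}{k} = \frac{m_0\, t_{1/2}}{\ln 2}
\end{align*}
```

Under these assumptions, the reported two-fold extension of circRNA lifespan roughly doubles the cumulative yield $P_{\text{total}}$, while the peak synthesis rate, set by $m_0$, is unchanged, which matches the "total yield up, amplitude unchanged" pattern described above.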
Most aspects of mRNA in vitro synthesis can give rise to varying proportions of unwanted side products, such as double-stranded RNA (dsRNA), uncapped mRNA or mRNA fragments. These side products can strongly interfere with mRNA translation, activate innate immunity or lead to overestimation of the total functional mRNA cargo53,55,68. High-performance liquid chromatography (HPLC) is often used for size purification of mRNA products. Other systems, such as cellulose purification, anion exchangers or hydrogen bonding, have been developed with a similar purpose53,68,69. Usage of chemically modified nucleosides or optimization of the nucleoside composition mix has been shown to reduce >3-fold dsRNA byproducts53,55. Notably, purification of either unmodified or modified mRNA increased expression of human EPO ≥400% and 30%, respectively, in mice, emphasizing the importance of purification53,55.Packaging systemsThe inherent lability of mRNA requires a packaging/delivery system to protect it against degradation by nucleases and to allow efficient cellular uptake, intracellular release and translation into protein (Fig. 3a). Most of the mRNA therapeutics under development rely on LNPs, which were initially reported over six decades ago70. LNPs have since undergone numerous alterations and advancements, culminating in their first clinical use for the delivery of small interfering RNA (siRNA)71,72. Meanwhile, packaging systems based on cells, extracellular vesicles and biomimetic vesicles are being developed and validated in preclinical studies as alternative approaches.Fig. 3: Modular delivery systems for mRNA.a, Schematic drawing of the intracellular delivery of mRNA and translation into protein. b, Timeline development and milestone improvement of LNPs. c, Pros and cons of advanced carriers. ER, endoplasmic reticulum; HLA, human leukocyte antigen.Full size imageLipid-based packagingCurrent versions of RNA-loaded LNPs are advanced derivatives of the phospholipid-based liposomes first generated in the 1960s70. Today, LNPs are composed of four key components: structural lipids, cholesterol, ionizable cationic lipids and stealth lipids (Fig. 3b). Structural lipids are the fundamental scaffold of LNPs and are mainly neutrally charged phospholipids. The addition of cholesterol at various ratios stabilizes the LNP structure and enables modulation of its properties, such as membrane fluidity, elasticity and permeability73,74,75. Positively charged cationic lipids are needed for the loading of negatively charged nucleic acids into LNPs76,77. However, they have considerable drawbacks as well. Cationic lipids induce cytotoxicity, opsonization with plasma proteins and low transfection efficiencies due to rapid splenic and hepatic clearance78,79,80,81. Therefore, intensive efforts were made to modulate their physiochemical properties, which resulted in the discovery of pH-sensitive ionizable cationic lipids that can substantially reduce LNP immunogenicity. Ionizable cationic lipids are neutral in charge in the circulation, which cloaks them from cellular or molecular recognition. After cellular uptake by the endosomal pathway, they become ionized and fuse with the endosomal membrane, releasing the mRNA cargo into the cytoplasm for subsequent translation77,82,83 (Fig. 3a). Various ionizable lipids have been developed, such as the DLin-MC3-DMA (MC3) lipid reported in 2012 (refs. 84,85,86,87). 
MC3-composed LNPs showed a functional ED50 ~20-fold lower in mice and non-human primates than the previous gold-standard ionizable lipid KC2. These improvements in efficacy contributed to the first clinical delivery and regulatory approval of the siRNA Onpattro in 2018 (refs. 71,72). Current authorized COVID-19 mRNA vaccines use MC3 analogs (Moderna’s SM-102 and Pfizer’s ALC-0315) with improvements in the lipid’s toxicity and biodegradability profile87,88.Stealth lipids, mainly polyethylene glycol (PEG) polymer–conjugated lipids, have been added to the composition of LNPs to reduce immunogenicity. PEGs are broadly used to augment the colloidal stability of nanoparticles in fluids89,90,91,92 and are physiologically inert because they cloak potential epitopes. These capacities to reduce aggregation and opsonization also improve the immunogenicity and in vivo retention of PEGylated LNPs, enhancing safety and efficacy89,93,94. Interestingly, incorporation of PEGylated lipids facilitates the manufacturing of small, homogeneous LNPs, typically 50–100 nm in diameter, which makes them less likely to activate the immune system86,93,94. On the other hand, growing concerns regarding PEG hypersensitivity may limit the utility of PEGylated lipids for therapies requiring chronic administration89,95,96. Current research focuses on further optimization of PEGylated lipids or the development of different stealth lipids, such as polysarcosine-conjugated lipids97.Cell-based packagingAn alternative to LNPs is the use of biological delivery vehicles such as cells. Rather than delivering an mRNA cargo into targeted cells, this approach harnesses the cellular paracrine function to directly deliver proteins synthesized from mRNA introduced into cells ex vivo. It offers many advantages compared to synthetic LNPs, including biocompatibility, extended longevity in circulation and endogenous intracellular/intercellular signaling98,99 (Fig. 3c). Cells have been used as drug carriers to deliver enzymes, therapeutic drugs or lipid particles to targeted sites. A wide range of customization is possible by introducing mRNA into a variety of available cell types (for example, immune cells, blood cells and mesenchymal cells) and through further genetic engineering98,99,100,101,102. This approach could also be combined with existing cellular therapies to accentuate desired therapeutic or kinetic effects. However, cell-based delivery of mRNA therapeutics may be limited by the same caveats that apply to cell therapies in general, such as donor haplotype compatibility, homogeneous production, cumbersome quality control and a restricted intracellular delivery capacity103 (Fig. 3c).In 2019, our group reported the successful intramuscular delivery of VEGF protein by injecting skin fibroblasts pre-loaded with modified mRNA encoding VEGF into mice104. The treatment significantly decreased tissue necrosis by increasing vascular density in the murine ischemic limb. In a follow-up study, we further examined the therapeutic potential of this approach by delivering rat bone marrow–derived mesenchymal stem cells (MSCs) pre-loaded with modified mRNAs encoding VEGF and BMP-2 in a mouse model of skull defect102. Treated animals showed improved osteogenesis and vasculogenesis, resulting in skull healing102. 
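To make the four-component LNP recipe discussed above more tangible, here is a small Python sketch that splits a batch of lipid into ionizable lipid, structural phospholipid, cholesterol and PEG (stealth) lipid fractions. The ~50:10:38.5:1.5 molar ratio used as the default is only a commonly cited ballpark for ionizable-lipid LNPs; real formulations are tuned per cargo and are typically proprietary.

```python
# Illustrative sketch of the four-component LNP recipe discussed above.
# The molar ratios below are ballpark figures often quoted for ionizable-lipid
# LNPs and are used here purely as placeholder defaults.

DEFAULT_MOLAR_RATIOS = {
    "ionizable_lipid": 50.0,        # pH-sensitive; neutral in circulation, charged in the endosome
    "structural_phospholipid": 10.0,
    "cholesterol": 38.5,            # stabilizes the particle, tunes membrane fluidity
    "peg_lipid": 1.5,               # "stealth" component; limits aggregation and opsonization
}

def mole_fractions(ratios: dict[str, float]) -> dict[str, float]:
    total = sum(ratios.values())
    return {name: value / total for name, value in ratios.items()}

def micromoles_per_batch(total_lipid_umol: float,
                         ratios: dict[str, float] = DEFAULT_MOLAR_RATIOS) -> dict[str, float]:
    """Split a total lipid amount (in micromoles) across the four components."""
    return {name: frac * total_lipid_umol
            for name, frac in mole_fractions(ratios).items()}

if __name__ == "__main__":
    for component, amount in micromoles_per_batch(10.0).items():
        print(f"{component}: {amount:.2f} umol")
```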
Others have shown similar successful delivery of mRNAs or protein cargos using MSCs, neutrophils, monocytes and erythrocytes98,99,100,101,103.Extracellular vesicle–based packagingAnother novel approach uses extracellular vesicles (EVs) as a delivery vehicle. EVs encompass a heterogeneous group of extracellular bilayer membrane vesicles produced by most, if not all, cell types105. The characteristics and biogenesis of EVs have been extensively discussed elsewhere106,107,108. In mammalians, there are three major types of EVs based on their size and intracellular origins106,107. Exosomes (50–150 nm), which are the most studied and characterized EVs, shuttle and deliver their cargo between cells through endocytosis and exocytosis. After uptake by recipient cells, exosomes are processed by the endosome similar to LNPs. However, exosomes are further processed intracellularly for ‘sorting’ and ‘exchanging’ before being degraded by the lysosome or shuttled toward other cells. Microvesicles (50–500 nm) are produced at a low rate through plasma membrane budding and apoptotic bodies (>1 µm), which are a specific feature of apoptotic cells.Studies suggest that EVs play a crucial role in homotypic and heterotypic intercellular communications throughout the body107. They can carry and deliver a variety of cargos, ranging from metabolites, short nucleic acids and amino acids to full-length mRNAs and proteins106,109,110,111. As natural vesicles, EVs possess multiple advantages in terms of drug delivery, such as biocompatibility and hypoimmunogenicity106 (Fig. 3c). At present, the function and applications of EVs and, in particular, exosomes are being intensively explored in diagnosis, prognosis and therapeutics for oncology and cardiovascular diseases, with a strong focus on the EVs’ immunomodulatory and cargo delivery properties106,112,113.Similarly to cell-based delivery systems, the cellular source from which EVs are derived is crucial for their potential application. It is thought that EVs can ‘inherit’ properties from their parental cells. For example, EVs derived from blood cells conserved their capacity to penetrate the blood–brain barrier (BBB) after systemic administration109,114,115. Likewise, EVs derived from MSCs possessed similar anti-inflammatory and paracrine properties116,117. Furthermore, preclinical studies suggest that EVs derived from a variety of cell types do not induce toxicity and are well-tolerated after repeated dosing118. Therefore, EVs might not only act as an inert vehicle but can also potentially be engineered for specific delivery and repeated dosing119,120. Fundamental challenges revolve around the characterization, isolation and purification of homogeneous EVs, as typical biomarkers still need to be identified and standardized. Different isolation methods have been comprehensively examined113,121,122,123. An additional challenge is efficient loading of EVs. Strategies typically involve post-loading of the cargo directly with isolated EVs via conventional methods (for example, electroporation, sonication, extrusion or freeze–thaw cycles) or pre-loading the desired cargo into the parental cells before EV isolation124.Recently, several approaches have been reported to enhance pre-loading specificity, including forward screening and targeted engineering. Candidate protein moieties were identified for efficient drug delivery in vivo through a systemic screening of moieties that participate in cargo loading of EVs125. 
By using arrestin domain-containing protein 1 (ARRDC1) to recognize an mRNA or protein of interest (for example, p53 or CRISPR–Cas9 complex), one can induce specific and efficient loading of AARDC1-mediated microvesicles (ARMMs) and successfully deliver their cargo in vitro and in vivo111,126.Another pre-loading approach consists of generating atypical EVs, such as virus-like particles, in mammalian cells using endogenous homologs of viral capsid genes, which enables preferential loading of the virus-like particles with mRNA containing specific motifs. Based on this technique, selective delivery systems (SEND and eVLPs) were developed to package and deliver specific RNAs in vitro and deliver CRISPR–Cas9 in vitro and in vivo127,128.Biomimetic packagingRecent advances in drug delivery have highlighted the advantages of biomimetic packaging, which combines aspects of biological and synthetic particles (Fig. 3c). One such combination uses a synthetic core, with defined binding properties to encapsulate the cargo (for example, gold, silica, LNPs or polymers), that is then coated with a cellular membrane129. The coating alleviates the immunogenicity of the synthetic materials (for example, anti-PEG antibody), enables tissue targeting based on the cell source and extends particle stability in the circulation. Such coatings have been implemented with membranes from various cell types, including erythrocytes130, platelets131, immune cells132,133, stem cells134, tumor cells135,136 and MSCs137. Additionally, membranes from multiple cell types can be hybridized to achieve the desired properties138,139,140,141.A biomimetic approach that is complementary to coating consists of fusing biological (for example, EVs) and synthetic (for example, LNPs) components to form hybrid particles142. Such hybrid particles possess the controlled manufacturing and stable storage capacities of LNPs while retaining the biocompatibility and targeting specificity of EVs142,143. Although the current data are promising, this approach is at an early stage, and detailed mechanistic insights remain to be defined. An exosome/polymer hybrid was reported to be four-fold more stable in circulation and to possess enhanced storage stability and pharmacokinetic properties144. In another study, administration of hybrid particles composed of exosomes genetically engineered to favor immune cell recruitment and liposomes packaged with a chemotherapeutic drug efficiently inhibited tumor development in a murine carcinoma model145. We can expect rapid development of these novel delivery mechanisms thanks to the recent milestone success of mRNA vaccines, which highlighted both the great potential and current limitations of LNPs.Tissue targetingRealizing the full potential of mRNA therapeutics will require more advanced in vivo delivery systems, particularly for solid organs such as the heart, kidney, brain and lungs. The liver is the organ of choice when it comes to ease of delivery for most molecular therapies. Its fenestrated vasculature facilitates efficient homogeneous delivery and the passage of large particles. Thus, simple i.v. administration enables efficient hepatic expression of mRNA cargos with subsequent therapeutic levels of protein (Supplementary Table 1). However, targeting of most organs other than liver requires improved delivery systems, whether directly via catheters146 or by engineering of packaging systems with appropriate tropism. Every organ has its own advantages and obstacles for efficient delivery. 
Therefore, specific approaches are being developed for each organ that we discuss here.Injection, inhalation and intranasal administrationThe kidney, unlike the liver, filters out large compounds and allows only small molecules to pass through. The glomerulus actively eliminates proteins above 50 kDa, and constitutive podocytes create slit diaphragms with diameters of merely 10 nm, impeding most molecular therapies delivered from the circulation to the kidney147. Direct subcapsular injection into the kidney’s medulla or cortex can be achieved by varying the insertion depth of a needle or catheter. Efficient local delivery to the different compartments of the kidney is possible using several routes of administration148: (1) renal artery, targeting the glomeruli and tubular epithelium; (2) retrograde renal vein, predominantly targeting the renal tubules through the basolateral domain. Similarly to what occurs in the renal artery, increased localized pressure in the renal capillaries creates transient pores on cell membranes, resulting in nucleic acid extravasation149; (3) retrograde ureteral, targeting the tubular epithelium; and (4) intraparenchymal, with a few reports demonstrating the suitability of this route for treatment of renal diseases by gene therapy and oligonucleosides150,151. Because specific pathologies are associated with different renal compartments and cell types, drug delivery should be targeted to the cell type associated with a specific pathology.No mRNA therapies for the kidney have yet reached the clinic, but, of the few clinical studies involving miRNA, two are for renal disease. One of the drugs targeting miR21 (RG012, lademirsen) was developed for Alport nephropathy—a genetic disorder characterized by chronic glomerulonephritis that progresses to end-stage renal disease in young adult life—and is currently undergoing a phase 2 clinical trial (HERA, NCT02855268). The other miRNA-based therapy, an antagomir-inhibiting miR17 (RGLS4326), was developed for the treatment of autosomal dominant polycystic kidney disease and is undergoing a clinical phase 1 trial (NCT04536688).The lungs can be reached immediately via inhalation, permitting the use of lower drug dosage and, thus, reducing adverse systemic side effects. Attractive systems for pulmonary delivery enable direct, rapid and non-invasive access to the alveoli and lung parenchyma. Moreover, the airside of the lung provides a favorable environment for RNA integrity as its nuclease activity is lower than in serum152. Inhalation delivery also entails specific challenges. The mRNA must be highly concentrated to withstand the shear forces during aerosolization153. The large surface area (~100 m2) and the presence of a protective mucosa on the surface of the lung epithelium are natural barriers for efficient mRNA delivery. Therefore, a successful pulmonary RNA therapeutic must preserve the integrity of the mRNA, penetrate through the mucosa, infiltrate the cells and release its mRNA cargo. Early clinical data from a phase 1/2 trial for cystic fibrosis (RESTORE-CF, NCT03375047) testing an mRNA encoding cystic fibrosis transmembrane conductance regulator (CFTR) delivered by inhalation (nebulization) have shown a promising safety profile for chronic mRNA delivery but failed to show any significant improvement in lung function154,155,156.The brain is both the most genetically complex organ in the body and the most difficult to treat. 
It is encased protectively by the skull and meninges and isolated biochemically by an extraordinary microvasculature (the BBB), composed of endothelial cells coupled by tight junctions and adherent processes157. The restrictive nature of the BBB presents an obstacle for drug delivery to the central nervous system (CNS). Major efforts have been made to alter or bypass the BBB for the delivery of therapeutics through direct injection into the parenchyma of the brain (intraparenchymal) or the cerebral spinal fluid (CSF). The therapeutic dosage is highly dependent on the route of administration, and it is difficult to homogeneously target the brain through a CSF injection due to the organ’s size and distances between the ventricles and the cortex158. In contrast, the therapeutic can be delivered locally to neurons through direct intraparenchymal administration, limiting the delivery to regions neighboring the injection site. However, this represents a risky and invasive procedure requiring high technical skill, restricting widespread applicability in patients.Alternatively, the neural pathways connecting the nasal mucosa and the brain provide potential routes for non-invasive drug delivery to the CNS159,160. The nose-to-brain pathway enables quick delivery of therapeutic agents to the CNS within minutes. Drugs with low molecular weight (<1 kDa) and high lipophilicity favor rapid intranasal uptake into the CNS but are limited by the concentrations that can be delivered to different regions of the brain and spinal cord161,162. The bioavailability of intranasal macromolecules can be significantly improved by formulations that include permeation enhancers. In the absence of permeation enhancers, nasal absorption declines sharply for molecular weights over 1 kDa163,164. Preclinical studies in rats have shown direct transport of VEGF (molecular weight: 38.2 kDa) to CNS via intranasal administration in 30 minutes165. Another preclinical study in mice has shown that intranasal administration of cationic liposome–encapsulated mRNA is effective for delivering therapeutics to specific brain regions (that is, cortex, striatum and midbrain)166.Drug delivery to the brain is particularly challenging because reaching the target site is no guarantee of success, even for lipid-soluble drugs, which can be rapidly expelled from brain endothelial cells by P-glycoprotein efflux transporters167,168,169. Mechanisms for enhanced tropism are needed to reach a specific cell type169. Clinically, antisense oligonucleosides (ASOs), another type of RNA therapy, have been successfully administered through direct CSF delivery to treat spinal muscular atrophy (Spinraza), which is the first FDA-approved drug for this disorder170. By contrast, the highly anticipated ASO treatment for amyotrophic lateral sclerosis (ALS) (tofersen) recently missed its primary endpoints in a phase 3 trial (VALOR, NCT02623699).Catheter deliverySince the first successful heart catheterization in 1929 by Werner Fossmann, cardiac catheter–based therapies have become an integral part of modern cardiology, enabling efficient treatment of coronary artery disease, valvular disease and structural malformations. In addition, cardiac catheter–based delivery methods have been extensively explored in the context of gene-based and cell-based therapies171,172. 
As the heart is the central organ of the cardiovascular system, there are multiple ways to approach it intravascularly: transendocardial injections through a catheter placed in the ventricle, transepicardial injections through a catheter placed in the coronary veins, intracoronary artery infusions and retrograde coronary venous infusions (with or without blockage of the antegrade flow).The last decade has seen great improvements in these technologies. Trans-vessel-wall microcatheters can inject cells and other therapeutic agents directly into the tissue, increasing efficacy and decreasing the risk of adverse events173. Preclinical studies with this endovascular device show that it can directly access target tissues, such as the heart, kidney and pancreas, without the need to seal the puncture site146. Re-circulation devices, which allow the agent to pass the area of interest multiple times, can enhance transduction efficiency in large animal models174. Selective pressure-regulated retroinfusion (SSR) with blockage of the antegrade flow is another promising approach for safe, efficient delivery of various agents, including cDNA, miRNA inhibitors and gene therapeutic agents175.Further improvements in catheter design aim to achieve minimal invasiveness through reducing the tubing diameter, optimizing infusion parameters to maximize distribution volume or by adding a reflux-inhibiting feature to halt backflow along the catheter’s entry track176,177,178. Several future developments are to be expected with the integration of micro-electronic components that add features such as navigation and positioning through magnetic sensors or micro-cameras179. As these devices are adapted to the needs of mRNA-based therapeutics, they could also be combined with packaging systems that target specific cell types (see below).En | Emerging Technologies |
Modi's U.S. State Visit: Major Announcements, Decisions Between The Two Countries
The two leaders spoke about strengthening chip supply, space exploration, the minerals and renewables space, and technology.
Prime Minister Narendra Modi kicked off his first official state visit to the U.S. on June 21.
As part of his three-day visit, he met with U.S. President Joe Biden in Washington D.C., and had bilateral discussions on various issues between June 22 and 23.
The two leaders spoke about strengthening chip supply, space exploration, minerals and renewables, technology, and defence.
Here are the major announcements made:
Technology Partnerships
Strengthening Semiconductor Supply Chains
Micron Technology Inc. will invest more than $800 million (Rs 6,563 crore) for a new $2.75 billion (Rs 22,560 crore) semiconductor assembly and test facility in India.
Applied Materials will build a semiconductor centre in India to strengthen chip supply chain diversification.
Critical Minerals Partnership
India has joined the U.S.-led Mineral Security Partnership, which aims to secure diverse and sustainable critical energy mineral supply chains.
India’s Epsilon Carbon Ltd. will invest $650 million (Rs 5,332.44 crore) in a greenfield electric vehicle battery component factory.
Space Exploration
India has signed the Artemis Accords, joining 26 other countries in exploration of the Moon, Mars, and beyond.
NASA will provide advanced training to Indian Space Research Organisation astronauts with the goal of launching a joint effort to the International Space Station in 2024.
Quantum, Advanced computing, And Artificial Intelligence
The countries have established a Joint Indo-U.S. Quantum Coordination Mechanism to facilitate research between the public and private sectors.
Startup Collaboration
The U.S.-India Commercial Dialogue will roll out a new "Innovation Handshake" that will address regulatory hurdles to cooperation, promote job growth in emerging technologies, and highlight opportunities for hi-tech upskilling.
Fibre Optics Investments
India’s Sterlite Technologies Ltd. has invested $100 million (Rs 820.38 crore) to build an optical fibre cable manufacturing unit near Columbia, South Carolina, which will facilitate $150 million (Rs 1,230.56 crore) in annual exports of optical fibre from India.
Next-Generation Defense Partnerships
GE F414 Engine Co-Production: General Electric and Hindustan Aeronautics Ltd. have proposed to jointly produce the F-414 Jet Engine in India.
General Atomics MQ-9Bs: India intends to procure armed MQ-9B SeaGuardian UAVs, which will increase India’s intelligence, surveillance, and reconnaissance capabilities.
New Sustainment And Ship Repair:
The U.S. Navy has concluded a Master Ship Repair Agreement with Larsen and Toubro Shipyard in Kattupalli, Chennai.
It is finalising deals with Mazagon Dock Shipbuilders Ltd. and Goa Shipyard Ltd. for ship service and repair at Indian shipyards.
Both countries are also engaged in advanced steps to operationalise tools in order to increase defense cooperation and are resolved to strengthen undersea domain awareness.
They have also started negotiations for a Security of Supply Arrangement and a Reciprocal Defense Procurement Arrangement that will allow the supply of defence goods during supply chain disruptions.
Sustainable Development And Global Health Partnerships
India’s VSK Energy LLC will invest up to $1.5 billion to develop a solar panel manufacturing unit in the U.S., including a 2.0 GW module-and-cell manufacturing plant.
India’s JSW Steel USA will invest $120 million at its steel plant in Ohio.
The U.S. and India will create a payment security mechanism that will ease the deployment of 10,000 made-in-India electric buses in India.
The Global Biofuels Alliance between India and the U.S. will facilitate cooperation in accelerating the use of biofuels.
The two countries are developing a broader and deeper bilateral counternarcotics framework to disrupt the illegal production and global trafficking of illicit drugs.
Other Key Highlights
The bilateral talks between India and the U.S. also included the following:
Boeing announced a $100 million investment in infrastructure and programmes to train pilots in India, supporting the country’s need for 31,000 new pilots over the next 20 years.
The U.S. will launch a programme to adjudicate domestic renewals of some petition-based temporary work visas, including for Indian nationals.
India and the U.S. have launched a new Joint Task Force for expanding research and university partnerships.
The two nations are negotiating a Cultural Property Agreement, which would help prevent the illegal trafficking of cultural property from India.
The U.S. and India have taken steps towards deepening bilateral cooperation to strengthen economic relations and trade ties.
The U.S. will join the Indo-Pacific Oceans Initiative to promote a safe, secure, and stable maritime domain, as well as its conservation and sustainable use. | Emerging Technologies |
Why Mastering This Strategy Will Build a Cohesive Brand Message
Entrepreneurs can use this powerful strategy to build a strong brand identity, connect with their target audience and ultimately drive sales and revenue.
Opinions expressed by Entrepreneur contributors are their own.
Marketing is a very important part of running a business. To succeed in the competitive world, you need a strong marketing strategy that connects you with your target audience and helps you achieve your business goals.
But it's important to note that marketing isn't just about promoting your products or services. It's about creating a clear and consistent message that speaks to your audience, no matter where they are. That's where integrated marketing communications (IMC) comes in.
What is integrated marketing communications?
Integrated Marketing Communications (IMC) is a powerful strategy that allows entrepreneurs to bring together all of their marketing efforts into a unified and consistent message. So, instead of having disjointed and confusing messages, IMC combines advertising, public relations, sales promotions and digital marketing to create a seamless and cohesive brand message.
This strategy helps businesses connect with their target audience and build brand awareness, ultimately driving more sales and revenue. Therefore, by implementing an integrated approach to marketing, entrepreneurs can create a memorable brand experience for their customers and stand out from the competition.
Benefits of integrated marketing communications
There are several benefits to using an integrated marketing communications approach for your business. Integrated marketing communications help create a consistent brand message that resonates with your target audience. And by delivering a consistent message across all channels, you can increase brand recognition, build trust with your audience and improve customer loyalty.
It also helps you save time and money by combining all marketing efforts into a cohesive strategy. You can streamline your marketing efforts and avoid duplicating efforts. This can also help you save money by eliminating the need for multiple marketing agencies or vendors.
How to implement a successful IMC strategy?
Elements of an effective IMC strategy include messaging, brand identity, audience segmentation, media channels and data analytics. However, implementing a successful IMC strategy involves several steps.
Here are the steps to follow:
- Define your target audience — Identify and understand your target audience to create a successful IMC strategy. You need to fully understand who your audience is, what they want and how they want to be communicated with.
- Develop your brand identity — Develop a strong brand identity that represents your business and resonates with your target audience. This involves creating a brand style guide that outlines your brand's tone, voice and visual identity.
- Create your messaging — Create messaging that resonates with your audience and is consistent across all channels. This includes messaging for advertising, public relations, personal selling, sales promotion, direct marketing and digital marketing.
- Choose your media channels — Select the appropriate media channels for your marketing efforts to reach your target audience. Choose the channels your audience uses most that align with your brand messaging.
- Measure success — Measure the success of your marketing efforts using data analytics. This will help you track your results and identify areas for improvement to make informed decisions for future campaigns.
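As a concrete illustration of the final step, the short Python sketch below computes per-channel conversion and engagement rates from hypothetical campaign numbers; the figures and channel names are made up, and any analytics tool that exposes these counts could feed the same calculation.

```python
# Minimal sketch, assuming hypothetical per-channel campaign numbers, of the
# "measure success" step: the metric names mirror those mentioned in the text
# (traffic, conversion rate, engagement); nothing here is tied to a real tool.

channels = {
    "email":       {"visits": 4200, "conversions": 168, "engagements": 890},
    "social":      {"visits": 9800, "conversions": 196, "engagements": 2450},
    "paid_search": {"visits": 6100, "conversions": 244, "engagements": 610},
}

def conversion_rate(stats: dict) -> float:
    return stats["conversions"] / stats["visits"]

def engagement_rate(stats: dict) -> float:
    return stats["engagements"] / stats["visits"]

# Rank channels so the next campaign can shift budget toward what converts best.
for name, stats in sorted(channels.items(),
                          key=lambda kv: conversion_rate(kv[1]), reverse=True):
    print(f"{name}: conversion {conversion_rate(stats):.1%}, "
          f"engagement {engagement_rate(stats):.1%}")
```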
Tips for creating a successful integrated marketing communications strategy
Creating an effective integrated marketing communications strategy can be overwhelming, but there are several tips that can help you succeed.
Here are some actionable tips that can help you create a powerful IMC strategy:
- Create a compelling message: To create a powerful IMC strategy, you need a compelling message that resonates with your audience across all channels. Your message should be clear, concise and aligned with your brand identity.
- Understand your audience: Knowing your target audience is critical to creating messaging that connects with them. Conduct market research and audience segmentation to identify your target audience, their needs and preferences.
- Use data to measure success: Use data analytics to measure the success of your marketing efforts. This will help you make informed decisions for future campaigns and track metrics such as website traffic, conversion rates and social media engagement.
- Stay ahead of trends: Keep up with the latest trends and technologies in marketing to stay ahead of the competition. This includes staying updated on social media trends, email marketing best practices and emerging technologies like AI and machine learning.
- Collaborate with your team: Collaboration is essential to creating an effective IMC strategy. Work closely with your team and vendors to ensure a cohesive message and consistent branding across all channels.
By following these tips, you can create an integrated marketing communications strategy that resonates with your audience, drives results and helps your business succeed.
Conclusion
In conclusion, integrated marketing communications is a powerful strategy that entrepreneurs can use to build a strong brand identity, connect with their target audience and ultimately drive sales and revenue.
By creating a consistent message across all channels and using data analytics to measure success, businesses can save time and money while creating a memorable brand experience for their customers.
By following the steps and tips outlined in this article, entrepreneurs can develop and implement an effective IMC strategy that helps their businesses stand out from the competition and achieve their marketing goals.
It's essential for entrepreneurs to understand the importance of IMC and invest in a robust marketing strategy to succeed in today's competitive marketplace. | Emerging Technologies |
By Luca Bertuzzi | EURACTIV.com | 30-01-2023 (updated: 30-01-2023)
Washington and Brussels are stepping up their formal cooperation on Artificial Intelligence (AI) research at a crucial time for EU regulatory efforts on the emerging technology.
The European Commission and the US administration signed an “administrative agreement on Artificial Intelligence for the Public Good” at a virtual ceremony on Friday evening (27 January).
The agreement was signed in the context of the EU-US Trade and Technology Council (TTC), launched in 2021 as a permanent platform for transatlantic cooperation across several priority areas, from supply chain security to emerging technologies.
The last high-level meeting of the TTC was held in the US in December, and Artificial Intelligence was presented as one of the most advanced areas in terms of cooperation. In particular, the two blocs endorsed a joint roadmap for reaching a common approach on critical aspects of this emerging technology, such as metrics to measure trustworthiness and risk management methods.
“Based on common values and interests, EU and US researchers will join forces to develop societal applications of AI and will work with other international partners for a truly global impact,” Internal Market Commissioner Thierry Breton said in a statement.
Research collaboration
Building on the AI roadmap, the US and EU executive branches are stepping up their collaboration to identify and develop AI research that has the potential to address global and societal challenges like climate change and natural disasters.
Five priority areas have been identified: extreme weather and climate forecasting, emergency response management, health and medicine improvements, electric grid optimisation, and agriculture optimisation. This type of collaboration was until now narrower and limited to more specific topics.
While the two partners will build joint models, they will not share the training data sets with each other. Large data sets often contain personal data that is difficult to untangle from the rest. There currently is no legal framework for sharing personal data across the Atlantic due to the disproportionate nature of the US surveillance regime certified under the Schrems II verdict of the EU Court of Justice.
“The U.S. data stays in the U.S. and European data stays there, but we can build a model that talks to the European and the U.S. data because the more data and the more diverse data, the better the model,” a senior US official told Reuters.
The Commission stressed that, as part of the agreement, the two partners would share the findings and resources with other international partners that share their values but lack the capacity to address these issues. As both Washington and Brussels note that the agreement builds upon the Declaration for the Future of the Internet, signatories to the Declaration are likely candidates that could benefit from the outcome of this research.
Risk Management Framework
While the EU-US collaboration on AI marked a, for now symbolic, step forward with the administrative agreement, Washington seems determined to put some of its standards on the map at a time when the EU is finalising the world’s first rulebook on Artificial Intelligence.
Last Thursday, the day before the announcement, the US Department of Commerce’s National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework, which sets out guidelines for AI developers on mapping, measuring, and managing risks.
This voluntary framework, developed in consultation with private companies and public administration bodies, well represents the American non-binding approach to new technologies. When they are regulated, that often occurs at the state level in relation to specific sectors such as healthcare.
By contrast, the EU is currently advancing the work on the AI Act, horizontal legislation to regulate all AI use cases based on their level of risk, notably including a list of areas at high risk like health, employment and law enforcement. The AI Act is expected to be highly influential and possibly set international standards on several regulatory aspects via the so-called Brussels effect. As most of the world’s leading companies in the field are American, it is not surprising that the US administration has been trying to shape it.
In October, EURACTIV revealed that Washington was pushing for the high-risk categorisation to be based on a more individualised risk assessment. Importantly, the US administration argued that compliance with NIST’s standards should be considered an alternative way to comply with the self-assessment mandated in the EU’s AI draft law.
The publication of this Framework comes at a critical time for the AI Act, as EU lawmakers are on their way to finalising their position before starting interinstitutional negotiations with the European Commission and the member states.
“The AI Risk Management Framework can help companies and other organizations in any sector and any size to jump-start or enhance their AI risk management approaches,” said NIST Director Laurie Locascio in a statement.
[Edited by Nathalie Weatherald] | Emerging Technologies |
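As a rough illustration of how a development team might operationalize the NIST framework's guidance on mapping, measuring and managing risks (together with the governance function that accompanies them), the sketch below keeps a lightweight risk-register entry in Python. The field names and thresholds are invented for illustration; the framework itself is voluntary guidance, not code.

```python
# Illustrative sketch of a lightweight AI risk register loosely aligned with
# the NIST AI RMF's functions (govern, map, measure, manage). Field names and
# the escalation rule are invented for this example.

from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    system: str
    context: str                                      # MAP: intended use and affected stakeholders
    metrics: dict = field(default_factory=dict)       # MEASURE: quantitative indicators
    mitigations: list = field(default_factory=list)   # MANAGE: planned responses
    owner: str = "unassigned"                         # GOVERN: accountability for the risk

    def needs_escalation(self, threshold: float = 0.8) -> bool:
        """Flag the entry if any tracked metric exceeds its tolerance threshold."""
        return any(value > threshold for value in self.metrics.values())

if __name__ == "__main__":
    entry = AIRiskEntry(
        system="resume-screening model",
        context="used to shortlist job applicants (a high-risk area under the draft AI Act)",
        metrics={"false_negative_rate_gap_across_groups": 0.12},
        mitigations=["quarterly bias audit", "human review of rejections"],
        owner="HR analytics lead",
    )
    print(entry.needs_escalation(threshold=0.1))  # True: gap exceeds the set tolerance
```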
“When we think of drug discovery, we normally do not consider technology misuse potential. We are not trained to consider it, and it is not even required for machine learning research, but we can now share our experience with other companies and individuals.” – paper’s authors
24 March 2022
Artificial intelligence could be repurposed to create new biochemical weapons
A new paper, co-authored by King’s academic Dr Filippa Lentzos, should act as a “wake-up call” to those using artificial intelligence (AI) technologies for drug discovery.
Drug discovery companies using artificial intelligence (AI) to search for new compounds need to be more sensitive to the risk that their technology could be repurposed to create biochemical weapons, a new paper warns.
The paper, published in Nature Machine Intelligence, describes how a thought experiment to deliberately optimize for harm turned into a computational proof. The paper’s authors, including Dr Filippa Lentzos of the Department of War Studies and Department of Global Health & Social Medicine at King’s, say their experience should serve as “a wake-up call” to those in the AI drug discovery community.
As part of a biennial arms control conference in Switzerland that looks at the implications of new technologies on chemical and biological weapons threats, the drug development company Collaborations Pharmaceuticals was invited to present on how AI in drug discovery could be misused. “The thought had never previously struck us,” the authors say. “We have spent decades using computers and AI to improve human health – not to degrade it.”
To prepare for their presentation, the company took a piece of drug-discovery software and reversed one of its functions. Instead of penalising toxicity, it rewarded it. Within hours the company's technology had ‘generated’ 40,000 molecules that were highly toxic, including a nerve agent so lethal that just a few salt-sized grains can kill a person. The process also ‘generated’ other new molecules that appeared even more toxic.
While no physical molecules were made as part of the exercise, the authors point out there are many companies offering chemical synthesis and “this area is poorly regulated, with few if any checks to prevent the synthesis of new, extremely toxic agents that could potentially be used as chemical weapons.”
The paper calls for a public discussion on repurposing potential among those in the AI drug discovery community, highlighting the substantial risk to the field if their technology were misused in this way and how “it only takes one bad apple, such as an adversarial state or other actor looking for a technological edge, to cause actual harm.”
“We hope that by raising awareness of this technology, we will have gone some way toward demonstrating that although AI can have important applications in healthcare and other industries, we should also remain diligent against the potential for dual use.” – paper’s authors
They outline several recommendations including:
- Discussion of the topic at major scientific conferences and for broader impact statements to become part of applications to funding bodies and review boards.
- Continued efforts to raise awareness of the potential of dual-use aspects of cutting-edge technologies to promote responsible conduct
- Using a public-facing Application Programme Interface (API) for models, with code and data available upon request, to enhance security and control over how published models are used
- A reporting structure or hotline to authorities to alert them to any lapse or someone working on developing toxic molecules for non-therapeutic uses.
- For universities to redouble their efforts toward the ethical training of science students and those from other disciplines, especially computing students, so that they are aware of the potential for misuse of AI from an early stage of their career, as well as understanding the potential for broader impact.
Read the full paper here.
Sean Ekins and Filippa Lentzos also talked about this at an event, How are emerging technologies (re)-shaping the security landscape?, held on 19 January 2022 as part of the 'War Studies at 60' series. You can watch it again here. | Emerging Technologies |
Future Technology trends in Artificial Intelligence & ML
Here we come with an exciting and brand new video in which we will talk about technology trends in artificial intelligence and machine learning. The core of artificial intelligence is the use of computers and other technologies to solve problems and make decisions in a fashion that closely resembles the skills of the human mind. A computer scientist at Stanford University named John McCarthy offers a more precise meaning of the phrase: making intelligent devices, particularly intelligent computer programs, is a scientific and engineering endeavor. Although it is related to the task of utilizing computers to comprehend human intellect, AI should not be limited to techniques that can be observed by biological means. AI and computer science have given rise to a popular technique known as machine learning, or ML. Machine learning, as opposed to AI in general, focuses more explicitly on using data and algorithms to simulate human learning and gradually increase accuracy.
1- NLP Models for Natural Language Processing
NLP is a processing method that makes use of AI and ML to help computers comprehend spoken and written language. Machine learning, deep learning, and statistical learning models are used in NLP alongside rule-based modeling of human language, so that computers can process human language in written or audio form and comprehend both the meaning of a phrase and the speaker’s or writer’s intent and sentiment. More specifically, named entity recognition is a method used by NLP models to detect named entities and transform unstructured data into a structured representation. Tokenization, stemming, and lemmatization, which look at the root forms of words to, for example, identify verb tenses, are other steps in this process that help discover word patterns. Scientists have created a variety of natural language processing applications by combining sub-techniques such as speech recognition, part-of-speech tagging, word sense disambiguation, named entity recognition (NER), coreference resolution, and sentiment analysis. One example is spam detection: filtering spam emails by looking for language that is frequently used in phishing and scam attempts, such as excessive use of financial jargon, needless urgency, poor grammar, etc.
2- Google Translate & Google Assistant
Machine translation necessitates a sophisticated comprehension of contextual meaning rather than just simple word replacement. Virtual agents and chatbots: virtual agents include the likes of Google Assistant, Apple’s Siri, Amazon Alexa, and Samsung’s Bixby, while chatbots are frequently used by businesses as a more affordable alternative to human customer service representatives. Using NLP as a technique for social media sentiment analysis, organizations can uncover hidden information by comprehending the emotions and attitudes expressed in social media posts. Text summarization is the process of breaking down vast amounts of text to provide synopses and summaries for indexes and research databases. mHealth apps are a subset of telehealth, or technologies and methodologies for remote care, including remote patient monitoring (RPM), which makes use of mobile technology to advance health goals.
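As a minimal illustration of the NLP steps named above (tokenization, lemmatization and named-entity recognition), the sketch below uses the open-source spaCy library; the library choice and the sample sentence are ours, not the article's, and the small English model is assumed to be installed.

```python
# Minimal NLP pipeline sketch: tokenization, lemmatization and named-entity
# recognition with spaCy (assumes: pip install spacy, then
# python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is opening a new research lab in Paris next March.")

# Tokenization + lemmatization: each token with its dictionary (root) form
for token in doc:
    print(token.text, "->", token.lemma_, f"({token.pos_})")

# Named-entity recognition: spans the model labels as organisations, places, dates
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. Apple ORG, Paris GPE, next March DATE
```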
3- Consumer Application
Consumer applications for mobile devices, which frequently do not include actual clinicians, are the main source of power for mHealth, a field of technology that has risen significantly due to the accessibility and convenience of mobile devices. These applications have become more and more popular. They propose the idea of mobile self-care, in which consumers collect their health data without the help, interpretation, or involvement of a physician. While mHealth apps initially started as straightforward tools for tracking and documenting patient status, recent advancements in technology have resulted in the addition of AI, greatly enhancing their functionality. Through AI algorithms, sensor technology, and advanced data analytics, mobile consumer devices have been transformed into health management platforms, significantly advancing the potential of mobile health and making it more widely available. AI technology is being utilized in mHealth and healthcare trends generally to analyze vast amounts of patient data, identify diseases more precisely, and improve disease surveillance. Additionally, it can increase the knowledge and skills of healthcare personnel as well as their productivity. The most useful applications of AI are in clinical decision support and information management, and these areas have already shown promise for enhancing patient and healthcare provider care.
4- IoT (Internet of Things)
The term Internet of Things, or IoT, refers to the trending technology that connects any object or device to the Internet or other connected devices. Utilizing various microsensors and processors, IoT is an enormous network of interconnected objects, all gathering data that can be shared. These details may relate to the setting in which these gadgets are used or to their usage patterns. In essence, sensors, equipped devices, and items are linked to an IoT platform, which integrates the gathered data and runs analytics on it. IoT platforms deployed locally or in the cloud can pick out particular data, process the most crucial information that any device or application consumes, and then transfer the processed data to the IoT network apps that cater to specific demands. These programs use the data to identify a variety of repeating patterns, suggest optimization strategies, or identify potential issues or abnormalities before they arise.
Digital Twin
One of the newest technological trends is called a Digital Twin, which makes extensive use of IoT networks. Devices collect data for analyzing and anticipating a physical asset’s performance characteristics and informing the adjustments needed to improve it.
5- Digital Twin frameworks
Digital Twin frameworks help to bridge the physical and digital worlds. A digital twin examines items that have a variety of sensors that generate information about many facets of their performance, including status, position, energy output, working circumstances, and more. A Digital Twin also gathers data and transfers it to a processing system, most frequently a cloud-based one, similar to the general idea of IoT technology. By analyzing data that is relevant to a given environment and applying it to a digital replica of the observed object, the digital twin technique differs from its parent technology, the virtual model. A digital twin can be used to run simulations, investigate performance problems, and produce potential improvements.
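The "identify potential issues or abnormalities before they arise" part of the IoT platforms and digital twins described above usually comes down to some form of anomaly detection on sensor streams. The sketch below shows one simple, assumed approach (a rolling z-score over readings); it is not a description of any particular vendor's platform, and the window size and threshold are illustrative.

```python
# Simple anomaly detection on a stream of sensor readings: flag values that are
# more than 3 standard deviations away from the recent rolling mean.
from collections import deque
from statistics import mean, pstdev

def detect_anomalies(readings, window=20, threshold=3.0):
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(recent) == recent.maxlen:
            mu, sigma = mean(recent), pstdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                alerts.append((i, value))   # candidate fault to investigate
        recent.append(value)
    return alerts

# Example: a mostly steady temperature sensor with one suspicious spike
temps = [70.1, 70.3, 69.9, 70.2] * 10 + [92.4] + [70.0] * 5
print(detect_anomalies(temps))
```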
All of these activities can produce insightful results that can be applied to the actual physical product, particularly in the fields of manufacturing, energy production, health care, the automobile sector, smart cities, etc. Digital twins are now commonplace. Applications for digital twins in manufacturing include product design, quality management, process optimization, supply chain management, predictive maintenance, and asset lifecycle management. All of these applications aim to enhance manufacturing operations. On the other hand, using digital twins in the automotive sector can streamline and improve the development, production, sales, and service of vehicles.
6- Smart Grid electrical networks
Electrical networks that have operational and energy efficiency measures incorporated are referred to as smart grids. They rely on the Internet of Things, IoT networks, and IoT-capable gadgets like smart distribution boards, circuit breakers, and improved metering infrastructure. Additionally, they serve as energy supplies that are renewable and efficient, capable of charging batteries for EVs and power storage, and provide a reliable broadband connection with wireless access as a backup. Dataflow and information management are essential components of smart grid technologies. Since digital processing and communications, which are both fundamental IoT technology aspects, are at their core, modern power infrastructure must have increased throughput and stability in addition to the additional digital layer to work with consumer-producer users. Innovative smart grid technologies make it possible for users to send electricity back into the grid using either battery power storage systems or small-scale electrical producers like solar and wind turbines, blurring the distinction between suppliers and consumers. Since the notion of smart grids interweaves electrical suppliers (which can be privately held or government facilities), enterprise consumers, and residential consumers, it is impossible to separate smart grid applications cleanly: depending on the sector of the network in question, smart grids operate as a combination of all three, rather than discretely on a B2B, B2G or B2C basis.
7- Big Data Patterns
Big data is a broad term that generally refers to data that embodies the three Vs of big data: greater variety, larger volume, and greater velocity. Larger and more sophisticated data sets are made available by big data trends, especially when using new data sources. Big data is the foundation of numerous emerging technologies that have applications in a variety of sectors, including banking, financial services, government, media, healthcare, and transportation. Large amounts of low-density and unstructured data can be processed using big data solutions, which is highly useful for processing data with unknown values, such as Twitter feeds, click streams, or output from sensor-enabled equipment. Another crucial aspect of big data, which enables smart devices to function in real time or very close to real time, is the speed or rate at which the data is received, analyzed, and acted upon. So here is the end of this interesting article “Technology trends in Artificial Intelligence”. Did you like it? Give your valuable feedback in our comments section. | Emerging Technologies |
Trinidad And Tobago Inks Pact For Sharing Indian Technology Stack
India has also signed pacts with Armenia, Sierra Leone, Suriname, and Antigua and Barbuda to share India Stack, while Mauritius and Saudi Arabia are at an advanced stage of finalising cooperation on the same.
Trinidad and Tobago has signed an agreement with the Indian government for sharing the India Stack, which is a collection of open application programming interfaces and digital public goods for identity, data and payment services.
The agreement was signed in the presence of officials from the Ministry of Electronics and IT, National E-Governance Division, and the Ministry of External Affairs, according to an official release.
The two nations agreed to cooperate for digital transformation through capacity building, training programmes, exchange of best practices, exchange of knowledge between public officials and experts, development of pilot or demo solutions.
Minister of State for Electronics and IT Rajeev Chandrasekhar said, "With the help of India Stack, these countries can climb up the digitalisation ladder rapidly and transform their economies and governance."
It shall create a robust ecosystem of startups, developers and system integrators working around it on next-gen innovation, he added.
The collaboration comes after Chandrasekhar met Trinidad and Tobago's Minister of Digital Transformation Hassel Bacchus last week and discussed mutual cooperation in IT, emerging technologies and India Stack.
India has also signed pacts with Armenia, Sierra Leone, Suriname, and Antigua and Barbuda to share India Stack, while Mauritius and Saudi Arabia are at an advanced stage of finalising cooperation on the same, the release said.
A similar agreement was signed with Papua New Guinea also last month, while the UPI -- part of India Stack -- has been accepted in France, UAE, Singapore and Sri Lanka, it said. | Emerging Technologies |
America is continually a work in progress, forever being reimagined by bold ideas, whether they arise from the public or private sector, or from pioneering inventors, entrepreneurs and corporations. The pandemic accelerated the “Great Reinvention,” forcing Americans, policymakers and businesses to re-evaluate values, conventional wisdom, and business models. A More Perfect Union 2022, The Hill’s second annual multi-day tentpole festival, explores and celebrates America’s best big ideas through the lens of American Reinvention. We will convene political leaders, entrepreneurs, policy innovators and disruptors, and thought provocateurs to debate and discuss some of the most urgent, challenging issues of our time.
Wednesday, December 7th – Emerging Technologies: All industries are ripe for disruption and technological advances often prompt those changes. AI, machine learning, robotic automation, VR/AR, blockchain and the internet of things are all innovative and evolving technology trends constantly changing the face of business. How did the pandemic speed up digital transformation and innovation? How are businesses keeping up with changing tech trends?
Thursday, December 8th – Reinventing the American Economy: Small Business and E-Commerce: How are record inflation, supply chain bottlenecks, and labor shortages contributing to the changes in businesses? How are innovative companies disrupting the way businesses are organized? During the pandemic many small businesses had to pivot quickly and find new ways to reach their customers through e-commerce platforms. E-commerce sales grew 50 percent during COVID-19, so what is the future of digital retail? How can technology encourage business growth? And who are the future disruptors of digital commerce?
Friday, December 9th – Consensus Builders: A recent Pew analysis finds that, on average, Democrats and Republicans are farther apart ideologically today than at any time in the past 50 years. Extreme polarization creates a kind of legislative catch-22 – zero-sum politics means we can’t get bipartisan majorities to change our institutions, while the current institutions intensify zero-sum competition between the parties. Post-midterms, where do we find “the missing middle”?
FEATURING
Wednesday, December 7th: Emerging Technologies – Andrei Papancea, CEO & Chief Product Officer, NLX; Rina Shah, Geopolitical Strategist, Investor, & 6x Entrepreneur; Emily Landon, CEO, The Crypto Recruiters
Thursday, December 8th: Reinventing the American Economy: Small Business and E-Commerce – Robert Doar, President, American Enterprise Institute; Karen Kerrigan, President & CEO, Small Business & Entrepreneurship Council; Emily Glassberg Sands, Head of Information, Stripe
Friday, December 9th: Consensus Builders – Aliza Astrow, Senior Political Analyst; Ryan Clancy, Chief Strategist, No Labels; David Eisner, President & CEO, Convergence Center for Policy Resolution; David Jolly, Former Member of Congress, Political Analyst
SPONSOR PERSPECTIVE – Paige Magness, Senior Vice President, Regulatory Affairs, Altria
MODERATORS – Bob Cusack, Editor-In-Chief, The Hill; Steve Scully, Contributing Editor, The Hill
Join the conversation! Tweet us: @TheHillEvents using #TheHillAMPU | Emerging Technologies |
The Inflation Reduction Act endorsed by Sen. Joe Manchin (D-WV) includes up to $250 billion in loan funding for the Department of Energy, according to the text of the bill, a major boost for the department's efforts to back clean energy technologies and electric vehicle manufacturing. Though it is unclear where exactly at the Department of Energy the funds would be directed, the investment is significant nonetheless — representing a large share of the legislation, which includes $369 billion in direct funding to combat climate change and bolster clean energy initiatives in the United States. The legislation, if passed, will be the largest climate bill in the nation’s history and includes billions in funding directed to energy security and climate change programs over the next 10 years, including subsidies in domestic production and manufacturing, according to a summary released by Democrats. It is unclear whether the funds would be funneled directly to the Department of Energy’s Loan Programs Office, which offers direct loan and loan guarantees for new and emerging technologies, or whether they would be allocated more broadly to DOE, which would then be tasked with administering the funds. The Department of Energy did not respond to a request for comment. Under President Joe Biden, DOE's Loan Programs Office has sought to increase its investments in clean energy, advanced transportation, and tribal energy projects. Last month, the office announced a $504.4 million loan guarantee to a hydrogen energy and storage facility in Utah, marking its first loan guarantee for a new clean energy project since 2014. | Emerging Technologies |
The US wants to be not only India’s security partner of the first resort but also its “premier partner” in its extraordinary growth story, the Pentagon has said. These remarks were made by Pentagon Press Secretary Brig Gen Patrick Ryder on Wednesday while responding to a question on the recently launched India-US initiative on critical emerging technologies, which has a significant defence component in it.
“The high level of participation from across the US government, US industry, and our universities is unprecedented, and sends a strong signal that the United States wants not only to be India’s security partner of first resort – but to be the premier partner in India’s extraordinary growth story,” Ryder said.
The Department of Defence is excited to work with other US agencies and partners as part of the White House-led United States–India Initiative on Critical and Emerging Technologies (iCET), he said.
“We look forward to sharing more information on our defence cooperation with India as the new initiatives develop moving forward. These initiatives will accelerate a shift from defence sales to defence joint production and development and promote integration between US and Indian defence firms,“ Ryder said.
Earlier this month, iCET was launched at the direction of US President Joe Biden and Prime Minister Narendra Modi who after their Tokyo meeting in May 2022 announced to elevate and expand the strategic technology partnership and defence industrial cooperation between the governments, businesses, and academic institutions of the two countries.
Under the iCET, the two countries have identified six areas of cooperation which would include co-development and co-production, that would gradually be expanded to Quad, then to NATO, followed by Europe and the rest of the world. In January, the Biden administration also said that India is an important partner of choice for the United States. | Emerging Technologies |
This is an excerpt from an essay titled “2022’s seismic shift in U.S. tech policy will change how we innovate,” published as part of MIT Technology Review’s 10 Breakthrough Technologies 2023. It was the perfect political photo op. The occasion was the September 2022 groundbreaking for Intel’s massive $20 billion chip-manufacturing complex in the suburbs of Columbus, Ohio. Backhoes dotted a construction site that stretched across hundreds of flat, empty acres. At a simple podium with the presidential seal, President Joe Biden talked about putting an end to the term “Rust Belt,” a name popularized in the 1980s in reference to the Midwest’s rapidly declining manufacturing sector. It was a presidential victory lap after the passage of some landmark U.S. legislation, beginning with the infrastructure bill in late 2021. Together, three major bills promise hundreds of billions of dollars in federal investments to transform the nation’s technology landscape. While ending the Rust Belt might be typical political hyperbole, you get the point: the spending spree is meant to revive the country’s economy by rebuilding its industrial base. The dollar amounts are jaw-dropping. The bills include $550 billion in new spending over the next five years in the Infrastructure Investment and Jobs Act, $280 billion in the CHIPS and Science Act (which prompted Intel to go ahead on the Ohio construction), and another roughly $390 billion for clean energy in the Inflation Reduction Act. Among the investments is the most aggressive federal funding for science and technology in decades. But the greatest long-term impact of the legislative flurry could come from its bold embrace of something that has long been a political third rail in the U.S.: industrial policy. What’s changed now is that the new legislation, which passed with some degree of bipartisan support in Congress, signals a strong appetite across the political spectrum for the U.S. government to reengage with the country’s industrial base. The U.S. legislation passed over the past year is really a series of different industrial and innovation strategies. There’s a classic industrial policy that singles out support to the chip industry, a green industrial policy in the Inflation Reduction Act (which is often called the climate bill) that broadly favors specific types of companies such as EV manufacturers, and other spending choices and policies scattered throughout the bills that aim to create new jobs. Arguably the most important provisions, at least according to some economists, are those designed to boost federal support for research and development. According to the Semiconductor Industry Association, the CHIPS Act has “sparked” some $200 billion in announced investments, with multiple huge new fabs planned across the country. The list is impressive: Micron Technology says it will spend up to $100 billion over the next two decades on its new facility just outside of Syracuse, N.Y., while Taiwan Semiconductor Manufacturing Company, the world’s largest chip maker, announced plans to spend $40 billion on a pair of fabs in Phoenix. Even as the semiconductor industry slogs through a sudden downturn, the massive new manufacturing facilities, along with the needed networks of supporting equipment, chemicals, and tools to run them, portend an era of unprecedented growth for the U.S.
in domestic chip production. Something similar is happening with EVs and the lithium-ion batteries that power them. With the passage of the climate bill and its generous tax incentives for electric vehicles have come announcements of new battery manufacturing. As with the semiconductor fabs, the explosion of battery manufacturing will produce a boon in related industries to build up domestic supply chains. Redwood Materials is building several large new facilities to recycle batteries, extracting metals such as lithium and cobalt needed in batteries to power the next generation of EVs. Numerous other sectors could benefit from the federal spending spree, especially from the hundreds of billions for clean tech. There is $8 billion for clean hydrogen and $10 billion for carbon capture, including support for facilities that grab carbon dioxide directly out of the air, an experimental new technology. But what has really caught the attention of many interested in strengthening innovation in the U.S. is the $174 billion for scientific R&D and technology commercialization in the CHIPS and Science Act. The National Science Foundation (NSF) alone gets $81 billion over the next five years. Some $20 billion goes to a new NSF directorate to support emerging technologies including artificial intelligence and quantum computing. The bill also authorizes $10 billion for regional technology and innovation hubs to be built around the country. It’s all meant to rebuild America’s industrial base, recognizing the critical role that new technologies like artificial intelligence will play in its future economic growth. But any new narrative that the government can promote innovation and use it to foster economic prosperity is still very much a work in progress. Perhaps the greatest unknown is how the federal funding will affect local economies and the welfare of millions of Americans who have suffered decades of lost manufacturing and declining job opportunities. Economists have long argued that technological advances are what drive economic growth. But over the past few decades, the prosperity resulting from such advances has been largely restricted to a few high-tech industries and has mostly benefited a relatively small elite. Can the American public be convinced that innovation can lead to widespread prosperity? One reason for renewed optimism is that today’s technologies, especially artificial intelligence, robotics, genomic medicine, and advanced computation, provide vast opportunities to improve our lives, especially in areas such as education, health care, and other services. If the government, at the national and local level, can find ways to help turn that innovation into prosperity across the economy, then we will truly have begun to rewrite the prevailing political narrative. David Rotman is editor at large of MIT Technology Review. For more information on the 10 Breakthrough Technologies 2023, visit MIT Technology Review’s website at technologyreview.com. | Emerging Technologies |
The Pentagon's Defense Advanced Research Projects Agency (DARPA) has chosen Boeing to develop a prototype and conduct flight testing of its upcoming Glide Breaker hypersonic interceptor.
An interceptor is a weapon designed to destroy other missiles mid-flight before they reach their targets. Glide Breaker is a planned huge leap forward in missile interceptors, as it's designed to target the highly maneuverable class of weapons known as hypersonic glide vehicles, which are able to execute abrupt "zig-zag" maneuvers as they glide unpowered through Earth's atmosphere at speeds of Mach 5 and higher. (Mach 1 is the speed of sound — about 767 mph, or 1,234 kph, at sea level.) This combination of speed and maneuverability makes such weapons much harder to defend against than traditional missiles.
"Hypersonic vehicles are among the most dangerous and rapidly evolving threats facing national security," Gil Griffin, executive director of Boeing Phantom Works Advanced Weapons, said in a Boeing statement announcing the four-year agreement with DARPA, which involves wind tunnel testing, simulations and flight testing of a Glide Breaker prototype. "We're focusing on the technological understanding needed to further develop our nation's counter-hypersonic capabilities and defend from future threats."
Boeing's contract with DARPA will fund simulations that will evaluate Glide Breaker designs using wind tunnel studies and what is known as computational fluid dynamics, computerized models of how a fluid — in this case air — interacts with an object such as a missile interceptor.
In addition, Boeing will conduct testing to evaluate how Glide Breaker's jet thrusters affect its overall aerodynamic performance as they fire to help the vehicle maneuver into position to intercept and defeat hypersonic weapons in flight.
Because Glide Breaker is designed to intercept rapidly emerging technologies unlike the weapons systems of the past, Boeing will have to use simulations that model the interactions that take place between the air and the interceptor at extreme speeds and altitudes.
"We're operating on the cutting edge of what's possible in terms of intercepting an extremely fast object in an incredibly dynamic environment," Griffin said in the statement.
A Pentagon contract announcement dated Sept. 8 states that Boeing's Glide Breaker development agreement with DARPA is worth $70,554,525. While a few notional images have been published by DARPA, little is known about what the final design or overall capabilities of Glide Breaker will be. DARPA's official page for the program is scant on any details.
Boeing states this Phase 2 contract will "provide the foundation for future operational glide-phase interceptors" capable of defeating the ever-evolving threat of hypersonic glide vehicles.
In 2020, aerospace contractor Aerojet Rocketdyne received an initial Phase 1 contract worth nearly $20 million to develop "enabling technologies" for Glide Breaker. Phase 2 contract solicitations, for which Boeing was just awarded this agreement, opened in 2022. | Emerging Technologies |
Greg Nash The Capitol is seen from the National Art Gallery on Wednesday, April 13, 2022. Tech leaders testifying on Wednesday before a House subcommittee on cyber told lawmakers that more coordination is needed between the public and the private sector to identify security threats, including cyber, that stem from emerging technologies like quantum computing and artificial intelligence. Ron Green, executive vice president and chief security officer at Mastercard, said that partnership should incentivize the government to share threat intelligence to the private sector so that both sectors are able to mitigate cybersecurity risks posed by U.S. adversaries both home and abroad. “Cybercrime is not constrained by borders or sectors,” Green told lawmakers. “Our digital world is too interconnected, and threats are too fast changing for any one organization to counter them alone,” he added. Green, who was joined by three other tech leaders, made his remarks during a House Homeland Security subpanel that touched on the intersection between emerging technologies and security risks. Green’s recommendations to Congress have previously been raised by U.S. cyber officials in government and experts in the private sector. Robert Knake, a U.S. official at the White House’s Office of the National Cyber Director, told lawmakers in April that companies are increasingly asking the government to share cyber threat intelligence as they seek to prepare and counter growing security threats. “What we’ve heard from every private sector company we talked to is to make sure that we can provide the one thing that private companies can’t do on their own, which is intelligence,” Knake said. “Only the U.S. government can collect intelligence, and only the U.S. government can provide it back. So that’s a major focus of our efforts,” he added. Cyber executives who also testified before Congress in April said that the U.S. government should be less of a regulator and more of a partner for critical infrastructure in the private sector, adding that its focus should remain on providing guidance and sharing threat intelligence. Those calls to action have been emboldened by President Biden’s executive order on cybersecurity which introduced several key initiatives, including facilitating threat information sharing between the government and the private sector. Green also suggested that Congress authorizes the Cybersecurity and Infrastructure Security Agency (CISA) to create a national cyber training center to train and prepare cybersecurity workers in identifying and mitigating national security threats arising from emerging technologies. “Planning for an attack is crucial but those plans are ultimately worthless without practice,” Green said. Green made an analogy explaining how cyberattacks should be taken as seriously as military operations. “It’s the same way that [ground] battle plans would be of little use without real world war games and live fire exercises.” Green said that the U.S. army already has a national training center based in Fort Irwin, California, adding that CISA should follow suit. “We need a similar facility for cybersecurity,” he added. Green also said that part of his job is to forecast future threats as technology evolves and anticipate how those security risks may threaten businesses like his and government bodies like Congress. “We’re often looking at 10 years ahead,” Green said. 
He added that he and his team regularly consult experts in the private sector, government and academia about ways to identify and mitigate emerging threats. “This may seem like purely speculative work, but we’re actually developing an informed textured picture of the future,” he said. | Emerging Technologies |
Katonic AI, Mindfields Partner To Integrate Automation With Generative AI
The companies will integrate generative AI models into an organisation's infrastructure, enhancing data security and precision.
Australian artificial intelligence company Katonic AI and hyper-automation and AI advisory firm Mindfields have partnered to enable organisations to leverage generative AI without concerns surrounding data privacy, security and accuracy.
The collaboration aims to tackle these concerns by integrating generative AI models into an organisation's infrastructure, enhancing data security and precision, the companies said.
Through the partnership, Mindfields and Katonic AI will help organisations with an end-to-end generative AI solution for their business.
Katonic AI said that through its Machine Learning Operations Platform, it will boost capabilities of existing applications by integrating them with large language models by consuming generative AI through APIs and training them further through prompt engineering techniques. Through its more than 50 generative AI LLM models, it will enable businesses to explore, experience and experiment with these models first-hand.
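The "consume generative AI through APIs and steer it with prompt engineering" pattern described above can be sketched roughly as follows. The endpoint URL, header names and response fields below are hypothetical placeholders (Katonic's actual platform API is not documented here), so treat this as an illustration of the pattern rather than working integration code.

```python
# Hypothetical sketch of calling a hosted LLM over HTTP with a prompt template.
# The URL, credential handling and JSON fields are assumptions for illustration only.
import os
import requests

API_URL = "https://llm.example.com/v1/generate"      # placeholder endpoint
API_KEY = os.environ.get("LLM_API_KEY", "demo-key")  # placeholder credential

PROMPT_TEMPLATE = (
    "You are a support assistant for an enterprise automation platform.\n"
    "Summarise the following ticket in two sentences and suggest one next step.\n\n"
    "Ticket: {ticket_text}"
)

def summarise_ticket(ticket_text: str) -> str:
    payload = {
        "prompt": PROMPT_TEMPLATE.format(ticket_text=ticket_text),
        "max_tokens": 150,
        "temperature": 0.2,   # low temperature for consistent, factual output
    }
    resp = requests.post(API_URL, json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"},
                         timeout=30)
    resp.raise_for_status()
    return resp.json()["text"]   # assumed response field

print(summarise_ticket("The invoice-processing bot stops on PDFs larger than 10 MB."))
```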
"As we fuse the power of RPA with our advanced no-code generative AI platform, we look forward to unlocking unprecedented levels of efficiency, innovation and intelligence for our customers," Prem Naraindas, founder and chief executive officer of Katonic AI, said.
Mindfields aims to reduce the total cost of ownership for generative AI and automation initiatives. The company said it would leverage the Katonic platform for advising on and implementing automation and generative AI solutions that deliver tangible business outcomes.
"Mindfields' vision is to enable businesses to leverage emerging technologies, optimise processes and drive growth. This collaboration underscores our commitment to leverage the full potential of generative AI and automation, propelling enterprises towards sustained success," Mohit Sharma, founder and executive chairperson of Mindfields, said.
Using generative AI, enterprises can leverage the LLM's ability to comprehend context and user-written simple prompts in natural language to generate text, photos, code, 3D models and videos.
According to an Accenture report, an estimated 65% of all tasks performed in organisations across various industries can be automated or enhanced through the use of generative AI, resulting in improved productivity, precision and cost savings. | Emerging Technologies |
María Victoria Quiñones Triana, also known as Vicky Quiñones, made history earlier this year by hosting Colombia’s first-ever court hearing in the metaverse.
Quiñones, a magistrate at the administrative court of Magdalena - based in the northern Caribbean city of Santa Marta - is known for running “one of the most disruptive courts in the country,” in the words of the president of Colombia's criminal bar association, Francisco Bernante.
In an exclusive interview with Euronews Next, Quiñones spoke about her journey battling Colombia’s "paper culture" and the future of justice using artificial intelligence (AI).
The 55-year-old hosted Colombia’s first-ever court hearing in the metaverse in February, but has been betting on the digital transformation of justice for almost 15 years.
Magdalena, the department where Quiñones is based, has some particular geographical characteristics. “Almost all the municipalities are very far away; eight hours by car, six hours by car; the roads are not easy, you even have to cross rivers,” she said.
That remoteness inspired Quiñones to look into how technology could help democratise access to justice.
“We had a physical divide, so I thought we had to build digital bridges,” she said.
In 2012, Quiñones founded a website called ‘Despacho 01’ - which she still runs and funds - with the simple purpose of providing online jurisprudence for her court, starting with the digitisation of legal dossiers.
"I thought it was terrible that people who lived in remote municipalities had to take a bus for eight, six hours, just to see a file. So we started telling them to scan their documents and send them by email,” she explained.
Her court then created a platform within the same website where people could access the briefs with a code.
At the time, promoting digital dossiers was “unthinkable,” she said.
Around the same year, Quiñones started broadcasting her court hearings on Youtube and allowing those who could not attend court proceedings to attend via a WhatsApp video call, “in order to guarantee the rights of all parties involved in the proceedings”.
“We were creating this culture of no paper (…) and actually there was the same or even more resistance than there is now to the metaverse,” she told Euronews Next.
Colombia’s first-ever court hearing in the metaverse
Quiñones started fantasising about the uses of the metaverse for her magisterial procedures shortly after the pandemic began.
“I started talking about it on my YouTube channel and suggesting it would be great if we could give it a try. Then one lawyer from an upcoming plaintiff proposed we did the hearing this way, and it was also accepted by the defendant. I was delighted”.
The challenge with Teams, Zoom, WhatsApp and similar platforms is that once you turn the camera off, “it becomes a principle of good faith,” she said. “You cannot confirm identities, and the sense of interaction completely disappears”.
The metaverse, on the other hand, “has a very important sensory element to it, there is an immediate sense of closeness, similar to what we feel when we see each other in flesh and bones,” she explains.
"The identity verification process is also more thorough; there is even a voice recognition software".
Quiñones hosted the legal session in Horizon Workrooms 18, the free virtual collaborative application developed by Meta. All parties - lawyers, clerks, defendants, plaintiffs, etc - showed up in the metaverse using their respective avatars (digital representations of people that look like a cartoon and which are often used in virtual worlds or online gaming).
Quiñones presided over the hearing in a virtual courtroom designed to resemble a traditional one. Once it started, the magistrate heard arguments from both sides, reviewed evidence, and made a ruling on the case.
The metaverse’s ‘enormous’ potential for the justice system
Beyond helping out the people who cannot physically make it to a court hearing, using the metaverse could also help those who cannot bring themselves to attend because of the emotional toll it would take.
For people who have lived trauma, for example, such as women or children who have been victims of abuse, it is often difficult to confront their aggressor.
“In the metaverse, I can create an environment where they feel safe to talk about what happened and confront their perpetrator without being afraid,” Quiñones said.
The naysayers will eventually give in, she added.
“Soon, the same judges who are reluctant to put on virtual reality (VR) glasses or even to examine digital dossiers will have to deal with intellectual property lawsuits within the same metaverse,” she predicted.
“In one way or another, life will find a way for us not to turn our backs on technology”.
What is the future of justice?
“It certainly won't be in the metaverse, at least not in Colombia's immediate future,” said Quiñones.
Colombia, as much as the rest of South America, also has a poor Internet infrastructure, “and none of the emerging technologies can be implemented without that basis,” she explained.
The country’s judicial branch is still working very hard “to try to break the paradigm of digital and getting rid of paper, as well as improving the process to digitise the files and the platforms for their access,” she added.
But local constraints have not stopped Quiñones from continuing to imagine the future of the law. Most recently, in conjunction with her office, the magistrate has been working to implement artificial intelligence in certain procedures.
"I want to automate systems where I see that human interaction is not needed," she said.
To speed up the process of approving damage claims, for example, Quiñones is working to create “an automated, simple digital form in which people would answer questions related to their complaint and provide the relevant documents. Then AI could determine whether the lawsuit proceeds or not”.
This could help deal with the problem of “the many people who sue for the sake of suing, and also to reduce the cost and time of the first round of legal services”.
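One very rough way to picture the automated claim-intake idea Quiñones describes - a digital form plus a first-pass AI check on whether a filing can proceed - is sketched below. The field names, document requirements and the stand-in rule for the classifier are invented for illustration and are not the Magdalena court's actual criteria.

```python
# Illustrative first-pass triage of a damage-claim form: check required fields
# and documents, then hand borderline cases to a human reviewer.
# Field names and rules are hypothetical, not the court's real criteria.

REQUIRED_FIELDS = {"claimant_name", "incident_date", "damage_description"}
REQUIRED_DOCUMENTS = {"id_document", "proof_of_damage"}

def triage_claim(form: dict, documents: set) -> str:
    missing_fields = REQUIRED_FIELDS - form.keys()
    missing_docs = REQUIRED_DOCUMENTS - documents
    if missing_fields or missing_docs:
        return f"rejected: missing {sorted(missing_fields | missing_docs)}"
    # Placeholder for an ML text classifier scoring the free-text description;
    # here a crude length check stands in for the model's plausibility score.
    if len(form["damage_description"].split()) < 20:
        return "needs human review: description too brief to assess"
    return "proceeds to a magistrate"

claim = {"claimant_name": "A. Pérez", "incident_date": "2023-01-10",
         "damage_description": "Flooding caused by a broken municipal pipe " * 5}
print(triage_claim(claim, {"id_document", "proof_of_damage"}))
```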
Her administrative court also used the AI large language model ChatGPT to explain the concept of the metaverse to the audience of the February 15 hearing, which was streamed live on Youtube and watched by more than 68,000 people.
But Quiñones’ main goal for the near future is clear: “I hope we will help the world understand that technology not only helps to make friends, find boyfriends or buy shoes but also to serve justice”. | Emerging Technologies |
Technology executives face a number of big challenges as they head into 2023, and they will likely need to work closely with CEOs at their organizations to address these hurdles. Much of the pressure CIOs feel is coming from the need to excel at digital transformation. "With digital transformation now at the top of the C-suite agenda, there is added pressure within organizations to adopt digital capabilities faster, develop them with scale in mind, and deliver sustained performance benefits by embedding digital ways of working across the business," said Carl Carande, vice chairman, advisory at consulting firm KPMG. "The pandemic accelerated the acceptability of digital interactions and entrenched hybrid ways of working," Carande said. But a survey of more than 1,300 global CEOs the firm conducted in July and August indicates organizations face several operational barriers that require creative solutions to build off that momentum and realize new angles of competitive advantage through digital initiatives. One challenge CEOs see is that emerging and disruptive technology is both an opportunity and a threat. "There has been no shortage of exciting advances that offer promise, but ultimately prove too costly or too complex to operationalize at scale," Carande said. "Far too often, going after these advances consumes key resources at the expense of other investment priorities." If companies want to deliver growth rather than great proofs of concept, they must have management and transformation teams that can quickly assess and prioritize the suitability and fit of emerging technologies, Carande said. CEOs said their organizations need to be quicker to shift investments to digital opportunities and divest in those areas where their organizations face digital obsolescence.
Getting buy-in from the CEO and board
"The challenge is that there is not always conviction at the [C-suite] or board level that 'going digital' is necessary for survival," said Tony Clark, former senior vice president of technology and innovation at professional services and investment management company Colliers International. "It's the job of the digital leader to bring forward the risks as well as the rewards, to paint a compelling picture that adopting technology and embedding [it] deeply into the business operating model is a competitive necessity." Most successful digital transformations "don't happen overnight or in one shot," said Harsha Bellur, CIO at jewelry manufacturer James Avery Jewelry. "You follow a data-driven approach of 'learn and iterate' to what's best for the particular organization and its customers. When decisions are based on customer response [and] insights, then the pace of adoption or obsolescence should be dictated by those findings," Bellur said.
Addressing employee burnout
Another issue CEOs said their organizations need to address is pandemic-related employee burnout from accelerated digital transformation over the past two years. "I think that there is some truth to the state of fatigue brought on by the need to quickly adopt remote work-related technology," Clark said. "However, for many companies it is an acceleration of a trend that was already in motion," Clark said. The organizations facing the most significant burnout tend to be those that are trying to balance three significant initiatives at once: implementing emerging technology, modernizing legacy core systems, and developing the skills and talent to adopt and deploy new capabilities, Carande said. The demand on technology services "almost always outpaces the capacity of IT teams," Bellur said. "It is important then to have a process to prioritize the work [based] on expected business outcomes and value generation. This requires strong leadership to influence and drive consensus across the swath of the organizations demanding IT services."
Maintaining customers and competitive edge
Many CEOs said deciding on the right technology to deploy is holding back progress on business transformation, according to the KPMG survey. "I can see why this can be challenging," Bellur said. "There are a myriad of solutions that often overlap in their capabilities. Often there is never a 'perfect' solution, and it can take weeks or months to evaluate the right solution fit to a given problem. Good due diligence in finding the right solution can make the implementation relatively smoother." Rather than focus on the "right technology," organizations should aim to deploy the right mix of technology, Carande said. The diversity and scale of the current technology landscape can present businesses with a daunting set of options. "Competition for technical talent and complexity leads many businesses to feel they don't have the right mix of internal talent and strategic partners to bring the proper technology to bear," Clark said. To address the coming challenges, it's important that CIOs collaborate with chief executives on a regular basis. "In today's digital age, technology is not only an enabler of business functions but also a critical driver of competitive advantage in many organizations," Bellur said. "This means the CIO and CEO — and honestly the entire C-suite — need to work closely to embed technology into the business strategy and operating model." The CIO-CEO partnership "is vital to align the vision and priorities and ultimately deliver business value," Bellur said. "When the C-suite is not aligned on the tech strategy, it undermines the ability of the CIO to deliver business value," and can result in failed projects or cost overruns. "Ultimately, it's a potential loss of customers and competitive edge for the business," he said. CIOs must establish themselves as trusted business advisors, armed with knowledge of their industry and the competitive landscape, Clark said. "It is imperative that this relationship becomes as trusted and important as it is between the CEO and the CFO, COO and CMO," he said. "It will require care and feeding from both parties." | Emerging Technologies |
Graduation project ideas using artificial intelligence
In this article:
- How do you get ideas for graduation projects?
- Artificial intelligence is the future, so it is better to include it in the graduation project
- 5 ideas for a graduation project based on artificial intelligence
Generating AI startup ideas can be challenging for aspiring graduates, but it is possible to become successful by improving existing products or coming up with a unique new idea.
With the rapid scientific and technological development, we are all aware of the impact of artificial intelligence (AI) in all areas of life, including small and medium-sized companies as well as startups.
Artificial intelligence is definitely a field that can be applied in graduation projects, and the scope of ideas for AI-based startups can be finance, travel, medicine, entertainment, education, and the list goes on.
While large companies are scrambling to add artificial intelligence to their products and services, other companies are working hard to develop their own smart technologies and services.
In general, artificial intelligence (AI) is used in business for several reasons, including reducing costs, increasing efficiency and revenue, and improving customer service. In fact, AI technology has proven to be incredibly beneficial for almost all businesses.
How do you get ideas for graduation projects?
Try to record the problems in your surroundings and find solutions to them through artificial intelligence; you will find many ideas for graduation projects this way.
For example, in healthcare there are many conditions, such as diabetic foot amputation; with AI, you could build a prosthetic foot whose embedded intelligence matches the body's movements.
Or, in the case of audiobooks, you could create an AI-powered device that converts what we hear into written words in real time on a connected screen - I listen to a sentence and at the same moment see it written on the device in front of me (a live speech-to-text project, sketched below).
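One minimal way to prototype that live transcription idea is with the third-party SpeechRecognition package for Python (an assumption on our part - the article names no specific tools); the sketch below listens to the microphone and prints each recognised phrase as text.

```python
# Minimal live speech-to-text sketch for the graduation-project idea above.
# Assumes the third-party SpeechRecognition and PyAudio packages are installed;
# recognize_google() sends short audio clips to Google's free web API.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Listening... speak a sentence, pause, and it will appear as text.")
    while True:
        audio = recognizer.listen(source, phrase_time_limit=10)
        try:
            text = recognizer.recognize_google(audio)
            print(">>", text)  # this is what the device's screen would display
        except sr.UnknownValueError:
            print("(could not understand audio)")
```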
Artificial intelligence is the future, so it is better to include it in the graduation project
Smart chips and other implanted devices may one day compensate for damaged nerve impulses in neurological diseases such as Parkinson’s disease and ataxia, and help restore physical functions for patients with multiple sclerosis.
From this point of view, search for common diseases to find a solution and your project will be widely popular.
I certainly can’t suggest a particular idea, because technically that is up to you. In this context, I can point to areas that currently give graduation projects a better chance in competitions:
- Environment and conservation.
- Public health and its automation.
- Drinking water.
- Ditching plastic.
- Abandoning pesticides.
- Departmental automation.
- Automation of public services.
5 ideas for a graduation project based on artificial intelligence
1- Home digitization project
The home digitization project is among the AI-based graduation project ideas. The idea lies in making home appliances smarter and facilitating home management with a few clicks. If you are passionate about emerging technologies and artificial intelligence, share this passion with others to earn more money. What distinguishes this project is that it does not require large capital to start; everything you need is solid skill and experience in the field of emerging technologies.
With your own project, you can let homeowners control the system anywhere, anytime. Home automation startup ideas could include installing an indoor and outdoor lighting network based on artificial intelligence, as well as offering smart washing machines, smart TVs, and smart refrigerators. These machines are smart because they can transmit data or simply talk to their owners.
For example: Smart washing machines can send you an alert when they run out of detergent.
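A tiny sketch of that "appliance sends you an alert" idea is shown below, using the paho-mqtt client library (assuming its 1.x constructor API); the broker address, topic name and detergent-level reading are illustrative placeholders, not a real product's interface.

```python
# Hypothetical sketch of a washing machine publishing a "detergent low" alert
# to a home-automation MQTT broker. Assumes paho-mqtt 1.x; names are placeholders.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.local"          # assumed home MQTT broker
TOPIC = "home/laundry/washer/alerts"

def publish_detergent_alert(level_percent: float) -> None:
    """Publish an alert when the measured detergent level drops below 10%."""
    if level_percent >= 10:
        return
    client = mqtt.Client("washer-01")    # 1.x style: first argument is client id
    client.connect(BROKER, 1883)
    payload = json.dumps({"device": "washer", "alert": "detergent_low",
                          "level_percent": level_percent})
    client.publish(TOPIC, payload, qos=1)
    client.disconnect()

publish_detergent_alert(7.5)  # e.g. a reading from the machine's level sensor
```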
2- E-learning
Due to the global lockdowns, there has been a rise in online teaching, which increases teachers' workload: faculty members are expected to use many strategies to keep students engaged.
One way to reduce that burden is to automate the creation of assessments. Artificial intelligence can also be used to generate a new chapter from a set of educational objectives, which can be a great help for content creators in education.
3- Health care
Thanks to recent developments, it has become common to use AI in the healthcare industry. The number one reason behind the popularity of AI in medicine is accuracy: it helps doctors examine patients more precisely, as became evident during the Covid-19 pandemic. The technology also helps predict the spread of diseases using data and allows preventive measures to be taken in a specific field, making it easier to deal with global situations more effectively.
So, start a graduation project in a healthcare unit that is based on AI technology, this includes all the medical devices used in your healthcare unit. Of course, this project needs a lot of global experience and also a lot of capital.
4- Energy and cost saving startups
Funding an AI startup that reduces the energy use and cost of drilling operations is another area to consider, since problems around transporting crude oil and natural gas and storing and refining oil arise constantly.
5- Logistics services
Some of the best machine learning startups are those in the fields of sourcing and logistics. Supply Chain Management is a thriving industry, and one of its biggest concerns is the high costs of fuel and transportation.
Meanwhile, demand for free delivery keeps growing, which is why companies have to find innovative ways to cut costs while still meeting customer expectations. An AI-powered supply chain manager can oversee a company’s entire supply chain by monitoring new requests and integrating them with the existing infrastructure. | Emerging Technologies
Forbes spoke to OpenAI’s Sam Altman and Greg Brockman—and more than 60 other leaders, from Bill Gates to Fei-Fei Li—about the new wave of AI hype, driven by the viral popularity of ChatGPT and Stable Diffusion.
Here’s why AI is about to change how you work, like it or not.
In an unremarkable conference room inside OpenAI’s office, insulated from the mid-January rain pelting San Francisco, company president Greg Brockman surveys the “energy levels” of the team overseeing the company’s new artificial intelligence model, ChatGPT.
“How are we doing between ‘everything’s on fire and everyone’s burned out’ to ‘everyone’s just back from the holidays and everything’s good’? What’s the spectrum?” he asks.
“I would say the holidays came at just the right time,” replies one lieutenant. That’s an understatement. Within five days of ChatGPT’s November launch, 1 million users overloaded its servers with trivia questions, poetry prompts and recipe requests. (Forbes estimates it’s now 5 million-plus.)
OpenAI quietly routed some of the load to its training supercomputer, thousands of interconnected graphics processing units (GPUs) custom-built with allies Microsoft and Nvidia, while long-term work on its following models, like the highly anticipated GPT-4, took a back seat.
As the group huddles, ChatGPT’s at-capacity servers still turn away users. The previous day, it went down for two hours. Yet amid the fatigue, this roomful of employees, all in their 20s and early 30s, clearly relish their roles in a historic moment. “AI is going to be debated as the hottest topic of 2023. And you know what? That’s appropriate,” says Bill Gates, the person most responsible for a similar previous paradigm shift—one known as software. “This is every bit as important as the PC, as the internet.”
The markets agree. Valued at US$29 billion following a reported US$10 billion investment commitment from Microsoft, OpenAI—specifically, Brockman, 34, and his boss, CEO Sam Altman, 37—serves as the poster child for something extraordinary. But it’s hardly alone. In image generation, Amazon quietly backs Stability AI (recent value: US$1 billion), whose brash CEO, Emad Mostaque, 39, aspires to be the Amazon Web Services of the category. Hugging Face ($2 billion) supplies tools for giants like Intel and Meta to build and run competitive models themselves. Below the generative AI providers in this budding tech stack, Scale AI (US$7.3 billion) and others provide picks-and-shovels infrastructure; above them, an ecosystem of applications develops, funnelling the AI into specialized software that could fundamentally alter jobs for lawyers, salespeople, doctors—pretty much everyone.
What’s the hype about?
Is there hype? Plenty. The reported valuation for OpenAI, aggressively forecasting 2023 revenue of US$200 million (compared to expected revenue of about US$30 million last year, according to part of a past investor presentation observed by Forbes), would imply a forward 145 price-to-sales multiple, compared to a more typical 10x or 20x. (OpenAI declined to comment on its financials except to say that the investment was multiyear and multibillion.)
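For readers who want to check that arithmetic, the implied multiple follows directly from the two figures quoted above (a rough calculation using the reported valuation and revenue forecast):

forward price-to-sales ≈ US$29,000 million ÷ US$200 million = 145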
No matter that AI insurgents aren’t pure disruptors—Amazon, Google, Microsoft, Nvidia and others already profit by providing the cloud infrastructure underpinning much of the category. Google in particular, with its enormous resources and decade-plus of machine learning research, is the “elephant in the room,” says investor Mike Volpi at Index Ventures.
Societal challenges? Those, too. There’s potential for bias and discrimination in the models, not to mention misuse by bad actors. Legal spats are emerging over the ownership of AI-generated work and the actual data used to teach them. Then there’s the ultimate goal that some, such as OpenAI’s leaders, envision: a conscious, self-improving “artificial general intelligence” that could reimagine capitalism (Altman’s hope)—or threaten humanity (others’ fear, including Elon Musk’s).
But in speaking with more than 60 researchers, investors and entrepreneurs in the category, it’s clear that this AI gold rush also has something other recent crazes have lacked: practical, even boring, business substance. The race to embed tools in company workflows, large and small, is already on. Calls to AI-based code snippets, or APIs, soared tenfold in 2022, with more acceleration in December, according to provider RapidAPI. A recent Cowen study of 100-plus enterprise software buyers found that AI has emerged as the top spending priority among emerging technologies. ChatGPT and OpenAI’s models are coming to Microsoft’s massive-footprint suite of products such as Outlook and Word, with most business software makers poised to follow suit quickly.
A quarter-century after IBM’s Deep Blue program defeated chess grandmaster Garry Kasparov, the shift to artificial intelligence is finally here. “It’s an exciting time,” the press-shy Altman tells Forbes, “but my hope is that it’s still extremely early.”
New York City’s public school system banned ChatGPT, and a Wharton professor who tested the program gave it a “B” on his final exam.
This AI tipping point also has roots in London, the headquarters for Mostaque’s Stability. In August, hot on the heels of the beta launch of OpenAI’s image model, DALL-E, Mostaque released Stable Diffusion, which allows anyone to instantly spin a line of text into a piece of art, or turn a dull selfie into a dramatic self-portrait. Unlike OpenAI’s proprietary model, Stability doesn’t own Stable Diffusion, which is open-source. But it’s become the biggest driving force and profit maker behind the project so far. On any given day, 10 million people use Stable Diffusion—more than any other model.
Such rapid adoption proved a turning point. Previously, AI had existed in three realms. The first was academic: A seminal paper demonstrating the power of neural networks, a key underpinning of GPT and other large language models (so named because they can scan, translate and generate text) was published more than a decade ago. The second was demonstrative: Deep Blue created an arms race of stunts, with Alphabet’s DeepMind unit ultimately creating juggernauts in chess and the ancient board game Go. The third was incremental: apps like Gmail, which works without AI, but is better with features such as autocomplete.
What none of these had was the magic of playing with the technology firsthand that made Stability such a breakthrough. Its overnight virality was enough for investors to offer the company a US$1 billion valuation and more than US$100 million of funding in August, within two weeks of its launch—off virtually no revenue.
AI’s potential impact needs to be debated now: “It’s like an invasive species. We will need policymaking at the speed of technology.”
Now, generative AI has exploded. Electronic music group the Chainsmokers used Stable Diffusion to render a recent music video, and Mostaque predicts it’ll soon be used to generate entire movies. The Dalí Museum in St. Petersburg, Florida, is using DALL-E to help visitors visualize their dreams, and a similar image generation tool from the startup Midjourney sparked outrage online when it was used to create a piece of art that won a top prize at the Colorado State Fair.
“I think this is a Sputnik moment,” says Stripe CEO Patrick Collison, Brockman’s former boss, who says he’s looking forward to AI tools live translating YouTube videos and grouping them by AI-identified themes.
As Stability proliferated, OpenAI had already decided to shelve ChatGPT to concentrate on domain-focused alternatives, saving the interface for a bigger later release. But by November, it had reversed course. And by January, as New York City’s public school system banned ChatGPT on its computers and a Wharton professor who tested the program gave it a “B” on his final exam, the company offered a test paid version of the tool to some users. “Stable Diffusion threw a bomb into the mix by making things dramatically more accessible,” says Sequoia investor Pat Grady, an OpenAI backer. “It really lit a fire under OpenAI and got them to become much more commercially focused.”
This, in turn, accelerated the commercial aspirations across the industry. Stability’s Mostaque gave his entire staff time off during the holidays—he himself mostly slept, interrupted only by GPT-fueled panic calls from headmasters of top U.K. schools—with the idea that 2023 would turn gruelling as he tries to go toe to toe with not just OpenAI but the likes of Google and Meta. His message to his team: “You’re all going to die in 2023.”
‘Code Red’
The world’s biggest tech companies have accepted the challenge. At Google, hermetic founders Sergey Brin and Larry Page have reportedly returned to headquarters as part of a “code red” enacted by CEO Sundar Pichai to address ChatGPT and its ilk; at Microsoft, long-retired co-founder Gates tells Forbes he now spends about 10% of his time meeting with various teams about their product road maps.
Google should have the advantage. In 2017, Google researchers invented the “T” in GPT, publishing a paper on transformers that, by analysing the context of a word in a sentence, made large language models more practical. One of its authors, Aidan Gomez, remembers deploying the tech first to Google Translate, then to Search, Gmail and Docs. How it’s used, however, remains mostly behind the scenes—or in support of advertising products, the bulk of its sales—leaving consumers un-wowed. “I was waiting for the world to start picking this up and building with it, and it wasn’t happening,” says Gomez, who launched his own OpenAI challenger, Cohere, in 2019. “Nothing was changing.” Of the paper’s eight authors, six have left Google to start their own companies; another jumped to OpenAI.
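To make "analysing the context of a word in a sentence" concrete, here is a minimal Python sketch of the scaled dot-product attention at the core of the transformer; the tiny random matrices stand in for learned projections of a real sentence, so the numbers are meaningless and the code is illustrative only.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each position compares its query against every key, then mixes the
    # values accordingly - this is how a token "attends" to its context.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the sentence
    return weights @ V

# Toy "sentence" of 4 tokens, each embedded in 8 dimensions.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8): one context-aware vector per token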
Instead, Microsoft seems poised to become the industry leader. In 2019, Brockman and his team realized they couldn’t pay for the large-scale computing GPT would need with the money they had been able to raise as a nonprofit, including from the likes of Peter Thiel and Musk. OpenAI spun up a for-profit entity to give employees equity and take on traditional backers, and Altman came on board full-time. Microsoft CEO Satya Nadella committed US$1 billion to OpenAI at the time and guaranteed a large and growing customer base in his cloud service, Microsoft Azure.
Now, the US$10 billion Microsoft investment will translate into ChatGPT deploying across Microsoft’s Office software suite. RBC Capital Markets analyst Rishi Jaluria, who covers Microsoft, imagines a near-future “game-changer” world, in which workers convert Word documents into elegant PowerPoint presentations at the push of a button.
For years, the big data question for large enterprises has been how to turn hordes of data into revenue-generating insights, says FPV Ventures cofounder Pegah Ebrahimi, the former CIO of Morgan Stanley’s investment banking unit. Now, employees ask how they can deploy AI tools to analyse video catalogues or embed chatbots into their own products. “A lot of them have been doing that exercise in the last couple of months and have concluded that yes, it’s interesting, and there are places we could use it,” she says.
What values?
The big debate around this new AI era surrounds yet another abbreviation: “AGI,” or artificial general intelligence—a conscious, self-teaching system that could theoretically outgrow human control. Helping to develop such technology safely remains the core mission at OpenAI, its executives say. “The most important question is not going to be how to make technical progress, it’s going to be what values are in there,” Brockman says. At Stability, Mostaque scoffs at the objective as misguided: “I don’t care about AGI. . . If you want to do AGI, you can go work for OpenAI. If you want to get stuff that goes out to people, you come to us.”
OpenAI supporters like billionaire Reid Hoffman, who donated to its non-profit through his charitable foundation, claim that reaching an AGI would be a bonus, not a requirement for global benefit. Altman admits he’s been “reflecting a great deal” on whether we will recognize AGI should it arrive. He currently believes “it’s not going to be a crystal-clear moment; it’s going to be a much more gradual transition.” But researchers warn that the potential impact of AI models needs to be debated now, given that once released, they can’t be taken back. “It’s like an invasive species,” says Aviv Ovadya, a researcher at Harvard’s Centre for Internet and Society. “We will need policymaking at the speed of technology.”
In the nearer term, these models, and the high-flying companies behind them, face pressing questions about the ethics of their creations. OpenAI and other players use third-party vendors to label some of their data and train their models on what’s out of bounds, forfeiting some control over their creators. A recent review of hundreds of job descriptions written using ChatGPT by Kieran Snyder, CEO of software maker Textio, found that the more tailored the prompt, the more compelling the AI output—and the more potentially biased. OpenAI’s guardrails know to keep out explicitly sexist or racist terms. But discrimination by age, disability or religion slipped through. “It’s hard to write editorial rules that filter out the numerous ways people are bigoted,” she says.
Copyright laws are another battleground. Microsoft and OpenAI are the targets of a class action alleging “piracy” of programmers’ code. (Both companies recently filed motions to dismiss the claims and declined further comment.) Stability was recently sued by Getty Images, which claims Stable Diffusion was illegally trained on millions of its proprietary photos. A company spokesperson said it was still reviewing the documents.
Even more dangerous are bad actors who could deliberately use generative AI to disseminate disinformation—say, photorealistic videos of a violent riot that never actually happened. “Trusting information is part of the foundation of democracy,” says Fei-Fei Li, the co-director of Stanford’s Institute for Human-Centred Artificial Intelligence. “That will be profoundly impacted.”
‘Stay out of Google’s way’
Who will have to answer such questions depends in part on how the fast-growing AI market takes shape. “In the ’90s we had AltaVista, Infoseek and about ten other companies that were like it, and you could feel in the moment like some or one of any of those were going to the moon,” says Benchmark partner Eric Vishria. “Now they’re all gone.”
Microsoft’s investment in OpenAI, which comes with a majority profit-sharing agreement until it has made back its investment, plus a capped share of additional profits, is unprecedented, including its promise for OpenAI to eventually return to nonprofit control. (Altman and Brockman, respectively, call that a “safety override” and “automatic circuit breaker” to keep OpenAI from concentrating power if it gets too big.) Some industry observers more wryly see the deal as a near-acquisition or at least a rental, that benefits Nadella the most. “Every time we’ve gone to them to say, ‘Hey, we need to do this weird thing that you’re probably going to hate,’ they’ve said, ‘That’s awesome,’” Altman says of the arrangement. (Microsoft declined to discuss the deal’s terms.)
There’s another under-discussed aspect of this deal: OpenAI could gain access to vast new stores of data from Microsoft’s Office suite—crucial as AI models mine the internet’s available documents to exhaustion. Google, of course, already has such a treasure trove. Its massive AI divisions have worked with it for years, mostly to protect its own businesses. A bevy of fast-tracked AI releases are now expected for 2023.
At Stability, Mostaque takes great pains to explain his business as focused on the creative industry, more like Disney and Netflix—above all, staying out of Google’s way. “They’ve got more GPUs than you, they’ve got more talent than you, they’ve got more data than you,” he says. But Mostaque has made his own potential Faustian bargain—with Amazon. A partnership with Stability saw the cloud leader provide more than 4,000 Nvidia AI chips for Stability to assemble one of the world’s largest supercomputers. Mostaque says that a year ago, Stability had just 32 such GPUs.
“They cut us an incredibly attractive deal,” he says. For a good reason: The synergy provides an obvious cash cow from cloud computing run on Amazon Web Services and could generate content for its Studios entertainment arm. But beyond that, Amazon’s play is an open question.
Don’t forget Apple and Facebook parent Meta, which have large AI units, too. Apple recently released an update that integrates Stable Diffusion directly into its latest operating systems. At Meta, chief AI scientist Yann LeCun griped to reporters, and over Twitter, about ChatGPT buzz. Then there are the many startups looking to build all around, and against, OpenAI, Stability and their kind. Clem Delangue, the 34-year-old CEO of Hugging Face, which hosts the Stable Diffusion open-source model, envisions a Rebel Alliance of sorts, a diverse AI ecosystem less dependent on any Big Tech player. Otherwise, Delangue argues, the costs of such models lack transparency and will rely on Big Tech subsidies to remain viable. “It’s cloud money laundering,” he says.
Existing startup players like Jasper, an AI-based copywriter that built tools on top of GPT and generated an estimated US$75 million in revenue last year, are scrambling to keep above the wave. The company has already refocused away from individual users, some of whom were paying US$100 or more a month for features now covered roughly by ChatGPT, with OpenAI’s own planned first-party applications yet to arrive. “This stuff gets broken through so quickly, it’s like nobody has an edge,” says CEO Dave Rogenmoser.
That applies to OpenAI, too, the biggest prize and the biggest target in the bunch. In January, a startup founded by former OpenAI researchers called Anthropic (backed most recently by Sam Bankman-Fried of bankrupt firm FTX), released its own chatbot called Claude. The bot holds its own against ChatGPT in many respects, despite having been developed at a fraction of the cost, says Scale AI CEO Alexandr Wang, an infrastructure software provider to both. “It [raises] the question: What are the moats? I don’t think there’s a clear answer.”
At OpenAI, Brockman points to a clause in the company’s nonprofit charter that promises, should another company be close to reaching artificial general intelligence, to shut down OpenAI’s work and merge it into the competing project. “I haven’t seen anyone else adopt that,” he says. Altman, too, is unperturbed by horse race details. Can ChatGPT beat Google search? “People are totally missing the opportunity if you’re focused on yesterday’s news,” he muses. “I’m much more interested in thinking about what comes way beyond.”
This story was first published on forbes.com | Emerging Technologies |
U.S. intelligence agencies are looking to vastly expand the roster of countries, companies and even nonstate actors with whom they partner in order to get — and share — information on threats to the United States and its allies.
The change is part of a "rethink" ordered in the nation's new National Intelligence Strategy, unveiled Thursday, which aims to better prepare the U.S. for a range of threats that are no longer limited to traditional nation-state competitors such as China and Russia or terrorist groups such as al-Qaida and the Islamic State group.
"The United States faces an increasingly complex and interconnected threat environment," according to Director of National Intelligence Avril Haines, who cited a range of challenges from global powers such as China and Russia to climate change and pandemics like COVID-19.
"Subnational and nonstate actors — from multinational corporations to transnational social movements — are increasingly able to create influence, compete for information, and secure or deny political and security outcomes, which provides opportunities for new partnerships as well as new challenges to U.S. interests," she wrote in the 2023 strategy.
Global challenges, disruptive advances
"In addition, shared global challenges, including climate change, human and health security, as well as emerging and disruptive technological advances, are converging in ways that produce significant consequences that are often difficult to predict," Haines said.
The seeds for the new strategy, and the emphasis on finding new partners like those in the private sector, have been in the works for months.
Since before Russia's invasion of Ukraine in February 2022, the U.S. has been selectively declassifying intelligence to better share information with allies and partners.
The effort, credited with building support for Ukraine while catching Russia off guard, is already becoming part of a formal U.S. game plan for countering threats.
Haines publicly called for more outreach to the private sector and technology companies as recently as April, during an appearance at the Washington-based Carnegie Endowment for International Peace.
"In many scenarios, they see things before we do," she said at the time.
The U.S. intelligence community's annual threat assessment, issued in February, likewise warned of "an evolving array of nonstate actors," global emergencies such as climate change, and emerging technologies that "have the potential to disrupt traditional business and society with both positive and negative outcomes, while creating unprecedented vulnerabilities and attack surfaces."
The new strategy seeks to counter those trends by strengthening existing intelligence partnerships, including the "Five Eyes" arrangement with Britain, Canada, Australia and New Zealand, and by forging new ones.
Specifically, the strategy envisions U.S. intelligence agencies exchanging information with private companies and what it describes as "nonstate and subnational actors."
That includes relationships with nongovernmental organizations, think tanks and other entities that could help provide the U.S. intelligence with local or on-the-ground expertise.
It is also likely to include the type of intensified cooperation and information sharing that has been part of so-called U.S. “hunt forward” cyber operations in countries such as Latvia and Albania, or even more outreach along the lines of what has been done with state and local governments in the U.S. to help secure elections.
'Essential' partnerships
Some former intelligence officials call such outreach a necessity.
"It is a simple fact of how elections and communications infrastructure work that tech companies in the private sector are best positioned to be aware of many such threats first, and so close partnership with them is essential," said Paul Pillar, a former senior CIA officer who now teaches at Georgetown University.
"There will always be delicate negotiations about exact sharing arrangements to give due respect to values such as personal privacy," Pillar told VOA. "But government and the private sector already have gotten into that a lot."
The new U.S. intelligence strategy also emphasizes better understanding of new technologies and supply chains to make sure countries such as China "are not able to undermine our competitiveness and national security."
The strategy is meant to guide all 18 U.S. intelligence agencies, including the Office of the Director of National Intelligence, the Central Intelligence Agency, the National Security Agency, the Defense Intelligence Agency and the Federal Bureau of Investigation.
It replaces the previous strategy issued in 2019, which focused on "speaking truth."
"We need to reassure the policymakers and the American people that we can be trusted … despite the stresses that are persistent in the current environment," said then-Director of National Intelligence Dan Coats.
Some key U.S. lawmakers are welcoming the updated intelligence strategy.
"The National Intelligence Strategy appropriately organizes the Intelligence Community around seminal challenges: a rising China, Russia's war of aggression in Ukraine, and the opportunities and complexities presented by emerging technologies like AI [artificial intelligence]," Senate Intelligence Committee Chairman Mark Warner said in a statement shared with VOA.
"An expert workforce, robust partnerships and resilient capabilities will be central to this effort," Warner noted. | Emerging Technologies |
G20 Summit 2023: Key Takeaways From Modi, Biden Bilateral Meeting
Biden reaffirmed his support for India’s claim to a permanent seat in the United Nations Security Council.
Prime Minister Narendra Modi and U.S. President Joe Biden on Friday vowed to "deepen and diversify" the bilateral defence partnership while welcoming forward movement in India's procurement of 31 drones and joint development of jet engines.
In their over 50-minute talks, the two leaders deliberated on India's G20 presidency, cooperation in nuclear energy, critical and emerging technologies such as 6G and artificial intelligence, and ways to fundamentally reshape multilateral development banks.
The joint statement said the U.S. President welcomed the issuance of a Letter of Request from India's Defence Ministry to procure 31 MQ-9B remotely piloted aircraft from American defence giant General Atomics.
It said the two leaders also welcomed the completion of the Congressional notification process and the commencement of negotiations for a commercial agreement between GE Aerospace and Hindustan Aeronautics Ltd. to manufacture GE F-414 jet engines in India.
Permanent Seat At UN Security Council
Biden reaffirmed his support for India’s claim to a permanent seat in the United Nations Security Council and pledged his commitment to the UN reform agenda.
"The leaders once again underscored the need to strengthen and reform the multilateral system so it may better reflect contemporary realities and remain committed to a comprehensive UN reform agenda, including through expansion in permanent and non-permanent categories of membership of the UN Security Council," the joint statement said.
Astronaut To International Space Station
Biden also congratulated PM Modi on the historic landing of Chandrayaan-3 at the south polar region of the Moon and the success of the Aditya-L1 solar mission.
The two countries said that they have started talks to put in place a strategic framework for human space flight by year's end, as they plan to send an Indian astronaut to the International Space Station in 2024.
"Determined to deepen our partnership in outer space exploration, ISRO and the National Aeronautics and Space Administration have commenced discussions on modalities, capacity building, and training for mounting a joint effort to the International Space Station in 2024 and are continuing efforts to finalise a strategic framework for human space flight cooperation by the end of 2023," said the statement issued after the talks between the two leaders.
Trade Dispute Settlement
The statement said that the two countries have settled the last trade dispute at the World Trade Organisation over poultry products.
With this, the two countries have mutually resolved all seven pending trade disputes at the WTO.
"The leaders lauded the settlement of the seventh and last outstanding WTO dispute between India and the United States. This follows the unprecedented settlement of six outstanding bilateral trade disputes in the WTO in June 2023," the statement said.
(With inputs from PTI) | Emerging Technologies |
[Figure caption: When both sender and receiver have identical random numbers, they can share encrypted data without the need to share a key to decode it. This prevents so-called man-in-the-middle attacks. With COSMOCAT, muons (µ) arriving at the sender and receiver at the same time provide the source of the random number. Provided the devices are synchronized, the receiver can know which muon signal relates to which incoming message and can decode it accordingly. Credit: ©2022 Hiroyuki Tanaka]
State-of-the-art methods of information security are likely to be compromised by emerging technologies such as quantum computers. One of the reasons they are vulnerable is that both encrypted messages and the keys to decrypt them must be sent from sender to receiver. A new method—called COSMOCAT—is proposed and demonstrated, which removes the need to send a decryption key, since cosmic rays transport it for us; even if messages were intercepted, they could not be read using any theorized approach. COSMOCAT could be useful in a variety of localized applications, as there are limitations to the effective distance between sender and receiver.
In the field of information communication technology, there is a perpetual arms race to find ever more secure ways to transfer data, and ever more sophisticated ways to break them. Even the first modern computers were essentially code-breaking machines used by the U.S. and European Allies during World War II. And this race is about to enter a new regime with the advent of quantum computers, capable of breaking current forms of security with ease. Even security methods which use quantum computers themselves might be susceptible to other quantum attacks.
"Basically, the problem with our current security paradigm is that it relies on encrypted information and keys to decrypt it both being sent along a network from sender to receiver," said Professor Hiroyuki Tanaka from Muographix at the University of Tokyo.
"Regardless of the way messages are encrypted, in theory someone eavesdropping could use the keys to decode the secure messages eventually. Quantum computers just make this process faster. If we dispense with this idea of sharing keys and could instead find some way of using unpredictable random numbers to encrypt information, then it should lead to a system immune to interception. And I happen to work often with a source capable of generating truly random unpredictable numbers: cosmic rays from outer space." Some use cases for COSMOCAT. As the distance is limited due to the nature of the muon shower arriving at the ground, COSMOCAT is best suited for networks within small areas such as buildings. Offices, data centers and buildings that make use of smart devices, and even electric-car charging stations, are some possible application areas. Credit: ©2022 Hiroyuki Tanaka Various random number generators have been tried over time, but the problem is how to share these random numbers while avoiding interception. Cosmic rays may hold the answer, as one of their byproducts, muons, are statistically random in their arrival times at the ground. Muons also travel close to the speed of light and penetrate solid matter easily.
This means that as long as we know the distance between the sender's detector and the receiver's detector, the time required for muons to travel from the sender to the receiver can be precisely calculated. And providing that a pair of devices are sufficiently synchronized, the muons' arrival time could serve as a secret key for both encoding and decoding a packet of data. But this key never has to leave the sender's device, as the receiving machine should automatically have it as well. This would plug the security hole presented by sending shared keys.
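The article does not give implementation details, but the timing idea can be sketched roughly as follows in Python; the detector separation, clock model and key-derivation step are assumptions made for illustration, not the actual COSMOCAT protocol.

import hashlib

MUON_SPEED_M_PER_S = 2.998e8        # muons travel at nearly the speed of light
DETECTOR_SEPARATION_M = 20.0        # assumed sender-receiver distance
FLIGHT_TIME_NS = DETECTOR_SEPARATION_M / MUON_SPEED_M_PER_S * 1e9
TOLERANCE_NS = 5.0                  # assumed timing window for matching the same event

def key_from_arrival(arrival_ns: float) -> bytes:
    # Both parties hash the (bucketed) arrival time; the bucket absorbs small jitter.
    # A real system would need a more careful scheme near bucket boundaries.
    bucket = round(arrival_ns / TOLERANCE_NS)
    return hashlib.sha256(str(bucket).encode()).digest()

# The sender records a muon arrival; the receiver's synchronized detector sees the
# related signal a predictable flight time later, so both derive the same key
# without ever transmitting it over the network.
sender_arrival_ns = 1_000_000.0
receiver_arrival_ns = sender_arrival_ns + FLIGHT_TIME_NS
assert key_from_arrival(sender_arrival_ns) == key_from_arrival(receiver_arrival_ns - FLIGHT_TIME_NS)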
"I call the system Cosmic Coding and Transfer, or COSMOCAT," said Tanaka. "It could be used alongside or in place of current wireless communications technologies such as Wi-Fi, Bluetooth, near-field communication (NFC), and more. And it can exceed speeds possible with current encrypted Bluetooth standards. However, the distance it can be used at is limited; hence, it's ideally kept to small local networks, for example, within a building. I believe COSMOCAT is ready to be adopted by commercial applications."
At present, the muon-detecting apparatus are relatively large and require more power than other local wireless communication components. But as technology improves and the size of this apparatus can be reduced, it might soon be possible to install COSMOCAT in high-security offices, data centers and other local area networks.
The work is published in the journal iScience. More information: Hiroyuki K.M. Tanaka, Cosmic Coding and Transfer (COSMOCAT) for Ultra High Security Near-Field Communications, iScience (2023). DOI: 10.1016/j.isci.2022.105897 | Emerging Technologies
TAKASAKI, April 30 (Reuters) - The European Union is likely to reach a political agreement this year that will pave the way for the world's first major artificial intelligence (AI) law, the bloc's tech regulation chief Margrethe Vestager said on Sunday.
This follows a preliminary deal reached on Thursday by members of the European Parliament to push through the draft of the EU's Artificial Intelligence Act to a vote by a committee of lawmakers on May 11. Parliament will then thrash out the bill's final details with EU member states and the European Commission before it becomes law.
At a press conference after a Group of Seven digital ministers' meeting in Takasaki, Japan, Vestager said the EU AI Act was "pro-innovation" since it seeks to mitigate the risks of societal damage from emerging technologies.
Regulators around the world have been trying to find a balance where governments could develop "guardrails" on emerging artificial intelligence technology without stifling innovation.
"The reason why we have these guardrails for high-risk use cases is that cleaning up … after a misuse by AI would be so much more expensive and damaging than the use case of AI in itself," Vestager said.
While the EU AI Act is expected to be passed by this year, lawyers have said it will take a few years for it to be enforced. But Vestager said businesses could start considering the implication of the new legislation.
"There was no reason to hesitate and to wait for the legislation to be passed to accelerate the necessary discussions to provide the changes in all the systems where AI will have an enormous influence," she said in the interview.
While research on AI has been going on for years, the sudden popularity of generative AI applications such as OpenAI'S ChatGPT and Midjourney have led to a scramble by lawmakers to find ways to regulate any uncontrolled growth.
An organisation backed by Elon Musk and European lawmakers involved in drafting the EU AI Act are among those to have called for world leaders to collaborate to find ways to stop advanced AI from creating disruptions.
Digital ministers of the G7 advanced nations on Sunday also agreed to adopt "risk-based" regulation on AI, among the first steps that could lead to global agreements on how to regulate AI.
"Now when everyone has AI at their fingertips ... there's a need for us to show the political leadership to make sure that one can safely use AI and gain all the amazing possibilities of improvement in productivity and better services," Vestager said in an interview with Reuters.
| Emerging Technologies
Artificial intelligence (AI) is one of the most rapidly developing technologies of our time, with a wide range of applications in various industries. As a result, investing in AI startups has become an increasingly popular trend among investors. However, with so many startups and companies vying for attention, it can be difficult to know where to put your money. In this article, we'll explore some key considerations to help you spot the next big thing in the world of AI.
First and foremost, it's essential to understand the different types of AI. There are three main categories: rule-based systems, which operate based on pre-programmed instructions; machine learning, which enables systems to learn and improve over time; and deep learning, a subset of machine learning that utilizes neural networks to simulate the human brain. Each of these categories has its own set of applications and use cases, and understanding them will help you to identify the most promising companies and startups.
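A toy contrast may help make the first two categories concrete: the rule-based check below follows a fixed, pre-programmed instruction, while the machine-learning version learns its own decision boundary from labelled examples (scikit-learn is used purely for illustration; a deep-learning variant would swap in a neural network).

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Rule-based system: behaviour is fixed by a hand-written instruction.
def rule_based_spam_check(message: str) -> bool:
    return "free money" in message.lower()

# Machine learning: the model infers patterns from labelled examples instead.
train_messages = ["free money now", "win free money", "meeting at noon", "lunch tomorrow?"]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam
vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(train_messages), train_labels)

test = ["claim your free money"]
print(rule_based_spam_check(test[0]), model.predict(vectorizer.transform(test))[0])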
Another important factor to consider is the team behind the company. A strong and experienced team is crucial to the success of any startup, especially in the fast-moving field of AI. Look for companies with a mix of technical and business expertise, as well as a track record of success. This will give you a better sense of the company's ability to execute and bring its products or services to market.
It's also important to evaluate the company's business model and revenue potential. Look for companies that have a clear path to monetization and are generating revenue or have a solid plan to do so. This will give you a better sense of the company's long-term viability and potential for growth.
Finally, keep an eye on the industry trends and emerging technologies. The AI market is constantly evolving, and new technologies are being developed all the time. By staying informed and keeping an eye on the latest developments, you'll be in a better position to identify the companies and startups that are most likely to succeed.
As an investor, it's always important to do your due diligence and research before making any investment decisions. By considering these key factors, you'll be better equipped to spot the next big thing in the world of AI and potentially reap the rewards of your investment.
It's also important to note that you don't have to be a tech expert to invest in AI startups, but it's a good idea to seek the help of experts in the field to help you evaluate the potential of a startup. And it's also worth mentioning that investing in startups is high risk and it's important to be prepared to lose your money but also be open to high returns.
In my own personal experience, I remember being introduced to an AI startup that was focused on automating customer service. I was impressed by the team's experience and the potential of their technology, so I decided to invest. Today, the company is a leader in its field and my investment has grown significantly. It just goes to show that with a bit of research and a keen eye for potential, you too can spot the next big thing in the world of AI. | Emerging Technologies |
Horizon scanning (HS) or horizon scan is a method from futures studies, sometimes regarded as a part of foresight.[1] It is the early detection and assessment of emerging technologies or threats, mainly for policy makers in a domain of choice.[2][3][4] Such domains include agriculture,[5] environmental studies,[6] health care,[7] biosecurity,[2] and food safety.[8] Some sources mention HS as an alternative name for environmental scanning (ES),[9] view HS as a subset of ES,[10] or at least suggest that ES has a similar goal to HS.[11] In summary, however, ES has key differences from HS:[12] ES is chiefly concerned with providing industry-specific information for short-term decision making in a competitive environment.[13][14][15]
Etymology
One of the first usages of the term horizon scanning as related to futures studies appeared in 1995 in a paper discussing trends in information technology and forecasting the year 2005.[16] Then, horizon scanning was used to name detection and early evaluation of health care technologies in a European workshop in September 1997, whose participants were 27 policy makers and researchers from 12 countries.[7] This workshop was organized as a part of the European health technology assessment project (HTA).[7] Policy makers and planners of health services were the main target groups for knowledge produced by horizon scanning.[7]
Definitions of horizon scanning
2002 – Department for Environment, Food and Rural Affairs: Horizon scanning is "the systematic examination of potential threats, opportunities and likely future developments which are at the margins of current thinking and planning" and, continuing, horizon scanning "may explore novel and unexpected issues, as well as persistent problems or trends."[17]
2004 – UK Government's Chief Scientific Advisor's Committee: "Horizon scanning is the systematic examination of potential threats, opportunities and likely future developments including – but not restricted to – those that are at the margins of current thinking and planning. Horizon scanning may explore novel and unexpected issues, as well as persistent problems or trends."[18]
2015 – Report by Fraunhofer Institute for Systems and Innovation Research ISI, Netherlands Organisation for Applied Scientific Research and VTT Technical Research Centre of Finland for the European Commission: "Horizon Scanning is the systematic outlook to detect early signs of potentially important developments. These can be weak (or early) signals, trends, wild cards or other developments, persistent problems, risks and threats, including matters at the margins of current thinking that challenge past assumptions."[19]
2019 – OECD: Horizon scanning is "a technique for detecting early signs of potentially important developments through a systematic examination of potential threats and opportunities, with emphasis on new technology and its effects on the issue at hand."[20]
Phases and techniques
A 2013 systematic study of 23 formally established health technology HS programs from different countries identified the following common phases in a horizon scanning process:[21]
Identify the users of the HS products.
Estimate the time available for the HS effort.
Conduct HS, and identify emerging technologies that potentially affect the targeted domain.
Filter the identified technologies by applying criteria for determining the relevance of the technologies to the HS effort.
Prioritize the technologies that have passed through the filtering process by applying criteria based on stakeholders’ requirements and needs.
Assess technologies of high priority for the stakeholders, and predict their potential impacts on the targeted domain.
Use peer review to check for quality of the HS process and products.
Disseminate the HS products to the relevant audiences in a timely fashion.
Update the HS products on a regular basis or when a significant development occurs related to the technology.
Horizon scanning includes the following techniques:[6][21]
Technique: Example
Interviews: Environmental Research Funders Forum Horizon Scanning Study[22]
Issue tree: Foresight project on Brain Science, Addiction and Drugs[23]
Literature searches and state-of-science reviews: Medical Technology Horizon Scanning[24]
Expert workshops: Horizon scan of conservation issues in the UK;[25] assessment of 100 ecological questions of highest priority to global conservation[26]
Open fora: Future Wiki[27]
Delphi questionnaire: 50 key issues for the future of Mediterranean wetlands[28]
Trend analysis: HSTOOL – semiautomatic discovery of scientific trends from clusters of publications[29]
Scenarios:[30] Wildlife Conservation Societies' Futures of the Wild[31]
Systems/Maps: Foresight project on Tackling Obesities: Future Choices[32]
Backcasting
Governmental bodies
UK
In order to centralize horizon scanning, the UK founded the English Horizon Scanning Centre (HSC) in 2005.[33] The Cabinet Office's Horizon Scanning Secretariat and the Government Office for Science's Horizon Scanning Centre were combined into the Horizon Scanning Programme team in 2014.[34]
Germany
The Umweltbundesamt has applied horizon scanning, along with trend analysis, since 2012.[35]
Sweden
In 2019, the Swedish Defence Research Agency developed a software tool named HSTOOL for HS of scientific literature.[36] The scientific literature is searched, clustered into groups that correspond to subject subfields, and evaluated based on bibliometric numbers. The clustering is performed with a Gibbs sampling Dirichlet multinomial mixture model algorithm, and citation statistics are derived from Thomson Reuters' Web of Science.
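As a rough illustration of that pipeline (and not the HSTOOL implementation itself), the Python sketch below groups a few invented abstracts into subject clusters and reports the cluster sizes; scikit-learn's k-means is used here only as a stand-in for the Gibbs sampling Dirichlet multinomial mixture model named above.

from collections import Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented abstracts standing in for a harvested corpus of publications.
abstracts = [
    "quantum key distribution over fibre networks",
    "entanglement-based quantum communication protocols",
    "deep learning for autonomous vehicle perception",
    "reinforcement learning for self-driving cars",
]

X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Each cluster approximates a subject subfield; growth in cluster size over time,
# combined with citation statistics, would then indicate an emerging trend.
print(Counter(labels.tolist()))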
European Union
The European Commission developed the Transport Research and Innovation Monitoring and Information System (TRIMIS) in 2017, an open-access transport information system supporting the implementation of the seven Strategic Transport Research and Innovation Agenda (STRIA) roadmaps.[37] In 2021, a horizon scanning module was added to TRIMIS.[38] This horizon scanning framework, developed by the Joint Research Centre within TRIMIS, uses news media, scientific publication sources, patent data sources, EU funding datasets and other sources as the basis for text mining.
The Joint Research Centre's "Tool for Innovation Monitoring" augments horizon scanning with text mining of the available literature.[39] The tool was developed in 2020; its data sources include Scopus, PATSTAT and Cordis.
USA
In 2010, the Agency for Healthcare Research and Quality (AHRQ) established the first publicly funded Healthcare Horizon Scanning program in the US.[40]
Russia
In the Russian Federation, horizon scanning is performed by the Higher School of Economics and financed by the Ministry of Education and Science.[41] In 2012, Putin stated that "[a] Foresight exercise for Russia’s science and technology towards 2030 is due to be completed. It highlights specific ways to both revitalize traditional sectors and penetrate into new high-tech markets…". The Russian horizon scanning team consisted of 15–20 members and conducted an online survey of 2,000 experts.
See also
Futurology
Risk analysis
Scientific lacuna
Technology assessment
Technology scouting
William J. Sutherland
References
[1] Cuhls, Kerstin E. (2020). "Horizon Scanning in Foresight – Why Horizon Scanning is only a part of the game". Futures & Foresight Science. 2 (1): e23. doi:10.1002/ffo2.23.
[2] "Continuity Central". www.continuitycentral.com. Retrieved 25 February 2021.
[3] Sutherland, William J.; Aveling, Rosalind; Brooks, Thomas M.; et al. (January 2014). "A horizon scan of global conservation issues for 2014". Trends in Ecology & Evolution. 29 (1): 15–22. doi:10.1016/j.tree.2013.11.004.
[4] Smith, J.; Ward, D.; Michaelides, M.; et al. (September 2015). "New and emerging technologies for the treatment of inherited retinal diseases: a horizon scanning review". Eye. 29 (9): 1131–1140. doi:10.1038/eye.2015.115.
[5] Text Mining for Horizon Scanning: An Insight Into Agricultural Research and Innovation in Africa. Publications Office of the European Union. 2020. ISBN 978-92-76-21446-5.
[6] Sutherland, William J.; Woodroof, Harry J. (1 October 2009). "The need for environmental horizon scanning". Trends in Ecology & Evolution. 24 (10): 523–527. doi:10.1016/j.tree.2009.04.008.
[7] Carlsson, P.; Jørgensen, T. (1998). "Scanning the horizon for emerging health technologies. Conclusions from a European Workshop". International Journal of Technology Assessment in Health Care. 14 (4): 695–704. doi:10.1017/s0266462300012010.
[8] "Horizon Scanning and Foresight: An overview of approaches and possible applications in Food Safety" (PDF). FAO. Retrieved 25 February 2021.
[9] Schultz, Wendy L. (1 January 2006). "The cultural contradictions of managing change: using horizon scanning in an evidence-based policy context". Foresight. 8 (4): 3–12. doi:10.1108/14636680610681996.
[10] Miles, Ian; Saritas, Ozcan (2 November 2012). "The depth of the horizon: searching, scanning and widening horizons". Foresight. 14 (6): 530–545. doi:10.1108/14636681211284953.
[11] van Rij, Victor (1 February 2010). "Joint horizon scanning: identifying common strategic choices and questions for knowledge". Science and Public Policy. 37 (1): 7–18. doi:10.3152/030234210X484801.
[12] Rowe, Emily; Wright, George; Derbyshire, James (1 December 2017). "Enhancing horizon scanning by utilizing pre-developed scenarios: Analysis of current practice and specification of a process improvement to aid the identification of important 'weak signals'". Technological Forecasting and Social Change. 125: 224–235. doi:10.1016/j.techfore.2017.08.001.
[13] Choo, Chun Wei (2002). Information Management for the Intelligent Organization: The Art of Scanning the Environment. Information Today, Inc. ISBN 978-1-57387-125-9.
[14] Miles, Ian; Saritas, Ozcan (2 November 2012). "The depth of the horizon: searching, scanning and widening horizons". Foresight. 14 (6): 530–545. doi:10.1108/14636681211284953.
[15] Ramírez, Rafael; Selsky, John W. (February 2016). "Strategic Planning in Turbulent Environments: A Social Ecology Approach to Scenarios". Long Range Planning. 49 (1): 90–102. doi:10.1016/j.lrp.2014.09.002.
[16] Gates, William H. (1 January 1995). "Horizon Scanning: Opportunities Technology Will Bring by 2005". Journal of Business Strategy. 16 (1): 19–21. doi:10.1108/eb039676.
[17] Könnölä, Totti; Salo, Ahti; Cagnin, Cristiano; et al. (1 March 2012). "Facing the future: Scanning, synthesizing and sense-making in horizon scanning". Science and Public Policy. 39 (2): 222–231. doi:10.1093/scipol/scs021.
[18] Palomino, Marco A.; Bardsley, Sarah; Bown, Kevin; et al. (24 August 2012). "Web-based horizon scanning: concepts and practice". Foresight. 14 (5): 355–373. doi:10.1108/14636681211269851.
[19] "Models of Horizon Scanning: How to integrate Horizon Scanning into European Research and Innovation Policies" (PDF). Fraunhofer ISI. Retrieved 26 February 2021.
[20] National Academies of Sciences, Engineering, and Medicine (2020). Horizon Scanning and Foresight Methods. National Academies Press (US). Retrieved 22 June 2021.
[21] Sun, F.; Schoelles, K. (2013). AHRQ Health Care Horizon Scanning System: A Systematic Review of Methods for Health Care Technology Horizon Scanning (PDF).
[22] "An Environment Research Funders' Forum Report: Horizon Scanning Study". ERA Visions. 2007. Retrieved 7 March 2021.
[23] "Brain science, addiction and drugs". GOV.UK. 2005. Retrieved 7 March 2021.
[24] Brown, I.; Smale, A.; Verma, A.; Momandwall, S. (December 2004). "Medical Technology Horizon Scanning". Australasian Physical & Engineering Sciences in Medicine. Retrieved 3 June 2021.
[25] Sutherland, William J.; Bailey, Mark J.; Bainbridge, Ian P.; et al. (2008). "Future novel threats and opportunities facing UK biodiversity identified by horizon scanning". Journal of Applied Ecology. 45 (3): 821–833. doi:10.1111/j.1365-2664.2008.01474.x.
[26] Sutherland, William J.; Armstrong-Brown, Susan; Armsworth, Paul R.; et al. (2006). "The identification of 100 ecological questions of high policy relevance in the UK". Journal of Applied Ecology. 43 (4): 617–627. doi:10.1111/j.1365-2664.2006.01188.x.
[27] "Futura Wikia". Retrieved 7 March 2021.
[28] Taylor, Nigel G.; Grillas, Patrick; Al Hreisha, Hazem; et al. (2021). "The future for Mediterranean wetlands: 50 key issues and 50 important conservation research questions". Regional Environmental Change. 21 (2): 33. doi:10.1007/s10113-020-01743-1.
[29] Karasalo, Maja; Schubert, Johan (September 2019). "Developing Horizon Scanning Methods for the Discovery of Scientific Trends". 2019 International Conference on Document Analysis and Recognition (ICDAR): 1055–1062. doi:10.1109/ICDAR.2019.00172.
[30] Palomino, Marco A.; Bardsley, Sarah; Bown, Kevin; et al. (1 January 2012). "Web-based horizon scanning: concepts and practice". Foresight. 14 (5): 355–373. doi:10.1108/14636681211269851.
[31] Futures of the Wild: A Project of the Wildlife Conservation Society Futures Group. Wildlife Conservation Society Futures Group and Bio-Era. 2007.
[32] "Reducing obesity: future choices". GOV.UK. 2007. Retrieved 7 March 2021.
[33] Miles, Ian; Saritas, Ozcan (2 November 2012). "The depth of the horizon: searching, scanning and widening horizons". Foresight: The Journal of Future Studies, Strategic Thinking and Policy. 14 (6): 530–545. doi:10.1108/14636681211284953.
[34] "Horizon Scanning Programme team". GOV.UK. Retrieved 26 February 2021.
[35] Lehmphul, Karin (24 May 2016). "Horizon Scanning / Trendanalyse". Umweltbundesamt (in German).
[36] "HSTOOL for Horizon Scanning of Scientific Literature". www.foi.se. Retrieved 20 May 2021.
[37] "JRC Publications Repository". publications.jrc.ec.europa.eu. Retrieved 23 January 2022.
[38] Tsakalidis, Anastasios; Boelman, Elisa; Marmier, Alain; et al. (1 September 2021). "Horizon scanning for transport research and innovation governance: A European perspective". Transportation Research Interdisciplinary Perspectives. 11: 100424. doi:10.1016/j.trip.2021.100424.
[39] Text mining for horizon scanning: an insight into agricultural research and innovation in Africa (PDF). Luxembourg. 2020. ISBN 978-92-76-21446-5.
[40] "methodology_emerging-innovations_US_health-care" (PDF). www.ispor.org. Retrieved 26 February 2021.
[41] Cuhls, Kerstin E. (2020). "Horizon Scanning in Foresight – Why Horizon Scanning is only a part of the game". Futures & Foresight Science. 2 (1): e23. doi:10.1002/ffo2.23. | Emerging Technologies
[Photo caption: MIAMI, FL - APRIL 27: Chipotle restaurant workers fill orders for customers. (Photo by Joe Raedle/Getty Images)]
During Chipotle’s Q2 earnings call Tuesday afternoon, there was a lot of discussion around “throughput” and the company’s efforts to improve it. Why that’s important is simple: Chipotle experienced a same-store sales increase of 10.1% in the quarter and has largely remained insulated from the current inflationary pressures hitting consumers’ wallets. But there remains plenty of room for improvement, particularly if Chipotle can serve even more meals to more customers throughout the day. That means speeding up the in-restaurant makeline and the second, digital makeline. Throughput.
This process, however, is easier said than done in an industry that has struggled to find employment. That said, Scott Boatwright, chief restaurant officer, has a game plan.
“We just launched Project Square One, and it’s literally just that. Let’s get back to square one on how we deliver great fundamentals of great throughput,” Boatwright said during a phone interview Tuesday evening. “The nuances of great throughput include teaching team members on the line how to deliver a great experience and keep moving, to listen out of both ears, hand items down politely to the next team member. The little things add up during a peak volume window and make us so much more efficient.”
Chipotle was close to achieving optimum throughput in 2019 after Boatwright and team introduced a training program specifically focused on the basics of operations. That training included defining necessary positions to execute orders efficiently–positions like expediters, which can move items down the line up to 20-to-30% faster.
In 2019, however, digital sales only made up about 20% of Chipotle’s mix. Now, the company remains well above 35% on digital sales, even as its in-restaurant sales return closer to pre-pandemic levels. In-restaurant sales increased 36% on the quarter.
This has essentially created two separate multibillion-dollar businesses within the company, which has become somewhat of a challenge as team members spent the past year and a half mostly focused on only digital.
“What’s transpired, when we lost in-restaurant business during Covid and moved to digital, that stuff like throughput wasn’t important anymore. After two years, we have new team members and new managers in the business who don’t recall what great throughput down the line was like or how to drive it,” Boatwright said. “As our in-restaurant recovery began to happen about eight or nine months ago, it became apparent to me that we just weren’t there.”
The need to be “there” has become even more critical as Chipotle looks to more than double its footprint, with most new units including a mobile-order-ahead Chipotlane, and as the chain aspires to reach $3 million in average unit volumes, from the current $2.8 million. In addition to launching Project Square One, Chipotle has also put several other pieces into place to maximize operational efficiencies. Field leaders occasionally work “shoulder to shoulder” with team members during peak hours, for instance.
Chipotle has also implemented a time management and labor delivery tool to ensure staffing is maximized at the right time. The tool's scheduling capabilities are facilitated by machine learning, meaning it factors in considerations such as promotional events and weather. The company is also installing a new point-of-sale system to streamline the ordering process for team members, and a new pin pad system to offer customers a faster, contactless payment option.
"All of these things are more efficient and easier for team members and for customers and they save some time on the order," Boatwright explained.
Of course, there's also the idea of automation, which Chipotle has embraced with gusto, to save on time and labor. In May, the chain announced it was testing a robot named Chippy to help make tortilla chips. And, just last week, Chipotle announced an investment in Hyphen, a foodservice platform that automates kitchen operations. Boatwright said Hyphen has the potential to make digital orders automatically, while Chippy removes mundane tasks from team members' workloads.
"If you ideate to some future state, you can foreseeably see digital orders come into our ecosystem and Hyphen will recognize and prepare a bowl in real time. This will reduce labor on the line, create better accuracy and better portioning and, overall, a more efficient process," he said. "We think it's a big idea."
It's also a different position from the one some of Chipotle's peers are taking. During McDonald's Q2 earnings call Tuesday, for instance, CEO Chris Kempczinski said automation won't be a "silver bullet" and the idea of robots is not practical for the majority of its restaurants. Conversely, Chipotle is all in on finding emerging tech to roll into its operations. The company launched a $50 million "Cultivate Next" fund in the spring to provide investments in companies that align with Chipotle's mission, and Hyphen is a part of that fund. Operational efficiency in general is a priority.
According to Boatwright, Chipotle is well positioned to consider emerging technologies, perhaps more so than its peers. “I think a lot of peers are entrenched and saturated and that has caused them not to think about innovation in the right way. I also think we have an advantage because we’re company-owned and we don’t have a franchise community that may be scared of the unknown,” he said. “We’re at 3,000 restaurants and headed toward 7,000 and we have a big opportunity to really build the Chipotle of the future. We’re not looking for solutions and trying to apply it to a problem, we’re looking at problems we are trying to solve.” | Emerging Technologies |
IBM Ties Up With Government On Semiconductor, AI, Quantum Computing
MoUs have been signed with MEITY with focus on skill development, accelerating R&D efforts in semiconductors, AI and quantum.
U.S. tech giant IBM Inc. has tied up with the Ministry of Electronics and Information Technology to advance and accelerate innovation in AI, semiconductor and quantum technology.
IBM will support the 'FutureSkills' program with the National Institute of Electronics and Information Technology and MEITY, and partner with 'futureDESIGN' startups in quantum and AI, it said in a statement.
Three memoranda of understanding have been signed with three entities engaged with MEITY—IndiaAI, India Semiconductor Mission and Centre for Development of Advanced Computing—with a focus on skill development, engaging the ecosystems and accelerating R&D efforts in semiconductors, AI, and quantum, it said.
"This body of work will aim to accelerate India’s comprehensive national strategy for AI, strengthen efforts to be self-reliant in semiconductors and advance its National Quantum Mission," the statement said.
"These are technologies that will shape the future of tech, represents tremendous opportunities for academic, startup and innovation ecosystem, as also the broader opportunity of creating global standard talent," Minister of State for Electronics and IT Rajeev Chandrasekhar said at a briefing, reported PTI.
According to IBM, the tech major's collaboration with the government spans across three levels:
IBM And IndiaAI
IBM and Digital India Corp. intend to collaborate to establish a world-class national AI Innovation Platform (AIIP) for India that will focus on AI skilling, ecosystem development, and integrating advanced foundation models and generative AI capabilities to support India's scientific, commercial, and human-capital development in this technology.
AIIP will serve as an accelerator for incubation and competency development in AI technologies and their applications for use cases of national importance. AIIP would have access to relevant capabilities of IBM’s Watsonx platform, including the ability to use models in language, code and geospatial science with the intent to train models for other domains as needed.
IBM And ISM
IBM would be a knowledge partner of ISM for a semiconductor research center. IBM may share its experience with ISM on intellectual property, tools, initiatives, and skills development, aimed at promoting innovation in semiconductor technologies such as logic, advanced packaging and heterogeneous integration, and advanced chip design technologies, using modernized infrastructure.
IBM And C-DAC
The two entities will also explore opportunities for working together to support the advancement of India’s National Quantum Mission by building competency in quantum computing technology, applications in areas of national interest, and a skilled quantum workforce. Activities would broadly focus on: workforce enablement; development of industries and startups; R&D; and quantum services and infrastructure.
In September, IBM partnered with the Ministry of Education and the Ministry of Skill Development and Entrepreneurship to provide curated courses to empower youth in India with future-ready skills.
The collaboration will focus on the co-creation of curriculum and access to IBM's learning platform, IBM SkillsBuild, for skilling learners across school education, higher education and vocational skills on emerging technologies like AI, including generative AI, cybersecurity, cloud computing and professional development skills. | Emerging Technologies |
Digital technologies to better predict future city environments may be key tool in sustainable urban design
Natural disasters like floods and heat waves demonstrate the real lack of control people have over the environment—although some of those disasters may actually be a consequence of human decisions and carelessness.
An increase in the frequency and severity of natural disasters has shone a spotlight on the urgent need for greater urban sustainability and "digital twins" technology is taking a leading role in tackling this challenge.
Defined as computer models of physical processes or replicas of physical entities, a digital twin is essentially a realistic and accurate virtual model.
Benefits of a digital twin
Digital twins offer promise as important tools for urban sustainability because they allow researchers to recreate a specific city environment and replicate the factors or processes that affect it, like traffic or emissions.
Digital twins can also be coupled with sensors in the environment, providing real-time data for detailed monitoring.
Researchers can then use AI to learn about those processes and how they affect the environment, predict future conditions and impacts, and so enable sustainable decision-making.
Our critical examination of digital twins and their potential in the world of urban sustainability shows these recent technological developments have proven financial and sustainability benefits for public and private organizations.
We showed that digital twins can make resource allocation more efficient by monitoring the real-time dynamic data of physical assets and then examining their performance in different virtual environment scenarios.
For example, by measuring and simulating the stormwater capacity of new road networks, waste and loss could be reduced by integrating historical and real-time sensor data and using that combined data to create a water-sensitive urban design.
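As a rough illustration of that kind of "what if" check (the figures, segment names and capacity model below are entirely invented, and real urban digital twins rely on calibrated hydrological models rather than a single threshold), a design scenario might combine historical rainfall records with a live gauge reading like this:

```python
# Toy "what if" check for a water-sensitive road design. All numbers are invented;
# a real urban digital twin would use calibrated hydrological models and live sensor feeds.

# Historical design storms (mm of rain per hour) and a live reading from a rain gauge.
historical_peak_rainfall = [32, 41, 55, 48, 60]   # worst hourly rates from past storm events
live_gauge_reading = 38                            # current rainfall rate streamed from a sensor

# Drainage capacity of each proposed road segment, expressed as the rainfall rate it can absorb.
segment_capacity = {"segment_a": 45, "segment_b": 58, "segment_c": 70}

# Scenario: design for the worst storm on record, or 20% above the current live rate,
# whichever is higher.
design_rate = max(max(historical_peak_rainfall), live_gauge_reading * 1.2)

for segment, capacity in segment_capacity.items():
    status = "OK" if capacity >= design_rate else "undersized, redesign before construction"
    print(f"{segment}: capacity {capacity} mm/h vs design storm {design_rate:.0f} mm/h -> {status}")
```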
Current barriers to digital engineering
But despite urban digital twins (UDT) leading the way in tackling technological, ethical and socio-technical issues, there are barriers to their application. So how can this technology be harnessed to support urban sustainability?
The success of UDT technology depends on timely and two-way communication between the physical and digital environments—without any compromises.
The first factor we have identified is a lack of digital literacy among many decision-makers, which results in less appreciation of digital technologies and therefore little contribution to their advancement in both research and financial resources.
It follows that the more technological readiness we can achieve, the greater the possibility of adopting digital technologies in organizations and day-to-day activities.
And lastly, standards and shared data models are needed so that important data doesn't remain in silos.
According to professional associations including the Institute of Surveying and Spatial Sciences (SSSI) in Australia, Standards Australia, Engineers Australia and the Planning Institute of Australia (PIA), standardization plays a vital role in developing a common language, process and data models across stakeholders and jurisdictions.
The Principles for Spatially Enabled Digital Twins for Built and Natural Environment, developed by the Australia New Zealand Spatial Information Council, highlight the role of standards in managing information and data, UDT interoperability, privacy and security.
Trusting AI
An issue for many industries is that algorithmic decisions may be questioned and doubted due to accountability and transparency concerns.
Our recent research, published in Nature Sustainability, highlighted the vital role of explainable AI (XAI), or AI capable of explaining its results, in improving trust and transparency of AI-based decisions.
XAI addresses the challenge created by the "black box" concept where even the AI developers cannot explicitly explain why AI arrived at a specific result or decision.
Finally, current digital technologies only measure the objective aspects of urban entities and focus on the physical aspects of the city, like building height, tree canopy, type of land use and density, three-dimensional buildings, urban redevelopment visualization and building energy assessment.
However, cities are a combination of objective (physical and functional) and subjective (social construction and place experience) characteristics.
While some research has demonstrated new capabilities for measuring place quality, equitable access to facilities and the sociability of urban spaces, system-wide simulations and practical applications are still deficient and should be a key focus of future research to prevent ill-informed decisions and strategies based on inaccurate models.
Combining expertise to create digital cities
Because the applications of digital cities are so far-reaching, so is the expertise behind them.
By merging IT and engineering professionals with policymakers, end-users and planning and building experts, we can better leverage the value of digital technologies, address future challenges and return the current investments to the community.
Australian state governments have already started to draw on digital twin capabilities to better service the community. The NSW Spatial Digital Twin facilitated a cross-organizational, collaborative digital workflow for the entire state. It aggregates and visualizes location information in a dynamic and multi-dimensional model of the real world.
The Victorian Government secured $37.4 million to develop the Digital Twin Victoria platform to collate a mass of 2D, 3D and live data in a single online platform.
This project was motivated by the Government's pilot digital twin project in Fishermans Bend, carried out in collaboration with the University of Melbourne and other stakeholders.
We know that a digital twin should be more than a replica: it should be coupled with physical processes or entities into a cyber-physical-social system.
Such a system may function more like a brain than a twin—with nerves that sense, with an agency that can change the physical or the digital system, and with moderation mechanisms to preserve the equilibrium of the physical and digital system.
We have started to upskill, create awareness among professionals, managers and executives, and educate the future workforce about digital twin technology. Our new education programs, like the Master of Digital Infrastructure Engineering and the Graduate Certificate of Digital Engineering (Infrastructure), address the technological, ethical and socio-technical challenges.
We have also collaborated with the industry to identify future digital engineering demand for the Australian and global infrastructure sector, which is booming and progressively adopting digital tools like Building Information Modeling (BIM), the Internet of Things (IoT) and virtual reality.
Leveraging our research and development of emerging technologies, in addition to these education programs, creates a new capability for future skill sets to integrate digital data with statistics, machine learning and data simulations.
The aim is simple—to engage better with communities and communicate physical and social processes, patterns and predictions in the design of sustainable future cities.
More information: Asaf Tzachor et al, Potential and limitations of digital twins to achieve the Sustainable Development Goals, Nature Sustainability (2022). DOI: 10.1038/s41893-022-00923-7
Journal information: Nature Sustainability
Provided by University of Melbourne | Emerging Technologies |
Google has announced a new Workspace feature that allows individuals to skip meetings without missing out on key discussions or action points.
The ‘take notes for me’ function is powered by the company’s Duet AI tool and will now automatically generate call transcripts, with the tool recapping key talking points within a discussion and highlighting “action items” outlined during a call.
Duet AI will also generate real-time video snippets during a call as part of the feature, Google said, enabling users to go back and refer to specific parts of a meeting.
Key features highlighted by Google include the “summary so far” and “attend for me” functions. The former of these will give latecomers to a meeting a “snapshot of everything they’ve missed”, Google said.
Meanwhile, if a user can’t attend a meeting, they can direct Duet AI to join the meeting on their behalf and provide a recap after the meeting.
The tech giant revealed that Duet AI for Workspace will be made generally available to customers starting today, with users able to experience a “no-cost trial” of the new features.
Google Meet, Chat, and Gmail will all be bolstered with new AI-driven capabilities helping to boost collaboration and streamline the cumbersome process of meetings.
“To help you better engage during meetings, we're removing the burden of note-taking and sending out recaps,” the firm said in a statement.
“Duet AI can capture notes, action items, and video snippets in real time with the new ‘take notes for me’ feature and it will send a summary to attendees after the meeting.”
Duet AI for Google Chat
Google Meet isn’t the only aspect of Workspace subject to Duet AI integration, the firm revealed.
Chat’s user interface is set for a refresh, along with the introduction of new shortcuts and an enhanced search function powered by Duet that will “let you stay on top of conversations”.
Using natural language input, Duet AI will enable users to ask specific questions about chat-related content and topics, provide automatic summaries of documents shared in a space, and even catch up on conversations with automated transcripts.
“With Duet AI in Chat as a real-time collaboration partner, you can get updates, insights, and proactive suggestions across your Google Workspace apps,” Google said.
“We plan for Duet AI to answer complex queries by searching across your messages and files in Gmail and Drive, summarize documents shared in a space, and provide a recap of missed conversations.”
AI collaboration tools
The integration of Duet AI in Workspace follows a spate of similar announcements from industry counterparts. Zoom recently announced two generative AI feature launches for its platform, including the Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose functions.
Both of these provide users with automated meeting summaries and an AI-powered chat companion.
With Duet, however, Google appears to be directly butting heads with Microsoft. In May, Microsoft unveiled the integration of its own AI-powered Copilot assistant for Teams customers.
Not much separates Teams and Meet in terms of AI-related features following this announcement though. Both platforms now support AI-generated meeting summaries, “action point” highlights, and call recaps.
Meet’s real-time video snippets do represent a unique capability in this regard. However, the integration of Duet AI in Workspace and Microsoft’s Copilot across 365 products places both firms in a head-on battle over AI-supported collaboration tools moving forward.
| Emerging Technologies
This month, the White House Office of Science and Technology Policy (OSTP) issued a “Blueprint for an AI Bill of Rights.” The document maps out areas in which artificial intelligence (AI) might be a threat to our existing rights and establishes a set of rights that individuals should expect to have as they use these emerging technologies.
The OSTP blueprint sends two messages. First, it acknowledges that AI is affecting — and likely will transform — everything: changing medical practices, businesses, how we buy products and how we interact with each other. Second, it highlights the fact that these technologies, while transformational, can also be harmful to people at an individual, group and societal scale, with potential to extend and amplify discriminatory practices, violations of privacy or systems that are neither safe nor effective.
The document establishes principles for our rights in the digital world. The next step is to determine how to operationalize these principles in practice. While there are many ways to think about this translation of principle into practice, it is helpful to ask one simple question: “Is it safe?” — and, if the answer is unknown, do the work to create a science of safety to provide it.
It’s useful to look to other technologies and products that changed our world. Electricity, automobiles and telecommunication all radically changed the way people work, live, interact with each other and do business. Each had — and still has — issues with the way they affect us, but for all of them, we ask a single, pointed question: Is it safe?
Electricity gives us light, power and warmth, but if the cost is fires that burn down our homes, its benefits are meaningless. Cars are synonymous with freedom, but what good is freedom if our ongoing driving destroys our planet? And the world of communication links us together, but not if the content that flows is misleading, hateful or damaging to our children.
For all these technologies, the question of safety is a constant. We always want these technologies to be useful, and we want them to be safe. To make them safe, we need to determine the conditions under which they would be damaging and put practices in place that prevent, mitigate, or resolve their potential harms. As we consider the future of AI, we need to make the same demand and uncover the decisions and conditions that lead to harmful effects. We need to determine how to make intelligent technologies useful and safe.
Safe in that medical diagnostic systems must be trained with inclusive example sets so that individuals and groups are not excluded and undertreated. Safe in that systems that help us make decisions need to be designed to help us — rather than manipulate us to take less so that someone else can take more. Safe in that systems evaluating someone’s credit, skills for a job or fit for college have not been trained on historical data in which these decisions were biased and thus make the future as sexist, racist and tribal as the past. Safe in that the people using systems that might make mistakes understand that they cannot arrest someone simply because the machine said so.
Unfortunately, it is sometimes difficult to argue with developers and the businesses that employ them that they need to change their behavior because they are treading on someone's rights. The idea makes sense, but they struggle to understand the actions they need to take. It may be more powerful to flip the conversation away from rights and toward responsibility: the responsibility to develop and deploy systems that are useful and safe.
If we are to uphold the rights outlined in the OSTP's report, we need to develop a genuine science of safety for AI that we can translate into best practices for the people who are building it. What specifically must we do to design and build systems with individual and societal safety at the center of that design? What new disciplines and sciences need to be established to equip us for the AI future? What mechanisms and tools are required to evaluate and monitor for safety? How do we develop and establish remediation approaches?
Establishing a safety ecosystem for AI requires more than policy, more than technological advances, more than good intentions. It isn't a one-off solution or an example of a single system that causes no unforeseen harm. It requires a spectrum of interdisciplinary, multipronged, iterative sociotechnical contributions coming together toward the responsibility of safety.
Kristian Hammond is the Bill and Cathy Osborn Professor of Computer Science at Northwestern University. An AI pioneer and entrepreneur, Hammond is cofounder of Narrative Science, a startup that uses AI and journalism to turn information from raw data into natural language. | Emerging Technologies
America is continually a work in progress, forever being reimagined by bold ideas, whether they arise from the public or private sector, or from pioneering inventors, entrepreneurs and corporations. The pandemic accelerated the "Great Reinvention," forcing Americans, policymakers and businesses to re-evaluate values, conventional wisdom, and business models. A More Perfect Union 2022, The Hill's second annual multi-day tentpole festival, explores and celebrates America's best big ideas through the lens of American Reinvention. We will convene political leaders, entrepreneurs, policy innovators and disruptors, and thought provocateurs to debate and discuss some of the most urgent, challenging issues of our time.
Wednesday, December 7th – Emerging Technologies: All industries are ripe for disruption and technological advances often prompt those changes. AI, machine learning, robotic automation, VR/AR, blockchain, the internet of things are all innovative and evolving technology trends constantly changing the face of business. How did the pandemic speed up digital transformation and innovation? How are businesses keeping up with changing tech trends?
Thursday, December 8th – Reinventing the American Economy: Small Business and E-Commerce: How are record inflation, supply chain bottlenecks, and labor shortages contributing to the changes in businesses? How are innovative companies disrupting the way businesses are organized? During the pandemic many small businesses had to pivot quickly and find new ways to reach their customers through e-commerce platforms. E-commerce sales grew 50 percent during COVID-19, so what is the future of digital retail? How can technology encourage business growth? And who are the future disruptors of digital commerce?
Friday, December 9th – Consensus Builders: A recent Pew analysis finds that, on average, Democrats and Republicans are farther apart ideologically today than at any time in the past 50 years. Extreme polarization creates a kind of legislative catch-22: zero-sum politics means we can't get bipartisan majorities to change our institutions, while the current institutions intensify zero-sum competition between the parties. Post-midterms, where do we find "the missing middle"?
FEATURING
Wednesday, December 7th: Emerging Technologies
Andrei Papancea, CEO & Chief Product Officer, NLX
Rina Shah, Geopolitical Strategist, Investor, & 6x Entrepreneur
Emily Landon, CEO, The Crypto Recruiters
Thursday, December 8th: Reinventing the American Economy: Small Business and E-Commerce
Robert Doar, President, American Enterprise Institute
Karen Kerrigan, President & CEO, Small Business & Entrepreneurship Council
Emily Glassberg Sands, Head of Information, Stripe
Friday, December 9th: Consensus Builders
Ryan Clancy, Chief Strategist, No Labels
David Eisner, President & CEO, Convergence Center for Policy Resolution
David Jolly, Former Member of Congress, Political Analyst
Christine Todd Whitman, Co-Chair, Forward Party; Former Governor of New Jersey
Andrew Yang, Co-chair, Forward Party; Founder, Humanity Forward
SPONSOR PERSPECTIVE
Paige Magness, Senior Vice President, Regulatory Affairs, Altria
MODERATORS
Bob Cusack, Editor-In-Chief, The Hill
Steve Scully, Contributing Editor, The Hill | Emerging Technologies
In March, an Italian privacy regulator temporarily banned OpenAI's ChatGPT, worried that the text generator had no age-verification controls or "legal basis" for gathering online user data to train the AI tool's algorithms. The regulator gave OpenAI until April 30 to fix these issues, and last Friday, OpenAI announced it had implemented many of the requested changes ahead of schedule. In a statement to the Associated Press, OpenAI confirmed Italy lifted the ban.
"ChatGPT is available again to our users in Italy," OpenAI's statement said. "We are excited to welcome them back, and we remain dedicated to protecting their privacy.”
OpenAI made several concessions to the Italian Data Protection Authority to bring ChatGPT back to Italy, The Wall Street Journal reported.
First, OpenAI agreed to better inform users about how ChatGPT processes their data and to create an online form so that users can opt out and remove their data from ChatGPT's training algorithms. Then, OpenAI agreed to require Italian users to provide their birth date at sign-up, which will assist OpenAI's effort to identify and block ChatGPT users under 13 years old or request parental permissions for users under 18.
But just because the ban is lifted, that doesn't mean the Italian regulator's investigation is over. OpenAI is still expected to continue working on implementing the rest of its demands—including launching a publicity campaign to inform ChatGPT users how the tool really works and explain how to opt out of data sharing, the WSJ reported. Ars could not immediately reach OpenAI to comment, but an OpenAI spokesperson told the AP that the company would "look forward to ongoing constructive discussions" until Italy's investigation concludes.
Italy's temporary ban became one of the first nationwide efforts to restrict access to ChatGPT, arriving just after the tool became the fastest-growing app of all time. In just two months, ChatGPT attracted 100 million monthly active users—surpassing apps like TikTok, which took nine months to reach that mass of adoption, and Instagram, which took 2.5 years, Time reported. Since its release, ChatGPT has evolved and introduced more user protections as concerns were flagged after its first major data leak. Now, ChatGPT allows users to disable chat history, decline training, and export data. However, despite OpenAI's seeming responsiveness to reported issues, ChatGPT's sudden arrival has left lawmakers scrambling to adjust laws in the face of the app's widespread adoption.
Governments debate how to regulate AI
Likely motivated to continue attracting more users worldwide, OpenAI moved quickly to appease the Italian Data Protection Authority after the regulator showed how suddenly ChatGPT could be restricted. Italy heightened the stakes by escalating its response to perceived AI risks, and some generative AI critics have urged governments globally to step in swiftly, as Italy did, and even "pause" AI development completely until regulators can catch up. These critics claim that companies are in a race to capture the AI market and can't be depended upon to self-regulate and mitigate known risks amid stiff competition.
The case in Italy seems like a win for those on the side of passing more AI-specific regulations globally, but not every government agrees that more laws are needed to adequately protect users from irresponsible AI development. Some lawmakers that drafted AI-specific legislation are already taking a step back to reconsider their entire AI strategy after realizing the law failed to account for emerging technologies like ChatGPT.
The European Commission, the European Parliament, and the Council of the European Union are "going back to the drawing board" to redraft the Artificial Intelligence Act, Politico reported. Originally drafted to regulate AI applications like social scoring, manipulation, or facial recognition, the EU's AI Act is now being reconfigured as lawmakers consider how tools like ChatGPT should be factored in. One problem, Politico reports, is that divided lawmakers can't decide if the text generator should be deemed "high risk" because it can be used in benign ways—to write a birthday card to grandma—or in malignant ways to widely spread disinformation. | Emerging Technologies |
Innovative photoresist materials pave the way for smaller, high performance semiconductor chips
For more than 50 years, the semiconductor industry has been hard at work developing advanced technologies that have led to the amazing increases in computing power and energy efficiency that have improved our lives. A primary way the industry has achieved these remarkable performance gains has been by finding ways to decrease the size of the semiconductor devices in microchips. However, with semiconductor feature sizes now approaching only a few nanometers—just a few hundred atoms—it has become increasingly challenging to sustain continued device miniaturization.
To address the challenges associated with fabricating even smaller microchip components, the semiconductor industry is currently transitioning to a more powerful fabrication method—extreme ultraviolet (EUV) lithography. EUV lithography employs light that is only 13.5 nanometers in wavelength to form tiny circuit patterns in a photoresist, the light-sensitive material integral to the lithography process.
The photoresist is the template for forming the nanoscale circuit patterns in the silicon semiconductor. As EUV lithography begins paving the way for the future, scientists are faced with the hurdle of identifying the most effective resist materials for this new era of nanofabrication.
In an effort to address this need, a team of scientists at the Center for Functional Nanomaterials (CFN)—a U.S. Department of Energy (DOE) Office of Science User Facility at DOE's Brookhaven National Laboratory—has designed a new light-sensitive, organic–inorganic hybrid material that enables high-performance patternability by EUV lithography. Their results were recently published in Advanced Materials Interfaces.
Composition is key
The hybrid materials used to create these new photoresists are composed of both organic materials (those that primarily contain carbon and oxygen atoms) and inorganic materials (those usually based on metallic elements). Both parts of the hybrid host their own unique chemical, mechanical, optical, and electrical properties due to their unique chemistry and structures. By combining these different components, new hybrid organic-inorganic materials emerge with their own interesting properties.
In the case of organic photoresists, adding inorganic molecules can yield a vastly improved material for EUV. The hybrid materials have increased sensitivity to EUV light, which means that they don't need to be exposed to as much EUV light during patterning, reducing the required process time. The hybrid materials also have improved mechanical and chemical resistance, making them better-suited as templates for high-resolution etching.
"To synthesize our new hybrid resist materials, organic polymer materials are infused with inorganic metal oxides by a specialized technique known as vapor-phase infiltration. This method is one of the key areas of materials synthesis expertise at CFN. Compared to conventional chemical synthesis, we can readily generate various compositions of hybrid materials and control their material properties by infusing gaseous inorganic precursors into a solid organic matrix," explained Chang-Yong Nam, a materials scientist at CFN who led the project.
As the team experiments and refines their materials, resists with improved performance are emerging. With any pioneering field, there are challenges to be faced.
"One of the main problems we encountered when initially making these hybrids is that the inorganic content needs to be uniformly distributed inside the organic polymer while making sure that the infused inorganic components are not too strongly bound to organic matrix," said Ashwanth Subramanian, the lead author of the paper. Subramanian is a former CFN-affiliated Ph.D. student from Stony Brook University's Department of Materials Science and Chemical Engineering who is now working as a process engineer at Lam Research.
"It was a little difficult to achieve that in previous research. In this work, however, we were able to choose a different precursor for the metal, the inorganic source, and that allowed us to make a hybrid with a uniform composition as well as weak binding between organic and inorganic components."
In their current research, the team noticed vast improvements after using indium as an inorganic component as compared to the aluminum that was used in the work that was done before. The scientists made the new resist using a poly(methyl methacrylate) (PMMA) organic thin film as the organic component and infiltrated it with inorganic indium oxide. This new hybrid exhibited increased sensitivity and a more uniform material makeup, which improved uniformity in subsequent patterning.
"In our previous work, we demonstrated this concept and were working with established resist composition as a proof of concept," explained Nikhil Tiwale, a materials scientist at CFN. "In this new paper, we used a composition that hasn't been studied in the resist community, yielding better EUV absorption and improved patterning performance."
Always moving forward
Scientists at CFN have been researching hybrid photoresist materials for several years, building a strong foundation of work culminating in the design of new, highly functional materials. Nam leads this research program with a goal of developing even more new materials and functionalities. In 2022, he was recognized as an Inventor of the Year by Battelle Memorial Institute.
Nam's hybrid resists show such promise that he was awarded major funding to pursue this concept through the DOE Accelerate Innovations in Emerging Technologies program. This multi-institute project will explore the development of new classes of hybrid photoresists and exploit machine learning to accelerate EUV research by making material validation easier and more accessible.
"It's currently really hard to do EUV patterning," explained Nam. "The actual patterning machine that industry is using is very, very expensive—the current version is more than $200 million per unit. There are only three to four companies in the world that can use it for actual chip manufacturing. There are a lot of researchers who want to study and develop new photoresist materials but can't perform EUV patterning to evaluate them. This is one of the key challenges we hope to address."
The research team includes CFN staff members Kim Kisslinger, Ming Lu, and Aaron Stein, as well as Won-Il Lee, a Ph.D. student from Stony Brook University, and Jiyoung Kim, a professor in the Department of Materials Science and Engineering at the University of Texas at Dallas. Their combined efforts have helped push EUV lithography techniques beyond current limits.
The team is currently working on other hybrid material compositions and testing how they perform, as well as the processes involved in fabricating them, paving the way for patterning smaller, more efficient semiconductor devices.
More information: Ashwanth Subramanian et al, Vapor‐Phase Infiltrated Organic–Inorganic Positive‐Tone Hybrid Photoresist for Extreme UV Lithography, Advanced Materials Interfaces (2023). DOI: 10.1002/admi.202300420 | Emerging Technologies |
There is a decision being made about you in this box. But you're not allowed to look inside — and even if you could, it wouldn't tell you much.
There are countless black boxes, just like this one, making decisions about your online and offline lives. Some of them are pretty benign, like recommending what movie you should watch next. But others decide the news you see, who you go on a date with and how much money you can borrow. They also determine whether you get a job, if you're a potential shoplifter and what route you should take to the shops. In extreme cases, they have been used to 'predict' teenage pregnancies and cut welfare entitlements for people with disabilities.
These boxes all contain algorithms, the decision-making machines that are creeping into areas of increasing consequence to all of our lives. It can feel pretty impossible to understand exactly what these algorithms are doing, let alone keep them in check. However, as you'll see, there are creative and powerful ways to shine a light into these black boxes. The trouble is, we can only use them if their owners — mostly corporations and governments — will let us.
Centrelink's Robodebt algorithm was locked away inside one of these black boxes. Hidden from public view, it went about its work, sending out hundreds of thousands of miscalculated debt notices — all based on a flawed assumption. The algorithm divided the full year's worth of welfare recipients' income evenly between the 26 fortnights in the year, rather than considering each fortnight individually. A mundane miscalculation that, when reproduced faithfully and behind closed doors, had consequences that were anything but.
"I was literally crushed. I was in shock," reads one submission to a Senate inquiry. It tells the story of just one of the approximately 433,000 Australians who were subjected to debts created by the algorithm. "I walked around my house trying to deny the reality of what had happened … I was confused as to how I owed this amount of money. Within weeks, I began receiving calls, texts and letters from a debt-collection agency."
Following a successful legal challenge and extensive media coverage, the government has launched a royal commission to investigate the scheme's failings.
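To make the averaging flaw concrete, here is a minimal sketch in Python. It is not Centrelink's actual code: the payment rate, income-free area and taper rate are invented, and the recipient is hypothetical, someone who correctly received payments while earning nothing for half the year and then worked for the other half.

```python
# Minimal sketch of the averaging flaw, using made-up figures.
FORTNIGHTS = 26
INCOME_FREE_AREA = 300      # assumed income a recipient can earn per fortnight before payments reduce
TAPER_RATE = 0.5            # assumed reduction of 50 cents per dollar earned above the free area
FULL_PAYMENT = 600          # assumed maximum fortnightly payment

def entitlement(fortnight_income: float) -> float:
    """Payment owed for a single fortnight, given that fortnight's income."""
    reduction = max(0.0, fortnight_income - INCOME_FREE_AREA) * TAPER_RATE
    return max(0.0, FULL_PAYMENT - reduction)

incomes = [0.0] * 13 + [2000.0] * 13          # actual fortnightly incomes across the year
paid = [entitlement(i) for i in incomes]       # what was (correctly) paid at the time

# Fortnight-by-fortnight assessment: compare payments with actual entitlements.
correct_debt = sum(p - entitlement(i) for p, i in zip(paid, incomes))

# Annual averaging, as described above: smear the year's income evenly across 26 fortnights.
average_income = sum(incomes) / FORTNIGHTS
averaged_debt = sum(p - entitlement(average_income) for p in paid)

print(f"Debt under fortnightly assessment: ${correct_debt:,.2f}")   # $0.00
print(f"'Debt' under annual averaging:     ${averaged_debt:,.2f}")  # a spurious $1,300.00
```

Assessed fortnight by fortnight, this person owes nothing; smear the same income across all 26 fortnights and a spurious four-figure "debt" appears out of thin air.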
However, even as one algorithm is brought into the light, others continue operating behind closed doors. The Department of Home Affairs has used algorithms to assist visa processing for more than 20 years. And, with increasing demand for visas since our borders re-opened, this is set to expand further.
A departmental spokesperson confirmed that Home Affairs was considering "a range of emerging technologies" as part of a "modernisation" strategy. Despite the lessons learned from the Robodebt crisis, these black boxes have, so far, remained firmly shut. The ABC asked Home Affairs questions about transparency, monitoring, testing and redress mechanisms regarding its algorithmic systems, but these were not addressed in their response.
Visas in a box
To get a glimpse of exactly how using algorithms to assist with visa decisions can go wrong, we only need to look at how the Home Office managed a similar scheme in the United Kingdom. The inner workings of the UK's visa-processing algorithm had been kept secret from the public for years, locked away inside a black box. But documents released under freedom of information laws, following a legal challenge, finally allowed us to peek inside. While they don't give the full picture, they reveal a few key things.
The algorithm sorted visa applicants into three risk groups — high, medium and low — based on a number of factors that included their nationality. This categorisation had a big impact on the likelihood of an application being refused. At one processing centre, less than half of applications classed as high risk were approved for visas in 2017, compared to around 97 per cent for low-risk ones.
Applicants in the high-risk category could be subjected to "counter-terrorism checks and DNA testing" by immigration officials before a decision was made on their visa. This intense scrutiny contributed to the high refusal rates. Meanwhile, those classed as low risk got by with routine document checks and, therefore, far lower refusal rates.
This alone wasn't particularly controversial — after all, the UK Home Office was trying to make the best use of its limited resources. The trouble was that, just like Robodebt, their algorithm had an insidious flaw. Over time, it unfairly amplified biases in the data.
To see how, let's imagine that 200 people, split evenly between two made-up countries — Red and Blue — apply for visas in 2015. We're going to simulate how things will play out for applicants from these two nations, incorporating details from how the UK Home Office's algorithm worked. As the applications from the two nations stream down into the categories, some visas are approved and some are refused. The rates at which they are refused depend on which category they fall into.
To decide how to categorise each application, the algorithm used by the UK Home Office relied on historical "immigration breaches" data. We've given the Red group a slightly higher rate of historical breaches to simulate the fortunes of two similar — but not identical — nations. The refusal rates for our two nations reflect this difference in their historical records.
Okay, we're done for 2015. Of our 200 applications, the Red group had 11 more refusals than the Blue group. The results for 2015 are pretty close to the historical data that we made up, so it seems like our algorithm is doing its job so far. Now, we feed these results back into the algorithm. This is where things start to get ugly.
The UK Home Office's algorithm counted merely having a visa refused as a "breach", which led to biases in the historical data being exacerbated.
Fast forward to 2016 and another 200 people from the same two groups apply. Based on the prior "breaches", our algorithm flags a higher proportion of people from the Red group as high risk than the previous year. And it's solely the algorithm that's increasing this disparity — there's nothing different about the applicants themselves compared to the year before. The extra scrutiny placed on Red applications results in 18 more being refused than the Blue group this time around.
Once again, we feed this new information back into our algorithm. It sees an even greater disparity in the risks in 2017. In the worldview of the algorithm, the evidence is now clear: people from the Red group are predominantly high risk while those from the Blue group are not. This time, the Red group sees more than twice the number of refusals compared to the Blue group. That's a pretty big difference from where we started, isn't it?
As the years rolled on, the data increasingly became a reflection of the algorithm's prejudices. The algorithm took the differences between nations in the historical data and blew them out of proportion — regardless of whether they were accurate assessments of risk, or had been created by chance, error or discrimination. So, by 2017, its choices were more of a self-fulfilling prophecy than an accurate reflection of risk in the real world.
Jack Maxwell — lawyer and co-author of Experiments in Automating Immigration Systems — found through his investigations that the UK Home Office's algorithm suffered from a feedback loop much like this one. And, according to Mr Maxwell, the historical immigration data was flawed too. By their nature, he said, immigration enforcement statistics were incomplete, and did not "reflect the actual incidence of immigration breaches, so much as the biases of the people reporting those breaches".
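The dynamic is easy to reproduce. The sketch below is a deliberately stripped-down toy model, not the Home Office's system: the tier rules, refusal rates, threshold and starting breach rates are all invented, and the medium-risk tier is ignored. The only thing it shares with the real scheme is the feedback step, where refusals are written back into the "breach" data.

```python
# A toy, deterministic version of the Red/Blue example above. The numbers are invented;
# only the feedback mechanism is the point.
APPLICANTS = 100
REFUSAL_RATE = {"high": 0.50, "low": 0.05}   # refusal rate by risk tier
HIGH_RISK_THRESHOLD = 0.10                   # recorded breach rate that triggers "high risk"

# Seed "historical breach" data: Red starts only slightly above Blue.
recorded_breach_rate = {"Red": 0.12, "Blue": 0.08}

for year in (2015, 2016, 2017):
    for nation in ("Red", "Blue"):
        tier = "high" if recorded_breach_rate[nation] > HIGH_RISK_THRESHOLD else "low"
        refused = REFUSAL_RATE[tier] * APPLICANTS
        # The flaw: every refusal is written back into the data as a "breach",
        # so next year's risk profile reflects the algorithm's own decisions
        # rather than anything the applicants did.
        recorded_breach_rate[nation] = refused / APPLICANTS
        print(year, nation, tier, f"refused={refused:.0f}",
              f"recorded breach rate now {recorded_breach_rate[nation]:.0%}")
```

Even in this simplified form, a modest gap in the seed data (12 per cent versus 8 per cent) hardens into a tenfold gap in both treatment and recorded "breaches" within a single cycle, and nothing the applicants do can undo it.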
Now, there's no indication that the Australian Department of Home Affairs is making, or about to make, the same mistakes as the UK Home Office as it expands and "modernises" its use of algorithms. However, as long as it keeps its algorithms locked away, we can't be sure.
Fortunately, onerous legal challenges and FOI requests are not the only ways to peer inside. As we'll see, the tools that can open these black boxes come in a range of shapes and sizes. Some can help us understand — and challenge — the decisions they make about us, as individuals, while others can illuminate bias and discrimination embedded within a system.
Thinking outside the box
To explain the decisions made by algorithms in a way that humans can understand, leading artificial intelligence researcher Sandra Wachter and her colleagues at the Oxford Internet Institute turned, not to science, but to philosophy. They went back to basics and "thought outside the box" about what an explanation actually is, and what makes one useful.
"Do people really want to understand the internal logic of an algorithm? Or do they mainly care about why they didn't get the thing that they applied for?" Professor Wachter ponders. The philosophy textbooks told Professor Wachter that it was more the latter.
"They might want to contest the decision because the criteria that [the algorithm] relied upon is wrong. Or it might be the case that the decision was made correctly, and they want guidance on how to change their behaviour in the future," she says.
Given these goals, simply looking inside these black boxes is not going to tell us what we want to know. This is because, in practice, most algorithms are a combination of complex variables. Not even the experts can reliably interpret decisions made by sophisticated algorithms. So, rather than trying to explain the nitty gritty technical details of how they work, Professor Wachter and her team came up with a deceptively simple alternative. The idea was to describe "how the world would need to be different, for a different outcome to occur", she explains.
That idea — of imagining alternative worlds — may sound like it belongs in a science fiction writers' room, but the way this potential tool for increasing algorithmic accountability works is really quite simple. Such a tool would generate a number of "nearby possible worlds" in which your application would have been successful — and tell you how they differ from the real world. This means you might be told, in plain English, that you'd have been successful had you applied for a different type of visa or requested a shorter stay. So, you wouldn't need to look inside the box at all to understand how it came to its decision and, therefore, how you could do better next time. By offering this kind of transparency without opening the black box, Professor Wachter adds, it will be "protecting the privacy of others and with very little risk of revealing trade secrets".
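In code, the core of the idea is a search over small changes to an application, using only the black box's inputs and outputs. The sketch below is purely illustrative and is not Professor Wachter's implementation: the scoring rule inside the black box, the option lists and the field names are all made up.

```python
from itertools import product

# A stand-in for the black box: we can query it, but we never look inside.
def black_box_approves(application: dict) -> bool:
    # Made-up internal rule, hidden from the applicant.
    score = 0
    score += 2 if application["visa_type"] == "tourist" else 0
    score += 1 if application["stay_weeks"] <= 4 else -1
    score += 1 if application["funds"] >= 5000 else -2
    return score >= 3

# The options the counterfactual search is allowed to vary.
OPTIONS = {
    "visa_type": ["tourist", "student", "work"],
    "stay_weeks": [2, 4, 12, 52],
    "funds": [1000, 5000, 10000],
}

def nearby_possible_worlds(application: dict, max_changes: int = 1):
    """Yield small tweaks to the application that would flip the decision."""
    for values in product(*OPTIONS.values()):
        candidate = dict(zip(OPTIONS.keys(), values))
        changes = {k: v for k, v in candidate.items() if application[k] != v}
        if 0 < len(changes) <= max_changes and black_box_approves(candidate):
            yield changes

rejected = {"visa_type": "work", "stay_weeks": 12, "funds": 10000}
assert not black_box_approves(rejected)
for world in nearby_possible_worlds(rejected, max_changes=2):
    print("You would have been approved if:", world)
```

Run on the rejected application above, the search suggests switching to a tourist visa and a shorter stay, which is the kind of plain-English guidance described above, produced without ever opening the box.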
While this "nearby possible worlds" approach is useful and important for understanding specific decisions about one individual, it's not really enough on its own to keep these black boxes in check. Explanations of individual cases alone will not let us identify systemic issues such as the ones seen in the UK.
Thinking bigger
An individual often will not know how others are faring when they interact with an algorithm, says Paul Henman, a professor of digital sociology and social policy at the University of Queensland. And even in a system that discriminates against others, many individuals will still receive acceptable or even favourable decisions. "Because individuals are experiencing these decisions in isolation, they might not see that an entire group is getting a different outcome."
While some algorithms — such as the one used by the UK Home Office — explicitly discriminate based on nationality or other attributes protected by law, discrimination is not always so black and white. Factors such as the applicant's immigration background, location and even their name are not protected attributes, but can correlate closely with race. As these structural biases cannot be seen at the level of individual decisions, we need to think bigger.
This is where our second transparency tool — the algorithmic audit — comes in. An algorithmic audit involves putting the algorithm under the microscope to verify that it meets standards for fairness. In the case of the UK Home Office, an expert could have checked that people of all nationalities saw comparable outcomes when controlling for other factors. The same goes for gender, age and other protected attributes.
Results from algorithmic audits can be translated into scores and made public, similar to the health advice that is required on food packaging. When published, these results can help us to understand what's going on inside the box, without us all needing to be experts. These tools — and others like them — are not limited to academia anymore. They're being adopted in Australia and around the world.
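At its simplest, an audit of this kind is a disaggregated measurement of outcomes. The sketch below runs on invented records and uses a crude disparity test; a real audit would work on the system's full decision history, control for legitimate factors and check whether any gaps are statistically significant.

```python
from collections import defaultdict

# Synthetic decision records an auditor might request: one row per application.
records = [
    {"nationality": "Red", "outcome": "approved"},
    {"nationality": "Red", "outcome": "refused"},
    {"nationality": "Red", "outcome": "refused"},
    {"nationality": "Blue", "outcome": "approved"},
    {"nationality": "Blue", "outcome": "approved"},
    {"nationality": "Blue", "outcome": "refused"},
    # ... in practice, thousands of rows drawn from the live system
]

def approval_rates(rows, attribute):
    counts = defaultdict(lambda: [0, 0])          # group -> [approved, total]
    for row in rows:
        group = row[attribute]
        counts[group][1] += 1
        counts[group][0] += row["outcome"] == "approved"
    return {g: approved / total for g, (approved, total) in counts.items()}

rates = approval_rates(records, "nationality")
print(rates)   # roughly {'Red': 0.33, 'Blue': 0.67}

# A crude "four-fifths"-style check: flag the system if any group's approval
# rate falls below 80% of the best-off group's rate.
worst, best = min(rates.values()), max(rates.values())
if worst < 0.8 * best:
    print("Audit flag: outcomes differ substantially by nationality")
```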
So, why are we not seeing greater transparency around the algorithms used by corporations and our governments?
The right to reasons
One reason, says former Australian human rights commissioner Ed Santow, is that Australia is lagging other parts of the world on digital rights protections. In 2021, Professor Santow and the Australian Human Rights Commission made a number of recommendations about how Australia can make automated decision-making "more accountable and protect against harm".
"The cost of inaction [on digital rights] is waiting until the next crisis, the next Robodebt. At the Human Rights Commission, we were saying that we could avoid that next crisis by putting in place stronger fundamental protections," he said.
According to the commission's report, the foundational "right to reasons" would be a legislated right to request an explanation for administrative decisions, regardless of whether they were made by humans or machines. These protections can be the difference between a problematic algorithm caught early and another crisis identified too late.
Both Robodebt and the UK Home Office algorithm flew under the radar for years before their flaws became apparent, in part due to the lack of transparency around how they operated. Centrelink sent out erroneous debt notices without equipping those recipients with the tools necessary to challenge or even understand those decisions. Instead, they needed courts and advocates to find justice. The story is similar in the UK. It took the efforts of Foxglove, a tech advocacy group, and the Joint Council for the Welfare of Immigrants to challenge the Home Office's algorithm in court.
However, it doesn't have to be this way. Specialised tools like "nearby possible worlds" and algorithmic audits make these explanations more practical than ever to produce. And the European Union has been blazing a trail in digital rights protections, so there is plenty of precedent for our legislators to learn from.
Having our fates decided by algorithmic black boxes can feel pretty dystopian. However, if we embrace these tools and legislate the necessary protections, we might at least live in a world where the algorithms have to work in the open. | Emerging Technologies
The Federal Trade Commission today launched a new Office of Technology that will strengthen the FTC’s ability to keep pace with technological challenges in the digital marketplace by supporting the agency’s law enforcement and policy work.
“For more than a century, the FTC has worked to keep pace with new markets and ever-changing technologies by building internal expertise," said Chair Lina M. Khan. "Our office of technology is a natural next step in ensuring we have the in-house skills needed to fully grasp evolving technologies and market trends as we continue to tackle unlawful business practices and protect Americans."
The Office of Technology will have dedicated staff and resources, and will be headed by Chief Technology Officer Stephanie T. Nguyen.
“I’m honored to lead the FTC’s Office of Technology at this vital time to strengthen the agency’s technical expertise and meet the quickly evolving challenges of the digital economy,” said Nguyen. “I look forward to continuing to work with the agency’s talented staff and building our team of technologists.”
The Office of Technology will boost the FTC’s expertise to help the agency achieve its mission of protecting consumers and promoting competition. Specifically, the new office will:
- Strengthen and support law enforcement investigations and actions: The office will support FTC investigations into business practices and the technologies underlying them. This includes helping to develop appropriate investigative techniques, assisting in the review and analysis of data and documents received in investigations, and aiding in the creation of effective remedies.
- Advise and engage with staff and the Commission on policy and research initiatives: The office will work with FTC staff and the Commission to provide technological expertise on non-enforcement actions including 6(b) studies, reports, requests for information, policy statements, congressional briefings, and other initiatives.
- Highlight market trends and emerging technologies that impact the FTC’s work: The office will engage with the public and external stakeholders through workshops, research conferences, and consultations and highlight key trends and best practices.
The creation of the Office of Technology builds on the FTC’s efforts over the years to expand its in-house technological expertise, and it brings the agency in line with other leading antitrust and consumer protection enforcers around the world.
The Commission voted 4-0 to approve the creation of the Office of Technology. | Emerging Technologies |
Hypersonic air travel, for both military and commercial use, could be here within the decade.
The $770 billion National Defense Authorization Act signed into law Tuesday calls for investing billions into hypersonic research and development, making the technology a top priority for Washington. The next step is congressional approval to allocate the money for the technology to the Pentagon.
"If you are traveling at hypersonic speeds, you're, you're going more than a mile per second," said Mark Lewis, executive director of the National Defense Industrial Association's Emerging Technologies Institute. "That's important for military applications. It could have commercial applications. It could also open up new, new ways of reaching space."
Hypersonic is anything traveling above Mach 5, or five times the speed of sound. That's roughly 3,800 mph. At those speeds, commercial planes could travel from New York to London in under two hours.
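As a quick back-of-envelope check of those figures, the short sketch below redoes the arithmetic; the sea-level speed of sound and the New York to London great-circle distance are approximations supplied here, not values from the article.

```python
# Back-of-envelope check of the Mach 5 figures quoted above.
# Assumptions (not from the article): speed of sound ~761 mph at sea level,
# New York-London great-circle distance ~3,460 miles.
SPEED_OF_SOUND_MPH = 761       # approximate; varies with altitude and temperature
NYC_LONDON_MILES = 3_460       # approximate great-circle distance

mach5_mph = 5 * SPEED_OF_SOUND_MPH            # ~3,805 mph, matching "roughly 3,800 mph"
mach5_miles_per_sec = mach5_mph / 3600        # ~1.06 miles per second, as quoted earlier
flight_hours = NYC_LONDON_MILES / mach5_mph   # ~0.9 hours, i.e. under two hours

print(f"Mach 5 ~ {mach5_mph:,.0f} mph ({mach5_miles_per_sec:.2f} miles per second)")
print(f"New York to London at Mach 5 ~ {flight_hours:.1f} hours")
```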
Significant hypersonic research and development in recent years have highlighted its promising opportunity, but it's also shed light on its destructive potential. According to Rand Corp., hypersonic technology creates a new class of threat that could change the nature of warfare.
"There truly is a sense of concern that we are in a race," said Lewis, who is a former director at the Department of Defense. "We took our foot off the gas. … There are other nations, peer competitors, who are investing very, very heavily in hypersonics."
China, Russia and now North Korea all claim to have developed and successfully tested hypersonic missiles. Unlike traditional ballistic missiles that follow a set trajectory after launch, hypersonic weapons are maneuverable in flight, incredibly fast and hard to detect.
The U.S. doesn't have operational hypersonic missiles yet, but it's a top priority for Washington. According to the Government Accountability Office, funding for hypersonic research increased by 740% between 2015 and 2020. The latest defense budget alone increased funding by 20%.
"It's truly a bipartisan issue," said Lewis.
The DOD is gathering data across multiple agencies, industry leaders and academia as it races to fast-track production on its first hypersonic missile by September 2022.
"We don't want to just match them missile for missile, but introduce new capabilities of transportation capabilities, sensor capabilities. And I'm seeing that play out," Lewis told CNBC.
Watch the video to find out more about hypersonic technology, what it could do for military and commercial purposes, and why it's taking so long to get off the ground. | Emerging Technologies |
[Image captions: Photo of the perovskite/silicon tandem solar cell; the active bluish area in the middle of the wafer is enclosed by the metallic, silvery electrode (© Johannes Beckedahl/Lea Zimmerman/HZB). Schematic structure of the tandem solar cell with a bottom cell made of silicon and a top cell made of perovskite; the top cell utilises the blue components of the light, while the bottom cell converts the red and near-infrared components of the spectrum, and different thin layers help to optimally utilise the light and minimise electrical losses (© Eike Köhnen/HZB). Among emerging photovoltaic technologies, silicon/perovskite tandem cells sit at the very top of the NREL efficiency chart (© NREL).]

The current world record of tandem solar cells consisting of a silicon bottom cell and a perovskite top cell is once again at HZB. The new tandem solar cell converts 32.5% of the incident solar radiation into electrical energy. The certifying institute European Solar Test Installation (ESTI) in Italy measured the tandem cell and officially confirmed this value, which is also included in the NREL chart of solar cell technologies maintained by the National Renewable Energy Lab, USA. Scientists at HZB have thus significantly improved the efficiency of perovskite/silicon tandem solar cells. "This is a really big leap forward that we didn't foresee a few months ago. All the teams involved at HZB, especially the PV Competence Center (PVComB) and the HySPRINT Innovation lab teams, have worked together successfully and with passion," says Prof. Steve Albrecht.

Interface modifications: His team used an advanced perovskite composition with a very smart interface modification. The lead authors, postdocs Dr. Silvia Mariotti and Dr. Eike Köhnen in Albrecht's team, developed an interface modification to reduce charge-carrier recombination losses and applied detailed analysis to understand the specific properties of the interface modification. These developments were then successfully implemented in tandem solar cells and, with the help of Master's student Lea Zimmermann, combined with further optical improvements. In addition, many more scientists and technicians helped to develop and fabricate the tandem cells to achieve this success. Altogether, the interface and optical modifications enabled the highest photovoltages (open-circuit voltage) and resulted in the new record efficiency for this fascinating tandem technology.

Fast progress: There has been ongoing efficiency development by various research institutes and companies over recent years, and the last months in particular were quite exciting for the field: various teams from HZB had achieved a record value in late 2021 with an efficiency of 29.8% that was realized by periodic nanotextures. More recently, in summer 2022, the Ecole Polytechnique Fédérale de Lausanne, Switzerland, first reported a certified tandem cell above the 30% barrier at 31.3%, a remarkable efficiency jump over the 2021 value. With the new certified value of 32.5%, the record is back at HZB. "We are very excited about the new value as it shows that the perovskite/silicon tandem technology is highly promising for contributing to a sustainable energy supply," says Albrecht. HZB's scientific director, Prof. Bernd Rech, emphasises: "At 32.5 percent, the solar cell efficiency of the HZB tandems is now in ranges previously only achieved by expensive III/V semiconductors.
The NREL graph clearly shows how spectacular the last two increases from EPFL and HZB really are." | Emerging Technologies |
Stoke Space has received multiple investments from In-Q-Tel, the venture capital arm of the Central Intelligence Agency, TechCrunch has learned.
Stoke Space and In-Q-Tel have not publicly announced their relationship before. While In-Q-Tel is legally a separate entity from any government agency, it receives all of its funding from government partners, including the defense and intelligence community.
In-Q-Tel Principal William Morrison confirmed he led the most recent investment, which closed at the end of February, and that the firm has made multiple investments in the company before. In addition to the investment, the two entities also signed a “technology development agreement,” an In-Q-Tel spokesperson said. Stoke joins a very small cadre of launch companies —including Rocket Lab and ABL Space Systems — that have received investment from the firm.
“The team has been incredible at execution, insanely capital efficient,” Morrison said.
Kent, Washington-based Stoke was founded in 2019 by Andy Lapsa and Tom Feldman. They started the company after years-long stints as propulsion engineers at Blue Origin; when they left, Lapsa held a director-level position and Feldman was a senior engineer. Stoke is developing a fully reusable launch vehicle capable of returning both the booster and the second stage back to Earth. The rocket is being designed to fly daily, a feature that is likely especially attractive to defense customers. The U.S. Space Force has publicly stated its interest in procuring rapid turn-around launch capabilities.
Stoke raised $65 million in a Series A round in December 2021, from investors including Bill Gates’ Breakthrough Energy Ventures, Toyota Ventures and Spark Capital. More recently, the USSF said it would set aside a dedicated area for Stoke’s use at the historic Cape Canaveral — Launch Complex 14, where multiple lift offs occurred in the 1960s. According to its website, Stoke is currently preparing to fly the reusable upper stage on a vertical take-off and landing “hopper” test flight.
“Space access continues to be launch availability constrained,” Lapsa said in a statement to TechCrunch. “Building a robust commercial launch economy is critical to sustaining our industrial base and ensuring space access for defense and national security needs.”
In-Q-Tel was established in 1999 as a not-for-profit venture to help the federal government take advantage of growing innovation and emerging technologies in the private sector. The firm sources technology from startups for the CIA and other government agencies, like the Department of Homeland Security. One of In-Q-Tel’s key value propositions is facilitating connections between private companies and its government partners, including in the intelligence community.
In-Q-Tel has made 25 investments in the space sector, including Stoke. Other investments include Capella Space, Palantir, and Swarm Technologies, which was acquired by SpaceX. Checks usually range from $250,000 to $3 million, the firm says on its website. An In-Q-Tel spokesperson confirmed all of its investments in Stoke are within that window.
Morrison added in a separate written statement that access to space is an important focus area for the firm. “Stoke Space’s unique architecture has the potential to change the way we all design for and use space,” he said. | Emerging Technologies |
Insights into Gartner's Emerging Technologies Hype Cycle 2023
Generative AI Takes Center Stage: Gartner places generative artificial intelligence (AI) at the Peak of Inflated Expectations on the Hype Cycle. This groundbreaking technology is predicted to deliver transformational benefits within two to five years, heralding a wave of workforce productivity and machine creativity.
Emergent AI: Unleashing Innovation: Generative AI falls under the broader theme of emergent AI. This trend is shaping new avenues for innovation, with technologies like AI simulation, causal AI, and reinforcement learning offering immense potential for enhancing digital experiences, driving better decisions, and creating competitive differentiation.
The Role of the Hype Cycle: Distilled from analyzing over 2,000 technologies, the Hype Cycle for Emerging Technologies presents a concise collection of emerging technologies with the potential to deliver significant transformational benefits over the next two to 10 years.
In the ever-shifting terrain of technological advancement, the latest revelation by Gartner, the renowned research and advisory firm, casts a spotlight on the promising landscape of emerging technologies. Among these, a remarkable standout emerges – Generative Artificial Intelligence (AI), poised on the precipice of the Peak of Inflated Expectations, as foretold by the Hype Cycle for Emerging Technologies, 2023. This prophetic positioning projects that Generative AI is on the brink of transforming the very fabric of industries and societies, anticipated to unfurl its metamorphic influence within a span of two to five years.
This notion of Generative AI as a harbinger of transformation is rooted in its intricate essence. Beyond the binary world of traditional AI, Generative AI brings to life a different breed of intelligence, one capable of birthing creations and concepts anew.
Leveraging Emerging Technologies for Organizational Success
Balancing the AI Focus: While AI gains much attention, technology leaders should also explore other emerging technologies crucial for transformation. These include technologies that enhance developer experience, foster innovation through cloud adoption, and prioritize human-centric security and privacy.
Enhancing Developer Experience (DevX): DevX, which encompasses interactions between developers and tools, platforms, and processes, is a key factor in digital initiative success. AI-augmented software engineering, internal developer portals, and value stream management platforms are essential for elevating DevX.
Pervasive Cloud Adoption: Cloud computing's evolution into a driver of business innovation requires automation, vertical industry focus, and distributed architectures. Embracing cloud development environments, sustainability practices, and cloud-native solutions are crucial for maximizing cloud investments.
Human-Centric Security and Privacy: Resilience against security incidents and data breaches requires a human-centric approach. Technologies like generative cybersecurity AI and homomorphic encryption are empowering organizations to weave security and privacy into their digital fabric and cultivate a culture of mutual trust.
Harmonizing the AI Symphony:
In the grand theater of technological progress, the spotlight often shines most brightly on the virtuosity of Artificial Intelligence (AI). However, in this intricate symphony of transformation, the conductors of change understand the importance of balance. The discerning leaders of the tech realm recognize that while AI's melody resonates, other harmonies must find their place to craft the masterpiece of organizational success.
Diversifying the Technological Melody:
True innovation arises from the harmony of diverse notes, and that holds true in the realm of emerging technologies. While AI's crescendo echoes, the discerning ear acknowledges the importance of other instruments. Technology leaders orchestrate a symphony of change that goes beyond AI, embracing technologies that heighten developer experiences, foster cloud-driven innovation, and illuminate the path of human-centric security and privacy.
Elevating Developer Experience (DevX):
In the realm of digital endeavors, Developer Experience (DevX) takes center stage. It's the harmonious interaction between creators and their tools, platforms, and processes that determines the cadence of success. Here, AI assumes a dual role – augmenting software engineering with its cognitive prowess. But AI is not the sole protagonist; the stage also features the emergence of internal developer portals and the chorus of value stream management platforms, all in pursuit of elevating DevX to a crescendo of ingenuity.
Embracing Cloud's Luminous Evolution:
As clouds gather and computing takes flight, the evolution of cloud technology echoes like a symphony. It transforms from a mere innovation platform to an architect of business innovation, casting its influence far and wide. Cloud adoption, with its promises of automation and vertical specialization, offers an orchestration of possibilities. As this technology swells, the maestros of transformation guide their organizations to embrace cloud development environments, sustainable practices, and cloud-native solutions. These choices are akin to selecting the right notes, culminating in a harmonious composition that amplifies the returns on cloud investments.
Weaving the Fabric of Security and Privacy:
In the age of digital vulnerability, where cyber threats prowl like shadows, the guardians of organizations adopt a unique stance. They recognize that security and privacy are not mere appendices; they are the very fabric of trust and resilience. With AI as an ally, they venture into a realm where generative cybersecurity AI and homomorphic encryption are like shields and fortresses, shielding sensitive data from prying eyes. A culture of mutual trust is cultivated, and security becomes an intrinsic part of the digital design.
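Homomorphic encryption, mentioned here and in the earlier list of security technologies, has one core property that is easy to demonstrate: computations can be carried out on data while it remains encrypted. The sketch below is a textbook Paillier scheme with deliberately tiny, insecure parameters, written only to show the additive-homomorphic property; it is not drawn from any product or vendor named in this article.

```python
# Toy demonstration of computing on encrypted data (additive homomorphism).
# Textbook Paillier with tiny, insecure parameters, for illustration only.
import math
import random

def keygen(p=293, q=433):                     # toy primes; real keys use ~2048-bit primes
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                                 # standard simple choice of generator
    mu = pow(lam, -1, n)                      # valid because g = n + 1
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:                # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

pub, priv = keygen()
a, b = encrypt(pub, 17), encrypt(pub, 25)
# Multiplying ciphertexts adds the underlying plaintexts -- no decryption needed.
total = decrypt(priv, a * b % (pub[0] ** 2))
assert total == 17 + 25
print("17 + 25 computed under encryption:", total)
```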
In this orchestration of technologies, where AI dances alongside cloud innovation and security fortresses, organizational success finds its crescendo. The symphony of change is orchestrated not by a single note, but by the harmonious convergence of instruments – an ensemble that embraces diversity and navigates the complexities of transformation.
Benefits of Adopting Emerging Technologies
Unlocking Transformative Potential:
The adoption of emerging technologies bestows organizations with a key – a key that unlocks a trove of new possibilities. Like explorers delving into uncharted territories, businesses step beyond their boundaries to discover pathways that lead to innovation across the tapestry of their operations.
Elevating Efficiency and Productivity:
Generative AI and its companions in the realm of emergent technologies hold the promise of being catalysts for enhanced efficiency and productivity. They stand as mentors to human ingenuity, enabling teams to venture beyond the boundaries of the expected, to think creatively, and to make decisions backed by insights that illuminate the path to business growth.
Pioneering Innovation for the Future:
Organizations that embrace emerging technologies carve their path to the future, poised at the forefront of innovation. These technologies become the scaffolding upon which new ideas, products, and services are woven. They foster agility, allowing organizations to adapt, evolve, and thrive in a tech landscape that is in constant flux.
Guardians of Security and Risk:
In this digital epoch, security stands as a sentinel, guarding against the myriad threats that prowl in the shadows. Emerging technologies, with their mantle of AI TRiSM (AI trust, risk and security management) and post-quantum cryptography, bolster an organization’s defenses. They shield against the vulnerabilities born of human frailty and ensure data’s sanctity in an age where information reigns supreme.
Navigating the Cloudscape with Agility:
Cloud technologies, like a symphony that resonates across the sky, offer a portal to unparalleled agility. The pervasive embrace of cloud paves a path to responsiveness, enabling businesses to navigate market shifts with finesse. It's a journey that grants the flexibility to scale, the capacity to experiment, and the means to seize emerging opportunities.
In this confluence of transformative energies, businesses witness the dawn of a new era. The adoption of emerging technologies is not just an act of incorporation; it's an act of empowerment. It's the spark that ignites innovation, the sentinel that guards against vulnerabilities, and the wings that grant the agility to soar in a landscape of infinite possibilities.
Join Ultra Unlimited on this Quest
The journey into the heart of emerging technologies is an expedition of uncharted territories, an exploration of the frontiers of human imagination and technological innovation. As the narrative of Generative AI unfolds, Ultra Unlimited extends an invitation.
Join us in navigating the landscape of transformation, in unraveling the threads of innovation, and in crafting a future that mirrors the convergence of human aspirations and the crescendo of technological possibilities. | Emerging Technologies |
Weaponizing artificial intelligence (AI) to attack understaffed enterprises that lack AI and machine learning (ML) expertise is giving bad actors the edge in the ongoing AI cyberwar. Innovating at faster speeds than the most efficient enterprise, capable of recruiting talent to create new malware and test attack techniques, and using AI to alter attack strategies in real time, threat actors have a significant advantage over most enterprises. “AI is already being used by criminals to overcome some of the world’s cybersecurity measures,” warns Johan Gerber, executive vice president of security and cyber innovation at MasterCard. “But AI has to be part of our future, of how we attack and address cybersecurity.” Enterprises are willing to spend on AI-based solutions, evidenced by an AI and cybersecurity forecast from CEPS that they will grow at a compound annual growth rate (CAGR) of 23.6% from 2020 to 2027 to reach a market value of $46.3 billion by 2027.
Eighty-eight percent of CISOs and security leaders say that weaponized AI attacks are inevitable, and with good reason. Just 24% of cybersecurity teams are fully prepared to manage an AI-related attack, according to a recent Gartner survey. Nation-states and cybercriminal gangs know that enterprises are understaffed, and that many lack AI and ML expertise and tools to defend against such attacks. In Q3 2022, out of a pool of 53,760 cybersecurity applicants, only 1% had AI skills. Major firms are aware of the cybersecurity skills crisis and are attempting to address it. Microsoft, for example, has an ongoing campaign to help community colleges expand the industry’s workforce. There’s a sharp contrast between, on the one hand, enterprises’ ability to attract and retain cybersecurity experts with AI and ML expertise and, on the other, how fast nation-state actors and cybercriminal gangs are growing their AI and ML teams. Members of the North Korean Army’s elite Reconnaissance General Bureau’s cyberwarfare arm, Department 121, number approximately 6,800 cyberwarriors, according to the New York Times, with 1,700 hackers in seven different units and 5,100 technical support personnel. AP News learned this week that North Korea’s elite team had stolen an estimated $1.2 billion in cryptocurrency and other virtual assets in the past five years, more than half of it this year alone, according to South Korea’s spy agency. North Korea has also weaponized open-source software in its social engineering campaigns aimed at companies worldwide since June 2022. North Korea’s active AI and ML recruitment and training programs look to create new techniques and technologies that weaponize AI and ML in part to keep financing the country’s nuclear weapons programs. In a recent Economist Intelligence Unit (EIU) survey, nearly half of respondents (48.9%) cited AI and ML as the emerging technologies that would be best deployed to counter nation-state cyberattacks directed toward private organizations. Cybercriminal gangs are just as aggressively focused on their enterprise targets as the North Korean Army’s Department 121 is. Current tools, techniques and technologies in cybercriminal gangs’ AI and ML arsenal include automated phishing email campaigns, malware distribution, AI-powered bots that continually scan an enterprise’s endpoints for vulnerabilities and unprotected servers, credit card fraud, insurance fraud, generating deepfake identities, money laundering and more. Attacking the vulnerabilities of AI and ML models that are designed to identify and thwart breach attempts is an increasingly common strategy used by cybercriminal gangs and nation-states. Data poisoning is one of the fastest-growing techniques they are using to reduce the effectiveness of AI models designed to predict and stop data exfiltration, malware delivery and more. AI-enabled and AI-enhanced attacks are continually being fine-tuned to launch undetected at multiple threat surfaces simultaneously. The graphic below is a high-level roadmap of how cybercriminals and nation-states manage AI and ML devops activity. Cybercriminals recruit AI and ML experts to balance attacks on ML models with developing new AI-enabled techniques and technologies to lead attacks.
Source: Artificial Intelligence Crime: An Overview of Malicious Use and Abuse of AI, January 2022 IEEE Access “Businesses must implement cyber AI for defense before offensive AI becomes mainstream. When it becomes a war of algorithms against algorithms, only autonomous response will be able to fight back at machine speeds to stop AI-augmented attacks,” said Max Heinemeyer, director of threat hunting at Darktrace. Attackers targeting employee and customer identities Cybersecurity leaders tell VentureBeat that the digital footprint and signature of an offensive attack using AI and ML are becoming easier to identify. First, these attacks often execute millions of transactions across multiple threat surfaces in just minutes. Second, attacks go after endpoints and surfaces that can be compromised with minimal digital exhaust or evidence. Cybercriminal gangs often target Active Directory, Identity Access Management (IAM) and Privileged Access Management (PAM) systems. Their immediate goal is to gain access to any system that can provide privileged access credentials so they can quickly take control of thousands of identities at once and replicate their own at will without ever being detected. “Eighty percent of the attacks, or the compromises that we see, use some form of identity/credential theft,” said George Kurtz, CrowdStrike’s cofounder and CEO, during his keynote address at the company’s Fal.Con customer conference. CISOs tell VentureBeat the AI and ML-based attacks they have experienced have ranged from overcoming CAPTCHA and multifactor authentication on remote devices to data poisoning efforts aimed at rendering security algorithms inoperable. Using ML to impersonate their CEOs’ voice and likeness and asking for tens of thousands of dollars in withdrawals from corporate accounts is commonplace. Deepfake phishing is a disaster waiting to happen. Whale phishing is commonplace due primarily to attackers’ increased use of AI- and ML-based technologies. Cybercriminals, hacker groups and nation-states use generative adversarial network (GAN) techniques to create realistic-looking deepfakes used in social engineering attacks on enterprises and governments. A GAN is designed to force two AI algorithms against each other to create entirely new, synthesized images based on the two inputs. One algorithm, the generator of the image, is fed random data to create an initial pass. The second algorithm, the discriminator, checks the image and data to see if it corresponds with known data. The battle between the two algorithms forces the generator to create realistic images that attempt to fool the discriminator algorithm. GANs are widely used in automated phishing and social engineering attack strategies. How a GAN creates deepfakes so realistically that they are successfully used in AI-automated phishing and CEO impersonation attacks. Source: CEPS Task Force Report, Artificial Intelligence, and Cybersecurity. Technology, Governance and Policy Challenges, Centre for European Policy Studies (CEPS). Brussels. May 2021 Natural language generation techniques are another AI- and ML-based method that cybercriminal gangs and nation-states routinely use to attack global enterprises through multilingual phishing. AI and ML are extensively used to improve malware so that it’s undetectable by legacy endpoint protection systems. In 2022, cybercriminal gangs also improved malware design and delivery techniques using ML, as first reported in CrowdStrike’s Falcon OverWatch threat hunting report. 
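The generator-versus-discriminator loop described a few sentences back is compact enough to sketch in code. The toy example below assumes PyTorch; it trains on one-dimensional synthetic "data" rather than images, and the network sizes and target distribution are arbitrary choices for illustration, but the adversarial structure is the same one used to produce deepfakes.

```python
# Minimal sketch of a GAN training loop on toy 1-D data (not images).
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) + 4.0      # "real" samples: a shifted Gaussian
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator step: score real samples as 1 and generated samples as 0.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: produce samples the discriminator scores as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# The generated distribution drifts toward the "real" one (mean approaches 4.0).
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```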
The research discovered that malware-free intrusion activity now accounts for 71% of all detections indexed by CrowdStrike’s Threat Graph. Malware-free intrusions are difficult for perimeter-based systems and tech stacks that are based on implicit trust to identify and stop. Threat actors are also developing and fine-tuning AI-powered bots designed to launch distributed denial of service (DDoS) and other attacks at scale. Bot swarms, for example, have used algorithms to analyze network traffic patterns and identify vulnerabilities that could be exploited to launch a DDoS attack. Cyberattackers then train the AI system to generate and send large volumes of malicious traffic to the targeted website or network, overwhelming it and causing it to become unavailable to legitimate users. How enterprises are defending themselves with AI and ML Defending an enterprise successfully with AI and ML must start by identifying the obstacles to achieving real-time telemetry data across every endpoint in an enterprise. “What we need to do is to be ahead of the bad guys. We can evaluate a massive amount of data at lightning speed, so we can detect and quickly respond to anything that may happen,” says Monique Shivanandan, CISO at HSBC. Most IT executives (93%) are already using or considering implementing AI and ML to strengthen their cybersecurity tech stacks. CISOs and their teams are particularly concerned about machine-based cyberattacks because such attacks can adapt faster than enterprises’ defensive AI can react. According to a study by BCG, 43% of executives have reported increased awareness of machine-speed attacks. Many executives believe they cannot effectively respond to or prevent advanced cyberattacks without using AI and ML. With the balance of power in AI and ML attack techniques leaning toward cybercriminals and nation-states, enterprises rely on their cybersecurity providers to fast-track AI and ML next-gen solutions. The goal is to use AI and ML to defend enterprises while ensuring the technologies deliver business value and are feasible. Here are the defensive areas where CISOs are most interested in seeing progress: Opting for transaction fraud detection early when adopting AI and ML to defend against automated attacks CISOs have told VentureBeat that the impact of economic uncertainty and supply chain shortages has led to an increase in the use of AI- and ML-based transaction fraud detection systems. These systems use machine learning techniques to monitor real-time payment transactions and identify anomalies or potentially fraudulent activity. AI and ML are also used to identify login processes and prevent account takeovers, a common form of online retail fraud. Fraud detection and identity spoofing are becoming related as CISOs and CIOs seek a single, scalable platform to protect all transactions using AI. Leading vendors in this field include Accertify, Akamai, Arkose Labs, BAE Systems, Cybersource, IBM, LexisNexis Risk Solutions, Microsoft and NICE Actimize. Defending against ransomware, a continuing high priority CISOs tell VentureBeat their goal is to use AI and ML to achieve a multilayered security approach that includes a combination of technical controls, employee education and data backup. Required capabilities for AL- and ML-based product suites include identifying ransomware, blocking malicious traffic, identifying vulnerable systems, and providing real-time analytics based on telemetry data captured from diverse systems. 
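Stepping back to the transaction fraud detection approach described above, the anomaly-scoring idea behind it (and behind the real-time telemetry analytics just mentioned) can be sketched briefly. The features, synthetic data and thresholds below are invented for illustration and assume scikit-learn; production systems use far richer signals plus labeled feedback loops, and such scores typically feed a review queue rather than blocking transactions outright.

```python
# Minimal sketch of anomaly scoring over transaction-like telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" transactions: [amount in dollars, hour of day, new-device flag]
normal = np.column_stack([
    rng.lognormal(3.5, 0.6, 5000),    # typical purchase amounts
    rng.normal(14, 4, 5000) % 24,     # mostly daytime activity
    rng.binomial(1, 0.05, 5000),      # rarely from an unseen device
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score two incoming transactions: one ordinary, one large, late-night, new-device.
incoming = np.array([[40.0, 13.0, 0.0],
                     [4200.0, 3.0, 1.0]])
scores = model.decision_function(incoming)   # lower = more anomalous
flags = model.predict(incoming)              # -1 flags a likely anomaly

for tx, score, flag in zip(incoming, scores, flags):
    print(tx, f"score={score:.3f}", "FLAG" if flag == -1 else "ok")
```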
Leading vendors include Absolute Software, VMWare Carbon Black, CrowdStrike, Darktrace, F-Secure and Sophos. Absolute Software has analyzed the anatomy of ransomware attacks and provided critical insights in its study, How to Boost Resilience Against Ransomware Attacks. Absolute Software’s analysis of ransomware attacks highlights the importance of implementing cybersecurity training, regularly updating antivirus and antimalware software, and backing up data to a separate, non-connected environment to prevent such attacks. Source: Absolute Software, How to Boost Resilience Against Ransomware Attacks Implementing AI- and ML-based systems that improve behavioral analytics and authentication accuracy Endpoint protection platform (EPP), endpoint detection and response (EDR), and unified endpoint management (UEM) systems, as well as some public cloud providers such as Amazon AWS, Google Cloud Platform and Microsoft Azure, are using AI and ML to improve security personalization and enforce least privileged access. These systems use predictive AI and ML to analyze patterns in user behavior and adapt security policies and roles in real time, based on factors such as login location and time, device type and configuration, and other variables. This approach has improved security and reduced the risk of unauthorized access. Leading providers include Blackberry Persona, Broadcom, CrowdStrike, CyberArk, Cybereason, Ivanti, SentinelOne, Microsoft, McAfee, Sophos and VMWare Carbon Black. Combining ML and natural language processing (NLP) to discover and protect endpoints Attack service management (ASM) systems are designed to help organizations manage and secure their digital attack surface, which is the sum of all the vulnerabilities and potential entry points attackers use for gaining network access. ASM systems typically use various technologies, including AI and ML, to analyze an organization’s assets, identify vulnerabilities and provide recommendations for addressing them. Gartner’s 2022 Innovation Insight for Attack Surface Management report explains that attack surface management (ASM) consists of external attack surface management (EASM), cyberasset attack surface management (CAASM) and digital risk protection services (DRPS). The report also predicts that by 2026, 20% of companies (versus 1% in 2022) will have a high level of visibility (95% or more) of all their assets, prioritized by risk and control coverage, through implementing CAASM functionality. Leading vendors in this area are combining ML algorithms and NLP techniques to discover, map and define endpoint security plans to protect every endpoint in an organization. Automating indicators of attack (IOAs) using AI and ML to thwart intrusion and breach attempts AI-based indicators of attack (IOA) systems strengthen existing defenses by using cloud-based ML and real-time threat intelligence to analyze events as they occur and dynamically issue IOAs to the sensor. The sensor then compares the AI-generated IOAs (behavioral event data) with local and file data to determine whether they are malicious. According to CrowdStrike, its AI-based IOAs operate alongside other layers of sensor defense, such as sensor-based ML and existing IOAs. They are based on a common platform developed by the company over a decade ago. These IOAs have effectively identified and prevented real-time intrusion and breach attempts based on adversary behavior. 
These AI-powered IOAs use ML models trained with telemetry data from CrowdStrike Security Cloud and expertise from the company’s threat-hunting teams to analyze events in real time and identify potential threats. These IOAs are analyzed using AI and ML at machine speed, providing the accuracy, speed and scale organizations need to prevent breaches. One of the key features of CrowdStrike’s use of AI in IOAs is the ability to collect, analyze and report on a network’s telemetry data in real time, providing a continuously recorded view of all network activity. This has proven an effective approach to identifying potential threats. Source: CrowdStrike. Relying on AI and ML to improve UEM protection for every device and machine identity UEM systems rely on AI, ML and advanced algorithms to manage machine identities and endpoints in real time, enabling the installation of updates and patches necessary to keep each endpoint secure. Absolute Software’s Resilience platform, the industry’s first self-healing zero-trust platform, is notable for its asset management, device and application control, endpoint intelligence, incident reporting and compliance, according to G2 Crowd’s ratings. >>Don’t miss our special issue: Zero trust: The new security paradigm.<< Ivanti Neurons for UEM uses AI-enabled bots to find and automatically update machine identities and endpoints. This self-healing approach combines AI, ML and bot technologies to deliver unified endpoint and patch management at scale across a global enterprise customer base. Other highly rated UEM vendors, according to G2 Crowd, include CrowdStrike Falcon and VMWare Workspace ONE. Containing the AI and ML cybersecurity threat in the future Enterprises are losing the AI war because cybercriminal gangs and nation-states are faster to innovate and quicker to capitalize on longstanding enterprise weaknesses, starting with unprotected or overconfigured endpoints. CISOs tell VentureBeat they’re working with their top cybersecurity partners to fast-track new AI- and ML-based systems and platforms to meet the challenge. With the balance of power leaning toward attackers and cybercriminal gangs, cybersecurity vendors need to accelerate roadmaps and provide next-generation AI and ML tools soon. Kevin Mandia, CEO of Mandiant, observed that the cybersecurity industry has a unique and valuable role to play in national defense. He observed that while the government protects the air, land and sea, private industry should see itself as essential to protecting the cyberdomain of the free world. “I always like to leave people with that sense of obligation that we are on the front lines, and if there is a modern war that impacts the nation where you’re from, you’re going to find yourself in a room during that conflict, figuring out how to best protect your nation,” Mandia said during a “fireside chat” with George Kurtz at CrowdStrike’s Fal.Con conference earlier this year. “I’ve been amazed at the ingenuity when someone has six months to plan their attack on your company. So always be vigilant.”
| Emerging Technologies
The views expressed by contributors are their own and not the view of The Hill by Zhanna L. Malekos Smith, Opinion Contributor 01/23/23 07:30 AM ET The novelty of replacing one’s “home key” with a microchip implant is gaining worldwide interest, but there’s another more compelling story under the surface. Why is this technology — an integrated circuit the size of a grain of rice — reviled by some and celebrated by self-proclaimed human cyborgs? Arguably, William Shakespeare’s “Hamlet” offers the most elegant explanation: “Nothing is neither good nor bad, but thinking makes it so.” However, it would be prudent to tell Prince Hamlet that not all microchip implants are designed alike, and understanding the technological design enables one to better evaluate the competing viewpoints. Today, more than 50,000 people have elected to have a subdermal chip surgically inserted between the thumb and index finger, serve as their new swipe key, or credit card. In Germany, for example, more than 2,000 Germans have opted to receive these implants; one man even used it to store a link to his last will and testament. As chip storage capacity increases, perhaps users could even link to the complete works of Shakespeare. Chip implants are just one of the many types of emerging technologies in the Internet of Things (IoT) — an expanding digital cosmos of wirelessly connected internet-enabled devices. Some technologists are worried, however, that hackers targeting IoT vulnerabilities in sensors and network architecture also may try to hack chip implants. Radio-frequency identification (RFID) chips are identifying transponders that typically carry a unique identification number and can be tagged with user data such as health records, social media profiles, and financial information. RFID chips are passive transponders, which means the digital reader must be positioned a few inches away from the user’s microchipped hand to communicate. In contrast, near field communication (NFC) chips use electromagnetic radio fields to wirelessly communicate to digital readers in close proximity, much like smartphones and contactless credit cards. A benefit of NFC over RFID is international use, reasons Biohax: “With the power of existing infrastructure and the wide variety of services and products already supporting the NFC standard globally, one huge benefit of ours is that we overlap virtually any private or public sector already using NFC or mobile tech.” According to a 2021 United Kingdom-based consumer survey by Propeller Insights on digital payment trends in Europe, 51 percent of the approximately 2,000 respondents said they would consider getting a chip implant to pay for services. This technology is especially popular in Sweden as a substitute for paying with cash. “Only 1 in 4 people living in Sweden use cash at least once a week,” writes NPR. More than 4,000 Swedes have replaced keycards for chip implants to use for gym access, e-tickets on railway travel, and to store emergency contact information. The technology also may offer increased mobility for people with physically limiting health conditions, such as rheumatoid arthritis, multiple sclerosis, and motor neurone disease, according to BioTeq, a UK-based tech firm. 
For example, “a wheelchair-mobile person can approach a door and the reader will unlock the door, avoiding the need for keys that the person may not be able to use for themselves.” BioTeq is also exploring providing microchip services for those who are visually impaired to create “trigger audible or touch-sensory signals” in the home. Despite these benefits, the Bulletin of the Atomic Scientists avers that the main challenges to chip implants are security, safety and privacy. A general security concern with NFC technology is that it could allow third parties to eavesdrop on device communication, corrupt data, or wage interception attacks, warns NFC.org. Interception attacks are when someone intercepts the data transmitted between two NFC devices and then alters the data as it’s being relayed. Like any device, these personal chips have security vulnerabilities and potentially could be hacked, even if embedded underneath the skin. With regard to health safety concerns, a 2020 study with the American Society for Surgery of the Hand indicated that RFID chip implants may carry potential health risks such as adverse tissue reaction and incompatibility with some magnetic resonance imaging (MRI) technology. Several social scientists also are apprehensive about the risks to privacy and human rights if the body becomes a type of “human barcode.” According to microbiologist Ben Libberton at Stockholm’s Karolinska Institute, chip implants can reveal sensitive personal information about your health and even “data about your whereabouts, how often you’re working, how long you’re working, if you’re taking toilet breaks and things like that.” Interestingly, the first person to implant a microchip in himself was professor Kevin Warwick of Reading University in 1998; he wanted to determine whether his computer could wirelessly track his movements at work. To date, at least 10 state legislatures in the United States have passed statutes to ban employers from requiring employees to receive human microchip implants. The most recent state was Indiana, which prohibited employers from requiring employees to be chipped as a condition of employment and discriminating against job applicants who refuse the implant. Nevada’s legislation is the most restrictive — although not a total ban, as proposed in 2017, Nevada Assembly Bill 226 prohibits an officer or employee of Nevada from “establishing a program that authorizes a person to voluntarily elect to undergo the implantation of such a microchip or permanent identification marker.” As the impact and influence of chip implants increases in the United States, it will raise complex questions for state legislatures and courts to consider, such as third-party liability for cybersecurity, data ownership rights, and Americans’ rights under the Fourth Amendment and the protection of sensitive digital data under the Supreme Court’s 2018 decision in Carpenter v. United States. Microchips offer alluring benefits of convenience and mobility, but they carry potential cybersecurity, privacy and health risks. The onus cannot be on the law alone, however, to protect consumers. Instead, it is a shared responsibility among consumers to understand their data rights as part of digital literacy, and among technologists to promote cybersecurity-informed engineering at each phase of product development. Further, lawmakers must be mindful of the delicate balance between protecting the flame of technological innovation and advancement, while guarding against misapplication and abuse. 
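For illustration, the door-unlock flow described at the top of this piece reduces to a short allow-list check. Everything in the sketch below is hypothetical: read_tag_uid() and unlock_door() stand in for whatever a real reader and lock controller expose, and, given the interception concerns discussed above, a real deployment would use challenge-response authentication rather than trusting a static identifier.

```python
# Hypothetical sketch of a chip-based door-unlock check (not a real reader API).
import hmac, hashlib, time

AUTHORIZED = {"04a3b2c1d0e5f6"}          # hypothetical allow-listed tag IDs
LOG_KEY = b"rotate-me-regularly"         # shared secret for tamper-evident logging

def read_tag_uid() -> str:
    """Placeholder for whatever the NFC/RFID reader hardware returns."""
    return "04a3b2c1d0e5f6"

def unlock_door() -> None:
    print("door unlocked")

def access_attempt() -> None:
    uid = read_tag_uid()
    allowed = uid in AUTHORIZED
    # Log a keyed hash instead of storing the raw identifier in plain text.
    digest = hmac.new(LOG_KEY, f"{uid}|{time.time()}".encode(), hashlib.sha256).hexdigest()
    print(f"access {'granted' if allowed else 'denied'} (log {digest[:12]})")
    if allowed:
        unlock_door()

access_attempt()
```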
As technology historian Melvin Kranzberg noted, “Technology is neither good nor bad, nor is it neutral.” Zhanna L. Malekos Smith is a nonresident adjunct fellow with the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS) in Washington and an assistant professor in the Department of Systems Engineering at the U.S. Military Academy at West Point, where she also is a Cyber Law and Policy Fellow with the Army Cyber Institute and affiliate faculty with the Modern War Institute. The opinions expressed here are solely those of the author and not those of CSIS, the U.S. government or Department of Defense. | Emerging Technologies |
In April 2023, following much speculation, President Biden officially launched his re-election campaign via video announcement. On the very same day, the Republican National Committee (RNC) responded with its own thirty-second advertisement, which envisioned four more years under President Biden with greater crime, open borders, war with China, and economic collapse. It seems like a run-of-the-mill political attack at first glance, but in reality, is the first national campaign advertisement made up of images entirely generated by artificial intelligence (AI). And while the RNC has been transparent about its use of AI, it has nonetheless dragged the electorate into a new era of political advertising, with few guardrails and serious potential implications for mis- and disinformation.
In their 2018 Foreign Affairs article, “Deepfakes and the New Disinformation War,” Robert Chesney and Danielle Citron predicted that the “information cascade” of social media, declining trust in traditional media, and the increasing believability of deep fakes would create a perfect storm to spread mis- and disinformation. Their forecasts have already begun to play out. In January, a deep fake video circulated on Twitter that appeared to show President Biden announcing that he had re-introduced the draft and would be sending Americans to fight in Ukraine. The clip initially displayed a caption describing it as an AI "imagination,” but quickly lost the disclaimer through circulation, showing just how easily even transparently shared AI use can turn into misinformation.
More on:
Though Chesney and Citron focused on the geopolitical threats of deep fakes and large learning models (in the hands of Russia or terrorist organizations), it is not difficult to imagine how these same elements might go off the rails with political advertising. Even without AI-generated imagery, there has been something of a race to the bottom to produce the most provocative campaign ads. This is far from the first use of digitally enhanced images in campaign ads either. In 2015, researchers found that the McCain campaign used images of then-candidate Barack Obama in attack ads that “appear to have been manipulated and/or selected in a way that produces a darker complexion for Obama.”
As we have discussed in previous articles, these emerging technologies are likely to be most effectively used against vulnerable populations, such as women, people of color, and members of the LGBTQI+ community running for office. In a study of the 2020 congressional election cycle, a report from the Center for Democracy and Technology found that women of color candidates were twice as likely to be targets of mis- and disinformation campaigns online. In India, deepfake technology has been weaponized against female politicians and journalists, with many reporting that their photos have been placed onto pornographic images and videos and circulated on the internet. AI generated images and deep fakes in political advertisements could easily be used to sexualize female politicians, opinion makers, and other leaders, which research has shown can undermine women's credibility in campaigns.
There also arises the risk of what Citron and Chesney call the “liars dividend.” Increasingly realistic fake videos, audios, and photos could allow politicians to avert accountability for any problematic soundbite or video, claiming that it should have been obvious to viewers all along that such materials were AI-generated or a deepfake. In an era in which politicians can evade accountability due to negative partisanship, the addition of the liar’s dividend could provide the ultimate “get out of jail free” card.
Social media platforms have begun to roll out new policies to address AI generated content and deepfakes but have struggled to integrate these rules with existing policies on political content. Meta has banned deepfakes on its platforms yet remains steadfast in its policy of not fact-checking politicians. TikTok has banned deepfakes of all private figures, but only bans them for public figures if specifically endorsing products or violating other terms of the app (such as promoting hate speech). Deepfakes of public figures for the purpose of “artistic or educational content” though, are permitted.
In response to the RNC ad, Representative Yvette Clark of New York introduced the “REAL Political Advertisements Act” requiring disclosures for any use of AI-generated content in political advertisements. For its part, the Biden administration hosted tech CEOs at the White House earlier this month and released an action plan to “promote responsible AI innovation.” Last week, the Senate Judiciary Privacy, Technology, and the Law Subcommittee held a hearing on potential oversight of AI technology. Though many have lamented that there has not been more of a response from government to regulate the potential threats of AI more broadly, with another election cycle already beginning, and AI’s foray into politicians' own backyard it could light a necessary fire.
More on:
Alexandra Dent, research associate at the Council on Foreign Relations, contributed to the development of this blog post. | Emerging Technologies |
Here’s the basic problem for conservation at a global level: food production, biodiversity and carbon storage in ecosystems are competing for the same land. As humans demand more food, so more forests and other natural ecosystems are cleared, and farms intensify and become less hospitable to many wild animals and plants. Therefore global conservation, currently focused on the COP15 summit in Montreal, will fail unless it addresses the underlying issue of food production. Fortunately, a whole raft of new technologies are being developed that make a system-wide revolution in food production feasible. According to recent research by one of us (Chris), this transformation could meet increased global food demands by a growing human population on less than 20% of the world’s existing farmland. Or in other words, these technologies could release at least 80% of existing farmland from agriculture in about a century. Around four-fifths of the land used for human food production is allocated to meat and dairy, including both range lands and crops specifically grown to feed livestock. Add up the whole of India, South Africa, France and Spain and you have the amount of land devoted to crops that are then fed to livestock. Brazil’s enormous soy farms mostly produce food for animals not humans. lourencolf / shutterstock Despite growing numbers of vegetarians and vegans in some countries, global meat consumption has increased by more than 50% in the past 20 years and is set to double this century. As things stand, producing all that extra meat will mean either converting even more land into farms, or cramming even more cows, chickens and pigs into existing land. Neither option is good for biodiversity. Beef and lamb might contain plenty of protein but they use vast amounts of land. OurWorldInData (data: Poore & Nemecek (2018)), CC BY-SA Meat and dairy production is already an unpleasant business. For instance, most chickens are grown in high density feeding operations, and pork, beef and especially dairy farming is going the same way. Current technologies are cruel, polluting and harmful to biodiversity and the climate – don’t be misled by cartoons of happy cows with daisies protruding from their lips. Unless food production is tackled head-on, we are left resisting inevitable change, often with no hope of long-term success. We need to tackle the cause of biodiversity change. The principal global approach to climate change is to focus on the cause and minimise greenhouse gas emissions, not to manufacture billions of parasols (though we may need these too). The same is required for biodiversity. So, how can we do this? Cellular agriculture provides an alternative, and could be one of this century’s most promising technological advancements. Sometimes called “lab-grown food”, the process involves growing animal products from real animal cells, rather than growing actual animals. If growing meat or milk from animal cells sounds strange or icky to you, let’s put this into perspective. Imagine a brewery or cheese factory: a sterile facility filled with metal vats, producing large volumes of beer or cheese, and using a variety of technologies to mix, ferment, clean and monitor the process. Swap the barley or milk for animal cells and this same facility becomes a sustainable and efficient producer of dairy or meat products. Animal cruelty would be eliminated and, with no need for cows wandering around in fields, the factory would take up far less space to produce the same amount of meat or milk. 
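As a rough check of the land comparison made earlier in this piece, the sketch below adds up the four countries named; the country areas are approximate figures supplied here, not values from the article, and the 80% figure simply restates the article's headline claim.

```python
# Quick check: cropland grown for livestock feed vs. India + South Africa + France + Spain.
# Country areas are approximate (assumed here, not from the article).
country_area_km2 = {
    "India": 3_287_000,
    "South Africa": 1_219_000,
    "France": 551_000,        # metropolitan France
    "Spain": 506_000,
}
feed_cropland_km2 = sum(country_area_km2.values())
print(f"Combined area: ~{feed_cropland_km2 / 1e6:.1f} million km^2 "
      f"(~{feed_cropland_km2 * 100 / 1e6:.0f} million hectares) of cropland grown for animal feed")

# The article's headline claim: if new technologies meet demand on less than
# 20% of existing farmland, at least 80% could be released for other uses.
released_share = 1 - 0.20
print(f"Farmland that could be released: at least {released_share:.0%}")
```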
The cultivation room at California-based Upside Foods which uses cellular agriculture to produce meat. David Kay / Upside Foods Other emerging technologies include microbial protein production, where bacteria use energy derived from solar panels to convert carbon dioxide and nitrogen and other nutrients into carbohydrates and proteins. This could generate as much protein as soybeans but in just 7% of the area. These could then be used as protein food additives (a major use of soy) and animal feed (including for pets). It is even possible to generate sugars and carbohydrates using desalination or through extracting CO₂ from the atmosphere, all without ever passing through a living plant or animal. The resulting sugars are chemically the same as those derived from plants but would be generated in a tiny fraction of the area required by conventional crops. What to do with old farmland These new technologies can have a huge impact even if demand keeps growing. Even though Chris’s research is based on the assumption that global meat consumption will double, it nonetheless suggests that at least 80% of farmland could be released to be used for something else. That land might become nature reserves or be used to store carbon, for example in forests or in the waterlogged soils of peat bogs. It could be used to grow sustainable building materials, or simply to produce more human-edible crops, among other uses. Gone too will be industrial livestock systems that produce huge volumes of manure, bones, blood, guts, antibiotics and growth hormones. Thereafter, any remaining livestock farming could be carried out in a compassionate manner. Longhorn cattle on a rewilding project in England: if we got most of our protein and carbs through new technologies, this sort of compassionate and wildlife-friendly farming could be scaled up. Chris Thomas, Author provided Since there would be less pressure on the land, there would be less need for chemicals and pesticides and crop production could become more wildlife-friendly (global adoption of organic farming is not feasible at present because it is less productive). This transition must be coupled with a full transition towards renewable energy as the new technologies require lots of power. Converting these technologies into mass-market production systems will of course be tricky. But a failure to do so is likely to lead to ever-increasing farming intensity, escalating numbers of confined animals, and even more lost nature. Avoiding this fate – and achieving the 80% farmland reduction – will require a lot of political will and a cultural acceptance of these new forms of food. It will require economic and political “carrots” such as investment, subsidies and tax breaks for desirable technologies, and “sticks” such as increased taxation and removal of subsidies for harmful technologies. Unless this happens, biodiversity targets will continue to be missed, COP after COP. | Emerging Technologies |
Abstract
Dynamic shape-morphing soft materials systems are ubiquitous in living organisms; they are also of rapidly increasing relevance to emerging technologies in soft machines [1,2,3], flexible electronics [4,5] and smart medicines [6]. Soft matter equipped with responsive components can switch between designed shapes or structures, but cannot support the types of dynamic morphing capabilities needed to reproduce natural, continuous processes of interest for many applications [7–24]. Challenges lie in the development of schemes to reprogram target shapes after fabrication, especially when complexities associated with the operating physics and disturbances from the environment can stop the use of deterministic theoretical models to guide inverse design and control strategies [25–30]. Here we present a mechanical metasurface constructed from a matrix of filamentary metal traces, driven by reprogrammable, distributed Lorentz forces that follow from the passage of electrical currents in the presence of a static magnetic field. The resulting system demonstrates complex, dynamic morphing capabilities with response times within 0.1 second. Implementing an in situ stereo-imaging feedback strategy with a digitally controlled actuation scheme guided by an optimization algorithm yields surfaces that can follow a self-evolving inverse design to morph into a wide range of three-dimensional target shapes with high precision, including an ability to morph against extrinsic or intrinsic perturbations. These concepts support a data-driven approach to the design of dynamic soft matter, with many unique characteristics.
Data availability: All data are contained within the manuscript. Raw data are available from the corresponding authors upon reasonable request.
Code availability: The codes that support the findings of this study are available from the corresponding authors upon reasonable request.
References
Rafsanjani, A., Bertoldi, K. & Studart, A. R. Programming soft robots with flexible mechanical metamaterials. Sci. Robot. 4, eaav7874 (2019).PubMed Article Google Scholar McEvoy, M. A. & Correll, N. Materials that couple sensing, actuation, computation, and communication. Science 347, 1261689 (2015).CAS PubMed Article Google Scholar Morin, S. A. et al. Camouflage and display for soft machines. Science 337, 828–832 (2012).ADS CAS PubMed Article Google Scholar Wang, C., Wang, C., Huang, Z. & Xu, S. Materials and structures toward soft electronics. Adv. Mater. 30, 1801368 (2018).Article CAS Google Scholar Rogers, J. A., Someya, T. & Huang, Y. Materials and mechanics for stretchable electronics. Science 327, 1603–1607 (2010).ADS CAS PubMed Article Google Scholar Cianchetti, M., Laschi, C., Menciassi, A. & Dario, P. Biomedical applications of soft robotics. Nat. Rev. Mater. 3, 143–153 (2018).ADS Article Google Scholar Boley, J. W., Rees, W., Lissandrello, C., Horenstein, M. N. & Mahadevan, L.
Shape-shifting structured lattices via multimaterial 4D printing. Proc. Natl Acad. Sci. USA 116, 201908806 (2019).Article CAS Google Scholar Liu, K., Hacker, F. & Daraio, C. Robotic surfaces with reversible, spatiotemporal control for shape morphing and object manipulation. Sci. Robot. 6, eabf5116 (2021).PubMed Article Google Scholar Guo, Y., Zhang, J., Hu, W., Khan, M. T. A. & Sitti, M. Shape-programmable liquid crystal elastomer structures with arbitrary three-dimensional director fields and geometries. Nat. Commun. 12, 5936 (2021).ADS CAS PubMed PubMed Central Article Google Scholar Hajiesmaili, E. & Clarke, D. R. Reconfigurable shape-morphing dielectric elastomers using spatially varying electric fields. Nat. Commun. 10, 183 (2019).ADS PubMed PubMed Central Article CAS Google Scholar Gladman, A. S., Matsumoto, E. A., Nuzzo, R. G., Mahadevan, L. & Lewis, J. A. Biomimetic 4D printing. Nat. Mater. 15, 413–418 (2016).ADS PubMed Article CAS Google Scholar Yu, C. et al. Electronically programmable, reversible shape change in two‐and three‐dimensional hydrogel structures. Adv. Mater. 25, 1541–1546 (2013).CAS PubMed Article Google Scholar Zhang, H., Guo, X., Wu, J., Fang, D. & Zhang, Y. Soft mechanical metamaterials with unusual swelling behavior and tunable stress-strain curves. Sci. Adv. 4, eaar8535 (2018).ADS PubMed PubMed Central Article CAS Google Scholar Li, S. et al. Liquid-induced topological transformations of cellular microstructures. Nature 592, 386–391 (2021).ADS CAS PubMed Article Google Scholar Pikul, J. et al. Stretchable surfaces with programmable 3D texture morphing for synthetic camouflaging skins. Science 358, 210–214 (2017).ADS CAS PubMed Article Google Scholar Barnes, M. et al. Reactive 3D printing of shape-programmable liquid crystal elastomer actuators. ACS Appl. Mater. Interfaces 12, 28692–28699 (2020).CAS PubMed Article Google Scholar Ford, M. J. et al. A multifunctional shape-morphing elastomer with liquid metal inclusions. Proc. Natl Acad. Sci. USA 116, 21438–21444 (2019).ADS CAS PubMed PubMed Central Article Google Scholar Alapan, Y., Karacakol, A. C., Guzelhan, S. N., Isik, I. & Sitti, M. Reprogrammable shape morphing of magnetic soft machines. Sci. Adv. 6, eabc6414 (2020).ADS CAS PubMed PubMed Central Article Google Scholar Kim, Y., Yuk, H., Zhao, R., Chester, S. A. & Zhao, X. Printing ferromagnetic domains for untethered fast-transforming soft materials. Nature 558, 274–279 (2018).ADS CAS PubMed Article Google Scholar Cui, J. et al. Nanomagnetic encoding of shape-morphing micromachines. Nature 575, 164–168 (2019).ADS CAS PubMed Article Google Scholar Ze, Q. et al. Magnetic shape memory polymers with integrated multifunctional shape manipulation. Adv. Mater. 32, 1906657 (2020).CAS Article Google Scholar Mao, G. et al. Soft electromagnetic actuators. Sci. Adv. 6, eabc0251 (2020).ADS CAS PubMed PubMed Central Article Google Scholar Zhang, F. et al. Rapidly deployable and morphable 3D mesostructures with applications in multimodal biomedical devices. Proc. Natl Acad. Sci. USA 118, e2026414118 (2021).CAS PubMed PubMed Central Article Google Scholar Xia, X. et al. Electrochemically reconfigurable architected materials. Nature 573, 205–213 (2019).ADS CAS PubMed Article Google Scholar Fan, Z. et al. Inverse design strategies for 3D surfaces formed by mechanically guided assembly. Adv. Mater. 32, 1908424 (2020).CAS Article Google Scholar Choi, G. P., Dudte, L. H. & Mahadevan, L. Programming shape using kirigami tessellations. Nat. Mater. 
18, 999–1004 (2019).ADS CAS PubMed Article Google Scholar Bossart, A., Dykstra, D. M., van der Laan, J. & Coulais, C. Oligomodal metamaterials with multifunctional mechanics. Proc. Natl Acad. Sci. USA 118, e2018610118 (2021).CAS PubMed PubMed Central Article Google Scholar Baek, C., Martin, A. G., Poincloux, S., Chen, T. & Reis, P. M. Smooth triaxial weaving with naturally curved ribbons. Phys. Rev. Lett. 127, 104301 (2021).ADS MathSciNet CAS PubMed Article Google Scholar Coulais, C., Sabbadini, A., Vink, F. & van Hecke, M. Multi-step self-guided pathways for shape-changing metamaterials. Nature 561, 512–515 (2018).ADS CAS PubMed Article Google Scholar Guseinov, R., McMahan, C., Pérez, J., Daraio, C. & Bickel, B. Programming temporal morphing of self-actuated shells. Nat. Commun. 11, 237 (2020).ADS CAS Article PubMed PubMed Central Google Scholar Kaspar, C., Ravoo, B. J., van der Wiel, W. G., Wegner, S. V. & Pernice, W. H. P. The rise of intelligent matter. Nature 594, 345–355 (2021).ADS CAS PubMed Article Google Scholar Hu, W., Lum, G. Z., Mastrangeli, M. & Sitti, M. Small-scale soft-bodied robot with multimodal locomotion. Nature 554, 81–85 (2018).ADS CAS PubMed Article Google Scholar Overvelde, J. T., Weaver, J. C., Hoberman, C. & Bertoldi, K. Rational design of reconfigurable prismatic architected materials. Nature 541, 347–352 (2017).ADS CAS PubMed Article Google Scholar Waters, J. T. et al. Twist again: dynamically and reversibly controllable chirality in liquid crystalline elastomer microposts. Sci. Adv. 6, eaay5349 (2020).ADS CAS PubMed PubMed Central Article Google Scholar Wang, Y. et al. Repeatable and reprogrammable shape morphing from photoresponsive gold nanorod/liquid crystal elastomers. Adv. Mater. 32, 2004270 (2020).CAS Article Google Scholar Xu, C., Yang, Z. & Lum, G. Z. Small-scale magnetic actuators with optimal six degrees-of-freedom programming temporal morphing of self-actuated shells. Adv. Mater. 33, 2100170 (2021).CAS Article Google Scholar Phelan, M. F. III, Tiryaki, M. E., Lazovic, J., Gilbert, H. & Sitti, M. Heat‐mitigated design and lorentz force‐based steering of an MRI‐driven microcatheter toward minimally invasive surgery. Adv. Sci. 9, 2105352 (2022).Article Google Scholar Kotikian, A. et al. Innervated, self‐sensing liquid crystal elastomer actuators with closed loop control. Adv. Mater. 33, 2101814 (2021).CAS Article Google Scholar Wang, X. et al. Freestanding 3D mesostructures, functional devices, and shape-programmable systems based on mechanically induced assembly with shape memory polymers. Adv. Mater. 31, 1805615 (2019).Article CAS Google Scholar Wang, Y., Li, L., Hofmann, D., Andrade, J. E. & Daraio, C. Structured fabrics with tunable mechanical properties. Nature 596, 238–243 (2021).ADS CAS PubMed Article Google Scholar Zhang, B. et al. Short-term oscillation and falling dynamics for a water drop dripping in quiescent air. Phys. Rev. Fluids 4, 123604 (2019).ADS Article Google Scholar Tang, C. et al. Dynamics of droplet impact on solid surface with different roughness. Int. J. Multiph. Flow 96, 56–69 (2017).CAS Article Google Scholar Download referencesAcknowledgementsY.B., Y.P. and Xiaoyue Ni acknowledge funding support from the Pratt School of Engineering and School of Medicine at Duke University. Y.H. acknowledges support from the NSF (grant no. CMMI 16-35443). 
This work was performed in part at the Duke University Shared Materials Instrumentation Facility, a member of the North Carolina Research Triangle Nanotechnology Network, which is supported by the National Science Foundation (award no. ECCS-2025064) as part of the National Nanotechnology Coordinated Infrastructure. Xiaoyue Ni thanks L. Bridgeman, J. Lu and Z. Wang for helpful discussions.
Author information
These authors contributed equally: Yun Bai, Heling Wang
Authors and Affiliations:
Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC, USA: Yun Bai, Yuxin Pan & Xiaoyue Ni
Department of Civil and Environmental Engineering, Northwestern University, Evanston, IL, USA: Heling Wang, Yeguang Xue, Yonggang Huang & John A. Rogers
Department of Mechanical Engineering, Northwestern University, Evanston, IL, USA: Heling Wang, Yeguang Xue, Yiyuan Yang, Yonggang Huang & John A. Rogers
Department of Materials Science and Engineering, Northwestern University, Evanston, IL, USA: Heling Wang, Yeguang Xue, Yonggang Huang & John A. Rogers
Laboratory of Flexible Electronics Technology, Tsinghua University, Beijing, China: Heling Wang
Institute of Flexible Electronics Technology of THU Jiaxing, Zhejiang, China: Heling Wang
Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, IL, USA: Jin-Tae Kim, Xinchen Ni, Tzu-Li Liu, Mengdi Han, Yonggang Huang, John A. Rogers & Xiaoyue Ni
Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China: Mengdi Han
Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA: John A. Rogers
Department of Neurological Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA: John A. Rogers
Department of Chemistry, Northwestern University, Evanston, IL, USA: John A. Rogers
Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, USA: John A. Rogers
Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, USA: Xiaoyue Ni
Authors: Yun Bai, Heling Wang, Yeguang Xue, Yuxin Pan, Jin-Tae Kim, Xinchen Ni, Tzu-Li Liu, Yiyuan Yang, Mengdi Han, Yonggang Huang, John A. Rogers & Xiaoyue Ni
Contributions: Y.B., H.W., Y.H., J.A.R. and Xiaoyue Ni conceived the idea and designed the research. Y.B. and Y.Y. fabricated the samples. Y.B., Y.X., Y.P., J.-T.K., Xinchen Ni, T.-L.L., M.H. and Xiaoyue Ni performed the experiments. H.W. and Y.H. performed the finite-element modelling and theoretical study. Y.B. and Xiaoyue Ni analysed the experimental data. Y.B., H.W., Y.H., J.A.R. and Xiaoyue Ni wrote the manuscript, with input from all co-authors.
Corresponding authors: Correspondence to Heling Wang, Yonggang Huang, John A. Rogers or Xiaoyue Ni.
Ethics declarations: Competing interests: The authors declare no competing interests.
Peer review Peer review information Nature thanks Guo Zhan Lum and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Additional informationPublisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.Extended data figures and tablesExtended Data Fig. 1 The analytical model of the electromagnetic response of a serpentine beam corroborated by FEA study and experimental characterizations.a, Schematic illustration (top and cross-sectional views) of the initial state of a serpentine beam (beam width H = 1.20 mm, serpentine period λ = 0.18 mm). b, Analytical model and FEA prediction of the maximum out-of-plane displacement u dependent on the combination of electric current I, magnetic field B, and material and geometry parameters. c, Schematic illustration of a single beam, placed in a magnetic field B and carrying a current density J with an out-of-plane displacement u, under an electromagnetic force FEM = J × B. d, Optical images of a representative serpentine beam (side view) driven to the maximum displacement u. If exceeding the elastic limit, an irreversible deformation u’ will remain after unloading. e–g, Experimental characterizations of mechanical (e, f) and thermal (g) behaviors of a single beam under current-controlled electromagnetic actuation (B = 224 mT) in comparison with the theoretical predictions. Scale bar, 1 mm.Extended Data Fig. 2 Experimental validation of the scaling law using a single serpentine beam.a, Top-view optical images of serpentine beams with the same beam length (L = 11 mm) but different beam widths (H = 0.84 mm, 1.20 mm, 1.56 mm). In a magnetic field of 224 mT, current-controlled experiments show that the electromagnetic responses of the beams with various PI thicknesses (hPI = 5.0 μm, 7.5 μm, 12.0 μm) agree with the analytical solutions. b, Experimentally measured electromagnetic responses follow the scaling law predicted by the analytical model. c, Side-view optical images of a serpentine beam of the design presented in the main text (H = 1.20 mm, hPI = 7.5 μm) actuated in a magnetic field of 224 mT (left) and a tailored serpentine beam (H = 1.56 mm, hPI = 5.0 μm) actuated in a reduced magnetic field of 25 mT (right). Applying the same current (15 mA) deforms the two beams to the same height (around 2.25 mm). d, The two beams in (c) exhibit approximately the same current-controlled mechanical behavior. Scale bars, 1 mm.Extended Data Fig. 3 Shape morphing in time-varying, non-uniform magnetic fields.a, Schematic illustration of a single serpentine beam in a non-uniform magnetic field generated by a small disk magnet (diameter D = 11.0 mm, thickness h = 5.0 mm, surface field B = 481.6 mT) moving 3-mm below the beam (ΔZ = −3 mm). b, c, Optical images of the beam (applied current I = 20 mA) changing shapes as the position of the magnet changes along X-axis (b, ΔY = 0) and Y-axis (c, ΔX = 0). Scale bars, 1 mm. d, Schematic illustration of a 4 × 4 sample in a non-uniform magnetic field generated by a pair of large magnets (D = 76.2 mm, h = 12.7 mm, surface field B = 245.1 mT) and a small magnet (D = 11.0 mm, h = 5.0 mm, surface field B = 481.6 mT) in the middle, 3.0 mm below the center of the sample. 
e, Magnetic flux density in X-direction (BX) of the approximately uniform/non-uniform field measured by a gaussmeter (GMHT201, Apex Magnets) with/without the presence of the small magnet across the center (O) along X-axis (left) and Y-axis (right). f, Experimental results (optical images and 3D reconstructed surfaces) of a 4×4 sample morphing into the same donut-like target shape via the experiment-driven self-evolving process in the uniform and the non-uniform magnetic field. Scale bars, 5 mm.Extended Data Fig. 4 Typical descent of loss function over function evaluations.a–c, For a 4×4 sample morphing into Shape I (a), III (b), IV (c) (Supplementary Note 8) through the experiment-driven approach using the gradient-based algorithm (see Methods section ‘Optimization algorithm’), the experimentally-measured loss function f(V) (with an initial value f(V = 0) in the range of 0.05-0.35) descends by ~99.5% to a steady state in 170–510 function evaluations (5-15 iterations). The 3D imaging noise is δu = 0.016 mm (Supplementary Note 14). d–f, Comparison of a global solver (pattern search algorithm) with the gradient-based algorithm for a 4×4 sample morphing into Shape IV using model-driven simulation. Subjecting the objective function to typical experimental noise (δu = 0.016 mm, Supplementary Note 14) and targeting a final loss of 0.005f(V = 0), the gradient-based algorithm finds the solution faster than the global solver (d). Both algorithms settle to a minimum loss of 0.0006f(V = 0) within 20,000 function evaluations (e). With pronounced noise (δu = 0.16 mm), the gradient descent method ends up with a local solution (0.08f(V = 0)), while the pattern search method finds the same minimum (0.0006f(V = 0)) as the case with low noise (f).Extended Data Fig. 5 Experiment-driven self-evolving process in comparison with the model-driven approach.a, Target explicit shapes and optical images of the experiment-driven morphing results of a 4×4 sample. b, 3D reconstructed surfaces overlaid with contour plots of the minimized errors. c, Histogram plots of the minimized errors for model-driven and experiment-driven outputs. Scale bars, 5 mm.Extended Data Fig. 6 Simulation of the impact of experimental noise on the optimization process.a, Comparison between the distribution of final loss f0 after 15 iterations from model-driven simulations (1,000 trials, given 3D imaging noise δu = 0.016 mm, 12-bit PWM output, and maximum current Imax = 27 mA) versus that from the experiments (97 trials), for a 4×4 sample morphing into the target shape in Fig. 3b. b, Simulation results of the final loss f0 (without imaging noise and iteration constraint) given n-bit PWM voltage control, compared with the case without actuation noise (continuous, analog voltage control). c–f, Histogram plots of the final loss f0 (1,000 simulation trials) with a decreasing 3D imaging noise δu = 0.024 mm (c), 0.016 mm (d), 0.008 mm (e) and 0.004 mm (f).Extended Data Fig. 7 The optical images of a 2 × 2 sample with modified serpentine design for amplified nonlinear mechanical behavior in response to a range of actuation voltages.a–d, Side-view images of the sample deforming out-of-plane given an increasing voltage to port 1 (Fig. 4a) given V1 = 0 V (a), 0.25 V (b), 2.75 V (c), and 3 V (d), respectively. The rate of change of u1 decreases as the actuation voltage increases. Scale bar, 5 mm.Extended Data Fig. 
8 Self-evolving shape morphing toward semi-real-time shape learning.a, Schematic illustration of a duplicated stereo-imaging setup enabling a semi-real-time control of a 4×4 sample simulating the dynamic shape-shifting of a palm surface with 4×4 markers (with inter-spacing a0 = 15 mm). b, Experimental results of the continuous semi-real-time shape learning of the palm surface with the thumb moving up. c, Morphing results of representative frames from a recording of hand making eight gestures. Scale bars, 5 mm.Extended Data Fig. 9 A 3×3 reflective sample self-evolving to achieve an optical and a structural function simultaneously.a, Representative optical images of the laser spots on the receiving screen. The target optical function is to overlap two laser spots on the receiving screen. A customized image analysis method detects the centroid coordinates of the red/green laser spots to monitor their current locations on the screen ([xr/g, yr/g]). b, The typical evolution of loss functions (Supplementary Note 16) over number of functional evaluations. The optimized loss function (fmulti(V)) is a linear combination of two parts: I) an optical loss function fopt(V) that evaluates the distance between the center of the two laser spots; II) a structural loss function fstruct(V) that evaluates the central nodal displacement error. Scale bar, 5 mm.Extended Data Fig. 10 Allowed shape (structural function) configurations of a 3×3 sample enforcing only the optical function (Fig. 5c, d).a, Allowed values of the central nodal displacement (u5) when the sample overlaps the beams (when the distance between the centroids of the laser spots is less than 0.1 mm) with three distinctive incident angles. b, Model predictions, and the ex-situ 3D imaging results of the sample (cross-sectional view) when overlapping the laser spots in the configurations with the highest, lowest, and target central displacement.
Cite this article: Bai, Y., Wang, H., Xue, Y. et al. A dynamically reprogrammable surface with self-evolving shape morphing. Nature 609, 701–708 (2022). https://doi.org/10.1038/s41586-022-05061-w
Received: 28 November 2021. Accepted: 01 July 2022. Published: 21 September 2022. Issue Date: 22 September 2022. | Emerging Technologies
- Thursday marked the third Demo Day for climate-based startups that are part of a Google accelerator program.
- The 12 companies showcased fell within three broad categories: Artificial intelligence, electric vehicle infrastructure, and providing companies with more and better data about decarbonizing their operations.
- For example, Agrology helps farmers adapt to climate change with AI, Cambio uses AI to help companies decarbonize large commercial buildings, and Voltpost converts lampposts into electric vehicle chargers.
Thursday marked the third Demo Day for the Google for Startups Accelerator: Climate Change program, where startups in the program presented the status of their startup, capping off 10 weeks of programming and mentorship from Google's robust network of in-house experts, training, and credits to use Google technology.
This year, the 12 companies mostly fell into three broad categories: Artificial intelligence, electric vehicle infrastructure, and providing companies with better data to decarbonize their operations. There are a couple exceptions: For example, Sesame Solar is decarbonizing disaster response, and Bodhi is improving the customer experience for home solar installations.
Google's startup accelerator programs are all focused on using artificial intelligence, and some have industry themes like gaming or the cloud economy, particular geographies like India or Brazil, or underrepresented founders like Black founders or Latino founders. All the programs are equity free, meaning Google does not take a stake in the companies for participating, and so far 1,100 startups have participated since the programs launched in 2016.
For this latest cohort, all of the participants had to be somewhere between their seed and series A rounds of investment, already generating revenue or with an established user base, with five employees or more, and with the potential to benefit from Google's Cloud, artificial intelligence and machine learning capabilities.
Matt Ridenour, Head of Startup Ecosystem at Google in the U.S., told CNBC he derives a sense of meaning from supporting climate change startups.
"I care about climate tech for many reasons, but most personally, having three young children, I often think about the world that they are inheriting. When I read the headlines about the dangers of the climate crisis, I feel a personal obligation to be a part of supporting innovative climate solutions to scale," Ridenour told CNBC. "This is one of the greatest gifts I believe I can offer to my children and future generations."
The programs are also good for Google business because they get early stage companies using the company's technology, giving it an early edge over competitors like Amazon, Microsoft and Apple.
"Google sees value in supporting the best startups and founders around the world. As they work with our people, products and tools, we mutually benefit. And supporting early stage companies sparks further innovation in the ecosystem, providing further opportunities for developers to build their business on Google products — like Cloud and Android for example," Ridenour told CNBC.
Google has hosted three climate change startup accelerators for North American companies in the last three years, and all 33 of the participants are still operating, a spokesperson for Google told CNBC.
Alphabet-owned Google is itself in the midst of a company-wide push to focus on improving its product offerings with artificial intelligence. Many of the companies in the latest climate change accelerator employ AI and machine learning to help with various tasks such as agricultural soil monitoring, decarbonization of commercial buildings, and improving the process of recycling textiles.
"Teams are leaning deeper into developing AI and ML models to address climate change," Ridenour told CNBC. "By partnering with emerging technologies like these, startups can have an outsized positive impact, developing solutions and innovations faster and more accurately than ever before."
Agrology helps farmers adapt to climate change by providing field-level data on smoke, drought, irrigation optimization, microclimate weather forecasts from extreme weather, pest and disease outbreaks. Also, Agrology has a system to monitor the carbon content in soil to help farmers quantify carbon sequestration they achieve with regenerative farming practices and, if they are interested, participate in the carbon credit markets.
During the Google accelerator, Agrology made its product more accurate.
"Through mentorship they received in the accelerator, Agrology was able to build a new, more efficient API that uses integrated Google Machine Learning products, increasing their training and testing dataset by over 400%, and reducing their error rate by 4x," Ridenour told CNBC. "This will help them deliver more accurate data to farmers so they can grow better and more sustainably."
Another startup within the cohort, Cambio, is using AI to help companies decarbonize large commercial buildings.
"Once companies have set their climate pledges, they find that data tracking and decarbonization across any real estate, whether it's owned or occupied, is the hardest part of their sustainability journey. Implementation remains a blackbox," Stephanie Grayson, a co-founder of Cambio, said on Thursday during the demo day.
Cambio provides a baseline carbon footprint for a building, and then uses AI based on previous building projects and recommendations from leading building scientists and data scientists to provide the customer with a path on how to get that building to net-zero. "The bottom line is we're democratizing best in class building science across the industry at large," Grayson said.
"During the accelerator, Cambio was able to connect with Google's real estate team to get direct product feedback and discuss the topic of decarbonizing buildings," Ridenour told CNBC. "Armed with Cambio's ML models, managers can plot an entire real estate portfolio's path to net zero, a near-term requirement for publicly-traded companies as part of the SEC's latest carbon emissions transparency proposal."
Another example is Refiberd, which is using spectroscopy and artificial intelligence to sort recycled textiles, remove buttons and zippers, and send processed textiles to the recycler that can best manage that particular batch of textiles.
Eugenie.AI uses artificial intelligence to help heavy manufacturers track their emissions, report that data for any relevant compliance standards and reduce those emissions with recommendations on how to solve a particular problem.
"As cars become more and more electrified, a variety of startups are tackling the massive EV industry opportunity in creative ways," Ridenour told CNBC. Indeed, 14% of new cars sold in 2022 that were electric, up from 9% in 2021 and less than 5% in 2020, according to the International Energy Agency.
Batt Genie, one of the startups Google picked for its most recent climate change cohort, was spun out of Venkat Subramanian's labs at the University of Washington and uses software to improve the function and efficiency of lithium ion batteries, which are used in consumer electronics, electric vehicles and grid storage battery applications.
The battery management system, or BMS, in a lithium ion battery monitors how much charge is left and regulates charging. Batt Genie's software aims to make the BMS more efficient and productive. If a traditional electric vehicle battery lasts for about six years, the same battery can last for 12 years with Batt Genie's improved BMS, CEO Manan Pathak said on Thursday.
Another startup within the cohort, ElectricFish Energy, is making an energy storage system that both charges electric vehicles quickly and stores cheap, clean power from the grid when it is available.
"The current state of electric grid is fundamentally broken," Anurag Kamal, CEO ElectricFish, said on Thursday. "We are the only ones who understands that EV charging is incredibly connected to feeding energy back to the grid itself," meaning that the ElectricFish device can serve as a source of backup power.
Another company working to improve EV infrastructure is Voltpost, which converts lampposts into electric vehicle chargers. Voltpost has partnered with the New York City Department of Transportation to pilot converting its lampposts into EV chargers. Voltpost is also conducting a pilot at the Detroit Smart Parking Lab in Michigan. During the accelerator, Voltpost connected with the Google Maps team to discuss whether electric vehicle charging locations could be added to Google Maps or Android Auto.
The third area of focus for the startups included in the climate change cohort was improving the data companies use to track their own emissions.
"As governments require more carbon emissions reporting, companies need better data to track their emissions. Startups are offering better analysis and tracking to help customers and consumers understand their emissions and gain actionable recommendations on how to operate more sustainably," Ridenour told CNBC.
For example, Cleartrace provides auditable emissions data for companies.
"The issue is data around the electricity space, the energy space, and the environmental reporting space, is very hard to come by, very siloed, very error prone," CEO Lincoln Payton said on Thursday. Before starting Cleartrace, Payton was the head of investment banking for BNP Paribas Americas. "I retired from that to address the biggest issue I saw, which is the quality data available in the transfer to the renewable energy world."
Cleartrace is particularly focused on measurement techniques for Scope 3 emissions -- emissions associated with a company's entire supply chain or value chain, which can be fiendishly difficult to track. It's also looking at helping companies certify how green their operations are, particularly for processes like direct air capture of CO2 emissions and hydrogen production.
Another data-focused company is Finch, which puts sustainability scores on products to help consumers make more climate-conscious shopping decisions. Finch has a browser extension that works on Amazon and Target websites and gives products a sustainability rating between zero and ten, then suggests a more sustainable alternative if applicable.
"For most of the population who believes in climate change and wants to do something about it, but doesn't necessarily have more than seven minutes to research it online, this is a perfect solution," Lizzie Horvitz, the founder and CEO of Finch said on Thursday.
Finch sells the data it gathers from consumer behavior to clients, including manufacturers and investors, Horvitz said.
"We are able to see who is buying what and why — that women, for instance, between the ages of 35 and 40 are twice as likely to buy aluminum-free deodorant as men of the same age and location," said Horvitz.
This kind of data closes what Horvitz calls the "say and do gap," meaning the difference between what consumers say they will do in a focus group, and what they actually do at checkout. | Emerging Technologies |
- China leads the US in the research of 37 out of 44 key technologies tracked by an Australian think tank.
- These critical and emerging technologies span a range of sectors including defense, space, and energy.
- China's research lead in these sectors could have implications for democratic nations.
China has a "stunning lead" ahead of the US in high-impact research across critical and emerging technologies, according to Canberra-based independent think tank Australian Strategic Policy Institute, or ASPI.
The world's second-largest economy is leading the US in researching 37 out of 44 critical and emerging technologies across the defense, space, energy, and biotechnology sectors — including research of advanced aircraft engines, drones, and electric batteries — the ASPI said in its Thursday report. The US State Department partly funded the study.
The ASPI found that for a few fields, all of the world's top 10 research institutions are in China, and they collectively generate nine times more high-impact research papers than the second-ranked country — which is the US in many cases. In particular, China has the edge in defense and space-related technologies, the ASPI said.
"Western democracies are losing the global technological competition, including the race for scientific and research breakthroughs," the report, led by the institute's senior analyst Jamie Gaida, said.
The ASPI said China's lead is the product of "deliberate design and long-term policy planning" by President Xi Jinping's administration and those who came before him.
The report's authors warned that China's research dominance in strategic sectors could have adverse implications for democratic nations.
In the immediate term, the lead could allow China to "gain a stranglehold on the global supply of certain critical technologies." In the longer run, China's leading position could propel it to excel in almost all sectors, including technologies that don't exist yet, per the ASPI.
"Unchecked, this could shift not just technological development and control but global power and influence to an authoritarian state where the development, testing and application of emerging, critical and military technologies isn't open and transparent and where it can't be scrutinized by independent civil society and media," the think-tank said.
The ASPI urges governments around the world to collaborate and invest more in research to catch up to China. It also recommended measures such as visa screening for visitors to research facilities to limit "illegal technology transfers" to China and said governments should consider "narrow limits" on the movements of researchers who are experts in strategic sectors.
"Recruiting personnel to lead research programs in, for example, defense-relevant technologies in adversarial states poses a clear threat to a country's national security," said the ASPI. It added that serious national-security risks need to be identified before movement restrictions are implemented as they need to be weighed against a person's right to freedom of movement.
The Chinese embassy in Washington, DC did not immediately respond to Insider's request for comment. | Emerging Technologies |
Published on January 11, 2023 In News By Tasmia Ansari OpenAI has signalled that it will begin to charge for its flagship AI model ChatGPT, which can generate poems and computer codes with equal ease. On its official Discord server, OpenAI said it's beginning to plan monetizing ChatGPT to ensure its long-term viability. The monetized version will be called ChatGPT Professional. The company has posted a waitlist link on its Discord server along with a range of questions on the payment preferences. The waitlist outlines ChatGPT Professional's benefits, which include no unavailability window, no throttling and an unlimited number of messages with the chatbot. The research firm said those who fill out the waitlist form may be selected to pilot ChatGPT Professional, but the program is still in the experimental stages. It won't be made widely available "at this time". On November 30, when OpenAI rolled out ChatGPT, it took less than a week for the chatbot to go viral. Moreover, reports of Microsoft investing $10 billion have been doing the rounds on the internet. However, the days of using ChatGPT for free may soon be over. OpenAI clearly stated in the form that if selected, it will reach out to you to set up a payment process and a pilot. The company also declared that this is an early experimental program subject to change. Moreover, it is not making paid pro access generally available at this time. Therefore, the cost part is still unclear. Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence. | Emerging Technologies
By the 2030s, if NASA’s and other space agencies’ plans come to fruition, astronauts and the occasional tourist group will frequently visit the moon. Not long after that, they’ll be able to live for extended periods on lunar outposts, much like astronauts do in space stations today. By the 2040s or 2050s, travelers to Mars could become common too.But what will life actually look like for these intrepid space explorers? (Or foolish guinea pigs, depending on your perspective.) Kelly and Zach Weinersmith envision the future of space settlements in A City on Mars, their new book published Tuesday. The married duo dive into details and practical challenges, including water and food supplies, maintaining people’s health, competition for the most desirable territory, raising kids, and even legal troubles in space. They imagine spats over real estate and labor rights, for example.Kelly Weinersmith is an ecologist and adjunct professor at Rice University, and Zach Weinersmith is the illustrator of the Saturday Morning Breakfast Cereal webcomic. Together, they previously wrote Soonish about emerging technologies. Now they bring their science communication and cartooning skills to bear on space colonization issues, while also debunking misconceptions about what living in a Martian civilization might be like.For example, the duo critiques boastful claims by the head of NASA and commercial space CEOs about a profitable lunar economy and Gold Rush-like race for water. “There’s just not that much water. It’s hard to get, and it’s in a tiny number of places. We did a rough estimate of the total area of water, and it’s about the size of a modest gentleman’s farm,” Zach Weinersmith says.While he likes to make jokes with his artwork, he aimed for more than that throughout this book. “The illustrations are there not just for zingers; they’re there to respond to the text and to provide illumination,” he says.Future astronaut habitats might be built underground rather than in domes on the surface.
Illustration: Zach Weinersmith
Throughout their book, the Weinersmiths lay out the pros and cons of building and living on the moon, Mars, and in free-floating space structures—with a clarity that's often lacking in the bold speeches and comments by space colonization advocates like SpaceX founder Elon Musk and Blue Origin founder Jeff Bezos. The Weinersmiths point out that during long lunar nights the moon's more frigid than Antarctica. It's also airless, low-gravity, and bombarded with space radiation, and it lacks carbon for growing plants and any valuable minerals. Mars comes with many of those challenges and more: The dead Martian dirt is filled with poisonous perchlorate, its dust storms are prone to covering outdoor equipment, including much-needed solar panels, and it's much farther away, which creates a 20-minute time delay when trying to talk to anybody back home. "So that's Mars. Most of the problems of the Moon, plus toxic dust storms and half-year flight each way. Why then do so many settlement advocates favor it as the ideal second home for humanity?" the couple writes.
Would-be space settlers will need to be well aware of these obstacles before attempting to set up camp. For example, a year or two of exposure to space radiation, or high-energy particles from the sun and galactic cosmic rays, could threaten astronauts with cancer. While someone might one day design geodesic-dome-like habitats that offer sufficient shielding, for now, the couple writes, it might make more sense to build underground. Living in a windowless basement might not be fun, but it might be necessary for the first generation of space visitors.
Sealed underground lava tubes on the moon and Mars might be useful spots to build habitable space structures.
Illustration: Zach Weinersmith
While the moon's pretty big, there aren't that many prime aboveground spots to set up a base. The Weinersmiths propose another option: lava tubes. "The moon has premium real estate, these extravagantly amazing lava tubes that we've never looked inside," Zach Weinersmith says.
More than 3 billion years ago, rivers of lava flowed on the moon. Sometimes a crust formed, cooled, and solidified above them, creating large underground caves. Mars appears to have similar caverns available too. The couple sees them as places that could be further explored and eventually built inside.
Space settlers wanting to raise a family on the moon or Mars have tough choices to make.
Illustration: Zach Weinersmith
So far, all astronauts have been adults, which means that space agencies lack an understanding of how space could affect kids. Those effects could include not only exposure to radiation, but also to growing up in low gravity and in a place where it's hard to exercise.
Because there's such extremely limited information about how space could affect childbirth and harm child development, the Weinersmiths express skepticism about moving civilization to space, at least in the near future. "The science about procreation in space is so unsystematic and basically nonexistent," Zach Weinersmith says, that any attempts in the next decades to create mass settlements "would essentially be experimentation on children. It would be so obviously unethical."
If it really wanted to, the US could legally pave most of the lunar surface for future parking lots.
Illustration: Zach Weinersmith
Few rules govern what astronauts and tourists can do in space. The Outer Space Treaty—which was hammered out in 1967, before anyone even set foot on the moon—says no one can deploy nukes or claim territory for their own. But negotiators let the next generation worry about the details. If they really wanted to, the couple writes, the first batch of 21st-century lunar explorers, who will likely come from NASA and its partners, could use the limited ice to build a huge sculpture or could melt the regolith to pave the surface into a parking lot—and it would all be legal. The US would only have to provide a consultation beforehand.
There isn't a precedent for how world powers or commercial entities could protect the environment or share equitably with others. Like low Earth orbit or international waters, the moon is a place where international law imposes few restrictions. "In all this time, there has never been an attempt to treat Moon rocks as unpossessable or as special property that humans must share," the duo writes. An effort to establish a Moon Treaty in 1979 never really got off the ground.
During the Trump administration, US officials developed a document known as the Artemis Accords, rules for exploration of the moon, Mars, comets, and asteroids. But they're not binding, and so far only 31 nations have signed on. Those guidelines allow NASA and other future lunar explorers to define safety zones around equipment and facilities. That could mean demarcating a space around a favorite ice patch or crater, and taking ownership over resources like water and minerals. One could even plant a flag, like Buzz Aldrin and Neil Armstrong did on the Sea of Tranquility—although that would be symbolic, since these rules still won't allow anyone to claim ownership of territory.
No one's allowed to claim territory in space, but some space lawyers say a space power could declare safety zones around its facilities, which others could not enter.
Illustration: Zach Weinersmith
Still, given the first-come, first-served nature of these "safety zones," within 10 or 20 years space powers could be scrambling for the best ice-filled craters and the few permanently lit spots that are most suited for harvesting solar power. "My worry is you get a situation where, say, the US and China, and maybe India—where rival powers with nuclear weapons are fighting over scraps of the moon, sort of pointlessly. Turf wars are scary. I think that's ominous," Zach Weinersmith says.
The authors also point out the need for explorers to follow space-related rules on Earth. Right now, for example, SpaceX's Starship remains grounded by the Federal Aviation Administration following a test flight explosion in April. The agency and the US Fish and Wildlife Service are conducting an environmental review of the launch site thanks to concerns about explosion debris and the "rock tornado" the launch caused. "There are rules and they obviously have effects, despite pro-settlement people who want to ignore them or try to find loopholes or hope they'll go away. But it matters so deeply for any kind of fantasy about Mars colonization," Zach Weinersmith says. | Emerging Technologies
Picture a world where computers can learn from experience, just like we do. Imagine the endless possibilities of machines capable of analyzing vast amounts of data, making informed decisions, and adapting to new situations on their own. Welcome to the realm of Machine Learning, a revolutionary branch of artificial intelligence that has transformed the way we interact with technology.
What is Machine Learning?
At its core, Machine Learning is all about enabling computers to learn and improve from experience without being explicitly programmed. Instead of following rigid instructions, ML algorithms can analyze data, identify patterns, and make data-driven decisions, offering insights and predictions that have far-reaching implications.
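To make this concrete, here is a minimal, illustrative sketch (assuming Python with the scikit-learn library installed; the study-hours data is invented purely for illustration). Rather than hand-coding a rule, we give the algorithm labelled examples and let it infer the pattern on its own:

```python
# Minimal sketch: the model learns the pass/fail pattern from labelled
# examples instead of being given an explicit "if hours > 5" rule.
from sklearn.tree import DecisionTreeClassifier

X_train = [[1], [2], [3], [4], [6], [7], [8], [9]]  # hours studied (invented data)
y_train = [0, 0, 0, 0, 1, 1, 1, 1]                  # 0 = failed, 1 = passed

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)            # the algorithm identifies the pattern itself

print(model.predict([[5.5], [2.5]]))   # predictions for unseen inputs, e.g. [1 0]
```

The same fit-then-predict pattern scales up from this toy example to the image, text, and sensor data behind the applications below.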
The Real-Life Impact of Machine Learning
Let’s embark on a journey to explore how machine learning is already shaping our world and making a positive impact on various industries.
1. Personalized Recommendations: The Power of Algorithms
Have you ever wondered how streaming services recommend the perfect movie or how e-commerce platforms suggest products tailored to your interests? Machine Learning algorithms are the magic behind these personalized recommendations, analyzing your preferences and behaviors to provide a seamless user experience.
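As a rough illustration of the idea (not any particular platform's actual system), a simple item-based recommender can measure how similar items are from users' past ratings and suggest the closest match; the ratings matrix below is made up:

```python
# Toy item-based recommender: rows are users, columns are items, values are
# ratings (invented). We suggest the item whose rating pattern is most similar
# to one the user already liked.
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 1, 0],   # user 1
    [1, 0, 5, 4],   # user 2
    [0, 1, 4, 5],   # user 3
], dtype=float)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

liked = 0  # the user liked item 0
scores = [cosine(ratings[:, liked], ratings[:, j]) for j in range(ratings.shape[1])]
scores[liked] = -1.0                             # don't recommend the item they already have
print("recommend item", int(np.argmax(scores)))  # -> item 1, liked by users with similar taste
```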
2. Healthcare Revolution: Transforming Patient Care
Machine Learning is revolutionizing the healthcare industry, from early disease detection to personalized treatment plans. ML algorithms can analyze medical images, predict patient outcomes, and even assist in drug discovery, advancing the frontier of modern medicine.
3. Autonomous Vehicles: Paving the Way to Safer Roads
The future of transportation lies in self-driving cars, where Machine Learning algorithms play a vital role. These algorithms process real-time data from sensors and cameras, making split-second decisions to ensure safe and efficient navigation on our roads.
4. Smart Technology and Home Automation: Enhancing Everyday Life
From smart speakers that respond to our voice commands to home automation systems that optimize energy consumption, Machine Learning is at the heart of the smart technology revolution, making our lives more convenient and efficient.
5. Enhanced Cybersecurity: Protecting Against Cyber Threats
In the ever-evolving landscape of cybersecurity, Machine Learning serves as a potent weapon against cyber threats. ML algorithms can detect anomalies, identify potential breaches, and strengthen defenses to safeguard sensitive data.
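A hedged sketch of what this can look like in practice, using scikit-learn's IsolationForest on invented "network traffic" features (real intrusion-detection systems are far more elaborate):

```python
# Sketch of anomaly detection: train on "normal" traffic, then flag outliers.
# Features (requests per second, average bytes per request) are invented.
from sklearn.ensemble import IsolationForest

normal_traffic = [[10, 200], [12, 180], [11, 210], [9, 190], [13, 205]]
detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_traffic)

new_events = [[11, 195], [250, 9000]]   # the second event looks like a data-exfiltration spike
print(detector.predict(new_events))     # 1 = scored as normal, -1 = flagged as an anomaly
```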
The Future of Machine Learning: Glimpses of Tomorrow
The journey of Machine Learning has only just begun. As technology continues to evolve, here are some exciting possibilities for the future of this groundbreaking field:
1. Explainable AI: Bridging the Trust Gap
One challenge in AI adoption is the “black box” problem, where AI decisions are difficult to understand. The future of Machine Learning lies in creating more explainable models, increasing transparency, and building trust between humans and AI.
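One simple way to see the contrast: a linear model's learned weights can be inspected directly, which is part of what explainable-AI work tries to recover for more complex models. The feature names and data in this sketch are hypothetical:

```python
# A transparent model: logistic regression weights can be read off directly,
# showing which (hypothetical) features push a decision and in which direction.
from sklearn.linear_model import LogisticRegression

X = [[3, 0.2], [4, 0.1], [0, 0.9], [1, 0.8]]   # [late_payments, income_score] (invented)
y = [1, 1, 0, 0]                                # 1 = application flagged, 0 = approved

clf = LogisticRegression().fit(X, y)
for name, weight in zip(["late_payments", "income_score"], clf.coef_[0]):
    print(f"{name}: weight {weight:+.3f}")      # sign and size hint at the feature's influence
```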
2. AI for Social Good: Making a Positive Impact
Machine Learning can be harnessed to address pressing global challenges, such as poverty, climate change, and healthcare disparities. The future holds immense potential for AI-driven solutions that uplift communities and promote social well-being.
3. AI and Creativity: Redefining Art and Innovation
From generating art to composing music, Machine Learning is exploring the realm of creativity. In the future, we may witness AI-driven artistic expressions that inspire and challenge our perceptions.
4. AI in Space Exploration: Reaching for the Stars
The vastness of space presents numerous challenges for exploration. Machine Learning can aid in analyzing astronomical data, predicting celestial events, and enhancing our understanding of the cosmos.
5. Human-Machine Collaboration: A New Era of Partnership
The future of Machine Learning is not about replacing humans but enhancing our capabilities. Collaborative efforts between humans and machines hold the potential to achieve feats that were once thought impossible.
Embracing the Potential: Overcoming Challenges in Machine Learning
While the potential of Machine Learning is undeniably transformative, it is not without its challenges. As we delve deeper into this exciting domain, we must also address the hurdles that lie ahead.
1. Data Privacy and Ethics: A Balancing Act
Machine Learning relies heavily on data, and with great data comes great responsibility. Ensuring data privacy and ethical practices is crucial to build and maintain public trust in AI technologies. Striking the right balance between data access and individual privacy remains a constant challenge for the AI community.
2. Bias and Fairness: Navigating Ethical Dilemmas
Machine Learning models are only as good as the data they are trained on. Biases present in the data can lead to biased outcomes, perpetuating societal disparities. Efforts are underway to address bias and ensure fairness in ML algorithms, emphasizing the need for diversity and inclusivity in the development process.
3. Regulatory Frameworks: Shaping Responsible AI
As AI technologies become more prevalent, the need for robust regulatory frameworks becomes apparent. Governments and policymakers must collaborate with technologists to create guidelines that promote responsible and accountable AI development.
4. Data Quality and Quantity: The Foundation of ML Success
Machine Learning models thrive on quality data. However, obtaining large-scale, high-quality data can be challenging, especially in niche domains. The future demands advancements in data collection and preprocessing techniques to ensure the accuracy and reliability of ML models.
5. Interpretability vs. Performance: A Trade-off
Deep learning models often achieve remarkable performance but lack interpretability. Balancing the pursuit of high accuracy with the need for transparent and understandable models remains a conundrum in Machine Learning research.
A Personal Glimpse into Machine Learning’s Impact
As a writer passionate about technology, I had the opportunity to witness firsthand the transformative power of Machine Learning. During a recent visit to a healthcare facility, I was amazed to see how ML-driven diagnostic tools were aiding healthcare professionals in identifying medical conditions more efficiently.
The radiologists, armed with advanced image recognition algorithms, were able to analyze medical scans with greater accuracy and speed. Witnessing the seamless synergy between humans and machines, it became evident that Machine Learning was not about replacing experts but enhancing their capabilities to provide better care to patients.
A Journey into the Future: What Awaits Us
As we gaze into the future of Machine Learning, a world of boundless possibilities unfolds before us. From personalized AI assistants that anticipate our needs to innovations in renewable energy and climate change mitigation, the potential applications of ML are virtually limitless.
In the healthcare sector, we can expect AI-powered medical devices and virtual health assistants to become even more prevalent, enhancing patient care and medical research. In finance and banking, Machine Learning will continue to drive fraud detection and risk assessment, safeguarding financial systems.
Moreover, the fusion of Machine Learning with other emerging technologies, such as the Internet of Things (IoT) and 5G, will unlock new opportunities in smart cities, autonomous transportation, and precision agriculture.
Embracing a Collaborative Future
As we venture further into the uncharted territory of AI, it’s essential to foster a collaborative ecosystem that transcends borders and industries. The fusion of expertise from various domains, including computer science, psychology, ethics, and social sciences, will lead to more comprehensive and holistic AI solutions.
As an AI enthusiast myself, I find solace in the notion that Machine Learning is not just about algorithms and models but also about harnessing collective intelligence. When diverse minds come together, remarkable breakthroughs happen.
Conclusion: Embrace the Power of Machine Learning
As we conclude our expedition through the captivating world of Machine Learning, it becomes evident that this technology is revolutionizing our lives in ways we could only dream of before. From personalized recommendations to life-saving healthcare applications and beyond, the impact of ML is both profound and far-reaching.
The future holds even more promises, with explainable AI, social good initiatives, creative innovations, space exploration, and collaborative partnerships with machines. The power of Machine Learning lies in our hands, and its responsible and ethical deployment will shape the trajectory of our progress. | Emerging Technologies |
A secure and resilient supply of critical minerals is paramount to our national security and is an economic necessity. Unfortunately, the United States is almost entirely dependent on foreign nations for our supply — an alarming fact considering most of the technology our government uses today requires these minerals. Every day, the Department of Homeland Security works to secure our border, counter terrorist threats, harden our cybersecurity defenses, and protect us from emerging threats such as weaponized drones and biological weapons. This is a sweeping and, at times, difficult mandate to fulfill. Fortunately for the public, the department has some of the best people in the world executing its critical mission. But our enemies are getting savvier and more sophisticated. For DHS officials to be successful, they need to have access to cutting-edge equipment and technology. Technologies like high-speed communications, surveillance systems, radar satellites, and secure computer networks allow DHS agents and officers to mitigate, prepare for, and respond to any threat facing the country. But these technologies aren't possible without minerals like cobalt, lithium, and rare earth elements, which include 17 minable metallic elements
. These are not just crucial for technology at DHS but are necessary inputs in critical and emerging technologies in both the defense and civilian spaces, from fighter jets to electric vehicles to semiconductors. Critical minerals play an integral part in our ability to innovate and produce the tools necessary to keep America free, secure, and prosperous in the 21st century. While the U.S. was once the leader in critical mineral production, China now dominates the market. Beijing controls around 90% of the world's REEs and has been the source of 80% of U.S. imports of REE compounds and metals in recent years. It processes 50%-70% of the world's lithium and cobalt. China understands the importance of critical minerals and REEs in future technology, so it has made strategic decisions to corner and control the market. We've witnessed firsthand what can occur when despots and dictators control critical resources. President Vladimir Putin has weaponized Russia's oil and natural gas supply, constricting or cutting off energy to European countries that oppose its unprovoked and unjustified attack on Ukraine. Some European Union member states are still importing Russian energy out of sheer necessity. They've been cornered into funding Putin's war machine, and this dependence limits their geopolitical options. China has taken similar actions in other key industries. During the COVID-19 outbreak, China restricted exports of personal protective equipment and other necessary medical supplies. There was also debate in Beijing about restricting critical pharmaceutical exports to the U.S., which could have had a devastating impact on Americans' access to medicine. Beijing also has a history of weaponizing its critical mineral supply. In the 2000s, China imposed export restrictions and taxes on REEs, spurring significant price increases globally. Given the past practices of Putin and Chinese dictator Xi Jinping, the U.S. should take every step necessary to ensure it is not reliant on dictators for rare earth elements. They are too fundamental to our economic and national security. Ending U.S. dependency on China for these products, and developing secure and resilient supply chains, will require a whole-of-America approach, with both the public and private sectors working in tandem. Fortunately, political leadership on both sides of the aisle are aware of the seriousness of this issue and are putting the wheels in motion to end our dependence. But more needs to be done. In March, President Joe Biden invoked the Defense Production Act to increase domestic production of strategic critical minerals like lithium, nickel, cobalt, and others necessary for large-capacity batteries that power electric vehicles and store renewable energy. This is a prudent decision and builds upon several similar initiatives undertaken during the Trump administration. With automakers transitioning to EVs at such a rapid pace, we must ensure that the U.S. retains the capability to power them without China's help. There are also government incentives, such as the proposed EV tax credit, that can spur demand for electric vehicles and boost the need for a secure supply chain of the critical minerals necessary for their production. Lawmakers are also on the cusp of passing major legislation, the Bipartisan Innovation Act, which would make huge investments into our domestic semiconductor industry and help secure our critical mineral supply chains. 
While there is posturing on both sides of the aisle that has prevented passage thus far, this issue is too important to be overlooked. Lawmakers have a responsibility to get this done. Congress is considering other legislative solutions to bolster our domestic supply chain of critical minerals, but this will prove challenging under the overbearing environmental regulations levied against American miners. States like West Virginia are using federal funding to extract critical minerals from coal waste in abandoned mines and surrounding areas and waterways. It will take the federal government, state and local governments, and the private sector to launch initiatives like this one. The U.S. is facing myriad challenges. If we want to secure our borders, prevent attacks on the homeland, defend our troops, and produce the vehicles and technology of the future, we need a robust and secure supply of critical minerals. Now is the time to roll up our sleeves and get to work. Our future depends on it.
Chad Wolf is the former acting U.S. Secretary of Homeland Security. | Emerging Technologies |
James Rowland
Engineering Lead, Delta Academy

Digital China

Central to the Chinese government's plan for the 21st century is becoming the global leader in tech innovation. While substantial progress has been made towards this goal, the country has long been considered a copycat on the world stage, choosing to adapt western inventions rather than truly innovating. However, the tide is changing, with China pulling ahead on a number of emerging technologies. For AI, perhaps the most important emerging technology of the 21st century, this raises the question: is the future with China? Or will the country continue to sit firmly on the coattails of the USA?

Since the end of the Cold War, China has undergone a boom in living standards unprecedented in history. 700 million people have been lifted out of absolute poverty, with a rural poverty rate of 75% brought down to nearly zero. A Chinese child born as the Berlin Wall fell will have seen a 30x increase in GDP per capita in their lifetime. The second-biggest jump, in another post-Cold War success story, Poland, was less than 10x.

One of the hallmarks of modern China is the rapid development and comprehensive adoption of digital technology. Again, the pace of this change is unparalleled. In 2005, 70% of households in the USA had internet access; in China, 10%. Today, China is arguably the world's most deeply digital society, with vendors barbecuing on street corners preferring payment via mobile phones and QR codes. State infrastructure projects have built the world's largest fibre-optic network, and the number of 5G terminal connections outstrips any other region on earth. This staggering change is the result of a tightly choreographed dance between tech innovators and the CCP. The state intensely encourages private companies to build world-class products but will never yield an inch of overarching control.

Do innovation and Chinese politics mix?

Chinese tech companies operate as independent private companies but are ultimately answerable to the CCP. Chinese law demands that “any organisation or citizen shall support, assist and cooperate with the state intelligence work in accordance with the law”. Most Chinese people do not access the internet beyond the ‘great firewall’ - a vast and sophisticated filter which prevents access to many western apps, such as Google and Facebook. Hosting a server behind the firewall requires a licence from the government. Firms of more than 50 people are also required to hire a party representative to sit on their boards, and the CCP has shown it is not afraid to flex its muscles with big tech. The outspoken founder of marketplace giant Alibaba, Jack Ma, recently disappeared suspiciously from the public eye; this coincided with a regulatory crackdown on his business operations.

To those familiar with the laissez-faire conditions that Western tech giants tend to thrive in, this sort of state intervention would seem antithetical to innovation. However, China is home to 8 out of the 10 fastest companies to reach unicorn status and the second-highest number of unicorn companies worldwide. Typifying the strength of the Chinese tech sector is its poster boy Tencent, developer of the omnipresent WeChat, which rolls social media, ride-hailing and banking into a single sleek app. Although Tencent is comically aligned to the CCP, releasing an app in 2017 which allowed users to tap their phone screens to clap for President Xi's party conference speech, it offers a user experience rivalling or even outstripping western competitors.
The result is a product foreign to the west: high-quality, state-aligned software.

Despite the meteoric rise of tech and tech companies in modern China, a key sticking point remains that must be unpicked before China can overtake the USA as the world's leading technological force. Can the Chinese system foster truly revolutionary products? While the “made in China” tag is no longer a flag of low quality, the might of the Chinese tech market has tended to yield evolution rather than revolution. Just creating a product that mirrors an original western idea for the Chinese market can be hugely lucrative, with the leading search engine Baidu generating over a billion dollars in yearly revenue simply from a USP that does not challenge the CCP in the way Google would. This opportunity to create wildly successful companies from pre-existing technology strips the incentive to be truly creative and sit on the bleeding edge.

China and AI

While China is not currently challenging the US hegemony on conceptually innovative technology, change is beginning to take shape. A domestic consumer market saturated with affordable high-quality technology and a newfound sense of China as a global leader have led companies to push the boundaries and look outwards. TikTok, launched by Beijing-headquartered ByteDance in 2018, has changed the face of media consumption for Generation Z and become the first piece of Chinese software to achieve truly global penetration.

TikTok's success lies in AI, its powerful recommendation engine matching content to users and keeping them glued to the platform. While these recommendation algorithms are not conceptually new, TikTok demonstrates how Chinese companies can comfortably compete with their American counterparts.

The nature of AI research has meant that despite decades of field leadership, American outfits have been caught by China. Advances in AI don't require a physical manufacturing process to implement and tend to be published openly, not guarded as trade secrets. As a result, a small team of machine learning engineers can replicate cutting-edge technology developed by competitors. This is inconceivable in areas such as chip manufacture and design, which are closely guarded secrets and in which China's domestic market lags severely behind, buying more than 90% of its chips from foreign companies.

But by far the sharpest tool in the Chinese AI arsenal is data. Data is the key driving force in AI, with most cutting-edge models requiring vast amounts of high-quality data to train. Often AI projects are infeasible due to a lack of data, rather than conceptual shortcomings. The current state-of-the-art generative language model, GPT-3, uses technology that is several years old; its sheer size and the volume of its training data give rise to its staggering performance.

Official CCP policy makes it clear this is no issue in China. Data privacy laws are few and far between, and it is an officially stated goal of the government to drive economic development through vast quantities of rapidly accessible data. This is nefariously demonstrated by Hikvision, the global leader in CCTV cameras, whose AI is able to recognise and track individuals, as well as monitor their facial expressions and clothing. The company has faced accusations of complicity in the mass surveillance and oppression of China's Muslim minority.
The company has also faced accusations that its technology has been used to mark potentially dangerous individuals for re-education programmes.

Can China dominate AI?

But despite China's data advantage and vast engineering workforce, becoming the dominant force in AI requires creativity as well as brute force. 2022's most remarkable AI developments have come in generative image models, capable of creating images from text prompts. Although training these models requires raw computing power, their design was predicated on creative use of AI theory. Despite prioritising technological innovation, the CCP may scare away the best minds in AI if they think their creativity will put them on a collision course with the state.

Perhaps most indicative of the difference between AI research in China and the USA is their respective publication records. Since 2016, China has published almost twice as many AI-related papers as any other country. But at the two most prestigious and high-profile conferences, ICML and NeurIPS, representation of US research is almost 5 times that of China's.

With politics predicated on lack of state intervention in private business, the USA has long been the destination of choice for tech's brightest minds. But vast markets and rich data access could be the springboard for China to brush the privacy-focussed US aside and dominate AI. However, to break the mould of Chinese tech and fulfil the CCP's goal of becoming the global AI powerhouse, China needs more than brute force and data; it needs creativity and invention. If the country can find that, it will be the force in AI for decades to come. | Emerging Technologies
The world must urgently assess the impact of generative artificial intelligence, G7 leaders said Saturday, announcing they will launch discussions this year on "responsible" use of the technology.
A working group will be set up to tackle issues from copyright to disinformation, the seven leading economies said in a final communique released during a summit in Hiroshima, Japan.
Text generation tools such as ChatGPT, image creators and music composed using AI have sparked delight, alarm and legal battles as creators accuse them of scraping material without permission.
Governments worldwide are under pressure to move quickly to mitigate the risks, with the chief executive of ChatGPT's OpenAI telling U.S. lawmakers this week that regulating AI was essential.
"We recognise the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors," the G7 statement said.
"We task relevant ministers to establish the Hiroshima AI process, through a G7 working group, in an inclusive manner ... for discussions on generative AI by the end of this year," it said.
"These discussions could include topics such as governance, safeguard of intellectual property rights including copyrights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilisation of these technologies."
The new working group will be organized in cooperation with the OECD group of developed countries and the Global Partnership on Artificial Intelligence (GPAI), the statement added.
On Tuesday, OpenAI CEO Sam Altman testified before a U.S. Senate panel and urged Congress to impose new rules on big tech.
He insisted that generative AI developed by his company would one day "address some of humanity's biggest challenges, like climate change and curing cancer."
However, "we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," he said.
European Parliament lawmakers this month also took a first step towards EU-wide regulation of ChatGPT and other AI systems.
The text is to be put to the full parliament next month for adoption before negotiations with EU member states on a final law.
"While rapid technological change has been strengthening societies and economies, the international governance of new digital technologies has not necessarily kept pace," the G7 said.
For AI and other emerging technologies including immersive metaverses, "the governance of the digital economy should continue to be updated in line with our shared democratic values," the group said.
These values include fairness, respect for privacy and "protection from online harassment, hate and abuse," among others, it added. | Emerging Technologies
About 70% Of Businesses Plan To Integrate Metaverse In Company Activities: PwC India
More than 60% of the business leaders surveyed affirmed that they have a detailed or good understanding of the metaverse.
Almost 70% of business executives in India plan to integrate the metaverse into their organisational activities, said a report by PwC India.
Though the term metaverse spans a wide spectrum of definitions, it mainly denotes a digital environment with a virtual world that mimics reality through the use of emerging technologies such as artificial intelligence (AI), low/no-code platforms, and blockchain.
Additionally, 63% of companies that are actively engaged with the metaverse say they will fully embed the metaverse in their organisational activities within a year, said the report titled 'Our Take - Embracing the Metaverse'.
More than 60% of the business leaders surveyed affirmed that they have a detailed or good understanding of the metaverse.
The survey, conducted early this year, asked respondents – nearly 150 in number – across different regions of India for their title/role, age group, gender, the sector their company represented, and their company turnover, PwC India said.
“The metaverse opportunity is enormous and we expect exponential growth because it is relevant across genders, geographies, and generations. Consumers are open to adopting new technologies and companies are investing heavily in the required infrastructure to leverage the metaverse,” said Ashootosh Chand, Partner - Digital and Emerging Technologies, PwC India.
The report further said globally, businesses have started exploring partnerships with some of the leading players in the metaverse to explore business opportunities.
However, the metaverse ecosystem in India is still at a nascent stage.
"25% of India respondents say that their metaverse plans will be fully embedded in their activities within a year, while 47% say that this will take place in 2–3 years", it said.
Sudipta Ghosh, Partner and Leader – Data & Analytics, PwC India, said, “Metaverse allows organisations to be really innovative about how they can meaningfully engage with the customers, employees and the broader ecosystem.”
As per the report, 36% of those surveyed said cybersecurity poses the biggest risk for businesses in India, and 28% of respondents felt that technological limitations could pose a challenge.
In the U.S. as well, cybersecurity tops the list, followed by privacy risks, which is the third-most important risk area for India respondents. | Emerging Technologies |
Amini, a Nairobi-based climate tech startup focused on solving Africa’s environmental data gap through artificial intelligence and satellite technology, has raised $2 million in a pre-seed funding round.
Pale Blue Dot, the European climate-focused venture capital firm that announced a $100 million fund last week, led the oversubscribed round. At the same time, Superorganism, RaliCap, W3i, Emurgo Kepple Ventures and a network of angel investors participated.
Kate Kallot, the founder and CEO of Amini, has worked for several years in artificial intelligence, machine learning, data science and deep tech roles for companies such as Arm, Intel and Nvidia. Kallot, in an interview with TechCrunch, explained how a work presentation on the intersection of natural capital and emerging technologies left her fascinated by how she could use her experiences in AI and ML, including her work around social impact with the United AI Alliance, to provide a solution to the continent's lack of data infrastructure, especially around environmental data.
“The lack of data infrastructure for Africa, from the inability to collect data to analyzing it and its impact, is a deeper problem than most realize,” the CEO said on a call. “If you look at climate or environmental data in Africa today, it’s either nonexistent or difficult to access. And with climate change projected to hit Africa the most, there’s a lack of data for farmers, for instance, to understand what’s happening.”
Often lauded as the last frontier market, Africa is home to 65% of the world's uncultivated fertile land and 30% of its mineral resources, but only accounts for 3% of global GDP. In addition, frequent food and water scarcity still plague the continent despite its enormous resources. One reason for this is the lack of reliable and trustworthy data, which has held back Africa's development for decades by hampering business decisions and capital allocation, as well as making it difficult to measure the impact of climate change. Similar gaps exist in access to weather and geospatial data across the continent.
Enter Amini. The six-month-old startup said it has developed a data aggregation platform that pulls in different sources of data (from satellites and other existing data sources like weather data, sensors and proprietary customer data) down to a square meter, then unifies and processes this data before providing it via APIs to local and international companies that need it.
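Amini has not published its API or data schema, so the snippet below is only an illustrative sketch of the unification step described above: readings from two independent sources are merged into a single record keyed by location and date. Every field name, coordinate and value here is hypothetical.

```python
from datetime import date

# Hypothetical readings from two independent sources, keyed by (lat, lon, day).
# Amini's real schema, resolution and sources are not public; this only
# illustrates the idea of unifying heterogeneous environmental data.
satellite = {
    (-1.2921, 36.8219, date(2023, 5, 1)): {"ndvi": 0.62},   # vegetation index
    (-1.2921, 36.8219, date(2023, 5, 15)): {"ndvi": 0.55},
}
weather = {
    (-1.2921, 36.8219, date(2023, 5, 1)): {"rain_mm": 12.4, "temp_c": 24.1},
    (-1.2921, 36.8219, date(2023, 5, 15)): {"rain_mm": 0.0, "temp_c": 27.8},
}

def unify(*sources):
    """Merge per-location, per-day records from several sources into one table."""
    merged = {}
    for source in sources:
        for key, fields in source.items():
            merged.setdefault(key, {}).update(fields)
    return merged

for (lat, lon, day), record in sorted(unify(satellite, weather).items()):
    print(f"{day} @ ({lat}, {lon}): {record}")
```

A production system would of course add resampling to a common grid, quality checks and an API layer on top, but the merge-by-key step is the core of what "unifying" disparate sources means.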
Today, on a granular level, Amini can provide farmers with data from the cycle between crop planting and harvesting to the amount of water and fertilizer used. On a higher level, the platform can help organizations understand the impact of natural disasters, flooding and drought across the entire continent “in a few seconds,” according to Kallot, who also said the platform could pull from almost 20 years of historical data and current data produced every two weeks.
Amini’s current customers, primarily corporations and multinationals, are in the agricultural insurance sector and supply chain monitoring, specifically at the “last mile,” or the initial stages of the global supply chain. Kallot couldn’t disclose the names of Amini’s clients, while adding that the less-than-a-year-old climate tech startup is in discussions to sign up “some of the biggest food and beverage companies and one of the largest insurance companies globally.”
Addressing the lack of environmental data for organizations in these industries effects a necessary change. Take, for instance, global food and beverage companies with franchises in Africa, such as Nestlé or Starbucks. There’s mounting pressure from international regulators such as the SEC Climate Disclosure rules and European Green Deal mandating that these companies understand their carbon emissions and environmental impact and how their operations and supply chain processes affect local farming practices in various regions, including Africa.
Platforms like Amini bring much-needed data transparency to these global organizations with vested interests in Africa, helping them tackle supply chain issues at the last mile and provide agricultural insurance to farmers. “The beauty of the platform is that it's easily scalable because once you collect agricultural data for insurance, for example, that same data, 80 to 90%, can be sold into food and beverage companies who have supply chains in Africa or can be sold to governments who are trying to understand the impact of agriculture on their country.”
Amini’s business is such that it engages in a long sales cycle. International clients get access to the platform’s API after paying a flat license fee “in the multi-millions” for two years. Local clients have tiered introductory pricing on a case-by-case basis, allowing them to access what they need and grow over time.
Gro Intelligence, a Kenyan-founded but New York-based AI-powered insights company that provides decision-making tools and analytics to the food, agriculture and climate economies and their participants, is the name that comes up the most when industry observers try to make sense of Amini’s business, according to Kallot. However, the chief executive says there are noticeable differences: Amini collects and generates the data that the likes of Gro aggregate and use to illuminate the inter-relationships between food, climate, trade, agriculture and macroeconomic conditions.
Kallot says Amini, which recently became the first African company accepted into the Seraphim Space Accelerator program (which scouts from the top 2% of global early-stage space companies), competes directly with geospatial companies such as Planet Labs that have deployed constellations of satellites that collect data around the globe and provide access for $15 per square kilometer. According to Kallot, using such technology is quite expensive for organizations in Africa or those looking into the continent, and Amini proffers an alternative.
This piece highlights that venture capital activity around climate tech has been heating up in Africa since last year despite the global VC funding cooldown. Last year, the continent’s climate tech startups secured over $860 million in equity funding. Firms such as Novastar Ventures, Catalyst Fund and Equator are raising or have raised climate-tech funds for pre-seed to Series A startups.
Commenting on why her firm, which typically writes checks to European climate tech startups, invested in Amini, an African startup, Heidi Lindvall, general partner at Pale Blue Dot, said: “The scarcity of high-quality environmental data in Africa is a concern as it prevents others from building important climate solutions such as improving farmer insurance, monitoring climate risk or supply chains. When meeting the team behind Amini, we were blown away by their ambition and expertise and we believe they are best positioned to fill the environmental data gap of Africa.”
Before launching Amini, Kallot was a co-founder and chief impact officer at the African web3/crypto company Mara. Mwenda Mugendi, Muthoni Karubiu and Eshani Kaushal, all part of Amini's executive team, bring a wealth of experience in machine learning, data science, geospatial analysis and fintech, having worked for multinationals including Microsoft, NASA and MTN. | Emerging Technologies
The festival will also no longer host digital social environments as it returns to in-person experiences, but the future of New Frontier is unclear. Sundance has canceled plans for its New Frontier program at the 2023 festival, the boundary-pushing section that has showcased experimental new works for 15 years. During that time, New Frontier anticipated industry-wide interest in the metaverse and other emerging technologies while catalyzing the curation of creativity in VR and AR at festivals around the world.
The 2023 festival also will not host its digital social space or the online venue known as The Spaceship, where badgeholders can interact as avatars and watch showcased works.
As the festival prepares to hold its first in-person edition in two years, New Frontier was never part of those in-person plans. Instead, the festival planned it as an exclusively virtual event and has been accepting submissions for months. In October, artists who submitted projects received a notice from New Frontier chief curator Shari Frilot informing them that they would be reimbursed for submission fees. “We want to let you know that New Frontier is presently choosing to take time to reflect upon the rich learnings gained over the past two years as we evaluate the shifts in the media landscape to understand how best to serve our community of artists and our audiences,” Frilot wrote. “Accordingly, we will not offer an exhibition for this year’s Festival.”
The note, which also went to the New Frontier alumni community, suggested that the program would be reinvented. “We deeply appreciate your understanding and willingness to hold space for this period of reimagination,” Frilot wrote. “As always, we aim to continue to invest in, and engage with, our creative community, and we hope that you will be open to join us in discussion and collaboration during this period of incubation and reimagination.”
A Sundance representative said the backtracking is unrelated to budget cuts. Frilot, who had no further comment, has been traveling to other festivals in recent months to assess the emerging-media community. After establishing herself as a filmmaker in the early ’90s, Frilot joined Sundance in the early aughts and launched the New Frontier program at the festival in 2007. At that time, it was called “New Frontier on Main” and focused on digital installation work. Over time, it incorporated a range of works that include live performances and interactive experiences as well as virtual reality projects.
The 2012 presentation of new media artist Nonny de la Peña's “Hunger in Los Angeles” inspired her intern, Palmer Luckey, to create a prototype for the Oculus Rift headset. Oculus was eventually acquired by Facebook for $2 billion and laid the foundation for the future direction of the company, now called Meta. By 2016, New Frontier showcased more than 30 VR projects. It also launched the New Frontier lab program, which helped develop projects in new media from established filmmakers such as Roger Ross Williams and Josephine Decker. (The lab program was scuttled in 2021 amid other pandemic-related cutbacks.) The industry impact of New Frontier programming led major technology companies such as Google and Intel to scout for talent at Sundance, not unlike the way agents discover emerging filmmakers. Nevertheless, Frilot told IndieWire in January that she was reluctant to let the program become too commercialized.
“I’ve seen a lot of works and technologies being bought by companies,” she said. “Sometimes, that makes me sad. I’ve definitely seen filmmakers make their first feature and then they climb up and get chewed up in the studio system, and you see the same thing happening with New Frontier visions who create these miraculous innovations in storytelling. It turns them into workhorses.”
While it never generated as much media attention as the more familiar aspects of Sundance programming, New Frontier became a launchpad in its own right. Former filmmaker Chris Milk credited Frilot with validating his early VR work before he made a move into the product space and developed the highly profitable VR workout app Supernatural (Meta initially acquired the app for $400 million before the Federal Trade Commission blocked the deal in July).
Other New Frontier projects that found their footing include Joseph Gordon-Levitt’s hitRECord.org and several live interactive documentary works from celebrated filmmaker Sam Green, whose “32 Sounds” opened the section this year. “I feel like it’s in many ways the creative heart of the festival,” Green wrote IndieWire by email. “New Frontier definitely rose to the occasion the past two years — 2021 and 2022 — when the festival basically was New Frontiers!”
Sam Green's “32 Sounds” (Sundance)
Like others in the community, he heard from Frilot about the decision. “I totally support her in taking a pause before everything ‘goes back to normal,'” he wrote. “The world and the film landscape have changed profoundly, and I think what Shari wants to do is to take a couple of deep breaths and figure out what a post-pandemic New Frontiers should look like. I will miss it this year but am excited to see what it will become. … I’m sure whatever Shari dreams up as the next iteration of New Frontier will be dazzling and brilliant and vital.”
Projects that angled for a New Frontier launch may turn to some of the other new media programs at festivals like SXSW and Venice. Even Cannes has felt the effects of Sundance’s VR programming, as Alejandro G. Iñárritu’s VR work “Carne y Arena” (inspired in part by de la Peña’s work) premiered at Cannes in 2017. The festival’s Marché du Film launched Cannes XR in 2019, and its director, Guillaume Esmiol, took over planning for the entire Cannes market this year. All of those developments trace back to New Frontier. While Meta CEO Mark Zuckerberg struggles to make his company a leading entity of the metaverse, New Frontier created a reliable space for the incubation of new technologies, regardless of how the business behaves.
The scuttling of the New Frontier virtual program falls in line with decisions by several major festivals to move away from exclusive online offerings. In addition to ending its 3D social hub and The Spaceship, Sundance will no longer host Zoom-based rooms created by the social platform OhYay; that company announced over the summer it would shut down at the end of the year.
Unlike festivals such as Toronto and New York, Sundance will maintain virtual access to its film program. While the bulk of the lineup will receive exclusive in-person screenings during the first weekend, the full competition program will be available online to badgeholders and ticket buyers starting January 24. At the 2021 festival, Sundance reported that viewership was nearly three times greater than it was at the 2020 physical edition, with 600,000 audience views across 50 U.S. states and 120 countries. In the past two years, a $25 Explorers pass provided attendees with access to New Frontier; in 2023, it will only provide online access to the shorts and episodic programs.
In January, VR filmmaker Lynette Wallworth spoke to IndieWire about the impact of New Frontier on her career after she was selected for a VR residency in 2016. “What Shari was trying to do was to bring artists together so we could continue a trajectory that might not otherwise exist,” she said. “It's a continuing issue of this field, the diversity around who's actually developing and experiencing the technology. That's why I'm so wedded to Sundance in terms of what they support.” | Emerging Technologies
Are you tired of playing catch-up with the ever-evolving world of technology? Do you find yourself constantly seeking the next big thing to stay ahead of the competition? For startups and product owners, staying ahead of the curve is not only crucial for survival but also a pathway to success.
Discovering new technologies can be hard: it takes varied expertise, precious time, and keen foresight to tell which ones will endure and which will be abandoned. We've done the hard work for you and explored the top technology trends that will shape 2024 and revolutionize industries worldwide.
If you’re tired of being blindsided by emerging technologies, eager to gain an edge over competitors, and yearning to transform your small business into an unstoppable force, then this article is your guiding light.
Top technology trends for 2024
Artificial Intelligence (AI) and Machine Learning (ML)
These two technologies are spreading rapidly across every industry. They revolutionize workflows by enabling computers to learn, reason, and make decisions in ways that approach human judgment. For startups and product owners, AI and ML offer opportunities for automation, robotics, predictive analytics, personalized recommendations, and big data analysis, which in turn lead to higher user retention, profit, and brand recognition.
If you want to dive into industries such as healthcare, finance, retail, and manufacturing, you can leverage Artificial Intelligence to improve diagnostics, optimize financial operations, enhance customer engagement, and speed up production processes.
For example, take a look at companies like Affectiva. It uses AI for emotion recognition, and this technology trend helps businesses detect customers’ reactions to products, improve safety, get accurate analytics, etc. Another company, DataRobot, offers a platform for automated machine learning. Both of these businesses have successfully leveraged AI and ML to disrupt their respective industries.
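None of the companies mentioned above disclose their models, but the predictive-analytics use case is easy to illustrate with a minimal sketch: train a classifier on historical customer behavior, then score new customers for churn risk. The features and figures below are invented purely for illustration, and the example assumes scikit-learn is installed.

```python
from sklearn.linear_model import LogisticRegression

# Invented historical data: [visits_per_month, avg_spend_usd]; label 1 = customer churned.
X = [[12, 30.0], [2, 5.0], [8, 22.0], [1, 3.5], [15, 40.0], [3, 6.0]]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)

# Score a new customer: estimated probability of churning in the next period.
new_customer = [[4, 9.0]]
print(f"churn risk: {model.predict_proba(new_customer)[0][1]:.2f}")
```

Real products would use far richer features and proper train/test splits, but the workflow is the same: fit on history, score what comes next, and act on the prediction.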
Internet of Things (IoT)
This concept refers to the network of interconnected devices that collect and exchange data via the Internet. You can integrate IoT into your products and services to enable remote monitoring, predictive maintenance, and real-time data analysis.
The emerging technology is especially popular in smart homes and agriculture, but other industries such as healthcare and transportation can also benefit from IoT solutions. For example, doctors can use smart devices to monitor patients' vital signs remotely, leading to improved patient care and reduced costs. In transportation, this technology trend is a crucial design element of steering wheel control systems: it reports kilometers traveled and warns drivers if they have exceeded the limit.
Startups like Particle, offering an Internet of Things platform for building connected products, and August, a smart lock company, have successfully leveraged the trend to create innovative solutions and drive business growth.
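As a simplified sketch of the remote-monitoring pattern described above, a connected device can package its sensor readings as JSON and push them to an ingestion endpoint over HTTP. The endpoint URL, device ID and payload fields are hypothetical, and the example assumes the requests library is installed.

```python
import time
import requests

INGEST_URL = "https://iot.example.com/api/v1/readings"  # hypothetical ingestion endpoint

def read_sensor():
    # Placeholder for a real driver; a device would read temperature/vibration here.
    return {"temperature_c": 21.7, "vibration_mm_s": 0.8}

def publish_reading(device_id: str) -> None:
    payload = {"device_id": device_id, "timestamp": time.time(), **read_sensor()}
    resp = requests.post(INGEST_URL, json=payload, timeout=5)
    resp.raise_for_status()  # surface ingestion failures instead of silently dropping data

if __name__ == "__main__":
    publish_reading("pump-42")
```

Production IoT fleets typically use a messaging protocol such as MQTT plus device authentication, but the shape of the data flow is the same: read, package, transmit, analyze centrally.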
Blockchain technology
It provides a decentralized and transparent system for recording and verifying transactions. By leveraging this emerging Web 3.0 technology, you can enhance security, improve trust, and streamline processes.
By utilizing blockchain, you can create tamper-resistant digital ledgers, ensure data integrity, and enable secure peer-to-peer transactions. This technology trend of 2024 can also help you address data privacy concerns and eliminate intermediaries in transactions.
Meanwhile, startups like Chain, a blockchain infrastructure provider, and Chronicled, which offers supply chain solutions, have demonstrated the potential of this technology in driving innovation and efficiency.
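The tamper-resistant ledger property mentioned above comes from chaining each record to the hash of the previous one, so altering any past entry invalidates every later hash. The standard-library-only sketch below illustrates that idea; it is not a real blockchain (there is no consensus, networking or signing).

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's contents (excluding its own "hash" field).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain: list = []
append_block(chain, {"from": "alice", "to": "bob", "amount": 10})
append_block(chain, {"from": "bob", "to": "carol", "amount": 4})
print(is_valid(chain))            # True
chain[0]["data"]["amount"] = 999  # tamper with history
print(is_valid(chain))            # False: the altered block no longer matches its hash
```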
Augmented Reality (AR) and Virtual Reality (VR)
AR overlays digital content in the real world using a smartphone camera or a special device, while VR immerses users in a simulated environment. Using these technologies, you have great potential to enhance marketing campaigns, improve product development processes, and create immersive customer experiences.
The trend has applications in various industries such as gaming, retail, real estate, education, etc. For instance, you can use Augmented Reality to provide virtual try-on experiences for customers in e-commerce, and Virtual Reality to simulate realistic training environments for employees.
As for real-life cases, there is the well-known VR company Oculus, which produced the popular Rift and Quest headsets. According to analysts' estimates, there are at least 5 million Oculus Quest 2 users in the world. Magic Leap, an AR headset manufacturer, created a device that gives the user a sufficiently wide field of view while integrating digital content within it.
Edge computing
This technology trend brings computational power closer to the network edge (closest to users, devices, and data sources), reducing latency, enhancing security, and enabling real-time processing. So you can leverage edge computing to improve performance, ensure data privacy, and reduce reliance on centralized cloud infrastructures.
The technology has applications in various industries, including healthcare, autonomous vehicles, and industrial IoT. Startups and small businesses benefit from reduced latency in critical applications, improved data security, and cost savings in data transfer and storage. For instance, FogHorn provides edge intelligence software, and Zededa – the company offers an edge virtualization platform.
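A rough sketch of the edge pattern: rather than streaming every raw sample to a central cloud, the device summarizes a window of readings locally and ships only the summary (or a local alert) upstream, which is what cuts latency and bandwidth. The thresholds and figures below are purely illustrative.

```python
import random
import statistics

def summarize_window(samples: list[float]) -> dict:
    """Reduce a window of raw sensor samples to a compact summary at the edge."""
    return {
        "count": len(samples),
        "mean": round(statistics.mean(samples), 2),
        "max": round(max(samples), 2),
    }

raw_window = [random.gauss(70.0, 2.0) for _ in range(600)]  # e.g. 10 minutes of 1 Hz readings
summary = summarize_window(raw_window)

# Only the summary (three numbers) leaves the device instead of 600 raw samples,
# and the critical decision is taken locally, with no round trip to the cloud.
if summary["max"] > 80.0:
    print("local alert: overheating", summary)
else:
    print("ship summary upstream:", summary)
```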
Quantum Computing
It is an emerging technology trend with the potential to revolutionize various industries by offering exponential speed-ups and energy-efficiency advantages over classical computers for certain classes of problems. Quantum computers hold promise for a wide range of applications, such as drug discovery, financial modeling, climate modeling, and more. With their help, people can finally address complex problems that were previously intractable for classical computers, potentially solving them in a matter of hours or minutes.
Right now, some major companies and startups are investing in quantum computing research and development. IBM, for example, is a leading player in the field and has made significant progress in the field. They offer access to their quantum systems through the IBM Quantum Experience, allowing developers and researchers to experiment with quantum algorithms. Other companies such as Google, Microsoft, and Intel are also investing in quantum computing research and exploring its potential applications.
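To give a flavor of what experimenting with quantum algorithms looks like in practice, the snippet below prepares a two-qubit entangled (Bell) state and samples it on a local simulator. It assumes the qiskit and qiskit-aer packages are installed; exact APIs can differ between Qiskit versions.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# Bell state: a Hadamard on qubit 0, then a CNOT entangles qubits 0 and 1.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

sim = AerSimulator()
counts = sim.run(qc, shots=1000).result().get_counts()
print(counts)  # expect roughly half '00' and half '11', and (ideally) no '01' or '10'
```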
Cloud computing
Together with quantum computing, small and medium businesses should also consider cloud computing technologies. Cloud computing refers to the delivery of computing services over the Internet, providing on-demand access to a wide range of resources and capabilities. It eliminates the need for organizations to manage and maintain their own physical infrastructure, such as servers and data centers. Instead, you can leverage the services cloud providers offer to store data, run applications, and perform various tasks.
If you’re into app development, you’ve probably heard of Amazon Web Services (AWS) and Microsoft Azure. Both companies provide a wide range of cloud services, including virtual machines, storage, databases, and analytics.
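As a minimal example of the storage piece, the snippet below uploads a file to Amazon S3 with the boto3 SDK. The bucket and file names are hypothetical, and the call assumes AWS credentials are already configured in your environment.

```python
import boto3

s3 = boto3.client("s3")  # picks up credentials from the environment or AWS config

# Upload a local report to a (hypothetical) bucket; S3 handles durability and scaling.
s3.upload_file("sales_report.csv", "my-company-data-bucket", "reports/2024/sales_report.csv")

# List what is stored under that prefix to confirm the upload.
resp = s3.list_objects_v2(Bucket="my-company-data-bucket", Prefix="reports/2024/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```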
5G technology
It has already made significant impacts on various aspects of business and consumer life. It provides high-speed wireless internet, enabling people to work remotely from their desired locations. It allows for reliable and fast connectivity, facilitating seamless communication and collaboration.
With this digital transformation, businesses that rely on real-time interactions and transactional processes can operate more efficiently. Edge computing, combined with 5G technology, enhances the responsiveness and reliability of critical applications. What’s more, it facilitates high-resolution video streaming, 3D content, 360-degree videos, and augmented reality experiences. This capability opens up new opportunities for you to engage your customers through immersive and interactive brand experiences.
Ericsson is a global leader in the telecommunications industry and plays a crucial role in the development and deployment of 5G networks worldwide. They provide infrastructure solutions and services to enable the implementation of the technology. Another example is Qualcomm. It is a renowned semiconductor and telecommunications equipment company. They are actively involved in the development of 5G technologies, including chipsets and modems, to power next-generation mobile devices.
Cybersecurity
Cybersecurity is a critical technology trend in today’s digital age. As automation advances, so do the methods and complexity of cyber threats, making it essential for companies to invest in cutting-edge protection measures and data privacy. According to Hiscox’s 2022 worldwide study, 43% of organizations reported a cyber attack in 2021, and 48% reported at least one in 2022.
Several startups and famous companies are at the forefront of creating innovative solutions to combat the threats. Here are some notable examples:
- Apiiro Security: a cybersecurity company focused on its Code Risk Platform, which helps organizations identify and manage security risks during the software development process.
- Bishop Fox: this one provides a range of services, including offensive security assessments, vulnerability research, and threat intelligence, to help businesses identify and address vulnerabilities.
- Abnormal Security: the company offers a cloud-based email security platform to protect organizations against email-based threats, such as phishing and business email compromise (BEC) attacks.
Biotechnology
It is a rapidly growing field that combines biology and tech to develop innovative solutions for various industries.
This trend allows personalized medicine to tailor treatments based on an individual’s DNA by utilizing genomics and genetic information. The approach helps in providing more effective and targeted therapies. It also plays a crucial role in agricultural practices by developing crops with higher yields, resistance to pests, and lower toxic content. With biotechnology, people aim to create a more sustainable and nutrient-rich food supply.
Startups like Emulate are revolutionizing the biotech industry by expanding organ-on-a-chip technology. This approach mimics the functions of organs in a lab setting, allowing researchers to study diseases and test potential treatments more effectively.
Nanotechnology
In 2024, this tool is no longer science fiction. It involves manipulating and controlling matter at the nanoscale, the scale of individual atoms and molecules, and it is contributing to advancements in fields such as precision biotechnology, additive manufacturing, quantum computing, and composite materials. Nanotechnology is also being explored for applications in healthcare, electronics, energy, construction, and environmental sustainability.
As for the companies that already leverage this newborn trend, Hummink and CDotBio are worth mentioning. The first one is a startup specializing in 3D nano-printing technology. The second company provides low-cost genetic engineering solutions using carbon nanoparticles for crop gene editing, which is useful in agriculture.
Green-tech
Green technology, also known as green tech or cleantech, refers to the development and application of environmentally friendly and sustainable solutions to address various environmental challenges. It encompasses a wide range of sectors, including energy, transportation, waste management, agriculture, and more.
This emerging technology aims to reduce carbon emissions, conserve resources, promote renewable energy, and minimize the negative impact on the environment. As a technological trend, it has gained significant momentum in recent years, driven by the growing recognition of the need to combat climate change and create a more sustainable future.
Several startups and companies are actively developing green technology solutions. Here are some notable examples:
- Aurora Solar offers software for designing solar panel systems. Their platform enables installers to accurately assess the solar potential of a location, optimize system design, and streamline the sales process.
- Northvolt is a company that manufactures sustainable lithium-ion batteries by recycling used-up batteries. They contribute to the circular economy by repurposing battery materials and reducing the environmental impact of production.
Choosing the right technology trends for your startup
How to make a choice and pick the tendencies fitting your project? Here are three options you should consider:
Evaluate and prioritize the trends
To choose the right technology for your startup, evaluate its relevance to your specific business needs and objectives. Consider factors such as scalability, compatibility with existing systems, and budgetary constraints. Prioritize tech that aligns with your long-term vision and has the potential to deliver tangible benefits to your business.
Consider compatibility when adopting new technologies
Aside from scalability to accommodate future growth, you should also pay attention to compatibility with your existing infrastructure, and potential integration challenges. Assess the impact on your workforce, ensure adequate cybersecurity measures, and plan for appropriate training and support. Additionally, consider the total cost of ownership, including upfront investments, ongoing maintenance, and potential ROI.
Stay informed and engage with the technology community
To stay updated with emerging trends, follow industry publications, attend conferences and webinars, and join relevant online communities. Speak with experts, participate in discussions, and seek feedback from peers and mentors. Collaborate with other startups, research institutions, and technology accelerators to stay at the forefront of technological advancements.
Conclusion
Staying up-to-date with technology trends is imperative for small businesses and product owners to remain competitive in their field. Tools like Artificial Intelligence, machine learning, the Internet of Things, blockchain, AR, VR, and edge computing offer immense potential for transforming industries and creating new opportunities. By embracing these tendencies, you will improve efficiency, enhance customer experiences, and pave the way for future growth.
Remember, the future belongs to those who embrace technology and harness its power to innovate, disrupt, and create lasting impact. Seize the opportunities of the future and embark on your journey to success. Ready to create an app jam-packed with cutting-edge tools? Contact us to bring your project to life. | Emerging Technologies |
greener building materials — Humanity's love affair with cement and concrete results in massive CO2 emissions.

Cement works, Ipswich, Suffolk, UK. (Photo by BuildPix/Construction Photography/Avalon/Getty Images)

Nobody knows who did it first, or when. But by the 2nd or 3rd century BCE, Roman engineers were routinely grinding up burnt limestone and volcanic ash to make caementum: a powder that would start to harden as soon as it was mixed with water.
They made extensive use of the still-wet slurry as mortar for their brick- and stoneworks. But they had also learned the value of stirring in pumice, pebbles, or pot shards along with the water: Get the proportions right, and the cement would eventually bind it all into a strong, durable, rock-like conglomerate called opus caementicium or—in a later term derived from a Latin verb meaning “to bring together”—concretum.
The Romans used this marvelous stuff throughout their empire—in viaducts, breakwaters, coliseums, and even temples like the Pantheon, which still stands in central Rome and still boasts the largest unreinforced concrete dome in the world.
Two millennia later, we’re doing much the same, pouring concrete by the gigaton for roads, bridges, high-rises, and all the other big chunks of modern civilization. Globally, in fact, the human race is now using an estimated 30 billion metric tons of concrete per year—more than any other material except water. And as fast-developing nations such as China and India continue their decades-long construction boom, that number is only headed up. Unfortunately, our long love affair with concrete has also added to our climate problem. The variety of caementum that’s most commonly used to bind today’s concrete, a 19th-century innovation known as Portland cement, is made in energy-intensive kilns that generate more than half a ton of carbon dioxide for every ton of product. Multiply that by gigaton global usage rates, and cement-making turns out to contribute about 8 percent of total CO2 emissions.
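That 8 percent figure can be sanity-checked with rough, back-of-the-envelope numbers. The inputs below are only indicative and published estimates vary, but they land in the same range.

```python
# Back-of-the-envelope check of cement's share of global CO2 emissions.
# All inputs are rough, illustrative figures; published estimates vary.
cement_production_gt = 4.1        # gigatons of cement per year (approx.)
co2_per_ton_cement = 0.7          # tons of CO2 per ton of cement, process + energy (approx.)
global_co2_emissions_gt = 37.0    # gigatons of CO2 per year, all sources (approx.)

cement_co2_gt = cement_production_gt * co2_per_ton_cement
share = cement_co2_gt / global_co2_emissions_gt
print(f"~{cement_co2_gt:.1f} Gt CO2 from cement, ~{share:.0%} of the global total")
```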
Granted, that’s nowhere near the fractions attributed to transportation or energy production, both of which are well over 20 percent. But as the urgency of addressing climate change heightens public scrutiny of cement’s emissions, along with potential government regulatory pressures in both the United States and Europe, it’s become too big to ignore. “Now it’s recognized that we need to cut net global emissions to zero by 2050,” says Robbie Andrew, a senior researcher at the CICERO Center for International Climate Research in Oslo, Norway. “And the concrete industry doesn’t want to be the bad guy, so they’re looking for solutions.”
Major industry groups like the London-based Global Cement and Concrete Association and the Illinois-based Portland Cement Association have now released detailed road maps for reducing that 8 percent to zero by 2050. Many of their strategies rely on emerging technologies; even more are a matter of scaling up alternative materials and underutilized practices that have been around for decades. And all can be understood in terms of the three chemical reactions that characterize concrete's life cycle: calcination, hydration, and carbonation. | Emerging Technologies
The sun sets behind transmission lines in Texas on July 11. (Nick Wagner/Xinhua via ZUMA Wire)

This story was originally published by Slate and is reproduced here as part of the Climate Desk collaboration.
Once again, Texas’ ability to keep electricity flowing to its nearly 30 million residents is in doubt: Searing heat waves, and the heightened energy use they’re spurring, are stressing the state’s grid to a nearly calamitous degree. On Sunday—the second-hottest July day recorded in the state since 1950—the Electric Reliability Council of Texas, which oversees the state’s power capacity, asked Texans to voluntarily cut back on their electricity use during peak demand hours on Monday. The ask: turn up thermostats, avoid using appliances like dishwashers and laundry machines, and delay electricity use in general, even as temperatures in some areas rise to 110 degrees Fahrenheit. The Texas Department of State Health Services also sent out tips to stave off heat-related illness, noting that “Everything’s hotter in Texas.”
Many Texans are still scarred by the winter 2021 storm-caused blackouts, which cut off power to millions and led to hundreds of deaths. (The phrase “PTSD” became a local trend on Twitter earlier this year, during the anniversary of the tragedy.) And so they’ve greeted the latest grid stress with some fatalism, often with good cause. The same day ERCOT announced its latest request, with reassurances that mass outages wouldn’t reoccur, thousands of Travis County residents lost power thanks to private-utility damage from thunderstorms. One Texan told Fox Business, “We’ve seen [the grid] go down before for heat as well as cold. And as it’s warmer now, I don’t see it getting any better.”
Thankfully, the worst-case scenario did not come to pass on Monday (even though some Texans “reported sporadic brownouts throughout the state,” according to the Washington Post). State power demand, which had reached a record high last week most likely because of full-blast air conditioners, appeared to lower on Monday and reduce pressure on the grid. Power-sucking, industrial-scale Bitcoin miners who’d set up operations within the state over the past year have shut down their rigs until the heat wave passes, freeing up at least 1 percent of state grid capacity, according to the Texas Blockchain Association. Cities like San Antonio took energy-conservation measures at the municipal level. And ERCOT reported, with some relief, that Texans had indeed voluntary slowed down their power use, altogether saving up to 500 megawatts (an amount that, on its own, is sufficient to fully power 100,000 homes). While it’s certainly not unreasonable to ask citizens to forgo certain activities for the public good, some wondered whether life-or-death decisions involving electricity reliability should fall on Texans every time there’s a weather threat, no matter what kind. US Rep. Eddie Bernice Johnson called it “outrageous and inexcusable” that Texans were being told their power supply could handle neither extreme cold nor extreme heat. Beto O’Rourke, who is challenging Gov. Greg Abbott, tweeted: “We can’t rely on the grid when it’s hot. We can’t rely on the grid when it’s cold. We can’t rely on Greg Abbott.”
For his part, Texas’ governor hasn’t seemed too perturbed, even as heat waves continue across the Lone Star State. Back in February, in the midst of uniquely icy storms, Abbott publicly praised his administration’s electricity operations, stating that “The Texas electric grid is the most reliable and resilient it’s ever been.” That comment was thrown back at him this Monday, when a local ABC affiliate asked Abbott whether these requests to “voluntarily conserve” energy still meant that the state was prepared—or not—for a “potentially hotter August.”
“If you look at the way the grid has performed so far, it’s performed remarkably well,” the governor responded. “The laws that we passed in the last session gave ERCOT the flexibility they needed. … We will be able to make it through the summer.” He further added that since 2021’s catastrophic winter storms, “no Texan has lost power as a result of any problem with ERCOT” thanks to said grid reforms.
Unfortunately, this week wasn’t the only close call—and it’s far from clear that Texas’ grid has been “fixed.”
Early last year, the Texas Legislature took up Abbot’s calls for “ERCOT reform,” which he deemed an emergency-level priority. Much of the state grid’s vulnerability was long baked in, after all: Texas’ power operates independently of the rest of the country and top-down federal regulations, the natural gas lines feeding into the grid were not bolstered to handle extreme weather (nor were the nuclear plants or wind farms), issues with suitable energy transmission weren’t discovered until after the blackouts, and sources of backup power generation were lacking.
Though plenty of legislation had been introduced in the Texas House by March 2021, only two of the bills were signed into law by June. Senate Bill 2 gave the Texas government more control over ERCOT’s makeup and governance, following the firings and resignations of several officials involved in the 2021 crisis; Senate Bill 3 was more involved, including a proposed overhaul of emergency alert systems, a requirement for state regulators to review the availability of energy reserves, and orders for power generators as well as transmission lines to bolster their weather resiliency. After signing the bills, Abbott claimed they “fixed all of the flaws” with the grid and that “everything that needed to be done was done to fix the power grid in Texas.” Not everyone agreed with this: Critics noted at the time that the measures, which would take years to fulfill, didn’t go far enough in incentivizing electricity generators to fix their issues, or in compensating residents who lost loved ones or money during the winter crisis. Plus, enforcement of these laws was passed on to industry-friendly regulators that likely wouldn’t crack down hard on natural gas companies (which are not overseen by ERCOT). Almost as if to drive the faults home, just a week following the bills’ passage, ERCOT called on Texans to check their electricity consumption to avoid potential blackouts, after 12,000 megawatts of power were knocked offline for unknown reasons. By November, ERCOT had released a report claiming that the grid likely wouldn’t be able to withstand another serious disaster.
Still, energy operators kept touting their changes. In December, the Texas Public Utility Commission—which oversees ERCOT—guaranteed residents that they would not lose power in the event of another severe winter event, citing inspections of generators and transmission operations, proposed fines on power providers that didn’t weatherize their systems, and assurances from the state Railroad Commission that gas lines would become sturdier. And around this time, Abbott personally met with cryptocurrency miners to encourage them to consume even more power for their virtual-finance farms—so that they could drive up energy demand and fuel the construction of more backup power plants within the state, even though Texas was already set to see record demand throughout 2022. For Abbott, this was a chance to help Texas “get through the winter” and its subsequent energy travails, per Bloomberg—even though in November, he’d informed a local TV station that he could already “guarantee the lights will stay on.”
Such promises were quickly put to the test. In January of this year, snowy weather shut down 12 percent of the state’s natural gas lines, but the electric grid held on. Afterward, ERCOT moved to pay extra money to keep more backup power online for more of the year. But by February, Abbott was already doubling back, stating that “no one can guarantee” there wouldn’t be another series of power outages. Winter may have gone by without another large-scale power tragedy, but the summer hit hard: May saw another call from ERCOT for domestic power conservation after six power plants unexpectedly quit, and June saw a new record in statewide electricity use. However, the Dallas Morning News reported, solar and wind capacity helped to keep up with this surge, providing one-third of the power Texans were then relying on, and demand was further reduced as Bitcoin farms powered down.
For the time being, it does appear as though ERCOT has been more cautious and preemptively alert to weather-induced electricity issues as well as needed backup reserves, while now actually notifying Texans of emergencies unlike last time; plus, both Bitcoiners and households have been ready to reduce consumption when told to. As of this week, the worst appears to have been averted again. This may not be too reassuring, however. After all, blistering heat will keep warming Texas all summer, likely leading to more records in power demand plus additional climate-linked disasters like wildfires.
Electric power in Texas is getting more expensive thanks both to worldwide energy inflation and heightened costs from long-overdue grid fixes. Meanwhile, further solutions like improved home insulation, out-of-state electricity backup, governmental accountability from gas and electric providers, and manufacturing of more energy-efficient home appliances aren’t anywhere in sight. There’s still a long, hot summer ahead—and a sure-to-be freezing winter after that. Hopefully the lights will really stay on.
This story is part of Future Tense, a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society. | Emerging Technologies |
- A group formed by Meta, Microsoft, Amazon Web Services and TomTom is releasing data that could enable companies to build maps that rival products from Google and Apple.
- The data notably includes 59 million "points of interest," such as restaurants and landmarks, and was collected and donated by Meta and Microsoft.
Google and Apple dominate the market for online maps, charging mobile app developers for access to their mapping services. The other mega-cap tech companies are joining together to help create another option.
The Overture Maps Foundation, which was established late last year, captured 59 million "points of interest," such as restaurants, landmarks, streets and regional borders. The data has been cleaned and formatted so it can be used for free as the base layer for a new map application.
Meta and Microsoft collected and donated the data to Overture, according to Marc Prioleau, executive director of the OMF. Data on places is often difficult to collect and license, and building map data requires lots of time and staff to gather and clean it, he told CNBC in an interview.
"We have some companies that, if they wanted to invest to build the map data, they could," Prioleau said. Rather than spending that kind of money, he said, companies were asking, "Can we just get collaboration around the open base map?"
Overture is aiming to establish a baseline for maps data so that companies can use it to build and operate their own maps.
For many companies, Google's and Apple's maps aren't ideal, because they don't provide access to the underlying data. Instead, those companies allow app makers to use their maps as a service and, in many cases, charge each time the underlying map is accessed.
For example, app makers pay per thousand Google Maps lookups through an application programming interface (API). Apple allows access to Apple Maps for free for native app developers, but web app developers need to pay.
"That works for a lot of people, but not for others," Prioleau said.
Overture is only offering the underlying map data, leaving it up to companies to build their own software on top of it.
Digital maps are important for nearly all mobile apps. Emerging technologies such as augmented reality and self-driving cars also require high-quality mapping software to work. Using Overture's data, companies can integrate their proprietary information, such as exact pickup locations for a delivery app, to customize their offerings.
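As a rough sketch of what that layering can look like in practice (this is not Overture's actual schema or tooling; the record fields, place IDs, and coordinates below are invented for illustration), a delivery company might merge its own pickup points over an open base layer of points of interest:

```python
from dataclasses import dataclass

@dataclass
class PlaceRecord:
    # Minimal point-of-interest record; field names are illustrative,
    # not Overture's actual schema.
    place_id: str
    name: str
    category: str
    lat: float
    lon: float
    source: str = "base_map"

def overlay_proprietary_places(base_layer, proprietary_layer):
    """Merge proprietary records over the open base layer.

    Records that share a place_id are overridden by the proprietary
    version (e.g. a verified pickup location); everything else from
    the base layer is kept unchanged.
    """
    merged = {p.place_id: p for p in base_layer}
    for p in proprietary_layer:
        merged[p.place_id] = p  # proprietary data wins on conflicts
    return list(merged.values())

# Open base-layer data (illustrative values only).
base = [
    PlaceRecord("poi-001", "Corner Cafe", "restaurant", 47.6097, -122.3331),
    PlaceRecord("poi-002", "City Museum", "landmark", 47.6205, -122.3493),
]

# Proprietary layer: the delivery app's exact pickup point for the cafe.
pickups = [
    PlaceRecord("poi-001", "Corner Cafe (rear entrance pickup)",
                "pickup_point", 47.6098, -122.3329, source="internal"),
]

for place in overlay_proprietary_places(base, pickups):
    print(place.place_id, place.name, place.source)
```

In a real pipeline the base records would be loaded from Overture's published datasets rather than hard-coded, but the merge-and-override step is the customization the article describes.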
Overture isn't the first organization to strive to create map data that can be used freely or cheaply. OpenStreetMap, founded in 2004, creates maps using crowdsourced data. Meta uses the data in its maps.
Prioleau, who worked at Meta until earlier this year, says Overture seeks to distinguish its data from OpenStreetMap's by being more closely vetted and curated.
One big challenge is keeping the map data up to date, as businesses close and roads change. The foundation hopes its members can contribute enough real-time information to enable the regular release of accurate updates instead of a one-time data dump. Prioleau envisions using artificial intelligence technology and other automated techniques to help.
"You build maps for the rest of your life," Prioleau said, "which is also one of the reasons why these companies said, 'Hey, we don't get any huge benefit from cleaning up data, right? We're willing to share that, that's not a strategic advantage for us.'"
| Emerging Technologies
Microsoft has teamed up with Aptos Labs to build AI tools that will enable banks to explore blockchain integrations on Azure. Notably, AI (artificial intelligence) continues to take the world by storm.
Following in the footsteps of other big tech companies, Microsoft is sparing no effort to expand its footprint in the AI space. In line with this, the American technology company is also set to oust Cortana in favour of AI.
Now, the company has announced that it is collaborating with layer-1 blockchain Aptos Labs to work on AI and web3. Co-founder and CEO of Aptos Labs Mo Shaikh told TechCrunch+ that the primary focus for both companies is "solving our respective industries [sic] problems."
As part of the collaboration, Microsoft's AI models will be trained using Aptos' verified blockchain information, Shaikh noted. Aside from this, Aptos is planning to run validator nodes for its blockchain on Microsoft's Azure cloud in a bid to enhance the reliability and security of its service.
In an email, Daniel An, global director of business development for AI and web3 at Microsoft, told TechCrunch+ that the tech behemoth believes that "AI will be infused into web3 solutions at greater scale in the coming months and years." It is no secret that AI is having a huge impact on society.
Shaikh pointed out that people can become efficient in using these tools in their daily lives, "whether it's searching and putting together an index of the best restaurants in your neighborhood or helping you write code for your job or research," the top executive explained.
Aptos to leverage Microsoft's infrastructure
According to a press release, Aptos will take advantage of Microsoft's infrastructure to deploy new offerings that unify AI and blockchain technology, including a new chatbot dubbed Aptos Assistant. This new chatbot is designed to answer questions about the Aptos ecosystem.
In addition to this, Aptos Assistant will provide resources to developers building decentralised apps and smart contracts. Notably, the chatbot draws its power from Microsoft's Azure OpenAI Service. Moreover, Aptos is integrating its native programming language called Move into GitHub's Copilot service.
It is worth noting that GitHub's Copilot service is an AI programming tool. Rashmi Misra, general manager of AI and emerging technologies at Microsoft, divulged some key details about fusing Aptos Labs' technology with Microsoft Azure OpenAI Service capabilities.
Apparently, this will enable the company to democratise the use of blockchain and enable users to "seamlessly on-board to Web3 and innovators to develop new exciting decentralized applications using AI." Furthermore, the two companies plan to explore blockchain-based financial-service products such as central bank digital currencies, payment options, and asset tokenization.
What else to expect from this collaboration?
An Aptos representative told CoinDesk that Microsoft and Aptos Labs have teamed up to bring these tools to life. "This is a collaboration from day one," the rep said. Aptos Labs' team comprises Web3 developers, Ph.Ds, and AI experts, who are working directly with Microsoft's AI team.
Their work involves training models, integrating AI technology into the Aptos Assistant, and connecting the GitHub elements with Aptos' blockchain. They also work together to find the best resources for developers and casual visitors who are interested either in learning more about building on Aptos or simply in asking questions about the Aptos ecosystem.
Blockchain developers haven't shied away from adopting AI technology in recent months amid the skyrocketing popularity of tools like ChatGPT. Its use is likely to increase in the coming days since OpenAI, the company behind the widely popular AI chatbot, is reportedly set to roll out new updates to make ChatGPT more useful.
Likewise, venture capitalists have shifted their focus to AI as well, using AI integration as the key to courting tech talent and fundraising.
| Emerging Technologies
Published on Sep 09, 2022

Major industries have witnessed a massive digital transformation in recent years. Today, consumer expectations and demands are setting the pace of progress in the consumer-driven market. Companies willing to take this digital leap of faith can better serve their customers, leapfrog over long-standing competitors, and drive change. Digitization is revolutionizing the healthcare experience for consumers, and recent innovations are designed to complement the overall consumer experience. Digital technology is assisting organizations across different domains to improve their operations, revenue, compliance, claims, and underwriting efficiency, along with sales productivity and speed to market. While digital innovation is becoming the norm in many industries around the world, the pandemic expedited the urgency to reposition to digital and, out of necessity, pushed many organizations into it faster than expected. Digital transformation is now enabling healthcare companies to meet new customer expectations, enhance work structures, and improve their processes and profitability in a world driven by new working norms.

Organizations need to adapt to colliding forces to stay relevant as well as competitive in today's marketplace. With emerging technologies, rising customer expectations for better experiences, and shareholder pressure to create loyalty gaining center stage, organizations are being forced to consider the following:

- How to improve the customer as well as the employee experience to create enduring relationships and sustainable value?
- How to generate better outcomes for patients and stakeholders, along with driving purposeful growth?
- How to stay relevant and updated with ever-changing customer needs and expectations?
- Ways to eliminate friction points for customers to drive a more relevant, personalized experience
- How to integrate technology and data to automate and orchestrate operations across all channels to improve the customer experience?

Digital Revolution in Healthcare

Today a new term is entering as well as circling the healthcare lexicon. That word is "consumer," as recently pointed out by Harvard Business Review. The article presented a clear differentiation between the terms "patient" and "consumer." While a patient is a person who is receiving healthcare, a consumer is an individual who makes all the decisions about obtaining good care or service and then proceeds to obtain it. In the past, healthcare providers were more focused on patients and the quality of the technical aspects of care instead of on consumers. This led to an almost wholesale neglect of the consumer aspects of health and healthcare, such as convenience, low cost, and friendliness. However, healthcare is progressively moving to value-based care, where reimbursement is tied to outcomes and patient satisfaction and where providers will be held accountable for the health-related decisions of consumers. This shift in focus from the patient to the consumer is transforming healthcare, and healthcare payers are leading the way in many cases. Many healthcare organizations are now focusing on ways to incentivize people to stay healthy, which can be considered the ultimate in consumer satisfaction.
Solving the Issue of Prior Authorization

Technology is often integrated into operations to enhance consumer satisfaction in healthcare. One example is the rise of online medical portals where patients can check their lab results, make appointments, keep track of their medications, and much more. Patient or consumer satisfaction declines where low-tech processes still predominate, like prior authorization, which is a paramount requirement of both government and commercial healthcare payment plans. This is burdensome because archaic technology still dominates the prior authorization process. As per a report issued by the Council for Affordable Quality Healthcare (CAQH), only 26% of prior authorization requests were handled electronically in 2020, whereas an overwhelming 74% were still handled via telephone or fax.

In addition to the direct cost of obtaining and managing prior authorizations, there are consequential capacity utilization costs. To avoid downstream denials for high-cost diagnostic and treatment services, healthcare organizations often require prior authorization. Health systems routinely reschedule patients if the authorization has not been received 48 hours before the scheduled appointment. However, it is rare for the organization to fill that slot on such short notice, and as a result expensive equipment and highly trained clinicians are left idle. While the impact on healthcare operations is significant, the negative effect on patients cannot be underestimated. It is usually stressful for patients when their physician orders specialized care, like an MRI or CT scan. Add to that the frustrating as well as confusing process created to obtain prior authorizations, which leads to delays in patients' access to necessary care.

Existing digital advancements are assisting professionals by providing:

- An improved mechanism for documentation that helps minimize the back and forth, thereby saving time and improving customer satisfaction.
- Mobile tools that offer real-time photo submission mechanisms for claims, with input controls within the apps helping to drive completeness.
- Biometrics, including well-implemented voice analytics, which also aid in improving security for customer ease and operating efficiencies.
- Marketing tools with built-in data analytics that help in gaining insight into customer inputs and behavior, making it easier for organizations to identify customer needs and possible opportunities to launch new products.
- Monitoring, ranging from telematics to smart devices, which offers critical decision-making data, including real-time driver behavior data for smart monitoring and application.
- Help with process streamlining and automation of manual processes for cost management and profitable growth.

When healthcare payers decided to move their insurance claims to a digital format, they were commended for undertaking that transformation. Doing the same for prior authorization will likely do wonders for the patient as well as the consumer experience. Replacing the outdated technology at the heart of prior authorizations will be the first big step in reforming the process, thereby making quality healthcare more accessible to all. And the good news is that many forward-thinking healthcare providers are now paving the way.
Summing Up the Pulse

Healthcare industry professionals, as well as leadership, need to outline a framework for using technology to handle prior authorization. Most of the advanced tech tools required to make this process more efficient already exist, and technology that does not yet exist can be developed and incorporated. Even so, the challenge lies in changing ingrained habits and long-standing standard operating procedures in the healthcare system.

Here are a few tips to support the adoption of new technologies and foster change management in an organization:

- Identify the underlying, bite-sized pieces of the overall change to focus on. Leadership must understand the necessity for enterprise-wide systematic changes; however, when rolling out modifications, it is vital to start with smaller steps to get employees on board.
- Facilitate clarity regarding the new tasks and behaviors that are desired. Leadership must proactively verbalize that innovation is the need of the hour. It is important to explore ways to reward ideas that employees contribute and to design annual performance evaluations and goals around innovation, so that employees can respond to them.

To improve the electronic exchange of healthcare data and streamline processes related to prior authorization, the healthcare industry is seeking solutions. In fact, the Centers for Medicare & Medicaid Services (CMS) has started working to change existing policies for government-sponsored healthcare plans to reduce the use of fax technology across all programs and bring in a digital revolution. With CMS leading the way, the year 2022 will likely bring in technology that allows organizations to open up a prime bottleneck in the existing healthcare system. | Emerging Technologies
IBM chief executive Arvind Krishna’s latest comments on generative AI-related job losses paint a worrying picture of the fate awaiting workers globally as enterprises look to adopt the technology and boost productivity.
Krishna shed more light on how the company views recent developments in the generative AI space in an interview with CNBC, noting that the productivity benefits afforded by the technology should not be overlooked by enterprises.
Long-term, he said, generative AI has the potential to “make every enterprise process more productive” and unlock marked benefits for organizations - but that will likely come at the expense of human-held jobs.
“That means you can get the same work done with fewer people,” he said. “That’s just the nature of productivity. I actually believe that the first set of roles that will get impacted are - what I call - back office, white-collar work.”
Krishna’s comments come as no surprise given the widely-documented risks to human workers through generative AI over the last nine months.
“Up to 300 million” jobs globally could be lost to automation in the coming years, according to a report from Goldman Sachs, while a study from McKinsey in July found that administrative roles, such as office support, customer service, and HR will see a decline in roles due to generative AI.
What Krishna’s comments do point to is that the veneer of positive marketing positioning generative AI as a mere support tool for human workers is showing cracks.
IBM has been bullish on generative AI throughout 2023, and Krishna has been outspoken on the company’s push to deeply integrate these tools within day-to-day operations. In May, the chief exec revealed that around 7,800 staff in non-customer-facing roles could be replaced by AI, equivalent to roughly 30% of staff currently occupying these positions.
The firm has gone so far as to freeze hiring for roles in human resources and assorted “back office” positions in anticipation of automation on these fronts.
IBM isn’t alone in its focus on AI integration and cutting human workforces, however.
Earlier this year, BT announced it plans to lay off tens of thousands of workers in the coming years and automate a slew of roles. The cuts will see the telecoms giant lay off around 40% of its workforce by the end of the decade.
The firm framed this as a strategy to create a “leaner business”, but the focus on integrating AI that forms a key aspect of this strategy will see up to 10,000 roles cut and replaced with AI.
Administrative and “back office” roles aren’t the only positions in the crosshairs amid the generative AI boom either. KPMG research in July found that tasks performed by programmers and software developers are at risk, along with IT support technicians.
Despite this, Krishna appeared insistent that augmenting human workers is still the key focus, and not workforce consolidation and automation.
“It’s absolutely not displacing, it’s augmenting,” he told CNBC. “The more labor we get, especially if it’s not human based at all, we can create more GDP. We should all feel better about it.”
This position does align with recent analysis from the International Labour Organization (ILO), which suggested that most jobs and industries are “only partially exposed” to automation, meaning roles are “more likely to be complemented rather than substituted by AI”.
“The most important impact of the technology is likely to be of augmenting work,” the ILO study stated.
| Emerging Technologies
The head of MI5, Britain’s security service, has made an unprecedented public appearance alongside his counterparts from the US, Canada, Australia and New Zealand to warn about the emerging threats posed by Artificial Intelligence.
Ken McCallum is in Silicon Valley, California, with members of the Five Eyes intelligence partnership for the first Emerging Technology and Securing Innovation Security Summit.
Speaking ahead of the summit, which brings business leaders, entrepreneurs, and academics face-to-face with the top security chiefs from five nations, Mr McCallum said: "The UK is seeing a sharp rise in aggressive attempts by other states to steal competitive advantage. It’s the same across all five of our countries."
He will appear alongside his Five Eyes counterparts, including FBI Director Christopher Wray, in an attempt to alert companies, large and small, about the threats they face, predominantly from China.
Mr McCallum, who has been the director general of the UK’s domestic security agency since 2020, said: “The stakes are now incredibly high on emerging technologies; states which lead the way in areas like artificial intelligence, quantum computing and synthetic biology will have the power to shape all our futures.”
He warned: “We all need to be aware, and respond, before it’s too late."
The idea behind the spooks coming out of the shadows is to improve public awareness, particularly among smaller companies that may not realise they are at risk.
The five governments are expected to release a joint five-point set of principles to help companies secure their innovation.
The intelligence leaders are also expected to sit down with private sector leaders for in-depth discussions about expanding and strengthening private-public partnerships in order to better protect innovation, and the collective security of the five nations and their citizens.
FBI director Christopher Wray said: "Emerging technologies are essential to our economic and national security, and America’s role as a leading economic power, but they also present new and evolving threats."
Australia’s Intelligence chief, Mike Burgess, added: "The Summit is an unprecedented response to an unprecedented threat.
"The fact the Five Eyes security services are gathering in Silicon Valley speaks to the nature of the threat and our collective resolve to counter it."
The Five Eyes coalition was formed in 1946 and allows the five nations to enhance their intelligence sharing and better coordinate their domestic and international shared security. | Emerging Technologies |
AUKUS is about much more than nuclear submarines and will have ‘transformative’ benefits for the rest of industry
The AUKUS agreement is about much more than nuclear submarines, with Australia's outgoing ambassador to the United States saying the "spillover benefits" will be "transformative".
The AUKUS agreement will have a “transformative” impact on Australia’s industrial base, Australia’s outgoing ambassador to the United States has argued.
Speaking to Sky News Australia’s Andrew Clennell, Ambassador Arthur Sinodinos said the AUKUS agreement was about much more than acquiring nuclear submarines.
“This is a whole of nation effort, this is about being aspirational and ambitious about what we can do as a nation,” Ambassador Sinodinos said.
“It’s quite a complex and sophisticated operation… Not just on the submarine side but what’s called pillar two, the advanced capabilities – technologies, critical and emerging technologies, that will lay the basis for future industries.”
While the acquisition of nuclear submarines has dominated the headlines in relation to AUKUS, the agreement also includes a provision for the three countries to “develop and provide joint advanced military capabilities to promote security and stability in the Indo-Pacific region.”
This includes greater information sharing and innovation work on the development of quantum technologies, artificial intelligence, advanced cyber capabilities, electronic warfare capabilities, hypersonic and counter-hypersonic capabilities, and autonomous undersea capabilities.
According to Mr Sinodinos, who finishes his tenure as Ambassador in the coming days, the “spillover benefits” this will have for the rest of Australia’s industry are “incalculable.”
“Don’t underestimate the impact AUKUS will have on our capacity to integrate our industrial bases, our capacity to share information, to share technology, and create common platforms. This is the way of the future…. in that sense this is quite a path-breaking agreement.”
“This development, if we come at this as a whole of nation effort, can be really transformative for our industrial base.”
Ambassador Sinodinos said he was “bowled over” by the scale of the ambition when he was brought into negotiations that eventually led to the AUKUS agreement.
“The nuclear navy at first were quite skeptical, they wanted to work out whether we had the capability to undertake what they call the nuclear stewardship – because they maintain such very high standards of safety and performance,” he said.
“But once we engaged the strategic side, the White House, they saw the strategic benefits of this.
“They said to us afterwards, this is a multi-decadal commitment, we are bound together – in one case they said forever.”
The Ambassador said that President Biden had spent a lot of time thinking about the issue during 2021, because of his strong nuclear non-proliferation credentials, “but once he was convinced this was the right thing to do strategically, he really embraced it.”
“In the last few days he’s been making all sorts of calls to congresspeople, he’s spoken with other leaders, he’s all in now. And we saw that enthusiasm in San Diego,” the Ambassador said.
Ambassador Sinodinos was also asked whether he had any concerns about Kevin Rudd’s ability to fill the role of US ambassador, given past comments where the former Labor prime minister said AUKUS was Scott Morrison trying to make himself look “important and hairy chested.”
Mr Rudd also told SBS French that he was “concerned about the long-term impact” AUKUS would have on the Australian sovereignty.
“As an ally of the US, you don’t end up agreeing with them on every element of strategy. Sometimes our American friends get it wrong,” Mr Rudd said in 2021.
But Ambassador Sinodinos, who was chief of staff to prime minister John Howard before serving as a Liberal Senator for NSW, said he was excited about what Mr Rudd would be able to achieve.
“Kevin accepts that as ambassador he’s here to talk on behalf of the country and on behalf of the government he represents. Just as I’ve been doing under both Liberal and Labor governments.
“He’ll be working day and night once he’s the ambassador here to implement the AUKUS agreement… and I think he can be very effective in doing that.”
“He’s got a great network himself which I think we can really leverage in this town, in Washington DC. And he also brings his China knowledge, which is so extensive, so I’m quite excited by what that can mean in terms of what we can do in Washington and our effectiveness." | Emerging Technologies |
Will the renaissance of Chinese technology lead to its global domination of AI research and technology?
Digital China
Central to the Chinese government’s plan for the 21st century is to become the global leader in tech innovation. While substantial progress has been made towards this goal, the country has long been considered a copycat on the world stage, choosing to adapt western inventions rather than truly innovating. However, the tide is changing, with China pulling ahead on a number of emerging technologies. For AI, perhaps the most important emerging technology of the 21st century, this begs the question: is the future with China? Or will the country continue to sit firmly on the coattails of the USA?
Since the end of the Cold War, China has undergone a boom in living standards unprecedented in history. Some 700 million people have been lifted out of absolute poverty, with the rural poverty rate falling from 75% to near zero. A Chinese child born as the Berlin Wall fell will have seen a 30x increase in GDP per capita in their lifetime. The second-biggest jump, in Poland - another post-Cold War success story - was less than 10x.
One of the hallmarks of modern China is the rapid development and comprehensive adoption of digital technology. Again the pace of this change is unparalleled. In 2005, 70% of households in the USA had internet access; in China 10%. Today, China is arguably the world’s most deeply digital society, with vendors barbecuing on street corners preferring payment via mobile phones and QR codes. State infrastructure projects have built the world’s largest fibre-optic network and the number of 5G terminal connections outstrips any other region on earth. This staggering change is the result of a tightly choreographed dance between tech innovators and the CCP. The state intensely encourages private companies to build world-class products but will never yield an inch of overarching control.
Do innovation and Chinese politics mix?
Chinese tech companies operate as independent private companies but are ultimately answerable to the CCP. Chinese law demands that “any organisation or citizen shall support, assist and cooperate with the state intelligence work in accordance with the law”. Most Chinese people do not access the internet beyond the ‘great firewall’ - a vast and sophisticated filter which prevents access to many western apps, such as Google and Facebook. Hosting a server behind the firewall requires a licence from the government. Firms of more than 50 people are also required to hire a party representative to sit on their boards and the CCP has shown it is not afraid to flex its muscles with big tech. The outspoken founder of marketplace giant Alibaba, Jack Ma, recently disappeared suspiciously from the public eye; this coincided with a regulator crackdown on his business operations.
To those familiar with the laissez-faire conditions that Western tech giants tend to thrive in, this sort of state intervention would seem antithetical to innovation. However, China is home to 8 out of the 10 fastest companies to reach unicorn status and the second-highest number of unicorn companies worldwide. Typifying the strength of the Chinese tech sector is its poster boy Tencent, developer of the omnipresent WeChat, which rolls social media, ride-hailing and banking into a single sleek app. Although Tencent is comically aligned with the CCP, releasing an app in 2017 which allowed users to tap their phone screens to clap for President Xi's party conference speech, it offers a user experience rivalling or even outstripping western competitors. The result is a product foreign to the west: high-quality, state-aligned software.
Despite the meteoric rise of tech and tech companies in modern China, a key sticking point remains that must be unpicked before China can overtake the USA as the world's leading technological force. Can the Chinese system foster truly revolutionary products? While the "made in China" tag is no longer a marker of low quality, the might of the Chinese tech market has tended to yield evolution rather than revolution. Simply recreating an original western idea for the Chinese market can be hugely lucrative: the leading search engine Baidu generates over a billion dollars in yearly revenue, its main USP being that it does not challenge the CCP in the way Google would. This opportunity to build wildly successful companies from pre-existing technology strips away the incentive to be truly creative and sit on the bleeding edge.
China and AI
While China is not currently challenging the US hegemony on conceptually innovative technology, change is beginning to take shape. A domestic consumer market saturated with affordable high-quality technology and a newfound sense of China as a global leader has led companies to push the boundaries and look outwards. TikTok, launched by Beijing-headquartered ByteDance in 2018, has changed the face of media consumption for generation Z and become the first Chinese software company to achieve truly global penetration.
TikTok's success lies in AI: its powerful recommendation engine matches content to users and keeps them glued to the platform. While these recommendation algorithms are not conceptually new, TikTok demonstrates how Chinese companies can comfortably compete with their American counterparts.
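As a rough illustration of how such content matching works (this is a generic embedding-similarity sketch, not ByteDance's actual system; the vectors, watch times, and video IDs are invented), a recommender can build a profile from what a user has watched and rank new videos against it:

```python
import math

def cosine(a, b):
    # Similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def user_profile(watched_embeddings, watch_times):
    # Average the embeddings of watched videos, weighted by watch time,
    # to approximate the user's current interests.
    dims = len(watched_embeddings[0])
    total = sum(watch_times)
    return [
        sum(e[d] * w for e, w in zip(watched_embeddings, watch_times)) / total
        for d in range(dims)
    ]

def rank_candidates(profile, candidates):
    # Score every candidate video against the profile and sort best-first.
    return sorted(candidates, key=lambda c: cosine(profile, c["embedding"]),
                  reverse=True)

# Toy data: 3-dimensional "content embeddings" (invented numbers).
watched = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]      # e.g. cooking clips
watch_times = [45.0, 30.0]                          # seconds watched
candidates = [
    {"id": "vid-a", "embedding": [0.85, 0.15, 0.05]},  # more cooking
    {"id": "vid-b", "embedding": [0.05, 0.1, 0.95]},   # unrelated topic
]

profile = user_profile(watched, watch_times)
print([c["id"] for c in rank_candidates(profile, candidates)])  # vid-a first
```

Production systems layer in many more signals, such as freshness, completion rate, and creator diversity, but this match-score-rank loop is the core of what keeps users glued to the platform.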
The nature of AI research has meant that despite decades of field leadership, American outfits have been caught by China. Advances in AI don’t require a physical manufacturing process to implement and tend to be published openly, not guarded as trade secrets. As a result, a small team of machine learning engineers can replicate cutting-edge technology developed by competitors. This is inconceivable in areas such as chip manufacture and design which are closely guarded secrets and in which China’s domestic market lags severely behind and buys more than 90% of its chips from foreign companies.
But by far the sharpest tool in the Chinese AI arsenal is data. Data is the key driving force in AI, with most cutting-edge models requiring vast amounts of high-quality data to train. Often AI projects are infeasible due to a lack of data, rather than conceptual shortcomings. The current state-of-the-art generative language model, GPT-3, uses technology that is several years old; its sheer size and the volume of its training data give rise to its staggering performance.
Official CCP policy makes it clear this is no issue in China. Data privacy laws are few and far between, and it is an officially stated goal of the government to drive economic development through vast quantities of rapidly accessible data. This is nefariously demonstrated by Hikvision, the global leader in CCTV cameras, whose AI is able to recognise and track individuals, as well as monitor their facial expressions and clothing. The company has faced accusations of complicity in the mass surveillance and oppression of China's Muslim minority, and its technology has also been accused of being used to mark potentially dangerous individuals for re-education programmes.
Can China dominate AI?
But despite China’s data advantage and vast engineering workforce, becoming the dominant force in AI requires creativity as well as brute force. 2022’s most remarkable AI developments have come in generative image models, capable of creating images from text prompts. Although training these models requires raw computing power, their design was predicated on creative use of AI theory. Despite prioritising technological innovation, the CCP may scare away the best minds in AI, if they think their creativity will put them on a collision course with the state.
Perhaps most indicative of the difference between AI research in China and the USA is their respective publication records. Since 2016, China has published almost twice as many AI-related papers as any other country. But at the two most prestigious and high-profile conferences, ICML and NeurIPS, representation of US research is almost five times that of China's.
With politics predicated on lack of state intervention in private business, the USA has long been the destination of choice for tech’s brightest minds. But vast markets and rich data access could be the springboard for China to brush the privacy-focussed US aside and dominate AI. However, to break the mould of Chinese tech and fulfil the CCP’s goal of becoming the global AI powerhouse, China needs more than brute-force and data, it needs creativity and invention. If the country can find that, it will be the force in AI for decades to come. | Emerging Technologies |
IBM Collaborates With Government To Scale Digital Skills Training
IBM's courses will scale from school-level education to reskilling on technologies like AI, cloud and cybersecurity.
IBM, the Ministry of Education, and the Ministry of Skill Development and Entrepreneurship have tied up to provide curated courses to empower youth in India with future-ready skills.
The collaboration will focus on the co-creation of curriculum and access to IBM's learning platform, IBM SkillsBuild, for skilling learners across school education, higher education and vocational skills on emerging technologies like AI, including generative AI, cybersecurity, cloud computing and professional development skills.
According to IBM, the tech major's collaboration with MoE and MSDE spans three core levels of education:
School Education: IBM will provide access to digital content from IBM SkillsBuild for high school students, teachers and trainers on cutting-edge skills in schools identified by the Navodaya Vidyalaya Samiti, National Council for Teacher Education and the Kendriya Vidyalaya Sangathan, as well as the National Institute of Open Schooling. This programme will be offered online, via webinars and in-person workshops. IBM will also refresh the Central Board of Secondary Education's AI curriculum for Grades 11 and 12 and develop a cyber skilling and blockchain curriculum for high school students, to be hosted on IBM SkillsBuild.
Higher Education: IBM will work closely with the Department of Higher Education, the All India Council for Technical Education, the National Institute of Electronics & Information Technology, the National Institute of Technical Teachers' Training & Research, Chandigarh, and state skilling missions to onboard affiliated students and faculty to IBM SkillsBuild and provide them access to digital content, experiential learning, and fresh skills enabling them to take on technical careers.
Vocational Skills: IBM will continue its central collaboration with the MSDE and work closely with the Directorate General of Training and respective state vocational education and skilling departments to onboard job seekers, including the long-term unemployed and school dropouts, to IBM SkillsBuild and enable them to gain the technical and professional skills required to re-enter the workforce.
"India, with its vast and youthful population, stands at the cusp of tremendous potential," Dharmendra Pradhan, Union minister for education, said at the event. "To harness this demographic dividend, it is crucial to equip the youth with the necessary skills to excel in today's modern workforce."
"This collaboration marks a significant stride toward our vision of a Skilled India and in scaling up digital skills training and skill building in emerging technologies using IBM SkillsBuild platform," he said.
"IBM’s collaboration with MoE and MSDE ushers in a new era of opportunities in our rapidly evolving digital landscape," Sandip Patel, managing director, IBM India/South Asia, said. "We're dedicated to fostering a well-rounded approach to skill development, ultimately creating a more versatile and adaptable workforce."
"We are confident that this collaboration will contribute significantly to India's status as a digital talent hub," Patel said.
The growth of education focused on technology, digital and emerging short-term skills courses is a strategic imperative for the government, industry and academia. A recent study by the IBM Institute for Business Value, "Augmented Work for an Automated, AI-Driven World", found that surveyed executives in India estimate that implementing AI and automation will require more than 40% of their workforce to reskill over the next three years. | Emerging Technologies
The Department of Homeland Security recently warned that the threat from foreign terrorist organizations and the cyberthreat from adversaries like Russia, including election interference, will likely increase in 2022, according to an intelligence analysis obtained by ABC News.

The document, titled "Key Threats to the Homeland in 2022" and dated June 8, asserts that the greatest threat to the United States this year comes from lone wolf actors and small groups of individuals motivated by an array of extremist beliefs, like the alleged shooter in Buffalo, New York, who is currently facing hate crimes charges for killing 10 African-American shoppers at a grocery store. Federal law enforcement agencies including the DHS and Justice Department have prioritized combatting domestic violent extremism since the start of the Biden administration.

But DHS also warned about the potential for a resurgence of foreign terrorism as pandemic travel restrictions relax, and said such groups will be "highly visible" online, focused on messaging inspiring homegrown terrorism.

"Foreign terrorists probably will continue to hone their abilities to facilitate international travel, expand their networks, raise funds, and organize, ultimately to improve their ability to target the United States and the Homeland," the bulletin says. "While some travel was very likely curtailed by COVID-19 travel restrictions, we anticipate an increased threat from these actors in 2022 as travel restrictions are relaxed."

Some with terrorist connections may seek to travel to the United States and apply for tourist visas, DHS says.

"What this assessment reinforces is other intelligence indicating that the U.S. continues to experience unacceptable levels of violence by individuals inspired by extremist content promoted online by a diverse array of foreign and domestic threat actors," said John Cohen, an ABC News contributor and former acting undersecretary for intelligence and analysis at DHS. "Law enforcement officials are particularly concerned about the potential for targeted acts of violence directed at law enforcement, election, and other elected officials due to increased calls for violence by domestic violent extremists."

Threats to the nation's cyberspace stem from three countries, according to the assessment: China, Russia and Iran. Russia will continue to target U.S. critical infrastructure through various cyberattacks and sow discord in the country as it has in previous elections, according to DHS, in some cases leveraging emerging technology like artificial intelligence.

"Russian malign influence actors likely will attempt to dissuade U.S. voters from participating in the 2022 midterm elections using similar tactics employed during the 2020 and 2016 presidential elections, such as targeting audiences with false information about voting logistics, exacerbating racial tensions, and levying attacks or praise on candidates from either political party," the memo said.

"Russia probably could use emerging technologies to enhance its cyber and malign influence efforts when attempting to affect the outcome of U.S. elections. For instance, advancements in artificial intelligence allow for automated data analysis, classification, creation, and manipulation of digital content, which could be deployed in future foreign malign influence and disinformation campaigns."

Russia, according to the Department, embeds intelligence officers "to establish front companies and recruit Russian emigres and American citizens to steal sensitive US academic, government, and business information."

"It is a threat environment in which a diverse array of foreign and domestic threat actors use internet-based communication platforms to spread content intended to sow discord, inspire violence, undermine confidence in government institutions and achieve other illicit objectives," Cohen, the ABC News contributor, said.

Chinese state-sponsored actors "aggressively target U.S. political, economic, military, educational, and critical infrastructure personnel and organizations to steal sensitive data, key technologies, intellectual property, and personally identifiable information." The department assessed that PRC-sponsored hacking compromised at least 30,000 organizations by exploiting an email server.

"We assess that the PRC will seek to engage in a range of activities to support policies favorable to Beijing and its interests and will likely prioritize messaging against US audiences in the run-up to midterm elections to influence political outcomes favorable to Beijing," DHS says.

China's malign influence campaigns "use at least 30 social media platforms and more than 40 websites and niche forums in several languages, including English, Russian, German, Spanish, Korean, and Japanese, to target US audiences."

Additionally, ransomware will continue to increase, according to DHS, because it has been profitable for cybercriminals.

"We assess that ransomware attacks targeting US networks and infrastructure will increase in the near and long term because cybercriminals have developed effective business models to increase their financial gain, likelihood for success, and anonymity," the Department says.

The bulletin was first reported by the Daily Beast. | Emerging Technologies
Chemical and biological weapons pose a greater threat to global security today than at any point since the end of the Cold War. The COVID-19 pandemic revealed the United States’s disastrous vulnerability to infectious pathogens, novel diseases continue to spread worldwide, and the norms against the use of weapons of mass destruction (WMDs) are eroding. Without a concentrated effort to mitigate these risks, chemical and biological threats will continue to grow as state and nonstate actors gain access to new and more destructive technologies. Despite these growing dangers, the U.S. defense establishment remains less-than-fully prepared to deter and defend against chemical and biological weapons of mass destruction. In particular, the Department of Defense’s Chemical and Biological Defense Program (CBDP) — one of Washington’s most capable and effective programs to counter real-world WMD threats — remains woefully underfunded and slow to utilize existing resources. In an age of reemergent great-power competition, interstate conflict, and potential WMD proliferation, CBDP merits renewed attention by policymakers and Congress. Plugging gaps in the program’s funding and properly speeding up the use of current cash for existing products and novel technologies shouldn’t be difficult. Without too much extra, roughly $3 billion in fiscal year 2024, the United States can make a significant dent in these potentially existential issues — simultaneously protecting U.S. troops while drastically reducing the risk of catastrophic chemical or biological incidents worldwide. Chemical and biological threats aren’t science fiction. Russia has used Novichok chemical weapons in several botched assassination attempts: one in 2018 against former Russian intelligence officer Sergei Skripal and his daughter Yulia in the United Kingdom, and another against opposition leader Alexei Navalny just last year. In 2017, moreover, Pyongyang used VX nerve agent in Kuala Lumpur to assassinate Kim Jong Nam, the half-brother of North Korean leader Kim Jong Un. Analysts also are increasingly concerned about the proliferation of fentanyl and its potential use as a chemical weapon. On the biological side, the world continues to face a remarkable cascade of public health emergencies, including the COVID-19 pandemic, the global spread of monkeypox, a resurgent polio outbreak and, most recently, a worsening outbreak of vaccine-resistant Ebola in Uganda. Beyond the obvious risk to the American public, biological events threaten to degrade and destroy U.S. military capabilities. In the early stages of the COVID pandemic, for example, a large outbreak aboard the USS Theodore Roosevelt aircraft carrier infected more than 1,200 sailors, effectively disabling the ship. To its credit, the United States has released multiple reviews, strategies and plans designed to counter chemical and biological threats over the past two years. These multi-agency efforts, including the 2022 National Biodefense Strategy and Implementation Plan (NBS), highlight vital concerns about naturally occurring biological threats, engineered biological weapons, and chemical weapons. 
The NBS, in particular, underscores the need to “deter, detect, degrade, disrupt, deny or otherwise prevent nation-state and nonstate actors’ attempts to pursue, acquire or use biological weapons, related materials, or their means of delivery.” The Department of Defense’s recent National Defense Strategy also emphasizes the vital concept of so-called “deterrence by denial” — or the idea that the United States and its allies can deter the use of certain weapons by eliminating their effectiveness against both military and civilian targets. By “improving conventional forces ability to operate in the face of limited nuclear, chemical, and biological attacks,” the strategy explains, Washington can “deny adversaries benefit from possessing and employing such weapons.” While these U.S. government strategies are a good place to start, they lack muscle. A strategy, after all, is just a piece of paper unless it receives adequate funding. Eliminating chemical and biological weapons threats requires sufficient resources to develop cutting-edge capabilities and design effective counter- and non-proliferation regimes. The CBDP — given its leadership role in research, development and acquisition focused on chemical and biological threats — is the right place to start. Presently, the program’s budget is roughly $1.2 billion per year. This is simply not enough to research, test, develop and procure the detection, mitigation, early-warning and response capabilities needed to counter the vast array of contemporary chemical and biological threats, to say nothing of those that might emerge. To address this dangerous imbalance, Washington needs to match recent upgrades to U.S. strategy with comparable resources. With additional funding and spending, the CBDP could invest further in vital emerging technologies, including stand-off detection, predictive wearables, and advanced protective suits — all of which would help the U.S. military protect its fighting advantage. Other tools, including point-of-care diagnostics, artificial intelligence-enabled biosurveillance, and broad-spectrum medical countermeasures can ensure that Washington maintains its ability to quickly identify, track and treat emerging threats. We believe bringing the CBDP budget up to $3 billion for 2024, and growing in subsequent years (while ensuring that the program effectively spends the resources it already has) would allow this. If not, the U.S. ability to develop these tools is hampered by serious resource constraints. In recent years, the Defense Department has consistently underfunded and under-executed the CBDP. This program is ultimately a bargain — especially compared to most other major defense programs. Although the threats posed by chemical and biological weapons are very real, so are the solutions. For the cost of a few aircraft, the United States can protect its soldiers overseas from a host of deadly weapons and, more broadly, head off a potentially catastrophic global chemical or biological incident. Fully funding counter-WMD programs is not only responsible policy but a sound investment and a small price to pay for decades of enhanced U.S. and global security. Andrew Weber is a senior fellow at the Janne E. Nolan Center on Strategic Weapons at the Council on Strategic Risks. He was U.S. Assistant Secretary of Defense for Nuclear, Chemical and Biological Defense Programs during the Obama administration. Follow him on Twitter @AndyWeberNCB. 
David Lasseter is founder of Horizons Global Solutions and a visiting fellow at the National Security Institute at George Mason University. He was Deputy Assistant Secretary of Defense for Countering Weapons of Mass Destruction during the Trump administration. Follow him on Twitter @dflasseter. | Emerging Technologies |
By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered as follows: Section 1. Policy. It is the policy of my Administration to coordinate a whole-of-government approach to advance biotechnology and biomanufacturing towards innovative solutions in health, climate change, energy, food security, agriculture, supply chain resilience, and national and economic security. Central to this policy and its outcomes are principles of equity, ethics, safety, and security that enable access to technologies, processes, and products in a manner that benefits all Americans and the global community and that maintains United States technological leadership and economic competitiveness. Biotechnology harnesses the power of biology to create new services and products, which provide opportunities to grow the United States economy and workforce and improve the quality of our lives and the environment. The economic activity derived from biotechnology and biomanufacturing is referred to as “the bioeconomy.” The COVID-19 pandemic has demonstrated the vital role of biotechnology and biomanufacturing in developing and producing life-saving diagnostics, therapeutics, and vaccines that protect Americans and the world. Although the power of these technologies is most vivid at the moment in the context of human health, biotechnology and biomanufacturing can also be used to achieve our climate and energy goals, improve food security and sustainability, secure our supply chains, and grow the economy across all of America. For biotechnology and biomanufacturing to help us achieve our societal goals, the United States needs to invest in foundational scientific capabilities. We need to develop genetic engineering technologies and techniques to be able to write circuitry for cells and predictably program biology in the same way in which we write software and program computers; unlock the power of biological data, including through computing tools and artificial intelligence; and advance the science of scale‑up production while reducing the obstacles for commercialization so that innovative technologies and products can reach markets faster. Simultaneously, we must take concrete steps to reduce biological risks associated with advances in biotechnology. We need to invest in and promote biosafety and biosecurity to ensure that biotechnology is developed and deployed in ways that align with United States principles and values and international best practices, and not in ways that lead to accidental or deliberate harm to people, animals, or the environment. In addition, we must safeguard the United States bioeconomy, as foreign adversaries and strategic competitors alike use legal and illegal means to acquire United States technologies and data, including biological data, and proprietary or precompetitive information, which threatens United States economic competitiveness and national security. We also must ensure that uses of biotechnology and biomanufacturing are ethical and responsible; are centered on a foundation of equity and public good, consistent with Executive Order 13985 of January 20, 2021 (Advancing Racial Equity and Support for Underserved Communities Through the Federal Government); and are consistent with respect for human rights. Resources should be invested justly and equitably so that biotechnology and biomanufacturing technologies benefit all Americans, especially those in underserved communities, as well as the broader global community. 
To achieve these objectives, it is the policy of my Administration to: (a) bolster and coordinate Federal investment in key research and development (R&D) areas of biotechnology and biomanufacturing in order to further societal goals; (b) foster a biological data ecosystem that advances biotechnology and biomanufacturing innovation, while adhering to principles of security, privacy, and responsible conduct of research; (c) improve and expand domestic biomanufacturing production capacity and processes, while also increasing piloting and prototyping efforts in biotechnology and biomanufacturing to accelerate the translation of basic research results into practice; (d) boost sustainable biomass production and create climate-smart incentives for American agricultural producers and forest landowners; (e) expand market opportunities for bioenergy and biobased products and services; (f) train and support a diverse, skilled workforce and a next generation of leaders from diverse groups to advance biotechnology and biomanufacturing; (g) clarify and streamline regulations in service of a science- and risk-based, predictable, efficient, and transparent system to support the safe use of products of biotechnology; (h) elevate biological risk management as a cornerstone of the life cycle of biotechnology and biomanufacturing R&D, including by providing for research and investment in applied biosafety and biosecurity innovation; (i) promote standards, establish metrics, and develop systems to grow and assess the state of the bioeconomy; to better inform policy, decision-making, and investments in the bioeconomy; and to ensure equitable and ethical development of the bioeconomy; (j) secure and protect the United States bioeconomy by adopting a forward‑looking, proactive approach to assessing and anticipating threats, risks, and potential vulnerabilities (including digital intrusion, manipulation, and exfiltration efforts by foreign adversaries), and by partnering with the private sector and other relevant stakeholders to jointly mitigate risks to protect technology leadership and economic competitiveness; and (k) engage the international community to enhance biotechnology R&D cooperation in a way that is consistent with United States principles and values and that promotes best practices for safe and secure biotechnology and biomanufacturing research, innovation, and product development and use. The efforts undertaken pursuant to this order to further these policies shall be referred to collectively as the National Biotechnology and Biomanufacturing Initiative. Sec. 2. Coordination. The Assistant to the President for National Security Affairs (APNSA), in consultation with the Assistant to the President for Economic Policy (APEP) and the Director of the Office of Science and Technology Policy (OSTP), shall coordinate the executive branch actions necessary to implement this order through the interagency process described in National Security Memorandum 2 of February 4, 2021 (Renewing the National Security Council System) (NSM-2 process). In implementing this order, heads of agencies (as defined in section 13 of this order) shall, as appropriate and consistent with applicable law, consult outside stakeholders, such as those in industry; academia; nongovernmental organizations; communities; labor unions; and State, local, Tribal, and territorial governments to advance the policies described in section 1 of this order. Sec. 3. Harnessing Biotechnology and Biomanufacturing R&D to Further Societal Goals. 
(a) Within 180 days of the date of this order, the heads of agencies specified in subsections (a)(i)-(v) of this section shall submit the following reports on biotechnology and biomanufacturing to further societal goals related to health, climate change and energy, food and agricultural innovation, resilient supply chains, and cross-cutting scientific advances. The reports shall be submitted to the President through the APNSA, in coordination with the Director of the Office of Management and Budget (OMB), the APEP, the Assistant to the President for Domestic Policy (APDP), and the Director of OSTP. (i) The Secretary of Health and Human Services (HHS), in consultation with the heads of appropriate agencies as determined by the Secretary, shall submit a report assessing how to use biotechnology and biomanufacturing to achieve medical breakthroughs, reduce the overall burden of disease, and improve health outcomes. (ii) The Secretary of Energy, in consultation with the heads of appropriate agencies as determined by the Secretary, shall submit a report assessing how to use biotechnology, biomanufacturing, bioenergy, and biobased products to address the causes and adapt to and mitigate the impacts of climate change, including by sequestering carbon and reducing greenhouse gas emissions. (iii) The Secretary of Agriculture, in consultation with the heads of appropriate agencies as determined by the Secretary, shall submit a report assessing how to use biotechnology and biomanufacturing for food and agriculture innovation, including by improving sustainability and land conservation; increasing food quality and nutrition; increasing and protecting agricultural yields; protecting against plant and animal pests and diseases; and cultivating alternative food sources. (iv) The Secretary of Commerce, in consultation with the Secretary of Defense, the Secretary of HHS, and the heads of other appropriate agencies as determined by the Secretary of Commerce, shall submit a report assessing how to use biotechnology and biomanufacturing to strengthen the resilience of United States supply chains. (v) The Director of the National Science Foundation (NSF), in consultation with the heads of appropriate agencies as determined by the Director, shall submit a report identifying high-priority fundamental and use‑inspired basic research goals to advance biotechnology and biomanufacturing and to address the societal goals identified in this section. (b) Each report specified in subsection (a) of this section shall identify high-priority basic research and technology development needs to achieve the overall objectives described in subsection (a) of this section, as well as opportunities for public-private collaboration. Each of these reports shall also include recommendations for actions to enhance biosafety and biosecurity to reduce risk throughout the biotechnology R&D and biomanufacturing lifecycles. (c) Within 100 days of receiving the reports required under subsection (a) of this section, the Director of OSTP, in coordination with the Director of OMB, the APNSA, the APEP, the APDP, and the heads of appropriate agencies as determined through the NSM-2 process, shall develop a plan (implementation plan) to implement the recommendations in the reports. 
The development of this implementation plan shall also include the solicitation of input from external experts regarding potential ethical implications or other societal impacts, including environmental sustainability and environmental justice, of the recommendations contained in the reports required under subsection (a) of this section. The implementation plan shall include assessments and make recommendations regarding any such implications or impacts. (d) Within 90 days of the date of this order, the Director of OMB, in consultation with the heads of appropriate agencies as determined through the NSM-2 process, shall perform a budget crosscut to identify existing levels of agency spending on biotechnology- and biomanufacturing-related activities to inform the development of the implementation plan described in subsection (c) of this section. (e) The APNSA, in coordination with the Director of OMB, the APEP, the APDP, and the Director of OSTP, shall review the reports required under subsection (a) of this section and shall submit the reports to the President in an unclassified form, but may include a classified annex. (f) The APNSA, in coordination with the Director of OMB, the APEP, the APDP, and the Director of OSTP, shall include a cover memorandum for the reports submitted pursuant to subsection (a) of this section, along with the implementation plan required under subsection (c) of this section, in which they make any additional overall recommendations for advancing biotechnology and biomanufacturing. (g) Within 2 years of the date of this order, agencies at which recommendations are directed in the implementation plan required under subsection (c) of this section shall report to the Director of OMB, the APNSA, the APEP, the APDP, and the Director of OSTP on measures taken and resources allocated to enhance biotechnology and biomanufacturing, consistent with the implementation plan described in subsection (c) of this section. (h) Within 180 days of the date of this order, the President’s Council of Advisors on Science and Technology shall submit to the President and make publicly available a report on the bioeconomy that provides recommendations on how to maintain United States competitiveness in the global bioeconomy. Sec. 4. Data for the Bioeconomy. (a) In order to facilitate development of the United States bioeconomy, my Administration shall establish a Data for the Bioeconomy Initiative (Data Initiative) that will ensure that high-quality, wide-ranging, easily accessible, and secure biological data sets can drive breakthroughs for the United States bioeconomy. 
To assist in the development of the Data Initiative, the Director of OSTP, in coordination with the Director of OMB and the heads of appropriate agencies as determined by the Director of OSTP, and in consultation with external stakeholders, shall issue a report within 240 days of the date of this order that: (i) identifies the data types and sources, to include genomic and multiomic information, that are most critical to drive advances in health, climate, energy, food, agriculture, and biomanufacturing, as well as other bioeconomy-related R&D, along with any data gaps; (ii) sets forth a plan to fill any data gaps and make new and existing public data findable, accessible, interoperable, and reusable in ways that are equitable, standardized, secure, and transparent, and that are integrated with platforms that enable the use of advanced computing tools; (iii) identifies — based on the data types and sources described in subsection (a)(i) of this section — security, privacy, and other risks (such as malicious misuses, manipulation, exfiltration, and deletion), and provides a data-protection plan to mitigate these risks; and (iv) outlines the Federal resources, legal authorities, and actions needed to support the Data Initiative and achieve the goals outlined in this subsection, with a timeline for action. (b) The Secretary of Homeland Security, in coordination with the Secretary of Defense, the Secretary of Agriculture, the Secretary of Commerce (acting through the Director of the National Institute of Standards and Technology (NIST)), the Secretary of HHS, the Secretary of Energy, and the Director of OMB, shall identify and recommend relevant cybersecurity best practices for biological data stored on Federal Government information systems, consistent with applicable law and Executive Order 14028 of May 12, 2021 (Improving the Nation’s Cybersecurity). (c) The Secretary of Commerce, acting through the Director of NIST and in coordination with the Secretary of HHS, shall consider bio-related software, including software for laboratory equipment, instrumentation, and data management, in establishing baseline security standards for the development of software sold to the United States Government, consistent with section 4 of Executive Order 14028. Sec. 5. Building a Vibrant Domestic Biomanufacturing Ecosystem. (a) Within 180 days of the date of this order, the APNSA and the APEP, in coordination with the Secretary of Defense, the Secretary of Agriculture, the Secretary of Commerce, the Secretary of HHS, the Secretary of Energy, the Director of NSF, and the Administrator of the National Aeronautics and Space Administration (NASA), shall develop a strategy that identifies policy recommendations to expand domestic biomanufacturing capacity for products spanning the health, energy, agriculture, and industrial sectors, with a focus on advancing equity, improving biomanufacturing processes, and connecting relevant infrastructure. Additionally, this strategy shall identify actions to mitigate risks posed by foreign adversary involvement in the biomanufacturing supply chain and to enhance biosafety, biosecurity, and cybersecurity in new and existing infrastructure. 
(b) Agencies identified in subsections (b)(i)-(iv) of this section shall direct resources, as appropriate and consistent with applicable law, towards the creation or expansion of programs that support a vibrant domestic biomanufacturing ecosystem, as informed by the strategy developed pursuant to subsection (a) of this section: (i) the NSF shall expand its existing Regional Innovation Engine program to advance emerging technologies, including biotechnology; (ii) the Department of Commerce shall address challenges in biomanufacturing supply chains and related biotechnology development infrastructure; (iii) the Department of Defense shall incentivize the expansion of domestic, flexible industrial biomanufacturing capacity for a wide range of materials that can be used to make a diversity of products for the defense supply chain; and (iv) the Department of Energy shall support research to accelerate bioenergy and bioproduct science advances, to accelerate biotechnology and bioinformatics tool development, and to reduce the hurdles to commercialization, including through incentivizing the engineering scale-up of promising biotechnologies and the expansion of biomanufacturing capacity. (c) Within 1 year of the date of this order, the Secretary of Agriculture, in consultation with the heads of appropriate agencies as determined by the Secretary, shall submit a plan to the President, through the APNSA and the APEP, to support the resilience of the United States biomass supply chain for domestic biomanufacturing and biobased product manufacturing, while also advancing food security, environmental sustainability, and the needs of underserved communities. This plan shall include programs to encourage climate-smart production and use of domestic biomass, along with budget estimates, including accounting for funds appropriated for Fiscal Year (FY) 2022 and proposed in the President’s FY 2023 Budget. (d) Within 180 days of the date of this order, the Secretary of Homeland Security, in coordination with the heads of appropriate agencies as determined by the Secretary, shall: (i) provide the APNSA with vulnerability assessments of the critical infrastructure and national critical functions associated with the bioeconomy, including cyber, physical, and systemic risks, and recommendations to secure and make resilient these components of our infrastructure and economy; and (ii) enhance coordination with industry on threat information sharing, vulnerability disclosure, and risk mitigation for cybersecurity and infrastructure risks to the United States bioeconomy, including risks to biological data and related physical and digital infrastructure and devices. This coordination shall be informed in part by the assessments described in subsection (d)(i) of this section. Sec. 6. Biobased Products Procurement. (a) Consistent with the requirements of 7 U.S.C. 8102, within 1 year of the date of this order, procuring agencies as defined in 7 U.S.C. 8102(a)(1)(A) that have not yet established a biobased procurement program as described in 7 U.S.C. 8102(a)(2) shall establish such a program. (b) Procuring agencies shall require that, within 2 years of the date of this order, all appropriate staff (including contracting officers, purchase card managers, and purchase card holders) complete training on biobased product purchasing. The Office of Federal Procurement Policy, within OMB, in cooperation with the Secretary of Agriculture, shall provide training materials for procuring agencies. 
(c) Within 180 days of the date of this order and annually thereafter, procuring agencies shall report previous fiscal year spending to the Director of OMB on the following: (i) the number and dollar value of contracts entered into during the previous fiscal year that include the direct procurement of biobased products; (ii) the number of service and construction (including renovations) contracts entered into during the previous fiscal year that include language on the use of biobased products; and (iii) the types and dollar values of biobased products actually used by contractors in carrying out service and construction (including renovations) contracts during the previous fiscal year. (d) The requirements in subsection (c) of this section shall not apply to purchase card transactions and other “[a]ctions not reported” to the Federal Procurement Data System pursuant to 48 CFR 4.606(c). (e) Within 1 year of the date of this order and annually thereafter, the Director of OMB shall publish information on biobased procurement resulting from the data collected under subsection (c) of this section and information reported under 7 U.S.C. 8102, along with other related information, and shall use scorecards or similar systems to encourage increased biobased purchasing. (f) Within 1 year of the date of this order and annually thereafter, procuring agencies shall report to the Secretary of Agriculture specific categories of biobased products that are unavailable to meet their procurement needs, along with desired performance standards for currently unavailable products and other relevant specifications. The Secretary of Agriculture shall publish this information annually. When new categories of biobased products become commercially available, the Secretary of Agriculture shall designate new product categories for preferred Federal procurement, as prescribed by 7 U.S.C. 8102. (g) Procuring agencies shall strive to increase by 2025 the amount of biobased product obligations or the number or dollar value of biobased-only contracts, as reflected in the information described in subsection (c) of this section, and as appropriate and consistent with applicable law. Sec. 7. Biotechnology and Biomanufacturing Workforce. (a) The United States Government shall expand training and education opportunities for all Americans in biotechnology and biomanufacturing. To support this objective, within 200 days of the date of this order, the Secretary of Commerce, the Secretary of Labor, the Secretary of Education, the APDP, the Director of OSTP, and the Director of NSF shall produce and make publicly available a plan to coordinate and use relevant Federal education and training programs, while also recommending new efforts to promote multi-disciplinary education programs. This plan shall promote the implementation of formal and informal education and training (such as opportunities at technical schools and certificate programs), career and technical education, and expanded career pathways into existing degree programs for biotechnology and biomanufacturing. This plan shall also include a focused discussion of Historically Black Colleges and Universities, Tribal Colleges and Universities, and Minority Serving Institutions and the extent to which agencies can use existing statutory authorities to promote racial and gender equity and support underserved communities, consistent with the policy established in Executive Order 13985. 
Finally, this plan shall account for funds appropriated for FY 2022 and proposed in the President’s FY 2023 Budget. (b) Within 2 years of the date of this order, agencies that support relevant Federal education and training programs as described in subsection (a) of this section shall report to the President through the APNSA, in coordination with the Director of OMB, the ADPD, and the Director of OSTP, on measures taken and resources allocated to enhance workforce development pursuant to the plan described in subsection (a) of this section. Sec. 8. Biotechnology Regulation Clarity and Efficiency. Advances in biotechnology are rapidly altering the product landscape. The complexity of the current regulatory system for biotechnology products can be confusing and create challenges for businesses to navigate. To improve the clarity and efficiency of the regulatory process for biotechnology products, and to enable products that further the societal goals identified in section 3 of this order, the Secretary of Agriculture, the Administrator of the Environmental Protection Agency, and the Commissioner of Food and Drugs, in coordination with the Director of OMB, the ADPD, and the Director of OSTP, shall: (a) within 180 days of the date of this order, identify areas of ambiguity, gaps, or uncertainties in the January 2017 Update to the Coordinated Framework for the Regulation of Biotechnology or in the policy changes made pursuant to Executive Order 13874 of June 11, 2019 (Modernizing the Regulatory Framework for Agricultural Biotechnology Products), including by engaging with developers and external stakeholders, and through horizon scanning for novel products of biotechnology; (b) within 100 days of completing the task in subsection (a) of this section, provide to the general public plain-language information regarding the regulatory roles, responsibilities, and processes of each agency, including which agency or agencies are responsible for oversight of different types of products developed with biotechnology, with case studies, as appropriate; (c) within 280 days of the date of this order, provide a plan to the Director of OMB, the ADPD, and the Director of OSTP with processes and timelines to implement regulatory reform, including identification of the regulations and guidance documents that can be updated, streamlined, or clarified; and identification of potential new guidance or regulations, where needed; (d) within 1 year of the date of this order, build on the Unified Website for Biotechnology Regulation developed pursuant to Executive Order 13874 by including on the website the information developed under subsection (b) of this section, and by enabling developers of biotechnology products to submit inquiries about a particular product and promptly receive a single, coordinated response that provides, to the extent practicable, information and, when appropriate, informal guidance regarding the process that the developers must follow for Federal regulatory review; and (e) within 1 year of the date of this order, and annually thereafter for a period of 3 years, provide an update regarding progress in implementing this section to the Director of OMB, the United States Trade Representative (USTR), the APNSA, the ADPD, and the Director of OSTP. 
Each 1-year update shall identify any gaps in statutory authority that should be addressed to improve the clarity and efficiency of the regulatory process for biotechnology products, and shall recommend additional executive actions and legislative proposals to achieve such goals. Sec. 9. Reducing Risk by Advancing Biosafety and Biosecurity. (a) The United States Government shall launch a Biosafety and Biosecurity Innovation Initiative, which shall seek to reduce biological risks associated with advances in biotechnology, biomanufacturing, and the bioeconomy. Through the Biosafety and Biosecurity Innovation Initiative — which shall be established by the Secretary of HHS, in coordination with the heads of other relevant agencies as determined by the Secretary — agencies that fund, conduct, or sponsor life sciences research shall implement the following actions, as appropriate and consistent with applicable law: (i) support, as a priority, investments in applied biosafety research and innovations in biosecurity to reduce biological risk throughout the biotechnology R&D and biomanufacturing lifecycles; and (ii) use Federal investments in biotechnology and biomanufacturing to incentivize and enhance biosafety and biosecurity practices and best practices throughout the United States and international research enterprises. (b) Within 180 days of the date of this order, the Secretary of HHS and the Secretary of Homeland Security, in coordination with agencies that fund, conduct, or sponsor life sciences research, shall produce a plan for biosafety and biosecurity for the bioeconomy, including recommendations to: (i) enhance applied biosafety research and bolster innovations in biosecurity to reduce risk throughout the biotechnology R&D and biomanufacturing lifecycles; and (ii) use Federal investments in biological sciences, biotechnology, and biomanufacturing to enhance biosafety and biosecurity best practices throughout the bioeconomy R&D enterprise. (c) Within 1 year of the date of this order, agencies that fund, conduct, or sponsor life sciences research shall report to the APNSA, through the Assistant to the President and Homeland Security Advisor, on efforts to achieve the objectives described in subsection (a) of this section. Sec. 10. Measuring the Bioeconomy. (a) Within 90 days of the date of this order, the Secretary of Commerce, through the Director of NIST, shall, in consultation with other agencies as determined by the Director, industry, and other stakeholders, as appropriate, create and make publicly available a lexicon for the bioeconomy, with consideration of relevant domestic and international definitions and with the goal of assisting in the development of measurements and measurement methods for the bioeconomy that support uses such as economic measurement, risk assessments, and the application of machine learning and other artificial intelligence tools. (b) The Chief Statistician of the United States, in coordination with the Secretary of Agriculture, the Secretary of Commerce, the Director of NSF, and the heads of other appropriate agencies as determined by the Chief Statistician, shall improve and enhance Federal statistical data collection designed to characterize the economic value of the United States bioeconomy, with a focus on the contribution of biotechnology to the bioeconomy. 
This effort shall include: (i) within 180 days of the date of this order, assessing, through the Department of Commerce’s Bureau of Economic Analysis, the feasibility, scope, and costs of developing a national measurement of the economic contributions of the bioeconomy, and, in particular, the contributions of biotechnology to the bioeconomy, including recommendations and a plan for next steps regarding whether development of such a measurement should be pursued; and (ii) within 120 days of the date of this order, establishing an Interagency Technical Working Group (ITWG), chaired by the Chief Statistician of the United States, which shall include representatives of the Department of Agriculture, the Department of Commerce, OSTP, the NSF, and other appropriate agencies as determined by the Chief Statistician of the United States. (A) Within 1 year of the date of this order, the ITWG shall recommend bioeconomy-related revisions to the North American Industry Classification System (NAICS) and the North American Product Classification System (NAPCS) to the Economic Classification Policy Committee. In 2026, the ITWG shall initiate a review process of the 2023 recommendations and update the recommendations, as appropriate, to provide input to the 2027 NAICS and NAPCS revision processes. (B) Within 18 months of the date of this order, the ITWG shall provide a report to the Chief Statistician of the United States describing the Federal statistical collections of information that take advantage of bioeconomy-related NAICS and NAPCS codes, and shall include recommendations to implement any bioeconomy-related changes as part of the 2022 revisions of the NAICS and NAPCS. As part of its work, the ITWG shall consult with external stakeholders. Sec. 11. Assessing Threats to the United States Bioeconomy. (a) The Director of National Intelligence (DNI) shall lead a comprehensive interagency assessment of ongoing, emerging, and future threats to United States national security from foreign adversaries against the bioeconomy and from foreign adversary development and application of biotechnology and biomanufacturing, including acquisition of United States capabilities, technologies, and biological data. As part of this effort, the DNI shall work closely with the Department of Defense to assess technical applications of biotechnology and biomanufacturing that could be misused by a foreign adversary for military purposes or that could otherwise pose a risk to the United States. In support of these objectives, the DNI shall identify elements of the bioeconomy of highest concern and establish processes to support ongoing threat identification and impact assessments. (b) Within 240 days of the date of this order, the DNI shall provide | Emerging Technologies |
The European Union is likely to reach a political agreement this year that will pave the way for the world's first major artificial intelligence (AI) law, the bloc's tech regulation chief, Margrethe Vestager, said on Sunday.
This follows a preliminary deal reached on Thursday by members of the European Parliament to push through the draft of the EU's Artificial Intelligence Act to a vote on May 11. Parliament will then thrash out the bill's final details with EU member states and the European Commission before it becomes law.
At a press conference after a Group of Seven digital ministers' meeting in Takasaki, Japan, Vestager said the EU AI Act was "pro-innovation" since it seeks to mitigate the risks of societal damage from emerging technologies.
Regulators around the world have been trying to find a balance where governments could develop "guardrails" on emerging artificial intelligence technology without stifling innovation.
"The reason why we have these guardrails for high-risk use cases is that cleaning up … after a misuse by AI would be so much more expensive and damaging than the use case of AI in itself," Vestager said.
While the EU AI Act is expected to be passed by this year, lawyers have said it will take a few years for it to be enforced. But Vestager said businesses could start considering the implications of the new legislation.
"There was no reason to hesitate and to wait for the legislation to be passed to accelerate the necessary discussions to provide the changes in all the systems where AI will have an enormous influence," she told Reuters in an interview.
While research on AI has been going on for years, the sudden popularity of generative AI applications such as OpenAI's ChatGPT and Midjourney has led to a scramble by lawmakers to find ways to regulate any uncontrolled growth.
An organization backed by Elon Musk and European lawmakers involved in drafting the EU AI Act are among those to have called for world leaders to collaborate to find ways to stop advanced AI from creating disruptions.
Digital ministers of the G-7 advanced nations on Sunday also agreed to adopt "risk-based" regulation on AI, among the first steps that could lead to global agreements on how to regulate AI.
"It is important that our democracy paved the way and put in place the rules to protect us from its abusive manipulation – AI should be useful but it shouldn’t be manipulating us," said German Transport Minister Volker Wissing.
This year's G-7 meeting was also attended by representatives from Indonesia, India and Ukraine. | Emerging Technologies |
US And India Renew Push To Deepen Defense Industry Ties
Two sides aim to make India a logistics hub for allied craft. Defense Secretary Lloyd Austin met India’s Rajnath Singh.
(Bloomberg) -- The US and India have made a new pledge to deepen defense-industry ties, including by sharing cutting-edge technology, amid a broader campaign by both nations to counter China’s increased assertiveness in the region.
Washington and New Delhi will focus on technologies for intelligence, reconnaissance and surveillance, as well as aircraft engines and munitions, a senior US defense official told reporters on condition of anonymity. They also intend to make India a logistics hub for US and partner-nation aircraft and ships.
Secretary of Defense Lloyd Austin and his Indian counterpart, Rajnath Singh, announced a Road Map for US-India Defense Industrial Cooperation on Monday after meetings in New Delhi, according to a statement from the Pentagon.
The road map’s goal is to “change the paradigm” for cooperation between the countries’ defense industries and give India access to “cutting edge technologies,” including through the co-production of defense technologies, the Pentagon added in the statement.
The official said the two countries would look at ways to streamline regulations, licensing and export controls, and deepen ties between defense companies.
China will not be named in the text of the road map, the official said, but both countries have sought to counter Beijing’s increasingly aggressive stance in recent years. The Indian military has repeatedly clashed with China’s People’s Liberation Army on the two countries’ disputed border, while the US twice accused China of “risky” maneuvers in the air and at sea this week.
The US has advanced similar initiatives in the past, including in 2018 when the Trump administration signed an agreement to bolster military communications and get Indian firms more involved in the US defense-sector supply chain.
The road map is part of a broader US-India initiative on critical and emerging technologies launched in May 2022. It is one of several defense co-production efforts that the US has launched with partners in Asia, including with Japan and Australia.
©2023 Bloomberg L.P. | Emerging Technologies |
Security In Digital Economy A Global Challenge, Says Union Minister Rajeev Chandrasekhar
He said that a common understanding must be devised to improve the domestic, legal, technical, and economic aspects of security in the digital economy.
Union minister Rajeev Chandrasekhar said on Monday that security in the digital economy is a global challenge and that a partnership approach is needed to tackle it.
He was addressing G20 delegates at the Global Digital Public Infrastructure (DPI) summit at the third Digital Economy Working Group (DEWG) meeting in Pune, Maharashtra.
"The digital economy security is neither a domestic issue nor a domain in which selective cooperation is enough," said the Minister of State for Electronics and Information Technology.
He said that a common understanding must be devised to improve the domestic, legal, technical, and economic aspects of security in the digital economy.
"With the rise of health tech, fintech, e-commerce, artificial intelligence, and Internet of Things companies, businesses now hold large data sets that hold sensitive and personal data of consumers. Sectorally, most cyber crimes are reported in the financial services sector, followed by the healthcare sector and crimes on social media," he said.
The minister said these are key sectors where data breaches not only incur economic costs but also hurt consumer trust and business credibility.
"Meanwhile, security threats like ransomware, data breaches, phishing attacks, and denial-of-service attacks threaten ordinary people, businesses, and governments. Such crimes and the consequent loss of trust in consumers could eventually slow down digital transformation and economic growth," he said.
Chandrasekhar added that while big players in the system have the resources to continuously update their security mechanisms to ensure resilience, the same cannot be expected from small businesses, i.e., startups and MSMEs, which are at the forefront of innovation.
"Because the security in the digital economy is a global challenge, a partnership approach is needed—one that includes governments, businesses, and development institutions—to build trust, improve awareness, and deliver digital solutions," he said.
Elaborating on the security agenda, the minister said that, first, it is important for the government to recognize the security threats that hamper innovation and trust in essential services.
"Security in the digital economy is not a domestic issue nor a domain in which selective cooperation is enough. We must devise a common understanding to improve domestic legal, technical, and economic aspects of security in the digital economy so that specific avenues for cooperation, exchange and goals can be devised by each country," he said.
These benchmarks are significant because of the increasing players entering the digital economy that need to be made cyber resilient, he added.
"Our approach is driven by the need to bring in multi stake-holder cooperation amongst governments, businesses, and international organisations. There are also other important stakeholders to consider i.e. citizens and consumers who play a key role in implementing the best practices at the day-to-day level," the minister said.
Chandrasekhar also said that in the present era of digitisation, it is important that countries invest in reassessing and revamping their strategies for digital skilling, upskilling, and reskilling in accordance with the industry's changing demands.
"I firmly believe that the G20 should actively discuss these issues and work towards achieving outcomes that benefit the global workforce," he said.
The minister said this summit is an excellent opportunity to exchange knowledge and best practices of DPI implementation and further the advancement of the global digital economy.
"The global digital economy has seen significant growth and transformation in recent years, driven by advancements in technology, increased internet penetration, and the proliferation of smartphones and other connected devices. It has fundamentally changed the way businesses operate, how consumers use products and services, and the overall global economic landscape," he said.
Citing a report by the United Nations Conference on Trade and Development (UNCTAD) in 2020, the minister said the value of global e-commerce alone exceeded $26 trillion in 2019 and the numbers today will undoubtedly be much higher.
He added that the Indian market has evolved simultaneously with an increasing number of people adopting digital technologies like smartphones, supported by affordable data plans and widespread internet penetration.
"Therefore, it comes as no surprise that India has created a large market for digital services such as e-commerce, digital payments, and online entertainment. The 'Make in India' and 'Digital India' have been significant contributors to this evolution. 'Make in India' has supported the production of digital devices while Digital India has promoted the adoption of digital services in a secured manner," he added.
He said India is also actively working on multiple policies to further India's vision for a Global Standard Cyber Law Framework, alongside its efforts to enable programs.
"This framework includes key legislations like the Digital Personal Data Protection Bill, which focuses on respecting individuals' rights while processing digital personal data," the minister said.
He said other key legislations like the National Data Governance Framework Policy aim to safely share non-personal and anonymized data for research and innovation, ensuring privacy and security.
"The Draft Digital India Act will harmonize laws, regulate emerging technologies like AI, and incorporate industry input on blockchain and Web 3.0 regulations to protect digital citizens from harm," he added. | Emerging Technologies |
News reports suggest Huawei is upgrading its presence in Saudi Arabia into a regional headquarters, underscoring the Chinese tech giant’s commitment to the kingdom’s and the region’s economic and technological development.
Saudi Arabia’s Regional Headquarters Program requires a multinational company wishing to do business with Saudi government agencies to locate its regional headquarters for the Middle East & North Africa in the country. A joint initiative of the Saudi Ministry of Investment and the Royal Commission for Riyadh City, it takes effect in January 2024.
This follows the signing of a business agreement involving Huawei by Chinese President Xi Jinping and Saudi King Salman during Xi’s visit to Saudi Arabia last December. That agreement covers cloud computing, data centers and the establishment of high-tech complexes in different Saudi Arabian cities.
The two leaders also signed a Comprehensive Strategic Partnership Agreement to “firmly support each other’s core interests” and a “harmonization plan” between China’s Belt and Road Initiative and Saudi Arabia’s Vision 2030 blueprint for economic and technological development, which aims to diversify the kingdom’s economy away from oil export dependence.
The brainchild of Crown Prince Mohammed bin Salman (MBS), prime minister and chairman of the Council of Economic and Development Affairs, Vision 2030 aims to turn Saudi Arabia into a “global investment powerhouse” and “a global hub connecting… Asia, Europe and Africa.” The participation of Huawei, Siemens and other foreign technology leaders is key to realizing the vision.
In that context, the two sides signed a separate agreement covering the digital economy (e-commerce and fintech), communications and information technologies (including nationwide broadband coverage), their commercial and industrial application and research into emerging technologies.
These technologies, which overlap with Huawei’s business portfolio, include:
- Mobile communications
- Artificial intelligence
- Advanced computing
- Quantum information technology
- Robots
- Submarine cables
In particular, Huawei is likely to be involved in the NEOM special economic zone stretching from the Red Sea coast in the northwest of the country to high mountains in the interior, where it is possible to ski in the winter season. NEOM includes an international airport, seaside and mountain resorts, a floating industrial complex and a futuristic city called “The Line.”
The Line is designed to be a smart city 170 kilometers long, 200 meters wide and 500 meters high stretching east from the coast in a straight line. There will be no roads or automobiles allowed in the city.
Instead, residents – nine million of them targeted by 2045 – will travel by public transportation powered by renewable energy. Jonathan Gornall described the concept and historical antecedents of The Line for Asia Times last September. Delayed by Covid, construction is now reportedly underway.
The industrial complex, called Oxagon, will feature renewable energy, autonomous mobility, food production, healthcare, telecom, space, robotics and other technologies. In August 2021, ACWA Power, Air Products and the NEOM investment company established a joint venture to build the world’s largest green hydrogen and green ammonia plant, which will be powered by renewable energy. It is scheduled to begin operations in 2025.
In October 2021, Huawei Digital Power signed a contract with SEPCOIII (SEPCO3, a Chinese engineering and construction company and power plant operator) to provide Saudi Arabia’s Red Sea Project with 400 MW of photovoltaics and a 1,300 MWh battery energy storage facility. The battery energy storage facility is one of the largest such facilities in the world.
More recently, during the Mobile World Congress 2023 held in Barcelona, Spain from February 27 to March 2, Huawei and Saudi Arabian telecom operator Zain signed an MoU to build a 5.5G network for individual, corporate and government customers. On the commercial side, the project includes the Internet of Things (IoT) and private network solutions, cloud computing, fintech, business support and drones.
On November 30, 2022, Huawei and Saudi Telecom Company (STC) announced that their SuperLink single-antenna 10 Gbps solution in Dammam had performed successfully on live networks for six months.
Its design reduces the number of antennas by two-thirds and the amount of hardware by 70%, facilitating the deployment of high-bandwidth 5G services to suburban areas. Dammam is the capital of Saudi Arabia’s Eastern Province.
Also of note, in November 2020, Huawei supplied a prefabricated modular data center to Dawiyat Integrated Telecommunications & Information Technology Company for use with its national broadband network. Dawiyat is a wholly-owned subsidiary of the Saudi Electricity Company.
Commenting on this transaction, Dawiyat Director of Technical Affairs Saleh Al Owain said something that might have stirred China-fearing US politicians and diplomats and which explains why Huawei is finding so many business opportunities despite their attempts to sanction the company.
“All National Grid substations are considered national security assets. Dawiyat’s main goal is to participate in the initiatives of Saudi Vision 2030, monetizing National Grid assets while, of course, continuing to comply with all National Grid safety standards and regulations,” said Al Owain.
“The true challenge is, how can we maintain a massive number of sites scattered across the Kingdom while also retaining full visibility to ensure physical site security and protection from vandalism, as well as ruggedizing each site for harsh weather conditions? The only solution that made sense was Huawei’s container-based shelter solution. No other solution came close to it in terms of performance, feature set, robustness, and cost,” he added.
Follow this writer on Twitter: @ScottFo83517667 | Emerging Technologies |
Sustainable Development Program Seeks Teaching Assistants for Fall 2023
- SDEV UN1900 Introduction to Sustainable Development Seminar (x2)
- SDEV UN2320 Economic and Financial Methods for Sustainable Development (x2)
- SDEV UN3280.001 Workshop in Sustainable Development
- SDEV UN3280.002 Workshop in Sustainable Development
- SDEV UN3360 Disasters and Development
- SDEV UN3390 GIS for Sustainable Development
- SDEV GU4550 The New York City Watershed: From Community Displacement to Collaboration and Climate Adaptation
- SDEV GU4650 Building Climate Justice: Co-Creative Coastal Resilience Planning
Applicants must be current full-time Columbia University students enrolled in a degree-granting program. Students should expect to work 10-20 hours per week on average, but this can vary throughout the semester. Applications will only be accepted from graduate students and undergraduate juniors or seniors. Please note Teachers College and Barnard students are not eligible to apply. Be sure to check the description for each position for additional restrictions and information.
To Apply
Applicants are welcome to apply to multiple positions provided they submit a separate application for each. Please post your cover letter stating your interest in the position and a resume (both in PDF format) here. The deadline to apply is May 8 at 11:59 pm.
Introduction to Sustainable Development Seminar (SDEV UN1900; 2 positions available)
Expected course day/time: Tu 11:40 AM – 12:55 PM
The course is designed to be a free-flowing discussion of the principles of sustainable development and the scope of this emerging discipline. This course will also serve to introduce the students to the requirements of the Undergraduate Program in Sustainable Development and the content of the required courses in both the special concentration and the major. The focus will be on the breadth of subject matter, the multidisciplinary nature of the scholarship, and familiarity with the other key courses in the program.
Applicants should have knowledge of sustainable development, with previous coursework in the area, and be familiar with the structure of the major and the special concentration in the Undergraduate Program in Sustainable Development.
Time commitment and responsibilities: A teaching assistant must fulfill the responsibilities as identified by the assigned supervising instructor while maintaining conduct of the highest level of professionalism and confidentiality. The teaching assistant may be responsible for directing drills, recitations, discussions or laboratory sessions related to courses offered by an officer of higher rank. They will be responsible for meeting and coordinating with the instructor regularly and performing other course-related duties as assigned, like grading written coursework. This also may include developing, distributing and statistically analyzing peer-review and self-review forms.
Applicants must be current full-time CU students enrolled in a degree-granting program. Applications will only be accepted from undergraduate juniors or seniors and graduate students. Preference will be given to undergraduates who have taken the course.
Economic and Financial Methods for Sustainable Development (SDEV UN2320; 2 positions available)
Expected course day/time: M/W 1:10 – 2:25 PM
The objective of this course is to introduce students to key analytical concepts, skills and methods necessary to understand and evaluate the economic and financial aspects of sustainable development. Throughout the course, students will compare competing objectives and policies through the prism of economic and financial reasoning. This course is intended to provide students with a flying introduction to key analytical concepts required to understand topics in environmental economics and finance and to introduce them to selected topics within the field.
Applicants should have strong finance and economic skills and an interest in sustainable development.
Time commitment and responsibilities: A teaching assistant must fulfill the responsibilities as identified by the assigned supervising instructor while maintaining conduct of the highest level of professionalism and confidentiality. The teaching assistant will be responsible for maintaining the course materials online, attending classes, directing recitations and discussions, responding to student queries and grading student work. They will be responsible for meeting and coordinating with the instructor regularly and performing other course-related duties as assigned.
Applicants must be current full-time CU students enrolled in a degree-granting program. Applications will only be accepted from graduate students.
Workshop in Sustainable Development (SDEV UN3280.001 & UN3280.002; 2 positions available)
Expected course days/times:
Section 1: M/W 2:10-4:00 PM
Section 2: T/TR 12:10-2:00 PM
The upper-level undergraduate Sustainable Development Workshop will be modeled on teamwork and client-based graduate-level workshops, but with more time devoted to methods of applied policy analysis and issues in sustainable development. The heart of the course is the group project on an issue of sustainable development with a faculty advisor providing guidance and ultimately grading student performance. Students will receive instruction on methodology, group work, communication and the context of policy analysis. Much of the reading in the course will be project-specific and identified by the student research teams.
Applicants should have strong project management skills and an interest in sustainable development.
Time commitment and responsibilities: A teaching assistant must fulfill the responsibilities as identified by the assigned supervising instructor while maintaining conduct of the highest level of professionalism and confidentiality. The teaching assistant may be responsible for directing drills, recitations, discussions or laboratory sessions related to courses offered by an officer of higher rank. They will be responsible for meeting and coordinating with the instructor regularly and performing other course-related duties as assigned, such as grading written coursework. This also may include developing, distributing and statistically analyzing peer review and self-review forms.
Applicants must be current full-time CU students enrolled in a degree-granting program. Applications will only be accepted from graduate students.
Disasters and Development (SDEV UN3360)
Expected course day/time: M/W 6:10-7:25 PM
This course offers undergraduate students, for the first time, a comprehensive course on the link between natural disaster events and human development at all levels of welfare. It explores the role that natural disasters might have and have had in modulating development prospects. Any student seriously interested in sustainable development, especially in light of climate change, must study the nature of extreme events — their causes, global distribution and likelihood of future change. This course will cover not only the nature of extreme events, including earthquakes, hurricanes, floods, and droughts, but also their transformation into disaster through social processes. It will ultimately help students to understand the link between such extreme events, the economic/social shock they represent, and development outcomes. The course will combine careful analysis of the natural and social systems dynamics that give rise to disasters and examine through group learning case studies from the many disasters that have occurred in the first decade of the 21st century.
Applicants should have a basic knowledge of sustainable development, with previous coursework in the area.
Time commitment and responsibilities: A teaching assistant must fulfill the responsibilities as identified by the assigned supervising instructor while maintaining conduct of the highest level of professionalism and confidentiality. The teaching assistant may be responsible for directing drills, recitations, discussions or laboratory sessions related to courses offered by an officer of higher rank. They will be responsible for meeting and coordinating with the instructor regularly and performing other course-related duties as assigned, such as grading written coursework.
Applicants must be current full-time CU students enrolled in a degree-granting program. Applications will only be accepted from graduate students.
GIS for Sustainable Development (SDEV UN3390)
Expected course day/time: M 10:10 AM – 11:25 AM & W 10:10 AM – 12:25 PM
This course is designed to provide students with a comprehensive overview of theoretical concepts underlying GIS systems and to give students a strong set of practical skills to use GIS for sustainable development research. Through a mixture of lectures, readings, focused discussions, and hands-on exercises, students will acquire an understanding of the variety and structure of spatial data and databases, gain knowledge of the principles behind raster- and vector-based spatial analysis, and learn basic cartographic principles for producing maps that effectively communicate a message. Students will also learn to use newly emerging web-based mapping tools such as Google Earth, Google Maps, and similar tools to develop online interactive maps and graphics.
Applicants should have advanced knowledge of geographic information systems software, with previous coursework in the area.
Time commitment and responsibilities: A teaching assistant must fulfill the responsibilities as identified by the assigned supervising instructor while maintaining conduct of the highest level of professionalism and confidentiality. The teaching assistant may be responsible for directing drills, recitations, discussions or laboratory sessions related to courses offered by an officer of higher rank. They will be responsible for meeting and coordinating with the instructor regularly and performing other course-related duties as assigned, such as grading written coursework. This also may include developing, distributing and statistically analyzing peer-review and self-review forms.
Applicants must be current full-time CU students enrolled in a degree-granting program. Applications will only be accepted from graduate students.
The New York City Watershed: From Community Displacement to Collaboration and Climate Adaptation (SDEV GU4550)
Expected course day/time: October 7-8, 2023 *course takes place over one weekend
A graduate student is needed to support the preparation and delivery of a two-day field-based course. The candidate will work closely with Professor Ruth DeFries on the successful implementation of the course on October 7-8, 2023.
The successful candidate will have strong attention to detail, ability to work with minimal supervision, and experience with supporting a class. Preference will be given to students who have previously taken the course.
Time commitment and responsibilities: Under the supervision of the professor, the candidate will help to prepare the logistics of the class and assist in its implementation. Tasks will include, but are not limited to:
- Distributing materials to students
- Answering student queries
- Acting as a second reader/grader on the assigned final paper, due in mid-late October
- Other assignments as needed to support course preparation and implementation
Applicants must be full-time graduate students with:
- Excellent organizational, communications and leadership skills;
- Strong attention to detail;
- Ability to manage tasks and time with minimal supervision;
- Demonstrated interest and knowledge of the watershed communities or the Catskills Mountains preferred, but not required.
(New course!) Building Climate Justice: Co-Creative Coastal Resilience Planning (SDEV GU4650)
Expected course day/time: M 4:10-6:00 PM & W 2:10-4:00 PM
This course will educate students and support effective coastal resilience planning and climate justice through social science and data science learning, including data acquisition and analysis, making use of emerging technologies and best practices for collaboration with environmental and climate justice practitioners.
Instruction is provided in two areas: i.) Climate adaptation planning and climate justice; and ii.) Data science: acquisition, analysis and visualization.
Time commitment and responsibilities: A teaching assistant must fulfill the responsibilities as identified by the assigned supervising instructor while maintaining conduct of the highest level of professionalism and confidentiality. The teaching assistant may be responsible for directing drills, recitations, discussions or laboratory sessions related to courses offered by an officer of higher rank. They will be responsible for meeting and coordinating with the instructor regularly and performing other course-related duties as assigned, such as grading written coursework. This also may include developing, distributing and statistically analyzing peer-review and self-review forms.
Applicants must be current full-time CU students enrolled in a degree-granting program. Applications will only be accepted from graduate students. | Emerging Technologies |
Much of the attention surrounding the recently passed CHIPS and Science Act focused on investments in the semiconductor industry, and rightly so — the bill made a historic down payment on chip manufacturing and innovation that will help strengthen supply chains and national security and restore American competitiveness and economic leadership for the future. But the CHIPS and Science Act also authorized substantial investments to accelerate other emerging technologies, like quantum computing, which can help solve some of the world's most complex problems faster and more efficiently than standard computers and is critical to our national security.
As we approach the fourth anniversary of the National Quantum Initiative (NQI), we are witnessing how government-funded programs can significantly accelerate prospects for quantum technology — an "industry of the future." But there's more work to be done — we must continue building on the progress we've made through NQI, capitalize on the bold, new investments in the CHIPS and Science Act and significantly expand the support and development of quantum technologies.
Passed in 2018 with strong support from both parties, the NQI authorized several major increases in support and goals, including an additional $1.25 billion of federal support for quantum efforts into the Department of Energy (DOE), National Science Foundation (NSF) and National Institute of Standards and Technology (NIST). At DOE, it authorized several new national quantum research centers, critically, with joint participation from private quantum companies. For NSF, it authorized several new university research and teaching programs on quantum technologies. NIST was able to build a broad industry consortium, the Quantum Economic Development Consortium or QED-C, to help drive new commercial prospects for quantum technologies. The NQI also aimed to trigger others in America to invest in quantum technologies, including universities and the private sector. Finally, it set up federal coordination of efforts through the White House Office of Science and Technology Policy.
During the past few years, all three agencies received full appropriations and have executed them as directed. At DOE, over 70 parties are now part of the five national quantum research centers, including labs, universities and many private companies such as IBM (where one of us works as a senior vice president and director of research), Applied Materials and Goldman Sachs. The effort also triggered substantial additional industry and academic investment beyond the federal NQI funding. As a result, industry, national labs and others have achieved significant technological accomplishments, and many universities expanded degree programs for quantum technologies. Everyone involved with drafting, passing, appropriating and implementing the NQI should feel proud about what it has accomplished.
In the last three months, new federal quantum efforts moved forward. In May, a consortium of federal agencies announced a plan to construct the world's first metro-area quantum network in Washington, D.C., connecting agency locations including the Naval Research Lab, NIST, NASA and the National Security Agency (NSA). In the future, additional areas of the country could be added to build regional quantum networks, which ultimately could be interconnected.
But we can't stop there. The recently passed CHIPS and Science Act authorizes new efforts to advance quantum technologies and presents a new opportunity to double down on our advancements.
For DOE, it creates two new efforts: the QUEST program will have DOE procure quantum computing capacity over the cloud for the use of science researchers. This $166 million purchase over five years, amounting to $33.2 million a year, is a good foundation to provide quantum computing capacity to researchers and help nurture the user community for quantum computing applications. The second DOE effort authorizes $500 million over five years to build large-scale quantum network infrastructure around the country.
The CHIPS and Science Act also increases support for quantum technologies at NSF. The largest increase will likely come from the new technology and innovation directorate at NSF, where quantum computing is one of the industries of the future that the additional NSF funding would support. To capitalize on this progress and our renewed economic focus on innovation, Congress must fully appropriate first-year funds in fiscal year 2023.
But to accelerate the development of quantum computing and its uses — including in quantum networking — more is needed. The time has come to design and build a new type of supercomputer for our nation — quantum-centric supercomputers. These new national assets would tightly integrate classical computing, including traditional high-performance computers and specialized AI chips, with quantum processors in a new type of architecture. Quantum-centric supercomputers hold the potential to scale and speed up workflows by combining quantum and classical algorithms using parallelization of computations. These are workflows with profound implications for science, business and national security missions. The DOE has a rich history of making this level of investment for the high-performance computing centers of the nation. It is time to evolve that level of initiative to quantum-centric supercomputing centers across the U.S.
Computing and communications networks mutually reinforce each other. This will also be true in the quantum era. Future quantum networks will be built from clusters of quantum processors — and quantum supercomputers and datacenters — at their nodes, with short-range intranet links connecting the processors in a node, and long-range quantum communication links connecting the nodes akin to a quantum internet.
The NQI and investments under the CHIPS and Science Act and the QUEST program give us an important jump start. Growing the quantum industry to full capacity requires continuing to increase the level of focused investment in high-impact initiatives with ambitious national goals as outlined above. American leadership places us at the forefront of quantum technologies. But that leadership is fragile. New efforts to build quantum supercomputers and a quantum internet will accelerate that leadership and pay dividends for decades to come. We must keep our foot on the gas and continue to make the necessary investments for a vibrant quantum future for all.
Dario Gil, Ph.D., is senior vice president and director of research at IBM, and is a member of the National Science Board. The Honorable Paul Dabbar is a former undersecretary for Science at the U.S. Department of Energy, a Distinguished Visiting Fellow at Columbia University, and CEO of Bohr Quantum Technology. | Emerging Technologies
A banner of the 5G network is displayed during the Mobile World Congress wireless show in Barcelona, Spain, in this Feb. 25, 2019, file photo. (AP Photo/Manu Fernandez)
The United States has always been on the cutting edge of tech. Our free-market system enabled us to win the race to 4G, helped unleash the app economy, and allowed us to get to 5G faster than others. Our country's leadership in tech helps secure the nation's economic power and protect national security so the United States continues to serve as a beacon of peace and democracy. Technology should be a force for good in the world. Our national security, and the security of other nations, is tied to our ability to keep up with and get ahead of emerging technologies.
I'm encouraged to see that Congress is working together to implement a national spectrum policy. America needs a national strategy to make sure there is enough spectrum to build out 5G networks and not fall behind China. Spectrum refers to the radio waves on which we transmit data, and it serves as the foundation for many of the wireless networks that power our lives, including 5G. Spectrum is the lifeblood of technological innovation — including advancements in national security that power our weapons systems and intelligence operations. 5G is quite literally the fifth generation of wireless connection, and it serves as a crucial foundation for innovations and advancements in the near and not-too-distant future.
Alarmingly, America does not have enough spectrum in the pipeline to build out secure and reliable 5G networks. According to a paper by Analysys Mason, the United States ranks 13th in terms of available licensed spectrum — significantly behind nations such as China, Brazil and Saudi Arabia. One reason why is that the United States has overallocated spectrum to unlicensed use. This type of spectrum is available to the public and has important uses, but it's not the foundation of secure and reliable 5G networks. Unlike managed licensed spectrum, unlicensed spectrum faces interference, and devices connected to unlicensed spectrum aren't always assessed for security concerns. Indeed, when it comes to security, users of unlicensed spectrum have varying incentives, capabilities and technical skills, resulting in more cybersecurity risks than those who use managed licensed allocations.
Like unlicensed spectrum, spectrum sharing frameworks can also increase cybersecurity risks. Some forms of spectrum sharing create new points of attack — such as centralized databases — that can be compromised and disrupt service. Additionally, spectrum sharing increases the cyber threat surface by increasing the number of non-operator entities with access to spectrum. Lastly, spectrum sharing adds complexity, and therefore delays, to the process at a time when delivering connectivity to the masses is needed now.
To lead the world in 5G and beat China, we need a spectrum strategy that prioritizes licensed spectrum. After spending billions of dollars to acquire licensed spectrum, mobile network operators protect this investment with additional investments in network protection, device testing and security, and more. Policymakers should encourage these investments in spectrum and security. To efficiently build out 5G networks, mobile operators need a clear understanding of spectrum bands that will become available. Policymakers must develop a national spectrum strategy and identify a pipeline of bands that can be repurposed for licensed, high-power 5G use.
The end of year spending bill included a short-term reauthorization of the Federal Communications Commission’s spectrum auction authority. A long-term extension of auction authority should be a priority for the 118th Congress. Allowing auction authority to expire at such a pivotal moment would indicate to our adversaries that the United States is losing sight of what it takes to be a global tech leader. The future of U.S. national security relies on the advancement of 5G networks and standards. To get there, Congress needs to come together now to enact legislation that creates a reliable supply of licensed mid-band spectrum. Action is a must, or the national implications of inaction will prove costly. Mike Rogers, a former representative from Michigan in Congress, served as chairman of the House Intelligence Committee. He was an officer in the U.S. Army and an FBI special agent. He is a member of the board of trustees of the Center for the Study of the Presidency and Congress. | Emerging Technologies |
U.N. Secretary-General Antonio Guterres offered proposals Thursday to deal with what he says is an emerging multipolar world order that is characterized by rising geopolitical tensions, conflicts and emerging technologies.
"Today's new threats to peace create new demands on us," Guterres told U.N. member states.
He said his recommendations, outlined in a policy brief called a "New Agenda for Peace," recognize the interlinked nature of many global challenges.
"Conflicts have become more complex, deadly and harder to resolve," he said. "Last year saw the highest number of conflict-related deaths in almost three decades."
He said Russia's invasion of Ukraine has made it harder to address many global challenges.
"If every country fulfilled its obligations under the [U.N.] Charter, the right to peace would be guaranteed," Guterres said. "But when countries break those pledges, they create a world of insecurity for everyone."
Development goals
The secretary-general said one of the keys to addressing the problems plaguing the planet is to accelerate work on the 2030 Agenda, a set of key global development goals. He noted that conflict prevention and sustainable development are mutually reinforcing.
"It is no coincidence that countries affected by conflict are farthest behind on the sustainable development goals," he said.
He expressed concern about the possibility of nuclear war, which has reemerged with Russia's war in Ukraine, and he called for the elimination of nuclear weapons in general.
"Pending their total elimination, states possessing nuclear weapons must commit to never use them," Guterres said.
He urged nations to reduce military spending and ban inhumane and indiscriminate weapons.
New technologies
The U.N. chief also expressed concern about new technologies that offer the potential for huge progress, but also, in the case of generative artificial intelligence and lethal autonomous weapons (also known as "killer robots"), are "creating new ways in which humanity can annihilate itself."
On that front, Guterres says the international community should adopt, by 2026, a legally binding instrument to prohibit such autonomous weapons. He is also backing the idea of creating a new global body to mitigate the peace and security risks of AI. He envisions it being similar in structure and mandate to the International Atomic Energy Agency, which is the organization's nuclear watchdog agency.
The secretary-general said he plans to convene a high-level advisory body to outline options on global AI governance, which will report back by the end of this year.
Among his other recommendations, Guterres calls for the inclusion of women in leadership and decision-making, urging quotas where necessary.
He also calls for "broad-based reflection" on the future of U.N. peacekeeping operations. The U.N. spends billions annually on its peacekeeping missions, but with limited success. Two of its largest missions – in Mali and the Democratic Republic of the Congo – are presently winding down at the request of those nations.
Guterres urged nations to get involved in discussions about his proposals. In 2024, the United Nations is planning a Summit of the Future to address global risks and opportunities. The secretary-general told member states he hopes his recommendations will help in their deliberations. | Emerging Technologies |
President Biden and Prime Minister Modi announced the U.S.-India initiative on Critical and Emerging Technology (iCET) in May 2022 to elevate and expand our strategic technology partnership and defense industrial cooperation between the governments, businesses, and academic institutions of our two countries.
The United States and India affirm that the ways in which technology is designed, developed, governed, and used should be shaped by our shared democratic values and respect for universal human rights. We are committed to fostering an open, accessible, and secure technology ecosystem, based on mutual trust and confidence, that will reinforce our democratic values and democratic institutions.
Today, the two National Security Advisors led the inaugural meeting of the iCET in Washington, DC. They were joined on the U.S. side by the Administrator of the National Aeronautics and Space Administration, the Director of the National Science Foundation, the Executive Secretary of the National Space Council, and senior officials from the Department of State, Department of Commerce, the Department of Defense, and the National Security Council. On the Indian side, the Ambassador of India to the United States, the Principal Scientific Advisor to the Government of India, the Chairman of the Indian Space Research Organization, the Secretary of the Department of Telecommunications, the Scientific Advisor to the Defense Minister, the Director General of the Defence Research and Development Organization, and senior officials from the Ministry of Electronics and Information Technology and the National Security Council Secretariat participated. The two sides discussed opportunities for greater cooperation in critical and emerging technologies, co-development and coproduction, and ways to deepen connectivity across our innovation ecosystems. They noted the value of establishing “innovation bridges” in key sectors, including through expos, hackathons, and pitch sessions. They also identified the fields of biotechnology, advanced materials, and rare earth processing technology as areas for future cooperation.
The United States and India underlined their commitment to working to resolve issues related to regulatory barriers and business and talent mobility in both countries through a standing mechanism under iCET. This followed the January 30 roundtable hosted by the U.S.-India Business Council with U.S. Secretary of Commerce Gina Raimondo, U.S. National Security Advisor Jake Sullivan, Indian National Security Advisor Ajit Doval, and other senior U.S. and Indian officials, which brought together more than 40 CEOs, university presidents, and thought leaders from both countries to accelerate opportunities for increased technology cooperation.
To expand and deepen our technology partnership, the United States and India are launching new bilateral initiatives and welcoming new cooperation between our governments, industry and academia in the following domains:
Strengthening our Innovation Ecosystems
- Signing a new Implementation Arrangement for a Research Agency Partnership between the National Science Foundation and Indian science agencies to expand international collaboration in a range of areas — including artificial intelligence, quantum technologies, and advanced wireless — to build a robust innovation ecosystem between our countries.
- Establishing a joint Indo-U.S. Quantum Coordination Mechanism with participation from industry, academia, and government to facilitate research and industry collaboration.
- Drawing from global efforts to develop common standards and benchmarks for trustworthy AI through coordinating on the development of consensus, multi-stakeholder standards, ensuring that these standards and benchmarks are aligned with democratic values.
- Promoting collaboration on High Performance Computing (HPC), including by working with Congress to lower barriers to U.S. exports to India of HPC technology and source code.
Defense Innovation and Technology Cooperation
- Developing a new bilateral Defense Industrial Cooperation Roadmap to accelerate technological cooperation between both countries for the joint development and production, with an initial focus on exploring projects related to jet engines, munition related technologies, and other systems.
- Noting the United States has received an application from General Electric to jointly produce jet engines that could power jet aircraft operated and produced indigenously by India. The United States commits to an expeditious review of this application.
- Enhancing long-term research and development cooperation, with a focus on identifying maritime security and intelligence surveillance reconnaissance (ISR) operational use cases.
- Launching a new “Innovation Bridge” that will connect U.S. and Indian defense startups.
Resilient Semiconductor Supply Chains
- Enhancing bilateral collaboration on resilient semiconductor supply chains; supporting the development of a semiconductor design, manufacturing, and fabrication ecosystem in India; and leveraging complementary strengths, both countries intend to promote the development of a skilled workforce that will support global semiconductor supply chains and encourage the development of joint ventures and technology partnerships on mature technology nodes and packaging in India.
- Welcoming a task force organized by the U.S. Semiconductor Industry Association (SIA) in partnership with the India Electronics Semiconductor Association (IESA) with participation from the Government of India Semiconductor Mission to develop a “readiness assessment” to identify near-term industry opportunities and facilitate longer-term strategic development of complementary semiconductor ecosystems.
- This task force will make recommendations to the Department of Commerce and the India Semiconductor Mission on opportunities and challenges to overcome in order to further strengthen India’s role within the global semiconductor value chain, and will also provide input to the U.S.-India Commercial Dialogue. The task force will also identify and facilitate workforce development, R&D including with respect to advanced packaging, and exchange opportunities to benefit both countries.
Space
- Strengthening cooperation on human spaceflight, including establishing exchanges that will include advanced training for an Indian Space Research Organization (ISRO)/Department of Space astronaut at NASA Johnson Space Center.
- Identifying innovative approaches for the commercial sectors of the two countries to collaborate, especially with respect to activities related to NASA’s Commercial Lunar Payload Services (CLPS) project. Within the next year, NASA, with ISRO, will convene U.S. CLPS companies and Indian aerospace companies to advance this initiative.
- Initiating new STEM talent exchanges by expanding the Professional Engineer and Scientist Exchange Program (PESEP) to include space science, Earth science, and human spaceflight, and extending a standing invitation to ISRO to participate in NASA's biannual International Program Management Course.
- Strengthening the bilateral commercial space partnership, including through a new U.S. Department of Commerce and Indian Department of Space-led initiative under the U.S.-India Civil Space Joint Working Group. This initiative will foster U.S.-India commercial space engagement and enable growth and partnerships between U.S. and Indian commercial space sectors.
- Welcoming the visit this week by the ISRO Chairman to the United States, as well as a visit to India by the NASA Administrator later in 2023.
- Expanding the agenda of the U.S.-India Civil Space Joint Working Group to include planetary defense.
Science, Technology, Engineering and Math Talent:
- Noting a new joint task force of the Association of American Universities and leading Indian educational institutions, including Indian Institutes of Technology, which will make recommendations for research and university partnerships.
Next Generation Telecommunications:
- Launching a public-private dialogue on telecommunications and regulations.
- Advancing cooperation on research and development in 5G and 6G, facilitating deployment and adoption of Open RAN in India, and fostering global economies of scale within the sector.
The United States and India look forward to the next iCET meeting in New Delhi later in 2023. The National Security Councils of both countries will coordinate with their respective ministries, departments and agencies to work with their counterparts to advance cooperation, and to engage with stakeholders to deliver on ambitious objectives ahead of the next meeting.
### | Emerging Technologies |
Arthur Mensch is one of a new generation of entrepreneurs hoping to solve a longstanding problem with the European economy: its failure to produce a Silicon Valley-style tech behemoth.
The 31-year-old Frenchman is chief executive of Mistral, a startup that achieved a €240m (£206m) valuation in its first round of financing – four weeks after it was founded. And he believes artificial intelligence (AI) will be the great leveller, putting Europe on a par with its previously uncatchable competitors across the Atlantic.
Mistral develops large language models – the technology that underpins AI tools such as ChatGPT – and Mensch believes this could hand the initiative to a continent producing a new wave of fast-moving startups.
“Given the new tools we have to hand, like large language models, everything has to be rebuilt around them. When something has to be rebuilt, it gives new players an advantage because they can go fast,” he says.
Mensch, a former employee of Google’s AI unit, now called Google DeepMind, is part of a big-tech European diaspora that has served an apprenticeship of sorts with big US firms and is now going it alone. And he has already achieved standing among his peers: he will be attending the global AI safety summit this week with other tech chief executives, world leaders, experts and civil society figures at Bletchley Park in the UK.
Gabriel Hubert is part of that transatlantic return wave and is an AI entrepreneur too. The 39-year-old Frenchman has returned from a tech role in California to establish Dust, a Paris-based startup that builds internal AI-powered assistants for companies.
“If you look at the founders of some of the startups in Berlin, London and Paris right now, many of them have former operators from US tech companies at their helm or in key leadership positions,” he says.
Europe is a world leader in an array of industries from fashion to pharmaceuticals, cars and aerospace, but it has underperformed in tech, despite a skilled workforce, formidable academic talent and the opportunities afforded by the single market.
There is no European equivalent to Amazon, Google's owner Alphabet, Facebook's parent Meta or tech industry grandees such as Apple or Microsoft. Together with Elon Musk's Tesla and chip maker Nvidia, these companies make up the so-called Magnificent Seven, which has opened up a wide gulf between New York's stock exchanges and the bourses of London, Paris and Frankfurt.
Mensch and Hubert cite a number of reasons why there has not been a breakthrough tech success on a scale of the world’s biggest search engine or a Mark Zuckerberg-led social network. They point to the strength of the US tech sector at the turn of the millennium – as much as Europe’s comparative weakness back then – as a reason why the likes of Google and Facebook broke through.
There was a “tight-knit” community of engineers, designers, entrepreneurs and investment firms in the US, particularly in California, says Hubert. They could identify business opportunities and build them quickly in a massive market, with the help of a host of US-based venture capital (VC) funds – investment firms that back startup businesses. Facebook in the early 2000s and Twitter in the late 2000s were able to fit into a wider infrastructure that had “already built successful tech companies”, he says.
Clara Chappaz, director of La Mission French Tech, a government body supporting French startups, agrees that US tech benefited from access to a huge domestic market and the ready availability of finance.
“A weakness compared with the US has been supporting companies with all the financing they need,” she says, adding that the French government is addressing this with policies including tax credits for research, a flat tax on capital gains and the multibillion-euro France 2030 investment plan.
A perennial complaint of tech entrepreneurs – and a frequently heard argument for why there has been no European Google – is that Europe-based investors can be risk-averse. Mensch, whose company funding seems to prove that Europe is at least alive to new tech opportunities now, says the investment backdrop is changing.
“Compared with what was happening 10 years ago in Europe, there is much more appetite for risk in investing in emerging technologies. That’s why I am optimistic that something good can happen,” he says.
France, like the UK, has high hopes for AI. Hubert’s compatriot, billionaire Xavier Niel, committed last month to investing €200m in AI, including a research lab and extra computing capacity.
Elsewhere, it is not hard to find optimists in the tech sector, as you would expect of an industry that thrives on the new. Fredrik Cassel is a general partner at Stockholm-based venture capital firm Creandum, which has a record of picking winners, including Spotify, the £24bn music streaming business that is one of the outstanding European tech successes of the past few decades. Asked why Europe has not produced its own Google, Apple or Microsoft, he says: “It’s just a matter of time.”
Cassel points to a surge in capital. VC funding in European tech, including the UK, has risen from less than $1bn two decades ago to more than $100bn in 2021, according to VC firm Atomico. However, that is expected to fall to $51bn (£42bn) this year due to global pressures on the market.
A lot of European venture capital bets have not come off – as is standard in the industry – but Creandum’s instincts on the potential of companies such as buy-now-pay-later firm Klarna and fashion app Depop have been proved right.
Cassel says the likes of Spotify, Klarna and London-based fintech firm Revolut have only been around for a short time compared with members of the US tech establishment such as Apple and Microsoft, but they are starting to inspire other startups – sometimes via former staff members. In addition to the ex-Silicon Valley diaspora, there is now a new wave of entrepreneurs coming from European firms themselves. Cassel says: “These companies in turn produce 30, 40, 50 new companies through executives that leave and start new businesses.”
He adds that Europe also has strengths in new tech areas that were not as well developed when the likes of Facebook were riding the social media phenomenon. As well as AI, there is climate-related technology, health, general software and fintech (a catch-all term for digital businesses that work in banking or financial services), with Cassel citing Nordic businesses such as battery maker Northvolt and eco-conscious steel firm H2 Green Steel in the climate tech field. “The next big one in tech? There’s high potential it is going to be European, and from these fields,” he says.
However, Europe is not immune to the downward pressure that has affected tech firms globally. The aggregate value of European “unicorns” – startup companies now valued at €1bn or more – has fallen for the first time in more than a decade to €442bn, from €446bn at the end of last year, according to data firm Pitchbook, amid lower investor appetite for stock market listings. The data includes the UK.
Revolut, Klarna, delivery firm Getir and online payments firm Checkout.com have all had their valuations reduced, although they are still worth multibillion-dollar sums.
Atomico also points to a fall in the number of new European startups – about 11,000 in 2022 compared with 18,000 in 2020. And according to Pitchbook, only four new unicorns have been created in Europe so far this year, compared with 40 for the full year in 2022.
Tom Wehmeier, a partner at Atomico, says the tougher market conditions are due to higher interest rates in response to inflation in Europe and the US, which made it harder for tech firms to raise funds. “This was a global retraction in the tech market,” he says.
Jean-Marc Ollagnier, chief executive of the European arm of Accenture, the consulting group, says the continent has “not failed” to produce world-leading tech companies but admits it has underperformed. “We could have more giants than we have,” he says.
But like Mensch, Hubert and Cassel, he sees the emergence of new technological breakthroughs as a chance to remedy that. Green tech – “the world needs to be sustainable” – and AI – “a force of disruption that is massive” – will create opportunities that Europe is now in a better position to capitalise on.
“At least for the time being, the game is open,” he says. “It doesn’t mean Europe will win, but it does not mean Europe will certainly lose.” | Emerging Technologies |
SYDNEY, March 2 (Reuters) - China has a "stunning lead" in 37 out of 44 critical and emerging technologies as Western democracies lose a global competition for research output, a security think tank said on Thursday after tracking defence, space, energy and biotechnology.
The Australian Strategic Policy Institute (ASPI) said its study showed that, in some fields, all of the world's top 10 research institutions are based in China.
The study, funded by the United States State Department, found the United States was often second-ranked, although it led global research in high-performance computing, quantum computing, small satellites and vaccines.
"Western democracies are losing the global technological competition, including the race for scientific and research breakthroughs," the report said, urging greater research investment by governments.
China had established a "stunning lead in high-impact research" under government programs.
The report called for democratic nations to collaborate more often to create secure supply chains and "rapidly pursue a strategic critical technology step-up".
ASPI tracked the most-cited scientific papers, which it said are the most likely to result in patents. China's surprise breakthrough in hypersonic missiles in 2021 would have been identified earlier if China's strong research had been detected, it said.
"Over the past five years, China generated 48.49% of the world's high-impact research papers into advanced aircraft engines, including hypersonics, and it hosts seven of the world's top 10 research institutions," it said.
In the fields of photonic sensors and quantum communication, China's research strength could result in it "going dark" to the surveillance of western intelligence, including the "Five Eyes" of Britain, United States, Australia, Canada and New Zealand, it said.
National talent flows of researchers were also tracked and monopoly risks were identified.
China was likely to emerge with a monopoly in 10 fields including synthetic biology, where it produces one-third of all research, as well as electric batteries, 5G, and nano manufacturing.
The Chinese Academy of Sciences, a government research body, ranked first or second in most of the 44 technologies tracked, which spanned defence, space, robotics, energy, the environment, biotechnology, artificial intelligence (AI), advanced materials and quantum technology.
China was bolstering its research with knowledge gained overseas, and the data showed one-fifth of the top Chinese researchers were trained in a Five Eyes country, it said.
The study recommended visa screening programs to limit illegal technology transfers and instead favour international collaboration with security allies.
Australia's universities have said they are complying with foreign influence laws designed to stop the illegal transfer of technology to China, but also noted international collaboration is an integral part of university research.
Reporting by Kirsty Needham; Editing by Edmund Klamann | Emerging Technologies
Last week the Reserve Bank of Australia announced a year-long research project with the Digital Finance Cooperative Research Centre to explore "use cases" for a central bank digital currency (CBDC). Here is what's going on.
What is a CBDC and how is it different from cryptocurrency?
Banknotes are a physical form of money we exchange for goods and services. And we're increasingly making digital transactions, whether tapping credit cards or smartphones. ATM use is down about a third in three years, the RBA says.
Now, the RBA and counterparts around the world are studying new digital forms of money that central banks themselves might issue. Research will examine uses of CBDC for commercial banks – the wholesale market – and a retail version the public may one day use.
Cryptocurrencies, by contrast, are decentralised, unlike "fiat currencies" produced and regulated by governments. Bitcoin and ethereum are among prominent digital currencies relying on cryptography to secure transactions.
To curb price volatility of cryptos, stablecoins have been created to mimic "fiat currencies" by anchoring value to assets such as the US dollar. The failure of TerraUSD and other stablecoins reflects the sector's infancy. CBDCs might fill the gap.
"A fully realised central bank digital currency has the promise to bring the regulatory certainty and power of digital assets to a place that's coupled with the trust and faith that we have in money that's issued by the Reserve Bank today," said Michael Bacina, a partner at Piper Alderman and a fintech specialist.
Why is the RBA getting involved?
Partly exploratory. "I don't think it's inevitable" that the bank will issue CBDCs, says the RBA deputy governor, Michele Bullock.
"In terms of day-to-day payments that touch you and [me] and our friends and family, it's not clear to us what the case for it is," she says. "We have banknotes. We have lots and lots of digital money alternatives [including] fast payments now."
The focus will be less on the technology itself and more on settling design principles: how decentralised such currencies might be, while maintaining standards of protecting privacy that the public can accept.
"Do you put limits on the amount of money people can have in this? Does the central bank issue it directly, or [as] we do with banknotes issue CBDCs via existing banks," Bullock says. "I don't think anyone's come to a complete consensus."
Is there an appetite?
If an Australian Securities and Investments Commission report on investor behaviour released on Thursday is any guide, the market for digital currencies is growing rapidly.
Its survey of 1,053 investors found that cryptocurrencies were second only to Australian shares in terms of most common asset held, at 73% and 44%.
In terms of the value of the holdings, cryptos were also on a par with residential investment properties.
What do researchers say?
Andreas Furche, the chief executive of the Digital Finance Cooperative Research Centre, notes the RBA's ongoing caution.
"It's not something that's a done deal," Furche says. "It's not clear yet whether from the RBA perspective this is going to fit or be useful or not."
The trial will be "ring-fenced" with only registered parties taking part. It will, though, be open in another sense: "We don't have a preconceived outcome."
"Those of us who build or discuss or provide infrastructure aren't necessarily the innovators that build new kinds of market infrastructure, business models or whatever on that infrastructure," Furche says. "If we just make that assessment based on what we can think of ourselves, we're not going to get anywhere."
He says the rise of stablecoins indicates there's an opportunity to meet people's interest in digital currencies without the exposure to as much volatility.
"Despite the name, [stablecoins] are often still fraught with risk because they're not necessarily backed 100%," he says. CBDCs, based on a national currency, are an "ultimate stablecoin".
What do market participants say?
Chloe White, an independent consultant and formerly Treasury's representative on the Council of Financial Regulators examining cryptos, says blockchain and the ecosystems that are building around it will continue to function and grow whether governments issue CBDCs or not.
"What we see happening in cryptocurrency markets at the moment very much mirrors what we see in the traditional system," White says. "You have a so-called real economy where people are transacting goods … and then you have a financial layer wrapped around" with derivatives, insurance and so on.
There may even be national security reasons for having CBDCs and not missing out on emerging technologies and new ways of doing business.
"China, in particular, seems quite determined to want to leverage this technology in some way," she says. "And there's barely a corner of the world that you can point at that has influence and economic power that's not looking at these issues in some way."
Bacina says the fintech world is evolving faster than the internet at its genesis. "It's the same as we could not predict Netflix and we could not predict Amazon's next-day delivery when the internet was being invented and rolled out.
"There are no wires to be put down, and that physical infrastructure to be connected – it's already there.
"We're talking about the ability to automate things like bank guarantees, and other slow, manual processes which currently drive up compliance costs."
As for who might benefit from the RBA and Digital Finance Cooperative Research Centre study, Bacina says participants may learn as much as the institutions.
"It's a six- or seven-way street," he says. Interest will focus on "deep analysis of systems contracts, regulatory interfaces – that kind of analysis doesn't occur very often". | Emerging Technologies
MIAMI, FL - APRIL 27: Chipotle restaurant workers fill orders for customers. (Photo by Joe Raedle/Getty Images)
During Chipotle's Q2 earnings call Tuesday afternoon, there was a lot of discussion around "throughput" and the company's efforts to improve it. Why that's important is simple: Chipotle experienced a same-store sales increase of 10.1% in the quarter and has largely remained insulated from the current inflationary pressures hitting consumers' wallets. But there remains plenty of room for improvement, particularly if Chipotle can serve even more meals to more customers throughout the day. That means speeding up the in-restaurant makeline and the second, digital makeline. Throughput.
This process, however, is easier said than done in an industry that has struggled to find employees. That said, Scott Boatwright, chief restaurant officer, has a game plan.
"We just launched Project Square One, and it's literally just that. Let's get back to square one on how we deliver great fundamentals of great throughput," Boatwright said during a phone interview Tuesday evening. "The nuances of great throughput include teaching team members on the line how to deliver a great experience and keep moving, to listen out of both ears, hand items down politely to the next team member. The little things add up during a peak volume window and make us so much more efficient."
Chipotle was close to achieving optimum throughput in 2019 after Boatwright and team introduced a training program specifically focused on the basics of operations. That training included defining necessary positions to execute orders efficiently – positions like expediters, which can move items down the line up to 20-to-30% faster.
In 2019, however, digital sales only made up about 20% of Chipotle's mix. Now, the company remains well above 35% on digital sales, even as its in-restaurant sales return closer to pre-pandemic levels. In-restaurant sales increased 36% on the quarter. This has essentially created two separate multibillion-dollar businesses within the company, which has become somewhat of a challenge as team members spent the past year and a half mostly focused on only digital.
"What's transpired, when we lost in-restaurant business during Covid and moved to digital, that stuff like throughput wasn't important anymore. After two years, we have new team members and new managers in the business who don't recall what great throughput down the line was like or how to drive it," Boatwright said. "As our in-restaurant recovery began to happen about eight or nine months ago, it became apparent to me that we just weren't there."
The need to be "there" has become even more critical as Chipotle looks to more than double its footprint, with most new units including a mobile-order-ahead Chipotlane, and as the chain aspires to reach $3 million in average unit volumes, from the current $2.8 million.
In addition to launching Project Square One, Chipotle has also put several other pieces into place to maximize operational efficiencies. Field leaders occasionally work "shoulder to shoulder" with team members during peak hours, for instance.
Chipotle has also implemented a time management and labor delivery tool to ensure staffing is maximized at the right time. The tool's scheduling capabilities are facilitated by machine learning, meaning it factors in considerations such as promotional events and weather. The company is also installing a new point-of-sale system to streamline the ordering process for team members, and a new pin pad system to allow customers a faster and contactless payment option.
"All of these things are more efficient and easier for team members and for customers and they save some time on the order," Boatwright explained.
Of course, there's also the idea of automation – which Chipotle has embraced with gusto – to save on time and labor. In May, the chain announced it was testing a robot named Chippy to help make tortilla chips. And, just last week, Chipotle announced an investment in Hyphen, a foodservice platform that automates kitchen operations. Boatwright said Hyphen has the potential to make digital orders automatically, while Chippy removes mundane tasks from team members' workloads.
"If you ideate to some future state, you can foreseeably see digital orders come into our ecosystem and Hyphen will recognize and prepare a bowl in real time. This will reduce labor on the line, create better accuracy and better portioning and, overall, a more efficient process," he said. "We think it's a big idea."
It's also a different position from what some of Chipotle's peers are taking. During McDonald's Q2 earnings call Tuesday, for instance, CEO Chris Kempczinski said automation won't be a "silver bullet" and the idea of robots is not practical for the majority of its restaurants. Conversely, Chipotle is all in on finding emerging tech to roll into its operations. The company launched a $50 million "Cultivate Next" fund in the spring to invest in companies that align with Chipotle's mission, and Hyphen is a part of that fund. Operational efficiency in general is a priority.
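Returning to the machine-learning scheduling tool mentioned above, the sketch below illustrates, in broad strokes, how a demand forecast that factors in promotions and weather could be turned into a staffing recommendation. All feature names, numbers and the orders-per-crew-member assumption are invented for illustration; this is not Chipotle's actual system.

```python
from sklearn.ensemble import GradientBoostingRegressor
import numpy as np

# Hypothetical sketch of a demand-forecasting model behind a labor-scheduling
# tool; features and labor math are invented, not Chipotle's actual system.
# Each row: [hour_of_day, day_of_week, promo_running (0/1), temperature_f, rain (0/1)]
X_train = np.array([
    [11, 1, 0, 72, 0],
    [12, 1, 1, 75, 0],
    [18, 5, 0, 60, 1],
    [12, 6, 1, 80, 0],
])
y_train = np.array([140, 220, 90, 260])  # observed orders per hour

model = GradientBoostingRegressor().fit(X_train, y_train)

# Forecast Friday lunch with a promotion and good weather, then convert the
# forecast into a crew count using an assumed 35 orders per crew member per hour.
forecast = model.predict([[12, 4, 1, 78, 0]])[0]
crew_needed = int(np.ceil(forecast / 35))
print(f"Forecast: {forecast:.0f} orders/hour -> schedule {crew_needed} crew")
```

In practice such a tool would be trained on months of store-level sales history; the point of the sketch is only that external signals like promotions and weather become ordinary input features to the forecast that drives the schedule.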
According to Boatwright, Chipotle is well positioned to consider emerging technologies, perhaps more so than its peers. “I think a lot of peers are entrenched and saturated and that has caused them not to think about innovation in the right way. I also think we have an advantage because we’re company-owned and we don’t have a franchise community that may be scared of the unknown,” he said. “We’re at 3,000 restaurants and headed toward 7,000 and we have a big opportunity to really build the Chipotle of the future. The ideas we want to lean into are born in the restaurants and answering what problems we want to solve." | Emerging Technologies |
A True Zero hydrogen fuel pump for fuel cell vehicles is shown in San Diego, California, U.S. November 9, 2021. REUTERS/Mike Blake
Summary:
- Developed with AIG, Liberty Specialty Markets
- Up to $300 mln cover for construction, start-up phases
- Comes as world looks to scale up nascent industry
LONDON, Aug 22 (Reuters) - Broker Marsh, a unit of Marsh & McLennan (MMC.N), said on Monday it was launching the world's first dedicated insurance for hydrogen energy projects, as the nascent industry looks to scale up quickly in the fight against climate change.
As the world targets net-zero emissions by mid-century in an effort to cap global warming, hydrogen, particularly "green" hydrogen made from renewable energy sources, is seen as a crucial means of getting there.
U.S. politicians earlier this month backed a $430 billion spending package that included support for a range of renewable energy sources such as hydrogen.
Projects involving the highly flammable gas have often found it harder to find cover, partly because of the complexity and risks involved in production, transportation and storage, and as new and emerging technologies are generally considered riskier.
Developed with insurers American International Group (AIG.N) and Liberty Specialty Markets, Marsh said the new facility would provide up to $300 million of cover per risk for the construction and start-up phases of hydrogen projects globally.
The facility would be available to multinational organizations as well as smaller firms and cover both new and existing "blue" and "green" hydrogen projects, the world's largest insurance broker said.
Blue hydrogen is produced from natural gas, while green hydrogen is made from renewable sources and is seen providing a flexible and low-emission fuel for transportation, electricity generation, and as an input into various industrial processes.
"Marsh's facility is an important development for the insurance industry that will help enable the acceleration of the global energy transition to renewables," said Andrew George, Global Head, Energy & Power, Marsh Specialty.
"As the global hydrogen industry, especially green hydrogen, scales up rapidly to meet demand, the facility will reduce the complexity of securing risk transfer options for operators of all sizes and boosts investor and lender confidence in achieving their ambitious project timeframes."
Marsh's clients could either opt for coverage for the startup phase or choose a combined risks policy that extends to first-year operations, the New York-based company said.
Renewable and low-carbon hydrogen would account for only 5% of the global final energy mix by 2050, falling short of what is needed to meet climate goals, according to a report in June from Norway-based global energy consultancy DNV. To meet the Paris Agreement goal of limiting global warming to 1.5 degrees by 2050, hydrogen would need to reach 13%.
(The story is refiled to correct million to billion in paragraph three)
Editing by David Evans
A new paper from the University of California Berkeley reveals that privacy may be impossible in the metaverse without innovative new safeguards to protect users.
Led by graduate researcher Vivek Nair, the recently released study was conducted at the Center for Responsible Decentralized Intelligence (RDI) and involved the largest dataset of user interactions in virtual reality (VR) that has ever been analyzed for privacy risks.
What makes the results so surprising is how little data is actually needed to uniquely identify a user in the metaverse, potentially eliminating any chance of true anonymity in virtual worlds.
Simple motion data not so simplistic
As background, most researchers and policymakers who study metaverse privacy focus on the many cameras and microphones in modern VR headsets that capture detailed information about the user’s facial features, vocal qualities and eye motions, along with ambient information about the user’s home or office.
Some researchers even worry about emerging technologies like EEG sensors that can detect unique brain activity through the scalp. While these rich data streams pose serious privacy risks in the metaverse, turning them all off may not provide anonymity.
That’s because the most basic data stream needed to interact with a virtual world — simple motion data — may be all that’s required to uniquely identify a user within a large population.
And by “simple motion data,” I mean the three most basic data points tracked by virtual reality systems – one point on the user’s head and one on each hand. Researchers often refer to this as “telemetry data” and it represents the minimal dataset required to allow a user to interact naturally in a virtual environment.
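For readers unfamiliar with what such a stream contains, the sketch below shows one plausible way to represent a single frame of this telemetry. The field names and layout are illustrative assumptions rather than any particular headset vendor's format.

```python
from dataclasses import dataclass

# Illustrative sketch of the sparse "telemetry" stream described above: three
# tracked points (head, left hand, right hand), each with a 3D position and an
# orientation, sampled many times per second. Field names are invented, not
# any specific headset SDK's format.
@dataclass
class TrackedPoint:
    x: float   # position in meters
    y: float
    z: float
    qx: float  # orientation as a quaternion
    qy: float
    qz: float
    qw: float

@dataclass
class TelemetryFrame:
    timestamp: float          # seconds since the session started
    head: TrackedPoint
    left_hand: TrackedPoint
    right_hand: TrackedPoint

# A VR app streams a sequence of frames like this at roughly 60-90 Hz, so even
# two seconds of data amounts to only a few hundred numbers per tracked point.
```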
Unique identification in seconds
This brings me to the new Berkeley study, “Unique Identification of 50,000-plus Virtual Reality Users from Head and Hand Motion Data.” The research analyzed more than 2.5 million VR data recordings (fully anonymized) from more than 50,000 players of the popular Beat Saber app and found that individual users could be uniquely identified with more than 94% accuracy using only 100 seconds of motion data.
Even more surprising was that half of all users could be uniquely identified with only 2 seconds of motion data. Achieving this level of accuracy required innovative AI techniques, but again, the data used was extremely sparse — just three spatial points for each user tracked over time.
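The study's own models are more sophisticated, but the general shape of the approach can be sketched as follows: summarize each motion recording into a fixed-length feature vector and train a standard classifier to predict which user produced it. Everything below (the feature choices, the synthetic stand-in data, the classifier) is an assumption for illustration, not the Berkeley authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Generic sketch (not the Berkeley authors' actual models): summarize each
# motion recording into simple per-column statistics and train a classifier
# to predict which user produced it.
def featurize(recording: np.ndarray) -> np.ndarray:
    # recording: (frames, 21) array - 3 tracked points x 7 values per frame
    return np.concatenate([
        recording.mean(axis=0),
        recording.std(axis=0),
        np.percentile(recording, 95, axis=0),
    ])

# Toy stand-in data: 50 "users", 10 recordings each, ~100 s of frames at 90 Hz.
rng = np.random.default_rng(0)
X = np.array([featurize(rng.normal(loc=u, scale=1.0, size=(9000, 21)))
              for u in range(50) for _ in range(10)])
y = np.repeat(np.arange(50), 10)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("Training accuracy:", clf.score(X, y))
```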
In other words, any time a user puts on a mixed reality headset, grabs the two standard hand controllers and begins interacting in a virtual or augmented world, they are leaving behind a trail of digital fingerprints that can uniquely identify them. Of course, this begs the question: How do these digital fingerprints compare to actual real-world fingerprints in their ability to uniquely identify users?
If you ask people on the street, they’ll tell you that no two fingerprints in the world are the same. This may or may not be true, but honestly, it doesn’t matter. What’s important is how accurately you can identify an individual from a fingerprint that was left at a crime scene or input to a finger scanner. It turns out that fingerprints, whether lifted from a physical location or captured by the scanner on your phone, are not as uniquely identifiable as most people assume.
Let's consider the act of pressing your finger to a scanner. According to the National Institute of Standards and Technology (NIST), the desired benchmark for fingerprint scanners is unique matching with an accuracy of 1 out of 100,000 people.
That said, real-world testing by NIST and others has found that the true accuracy of most fingerprint devices may be less than 1 out of 1,500. Still, that makes it extremely unlikely that a criminal who steals your phone will be able to use their finger to gain access.
Eliminating anonymity
On the other hand, the Berkeley study suggests that when a VR user swings a virtual saber at an object flying towards them, the motion data they leave behind may be more uniquely identifiable than their actual real-world fingerprint.
This poses a very serious privacy risk, as it potentially eliminates anonymity in the metaverse. In addition, this same motion data can be used to accurately infer a number of specific personal characteristics about users, including their height, handedness and gender.
And when combined with other data commonly tracked in virtual and augmented environments, this motion-based fingerprinting method is likely to yield even more accurate identifications.
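As a toy illustration of how such characteristics can fall out of raw telemetry, the sketch below estimates height from head position and guesses handedness from which hand moves more. The frame layout and the heuristics are assumptions; real inference would use learned models.

```python
import numpy as np

# Toy illustration of attribute inference from raw telemetry. Assumed frame
# layout: columns 0-2 head xyz, 3-5 left-hand xyz, 6-8 right-hand xyz (meters).
# Real inference would be learned from data; these heuristics are only a hint
# of why the motion stream is so revealing.
def infer_attributes(frames: np.ndarray) -> dict:
    head_height = np.percentile(frames[:, 1], 95)   # standing head height
    estimated_height = head_height + 0.10           # rough offset to stature
    left_motion = np.abs(np.diff(frames[:, 3:6], axis=0)).sum()
    right_motion = np.abs(np.diff(frames[:, 6:9], axis=0)).sum()
    return {
        "estimated_height_m": round(float(estimated_height), 2),
        "likely_handedness": "right" if right_motion > left_motion else "left",
    }
```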
Motion data fundamental to the metaverse
I asked Nair to comment on my comparison above between traditional fingerprint accuracy and the use of motion data as “digital fingerprints” in virtual and augmented environments.
He described the danger this way: “Moving around in a virtual world while streaming basic motion data would be like browsing the internet while sharing your fingerprints with every website you visit. However, unlike web-browsing, which does not require anyone to share their fingerprints, the streaming of motion data is a fundamental part of how the metaverse currently works.”
To give you a sense of how insidious motion-based fingerprinting could be, consider the metaverse of the near future: A time when users routinely go shopping in virtual and augmented worlds. Whether browsing products in a virtual store or visualizing how new furniture might look in their real apartment using mixed reality eyewear, users are likely to perform common physical motions such as grabbing virtual objects off virtual shelves or taking a few steps back to get a good look at a piece of virtual furniture.
The Berkeley study suggests that these common motions could be as unique to each of us as fingerprints. If that’s the case, these “motion prints” as we might call them, would mean that casual shoppers wouldn’t be able to visit a virtual store without being uniquely identifiable.
So, how do we solve this inherent privacy problem?
One approach is to obscure the motion data before it is streamed from the user’s hardware to any external servers. Unfortunately, this means introducing noise. This could protect the privacy of users but it would also reduce the precision of dexterous physical motions, thereby compromising user performance in Beat Saber or any other application requiring physical skill. For many users, it may not be worth the tradeoff.
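As a rough illustration of that tradeoff, the sketch below adds zero-mean Gaussian noise to each tracked position before it would leave the user's device. The sigma_m parameter is the knob: larger values hide more of a user's idiosyncratic motion, but they also degrade the precision that a skill-based app depends on. This is a toy version of the general idea under my own assumptions, not the defense the Berkeley researchers are actually building.

```python
import random

def add_positional_noise(frame, sigma_m=0.02):
    """Return a copy of a telemetry frame with Gaussian noise added.

    sigma_m is the noise standard deviation in meters; 0.02 m is an arbitrary
    illustrative value, not a recommended or studied privacy setting.
    """
    def jitter(point):
        return tuple(coord + random.gauss(0.0, sigma_m) for coord in point)

    return {
        "timestamp_s": frame["timestamp_s"],  # timing is left untouched here
        "head": jitter(frame["head"]),
        "left_hand": jitter(frame["left_hand"]),
        "right_hand": jitter(frame["right_hand"]),
    }

frame = {
    "timestamp_s": 0.0,
    "head": (0.0, 1.7, 0.0),
    "left_hand": (-0.3, 1.2, 0.2),
    "right_hand": (0.3, 1.2, 0.2),
}
print(add_positional_noise(frame))
```

The uncomfortable part is that noise strong enough to defeat identification may also be strong enough for players to feel.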
An alternate approach is to enact sensible regulation that would prevent metaverse platforms from storing and analyzing human motion data over time. Such regulation would help protect the public, but it would be difficult to enforce and could face pushback from the industry.
For these reasons, researchers at Berkeley are exploring sophisticated defensive techniques that they hope will obscure the unique characteristics of physical motions without degrading dexterity in virtual and augmented worlds.
As an outspoken advocate for consumer protections in the metaverse, I strongly encourage the field to explore all approaches in parallel, including both technical and policy solutions.
Protecting personal privacy is not just important for users, it’s important for the industry at large. After all, if users don’t feel safe in the metaverse, they may be reluctant to make virtual and augmented environments a significant part of their digital lives.
Dr. Louis Rosenberg is CEO of Unanimous AI, chief scientist of the Responsible Metaverse Alliance and global technology advisor to XRSI. Rosenberg is an advisor to the team that conducted the Berkeley study above. | Emerging Technologies
U.S. President Joe Biden will host British Prime Minister Rishi Sunak for talks at the White House Thursday that are expected to cover economic ties and supporting Ukraine in its defense against a Russian invasion.
The visit by Sunak is his first to the United States since becoming prime minister in October, but he and Biden have already met three times this year.
"The two leaders will review a range of global issues, including our economic partnership, our shared support for Ukraine as it defends itself against Russia's brutal war of aggression, as well as further action to accelerate the clean energy transition," White House press secretary Karine Jean-Pierre told reporters Wednesday. "The president and the prime minister will also discuss the joint U.S.-U.K. leadership on critical and emerging technologies as well as our work to strengthen our economic security. They will also review developments in Northern Ireland as part of their shared commitment to preserving the gains of the Belfast/Good Friday Agreement."
Ahead of Thursday's talks, Sunak said he would push for closer economic relations in the same spirit as the countries' defense and security cooperation.
"Just as interoperability between our militaries has given us a battlefield advantage over our adversaries, greater economic interoperability will give us a crucial edge in the decades ahead," Sunak said.
British officials said Sunak also wanted to discuss ways to protect global supply chains, particularly against individual countries that may corner and manipulate markets for certain sectors.
Another topic on the agenda for Sunak is the regulation of the burgeoning field of artificial intelligence.
Before meeting with Biden, Sunak held talks with congressional leaders and took part in a wreath-laying at Arlington National Cemetery. He also appeared at the Washington Nationals baseball game where the team was honoring U.S.-U.K. Friendship Day.
White House correspondent Anita Powell contributed to this report. Some information for this report came from The Associated Press and Reuters. | Emerging Technologies
While China may have recently closed the military capability gap, Sen. John Cornyn (R-TX) has introduced legislative language that could spoil the Chinese Communist Party’s predatory military plans in one fell swoop. Building off the important work of former President Donald Trump, who imposed 25% tariffs on Chinese-made microchips, the senator’s amendment to the National Defense Authorization Act (NDAA) would restrict the U.S. from purchasing microchips from companies that work with the CCP.
Microchips are what make much of America’s modern warfare equipment hum. Yet the U.S. remains dependent on China for its supply. That’s a problem when companies with ties to the CCP make them and sell them to contractors and suppliers working with the federal government. And it’s especially a problem when the bulk of China’s strategy to defeat the U.S. military hinges upon these chips.
According to a new report released in October by the Special Competitive Studies Project, China is seeking to use advanced technologies to apply military force “with the aim of eroding or even leapfrogging the United States’ military strengths.” China wants to become the “first movers” in “intelligentized warfare” — warfare that uses emerging technologies such as AI, 5G networks, and quantum computing to beat military rivals — to supersede the U.S. as the world’s leading superpower. And it has developed a Made in China 2025 road map to make becoming a global leader in artificial intelligence (AI), 5G wireless, quantum computing, and other related industries a reality in dangerously short order.
Guess what technology is necessary to operationalize this intelligentized warfare? You guessed it: microchips. So why would Congress allow the federal government to purchase ones that have ties to the same CCP trying to use them to destroy America and its interests? That doesn’t sound safe — not by a long shot. It raises many issues, including but not limited to the prospect of future supply chain problems and backdoors that could lead to cyberattacks and espionage.
While both sides of the aisle agree on the need to reduce dependency on these Chinese-made chips, they have disagreed on the best approach. Some opposed the CHIPS Act, which aimed to reduce dependency on this foreign technology by bolstering U.S. production, because they found its $250 billion price tag too expensive and too favorable to large, wealthy corporations.
Sen. Cornyn’s NDAA proposal eliminates all those concerns. It doesn’t spend a ton of money we don’t have; it doesn’t even ban all microchips — it just gets to the root of the problem by stopping microchips built by companies with known ties to the CCP from coming the federal government’s way.
After hearing the chilling speeches and proclamations that came out of the CCP’s 20th Party Congress last month, including the party’s plans to prioritize tech and innovation for strategic purposes, every legislator should agree that protecting America from potential Chinese technology threats should be a top national priority. Passing the Cornyn amendment to the NDAA would be a great place to start. Here’s hoping rational heads prevail. Our national security depends on it.
Jon Schweppe is the director of policy and government affairs for the American Principles Project. Follow him on Twitter @JonSchweppe. | Emerging Technologies
The Pentagon is asking Congress for nearly $2 billion for artificial intelligence in its budget proposal for the next fiscal year, which one expert said will help the U.S. keep pace with China in "the arms race of our generation."
The proposed FY2024 budget asked for $1.8 billion for AI as part of the Pentagon's Research, Development, Test & Evaluation (RDT&E) budget. The FY 2023 budget request didn't attach a dollar figure to AI, while the FY 2022 budget sought $874 million for AI.
Parham Eftekhari, Executive Vice President of CyberRisk Alliance, said the increase would put the U.S. somewhere in the neighborhood of China, which reportedly already spends about $1.6 billion on military AI development.
"It’s difficult to gauge if the U.S. is ahead or behind AI spending because public estimates on Chinese spending are unverified, but we should assume China and the U.S. are in a race to achieve superiority in the use of AI for national security purposes," Eftekhari said.
He said the higher spending level would help the military better understand both the threats and opportunities presented by the rapidly advancing sector. "AI is the arms race of our generation," he added.
Sen. Mike Rounds, R-S.D., who helps lead the Senate AI Caucus, said he "welcomes" the additional funding for AI and said spending in this area would need to stay higher than the rate of inflation to keep pace with China.
"Given the urgency of developing the Department’s AI capabilities, I believe AI funding should be above the rate of inflation, something that should apply to defense spending in general," Rounds told Fox News Digital.
The office of Armed Services Committee member Sen. Gary Peters, D-Mich., who chairs the Senate Homeland Security Committee, told Fox News Digital that he "believes artificial intelligence will play a critical role in shaping the future of warfare and that the U.S. must be prepared for the application of these technologies on the battlefield to maintain military readiness and defend our troops."
The Defense Department described its higher funding request as a way to make sure it can take advantages of faster decision-making and other capabilities that AI promises.
"Building enduring advantages means that the Department must also continue to innovate and modernize, enabling technical breakthroughs and integrating emerging technologies to strengthen national security and enhance defense capabilities," the Defense Department’s budget request said.
"Government investment in AI will center around mission dominance and giving warfighters a competitive advantage on the battlefield. Some applications will be to create speed and efficiency around processes and decision-making. Others will be to improve predictive capabilities and accuracy," Eftekhari said.
Institute for Critical Infrastructure Technology Executive Director Joyce Hunter told Fox News Digital that RDT&E funding is typically used to support development and testing of new technologies that will be the backbone of new weapons systems and military equipment. She said AI and machine learning will also likely be used to "enhance situational awareness, decision-making and mission effectiveness."
"AI enabled technologies can assist in identifying and responding to emerging threats, detecting and mitigating cyberattacks, and optimizing logistics and resource allocation," she said. Hunter noted, however, that the military will also have to find ways around the potential challenges posed by AI, such as privacy, built-in bias and the responsible use of autonomous weapons.
Introducing the budget request at a recent press conference, Deputy Pentagon Secretary Kathleen Hicks called AI a "key technology" in the military’s continued development.
Among the military’s intended uses for AI is to bolster reconnaissance efforts, according to the text of the Pentagon’s proposal. It specified Special Operations Forces (SOF) funding that "invests in artificial intelligence to increase the speed of processing, exploitation, and dissemination" of Intelligence, Surveillance, and Reconnaissance.
The larger AI budget request comes against the backdrop of rising global worries about how AI will be deployed in the battlefield. More than 60 countries, including the U.S. and China, signed a non-binding resolution calling for responsible use and development of AI for military purposes this year.
AI has been seen in action for the first time in military history during Russia’s invasion of Ukraine. Kyiv’s forces have used AI facial recognition technology to identify Russian troops, and are using U.S.-designed Switchblade drones, which have some autonomous capability, can receive target data from other drones and use feature-recognition technology to complete their missions.
Last year, the National Defense Authorization Act mandated that the Pentagon produce "a five-year roadmap and implementation plan" for how it plans to incorporate AI on the cyber warfront.
It also gave an additional $50 million for U.S. Cyber Command to use toward AI development. | Emerging Technologies |
Government Plans Incentive Scheme For Electrolyzers, Green Hydrogen Production: Official
The incentives will be provided under a scheme and the draft of the same has been prepared, Bhalla said.
The government has planned over Rs 17,000 crore in incentives to promote the manufacturing of electrolyzers and the production of green hydrogen in the country, MNRE Secretary Bhupinder Singh Bhalla said on Wednesday.
The draft of the incentive scheme for electrolyzer manufacturing and a part of the incentive scheme for the production of green hydrogen have been finalized and will be rolled out soon.
The move will result in demand creation for the clean energy source, the official said.
The government is already working with the respective ministries to promote green hydrogen. The Ministry is also working on the provision of incentives for electrolyzer manufacturing and for the production of green hydrogen, Bhalla said.
The MNRE Secretary said that "the draft of the incentive scheme for electrolyzer manufacturing and part of the incentive scheme for the production of green hydrogen have been finalized and will be rolled out soon. The total incentives being offered under the Hydrogen Mission are more than Rs 17,000 crore until the year 2030, which will be rolled out in tranches, so that the government will learn from the first tranche and evolve the second one."
The official also announced that the International Conference on Green Hydrogen will be held from July 5-7, 2023 in New Delhi.
Ajay Yadav, Joint Secretary at MNRE, said around 25 sessions will be held at the three-day conference next month in Vigyan Bhawan.
The event will bring together stakeholders to discuss emerging technologies across the entire green hydrogen value chain. Around 1,500 delegates from India and abroad, including from Japan, the US and the EU, will also take part in the first conference on hydrogen in India. | Emerging Technologies
Scientists from Nanyang Technological University, Singapore (NTU Singapore) have created a process that can upcycle most plastics into chemical ingredients useful for energy storage, using light-emitting diodes (LEDs) and a commercially available catalyst, all at room temperature.
The new process is very energy-efficient and can be easily powered by renewable energy in the future, unlike other heat-driven recycling processes like pyrolysis.
This innovation overcomes the current challenges in recycling plastics such as polypropylene (PP), polyethylene (PE) and polystyrene (PS), which are typically incinerated or discarded in landfills. Globally, only nine per cent of plastics are recycled, and plastic pollution is growing at an alarming rate.
The biggest challenge of recycling these plastics is their inert carbon-carbon bonds, which are very stable and thus require a significant amount of energy to break. This bond is also the reason why these plastics are resistant to many chemicals and have relatively high melting points.
Currently, the only commercial way to recycle such plastics is through pyrolysis, which has high energy costs and generates large amounts of greenhouse gas emissions, making it cost-prohibitive given the low value of the resulting pyrolysis oil.
Developed by Associate Professor Soo Han Sen, an expert in photocatalysis from NTU's School of Chemistry, Chemical Engineering, and Biotechnology, the new method uses LEDs to activate and break down the inert carbon-carbon bonds in plastics with the help of a commercially available vanadium catalyst.
Published this week in the journal Chem, the NTU method can upcycle a range of plastics, including PP, PE and PS. These plastics, together, account for over 75 per cent of global plastic waste.
In developing a green solution to the plastic waste problem, the team wanted to ensure that minimal extra carbon emissions are generated through the recycling of plastics, which are long chains of molecules containing carbon atoms.
Inventor Assoc Prof Soo said: "Our breakthrough not only provides a potential answer to the growing plastic waste problem, but it also reuses the carbon trapped in these plastics instead of releasing it into the atmosphere as greenhouse gases through incineration."
How the plastics are broken down
First, the plastics are dissolved or dispersed in the organic solvent known as dichloromethane, which is used to disperse the polymer chains so that they will be more accessible to the photocatalyst. The solution is then mixed with the catalyst and flowed through a series of transparent tubes where the LED light is shone on it.
The light provides the initial energy to break the carbon-carbon bonds in a two-step process with the help of the vanadium catalyst. The carbon-hydrogen bonds in the plastics are oxidised -- making the bonds less stable and more reactive -- after which the carbon-carbon bonds are broken down.
After separation from the solution, the resulting end products are chemical ingredients such as formic acid and benzoic acid, which can be used to make other chemicals employed in fuel cells and liquid organic hydrogen carriers (LOHCs). LOHCs are now being explored by the energy sector as they play critical roles in clean energy development, given their ability to store and transport hydrogen gas more safely.
Unlike current and other emerging technologies for recycling plastics, such as pyrolysis, which uses a high-temperature process to melt and degrade the plastics into low-quality fuels or into carbon nanotubes and hydrogen, the new LED-driven method requires much less energy.
Prof Soo adds that their method is unique in that it can use sunlight or LEDs powered with electricity from renewable sources such as solar, wind or geothermal, to completely process and upcycle such a wide range of plastics. This can allow for clean and energy-efficient management of plastics in a circular economy and increase the recycling rate of plastics.
The process may also help Singapore to reduce the amount of plastic waste from being incinerated or landfilled, helping the country to meet its Zero-Waste Masterplan, where it aims to increase the overall recycling rate to 70 per cent by 2030 and reduce waste going to the Semakau landfill, estimated to run out of space by 2035.
Singapore generates around 1 million tonnes of plastic waste annually and only six per cent of Singapore's plastic waste is recycled.
This study is part of a bigger project, entitled SPRUCE: Sustainable Plastics RepUrposing for a Circular Economy, which also involves Professor Xin (Simba) Chang, Associate Dean (Research) from the Nanyang Business School and Associate Professor Md Saidul Islam from the School of Social Sciences.
The interdisciplinary team estimates that if Singapore can upcycle 80 per cent of its plastics, it could lead to at least a 2.1 million tonnes reduction in carbon dioxide emissions -- about four per cent of the nation's total greenhouse gas emissions.
In addition, when plastics are upcycled into chemical feedstock, it reduces the need by the chemical industry to combust fossil fuels to produce chemical feedstock, further cutting down greenhouse gas emissions.
Based on the estimations by Prof Chang and other team members, the economic benefit of reducing carbon dioxide emissions is estimated to be S$41.40 million per year, while the estimated cost savings from avoiding landfill use are about S$41.35 million per year in Singapore. Plastic reuse and recycling are projected to generate a profit-pool growth of as much as US$60 billion for the chemical industry globally.
Prof Chang, an expert in corporate finance, added, "Given that Singapore's chemical industry accounts for about one-third of the manufacturing output in 2015, the integration of plastic upcycling technology into the industry has the potential to yield considerable positive economic and environmental impact."
Sociology expert Assoc Prof Islam said: "This innovative approach -- by transforming plastic waste into valuable resources like formic acid -- not only reduces the burden of plastic pollution but also addresses the growing demand for sustainable chemicals. This contributes to a cleaner environment, enhances public health, and creates new employment opportunities, especially in research, development, and production sectors, thereby fostering economic growth with a shift towards circular economies."
The NTU team has filed a patent for their photocatalytic process, which has been designed with industrial scalability in mind, through the University's innovation and enterprise company NTUitive. The team is now seeking partners to further commercialise the technology, which may contribute toward helping Singapore achieve its 2050 Net Zero Emissions target.
Their innovation exemplifies NTU's unwavering commitment to developing sustainable solutions to address pressing global challenges such as climate change. In its NTU 2025 Strategic Plan, the University also outlined its Sustainability Manifesto that elaborates on how NTU aims to go carbon neutral by 2035.
NTU also seeks to nurture and support novel research solutions and to speed up the commercialisation process through its recently launched Innovation and Entrepreneurship initiative.
SPRUCE is supported by NTU and the Alliance to End Plastic Waste. This project is also partly supported by the National Research Foundation, Singapore (NRF) under its Competitive Research Programme, as well as an A*STAR Advanced Manufacturing and Engineering Individual Research Grant and a Ministry of Education Tier 1 grant. | Emerging Technologies
Microsoft, an early backer of emerging technologies that take carbon dioxide emissions out of the atmosphere, has agreed to purchase carbon removal credits from Los Angeles-based startup CarbonCapture.
CarbonCapture has a massive facility called a direct air capture (DAC) plant in the works in Wyoming. Named Project Bison, the facility is projected to start running sometime in the latter half of 2024. The startup has developed modular technology that draws in CO2 from the ambient air so it can be stored underground, preventing the greenhouse gas from contributing to climate change.
Microsoft has a goal of becoming “carbon negative” by 2030, meaning it would remove more CO2 pollution from the atmosphere than it generates through the use of fossil fuels. By 2050, Microsoft also plans to remove the equivalent of all its historical emissions since the company was founded. That’s a tall order, to say the least, considering carbon removal technology doesn’t yet exist at the scale needed for Microsoft to meet its climate goals.
“This agreement with CarbonCapture helps us move toward our carbon negative goal, while also helping to catalyze the growth of the direct air capture industry as a whole,” Microsoft’s carbon removal portfolio director Phillip Goodman said in the announcement.
Microsoft says its priority is to reduce how much pollution it creates in the first place, minimizing how much CO2 it would need to draw down from the atmosphere. But after falling for a few years, the company’s greenhouse gas emissions started to climb again in fiscal year 2021, according to its latest sustainability report. Microsoft was responsible for roughly 14 million metric tons of CO2 emissions that year, about as much as 35 gas-fired power plants might produce in a year.
The tech giant’s new agreement with CarbonCapture will only be able to address a fraction of those emissions. CarbonCapture expects to be able to capture and store around 10,000 metric tons of CO2 annually after deploying its first modules in Wyoming next year.
The modules look like vented shipping containers stacked on top of each other. The equipment can filter out about 75 percent of the CO2 in the air that passes through them. This generates concentrated streams of CO2 that would then need to be piped some 12,000 feet underground into saline aquifers. Another startup based in Dallas, Frontier Carbon Solutions, is partnering with CarbonCapture to permanently store the CO2 on-site.
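For a rough sense of scale, here is a back-of-envelope sketch of how much air the modules would need to process to hit that 10,000 metric ton figure at the stated 75 percent capture rate. The ambient CO2 concentration, air density, and molar masses used below are general textbook values assumed for illustration; they do not come from CarbonCapture or Microsoft.

```python
# Back-of-envelope estimate of air throughput for Project Bison's first phase.
# Assumed figures (not from the article): ambient CO2 ~420 ppm by volume,
# air density ~1.2 kg/m^3, molar masses 44.01 (CO2) and 28.97 (air) g/mol.

TARGET_TONNES_PER_YEAR = 10_000   # stated first-phase capacity
CAPTURE_FRACTION = 0.75           # stated share of CO2 filtered from passing air

CO2_PPM_BY_VOLUME = 420e-6        # assumption
AIR_DENSITY_KG_M3 = 1.2           # assumption
MOLAR_MASS_CO2 = 44.01            # g/mol, assumption
MOLAR_MASS_AIR = 28.97            # g/mol, assumption

# Mass of CO2 contained in one cubic meter of ambient air.
co2_mass_fraction = CO2_PPM_BY_VOLUME * (MOLAR_MASS_CO2 / MOLAR_MASS_AIR)
co2_kg_per_m3 = AIR_DENSITY_KG_M3 * co2_mass_fraction

# Cubic meters of air needed per year if 75% of the CO2 passing through is captured.
target_kg = TARGET_TONNES_PER_YEAR * 1_000
air_m3_per_year = target_kg / (co2_kg_per_m3 * CAPTURE_FRACTION)

print(f"CO2 in ambient air: ~{co2_kg_per_m3 * 1000:.2f} g per m^3")
print(f"Air to process: ~{air_m3_per_year:.2e} m^3 per year")
print(f"That is roughly {air_m3_per_year / (365 * 24 * 3600):.0f} m^3 of air every second")
```

Under those assumptions, the modules would have to move on the order of 550 cubic meters of air every second, which helps explain why direct air capture remains expensive and hard to scale.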
“This is a big deal for us,” CarbonCapture CEO and CTO Adrian Corless tells The Verge. Its purchase agreement with Microsoft is larger than the sum of the startup’s deals with other, smaller clients put together, according to Corless. “This is just an important, you know, validating step for our business,” he says.
Neither company is divulging specific details yet when it comes to how much carbon dioxide Microsoft wants to remove or how much that will cost. Microsoft has also purchased carbon removal credits from Swiss company Climeworks for an undisclosed amount.
By 2030, CarbonCapture plans to be able to remove 5 million metric tons of carbon dioxide a year at its Wyoming facility in Sweetwater County. That alone is a big endeavor; the global capacity for carbon removal today is still just 0.01 million metric tons of CO2 annually. Cost has so far been a big limiting factor for the industry; the price of captured CO2 can be upwards of $600 per metric ton. | Emerging Technologies