article_text
stringlengths 294
32.8k
⌀ | topic
stringlengths 3
42
|
---|---|
Hot takes on artificial intelligence are everywhere: Depending on where you look, generative AI will either kill us, take our jobs, liberate us from drudgery, or spur tremendous innovation. (Either way, calls for regulation are here.) At The Markup we like to take a measured approach that often, well, includes measuring actual consequences. So I’m thrilled to share this Q&A with associate professor of computer science at University of California, Riverside, Shaolei Ren, who together with his team—Riverside Ph.D. candidates Pengfei Li and Jianyi Yang, and Mohammad A. Islam, an associate professor of computer science at University of Texas at Arlington—recently published a paper quantifying the secret water footprint of AI. While the carbon footprint of emerging technologies has gotten some attention, to truly build toward sustainability, water has to be part of the equation too.
Also, hello! I’m Nabiha Syed, the CEO of The Markup, and I happen to be a person who believes technological advancement can coexist with a healthy planet—if we make that a priority. Keep reading to learn how much water a single ChatGPT conversation might “drink” and tangible steps we can take to reduce the water footprint of AI.
(This Q&A has been edited for brevity and clarity.)
Syed: With very good reason, we’re starting to see more scrutiny of the carbon footprint of various technologies, including AI models like GPT‑3 and GPT‑4 as well as bitcoin mining. But your research focuses on something receiving less attention: the secret water footprint of AI technology. Tell us about your findings.
Ren: Water footprint has been staying under the radar for various reasons, including the big misperception that freshwater is an “infinite” resource and the relatively lower price tag of water. Many AI model developers are not even aware of their water footprint. But this doesn’t mean water footprint is not important, especially in drought regions like California.
700,000
Number of liters of clean freshwater to train GPT-3 in Microsoft’s U.S. data centers, not including electricity generation
Together with my students and my collaborator at UT Arlington, I did some research on AI’s water footprint using state-of-the-art estimation methodology. We find that large-scale AI models are indeed big water consumers. For example, training GPT‑3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough to produce 370 BMW cars or 320 Tesla electric vehicles), and the water consumption would have been tripled if training were done in Microsoft’s data centers in Asia. These numbers do not include the off-site water footprint associated with electricity generation.
For inference (i.e., conversation with ChatGPT), our estimate shows that ChatGPT needs a 500-ml bottle of water for a short conversation of roughly 20 to 50 questions and answers, depending on when and where the model is deployed. Given ChatGPT’s huge user base, the total water footprint for inference can be enormous.
Then, we further studied the unique spatial-temporal diversities of AI models’ runtime water efficiency—the water efficiency changes over time and over locations. This implies that there’s potential to reduce AI’s water footprint by dynamically scheduling AI workloads and tasks at certain times and in certain locations, the way we reduce our electricity bills by utilizing the low electricity prices during the night to charge our electric vehicles.
Syed: How does this water footprint compare to something like, say, beef?
Ren: We might see from some websites that beef and jeans have a big water footprint, but their water footprint is for the entire life cycle and includes a large portion of nonpotable water. For example, the water footprint of jeans starts from cotton growth. In our study, we only consider the operational water footprint (i.e., water consumption associated with training and running the AI models), whereas the embodied water footprint (e.g., water footprint associated with AI server manufacturing and transportation, including chipmaking) is excluded. If we factor in the embodied water footprint for AI models, my gut feeling is that the overall water footprint would be easily increased by 10 times or even more.
Syed: I’m particularly curious about how carbon reduction and water conservation might be in tension with one another. What do you mean when you ask whether we should “follow or unfollow the sun”?
Ren: Water efficiency mostly depends on the outside temperature as well as energy fuel mixes for electricity generation. Carbon-efficient hours/locations do not mean water-efficient hours/locations, and sometimes they’re even opposite to each other.
For example, in California, there’s a high solar energy production around noon, and this leads to the most carbon-efficient hours, but around noon, the outside temperature is also high, and hence the water efficiency is the worst. As a result, if we only consider carbon footprint reduction (say, by scheduling more AI training around noon), we’ll likely end up with higher water consumption, which is not truly sustainable for AI.
On the other hand, if we only reduce water footprint (say, by scheduling AI training at midnight), we may increase the carbon footprint due to less solar energy available.
Syed: It’s clear that tech giants like Microsoft, Google, and Amazon are betting big on the future of AI, but are we seeing them make environmental considerations a priority in their development?
Also, legislators have recently begun to consider the impact of data centers’ water usage on the local environment. For example, in Virginia, where Loudon County is known as the “data center capital of the world,” SB 1078, proposed early this year, would require “a site assessment … to examine the effect of the data center on water usage and carbon emissions as well as any impacts on agricultural resources.”
Most of the industry’s efforts so far have been focused on improving the on-site water efficiency from the “engineering” perspective, e.g., improving the data center’s cooling tower efficiency and processing recycled water instead of tapping into the local potable water resources. Nonetheless, the vast majority of data centers still use potable water and cooling towers. For example, even tech giants such as Google heavily rely on cooling towers and consume billions of liters of potable water each year. Such huge water consumption has produced a stress on the local water infrastructure; Google’s data center used more than a quarter of all the water in The Dalles, Ore. Moreover, many data centers are also located in drought-prone areas such as California.
Our study shows that “when” and “where” to train a large AI model can significantly affect the water footprint. The underlying reason is the spatial-temporal diversity of both on-site and off-site water usage effectiveness (WUE)—on-site WUE changes due to variations of outside weather conditions, and off-site WUE changes due to variations of the grid’s energy fuel mixes to meet time-varying demands. In fact, WUE varies at a much faster timescale than monthly or seasonably. Therefore, by exploiting spatial-temporal diversity of WUE, we can dynamically schedule AI model training and inference to cut the water footprint.
For example, if we train a small AI model, we can schedule the training task at midnight and/or in a data center location with better water efficiency. Likewise, some water-conscious users may prefer to use the inference services of AI models during water-efficient hours and/or in water-efficient data centers, which can contribute to the reduction of AI models’ water footprint for inference.
Such demand-side water management complements the existing engineering-based on-site water-saving approaches that focus on the supply side. Also, our approach is software-based and hence can be used together with any cooling systems for free without particular requirements on the climate conditions or new cooling system installations.
Syed: You propose transparency as a helpful next step. What questions could greater transparency help us answer?
Ren: By having more transparency, we’d be able to know exactly when and where we have the most water-efficient AI models.
Transparency also makes it possible to measure, benchmark, and improve the AI models’ water footprint, which can be of great value to the research community. Currently, some AI conferences have requested that authors declare their AI models’ carbon footprint in their papers; we believe that with transparency and awareness, authors can also declare their AI models’ water footprint as part of the environmental impact.
With such information, AI model developers can better schedule their AI model training and also exploit the spatial-temporal diversity to better route the users’ inference requests to save water with little to zero degradation in other performance metrics.
Additionally, transparency can also let users know their water footprint at runtime and better their water footprint (say, they might want to defer some nonurgent inference requests to water-efficient hours if possible). Apple has integrated clean energy scheduling into its iPhone products by selecting low-carbon hours for charging, and we hope that water-aware AI training and inference can also be turned into reality in the future.
I hope you found this as thought-provoking as I did! This one hit especially close to home: My dad and sister are both water resources engineers, and I grew up in drought-riddled Southern California. We’re in a technological boom, for sure—but innovation should serve our public good, not obliterate it.
Thanks for reading!
Always,
Nabiha Syed
Chief Executive Officer
The Markup | Emerging Technologies |
“While all eyes are on AI right now, CIOs and CTOs must also turn their attention to other emerging technologies with transformational potential,” said Melissa Davis, VP Analyst at Gartner. “This includes technologies that are enhancing developer experience, driving innovation through the pervasive cloud and delivering human-centric security and privacy.”
“As the technologies in this Hype Cycle are still at an early stage, there is significant uncertainty about how they will evolve,” added Davis. “Such embryonic technologies present greater risks for deployment, but potentially greater benefits for early adopters.”
Four Themes of Emerging Technology Trends
Emergent AI: In addition to generative AI, several other emerging AI techniques offer immense potential for enhancing digital customer experiences, making better business decisions and building sustainable competitive differentiation. These technologies include AI simulation, causal AI, federated machine learning, graph data science, neuro-symbolic AI and reinforcement learning.
Developer experience (DevX): DevX refers to all aspects of interactions between developers and the tools, platforms, processes and people they work with to develop and deliver software products and services. Enhancing DevX is critical for most enterprises’ digital initiative success. It is also vital for attracting and retaining top engineering talent, keeping team morale high and ensuring that work is motivating and rewarding.
Key technologies that are enhancing DevX include AI-augmented software engineering, API-centric SaaS, GitOps, internal developer portals, open-source program office and value stream management platforms.
Pervasive cloud: Over the next 10 years, cloud computing will evolve from a technology innovation platform to become pervasive and an essential driver of business innovation. To enable this pervasive adoption, cloud computing is becoming more distributed and will be focused on vertical industries. Maximizing value from cloud investments will require automated operational scaling, access to cloud-native platform tools and adequate governance.
Key technologies enabling the pervasive cloud include augmented FinOps, cloud development environments, cloud sustainability, cloud-native, cloud-out to edge, industry cloud platforms and WebAssembly (Wasm).
Human-centric security and privacy: Humans remain the chief cause of security incidents and data breaches. Organizations can become resilient by implementing a human-centric security and privacy program, which weaves a security and privacy fabric into the organization’s digital design. Numerous emerging technologies are enabling enterprises to create a culture of mutual trust and awareness of shared risks in decision making between many teams.
Key technologies supporting the expansion of human-centric security and privacy include AI TRISM, cybersecurity mesh architecture, generative cybersecurity AI, homomorphic encryption and postquantum cryptography.
Gartner clients can read more in “Hype Cycle for Emerging Technologies, 2023.”
Gartner IT Symposium/Xpo
Additional analysis on emerging technologies will be presented during Gartner IT Symposium/Xpo,
the world's most important conferences for CIOs and other IT executives. Gartner analysts and attendees will explore the technology, insights and trends shaping the future of IT and business, including how to unleash the possibility of generative AI, business transformation, cybersecurity, customer experience, data analytics, executive leadership and more. Follow news and updates from the conferences on X using #GartnerSYM.
Upcoming dates and locations for Gartner IT Symposium/Xpo™ include:
September 11-13 | Gold Coast, Australia
October 16-19 | Orlando, FL
November 6-9 | Barcelona, Spain
November 13-15 | Tokyo, Japan
November 28-30 | Kochi, India
About Gartner for Information Technology Executives
Gartner for Information Technology Executives provides actionable, objective insight to CIOs and IT leaders to help them drive their organizations through digital transformation and lead business growth. Additional information is available at www.gartner.com/en/information-technology.
Follow news and updates from Gartner for IT Executives on X and LinkedIn using #GartnerIT. Visit the IT Newsroom for more information and insights. | Emerging Technologies |
Replicator Is DoD’s Big Play To Build Thousands Of Autonomous Weapons In Just Two Years
Replicator could significantly change how the U.S. military fights and is aimed squarely at overcoming China’s quantitative advantage.
The Pentagon has unveiled its latest strategy to counter China’s rapid military progress, with a program named Replicator that intends to focus on fielding “thousands” of attritable autonomous platforms that will be characterized by being “small, smart, cheap, and many.” The initiative seeks to harness U.S. innovation as a way to counter the mass of China’s armed forces, while also, once again, putting the onus on uncrewed systems that will benefit from AI algorithms.
The Replicator program was announced today by U.S. Deputy Defense Secretary Kathleen Hicks, speaking at the National Defense Industrial Association’s Emerging Technologies conference in Washington.
As to the threat that Replicator is meant to overcome, Hicks was explicit about what she described as “the PRC’s biggest advantage, which is mass: more ships, more missiles, more people.” Hicks also identified the particular challenge posed by China’s rapidly diversifying anti-access/area-denial capabilities.
She added that there is a historical precedent for the kind of approach that Replicator espouses: “Even when we mobilize our economy and manufacturing base, rarely have America’s war-winning strategies relied solely on matching an adversary, ship for ship or shot for shot,” she said, before adding a barbed comment that seemed to refer to Russia’s all-out invasion of Ukraine: “After all, we don’t use our people as cannon fodder like some competitors do.”
In contrast, Replicator is intended to continue and build upon the U.S. ability to “outmatch adversaries by out-thinking, out-strategizing, and outmaneuvering them; we augment manufacturing and mobilization with our real comparative advantage, which is the innovation and spirit of our people.”
So, what will Replicator consist of once it becomes a reality?
Here, Hicks provided few details, other than to explain that the program seeks to “master the technology of tomorrow,” namely, “attritable autonomous systems in all domains.” The advantages of these kinds of platforms include that they are “less expensive, put fewer people in the line of fire, and [they] can be changed, updated, or improved with substantially shorter lead times. We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”
Attritable, in this context, is normally taken to refer to a platform that’s inexpensive enough to be willing to lose on high-risk missions, while being capable enough to be relevant for those missions. More recently, however, the Air Force began to use the term “affordable mass,” on the basis that attritable suggests a greater willingness to actually lose these systems, which might not necessarily be the case in an operational scenario. There are other definitions of attritable, too, which you can read more about here. In the case of Replicator, it’s really not yet possible to get an ideas of what these platforms might cost, but, clearly, affordability, rapid iterative development cycles, and the possibility of mass production are all considerations at this stage.
In terms of autonomous systems, Hicks said that Replicator will be “developed and fielded in line with our responsible and ethical approach to AI and autonomous systems, where the DoD has been a world leader for over a decade.”
As we have explored in the past, the U.S. military has been working on developing autonomous capabilities for decades now publicly, and there has certainly been significant work done in the classified realm, too.
Hicks’ reference to a “responsible and ethical approach to AI” suggests that Replicator may still include humans ‘in the loop,’ especially when it comes to certain sensitive tasks, above all, decisions about whether or not to employ lethal force. In this respect, it’s widely assumed that China, in particular, takes a somewhat different approach, something that Hicks apparently referred to, when she described “another comparative advantage we have over the PRC,” namely, that “these systems will empower our warfighters — not overpower or undercut their abilities.”
Hicks brought up the example of the war in Ukraine to show how “emerging tech developed by commercial and non-traditional companies” can be “decisive in defending against modern military aggression.” Specifically, she pointed to Starlink satellite internet constellation, Switchblade loitering munition, and the use of commercial satellite imagery to influence the conflict.
The kinds of commercial and rapidly developed drones that Ukraine has used to great effect for intelligence, reconnaissance, and surveillance, as well as targeting and attack, may provide one pointer to the sorts of systems that Replicator may yield, but the program is altogether much wider.
While the development of attritable and autonomous systems has often frequently been in the air warfare domain, Hicks was keen to point out that the same concepts have already been subject to Pentagon investment through all the military services, the Defense Innovation Unit, the Strategic Capabilities Office, and at the level of the different combatant commands themselves.
The development of attritable and autonomous systems already spans multiple domains, from “self-piloted ships to uncrewed aircraft and more,” and the same will be and the same will be the case for Replicator.
As well as bringing down costs, Hicks observed that the attritable concept also offers the significant benefit of allowing systems to be “produced closer to the tactical edge.” Such systems can be brought into battle faster than traditional defense technologies and once fielded, they can be used in more unorthodox ways, including outside the normal mission command chains, meaning that they can “empower the lowest possible echelons to innovate and succeed in the battle.”
Hicks also brought up another interesting function of Replicator’s attritable and autonomous systems, namely that they should have the ability to “serve as resilient distributed systems even if bandwidth is limited, intermittent, degraded, or denied.”
Perhaps the most striking aspect of Replicator is the speed and size envisaged, with Hicks outlining the goal of fielding attritable and autonomous systems “at a scale of multiple thousands, in multiple domains, within the next 18 to 24 months.” She admitted that this is “easier said than done,” and it’s one that will apparently require an altogether new approach to harnessing industry, almost certainly including non-traditional companies, for the benefit of the Pentagon.
In the past, the Air Force, in particular, has looked to so-called ‘digital engineering’ to help rapidly develop new aircraft that can be brought into production on an interactive basis. More recently, there have been suggestions that the concept isn’t likely to be so revolutionary in reality, with even the Air Force’s own boss, Frank Kendall, concluding that the digital engineering processes has been “over-hyped.” For Replicator, the Pentagon may have to look to other routes for rapidly developing and fielding new platforms.
Exactly what the kinds of systems that Replicator should produce will look like, and the particular missions they undertake, remains very much speculative at this point. However, Hicks emphasized the fact that these attritable and autonomous systems are not simply expected to supersede current systems overnight, but instead herald a longer-term shift in the way that the Pentagon prepares for and goes to war.
Hicks painted a picture of the future U.S. military as one in which “Americans still benefit from platforms that are large, exquisite, expensive, and few.” However, Replicator will “galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many,” with these also operating for benefit of, if not collaboratively with more expensive and less prolific systems. This is very much defines the Air Force and Navy's Next Generation Air Dominance efforts. In particular, the dichotomy between the Collaborative Combat Aircraft (CCA) drones that will accompany the extremely high-end manned NGAD aircraft into combat.
Hicks hopes that Replicator is the program that finally tilts the balance in favor of this kind of warfare. With extremely ambitious targets as regards the volume and velocity of the program, that may be very hard to achieve, especially since attritable and autonomous systems already present plenty of operational challenges, even without the accelerated timeline.
And while details of the kinds of systems that Replicator is expected to produce are still to come, the program is already highly noteworthy, not least for the very central position given to the challenge posed by China’s growing military dominance in the Asia Pacific region and, in the future, likely elsewhere, too.
We can reasonably expect unmanned systems in the air, on the water, and below the waves of varying capabilities and complexities, none of which will be too 'exquisite' that it makes their development a long-term proposition or their price tag very high. But above all that, grand networking capabilities that can tie many of these systems together and keep controllers apprised of their activities will likely be the biggest challenge when it comes to operationalizing what Replicator produces to its full potential. The possibility for mesh networks spanning multiple domains over great area is likely also a critical aspect of this endeavor.
AI will not just be needed for autonomy, but also for parsing absolutely massive amounts of data produced by these systems which can quickly clog critical communications bandwidth 'pipes.' Parsing that data on-platform or at least in-theater before it is sent afar will be a major challenge and feature of any such concept. But above all else, the ability for disparate capabilities to collaborate autonomously in a diverse 'swarm or swarms' will be arguably the most impactful aspect of this emerging strategy. Overwhelming the enemy by volume and speed of action is definitely the critical play here.
Bottom line, this is a very big deal and not just in scope. This is a major part of the unmanned shift we have long been waiting for and predicting now coming into focus. It goes far beyond just near-term tactical and procurement changes. If it is realized as discussed, it will forever change the way the U.S. military fights and how it develops and procures weaponry.
As for the timing, two years from now puts the fielding of these assets at the heart of the time frame in which many are predicting China will make a move on Taiwan militarily. As such, Replicator could end up being as much a deterrence play as anything else. It's worth noting, that war gaming has shown that massive swarms of autonomous systems would be a decisive factor as to who wins a fight over the Taiwan Strait. You can read more about this here.
“We must ensure the PRC leadership wakes up every day, considers the risks of aggression and concludes today is not the day and not just today,” said Hicks. Time will tell to what degree Replicator is successful in helping achieve that.
Contact the author: [email protected] | Emerging Technologies |
By Aurélie Pugnet | EURACTIV.com Est. 4min 21-06-2023 (updated: 21-06-2023 ) Content-Type: News News Based on facts, either observed and verified directly by the reporter, or reported and verified from knowledgeable sources. European Commissioner in charge of Internal Market and Defence Thierry Breton speaks with Commissioner for Budget Johannes Hahn at Commission weekly College meeting [EPA-EFE/JULIEN WARNAND] EURACTIV is part of the Trust Project >>> Print Email Facebook Twitter LinkedIn WhatsApp Telegram The European Commission proposed on Tuesday (20 June) to bolster the EU’s defence research and development as part of its mid-term budget review, re-prioritising it through a slight budget increase. An additional €1.5 billion will be earmarked for the European Defence Fund (EDF), according to the proposal, reaching a total close to €9.5 billion. The EDF programme set up in 2021 aims at boosting investment into defence research and development in the bloc. The funding of technologies used in “defence applications” would also be allowed under the Commission’s proposal of its new Strategic Technologies for Europe Platform, dubbed “STEP”, according to the Commission’s text. STEP is based on the idea of the European sovereignty fund, first hashed out by Commission President Ursula von der Leyen. France largely supported the move, EURACTIV reported, although some other member states have already raised questions about financing an industry that already benefits from increased national budgets. The original budget proposed by the Commission in 2018 was €13 billion but member states cut it to €8 billion during days-long negotiations on the 2021-2027 budget. With war on the continent in Ukraine, the European Union member states attention has shifted to putting defence at the fore. “The global landscape has changed,” the Commission wrote in its communication. “Russia’s unprovoked aggression shows that it poses and will continue to pose in the coming years a threat to the security of Europe,” it also says to explain the move. Many members of the European Union and the Western military alliance, NATO, have pledged additional budgets for defence-related activities. Berlin, for instance, promised to commit €100 billion to step up its military abilities. The EU has also launched the fund to boost military-equipment joint procurement (EDIRPA) to replenish the members’ stockpiles and a fund to boost ammunition production on the continent (ASAP), worth €500 million each. Defence is a semi-priority In its mid-term review of the EU budget, the Commission proposes to refit the EU budget to match current priorities since it did not consider the economic consequences of the COVID-19 pandemic or the war in Ukraine. The STEP “will reinforce the European Defence Fund, which will boost the innovation capacity of the European defence technological and industrial base, thus contributing to the Union’s open strategic autonomy,” the proposal states. More specifically, the extra budget will go into “deep and digital technologies that can significantly boost the performance of future capabilities throughout the Union,” the Commission’s plans read. It will aim to “maximise innovation and introduce new defence products and technologies,” it continues. The EDF is allocated to projects on an annual work programme crafted by the 27 member states, which specifies which project will be financed and at which level. STEP also targets specific technologies development, such as deep and digital technologies, which are critical for the defence sector. These include microelectronics, high-performance computing, quantum computing, cloud computing, artificial intelligence, cybersecurity, robotics, 5G and advanced connectivity, “including actions related to deep and digital technologies for the development of defence and aerospace applications”. The proposal will be presented at the next European Council on 29-30 June. So far, “no debate is expected because of lack of time for preparation,” a senior Commission official said, nevertheless pushing for a swift adoption. [Edited by Alice Taylor] Read more with EURACTIV Commission briefs on Ukraine, Moldova and Georgia's reform progress towards EU membershipUkraine and Moldova have made good reform progress and are expected to wrap them up in the next months, while bigger efforts are required from Georgia, an internal European Commission update is expected to say this week, indicating how fast the countries could progress on their path toward future EU membership. Print Email Facebook Twitter LinkedIn WhatsApp Telegram Topics Defence and security Economy emerging technologies EU defence European Defence Fund Global Europe Industrial Policy R&D | Emerging Technologies |
Lawmakers struggle to differentiate AI and human emails
Natural language models such as ChatGPT and GPT-4 open new opportunities for malicious actors to influence representative democracy, new Cornell research suggests.
A field experiment investigating how the natural language model GPT-3, the predecessor to the most recently released model, might be used to generate constituent email messages showed that legislators were only slightly less likely to respond to AI-generated messages (15.4%) than human-generated (17.3%).
The 2% difference, gleaned from more than 32,000 messages sent to about 7,000 state legislators in the U.S., was statistically significant but substantially small, the researchers said. The results highlight the potential threats this technology presents for democratic representation, but also suggest ways legislators might guard against AI-sourced astroturfing, the disingenuous practice of creating a sense of grassroot support, in this case by sending large volumes of content sympathetic to a particular issue.
The study, "The Potential Impact of Emerging Technologies on Democratic Representation: Evidence from a Field Experiment," co-authored by Sarah Kreps, the John L. Wetherill Professor in the Department of Government in the College of Arts and Sciences (A&S), director of the Cornell Jeb E. Brooks School Tech Policy Institute and adjunct professor of law, and Douglas Kriner, the Clinton Rossiter Professor in American Institutions in the Department of Government (A&S) and professor in the Brooks School, published March 20 in New Media and Society.
In recent years, new communication technologies have interfered with the democratic process multiple times, Kreps said. In the 2016 U.S. presidential election, Russian agents used micro-targeted social media advertisements to manipulate American voters and influence the outcome. And in 2017, the Federal Communications Commission's public comment lines were flooded with millions of messages generated by natural language models in response to a proposed rollback of regulations.
With these in mind, Kreps, who was an early academic collaborator of OpenAI, the organization that developed GPT-2, -3 and -4, and the more mainstream ChatGPT, wondered what malicious actors could do with more powerful language models now widely available.
"Could they generate misinformation or politically motivated, targeted content at scale?" she asked. "Could they effectively distort the democratic process? Or might they be able to generate large volumes of emails that seem like they're coming from constituents and thereby shift the legislative agenda toward the interests of a foreign government?"
In their experiment, conducted throughout 2020, Kreps and Kriner chose six current issues: reproductive rights, policing, tax levels, gun control, health policy and education policy. To create the human-generated messages, undergraduates associated with the student-run Cornell Political Union drafted emails to state legislators on each topic, advocating for the right-wing or left-wing position.
Then they produced machine generated constituency letters using GPT-3, training the system on 12 letters (a right and a left position for each of the six issues). They generated 100 different outputs for each of the ideologies and topics.
Many legislators and their staff did not dismiss the machine-generated content as inauthentic, the researchers said, as shown in the small difference in responses between AI and human content across the six issues.
Moreover, messages on gun control and health policy received virtually identical response rates, and on education policy, the response rate was higher for AI-generated messages, suggesting that "on these issues GPT-3 succeeded in producing content that was almost indistinguishable in the eyes of state legislative offices from human content," they wrote.
In feedback after the experiment, state legislators shared how they pick out fake emails, such as lack of geographical markers. Some said they represent districts so small they can spot fakes simply by looking at a name.
"It was heartening to hear that a lot of these legislators really understand their constituents and their voices, and that these AI-generated messages did not sound at all like something their constituents would write," Kreps said.
However, such local clues to authenticity would be more difficult for national-level senators and representatives to spot, the researchers said.
Technological tools employing the same type of neural networks can help differentiate real messages from fake, "but so can a discerning eye and digital literacy," Kreps said. "Legislators need to be trained to know what to look for."
As the capacity for electronic astroturfing increases, legislators may have to rely more heavily on other sources of information about constituency preferences, Kriner said, including district polling data and in-person events: "They travel around their constituencies holding town meetings and get a direct earful, at least from those most animated about an issue."
Both scholars are optimistic that democracy in America will survive this new threat.
"You could argue that the move from sitting down and writing a letter to writing an email was a lot bigger than between boilerplate online email templates and GPT-3," Kreps said. "We've adapted before, and democratic institutions will do it again."
More information: Sarah Kreps et al, The potential impact of emerging technologies on democratic representation: Evidence from a field experiment, New Media & Society (2023). DOI: 10.1177/14614448231160526
Provided by Cornell University | Emerging Technologies |
Ottawa, ON – Mitacs, a non-profit organization committed to fostering innovation in Canada, and Hon Hai Research Institute, the research arm of the world’s largest electronics manufacturer Hon Hai Technology Group (“Foxconn”), today signed a memorandum of understanding to advance quantum technology in Canada, with the active support of the Canadian Trade Office in Taipei. Through the MOU, the Hon Hai Research Institute will focus on quantum research with Mitacs providing advice on funding and talent development — drawing on its vast network of leading post-secondary research institutions in Canada. This agreement is the first step in Foxconn’s ambitious plan to expand its R&D and innovation capability in Canada. In addition to quantum research, Foxconn is seeking to develop new research and design facilities in Canada — creating jobs and driving economic growth across the country. “Through this new partnership, we hope to connect the institute with Canadian professionals and experts with the goal of jointly investing in cutting-edge technology research, beginning in the field of quantum technologies,” said Hon Hai Research Institute’s CEO Wei-Bin Lee. “Through Mitacs, we hope to fund quantum computation research projects in Canada and hire five to ten research interns in the first year.” “This MOU demonstrates Canada’s commitment to strengthening our trade and investment relations with partners in the Indo-Pacific region, including Taiwan. Our government will continue to promote Canada as a world-leading investment destination and create new opportunities for cutting-edge technological research — creating good paying jobs and growing our economy along the way,” said the Honourable Mary Ng, Minister of International Trade, Export Promotion, Small Business, and Economic Development, Government of Canada. “Ontario and Canada are home to some of the world’s most qualified research and postsecondary talent,” said Jill Dunlop, Ontario Minister of Colleges and Universities. “Through this partnership, Mitacs’s programs and networks will support the Hon Hai Research Institute’s talent pipeline, providing incredible opportunities for Ontario students to gain the skills they need to secure in-demand jobs after graduation.” “The more successful Canada is at building its international networks of research, the greater the potential to maximize our innovation economy,” added John Hepburn, CEO of Mitacs. “Through today’s agreement, Mitacs will help build a pipeline of talent to advance crucial quantum research, in partnership with one of the world’s leading electronics companies.” International partnerships that bring together post-secondary research talent and industry from around the world are essential to fostering innovation in the sectors that power the global economy and to ensuring Canada maintains a competitive advantage in the years to come. Foxconn is also planning to launch a Canadian Software Research & Development Center (SRDC). At the outset, the SRDC will focus on software development, specifically in the areas of electric vehicle experience, human-machine interface design, and usability engineering, as well as other design projects in response to internal or client demand. The SRDC will hire 100 designers and engineers in the first phase. ABOUT MITACS Mitacs empowers Canadian innovation through effective partnerships that deliver solutions to our most pressing problems. For over 20 years, Mitacs has assisted organizations in reaching their goals, has funded cutting-edge innovation, and has created job opportunities for students and postdocs. We are committed to driving economic growth and productivity and to creating meaningful change to improve quality of life for all Canadians. ABOUT HON HAI RESEARCH INSTITUTE The establishment of the Hon Hai Research Institute is an important step in the Group’s development strategy as it moves one step closer to its “Foxconn 3.0” transformation. The institute has five research centers. Each center has an average of 40 high technology R&D professionals, all of whom are focused on the research and development of new technologies, the strengthening of Foxconn’s technology and product innovation pipeline, efforts to support the Group’s transformation from “brawn” to “brains,” and the enhancement of the competitiveness of Foxconn’s “3+3” strategy. ABOUT HON HAI Established in 1974 in Taiwan, Hon Hai Technology Group (“Foxconn”) (2317: Taiwan) is the world’s largest electronics manufacturer. Hon Hai is also the leading technological solution provider, and it continuously leverages its expertise in software and hardware to integrate its unique manufacturing systems with emerging technologies. In addition to maximizing value-creation for customers who include many of the world’s leading technology companies, Hon Hai is also dedicated to championing environmental sustainability in the manufacturing process and serving as a best-practices model for global enterprises. www.honhai.com. | Emerging Technologies |
-
‘Atlantic Declaration’ agreed by the PM and President at the White House today lays out a new action plan for cooperation on biggest economic challenges of our time
-
Declaration recognises the close UK-US relationship and establishes a new approach which will allow both countries to move faster and co-operate more deeply
-
New action plan will see the UK and US strengthen our supply chains, develop the technologies of the future and invest in one another’s industries
The Prime Minister and President Biden have agreed an innovative economic partnership today (Thursday), which will see our countries work together more closely than ever before across the full spectrum of our economic, technological, commercial and trade relations.
The ‘Atlantic Declaration’ heralds a new era for the thriving economic relationship between the UK and US, and builds on decades of very close cooperation on defence and security. It applies the same principle – that the UK and US will work together in the face of new challenges – to our economic partnership as we long have to our defence alliance.
This unprecedented bilateral partnership takes a different approach to our economic relationship than we have taken before, recognising that our economies must move with speed and agility to address the challenges we face.
Following their meeting at the White House today, the Prime Minister and President Biden have announced new measures under the Atlantic Declaration – an action plan for the future of our partnership. This includes:
-
Working together to reduce our vulnerabilities across critical technology supply chains, including by sharing analysis, developing and deepening our channels for coordination and timely consultation during crises. To support the critical clean energy industry, our net zero ambitions and to keep Russia out of the global civil nuclear power market, the UK and US will launch a new civil nuclear partnership.
-
An innovative approach targeting specific areas for economic advancement. This includes a commitment in principle to a new UK-US Data Bridge which would make it easier for around 55,000 UK businesses to transfer data freely to certified US organisations without cumbersome red tape – translating into an estimated £92.4m in direct savings per year. It also includes the immediate launch of negotiations on a Critical Minerals Agreement.
-
Stepping up international efforts to ensure the safe and responsible development of AI, starting with an international summit on AI safety which will be hosted in the UK later this year, welcomed by the US.
-
Enhancing cooperation on measures to stop our adversaries from developing and acquiring sensitive technologies that can be used to do us harm.
-
Research collaboration to entrench UK and US leadership in the most important future technologies – AI, future telecoms (5G & 6G), quantum, semiconductors and engineering biology.
-
New opportunities for increased investment in one another’s economies. President Biden plans to ask the US Congress to designate the UK as a ‘domestic source’ within the meaning of Title III of the the Defense Production Act – meaning British companies can benefit from US Government investment on the same basis as American firms. The act has previously been used to speed up the development of hypersonic weapons.
Our economies are going through the greatest change since the industrial revolution, with emerging technologies like biotechnology and AI transforming the way we live and work. But while those new technologies offer huge potential to save lives, grow our economies and tackle climate change, in the hands of our adversaries they could be used as tools to undermine our national security.
With our highly interconnected economies, our leadership in areas like emerging technology and our deeply entrenched shared values, the UK and US are natural partners to approach these issues together.
This new approach to our economic partnership, which puts the strength of our relationship front and centre in addressing the biggest challenges we face, will both deliver for our people and support an open international order.
The Prime Minister said:
The UK and US have always pushed the boundaries of what two countries can achieve together. Over generations we have fought alongside one another, shared intelligence we don’t share with anyone else, and built the strongest investment relationship in world history.
So it’s natural that, when faced with the greatest transformation in our economies since the industrial revolution, we would look to each other to build a stronger economic future together.
The Atlantic Declaration sets a new standard for economic cooperation, propelling our economies into the future so we can protect our people, create jobs and grow our economies together.
Negotiations will begin immediately on many aspects of the partnership, including on a Critical Minerals Agreement. An agreement would give buyers of vehicles made using critical minerals processed or mined by UK companies access to tax credits in line with the US Inflation Reduction Act. The Inflation Reduction Act provides a $3,750 incentive for each vehicle, on conditions including that the critical minerals used in its production – principally used in the battery – are sourced from the US or a country with whom the US has a critical minerals agreement.
The UK is already a net exporter of raw materials for EV batteries to the US and this agreement will help UK-based firms involved in the mining, recycling and refining of critical minerals who are suppling US electric vehicle and battery manufacturers – benefitting this growing industry. This is a a sector with companies all over the UK, including nickel production in Wales and lithium processing in Teesside.
With a trading relationship worth £279 billion a year, and shared investment totalling over £1 trillion, the US is already our most important trading partner. Earlier this week the Prime Minister announced £14 billion of new US investment into the UK, demonstrating the importance of this relationship to UK growth and jobs.
Teams from the White House and Downing Street will meet regularly to drive action under the Atlantic Declaration, ensuring it continues to meet the high objectives the Prime Minister and President Biden have set today. | Emerging Technologies |
The quantum twisting microscope: A new lens on quantum materials
One of the striking aspects of the quantum world is that a particle, say, an electron, is also a wave, meaning that it exists in many places at the same time. In a new study, reported today in Nature, researchers from the Weizmann Institute of Science make use of this property to develop a new type of tool—the quantum twisting microscope (QTM)—that can create novel quantum materials while simultaneously gazing into the most fundamental quantum nature of their electrons.
The QTM involves the "twisting," or rotating, of two atomically-thin layers of material with respect to one another. In recent years, such twisting has become a major source of discoveries. It began with the discovery that placing two layers of graphene, one-atom-thick crystalline sheets of carbon, one atop the other with a slight relative twist angle, leads to a "sandwich" with unexpected new properties.
The twist angle turned out to be the most critical parameter for controlling the behavior of electrons: Changing it by merely one-tenth of a degree could transform the material from an exotic superconductor into an unconventional insulator. But critical as it is, this parameter is also the hardest to control in experiments. By and large, twisting two layers to a new angle requires building a new "sandwich" from scratch, a process that is very long and tedious.
"Our original motivation was to solve this problem by building a machine that could continuously twist any two materials with respect to one another, readily producing an infinite range of novel materials," says team leader Prof. Shahal Ilani of Weizmann's Condensed Matter Physics Department. "However, while building this machine, we discovered that it can also be turned into a very powerful microscope, capable of seeing quantum electronic waves in ways that were unimaginable before."
Creating a quantum picture
Pictures have long played a central role in scientific discovery. Light microscopes and telescopes routinely provide images that allow scientists to gain a deeper understanding of biological and astrophysical systems. Taking pictures of electrons inside materials, on the other hand, has for many years been notoriously hard, owing to the small dimensions involved.
This was transformed some 40 years ago with the invention of the scanning tunneling microscope, which earned its developers the 1986 Nobel Prize in Physics. This microscope uses an atomically sharp needle to scan the surface of a material, measuring the electric current and gradually building an image of the distribution of electrons in the sample.
"Many different scanning probes have been developed since this invention, each measuring a different electronic property, but all of them measure these properties at one location at a time. So, they mostly see electrons as particles, and can only indirectly learn about their wave nature," explains Prof. Ady Stern from the Weizmann Institute, who took part in the study along with three other theoretical physicists from the same department: Profs. Binghai Yan, Yuval Oreg and Erez Berg.
"As it turned out, the tool that we have built can visualize the quantum electronic waves directly, giving us a way to unravel the quantum dances they perform inside the material," Stern says.
Spotting an electron in several places at once
"The trick for seeing quantum waves is to spot the same electron in different locations at the same time," says Alon Inbar, a lead author on the paper. "The measurement is conceptually similar to the famous two-slit experiment, which was used a century ago to prove for the first time that electrons in quantum mechanics have a wave nature," adds Dr. John Birkbeck, another lead author. "The only difference is that we perform such an experiment at the tip of our scanning microscope."
To achieve this, the researchers replaced the atomically sharp tip of the scanning tunneling microscope with a tip that contains a flat layer of a quantum material, such as a single layer of graphene. When this layer is brought into contact with the surface of the sample of interest, it forms a two-dimensional interface across which electrons can tunnel at many different locations.
Quantum mechanically, they tunnel in all locations simultaneously, and the tunneling events at different locations interfere with each other. This interference allows an electron to tunnel only if its wave functions on both sides of the interface match exactly. "To see a quantum electron, we have to be gentle," says Ilani. "If we don't ask it the rude question 'Where are you?' but instead provide it with multiple routes to cross into our detector without us knowing where it actually crossed, we allow it to preserve its fragile wave-like nature."
Twist and tunnel
Generally, the electronic waves in the tip and the sample propagate in different directions and therefore do not match. The QTM uses its twisting capability to find the angle at which matching occurs: By continuously twisting the tip with respect to the sample, the tool causes their corresponding wave functions to also twist with respect to one another. Once these wave functions match on both sides of the interface, tunneling can occur.
The twisting therefore allows the QTM to map how the electronic wave function depends on momentum, similarly to the way lateral translations of the tip enable the mapping of its dependence on position.
Merely knowing at which angles electrons cross the interface supplies the researchers with a great deal of information about the probed material. In this manner they can learn about the collective organization of electrons within the sample, their speed, energy distribution, patterns of interference and even the interactions of different waves with one another.
A new twist on quantum materials
"Our microscope will give scientists a new kind of 'lens' for observing and measuring the properties of quantum materials," says Jiewen Xiao, another lead author.
The Weizmann team has already applied their microscope to studying the properties of several key quantum materials at room temperature and is now gearing up toward doing new experiments at temperatures of a few kelvins, where some of the most exciting quantum mechanical effects are known to take place.
Peering so deeply into the quantum world can help reveal fundamental truths about nature. In the future, it might also have a tremendous effect on emerging technologies. The QTM will provide researchers with access to an unprecedented spectrum of new quantum interfaces, as well as new "eyes" for discovering quantum phenomena within them.
More information: A. Inbar et al, The quantum twisting microscope, Nature (2023). DOI: 10.1038/s41586-022-05685-y | Emerging Technologies |
The COVID-19 pandemic and other recent crises have increased reliance on digital banking and e-commerce and increased financial uncertainty and insecurity — cultivating a fertile ground for financial crime such as money laundering, fraud, corruption, and tax evasion. In addition to threatening the stability of domestic and international economic systems, the flow of illicit finance often supports malignant activities such as terrorism, sexual exploitation, human trafficking, environmental crime, drug smuggling, cybercrime and more. getty Although thwarting financial crime and its ramifications has been a focus of policymaking for decades, it’s become an even greater imperative in our ever-changing, digital world. An unfortunate side effect of broader digital banking is that new doors have opened for financial crime. While policymakers, law enforcement, regulators, financial institutions and others in the private sector have invested in people, processes and technology to prevent and mitigate financial crime, law enforcement only recovers one percent of illicit funds. Speed, of course, is essential. Continued industry reform can build on the critical work already underway globally to help improve the anti-financial crime framework. While innovative tools and methodologies relying on data-oriented technologies can reinforce monitoring systems, successfully fighting financial crime requires a genuine partnership between the public and private sectors. Collaboration, with a marked focus on information exchange and systemic reform, can help achieve better outcomes. The way forward: public-private partnerships (PPP)
Public-private partnerships (PPPs), in the form of continued, formalized cooperation and coordination between the public and private sectors, are central to the efforts to fight global financial crime and improve financial intelligence. Several international bodies, such as the Financial Action Task Force (FATF), have a broad consensus that developing such frameworks — enabling intelligence and insights to flow between parties — can disrupt malicious actors and prevent criminal misuse of the financial system. Working across a multitude of areas, spanning policy definition, intelligence monitoring, information sharing and cross-border relationships, PPPs have demonstrated their value. They have built trust and collaboration across stakeholder communities and improved the focus and quality of financial crime reporting. Since the UK Joint Money Laundering Intelligence Taskforce (JMLIT) began in 2014, more than 20 countries across Asia-Pacific, the Americas and Europe have developed PPPs enabling intelligence and information sharing. In addition, several single-issue PPP initiatives have been established, bringing diverse stakeholders together to improve the response to specific threats, such as wildlife trafficking. Meanwhile, Europol’s Financial Intelligence Public-Private Partnership (EFIPPP) has continued to develop its role as the first multilateral PPP. However, while global developments in PPPs are a fundamentally positive story, opportunities to do more remain. As illustrated by the recent EU consultation, the next step for policymakers is to incentivize participation in PPPs through regulatory and supervisory frameworks, focusing on reducing and detecting economic crime and providing beneficial information to law enforcement. Data privacy and digital transformation
As PPPs look to improve data sharing between private and public institutions, they can’t overlook information security. Given the sensitive personal data involved, the right to privacy remains essential. Finding the right balance between data protection privacy (DPP) and information sharing will bolster institutions’ abilities to identify and share vital information about financial crime across borders with internal and external parties. Regulatory bodies also need to take a more proactive role in developing financial crime risk management standards that protect data privacy while augmenting crime prevention efforts. Frameworks for financial institutions should also be agile enough that technological innovation can flourish without violating DPP. A coordinated, global focus on advancing technology can help build coherence in approaches across jurisdictions and assist in developing best practices in driving effectiveness.
Emerging technologies will also help financial institutions aggregate and analyze significantly more data than in the past by using machine learning, artificial intelligence (AI), analytics tools and data science. These capabilities will become increasingly important as the prevalence of online banking increases. The pressure is on to stem the tide of global financial crime. Improving financial intelligence and mitigating financial crime requires the public and private sectors to work together, deploy information-sharing utilities and guidelines, and encourage financial institutions to implement emerging technology that improves response speed and outcomes. | Emerging Technologies |
Job Surge: India Can Be Hub Of AI-Skilled Workforce
Salaries in the field of AI have been increasing rapidly, reflecting the high demand for AI professionals.
The demand for jobs related to artificial intelligence is on a steady rise in India, which can leverage its vast talent pool and create a robust ecosystem for the technology's innovation and development.
A quick search for AI jobs in the country on LinkedIn throws up over 13,500 postings.
There has been an increase in demand for generative AI-related roles in India over the past five years, with an 89.50% rise in job searches and a 158.40% jump in job postings from March 2018 to March 2023, according to job portal Indeed.
The demand for AI talent has increased by 112% since the pandemic, with a rise of 11% over the last six months—led by retail, telecom, BFSI, advertising, marketing and information technology sectors—according to job search platform Foundit.
"The surge in job searches related to AI in India reflects the growing interest and demand for skilled professionals in this field," said Sashi Kumar, head of sales at Indeed India.
Microsoft India President Anant Maheshwari said the supply is also not far behind and India has among the highest—if not the highest—AI skill sets in the world.
India has a natural positioning of being able to serve companies with talent capability that will be required to create and deliver solutions, the Nasscom Chairperson said, in an interview with BQ Prime this week.
TeamLease Digital's Krishna Vij said that according to industry data, India produces 16% of the world's AI talent pool and currently ranks first in terms of skill penetration and talent concentration,
The government is investing in research, education and infrastructure, and has launched several initiatives to promote the development and adoption of AI, said Vij, the business head of IT staffing at the technology professional services platform.
Efforts from industry and academia are also contributing to ramping up AI skilling and initiatives at scale, Vij said.
Jang Bahadur Singh, director of human capital solutions at Aon India, said the country would certainly be one of the largest talent economies when it comes to the skillset across the AI value chain, such as machine learning, data science and software engineering.
Salary
Salaries in the field of AI have been increasing rapidly, reflecting the high demand for AI professionals, Vij said.
She estimated that the median salary for AI professionals in India is around Rs 15–20 lakh per annum, while it could go up to Rs 50–60 lakh per annum for experienced professionals.
Threats And Opportunities
Maheshwari said there would be supply side effects, but how that shapes up remains to be seen.
Foundit CEO Sekhar Garisa said while there are several dialogues regarding job losses with the intervention of AI, it is also expected to create newer roles and increase employment opportunities.
In the present scenario, AI and automation are acting more as catalysts than threats for professionals in the industry. It is necessary to keep up with emerging trends as every role becomes obsolete beyond a certain period without new learnings, according to Garisa.
Hence, he said, upskilling is a long-term investment for career progression and it should be fostered along the growth journey of any employee, making them market-ready even in tough times. "Today's employees must be prepared to adapt to learn, unlearn and relearn to remain relevant in (the) future."
India's AI Edge: Is It Enough?
As the demand for AI talent continues to rise globally, there is an opportunity for India to leverage its vast talent pool and create a robust ecosystem for AI innovation and development, Kumar said.
Despite India being a major hub for AI talent globally, industry projections are that demand outpaces the supply of AI talent by almost 33%, according to Vij.
As AI is a rapidly evolving field and requires a combination of technical and analytical skills along with domain knowledge, there is a constant need for upskilling and reskilling due to the interdisciplinary nature of AI, he said.
Since skilling challenges with AI are complex, it requires significant investments and coordinated effort from the government, businesses and educational institutes.
The adoption of AI and other emerging technologies poses challenges, such as availability of skilled talent, upskilling of existing workforce, and bridging the gap between industry and academia, Kumar said.
Maheshwari said while the world is short of AI talent, India is moving ahead and has a strong skilling ecosystem to drive it forward. | Emerging Technologies |
My cravings for meat are well-known to regular readers (hi mum!). But as a self-righteous vegetarian, I refuse to dine on murdered animals. Those beliefs, however, are now being challenged by a heretic: cultivated meat.
Cultivated meat, also known as cultured meat, brings the farm to the lab. Cells are collected from an animal, grown in vitro, and then shaped into familiar forms of edible flesh.
Industry advocates proffer myriad benefits — and needs. According to the UN, around 80 billion animals are slaughtered each year for meat. This livestock produces an estimated 14.5% of global greenhouse gasses, grazes across 26% of Earth’s terrestrial surface, and uses 8% of global freshwater.
Greetings, humanoidsSubscribe to our newsletter now for a weekly recap of our favorite AI stories in your inbox.Population growth will eventually make these numbers unsustainable.
Cellular agriculture, argue its supporters, can dramatically allay the damage. The produce can satisfy our need for protein (and desire for meat), reduce our carbon footprint, and prevent animal suffering. CE Delft, an independent research firm, estimates that cultivated meat could cause 92% less global warming and 93% less air pollution, while using 95% less land and 78% less water.
The nascent sector could also become big business. Consulting firm McKinsey predicts the market for cultivated meat could reach $25 billion (€26 billion) by 2030. It’s not like meat — it is meat. The sector has developed rapidly since the world’s first lab-grown burger was unveiled in 2013. The patty cost an eye-watering $330,000 to produce. Chefs described it as edible, but not delectable.
One of the scientists behind the project was Daan Luining. The affable Dutchman went on to found Meatable, a cultivated meat startup based in Delft. The company claims its produce is identical to traditional meat.
“It’s not like meat — it is meat,” Luining tells TNW.
Luining (right) and his Meatable co-founder Krijn de Nood describe their products as “the new natural.”
After the hamburger launch, Luining tried to turn his own research on cellular agriculture into a business. His big idea emerged from a meeting with Mark Kotter, a neurosurgeon at Cambridge University.
Kotter was demonstrating a breakthrough approach to reprogramming human stem cells. Luining suggested applying the technique to a new target: pork and bovine cells.
He quickly convinced Kotter that cultured meat could have enormous value. In 2018, the duo teamed up with Krijn de Nood, a former McKinsey consultant, to co-found Meatable — and bring their concept to the market.
The meat lab
My previous forays into fake meat were plant-based. I’ve sampled a smorgasbord of the produce, from the McDonald’s McPlant burger to Juicy Marbles’ vegan filet mignon. They were admirable imitations, but there always were cracks in the illusion — and the copious sodium couldn’t hide them.
Cultivated meat takes the mimicry to another level. At Meatable, the process begins by extracting a single cell sample from a cow or pig. The cell is then cultivated in a bioreactor, where it’s fed various nutrients. These elements are mixed and shaped to produce the finished meat.
One ingredient Meatable doesn’t use is fetal bovine serum (FBS), a common growth medium in cultivated meat. FBS is harvested from cattle fetuses after their mothers are slaughtered. In addition to tainting the “cruelty-free” marketing of cultured meat, the substance is extremely expensive to use.
Instead of FBS, Meatable harnesses pluripotent stem cells, which can multiply indefinitely and convert into almost any cell type. These pluripotent cells replicate the natural growth of muscle and fat — two essential ingredients to make meat taste like meat.
The farm of the future? Credit: Meatable
Meatable first induces a single cell from a just-born animal’s umbilical cord, which is collected without causing any harm. The company’s proprietary OPTi-OX system then rapidly converts the pluripotent cells into the desired muscle and fat cells.
Luining compares the technique to brewing beer in a vat.
“We’re feeding the cells, brewing to create more cells, and then we turn them into muscle or fat,” he says. “This is basically what meat consists of. It’s the cells of the animal — it is actual meat.”
This puts my head in a spin. It’s not vegetarian, but if it’s removed every drawback of conventional meat, why wouldn’t I eat it? And why can’t I find it in Europe?
Going Dutch
Meatable’s home country, the Netherlands, is often referred to as the birthplace of cultivated meat. In 1948, a Dutch medical school student researcher, Willem van Eelen, came up with the idea after stumbling across experiments using stem cell tech to grow cells in a tank. He wondered if similar techniques could cultivate meat.
Van Eelen went to file several patents that sought to make his vision a reality, but struggled to raise enough money to pursue his plans. He did, however, lay the foundations for another milestone in the Netherlands.
In 2005, Van Eelen helped convince the Dutch government to fund research into cellular agriculture. The program prompted Mark Post, a professor at Maastricht University, to develop the first cultivated hamburger. His achievement attracted mainstream media attention, and turned many of Van Eelen’s doubters into believers.
In 2013, Post formed his own biotech startup, the Netherlands-based Most Meat. His work has also inspired numerous scientists, including Luining.
Meatable, however, plans to launch its products in Singapore — and with good reason. This is pioneering work. Singapore already has cultivated meat on the market. In 2020, the country’s food agency became the first regulatory body to approve the sale of a lab-grown meat product: chicken developed by Eat Just, a Californian startup.
The city state’s embrace of cultured meat is rational. Just 1% of its land is available for agriculture, which forces Singapore to import more than 90% of its food. To shore up against global food supply shocks, the country aims to produce 30% of its food locally by 2030. Cultured meat offers the opportunity to maximize the island’s scarce farming resources.
Meatable will enter the market in partnership with Esco Aster, the world’s only licensed cultivated meat manufacturer. The duo this week unveiled plans to bring cultivated pork to Singapore restaurants in 2024, and supermarkets by 2025.
Meatable is working with Singaporean chefs to make dumplings for the Asian market. Credit: Meatable
Singapore’s regulatory landscape, diverse population, and openness to emerging technologies have created an attractive test ground for cultured meat. Meatable hopes it provides a launching pad for global expansion.
“Let’s see if we can get it into different jurisdictions,” says Luining. “It’s all about practice, because doing this is pioneering work.”
In Europe, however, the path to regulatory approval is a long one.
Europe’s impressive set of cultivated meat startups is seeking growth overseas. Credit: GFI Europe
The EU must provide approval before any cultivated meat is sold in the union. The bloc’s regulatory requirements are typically clearly defined, but time-consuming to meet.
Still, there are encouraging signs from the union. In 2021, the REACT-EU program gave another Dutch company, Mosa Meat, €2 million to cut cultured beef costs, while Horizon Europe offered €32 million for research into sustainable proteins. The European Commission also recently funded lab-grown foie gras.
The Netherlands, meanwhile, is increasing domestic support. In March, the Dutch government passed a motion to legalize public samplings of cultured meats. Four months later, Meatable’s founders finally tasted their first product: pork sausages.
Public tastings could attract support from a skeptical public. Credit: Meatable
Funding has also increased. In April, the Dutch government allocated €60 million for the development of cellular agriculture. The grant was the largest ever sum of public funding for the sector.
These moves have attracted global recognition. ProVeg International, an NGO dedicated to reducing animal consumption, recently named the Netherlands the top European nation for government support of cultured meat.
The country may have lost its head-start in the sector, but it’s starting to catch up again.
Global divides
Legislation isn’t the only barrier to mainstream adoption. Cultivated meat remains costly to produce, complicated to scale, and ethically contentious.
Scientists have also questioned the environmental benefits. A 2019 study from Oxford University determined that, in some scenarios, cultivated meats released more greenhouse gasses than conventional farming.
Research published last year produced more positive conclusions. Analysts at CE Delft found that renewable energy could lead cultivated meat to compete on costs with conventional meat production by 2030 — and leave a lower carbon footprint. Critics, however, called the predictions unrealistic.
The field also has powerful rivals. The value of the global meat sector was estimated at $897 billion in 2021, and forecast to reach $1.354 billion by 2027. The industry’s farmers and lobbyists have a lot to lose from a transition to lab-grown meat.
The Netherlands has the EU’s highest livestock concentration. It produces large emissions but also thousands of jobs. Credit: kees torn
Cultured meat must also overcome a range of consumer concerns, from religious objections to Gen Z viewing the produce as disgusting.
These challenges may not prove insurmountable. Public opinions can be swayed; bigger bioreactors, cheaper nutrients, and increased cell densities can yield efficiency surges.
Luining is confident that Meatable will sell a cost-competitive product by 2025. Ultimately, he envisions consumers mixing conventional, cultured, and plant-based meat.
He makes one more surreal appeal to my vegetarian diet. “You could have a steak while the cow whose cells it’s been made from is sitting next to you happily living its life.”
Luining ends our chat with a question: would I eat his cultivated meat? If the cow next to me has no beef with it — why not? | Emerging Technologies |
Sen. Ed Markey (D-Mass.) speaks with reporters at the U.S. Capitol Aug. 6, 2022.Photo: Francis Chung (AP)In the wake of the midterm elections, Democratic leaders in the House and Senate have introduced a bill crafted to ensure emerging technologies keep pace with the needs of people with disabilities. The effort is receiving widespread praise from groups such as the Blinded Veterans Association and Communications Service for the Deaf, and lawmakers are pushing for a swift passage in the lame duck session. OffEnglishThe Communications, Video, and Technology Accessibility Act, or CVTA, would amend key portions of the current federal accessibility law by, among other measures, requiring the improvement and expansion of closed captioning and audio description standards for online streaming platforms (in addition to television), the authors said. It would also update requirements to make closed captioning and audio descriptions more easily accessible.The bill, coauthored by Sen. Edward Markey, would further help to improve access to video programming for people who are deaf and use sign language, and, according to the authors, would empower the Federal Communications Commission to “ensure accessibility regulations keep pace with emergencing technologies, including artificial intelligence and augmented or virtual reality platforms.”“As technology has rapidly evolved over the last two decades, much of our economy and day-to-day lives have moved online. Unfortunately, accessibility standards have stayed largely the same, leaving people with disabilities behind,” said Rep. Anna Eshoo, senior member of the House Energy and Commerce Committee and a co-author of the bill. Eshoo stated that, as of last year, more than two-thirds of people who were blind or had low vision reported issues with technologies necessary for the jobs. And around 70% of students who are deaf or hard of hearing reported similar challenges in educational environments, she said.G/O Media may get a commissionAn ultra-smart air monitoror Black Friday, uHoo is $140 off its original price, plus you’ll get one year of uHoo’s Premium plan, with customized alerts about air quality.Sen. Markey, a coauthor of the current federal law — known as the 21st Century Communications and Video Accessibility Act (CVAA) — said technologies had changed much since CVAA’s passage. “What hasn’t changed is our obligation to make sure that everyone – including people with disabilities – has equal access to the services and technologies they need to thrive,” he said.The newer CVTA, meanwhile, was announced with the endorsement of FCC Chairwoman Jessica Rosenworcel, who said in statement: “Accessibility means equal opportunity to create, participate, and communicate—and promoting accessible technology is an important part of our agency’s mission. To do so effectively we need to keep up with emerging technologies. This legislation will help us do just that, by ensuring that people with disabilities have full access to communication products and services that are necessary to participate equally in today’s world, while laying a foundation for accessibility in future technologies.”Eric Bridges, executive director of the American Council of the Blind, said the CVAA had “laid the foundation for accessible technology and inclusive media for people who are blind, low vision, and Deafblind,” and this update would ensure that critical communications technologies remain accessible and “reiterate our nation’s commitment to accessible media and video content, regardless of how or where it is viewed by consumers.”“This update to the groundbreaking 21st Century Communications and Video Accessibility Act takes into account how rapidly technology is changing,” added Barbara Kelley, executive director of Hearing Loss Association for America (HLAA). The CVTA would, she said, “ensure people will have access to video conferencing platforms with built-in accessibility features, such as automatic captioning functions that will allow people with hearing loss to be fully part of the conversation.” “That’s real progress,” Kelley said.Numerous other groups focused on accessibility have endorsed the bill, include the National Federation of the Blind, the Leadership Conference on Civil and Human Rights, the American Foundation for the Blind, and the United Spinal Association, among others. | Emerging Technologies |
Genetically engineering associations between plants and diazotrophs could lessen dependence on synthetic fertilizer
Nitrogen is an essential nutrient for plant growth, but the overuse of synthetic nitrogen fertilizers in agriculture is not sustainable.
In a review article publishing in the journal Trends in Microbiology on September 26, a team of bacteriologists and plant scientists discuss the possibility of using genetic engineering to facilitate mutualistic relationships between plants and nitrogen-fixing microbes called "diazotrophs." These engineered associations would help crops acquire nitrogen from the air by mimicking the mutualisms between legumes and nitrogen-fixing bacteria.
"Engineering associative diazotrophs to provide nitrogen to crops is a promising and relatively quickly realizable solution to the high cost and sustainability issues associated with synthetic nitrogen fertilizers," writes the research team, led by senior author Jean-Michel Ané of the University of Wisconsin–Madison.
Diazotrophs are species of soil bacteria and archaea that naturally "fix" atmospheric nitrogen into ammonium, a source that plants can use. Some of these microbes have formed mutualistic relationships with plants whereby the plants provide them with a source of carbon and a safe, low-oxygen home, and in return, they supply the plants with nitrogen. For example, legumes house nitrogen-fixing microbes in small nodules on their roots.
However, these mutualisms only occur in a small number of plants and a scant number of crop species. If more plants were able to form associations with nitrogen fixers, it would lessen the need for synthetic nitrogen fertilizers, but these sorts of relationships take eons to evolve naturally.
How to enhance nitrogen fixation in non-legume crops is an ongoing challenge in agriculture. Several different methods have been proposed, including genetically modifying plants so that they themselves produce nitrogenase, the enzyme that nitrogen fixers use to convert atmospheric nitrogen into ammonium, or engineering non-legume plants to produce root nodules.
An alternative method—the topic of this review—would involve engineering both plants and nitrogen-fixing microbes to facilitate mutualistic associations. Essentially, plants would be engineered to be better hosts, and microbes would be engineered to release fixed nitrogen more readily when they encounter molecules that are secreted by the engineered plant hosts.
"Since free-living or associative diazotrophs do not altruistically share their fixed nitrogen with plants, they need to be manipulated to release the fixed nitrogen so the plants can access it," the authors write.
The approach would rely on bi-directional signaling between plants and microbes, something that already occurs naturally. Microbes have chemoreceptors that allow them to sense metabolites that plants secrete into the soil, while plants are able to sense microbe-associated molecular patterns and microbe-secreted plant hormones. These signaling pathways could be tweaked via genetic engineering to make communication more specific between pairs of engineered plants and microbes.
The authors also discuss ways to make these engineered relationships more efficient. Since nitrogen fixation is an energy-intensive process, it would be useful for microbes to be able to regulate nitrogen fixation and only produce ammonium when necessary.
"Relying on signaling from plant-dependent small molecules would ensure that nitrogen is only fixed when the engineered strain is proximal to the desired crop species," the authors write. "In these systems, cells perform energy-intensive fixation only when most beneficial to the crop."
Many nitrogen-fixing microbes could provide additional benefits to plants beyond nitrogen fixation, including promoting growth and stress tolerance. The authors say that future research should focus on "stacking" these multiple benefits. However, since these processes are energy-intensive, the researchers suggest developing microbial communities made up of several species that each provide different benefits to "spread the production load among several strains."
The authors acknowledge that genetic modification is a complex issue, and the large-scale use of genetically modified organisms in agriculture would require public acceptance. "There needs to be transparent communication between scientists, breeders, growers, and consumers about the risks and benefits of these emerging technologies," the authors write.
There's also the issue of biocontainment. Because microbes readily exchange genetic material within and between species, measures will be needed to prevent the spread of transgenic material into native microbes in surrounding ecosystems. Several such biocontainment methods have been developed in the laboratory, for example, engineering the microbes so that they are reliant on molecules that are not naturally available, meaning that they will be restricted to the fields in which the engineered host plants are present, or wiring the microbes with "kill switches."
The authors suggest that these control measures might be more effective if they are layered, since each measure has its limitations, and they stress the need to test these engineered plant-microbe mutualisms under the variable field conditions in which crops are grown.
"The practical use of plant-microbe interactions and their laboratory-to-land transition are still challenging due to the high variability of biotic and abiotic environmental factors and their impact on plants, microbes, and their interactions," the authors write.
"Trials in highly-controlled environments such as greenhouses often translate poorly to field conditions, and we propose that engineered strains should be tested more readily under highly replicated field trials."
More information: Chakraborty et al., Scripting a new dialogue between diazotrophs and crops, Trends in Microbiology (2023). DOI: 10.1016/j.tim.2023.08.007 , cell.com/trends/microbiology/f … 0966-842X(23)00239-1
Provided by Cell Press | Emerging Technologies |
TikTok recently announced that its users in the European Union will soon be able to switch off its infamously engaging content-selection algorithm. The EU’s Digital Services Act (DSA) is driving this change as part of the region’s broader effort to regulate AI and digital services in accordance with human rights and values.TikTok’s algorithm learns from users’ interactions—how long they watch, what they like, when they share a video—to create a highly tailored and immersive experience that can shape their mental states, preferences, and behaviors without their full awareness or consent. An opt-out feature is a great step toward protecting cognitive liberty, the fundamental right to self-determination over our brains and mental experiences. Rather than being confined to algorithmically curated For You pages and live feeds, users will be able to see trending videos in their region and language, or a “Following and Friends” feed that lists the creators they follow in chronological order. This prioritizes popular content in their region rather than content selected for its stickiness. The law also bans targeted advertisement to users between 13 and 17 years old, and provides more information and reporting options to flag illegal or harmful content.In a world increasingly shaped by artificial intelligence, Big Data, and digital media, the urgent need to protect cognitive liberty is gaining attention. The proposed EU AI Act offers some safeguards against mental manipulation. UNESCO’s approach to AI centers human rights, the Biden Administration’s voluntary commitments from AI companies addresses deception and fraud, and the Organization for Economic Cooperation and Development has incorporated cognitive liberty into its principles for responsible governance of emerging technologies. But while laws and proposals like these are making strides, they often focus on subsets of the problem, such as privacy by design or data minimization, rather than mapping an explicit, comprehensive approach to protecting our ability to think freely. Without robust legal frameworks in place worldwide, the developers and providers of these technologies may escape accountability. This is why mere incremental changes won't suffice. Lawmakers and companies urgently need to reform the business models on which the tech ecosystem is predicated.A well-structured plan requires a combination of regulations, incentives, and commercial redesigns focusing on cognitive liberty. Regulatory standards must govern user engagement models, information sharing, and data privacy. Strong legal safeguards must be in place against interfering with mental privacy and manipulation. Companies must be transparent about how the algorithms they’re deploying work, and have a duty to assess, disclose, and adopt safeguards against undue influence.Much like corporate social responsibility guidelines, companies should also be legally required to assess their technology for its impact on cognitive liberty, providing transparency on algorithms, data use, content moderation practices, and cognitive shaping. Efforts at impact assessments are already integral to legislative proposals worldwide, including the EU’s Digital Services Act, the US's proposed Algorithmic Accountability Act and American Data Privacy and Protection Act, and voluntary mechanisms like the US National Institute of Standards and Technology’s 2023 Risk Management Framework. An impact assessment tool for cognitive liberty would specifically measure AI’s influence on self-determination, mental privacy, and freedom of thought and decisionmaking, focusing on transparency, data practices, and mental manipulation. The necessary data would encompass detailed descriptions of the algorithms, data sources and collection, and evidence of the technology's effects on user cognition.Tax incentives and funding could also fuel innovation in business practices and products to bolster cognitive liberty. Leading AI ethics researchers emphasize that an organizational culture prioritizing safety is essential to counter the many risks posed by large language models. Governments can encourage this by offering tax breaks and funding opportunities, such as those included in the proposed Platform Accountability and Transparency Act, to companies that actively collaborate with educational institutions in order to create AI safety programs that foster self-determination and critical thinking skills. Tax incentives could also support research and innovation for tools and techniques that surface deception by AI models.Technology companies should also adopt design principles embodying cognitive liberty. Options like adjustable settings on TikTok or greater control over notifications on Apple devices are steps in the right direction. Other features that enable self-determination—including labeling content with “badges” that specify content as human- or machine-generated, or asking users to engage critically with an article before resharing it—should become the norm across digital platforms.The TikTok policy change in Europe is a win, but it’s not the endgame. We urgently need to update our digital rulebook, implementing new laws, regulations, and incentives that safeguard user’s rights and hold platforms accountable. Let’s not leave the control over our minds to technology companies alone; it’s time for global action to prioritize cognitive liberty in the digital age.WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at [email protected]. | Emerging Technologies |
Artists have frequently depicted the environmental impact of technology. The Impressionists of the 19th century were known for their paintings of trains and the shifting landscapes of industrialization. Photographers in the early 20th century captured with awe the trams and high-rises of the rapidly escalating urban environment. Amid the social movements of the 1960s and ’70s, environmental art became a major new form as artists tried to express the precarity of local ecologies, increasingly aware of the long-term consequences of economic activities. Artists explore emerging technologies to address their potentials and problems, with recent attention turning to the carbon footprint of our electronic expansion, as well as what might be done about it.For artists who want to experiment with NFTs and blockchain, the desire to create environmental art seems to conflict with the actual goal of saving the environment. The Bitcoin and Ethereum platforms operate on a principle called “proof of work” (PoW), in which computers solve complex puzzles to verify a transaction, for which that computer (or “miner”) is then rewarded with some amount of the cryptocurrency. Initially, people could mine on a simple gaming computer. However, the system is designed to increase the difficulty of the puzzles as more people, or rather computers, join the peer-to-peer network. This energy increase is an intentional part of the security in the PoW system.As a result, according to research conducted by artist and computer scientist Memo Akten, by the end of 2020, mining an NFT took at least 35 kWh of electricity—that is, the process, from mouse click to claiming the right to produce the block, demanded that much energy, emitting 20 kg of CO2. For comparison, sending an email produces a few grams of CO2, and watching an hour of Netflix produces only 36 grams, Akten says. Others examining NFTs and studies of Bitcoin have found even higher emissions. Though people debate the calculations, the undeniable point is that carbon emissions must be recognized and addressed, since emissions are responsible for the climate crisis’ temperature increase and ocean acidification, both of which kill existing life.Amid the speculative enthusiasms of Silicon Valley and other global tech breeding grounds, financiers seek profit, not sustainability, in blockchains. Given the energy required to ensure a cryptographically secure blockchain, it seems as though there is no way to be an environmentalist and use the technology. But some artists are now reimagining the system, using blockchain to propose sustainable practices.As early as 2017, artist and engineer Julian Oliver recognized that the number of computers competing to solve a puzzle and produce the hash for a transaction must demand enormous energy from oil, coal, or natural gas to power those machines. He proceeded to create Harvest (2017), which is both a media work and a working prototype for an alternative crypto-mining operation. Adapting a small wind turbine with environmental sensors, a weatherproof computer, and a 4G uplink, the machine uses wind energy as a source of electricity to mine cryptocurrency. All proceeds were funneled to climate change research.As more artists became aware of the environmental consequences of blockchain practices, they pressed for platforms to move away from PoW. An alternative now exists called “proof of stake” (PoS), which some alt-coins have been using for a while. PoS uses a pseudo-random process to assign a miner—now called a “forger” in this PoS landscape—the right to validate a block. The forger has to commit a stake in the chain, typically a deposit of a certain amount, to become a validator that can store data, process transactions, and add new blocks to the chain; a greater stake leads to more validation opportunities, and thus more income. There aren’t many computers competing to solve the puzzle, since only one is assigned to forge the block, which greatly reduces the energy expenditure and carbon emissions of the process. Though there are security risks and economic implications that lead some to reject its improved environmental impact, many artists have committed to using PoS chains.Nancy Baker Cahill is one of them. Her NFTs are largely on PoS chains, but since ether (ETH) is the most popular NFT cryptocurrency, she has received some of that, which is PoW. Baker Cahill has staked that ether to a new iteration of the chain known as ETH2 as a vote for the currency to shift to PoS, because these seemingly ethereal realms have very real material impact—which is a topic of her work as well. Baker Cahill adopts augmented reality’s abilities to overlay content on a geo-specific location and help audiences grasp the interconnectedness of the virtual and the tangible. She says, “We are hybrids of technology and microbes, inhabiting a largely undifferentiated natural-artificial world … The artist’s role in this situation is to discover and harness adaptable parts of old systems and mutate them into something new.” Recognizing that “profound truths often lurk in constructed simulations,” Baker Cahill launched Mushroom Cloud, an AR projection and complex of NFTs, in December 2021, during Art Basel Miami, and iterated for Frieze LA. (Showing this project at fairs aims to keep the art industry aware of the carbon footprint associated with global travel.)Courtesy of Nancy Baker CahillThe animation opens with an incandescent mushroom cloud exploding over the water, a visual that links the impact of continued carbon consumption in the 21st century with the nuclear devastation of the 20th century. And yet, the mushroom is also a symbol of hope; fungi have the power to break down most hydrocarbon materials, including oil spills, and can be used to produce sustainable alternatives to plastic. Their underground mycelia transport carbon through a mycorrhizal lattice connecting and communicating with plants, in what the biologist Merlin Sheldrake called the wood wide web. The fungal system is similar to our internet—the basis for blockchain and therefore the NFT that enables sale of this work, but also 4th Wall App, which allows anyone to access this public AR work. The app also curates geolocated projects around the world. In this way, art can transport ideas across the internet, like mushrooms communicate life across the planet.There is no context in which we can speak of the environment separate from economics. The two are bound together. In 1972, a team of international researchers published Limits to Growth, urging the need to take global warming seriously, not only for environmental and humanitarian reasons, but for national security and economic reasons as well. Blockchain is a technology with protocols and features still in development. Its ecological impact can be mitigated when designed to do so. The Crypto Climate Accord, launched by the Rocky Mountain Institute, the Energy Web Foundation, and the Alliance for Innovative Regulation, encourages all blockchain activity to transition to renewable energy by 2030 and to reach net zero emissions by 2040. This requires that any greenhouse gases going into the atmosphere be balanced by a removal mechanism, which is the basis of carbon markets.One vision for how this may work comes through M Carbon Dioxide, by artist Sven Eberwein. Produced in November 2020, before the hue and cry over blockchain’s environmental impact hit the mainstream, the artwork uses the NFT format to present how carbon markets could be brought on-chain. M Carbon Dioxide shows a blue sphere and cloud formation—reminiscent of NASA’s famous “Blue Marble.” The image is strewn with black specks that slowly dissipate, representing the 1,000 tons of CO2 purchased and retired as verified credit units on the Verra registry.Courtesy of Sven EberweinBusinesses and individuals purchase carbon credits to compensate for their emissions, such as occur from air travel. A credit becomes available for a company to purchase when an organization proves it avoided emitting—or actually removed from the atmosphere—1 metric ton of carbon, through practices like reforestation, wind farm development, or carbon sequestration. Retiring credits means they are no longer available for use. It limits the offsets that people or businesses can buy.Distrust of carbon markets stems partly from earlier questionable practices (like using the same carbon credit for multiple parties), but registry standards and oversight organizations like Verra Registry or CarbonPlan have instituted greater transparency. The UN climate conference in Glasgow formalized how countries would buy and sell UN-certified carbon credits from one another, as a means of measuring their pledges under the Paris climate agreement. Despite concerns about loopholes, a major achievement was managing the possible double spending problem, which is one of blockchain’s initial use purposes.The offsets in M Carbon Dioxide were purchased directly from two projects: Cerro de Hula Wind Project in Honduras and Bull Run Forest Carbon Project in Belize. In the corner of the work are two QR codes that link to Verra Registry to show the proof of ownership of the carbon retired from both organizations in the name of M Carbon Dioxide. This produces a bridge from the real world to the blockchain. It was a proof of concept but not yet scalable. Eberwein was advised on the accounting and legal side of this project by Toucan Bridge and Offsetra, and it became the model used by Toucan. In this way, artists’ projects can help stimulate new possibilities.Eberwein was able to show how blockchain as a technology, and NFTs in particular, could be “productive instruments in fighting the climate crisis by bridging CO2 offsets on-chain and utilizing them in new value-additive ways.” In this process, he connected with KlimaDAO, an organization that tokenizes third-party-verified carbon offsets, making use of blockchain’s transparency and immutability to ensure that those credits are permanently traced and won’t lead to double use. Eberwein was intrigued by the potential in how KlimaDAO seeks to accelerate the cost of carbon, in order to pressure businesses into investing in low-carbon technologies and carbon-removal projects. KlimaDAO’s treasury represents semi-retired carbon credits. (Technically, the credits retain their economic value and act as financial backing for Klima, so they are not retired; but, since they can’t leave the KlimaDAO treasury and are removed from the marketplace, they operate as if retired.) The decrease in carbon offsets available to businesses drives up the price, incentivizing alternative fuel programs and practices.Courtesy of Sven EberweinEberwein was among many artists concerned for the environment who was criticized for using Ethereum, given its PoW energy demands. M Carbon Dioxide was carbon-neutral, due to the offsets assigned for the project, but he shifted to Polygon for CO2_Compound (2021). In this iteration of his continued attempts to discover how blockchain might impinge on fossil fuel markets, he made the artwork into an economic actor. CO2 Compound appears like an eye, with the pupil representing the Klima token staked into the project, then valued at 4.14 tons of carbon offsets.That token can’t be extracted, and so it is a commitment to KlimaDAO’s project to produce sustainable and verifiable carbon markets. Each Klima token is backed by a minimum of 1 ton of tokenized carbon offsets that are held in reserves by the treasury; the reserves in the treasury determine the maximum supply of Klima tokens at any given time. As the token increases in value, so does the amount of carbon tonnage that it represents, tracked by a website. After three months, the work has compounded to 26 tons of carbon offset, which is a remarkable 525 percent return. As an NFT, the work was sold for 7.5 ether on December 6, 2021, through OceanDrop, an auction organized by the Open Earth Foundation funding marine life conservation. What is fascinating about KlimaDAO is the way it aims to subvert current carbon practices so that the value of carbon comes not from its emission but from its removal and sequestration. Eberwein’s work experiments with the potential of this new economic model to protect our environment.These artists communicate and challenge the impact of this emergent technology on our fragile ecosystem. Digital artists have a chance to iterate in their explorations of climate and energy issues, to risk and fail and try again, as they attempt to both visualize and enact a better set of environmental politics. It took one carpet-tile company 30 years to develop a carbon-negative product; every advance revealed additional improvements to adopt. Mistakes are not a reason to quit, and fear of failure cannot stop efforts to find better ways of being in the world. Memo Atken is right when he warns, “Rejection of technology is a rejection of humanity. To break out of this false dichotomy, we must adapt a holistic approach—to embrace not only technology, but all of humanity, all of nature—including technology.” The flaws in current blockchain practices can’t be wished away, and only active exploration of the consequences can create alternatives. These artists are not just representing how blockchains work, but reimagining how they can work to defend our world.More Great WIRED Stories📩 The latest on tech, science, and more: Get our newsletters!Here come the underdogs of the robot OlympicsPokémon Legends: Arceus isn't great. It doesn't matterInside Trickbot, Russia's notorious ransomware gangUse these keyboard shortcuts and ditch your mouseThe unnerving rise of video games that spy on you👁️ Explore AI like never before with our new database✨ Optimize your home life with our Gear team’s best picks, from robot vacuums to affordable mattresses to smart speakers | Emerging Technologies |
There's a new use for 5G networks: spotting wildfires before they get out of control, an increasing worry as climate change makes fires spread faster and burn longer than before.The startup Pano AI uses a series of cameras that survey the wilderness and AI algorithms that watch for telltale smoke -- an indicator of small blazes that could grow into raging wildfires. That footage is sent to the startup's headquarters for human confirmation, and if a fire is burning, evidence is sent to clients who could be affected. While Pano AI had been sending evidence photos over 4G LTE networks at slow rates of around 20 to 30 6-megapixel images per minute, its new partnership with T-Mobile has it using the carrier's 5G network to send video at 30 frames per second, which is around 90 times more data. Ultimately, getting evidence to Pano AI's clients, which include utility companies, much quicker on 5G means a faster response from firefighters and potentially squashing big fires before they get dangerous."The ultimate problem we're solving is megafires, and we're helping fire departments deploy a new strategy that is gaining popularity, which is to put more resources on the fire earlier," said Sonia Kastner, CEO of Pano AI. Rather than slowly increase the amount of fire trucks and aircraft flying in to drop extinguishing payloads as the fire grows, Rapid Initial Attack is a new strategy to send all those resources immediately as soon as the fire is detected. Naturally, having a network of fire-spotting cameras helps direct those resources to the right locations quickly. Pano AI works with a number of utilities, governments, fire authorities, forestry companies and private landlords who in turn work with local emergency responders. Its newest client and the first with a system using T-Mobile's 5G network is Portland General Electric (PGE), a utility supplying gas and electricity to 16 million customers around Portland, Oregon. Pano AI has 20 cameras set up in the forests surrounding the city that give 10-mile panoramic views, which include powerlines. This lets PGE know if fires are headed toward its infrastructure.T-Mobile recruited Pano AI to be part of its Innovation Lab alongside other companies harnessing 5G to improve their services, such as Mixhalo, which is using the carrier's 5G network to pipe in concert audio directly to audience members' phones. But Pano AI's partnership goes deeper, as it's mounting its cameras on T-Mobile's cell towers, saving months of time and paperwork needed to request and install its equipment on other signal towers or similar vantage points. Pano AI is expecting better coverage than the 4G LTE it was using after committing to implementing T-Mobile's 5G network with future client solutions too. The network will cover even more rural areas with the over 7,000 additional midband 5G licenses the carrier acquired in last week's spectrum auction. That should cover areas that challenged Pano AI in the past, like prior clients that were so remote they needed subscriptions to SpaceX's Starlink service or even private LTE networks requiring new towers to be built -- a costly process that can take lots of precious time to get up and running, Kastner said. "And the wildfires do not wait," Kasner said.While Pano AI's algorithms only spot smoke at the moment, its cameras are capturing a lot of data that the startup's clients want to sift through for other indicators of fire safety, like building of brush that could fuel wildfires. In the future, Pano AI's tech could be advanced to spot indicators of other weather-related calamities such as floods or hurricanes, Kastner said. For now, Pano AI has over a dozen clients spread across five US states and two Australian states. They're based primarily in the Western US, though the startup is in talks with potential clients in other areas that are also suffering more wildfires, such as Florida and Tennessee. 5G will help Pano AI grow its tech and services. "5G actually lowers the cost of prediction, which increases the scale of prediction as well," said John Saw, executive vice president of advanced and emerging technologies at T-Mobile. | Emerging Technologies |
Justin Sullivan/Getty Images (left), David Ramos/Getty Images (right)Apple CEO Tim Cook discussed AR, VR, and more in a new interview with Dutch media outlet Bright.He said Apple avoids using the term "metaverse" because the average person doesn't know what it is.The tech giant's approach is a stark contrast to Mark Zuckerberg's obsession with the metaverse.Apple CEO Tim Cook revealed in an exclusive interview with Dutch media outlet Bright that the tech giant's hesitancy toward joining the metaverse hype is intentional."I always think it's important that people understand what something is. And I'm really not sure the average person can tell you what the metaverse is," Cook told the outlet.The metaverse is a term derived from science-fiction and refers to a hypothetical version of a three-dimensional internet accessed via immersive technologies rather than 2D screens.The word "metaverse" was only mentioned once on Apple earnings calls so far this year, compared to 36 mentions on Meta earnings calls. Despite the buzz word's explosive usage across the industry, executives are divided on whether the metaverse represents a real product — like virtual reality — or if it's just a concept for a virtual world that may never actually exist.However, Mark Zuckerberg told staff in July that Meta is in "deep, philosophical competition" with Apple to build the metaverse.Following Facebook's name change and announcement last fall that it would invest $10 billion into building the so-called metaverse, the iPhone maker stood out from the wider tech industry in its apparent refusal to join Mark Zuckerberg's latest obsession.Instead, Apple has focused its emerging tech investment specifically on Augmented Reality.When an analyst asked Cook this January about Apple's role in the metaverse space, he responded that Apple is "always exploring new and emerging technologies" and pointed to the company's 14,000 AR kit apps in the App Store."I think AR is a profound technology that will affect everything," Cook told Bright in the interview published Friday, adding that virtual reality is not a way to "live your whole life.""It's something you can really immerse yourself in. And that can be used in a good way," he said. "VR is for regular periods, but not a way to communicate well. So I'm not against it, but that's how I look at it."Read the original article on Business Insider | Emerging Technologies |
Buzzy products like ChatGPT and DALL-E 2 will have to turn a profit eventually.Getty; The AtlanticDecember 21, 2022, 3:34 PM ETArthur C. Clarke once remarked, “Any sufficiently advanced technology is indistinguishable from magic.” That ambient sense of magic has been missing from the past decade of internet history. The advances have slowed. Each new tablet and smartphone is only a modest improvement over its predecessor. The expected revolutions—the metaverse, blockchain, self-driving cars—have plodded along, always with promises that the real transformation is just a few years away.The one exception this year has been in the field of generative AI. After years of seemingly false promises, AI got startlingly good in 2022. It began with the AI image generators DALL-E 2, Midjourney, and Stable Diffusion. Overnight, people started sharing AI artwork they had generated for free by simply typing a prompt into a text box. Some of it was weird, some was trite, and some was shockingly good. All of it was unmistakably new terrain.That sense of wonderment accelerated last month with the release of OpenAI’s ChatGPT. It’s not the first AI chatbot, and it certainly won’t be the last, but its intuitive user interface and overall effectiveness leave the collective impression that the future is arriving. Professors are warning that this will be the end of the college essay. Twitter users (in a brief respite from talking about Elon Musk) are sharing delightful examples of genuinely clever writing. A common refrain: “It was like magic.”ChatGPT is free, for now. But OpenAI’s CEO Sam Altman has warned that the gravy train will eventually come to a screeching halt: “We will have to monetize it somehow at some point; the compute costs are eye-watering,” he tweeted. The company, which expects to make $200 million in 2023, is not a charity. Although OpenAI launched as a nonprofit in 2015, it jettisoned that status slightly more than three years later, instead setting up a “capped profit” research lab that is overseen by a nonprofit board. (OpenAI’s backers have agreed to make no more than 100 times what they put into the company—a mere pittance if you expect its products to one day take over the entire global economy.) Microsoft has already poured $1 billion into the company. You can just imagine a high-octane Clippy powered by ChatGPT.Making the first taste free, so to speak, has been a brilliant marketing strategy. In the weeks since its release, more than a million users have reportedly given ChatGPT a whirl, with OpenAI footing the bill. And between the spring 2022 release of DALL-E 2, the current attention on ChatGPT, and the astonished whispers about GPT-4, an even more advanced text-based AI program supposedly arriving next year, OpenAI is well on its way to becoming the company most associated with shocking advances in consumer-facing AI. What Netflix is to streaming video and Google is to search, OpenAI might become for deep learning.How will the use of these tools change as they become profit generators instead of loss leaders? Will they become paid-subscription products? Will they run advertisements? Will they power new companies that undercut existing industries at lower costs?We can draw some lessons from the trajectory of the early web. I teach a course called “History of the Digital Future.” Every semester, I show my students the 1990 film Hyperland. Written by and starring Douglas Adams, the beloved author of the Hitchhiker’s Guide to the Galaxy series, it’s billed as a “fantasy documentary”—a tour through the supposed future that was being created by multimedia technologists back then. It offers a window through time, a glimpse into what the digital future looked like during the prehistory of the web. It’s really quite fun.The technologists of 1990 were focused on a set of radical new tools that were on the verge of upending media and education. The era of “linear, noninteractive television … the sort of television that just happens at you, that you just sit in front of like a couch potato,” as the film puts it, was coming to an end. It was about to be replaced by “software agents” (represented delightfully by Tom Baker in the film). These agents would be, in effect, robot butlers: fully customizable and interactive, personalizing your news and entertainment experiences, and entirely tailored to your interests. (Sound familiar?)Read: The internet is Kmart nowSquint, and you can make out the hazy outline of the present in this imagined digital future. We still have linear, noninteractive television, of course, but the software agents of 1990 sound a lot like the algorithmic-recommendation engines and news feeds that define our digital experience today.The crucial difference, though, is whom the “butlers” serve in reality. Early software agents were meant to be controlled and customized by each of us, personally. Today’s algorithms are optimized to the needs and interests of the companies that develop and deploy them. Facebook, Instagram, YouTube, and TikTok all algorithmically attempt to increase the amount of time you spend on their site. They are designed to serve the interests of the platform, not the public. The result, as the Atlantic executive editor Adrienne LaFrance put it, is a modern web whose architecture resembles a doomsday machine. In retrospect, this trajectory seems obvious. Of course the software agents serve the companies rather than the consumers. There is money in serving ads against pageviews. There isn’t much money in personalized search, delight, and discovery. These technologies may develop in research-and-development labs, but they flourish or fail as capitalist enterprises. Industries, over time, build toward where the money is.The future of generative AI might seem like uncharted terrain, but it’s really more like a hiking trail that has fallen into disrepair over the years. The path is poorly marked but well trodden: The future of this technology will run parallel to the future of Hyperland’s software agents. Bluntly put, we are going to inhabit the future that offers the most significant returns to investors. It’s best to stop imagining what a tool such as ChatGPT might accomplish if freely and universally deployed—as it is currently but won’t be forever, Altman has suggested—and instead start asking what potential uses will maximize revenues.New markets materialize over time. Google, for instance, revolutionized web search in 1998. (Google Search, in its time, was magic.) There wasn’t serious money in dominating web search back then, though: The technology first needed to become effective enough to hook people. As that happened, Google launched its targeted-advertising platform, AdWords, in 2001, and became one of the most profitable companies in history over the following years. Search was not a big business, and then it was.This is the spot where generative-AI hype seems to come most unmoored from reality. If history is any guide, the impact of tools such as ChatGPT will mostly reverberate within existing industries rather than disrupt them through direct competition. The long-term trend has been that new technologies tend to exacerbate precarity. Large, profitable industries typically ward off new entrants until they incorporate emerging technologies into their existing workflows.We’ve been down this road before. In 1993, Michael Crichton declared that The New York Times would be dead and buried within a decade, replaced by software agents that would deliver timely, relevant, personalized news to customers eager to pay for such content. In the late 2000s, massive open online courses were supposed to be a harbinger of the death of higher education. Why pay for college when you could take online exams and earn a certificate for watching MIT professors give lectures through your laptop?The reason technologists so often declare the imminent disruption of health care and medicine and education is not that these industries are particularly vulnerable to new technologies. It is that they are such large sectors of the economy. DALL-E 2 might be a wrecking ball aimed at freelance graphic designers, but that’s because the industry is too small and disorganized to defend itself. The American Bar Association and the health-care industry are much more effective at setting up barriers to entry. ChatGPT won’t be the end of college; it could be the end of the college-essays-for-hire business, though. It won’t be the end of The New York Times, but it might be yet another impediment to rebuilding local news. And professions made up of freelancers stringing together piecework may find themselves in serious trouble. A simple rule of thumb: The more precarious the industry, the greater the risk of disruption.Altman himself has produced some of the most fantastical rhetoric in this category. In a 2021 essay, “Moore’s Law for Everything,” Altman envisioned a near future in which the health-care and legal professions are replaced by AI tools: “In the next five years, computer programs that can think will read legal documents and give medical advice … We can imagine AI doctors that can diagnose health problems better than any human, and AI teachers that can diagnose and explain exactly what a student doesn’t understand.”Indeed, these promises sound remarkably similar to the public excitement surrounding IBM’s Watson computer system more than a decade ago. In 2011, Watson beat Ken Jennings at Jeopardy, setting off a wave of enthusiastic speculation that the new age of “Big Data” had arrived. Watson was hailed as a sign of broad social transformation, with radical implications for health care, finance, academia, and law. But the business case never quite came together. A decade later, The New York Times reported that Watson had been quietly repurposed for much more modest ends.The trouble with Altman’s vision is that even if a computer program could give accurate medical advice, it still wouldn’t be able to prescribe medication, order a radiological exam, or submit paperwork that persuades insurers to cover expenses. The cost of health care in America is not directly driven by the salary of medical doctors. (Likewise, the cost of higher education has skyrocketed for decades, but believe me, this is not driven by professor pay increases.)As a guiding example, consider what generative AI could mean for the public-relations industry. Let’s assume for a moment that either now or very soon, programs like ChatGPT will be able to provide average advertising copy at a fraction of existing costs. ChatGPT’s greatest strength is its ability to generate clichés: It can, with just a little coaxing, figure out what words are frequently grouped together. The majority of marketing materials are utterly predictable, perfectly suited to a program like ChatGPT—just try asking it for a few lines about the whitening properties of toothpaste.This sounds like an industry-wide cataclysm. But I suspect that the impacts will be modest, because there’s a hurdle for adoption: Which executives will choose to communicate to their board and shareholders that a great cost-saving measure would be to put a neural net in charge of the company’s advertising efforts? ChatGPT will much more likely be incorporated into existing companies. PR firms will be able to employ fewer people and charge the same rates by adding GPT-type tools into their production processes. Change will be slow in this industry precisely because of existing institutional arrangements that induce friction by design.Then there are the unanswered questions about how regulations, old and new, will influence the development of generative AI. Napster was poised to be an industry-killer, completely transforming music, until the lawyers got involved. Twitter users are already posting generative-AI images of Mickey Mouse holding a machine gun. Someone is going to lose when the lawyers and regulators step in. It probably won’t be Disney.Institutions, over time, adapt to new technologies. New technologies are incorporated into large, complex social systems. Every revolutionary new technology changes and is changed by the existing social system; it is not an immutable force of nature. The shape of these revenue models will not be clear for years, and we collectively have the agency to influence how it develops. That, ultimately, is where our attention ought to lie. The thing about magic acts is that they always involve some sleight of hand. | Emerging Technologies |
Anna Moneymaker/Getty Images
toggle caption
Senate Majority Leader Chuck Schumer, D-N.Y., is embarking on an effort to draft legislation that would put guardrails around rapidly evolving artificial intelligence.
Anna Moneymaker/Getty Images
Senate Majority Leader Chuck Schumer, D-N.Y., is embarking on an effort to draft legislation that would put guardrails around rapidly evolving artificial intelligence.
Anna Moneymaker/Getty Images
For the past several weeks, Senate Majority Leader Chuck Schumer has met with at least 100 experts in artificial intelligence to craft groundbreaking legislation to install safeguards.
The New York Democrat is in the earliest stages of talking to members of his own party and Republicans to gauge their interest in getting behind a new proposed AI law.
"Our goal is to maximize the good that can come of [artificial intelligence]," Schumer said. "And there can be tremendous good, but minimize the bad that can come of it. ... But to do it is more easier said than done."
It's all part of a congressional race to try to catch up legislatively to exploding advances in AI.
Monday night, a bipartisan group of House members will host a top industry figure for a joint dinner. On Tuesday, a Senate panel will hold a hearing to consider new oversight of the technology.
But while lawmakers look to craft AI rules, they face Congress' lackluster history of regulating emerging technologies.
Schumer admits he's facing some clear challenges.
"It's a very difficult issue, AI, because a) it's moving so quickly and b) because it's so vast and changing so quickly," Schumer said.
As Schumer works to build a bipartisan consensus behind his legislative framework, he must also navigate a bitterly divided Congress.
Congress has struggled to regulate emerging technology
Congressional lawmakers missed critical windows to install guardrails for the internet and social media.
Now, it faces the equivalent of trying to put in brakes for a runaway train.
"AI, or automated decision-making technologies, are advancing at breakneck speed," said law professor Ifeoma Ajunwa, who co-founded an AI research program at the University of North Carolina at Chapel Hill. "There is this AI race ... yet ... the regulations are not keeping pace."
Ajunwa says that there aren't enough experts in both computer science and law on Capitol Hill and that this makes AI lawmaking all the more challenging.
"There is a real need for such ... legal training for people who will then end up in Congress," she said.
The name of the technology alone can add a mystique and cause confusion for lawmakers, Ajunwa argues.
She and some industry experts say AI should instead be called "automated decision-making" to reflect the human decision-making — including values and biases — embedded in it.
Lawmakers play catch-up
Sen. Josh Hawley, R-Mo., wants to play a role in the development of AI law, but he admits he has work to do.
"I've got to get educated," he said during a recent ride on a Senate subway train back to his office.
Hawley has loomed large in partisan fights over a variety of issues, but on this topic, he's intrigued by the Democratic leader's plans.
Hawley is the top Republican on a Senate Judiciary Committee subpanel that will examine AI oversight options in a hearing on Tuesday.
"For me right now, the power of AI to influence elections is a huge concern," Hawley said. "So I think we've got to figure out what is the threat level there, and then what can we reasonably do about it?"
Sen. Richard Blumenthal, D-Conn., chairs the subpanel that will hold Tuesday's hearing. Sam Altman, CEO of OpenAI, the company behind the chatbot ChatGPT, will testify for the first time before a congressional panel.
But that marks just one of many planned AI hearings.
Sen. Gary Peters, D-Mich., who chairs the Senate Homeland Security and Governmental Affairs Committee, plans to hold at least one hearing on AI during every work period.
Peters argues that Congress has already seen some progress passing legislation related to AI, including four bills that Peters wrote during the last Congress.
"We're going to continue to focus on that in Homeland Security," he said. "We had a hearing last month. We're going to have another one coming up later this month."
Across the Capitol, Rep. Ted Lieu, D-Calif., will co-lead a bipartisan dinner hosting OpenAI CEO Altman on Monday.
This year, Lieu introduced the first piece of federal legislation written by AI. Lieu said he used ChatGPT by asking the bot how he should write a resolution pushing for AI regulation.
"You have all sorts of harms in the future we don't know about, and so I think Congress should step up and look at ways to regulate," Lieu told reporters just before introducing the legislation.
The urgency is obvious
Law professor Ajunwa, who recently wrote a book on the influence of tech and AI on the modern workplace called The Quantified Worker, worries about AI's privacy issues. She says key questions are not being asked about the technology's impact on disadvantaged people, while the focus remains on profits.
"The way the internet developed, unfortunately, is the same way that AI is developing," she said.
Ajunwa says that with the U.S. already lagging behind the technology — for example, the European Union is already years ahead in regulation efforts — the best bet for regulation may be quicker White House executive actions.
The Biden White House announced a series of initiatives ahead of meetings with industry officials this month.
Still, back at the majority leader's office just off the Senate chamber, Schumer remains undeterred.
"Look, it's probably the most important issue facing our country, our families and humanity in the next 100 years," he said. "And how we deal with AI is going to determine the quality of life for this generation and future generations probably more than anything else." | Emerging Technologies |
Lab-grown meat has been touted as a way to save the planet, but a new study suggests its green credentials are not as solid as many believe.
Researchers have revealed that lab-grown or 'cultured' meat, produced by cultivating animal cells, is up to 25 times worse for the climate than real beef.
Production of real meat has a huge carbon footprint because it requires water, feed and the clearing of trees to make way for cattle.
Despite this, experts say the carbon footprint of lab-grown meat could be 'orders of magnitude higher' once the industry grows.
Although lab-grown meat is yet to hit the shops, British scientists are among those growing meat products in a lab with a view to commercialise them.
The new research was led by scientists at the Department of Food Science and Technology, University of California, Davis.
It has been detailed in a new study published as a preprint paper, yet to be peer-reviewed, on the bioRxiv server.
'Currently, animal cell-based meat products are being produced at a small scale and at an economic loss, however companies are intending to industrialize and scale-up production,' the scientists say in their paper.
'Results indicate that the environmental impact of near-term animal cell-based meat production is likely to be orders of magnitude higher than median beef production if a highly refined growth medium is utilised.'
Good Food Institute, a non-profit organisation that promotes plant- and cell-based alternatives to animal products, stressed that the study has not yet been through a full peer review process, 'so its assumptions and conclusions are subject to change'.
'Several key assumptions in the UC Davis study do not align with the current or expected practices for sourcing and purification of cell culture media ingredients,' a Good Food Institute spokesperson told MailOnline.
Lab-grown meat is different from plant-based 'meat', which is not meat at all but uses vegan ingredients such as vegetable protein to replicate the look and taste of real meat.
Lab-grown or 'cultured' meat is generally seen as more ethical than real meat because it requires a sample of body tissue rather than the death of the animal, although many vegans and vegetarians will not touch it because it is made of animal.
The process can be done with multiple types of animal cell to create an approximation of the real thing, whether it's chicken, pork or beef.
Taking beef as an example, scientists use a cow's stem cells - the building blocks of muscle and other organs - to begin the process of creating the cultured meat.
The cells are placed in petri dishes with a 'growth medium' comprising nutrients such as amino acids, glucose, vitamins, and inorganic salts.
This is supplemented with growth factors and other proteins to help the muscle cells multiply and grow.
They're allowed to proliferate just as they would inside an animal, until there are trillions of cells from a small sample.
These cells later form muscle cells, which naturally merge to form primitive muscle fibres and edible tissue that can be packaged, shipped and sold.
Experts think lab-grown meat is set to become more ubiquitous in the next 10 years, transforming from a niche concept to a common fridge staple.
But for this to happen, production methods will have to be scaled up from mere petri dishes to massive energy-intensive industrial units.
In the study, the scientists estimated the energy required for stages of lab-grown meat's production, from the ingredients making up the growth medium and the energy required to power laboratories, and compared this with beef.
They largely focused on the quantity of growth medium components, including glucose, amino acids, vitamins, growth factors, salts and minerals.
They found the global warming potential of lab-grown meat ranged from 246 to 1,508 kg of CO2 equivalent per kilogram of lab-grown meat, which is four to 25 times greater than the average global warming potential of retail beef.
According to the experts, this does not change depending on which animal's cells are being grown and the meat that's being created, whether it's beef, chicken or lamb.
But the team say that they did not consider the environmental impact of scaling up animal cell-based meat production facilities, which could bump the industry's footprint up even higher.
The team conclude the environmental impact of emerging technologies such as cultured meat is a new concept but 'highly important'.
'Our results indicate that animal cell-based meat is likely to be more resource intensive than most meat production systems according to this analysis,' they say.
Lab-grown meat has its origins a decade ago but the industry is still very young, and Singapore is so far the only country in the world to have approved the meats for sale.
Lab-grown chicken produced by the US company Eat Just was first served at Singapore restaurant in 2020 and was described as tasting 'just like its farmed counterpart'.
Earlier this year in the US, the Food and Drug Administration declared cultured meat safe for human consumption, paving the way for them to be sold stateside, but in the UK the Food Standards Agency is yet to do the same.
The industry has since grown to more than 150 companies as of late 2022, backed by $2.6 billion in investments, according to the Good Food Institute.
Professor Mark Post at Maastricht University in the Netherlands was the first person to present a proof of concept for lab-grown meat, back in 2013.
He thinks it will be so popular with animal welfare activists and burger fans alike it will eventually displace plant-based substitutes, like soy burgers, that are increasingly common in UK supermarkets.
'Novel technologies such as the ones developed in cellular agriculture are part of the solution, next to reducing food waste and changing consumer behaviour,' Professor Post previously told MailOnline.
Never mind plant burgers! Could lab-grown red meat save the environment? Lab-grown meat is set to become more ubiquitous this decade, transforming from a niche concept to a common fridge staple. Professor Mark Post at Maastricht University in the Netherlands unveiled the world's first lab-grown burger from cow muscle cells, in 2013. He's now pioneering a 'kinder and cleaner' way of making beef with his firm, Mosa Meat, which created the world's first hamburger without slaughtering an animal. The company extracts cells from the muscle of an animal, such as a cow for beef, when the animal is under anaesthesia. The cooked Mosa Meat patty looks similar to conventionally-made beef burgers. The company says it tastes 'like meat' The cells then are placed in a dish containing nutrients and naturally-occurring growth factors, and allowed to proliferate just as they would inside an animal, until there are trillions of cells from a small sample. These cells later form muscle cells, which naturally merge to form primitive muscle fibres and edible tissue. From one sample from a cow, the firm can produce 800 million strands of muscle tissue, which is enough to make 80,000 quarter pounders. Mosa Meat has also created cultured fat that it adds to its tissue to form the finished product, which simply tastes 'like meat', the company says. Professor Post think this product will be so popular with animal welfare activists and burger fans alike it will eventually displace plant-based substitutes, like soy burgers, that are increasingly common in UK supermarkets. 'Novel technologies such as the ones developed in cellular agriculture are part of the solution, next to reducing food waste and changing consumer behaviour,' Professor Post told MailOnline. 'A good example of strong trend in consumer behaviour is increased vegetarianism among young generations to unprecedented numbers. 'Most likely, this trend will continue and spread towards other age groups and eventually will lead to disappearance of plant-based meat substitutes.' Mosa Meat received $55 million in 2021 to scale up production of cultured meat. The funding will help extend the firm's current pilot production facility in the Dutch city of Maastricht and develop an industrial-sized production line. | Emerging Technologies |
Is AI a Threat to Remote Work? Here's How to Understand the Challenges and Opportunities of AI in Business
While artificial intelligence has great potential to enhance different aspects of our lives, both personally and professionally, there still remain ethical considerations, and problem areas arise should we fail to pay attention to what exactly controls us.
Opinions expressed by Entrepreneur contributors are their own.
The dawn of the 21st century has ushered in the era of remote work. With technological advancements allowing for increased connectivity amongst individuals, organizations can now operate from disparate locations around the globe. The concept of remote work has been gaining traction over the last decade, with more companies embracing it as a viable option for their businesses.
However, despite its potential benefits, particular challenges must be addressed if remote working is to become an integral part of the workforce. In this article, we will explore some of these challenges and opportunities that lie ahead regarding the future of remote work.
Challenges
One of the primary challenges associated with implementing a successful remote work policy is ensuring that employees remain productive while away from their traditional office environment. It is essential to create effective communication systems and establish clear expectations around duties and tasks to ensure that employees remain motivated and productive. Without these safeguards, productivity could suffer due to distractions or lack of motivation.
Another challenge is providing adequate support systems for staff. When managing a distributed team, it can be challenging to provide consistent feedback and guidance on activities and effectively monitor progress and performance. This can lead to feelings of isolation among staff members, which can harm employee well-being and overall business performance.
Opportunities
Despite these challenges, many opportunities are associated with introducing remote working policies into organizations. One such opportunity lies in cost savings for employers; by reducing rental costs on office spaces or eliminating travel expenses for commuting staff members, organizations can make significant cost reductions which can improve financial performance or provide additional funds for other investments within the business.
Remote working is also beneficial from an employee perspective; studies suggest that staff who can work experience increased job satisfaction due to improved flexibility and control over their daily routine remotely. Additionally, enabling remote working also provides employers access to global talent pools as they no longer need to be confined by physical boundaries when recruiting new staff members.
Finally, enabling flexible working arrangements could help organizations become more agile in responding to changing customer needs or market conditions; by having access to external resources, they'll no longer need to rely solely on internal resources when adapting their operations quickly.
Impact of artificial intelligence on business and society
As technology advances exponentially, so does its application within various fields, including business and society. Artificial intelligence (AI) presents great potential for increasing efficiency and creating innovative solutions within various industries such as healthcare, finance and manufacturing. However, like any new development, AI also raises concerns about its potential societal implications. In this section, we shall explore some key ways AI may have both positive and negative implications for businesses, society and human rights.
Positive effects
1. Enhanced accuracy and efficiency — One significant advantage artificial intelligence offers are its ability to improve accuracy & efficiency across many different tasks. For example, AI-powered bots and applications can automate mundane tasks with precision far beyond what humans would be capable of achieving. This increases output accuracy while freeing up valuable time, which could instead be used to tackle higher-value tasks. As such, adopting AI-driven solutions often leads to increased operational efficiency & cost savings, which can benefit both businesses and society.
2. Improved decision-making capabilities — AI technologies also possess remarkable decision-making capabilities, which can significantly aid in strategic decision-making processes. For example, using automated data analysis algorithms, businesses can gain valuable insights about target markets and customers, leading to improved marketing strategies and customer service protocols.
Similarly, healthcare providers may use AI-driven genomic mapping algorithms to identify diseases earlier than possible, enabling more effective treatment plans before symptoms develop. Such innovations present great potential benefits to societies at large, providing improved medical care while simultaneously reducing costs associated with wasted resources resulting from ineffective decisions being made previously.
Negative effects
1. Loss of human jobs — One concerning factor raised frequently when discussing potential impacts AI might have upon society relates to the loss of jobs currently done by humans being replaced by machines taking over roles once held by people. At the same time, it may create social difficulties, particularly for those already vulnerable, such as low-income earners and elderly citizens
2. Regulation — Another downside of automation through artificial intelligence lies in difficulty surrounding regulation and enforcement. Given the current rate of advancement, technology outpaces traditional regulatory systems meaning lawmakers struggle to keep up with ever-changing technical sectors. This means laws may not sufficiently address issues directly related to emerging technologies, leaving them open to exploitation.
While artificial intelligence has great potential to enhance different aspects of our lives, both personally and professionally, there still remain ethical considerations, and problem areas arise should we fail to pay attention to what exactly controls us. | Emerging Technologies |
India and the U.S. are working with renewed trust in areas of new and emerging technologies, Prime Minister Narendra Modi said on Friday as he thanked the American leadership for giving him a grand welcome.
India and the U.S. are working with renewed trust in areas of new and emerging technologies, Prime Minister Narendra Modi said on Friday as he thanked the American leadership for giving him a grand welcome.
Speaking at a luncheon hosted in his honour by U.S. Vice President Kamala Harris at the White House, Modi said the two countries have added and expanded the scope of the cooperation in the defence and strategic areas.
"We are working with renewed trust in areas of new and emerging technologies. We are resolving long pending and difficult issues in training," he said.
He lauded Indian-origin Vice President Harris and her parents, saying “Your contribution to strengthening our strategic partnership has been incredible.”
"Thank you so much for this grand welcome. The sweet melody of the India-U.S. relationship is composed of our people-to-people relations," Modi said at the event also co-hosted by Secretary of State Antony Blinken. | Emerging Technologies |
There might be lots of news stories about job losses in tech right now but research suggests there are still plenty of openings in open source and Linux to go around.
As Hillary Carter, SVP of research and communications at the Linux Foundation, said in her keynote speech at Open Source Summit North America in Vancouver, Canada: "In spite of what the headlines are saying, the facts are 57% of organizations are adding workers this year."
Other research also points to brighter signs in tech employment trends. CompTIA's recent analysis of the latest Bureau of Labor Statistics (BLS) data suggests the tech unemployment rate climbed by just 2.3% in April. In fact, more organizations plan to increase their technical staff levels rather than decrease.
The demand for skilled tech talent remains strong, particularly in fast-developing areas, such as cloud and containers, cybersecurity, and artificial intelligence and machine learning.
New hiring is focused on developers and IT managers. Companies are also spending more on training. As many as 70% of organizations surveyed by the Linux Foundation provide training opportunities for their existing technical staff on new technologies. That investment in training is being driven by the fact that there aren't enough experts in hot technologies, such as Kubernetes and generative AI, to go around.
If your company isn't interested in helping you learn new skills, that's a red flag. The Linux Foundation found investing in upskilling is vital for companies looking to keep up with emerging technologies and remain competitive in the market. No matter how great a job is, if the company goes down, your job will fall with it.
Looking further ahead, it appears that taking specific technical classes and getting certified is a really smart move to help you land your next tech job. Interestingly, a college degree is no longer seen as such a huge benefit. Businesses responding to the Linux Foundation's research felt upskilling (91%) and certifications (77%) are more important than a university education (58%) when it comes to addressing technology needs.
So, as Jim Zemlin, executive director at the Linux Foundation, said at the conference: "It's been a tough time in tech. We've seen rounds of layoffs in the name of cost-cutting. But open source is countercyclical to these trends. The Linux Foundation itself, for instance, had its best first quarter ever. So, I'm seeing some hope where there's darkness."
Let's hope that's an indication there's a bright light coming at the end of what's been a dark tunnel for many IT professionals. | Emerging Technologies |
NASA and the U.S. Air Force are testing Joby Aviation’s eVTOL air taxi for potential civilian and military applications, building on NASA’s existing Advanced Air Mobility research and aiming to redefine future air transportation.
A new air taxi from the manufacturer Joby Aviation will allow NASA to evaluate how this kind of vehicle could be integrated into our skies for everyday use, while the Air Force researches its potential military use.
On September 25, Joby announced the delivery of one of their air taxis – an electric vertical takeoff and landing (eVTOL) aircraft – through a funded contract with their customer, the U.S. Air Force AFWERX Agility Prime program. NASA has an interagency agreement with AFWERX to use the aircraft for testing concentrated on how such vehicles could fit into the national airspace.
“NASA and AFWERX have an important, active collaboration on Advanced Air Mobility,” said Parimal Kopardekar, integration manager for NASA’s Advanced Air Mobility (AAM) mission. “This collaboration puts the best talent with the latest resources in the same place to accelerate the future of this industry.”
Starting in 2024, NASA pilots and researchers will work to test the Joby aircraft, focusing on air traffic management, flight procedures, and ground-based infrastructure. The research will use NASA pilots and hardware, such as the NASA Mobile Operating Facility, a research lab on wheels.
NASA’s History With AAM
NASA’s Advanced Air Mobility (AAM) research has contributed to this moment. Through this AAM research, NASA is developing a blueprint for how the air transportation systems of the future will fit together.
Air taxis and drones can be used for emergency response, fighting wildfires, and delivering medical supplies – and they will make our communities more connected and accessible than ever. NASA’s goal is to help mature technologies that will push the entire air taxi and drone industry forward, sharing its findings with the Federal Aviation Administration (FAA) to inform new policies. The work with the Joby aircraft will contribute to the wealth of knowledge NASA’s Aeronautics Research Mission Directorate has already provided for industry and the FAA.
This work builds upon progress NASA made with Joby under a now-completed non-reimbursable Space Act Agreement. The research focused on studying aircraft noise and involved a series of flight test simulations in Joby’s simulator, as well as flight testing.
Joby was one of NASA’s Small Business Innovation Research (SBIR) recipients during the early stages of the company’s technology development. NASA’s SBIR program provides support that small businesses jumpstart innovative technologies, benefitting the U.S. economy.
Military Applications and Future Prospects
On the military front, the AFWERX’s Agility Prime program is primarily responsible for exploring the potential defense applications of these revolutionary aircraft. The first of the Joby air taxis was delivered to Edwards Air Force Base in California. At this location, the Emerging Technologies Integrated Test Force of the 412th Test Wing is slated to spearhead the flight test campaign for both Joby and Agility Prime. Additionally, NASA’s Armstrong Flight Research Center is conveniently located at Edwards, making it a strategic location for extensive flight research. The delivery to Edwards marks the initiation of a series, with several more Joby aircraft destined for testing at different U.S. military bases in the future. | Emerging Technologies |
Foundation models, which include large language models and generative artificial intelligence (AI), that have emerged over the past five years, have the potential to transform much of what people and businesses do. To ensure that innovation in AI continues in a way that benefits consumers, businesses and the UK economy, the government has asked regulators, including the Competition and Markets Authority (CMA), to think about how the innovative development and deployment of AI can be supported against five overarching principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
In line with government’s AI white paper and the CMA’s role to support open, competitive markets, the review seeks to understand how foundation models are developing and produce an assessment of the conditions and principles that will best guide the development of foundation models and their use in the future.
This initial review will:
- examine how the competitive markets for foundation models and their use could evolve
- explore what opportunities and risks these scenarios could bring for competition and consumer protection
- produce guiding principles to support competition and protect consumers as AI foundation models develop
The development of AI touches upon a number of important issues, including safety, security, copyright, privacy, and human rights, as well as the ways markets work. Many of these issues are being considered by government or other regulators, so this initial review will focus on the questions the CMA is best placed to address − what are the likely implications of the development of AI foundation models for competition and consumer protection?
Sarah Cardell, Chief Executive of the CMA, said:
AI has burst into the public consciousness over the past few months but has been on our radar for some time. It’s a technology developing at speed and has the potential to transform the way businesses compete as well as drive substantial economic growth.
It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information. Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection.
The CMA is seeking views and evidence from stakeholders and welcomes submissions by 2 June 2023. The CMA encourages interested parties to respond and be proactive in identifying relevant evidence.
Following evidence gathering and analysis, the CMA will publish a report which sets out its findings in September 2023.
All updates on the CMA’s work in this area can be found on the artificial intelligence case page.
Note to editors:
- The CMA is exercising its function, under section 5 of the Enterprise Act 2002 (its general review function) of obtaining, compiling, and keeping under review information about matters relating to the carrying out of its functions. The CMA is carrying out this review with a view to (among other things) ensuring that it has sufficient information to take informed decisions in relation to its work.
- The CMA has taken steps to ensure it is proactive and forward-looking in the emergence of new technologies or emerging markets. This includes the development of its internal horizon scanning capabilities to identify new and emerging technologies and trends in digital markets. Foundation models were prioritised in the CMA’s 2022 scan of important future technological developments.
- In March 2023, the UK Government published its white paper on AI, noting that a pro-innovation and proportionate approach to the regulation of how AI is used is key to realise the benefits it has to offer.
- The CMA will work closely with the Office for AI, and fellow members of the Digital Regulation Cooperation Forum (DRCF) on the review and share findings with government to help inform the UK’s AI strategy. The CMA will engage with businesses, academics and public policy stakeholders in this sector and welcomes views or evidence on foundation models by 2 June 2023.
- There are other important questions raised by foundation models – copyright and intellectual property, online safety, data protection, security and more – but these are not included in the scope of this review.
- All media enquiries should be directed to the CMA press office by email on [email protected], or by phone on 020 3738 6460. | Emerging Technologies |
Google is developing an artificial intelligence tool — code-named “Genesis” — that will be capable of writing news articles and crafting headlines, the company said Thursday.
The search giant has approached major news organizations — including News Corp, the parent company of the New York Post and the Wall Street Journal — to pitch them on the benefits of its AI tool.
However, Google stressed its robots would not replace ink-stained editors and digital journalists currently employed.
“Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity, just like we’re making assistive tools available for people in Gmail and in Google Docs,” a spokesperson told The Post.
Journalists, however, were more skeptical.
Some executives who sat in on the presentation said the plan for the product was “unsettling” and glossed over the human effort required to write “accurate and artful news stories,” the New York Times reported, citing three people familiar with the matter.
Others were less quick to pass judgment on Google boss Sundar Pichai’s vision for Genesis, which he sees as a potential “personal assistant” for journalists that could help ease their workload by handling basic tasks, a source told the Times, which along with the Washington Post was also approached.
“We have an excellent relationship with Google, and we appreciate Sundar Pichai’s long-term commitment to journalism,” a News Corp spokesman said.
The New York Times and the Washington Post declined to comment.
The Google spokesperson said the company was in the “earliest stages of exploring ideas to potentially provide AI-enabled tools to help journalists with their work.”
“Quite simply, these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles,” the rep said.
Some outlets, such as BuzzFeed and the tech news site CNET, have already begun to make use of AI in their coverage.
However, Google is developing AI tools for news content during a period of significant tension between media outlets and Big Tech firms.
Last month, USA Today publisher Gannett sued Google for allegedly pursuing a “deceptive scheme” to gain a monopoly over the online advertising market. Google has denied engaging in anti-competitive business practices.
Separately, California state lawmakers recently advanced a bill that would require tech firms to pay media outlets in order to use their news content. That piece of legislation is set for a final vote later this year.
Facebook parent Meta fired back, warning that it could pull all news content from its platforms in California if it were to receive final approval. | Emerging Technologies |
Renewable fuel generation is essential for a low carbon footprint economy. Thus, over the last five decades, a significant effort has been dedicated towards increasing the performance of solar fuels generating devices. Specifically, the solar to hydrogen efficiency of photoelectrochemical cells has progressed steadily towards its fundamental limit, and the faradaic efficiency towards valuable products in CO2 reduction systems has increased dramatically. However, there are still numerous scientific and engineering challenges that must be overcame in order to turn solar fuels into a viable technology. At the electrode and device level, the conversion efficiency, stability and products selectivity must be increased significantly. Meanwhile, these performance metrics must be maintained when scaling up devices and systems while maintaining an acceptable cost and carbon footprint. This roadmap surveys different aspects of this endeavor: system benchmarking, device scaling, various approaches for photoelectrodes design, materials discovery, and catalysis. Each of the sections in the roadmap focuses on a single topic, discussing the state of the art, the key challenges and advancements required to meet them. The roadmap can be used as a guide for researchers and funding agencies highlighting the most pressing needs of the field.Gideon Segev1, Roel van de Krol2 and Frances Houle3 1 Department of Physical Electronics, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel 2 Institute for Solar Fuels, Helmholtz-Zentrum Berlin für Materialien und Energie GmbH, Hahn-Meitner-Platz 1, 14109 Berlin, Germany 3 Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, United States of AmericaStatusThe concept of using artificial systems that emulate natural photosynthesis to generate fuels from CO2 and H2O with sunlight as the sole source of energy dates from the 1970s, and has stimulated active research worldwide to make the idea a reality, and pave the way to a solar fuels industry [1]. As renewable energy sources have developed in the intervening 50 years, the uses for chemicals synthesized using sunlight have expanded beyond the initial focus on fuels. It now encompasses green sources of feedstocks such as syngas for other processes, specialty chemicals, and storage of intermittent renewable electricity from photovoltaic and wind sources in chemical form. This complements conventional energy storage technologies such as batteries and pumped-storage hydroelectricity.Two basic architectures have emerged—fully integrated photoelectrochemical systems, and electrochemical systems coupled to an external photovoltaic element. Both have enjoyed continuous improvements in stability, efficiency, and selectivity thanks to intensive research programs on light absorbers and catalysts—especially in the last decade. Now that reported efficiencies for lab-scale photoelectrochemical devices regularly exceed 10%, which has long been considered a minimum threshold for any practical application, engineering will play an increasingly important role in the field. This is not only because it is important to show practical demonstrations to governments and the public, but also because the design of a system must be set before the performance requirements for the absorber and catalyst materials can be specified. For the more challenging reactions, such as photoelectrochemical CO2 reduction, many fundamental questions on reaction mechanisms, selectivity, and stability remain. This is in contrast to the progress made in dark electrocatalysis of CO2 reduction, and can be attributed to the lack of integrated catalyst-photocathode candidates for this reaction. Answering these questions increasingly requires close integration of theoretical and experimental efforts and the use of operando methods. This highlights the interdisciplinary nature of the field, where advances in a variety of scientific and engineering disciplines are required to make progress (figure 1).Figure 1. Conceptual drawing of a solar fuel generator (left), showing the functional layers of the device (middle) and the atomic-scale electrochemical processes that occur at the catalytically active surfaces. The numbers indicate the section addressing every component or challenge.Download figure: Standard image High-resolution image Technoeconomic analysis and lifecycle assessment (TEA/LCA) studies have been invaluable for guiding performance targets [2–7], even though the uncertainties inherent to any early stage technology mean that there is still significant spread in the projected efficiency and cost targets for solar fuel generators. This also explains the differences in performance targets that the reader may notice in the different sections of this Roadmap; formulating a consistent set of agreed-upon targets is still a work in progress in the field of solar fuels. However, there seems to be broad agreement that the lifetime of a solar fuel module should exceed ten years in the field, manufacturing processes should be scalable and ultra-low cost (<300 $ m−2 [3]), and the efficiency of selective generation of a specific product should exceed 10% solar-to chemical energy. For lower-value products such as hydrogen, an efficiency close to the theoretical limit (which is about 22% for H2 generation [8]) is probably needed for economic viability.Current and future challengesEfficiency, selectivity, lifetime and scalability are the predominant scientific challenges facing the field (figure 2). In most cases, the efficiencies are not only well below the theoretical limit, but also fall short of the performance targets from TEA/LCA studies. This is particularly true when multicarbon products are targeted. Moreover, lifetimes are measured in hours, not months or years as required for a sustainable technology. The reality is that the CO2 reduction products are usually mixtures because perfect selectivity and single-pass CO2 consumption efficiency have not been realized. Thus, downstream separation and purification costs, both in terms of energy and capital, must be evaluated [7].Figure 2. The main performance metrics and challenges for advancing solar fuel systems.Download figure: Standard image High-resolution image To make improvements, there is a tendency to propose increasingly complex architectures at both micro and macro length scales, e.g. multi-layer structures (tandem devices), hybrid solid state/molecular/bio-inspired assemblies, membrane/electrode assemblies, nano-structured catalysts, etc, which pose counterbalancing manufacturing and materials lifetime challenges. The materials complexity and the nature of the solar resource pose challenges not faced by electrochemical systems. In practice, photoelectrochemical systems must deal with the natural intermittency of solar radiation diurnally and seasonally, as well as the variability of direct and diffuse illumination as a function of geography. A fluctuating supply may, however, also be leveraged to re-distribute ionic species in dark periods and thereby reduce pH gradients and enable recovery or even self-repair mechanisms. The combination of low stability and slow and insufficiently sensitive product detection methods makes product selectivity optimization slow and tedious. This challenge is further amplified in CO2 reduction photocathodes where the products of surface corrosion can be easily mistaken for favorable photoelectrochemical reactions (section 14) and in mechanistic studies where unstable intermediates are mostly examined in synchrotrons which have limited availability (section 12).The main engineering challenges in the field are scale-up and performance and stability benchmarking. Scaling-up fully integrated photoelectrochemical systems is especially challenging due to the fact that one needs to manage photons, electrons and ions. In contrast, photovoltaic systems and electrolyzers each only have to manage two out of these three species, which greatly simplifies the architectures of these devices. Assessment of solar fuels system improvements requires that benchmarking protocols and standards be in place to establish efficiency in all laboratories: currently these are in development by various groups such as the U.S. DOE HydroGEN Energy Materials Network project. Successful benchmarking requires a portfolio of design architectures for solar fuel devices that offer potential for scale-up and high efficiency. As mentioned above, without such designs it is difficult to formulate specific materials requirements.Advances in science and technology to meet challengesImprovements in efficiency can be achieved if the rates of semiconductor excitations and charge transport are matched with those of the electrocatalytic reactions, and if the photovoltages achievable with semiconductors are 1.6 V (for water splitting) or well above 2 V (for CO2 and N2 reduction) under operation. Efficiency is also governed by system designs, and an improved understanding of how fluctuating sunlight and feedstocks affect mass transport and concentration gradients will serve to optimize system performance. In addition, it can provide insights on how to incorporate self repair mechanisms such as those found in natural systems. Efficiency also requires effective product separation and collection over large surface areas. Strategies to achieve these by minimizing feedstock/product purification units and piping are critical, and can involve reactor designs that avoid mixing of gaseous and liquid products, for example. Bubble formation at liquid electrolyte-electrode interfaces (section 16) reduces efficiency by blocking catalytic sites and increasing solution resistance, as well as scattering incident light in front illumination configurations. They can also affect durability by inducing mechanical damage when they detach, and alter local current density distributions which can adversely affect system lifetimes. Designs that eliminate active liquid-solid interfaces or inhibit bubble nucleation and growth while preserving other benefits of the liquid-solid interface will be valuable. Several demonstrations of vapor-fed devices have shown this is feasible ([9], section 3).Although the integration of photovoltaic and electrocatalytic functionalities into a photoelectrochemical cell presents a fundamental challenge, synergistic effects might provide opportunities to increase both system efficiency and product selectivity. Strategies to improve the selectivity of multi-electron transfer reactions include hybrid solid state/biomimetic and molecular approaches (sections 4 and 13), plasmonics (section 10), electrolyte engineering, use of semiconductor surfaces for CO2 reduction (section 14) and cascade catalysis, all of which use local atomic and molecular structure and the electron energy landscape to promote specific reaction pathways. Overcoming the limitations of catalytic scaling relations, which lock in product selectivities, by novel materials designs are an important opportunity. Several strategies are proposed in this Roadmap, and implementation of these strategies in practical devices will further advance the field. New instrumentation for rapid detection of a broad range of liquid and gas phase products can significantly accelerate the optimization of light coupled systems that in most cases cannot be studied with fast detection instrumentation such as differential electrochemical mass spectrometry.Improvements in stability require a detailed mechanistic understanding of all degradation processes leading to loss of efficiency or device failure. Further development of operando analysis tools and theory will be invaluable to pinpoint where losses originate from. Systems designs that control the chemical environment surrounding all the active elements and ensure their long-term stability while preserving efficiency are essential. Environments that promote stability without introducing unwanted mass transport limitations will be a key advance. An example of recent progress toward this dual goal of efficiency and stability is given for FeNi oxygen evolution reaction catalysts, where Fe3+ in solution stabilizes performance (section 15), and for electrolyte engineering to push the limits of ion diffusion (section 16).Concluding remarksThis roadmap highlights our views of the most pressing scientific and engineering challenges that must be met before a wide scale solar fuels technology can be implemented. Section 2 discusses the state-of-the-art catalysis in the field; sections 3–8 address system level concepts; various aspects of photoelectrodes for solar fuels are surveyed in sections 9–14; and catalysis related challenges are analyzed in sections 15–17. The numbers in figure 1 indicate the sections addressing every system component and challenge. Although many aspects of this field are still at an early stage of development, it is never too early to conduct a macro scale assessment of the manufacturing and operation costs, and the carbon footprint of different designs and concepts. Such an analysis should rule out concepts that are incompatible with the end goal and provide guidelines for economically and environmentally viable technologies.Jakob Kibsgaard1, Christopher Hahn2 and Zhichuan J Xu3 1 Department of Physics, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark 2 Materials Sciences Division, Lawrence Livermore National Laboratory, Livermore, CA 94550, United States of America 3 School of Material Science and Engineering, Nanyang Technological University, Singapore, SingaporeStatusThe production of sustainable fuels and chemicals through Power-to-X processes has been subject to intense research efforts in the last decade and is presently also gaining increased political awareness. Independently of whether these fuels and chemicals are produced by coupling photovoltaics (or wind turbines) to an electrolyzer unit or by an integrated photoelectrochemical device, good catalysts are needed to lower the overpotential and thus the energy required to make the reactions run and ultimately make the production economically viable [10].Much of the attention has centred on water electrolysis (hydrogen evolution reaction (HER, section 16) [11–13] and oxygen evolution reaction (OER, section 15) [14–17]), CO2 reduction (CO2R, section 17) [18–21] to fuels and chemicals, and recently N2 reduction (N2R) to ammonia [22, 23]. These processes involve different number of proton-electron transfers and have different complexities in terms of activity, selectivity, and stability.The direct impact that the increased complexity imposes to the catalytic activity is perhaps best viewed by comparing the turnover frequency (TOF). TOF is the scientifically relevant metric when comparing the intrinsic activity as it measures the number of product molecules produced per time per active site. Figure 3 shows a compilation of state-of-the-art TOF values for HER, OER, CO2R (to both CO and ethylene), and N2R as a function of applied overpotential relative to the thermodynamic potential required for the respective reactions. It is immediately apparent that TOF values for HER are orders of magnitude higher at a given overpotential as compared to OER, CO2R, and N2R. This is a direct consequence of scaling relations between reaction intermediates [10]. HER is a two-electron reaction with only one catalytic intermediate, H*, where * denotes a surface site. Consequently, the rate of the overall HER reaction is largely determined by a single optimizable quantity namely the hydrogen adsorption free energy, ΔGH. When plotting the intrinsic HER activity against ΔGH a volcano relationship emerges [10], with Pt located at the top with negligible overpotentials for large TOFs, see figure 3, exemplifying an ideal catalyst in terms of activity. OER, CO2R, and N2R involve multiple intermediates of which the binding energies scale linearly and thus cannot be optimized independently; this poses a fundamental barrier to optimizing the catalytic activity [10].Figure 3. Turnover frequency (TOF) of state-of-the-art catalysts for HER [11–13], OER [14–16], CO2R to CO [18, 19], CO2R to ethylene [20, 21] and N2R [24] (and references within) plotted against the overpotential relative to the thermodynamically required potential for the respective reactions. Shadings are a guide to the eye. Note the N2R data is for lithium-mediated reactions and the lithium plating potential thus sets a minimum required overpotential.Download figure: Standard image High-resolution image Current and future challengesCircumventing scaling relations between reaction intermediates poses a monumental scientific challenge that must be conquered to bring down the overpotential for especially OER, CO2R, and N2R.Moreover, for CO2R, the similar standard reduction potentials for many products makes selectivity to a desired product challenging, as it requires controlling the kinetics of competing reaction pathways. Similarly, while the thermodynamically required potential for N2R is close to that of HER, the large kinetic barrier associated with splitting the N2 triple bond requires a much larger overpotential than HER [10]. Consequently, HER is always favoured over N2R and to date direct N2R in aqueous electrolytes remains to be successfully shown [23]. Only by controlling the proton availability in, e.g. non-aqueous electrolyte as is done in the Li-mediated approaches, the faradaic efficiency towards N2R becomes reasonable.Electrocatalysts also have a complex microenvironment, i.e. the confined space in close proximity to catalyst, which can have significant extrinsic influence on steering reactivity. Polar and charged reaction intermediates on heterogeneous catalysts can be stabilized or destabilized by the solvent, electrolyte ions, and polymer coatings through non-covalent interactions (e.g. dipole-dipole, ion-dipole, and ion-ion), similar to how the secondary coordination sphere of a molecular catalyst interacts with reaction intermediates. Field effects from the electrochemical double layer are a key contributor to the high intrinsic activity of Au for CO2R to CO, see figure 3, by stabilizing the transition state for CO2 adsorption [25].Apart from catalytic activity, stability is an equally important metric and achieving long-term stability for especially non-precious catalysts in acidic environments may pose a challenge. The stability number (S-number = ratio of evolved product molecules to the amount of dissolved catalyst) of electrocatalysts has been proposed as a stability metric to allow a reasonable comparison of diverse materials [26]. In alkaline environments and cathodic conditions, catalyst dissolution may be less significant in influencing the surface stability. However, the adsorbents induced thermodynamic minima state change of the catalyst surface during the catalysis may cause the surface segregation of the alloy catalysts and it may in long-term affect the stability [27].Advances in science and technology to meet challengesA multitude of different catalysts have over the years been synthesized and tested for HER, OER, CO2R, and N2R, but to our knowledge, there are virtually no examples of effectively circumventing the scaling relations and thereby significantly lowering the overpotential speaking to the grand challenge of accomplishing that. That being said, several ideas of selectively stabilizing certain intermediates over others have been proposed with the common theme of creating multi-functional and three-dimensional active sites [28]. Precise engineering of the individual active site motif will further enable steering the selectivity to the desired product for, e.g. CO2R. While controlling the microenvironment provides avenues to circumvent scaling relations, the chemical complexity and dynamics of the electrocatalyst/liquid interface present significant challenges for interface modelling and in situ characterization. These challenges must be overcome to co-design catalysts and microenvironments and accurately predict reactivity.Furthermore, as the field of catalyst discovery continues to grow, the need for rigorous experimentation including product detection increases. Erroneous claims on catalytic activity have especially plagued the fields of OER and N2R, which dilutes properly conducted literature and worse potentially leads the field down a wrong path wasting time and resources. Much of this can be avoided if published protocols were followed [22, 29]. Further, the need to ensure comparability of activity and stability calls for the community to agree on proper benchmarking protocols and metrics.While scientific breakthroughs are needed in obtaining improved catalysts, equally important is enabling more efficient ways to translate new scientific insights into industrial devices. Conditions under small-scale benchtop experimentation may not translate into, e.g. a membrane electrode assembly (MEA), section 8. For example, a catalyst in an MEA is likely to be interacting directly with a polymer electrolyte microenvironment whose charged functional groups likely contribute different non-covalent interactions than that of typical solvents and ions used for benchtop experiments. In addition, the chemical potentials of key reaction species may be significantly different in MEAs as they typically operate at much higher reaction rates, providing a different microenvironment than that of benchtop experiments that may lead to emergent phenomena. Difficulties in translating benchtop catalytic activities to prototype devices are well-known in the fuel cell community [30] and will also pose challenges for emerging technologies based on CO2R and N2R. Thus, to facilitate technology breakthroughs that can form the basis for industrial devices, integration of research at prototype level is needed, see figure 4.Figure 4. Roadmap for taking fundamental catalysis research to large-sale industrial deployment. Integration of research at the prototype level is needed to facilitate technology breakthroughs that can form the basis for industrial devices for solar fuels and chemicals.Download figure: Standard image High-resolution image Concluding remarksImproved catalysts for converting sustainable energy into fuels and chemicals are essential if we are to phase-out the use of fossil resources in our society. While many areas can be directly powered by renewable electricity, high energy density chemical fuels will likely still be necessary for applications such as shipping and aviation. In addition, a sustainable production of various feedstock chemicals will allow for a greater penetration of renewable energy in the chemical industry. Significant research efforts have been placed on developing better catalysts for water electrolysis, CO2 reduction and lately N2 reduction, but there are still several challenges to conquer to make these processes economically viable. Specifically, improvements on both activity, selectivity, stability, and upscaling of many of these catalysts are still required to produce the fuels and chemicals we ultimately need for a sustainable future.Wen-Hui (Sophia) Cheng1, Todd G Deutschl2 and Chengxiang Xiang3 1 Department of Materials Science and Technology, National Cheng Kung University, Tainan 701, Taiwan 2 National Renewable Energy Laboratory, 15013 Denver West Parkway, Golden, CO 80401, United States of America 3 Department of Applied Physics and Material Science, California Institute of Technology, Pasadena, CA 91125, United States of AmericaStatusTransforming solar energy into energy stored in the form of chemical bonds as a long-term sustainable solution for energy storage has drawn much attention in the past decades due to environmental concerns. Utilizing photon generated carriers for catalyzing both the reduction reaction, e.g. hydrogen evolution reaction (HER), CO2 reduction (CO2R) or N2 reduction and the oxidation reaction, e.g. oxygen evolution reaction (OER), completes the full cycle of fuel production. A spectrum of system configurations ranging from stand-alone photovoltaic system coupled with commercially available electrolysers (PV-EC) to fully integrated photoelectrochemical (PEC) or photocatalytic (PC) systems has been developed to generate low levelized cost hydrogen. One key figure of merit that differentiates the three systems is the operating current density, where PV-EC system operates at >1 A cm−2 in water electrolysis, PEC system typically operates at 10–100 mA cm−2 to match solar flux potentially with low sunlight concentrations, and PC system has unique advantage to operate at <1 mA cm−2 [4, 31]. Other system configurations, such as wired photoelectrodes, redox couple assisted water-splitting cells and others, have unique attributes for specific materials or operating conditions. Generally, the component benchmarking includes catalytic properties defined with overpotential at 10 mA cm−2, membrane performance defined with pH operation window and ion conductivity. The device benchmarking requires determining a spectral correction factor and translating performance measured under a simulated light source to the AM1.5 reference spectrum without additional bias to present direct conversion of solar to fuels [32]. While it may be helpful to benchmark components that require a bias for diagnostic purposes in the laboratory, it is not a practice used for complete solar fuels devices due to the confusion, even among experts, of identifying a system bias vs. an electrode bias. Stability benchmarking protocols are much less established; constant illumination using a simulated light source is commonly used where instead diurnal cycles or natural sunlight with intermittency—representing realistic operation—should also be considered. While PEC H2 or CO2R in general is still at low technical readiness level, the adoption of standardization through benchmarking protocols can help lower the barrier of entry for new researchers and serve as a basis for comparing results across institutions to accelerate the development of this early stage technology. Figure 5 presents the summary of historical demonstrations of solar driven water splitting devices with three key merits: solar-to-hydrogen (STH) conversion efficiency, areal normalized lifetime H2 production (g m−2), and rate of H2 production (μg h−1) [31, 33–35]. The best few demonstrations with higher numbers for the merits utilize III–V based light absorber with concentrator in PV-EC (STH 31%, ∼10 000 g m−2 under 42 suns [36]), or PEC-B (catalyst at backside of the light absorber, ∼200 000 μg h−1 under 117 suns [35]) configuration. Even though there are exciting progress with PEC and PV-PEC (couple low voltage PEC device with PV to meet the potential requirement) especially in STH conversion efficiency, they are currently limited by stability and overall generation rate. In contrast, for the PC configuration, while meter-scale demonstration or 1000 h operation has been demonstrated, the STH conversion efficiency is limited to <1%. More endeavours need to be focused on to improve the different approaches toward the target. To bring the solar-driven water-splitting system to market, the throughput of the system needs to approach small size PEM systems with a typical H2 production rate of ∼1 kg h−1. To achieve <$1 kg−1 of H2 in the long term, system level STH conversion efficiency of about 15% is needed for a concentrator system between 100–150× where an $800 m−2 absorber has at least a two year lifetime [37]. New system design or breakthrough of technology to significantly reduce the cost would be necessary to bridge the gap between current demonstrated devices and target for the real market.Figure 5. Review of the historical demonstration of solar driven water splitting devices. (a) Correlation of the STH, stability (lifetime hydrogen production, g m−2), and rate (rate of hydrogen production, μg h−1). The numbers in bracket indicate the irradiance multiply concentration factor. (b) Correlation of the STH and stability. (c) Correlation of the STH and rate. (d) Correlation of the stability and rate. Photoelectrochemical (PEC) device in red; photoelectrochemical device with catalyst at backside of the light absorber (PEC-B) in pink; photovoltaic wired with electrolyser (PV-EC) in blue; photovoltaic wired with photoelectrochemical device (PV-PEC) in green; photocatalytic device (PC) in yellow [31, 33–35].Download figure: Standard image High-resolution image Current and future challengesDifferent device configurations (PV-EC, PEC, PC and others) face very different challenges in materials, components and device developments. As no reasonably efficient material is intrinsically stable in the PEC configuration, the most significant challenge is discovering a barrier layer to protect the absorber that possesses appropriate properties [38]. The barrier layer has to be conductive, transparent, catalytic, and stable under operating conditions. There have been some successes using this approach on smaller scale, but a technique for depositing large-area films without pinholes has remained elusive. While still having the similar operating current density (10–100 mA cm−2) at the catalytic sites as PEC configuration, the most direct and effective routes to long-lived device performance is to decouple the absorber from the catalyst site. This relaxes the barrier requirements as the functions can be split over two physically separated interfaces and there are many options for stable, transparent, insulating films and non-transparent, catalytic layers. Several demonstrations have showed good durability by decoupling the light absorber from catalytic sites. In the PC system, low absorption in PC materials as well as low quantum yield for water-splitting reactions resulted relatively low STH conversion efficiency. Selective catalysis in the presence of redox couples sets significant materials challenges. Device level PC operation and optimization as well as membrane separator consideration remain under explored.Integration of materials and components is another technological consideration. Deposition techniques that require high temperatures could degrade absorbers or lead to compromised adhesion from mismatched thermal expansion coefficients. Protective layers, such as MoS2, exhibit native catalytic activity, while others require application of cocatalysts. Efficient charge transfer and in particular long stability among the interfaces of the light absorber, protective layer and co-catalysts remain a challenge. Epoxy, which can be prone to failure itself, is often used to mount absorbers and protect electrical contacts from electrolyte. Although there are numerous commercially available epoxy formulations with variable adhesion characteristics and chemical compatibilities, none have been tested for long durations under the harsh conditions of photo-electrolysis. In the PC system, multi-dimensional optimization that includes particle dimensions, redox couple properties (concentration, optical transparency, energetics), co-catalyst loading and energetics with light absorbers, etc, remains a significant challenge for producing efficient and long lasting devices. As devices transition from bench scale to pilot scale (discussed in depth in section 7), efficiency and stability benchmarking will also have to expand to cover larger absorber areas, a greater number of components, and to environments that are less artificially controlled (i.e. outside). Development of long-term stability protocols, corrosion analysis at materials, component, and device level as well as establishment of standard protocols for dynamic operations (e.g. diurnal cycles and under intermittency nature of the sunlight) are important and should remain a priority in the near future.Advances in science and technology to meet challengesResearch and development of new light absorbers with a set of unique materials properties for efficient and stable water splitting are important for the field. While leveraging the rapid progress of semiconductor materials development in the photovoltaic field can improve the performance of PEC systems, demonstration of a combination of intrinsically stable light absorbers with high optoelectronic qualities for un-assisted PEC water splitting would be a game changer. Membrane materials can play a critical role in various PEC designs, in particular, vapor fed and wire embedded PEC devices have unique requirements for the optical, electronic and ionic transport properties of the membranes. Cationic and anionic conducting membranes from the low temperature | Emerging Technologies |
Bitcoin and its Decentralized Nature
Bitcoin’s decentralized nature is one of its defining characteristics and a key reason why it has the potential to disrupt central banks. Unlike traditional currencies, which are controlled by central banks and are subject to government regulations and monetary policies, Bitcoin operates on a peer-to-peer network of computers, known as nodes, that collectively maintain its ledger system known as blockchain. This means that there is no central authority that controls or regulates Bitcoin.
This decentralized system makes Bitcoin resistant to censorship, fraud, and corruption, as no single entity or group can manipulate the currency for their own benefit. Transactions on the blockchain are verified and confirmed by nodes on the network, ensuring that the ledger is secure and transparent. The absence of intermediaries such as banks also means that transactions can be processed faster and at lower costs, as there are no fees associated with using third-party services.
For example, suppose a person in the United States wants to send money to a family member in a country with a weak or unstable currency, such as Venezuela. In this case, traditional methods such as wire transfers can be costly and time-consuming due to intermediary banks and currency conversion fees. However, with Bitcoin, the transaction can be completed quickly and at a lower cost, as it does not require intermediaries, and the exchange rate is determined by the market. This demonstrates how Bitcoin’s decentralized nature can provide a practical solution for individuals who want to transfer money across borders without the constraints of traditional banking.
The decentralized nature of Bitcoin allows for more secure, transparent, and efficient transactions without the need for intermediaries or central authorities. This gives individuals more control over their money and has the potential to undermine the role of central banks as gatekeepers of the financial system.
Fiat Currency and the Control of Central Banks
Fiat currency is a type of currency that is not backed by a physical commodity, such as gold, but rather by the faith and credit of the government that issues it. Most of the world’s currencies are fiat currencies, including the US dollar, the euro, and the Japanese yen.
Central banks, which are typically government-owned institutions, have the power to control the supply and demand of fiat currencies. This control allows them to influence interest rates and money supply, which can affect economic growth, inflation, and other macroeconomic indicators. For example, when central banks want to stimulate economic growth, they can lower interest rates to make it easier for businesses and consumers to borrow money. Conversely, when they want to curb inflation, they can raise interest rates to make borrowing more expensive, thus reducing demand for goods and services.
However, this power can also make central banks vulnerable to political influence and manipulation. Governments may pressure central banks to pursue policies that are beneficial for their own political interests, even if they may not be in the best interest of the overall economy. For example, a government may pressure its central bank to keep interest rates artificially low in the lead-up to an election, which can lead to a short-term boost in the economy but may also lead to long-term inflation.
Bitcoin’s decentralized nature allows it to operate independently of central banks and governments, which can make it less susceptible to political influence and manipulation. Its supply is limited by the design of its underlying algorithm, which means that it is not subject to the same inflationary pressures as fiat currencies. Additionally, Bitcoin’s blockchain technology ensures that transactions are transparent and immutable, making it difficult to manipulate.
However, this independence also means that Bitcoin is subject to significant fluctuations in value and is not backed by any government or financial institution. It also operates outside of the traditional banking system, which can limit its mainstream adoption as a currency.
While fiat currencies are backed by government authority and are subject to the control of central banks, Bitcoin is decentralized and operates independently. This gives it advantages such as resistance to political influence and manipulation, but also comes with significant risks and limitations.
Bitcoin’s Potential to Disrupt Central Banks
Bitcoin’s potential to disrupt central banks is closely tied to its decentralized nature and limited supply. The decentralized nature of Bitcoin allows individuals to transact directly with each other without the need for intermediaries, such as banks or other financial institutions. This means that Bitcoin can be used to store and transfer value without the need for central banks or governments, and thus represents a potential challenge to the traditional financial system.
Bitcoin’s limited supply is also a key factor in its disruptive potential. Unlike fiat currencies, which can be printed at will by central banks, Bitcoin has a finite supply that is determined by its underlying algorithm. This means that the supply of Bitcoin cannot be easily manipulated or increased, providing a measure of stability that is not available with fiat currencies.
Because Bitcoin is not subject to the same controls as fiat currency, it offers a way for people to store and transfer value without the need for intermediaries. This could give people more control over their money and could potentially undermine the role of central banks as gatekeepers of the financial system. For example, if a country experiences a financial crisis, individuals may turn to Bitcoin as a more stable alternative to the local currency.
In some countries, people are already turning to Bitcoin as a way to bypass traditional financial institutions and avoid government controls. In Venezuela, for example, where the national currency has been ravaged by hyperinflation, many people have turned to Bitcoin as a way to store and transfer value. Similarly, in countries such as Zimbabwe, where the government has imposed strict controls on the movement of capital, people have turned to Bitcoin as a way to move money out of the country.
However, it’s important to note that Bitcoin’s disruptive potential is still largely theoretical, and there are significant challenges that need to be addressed before it can become a mainstream alternative to fiat currencies. For example, Bitcoin’s price volatility makes it a risky investment, and its limited adoption in mainstream commerce means that it is not yet widely accepted as a form of payment. Furthermore, many governments and central banks are actively exploring the use of blockchain and digital currencies, which could potentially offer some of the same benefits as Bitcoin while remaining under government control.
Overall, Bitcoin’s potential to disrupt central banks is tied to its decentralized nature and limited supply. While it has the potential to offer an alternative to traditional financial systems, there are significant challenges that need to be addressed before it can become a mainstream alternative to fiat currencies.
Is Bitcoin a Threat to Central Banks? Challenges and Risks
Bitcoin’s disruptive potential is balanced by significant challenges and risks. These challenges and risks are related to the difficulties of regulation, adoption, competition, and the inherent volatility of Bitcoin.
One of the key challenges facing Bitcoin is its decentralized nature, which makes it difficult to regulate. Unlike fiat currencies that are subject to centralized control, Bitcoin is decentralized, and there is no central authority to oversee its operations. This makes it challenging for governments and central banks to control or regulate the flow of Bitcoin. It also makes Bitcoin more susceptible to illicit activities, such as money laundering, terrorism financing, and cybercrime.
Another challenge facing Bitcoin is its value volatility. The price of Bitcoin is subject to significant fluctuations and can experience sharp increases or decreases in short periods of time. This makes Bitcoin a risky investment and limits its use as a medium of exchange. The value volatility of Bitcoin also makes it difficult to use as a store of value or as a means of transferring wealth between different currencies.
Bitcoin’s adoption as a widely used currency is also limited, as many people still do not understand how it works or how to use it. The technology behind Bitcoin is complex, and many people are not yet comfortable with the idea of using digital currencies. This limits the potential for Bitcoin to become a mainstream alternative to traditional currencies, at least for the time being.
Furthermore, Bitcoin faces competition from other cryptocurrencies and emerging technologies. Many other digital currencies have been developed that offer different features and benefits, and some of them may be more appealing to users than Bitcoin. Additionally, emerging technologies such as stablecoins and central bank digital currencies (CBDCs) could offer some of the same benefits as Bitcoin while remaining under government control.
Finally, the legal and regulatory risks facing Bitcoin cannot be ignored. While some countries have embraced Bitcoin and other digital currencies, many others have implemented strict regulations or outright bans on their use. For example, China has banned cryptocurrency mining and trading, and other countries have implemented strict anti-money laundering (AML) and know-your-customer (KYC) regulations on cryptocurrency exchanges.
In conclusion, while Bitcoin has the potential to disrupt central banks and traditional financial systems, it also faces significant challenges and risks. Its decentralized nature, value volatility, limited adoption, competition from other cryptocurrencies and emerging technologies, and legal and regulatory risks all pose challenges to its widespread adoption as a mainstream currency. However, the rapid pace of technological change means that the future of Bitcoin and other digital currencies remains uncertain and unpredictable.
The Future of Bitcoin and Central Banks
As the technology behind Bitcoin and other cryptocurrencies continues to evolve, the relationship between these digital currencies and central banks is likely to remain complex and constantly evolving. The future of Bitcoin and central banks is still uncertain, but some possible scenarios can be identified.
One possible future is that Bitcoin remains a niche currency used primarily by early adopters and tech enthusiasts. While Bitcoin has gained some mainstream acceptance, it still faces significant challenges and limitations that could prevent it from becoming a widely used currency. However, even in this scenario, Bitcoin and other cryptocurrencies could continue to disrupt traditional financial systems by providing an alternative means of storing and transferring value that does not rely on central banks.
Another possible future is that central banks begin to adopt and regulate digital currencies, effectively incorporating some of the benefits of Bitcoin while maintaining their control over the financial system. For example, some central banks have already begun experimenting with central bank digital currencies (CBDCs), which would be government-issued digital currencies backed by fiat currencies. CBDCs could offer some of the benefits of Bitcoin, such as faster and cheaper transactions, while remaining under the control of central banks.
A third scenario is that Bitcoin and other cryptocurrencies continue to grow in popularity and adoption, eventually becoming a significant threat to the power of central banks. This scenario is the most uncertain and unpredictable, but it cannot be completely ruled out. If Bitcoin were to become widely adopted as a currency, it could significantly undermine the role of central banks as gatekeepers of the financial system. This could lead to a fundamental shift in the way that financial systems are structured and regulated.
In conclusion, the future of Bitcoin and central banks is uncertain, and the relationship between them is likely to remain complex and constantly evolving.
While Bitcoin has already forced central banks to consider new forms of digital currencies and ways to maintain their relevance, the long-term impact of Bitcoin on central banks and traditional financial systems is still unclear.
As technology continues to evolve, it will be interesting to see how Bitcoin and other cryptocurrencies continue to disrupt and challenge the established financial order.
Is Bitcoin a Threat to Central Banks? Read also: Will Bitcoin Eventually Kill Central Banks? | Emerging Technologies |
Amazon, Meta Among Firms To Unveil AI Safeguards After Biden’s Warning
Zients and other administration officials also say it will be difficult to keep pace with emerging technologies without congressional legislation.
(Bloomberg) -- Seven leading artificial intelligence firms will debut new voluntary safeguards designed to minimize abuse of and bias within the emerging technology at an event Friday at the White House.
President Joe Biden will be joined by executives from Amazon.com Inc., Alphabet Inc., Meta Platforms Inc., Microsoft Corp., and OpenAI, who are among the firms committing to a transparency and security pledge.
Under the agreement, companies will put new artificial intelligence systems through internal and external testing before their release and ask outside teams to probe their systems for security flaws, discriminatory tendencies or risks to Americans’ rights, health information or safety.
The firms, including Anthropic and Inflection AI, are also making new commitments to share information to improve risk mitigation with governments, civil society, and academics – and report vulnerabilities as they emerge. And leading AI companies will incorporate virtual watermarks into the material they generate, offering a way to help distinguish real images and video from those created by computers.
The package formalizes and expands some of the steps already underway at major AI firms, who have seen immense public interest in their emerging technology – matched only by concern over the corresponding societal risks.
Nick Clegg, the president of global affairs at Meta, said the voluntary commitments were an “important first step in ensuring responsible guardrails are established for AI and they create a model for other governments to follow.”
“AI should benefit the whole of society. For that to happen, these powerful new technologies need to be built and deployed responsibly,” he said in a statement released early Friday.
White House aides say the pledge helps balance the promise of artificial technology against the risks, and is the result of months of intensive behind-the-scenes lobbying. Many of the executives expected at the White House on Friday attended a meeting with Biden and Vice President Kamala Harris in May, where the administration warned the industry it was responsible for ensuring the safety of its technology.
“We’ve got to make sure that the companies are pressure testing their products as they develop them and certainly before they release them, to make sure that they don’t have unintended consequences, like being vulnerable to cyberattacks or being used to discriminate against certain people,” White House Chief of Staff Jeff Zients said in an interview. “And the important thing — and you’ll see this throughout all the work — is they can’t grade their own homework here.”
Voluntary Safeguards
Still, the fact the commitments are voluntary illustrates the limits of what Biden’s administration can do to steer the most advanced AI models away from potential misuse.
The guidelines don’t prescribe approval from specific outside experts in order to release technologies, and companies are only required to report – rather than eliminate – risks like possible inappropriate use or bias. The watermarking system still needs to be developed, and it may prove difficult to stamp content in a way that couldn’t be easily removed by malignant actors seeking to sow disinformation on the internet.
And there are few mechanisms beyond public opinion to compel commitments to use the technologies for societal priorities like medicine and climate change.
“It’s a moving target,” Zients said. “So we not only have to execute and implement on these commitments, but we’ve got to figure out the next round of commitments as the technologies change.“
Zients and other administration officials also say it will be difficult to keep pace with emerging technologies without congressional legislation that both help the government impose stricter rules and dedicate funding that will allow them to hire experts and regulators.
Aides describe concern over artificial intelligence as a top priority of the president in recent months. Biden frequently brings the topic up in meetings with economic, national security, and health advisers, and has had conversations with Cabinet secretaries telling them to prioritize examining how the technology might intersect with their agencies.
In conversations with outside experts, Biden was warned that algorithmic social media – like Meta’s Facebook and Instagram and ByteDance Ltd.’s TikTok – has already illustrated some of the risks that artificial intelligence could pose. One outside adviser suggested the president should consider the issue akin to cloning in the 1990s, needing clear principles and guardrails.
The White House said it consulted with the governments of 20 countries before Friday’s announcement.
“I think all sides were willing or eager to move as quickly as possible on this because that’s how AI works — you can’t sleep on this technology,” said Deputy Chief of Staff Bruce Reed.
All of these efforts, however, lag behind the pace of AI developments spurred by intense competition among corporate rivals and by the fear that Chinese innovation could overtake Western advances.
Even in Europe, where the EU’s AI Act is far ahead of anything passed by the US Congress, leaders have recognized the need for voluntary commitments from companies before binding law is in place. One White House official estimated it could be at least two years before European regulations began impacting AI firms.
That’s left officials there also asking companies to police themselves. In meetings with tech executives over the past three months, Thierry Breton, the European Union’s internal market commissioner, has called on AI developers to agree to an “AI Pact” to set some non-binding guardrails.
Regulate AI? Here’s What That Might Mean in the US: QuickTake
--With assistance from Jennifer Jacobs.
(Adds Meta comment from 6th paragraph)
More stories like this are available on bloomberg.com
©2023 Bloomberg L.P. | Emerging Technologies |
The Internet of Things (IoT) and edge computing have vexed enterprise security efforts for years now. Given the added complexities of work-from-home and hybrid work arrangements, the situation has considerably worsened recently. Now comes ChatGPT to sit atop most IoT and edge devices, effectively adding a welcome beacon -- or even a helping hand -- to threat actors everywhere.
"Existing vulnerabilities, especially in the context of AI and ChatGPT-enabled or assisted attacks against edge devices and users, can be leveraged against businesses in different ways,” says Jim Broome, President and CTO at DirectDefense.
Despite variances in vulnerabilities and diverse efforts to exploit them, threats from the edge originate from one of two IoT realms: home IoT and enterprise IoT.
In many cases, employee home networks and the data therein are the preferred targets for threat actors.
“Once inside the home network, attackers can then pivot back into the corporate network, potentially compromising sensitive business information via a ‘blessed user or home network,’” Broome says.
But that’s not to say that enterprise IoT and edge devices are locked tight against more direct intrusions.
“Ransomware threat actors, for example, can exploit IoT vulnerabilities as a starting point to carry out their malicious campaigns, potentially causing significant damage and disruption to business operations,” Broome adds.
The Evolving Threatscape in Enterprise IoT
IoT and edge computing usage is up, both on the home and enterprise fronts. While IoT is a highly fragmented market, a view of even a few categories underscores the continued and unfettered growth across the board. Gartner pegs spend on IoT in the enterprise space and across key industries at over $268 billion in 2022. Deloitte projects worldwide spending on software and hardware related to IoT to rise to $1.1 trillion this year.
But the challenges aren’t just tied to the growing number of IoT and edge devices being purchased and deployed. An increasing variety in the types of IoT are causing issues, too.
“The diversity of edge and IoT devices, ranging from switches, routers, and sensors to point-of-sale systems, industrial robots, and automation equipment, also adds an additional layer of complexity and security vulnerability due to the variations in protocols, functions, and security capabilities,” explains James Joonhak Lee, a senior manager in Deloitte’s US Cyber & Strategic Risk practice.
If you think vendors and buyers have gotten better at securing these devices after all this time, think again. Botnet armies and DDoS attacks frequently spring from unprotected IoT devices seemingly as innocuous as hotel lobby aquarium thermostats, home smart refrigerators, and company coffee pots in break rooms.
“IoT devices in particular, and edge devices in general, are the most vulnerable within an organization,” says John Gallagher, VP of Viakoo Labs, a research unit focused on IoT and OT security management.
Where Home and Work Dangers Meet
IoT and edge computing spawn vulnerabilities elsewhere, too. For example, an ever-expanding edge-computing space compounds security problems for enterprises -- especially on the border between enterprise and consumer usage.
“Modern image archive systems, called PACS, connect scanners like an ultrasound or a CT scanner with patient management systems,” explains Dirk Schrader, VP of Security Research at Netwrix. “Currently, PACS servers become more and more connected to the public internet, so that patients and physicians can access the data. Quite often even basic precautions are not in place for these IT infrastructures. They are not hardened.”
Growing enmeshment between enterprise and consumer IoT and networks blurs the boundaries and sharply defines the opportunities for attackers.
Dangers and damages flow both ways, too.
“At the moment, there are about 200 of such unprotected archives [PACS servers] connected to the public internet within the US alone. Attackers can exploit them, exfiltrate or encrypt the data to extort the organization, use the data to run medical insurance fraud against the patients, or change the medical imagery so the process itself is corrupted,” Schrader says.
But this crossroads between consumer and professional connections is not the only collision point for enterprises. The things themselves have cross-uses to be wrecked. Autonomous vehicles, for example, exist in both commercial and consumer versions. Attacks are easily transported to the enterprise and the user whether the vehicles are a commercial fleet, a vehicle for rent or hire, or owned by a worker begrudgingly returning to work in the office again.
And then there is the steady march of home IoT -- from nanny cams to smart meters and kids’ toys -- on the occupant’s employer.
“An additional concern lies with the vulnerabilities present in AI-enhanced home devices used by remote employees,” Broome says. “For example, when was the last time individuals from accounting updated their home routers or their home network-attached storage servers, which they use for backing up corporate work while working remotely. This issue further compounds the challenges faced by organizations, as it increases the risk of intellectual property theft,” he says.
How ChatGPT and AI Make IoT Vulnerabilities Worse
ChatGPT and its ilk are rapidly appearing integrated or embedded in commercial and consumer IoT of all types. Many imagine AI models to be the most sophisticated security threat to date. But most of what is imagined is indeed imaginary.
“Now, if an actual AI emerges, be very worried if the kill switch is very far away from humans,” says Jayendra Pathak, chief scientist at SecureIQLab. He, like others in security and AI, agree that the chances of an actual general artificial intelligence developing any time soon are still very low.
But as to the latest AI sensation, ChatGPT, well that’s another kind of scare.
“ChatGPT poses [insider] threats -- similar to the way rogue or ‘all-knowing employees’ pose -- to IoT. Some of the consumer IoT vulnerabilities pose the same risk as a microcontroller or microprocessor does,” Pathak says.
In essence, ChatGPT’s potential threats spring from its training to be helpful and useful. Such a rosy prime directive can be very harmful, however. Even when a prompt bumps against its safety guardrails, another well-crafted prompt can fool it into doing the very thing the guardrails were designed to prevent.
This type of attack is called a prompt injection because a prompt is used to make the model ignore previous instructions or perform unintended actions. Prompt injections can be used in ChatGPT directly or in applications built on ChatGPT or other large language AI models. In other words, this type of attack can be pumped into the ChatGPT sitting atop IoT and edge devices.
“Moreover, emerging technologies such as AI and ChatGPT in edge devices are introducing new vulnerabilities that companies need to be aware of,” says Harman Singh, director at Cyphere. For example, AI algorithms can be manipulated to generate false data, leading to incorrect decisions and actions. ChatGPT can also be used to conduct social engineering attacks by impersonating trusted individuals or entities.”
Sometimes a prompt can be used to get ChatGPT and similar LLM-based chatbots to reveal back office or proprietary information used to guide its responses and that was never meant to be accessed by anyone outside of the company.
“When confidential data or source code gets shared within tools like ChatGPT, it opens the door for challenges to compliance obligations and puts intellectual property at risk,” says Nathan Hunstad, Deputy CISO at Code42.
While evolving threats are growing and spreading like wildfire, not all is lost.
“Any new technology presents risks, but it doesn’t mean the risks are all brand new. In fact, companies might find that they already have many of the people, processes, and technology in place to mitigate the risks of tools like ChatGPT,” Hunstad says. | Emerging Technologies |
Destruction Democratised How the evolution of synthetic biology, quantum computing, and AI threatens to disrupt the current world order Some technological advances are so great that they create ruptures in our understanding of what is possible. Eighty years ago, such a rift took place via the invention of the atom bomb, transforming the way we conceive of warfare and global order. Today, rapid advancements in scientific and technological progress hold that very same potential. From artificial intelligence to synthetic biology, experts and policymakers are beginning to dissect the potential consequences of adding unfamiliar, highly advanced, and potentially devastating new additions to the toolboxes of adversarial powers. When referring to world order, we often operate within a ‘Great Power’ discourse and assume that geopolitical disruptions require geopolitical might. The democratisation of destructive technologies, however, will likely create the conditions for smaller non-state actors, and even individuals, to have a greater impact on an international level. Already today, the intelligence and information thresholds required to use advanced existing technologies in a competent manner are declining fast, and deeply acquired technical expertise requiring years, or even decades of education and training is becoming less of a factor. For the sake of simplicity, three areas in which emerging technologies pose a threat might be considered: synthetic biology, quantum computing, and artificial intelligence. Similar to how individuals and small groups build computer viruses today, doit- yourself biohacking tools could in the future significantly lower the bar for individuals with very basic training in biology to enhance biological pathogens that could, in theory, drive the next global pandemic. Whereas concern was once directed towards off-beat experiments held at university labs – such as the case of a Dutch virologist successfully making the 2011 H5N1 ‘Swine Flu virus’ more transmissible to humans – today, technology is making ‘garage biology’ an international threat. Gene-editing tools such as CRISPR have drastically lowered barriers for potential bioterrorist groups to modify (or replicate) deadly pathogens, with some arguing that it could cost as little as $10,000 to bioengineer smallpox at home. We have already arrived at a world where, via the internet, several laboratory processes can be followed almost as if they were recipes for chocolate cake, and the genomes of a range of organisms and pathogens are already publicly available online. Whereas the risks of synthetic biology can exist on a micro-scale, the race towards quantum computing operates at a global level. Quantum computing holds enormous potential for biotechnology, artificial intelligence, and machine learning, despite its applications having not arrived just yet. The incredibly fast crunchtimes of such computers also pose a threat to current data encryption methods; theoretically, the first country to achieve quantum supremacy would have a decisive advantage at engaging in cyberwarfare against any target nation’s military systems. Such potential has caused a techno-nationalist ‘quantum race’ among global powers. In the US, directives were issued under the Trump administration making it harder for Chinese students to study quantum-related degrees, and Washington has also made efforts to block the Netherlands’ export of lasers integral to quantum computing to China. On the other hand, China’s policy has been the reverse, inviting academics from the UK and US through its ‘Thousand Talents Plan’ to gather the best minds within quantum computing. Given the extent to which strategic intelligence, military warfare, and corporate IP strategies are based on certain assumptions for cybersecurity, the proliferation of quantum computing platforms could massively remake the globally contested space. The potential artificial intelligence holds is far more open to interpretation, however. According to the American political scientist Ian Bremmer, we have been living in an “AI Cold War” since 2018, the year when China announced its ambition of becoming a world leader within AI by 2030. Although parallels could be drawn here with the race towards quantum computing – both rely on advanced semiconductors – a key difference exists in terms of who participates. Big Tech manufacturers such as Meta Platforms, Amazon, Microsoft, and Alphabet have all heavily invested and tuned in to the AI marathon. The binary view of an ‘AI cold war’ risks neglecting the immense amounts of data, and therefore control over information and communication, that Big Tech companies hold. In fact, some scholars argue that the rhetoric itself surrounding an ‘AI cold war’ poses an equally existential risk in the immediate term as AI does itself. The philosopher Stephen Cave has, for example, developed a model demonstrating that greater enmity between regional AI powers, as well as simply possessing greater information about other powers’ capabilities, significantly increased the risk of AI “corner-cutting” and ignoring safety protocols. Within AI, the risk of not having a code of conduct – let alone a technological lingua franca in the scenario where AI becomes sufficiently advanced – is much too great of a risk to ignore. Although immensely destructive, the consequences of the atom bomb operated within a utilitarian calculation which was universally understood: to kill or not to kill possibly millions of people. Such ‘simple’ decision-making processes could be curtailed by the doctrine of Mutually Assured Destruction. There were, in other words, simple consequences from a complex technology. The future capabilities of advanced technologies pose complex ethical dilemmas without intuitive unwritten codes of conduct. Where exactly does agency lie in highly automated military processes, or in those directed by artificial intelligence? How can we pursue R&D within quantum computing without simultaneously causing an arms race that risks compromising the world’s encryption systems? Unfortunately, pondering on the consequences of future technologies for geopolitics is a topic better at providing questions than answers. Ultimately, the winding roads of such futures depend on the values held by citizens, what individuals think of the trade-offs between liberty and freedom versus a sense of safety and security, as well as who they will turn to in order to meet those needs. Such a world, characterised by uncertainty around the development of emerging technologies, may very well require greatly amplified capacities for preventative policing as a means of global governance. Envisioning such an outcome is challenging and even disturbing, alternating between the polarities of a unipolar world order marred by ubiquitous global surveillance, and the replacement of nation states with a system of decentralisation bordering on anarchy. We exist now at the crucial moment where possibility and deliberation intersect – if we choose the path of violent destruction, humanity will have to bear the blame. This is a featured article from our latest issue of FARSIGHT: A World Pulled Apart? Get your copy here | Emerging Technologies |
An estimated 20,000 Britons have been approached by Chinese state actors on LinkedIn in the hope of stealing industrial or technological secrets, the head of MI5 has said.
Ken McCallum said industrial espionage was happening at “real scale”, and he estimated that 10,000 UK businesses were at risk, particularly in artificial intelligence, quantum computing or synthetic biology where China was trying to gain a march.
“Week by week, our teams detect massive amounts of covert activity by the likes of China in particular, but also Russia and Iran,” the MI5 director general said ahead of a summit of domestic spy chiefs from the Five Eyes agencies hosted by the FBI in California.
“Activity not aimed just at government or military secrets. Not even just aimed at our critical infrastructure but increasingly [at] promising startups – innovative companies spun out of our universities, academic research itself, and people that understandably may not think national security is about them.”
McCallum said China was engaged in “very, very widespread gathering of information at quite a low threshold” which, taken together, could add up to “real damage” to British and wider western interests if Beijing were to dominate the next generation of emerging technologies.
Concerns about Chinese industrial espionage have risen dramatically over the past decade, led by the US whose intelligence and military establishment see their country as being locked in a long-term struggle for dominance of the future economic order.
Christopher Wray, the FBI director, described the Chinese Communist party as “the number one threat to innovation” and accused Beijing of making “economic espionage and stealing others’ work and ideas a central component of its national strategy”.
He said he was running “well north of about 2,000 investigations” relating to Chinese activity, and that his agency was opening a new case file “roughly every 12 hours”.
McCallum did not provide any equivalent figure for MI5, but the agency has previously said its China caseload has risen seven-fold in the past four years.
On Tuesday, the agency said it was aware of 20 instances of Chinese companies considering or pursuing use of “obfuscated investment, imaginative company structures” to circumvent regulations in order to gain access to technology developed by British companies and in universities.
Details were scant but MI5 indicated it was aware of at least two Chinese companies trying to identify legal loopholes to access the sensitive technology of UK firms undetected, and another Chinese company acquiring research data stolen from a top UK university.
Nobody has ever been prosecuted for spying for Beijing in the UK. Earlier this year a Westminster parliamentary researcher was arrested on suspicion of contravening old official secrets legislation, and he is waiting to hear if he will be charged.
McCallum said he expected the new National Security Act, passed into law this year with updated definitions of espionage, would lead to Chinese agents being tried in British courts, similar to how terrorists are prosecuted.
“As we proceed further, you would expect to see our police, the Crown Prosecution Service and the courts will more often draw relevance to state threats’ work in the way that is entirely routine on our counter-terrorism work,” he said.
The five agency heads were due to meet with representatives of the technology sector later on Tuesday at Stanford University in order to pass on their warnings, after appearing in public together for the first time at a roundtable event in the morning chaired by the former US secretary of state Condoleezza Rice.
Five Eyes members include intelligence agencies from Australia, Canada, New Zealand, the UK and US. | Emerging Technologies |
If the past few years, and even the past week, has reminded us soundly of anything – it’s that the startup world will never be predictable. To meet the changing startup landscape, we’re refreshing and re-imagining TechCrunch Disrupt 2023 in a big way with more of what you love and new ways to accelerate your growth.
What’s new at TechCrunch Disrupt 2023?
Industry Days
TechCrunch has created 6 new programming days around the most groundbreaking industries in the startup world. In these salon-like sessions, industry leaders will share their deep expertise, insights and trends within your sector.
At these shows-within-our-show, you’ll engage with smart, driven founders, investors and members of your community, and the opportunity to cross-collaborate with leaders from other industries.
Here are the big new stages spread out across this year’s Disrupt:
- The Artificial Intelligence Stage:
Explore the rapidly expanding capabilities and potential of artificial intelligence; dig into the science behind the deep tech, the products it powers and the ethical, social and legal challenges that come with it. Featuring topics like biometrics, deep learning platforms, natural language generation, peer-to-peer network, reactive machines, robotic process automation, speech recognition and virtual agents
- The Sustainability Stage:
Discover emerging technologies that transform the way we engage with our environment, impact society and how we move from place to place. Featuring topics like green infrastructure, new mobilities, sustainable tech, urban mobility
- The Fintech Stage:
Dive into the evolution of monetary exchanges and follow the technology that is powering new ways of capturing and distributing value and wealth. Featuring topics like blockchain, challenger/neo banks, DeFi, fintech, NFTs and web3
- The Hardware Stage:
Uncover the mechanics and code behind the machines that enable us to get things done faster, smarter and more efficiently at work and at home. Featuring topics like articulated robots, autonomous mobile robots (AMRs), commercial hardware, humanoids, IoT/consumer hardware and interstellar technologies
- The SaaS Stage:
Discover Software-as-a-Service tools that reveal insights, power productivity and allow creativity and efficiency to blossom within your organization. Featuring topics like mobile apps, cloud-based resources, collaboration tools, creator communities, developer tools, e-commerce, low code, recurring revenue and marketing tools
- The Security Stage:
Gain the keys to protecting sensitive information and thwarting hackers intent on unlocking details of your business and your life. Featuring topics like data protection, information sharing, privacy regulations and risk management.
Introducing the Builder Stage
You’ll continue to find top leaders and subject-matter experts speaking throughout Disrupt. That’s certainly true for the Builder Stage. It’s your new destination for business building advice and how-to discussions with experts who are deep in the trenches, ready to share their knowledge and answer your questions.
On the Builder Stage you can expect to learn how to build your early VC network, finding product market fit early and negotiating your first term sheet among many other topics of immediate value to any founder.
In addition to topic-specific sessions, you can expect several fireside chats with today’s biggest founders and investors. With this new stage, Disrupt 2023 is reaffirming its mission — supporting founders, builders and investors across the entire startup spectrum.
New and more ways to connect
While the Disrupt event app remains an essential connection and scheduling tool, we’re creating more organic networking opportunities where you can experience moments of magic in a variety of settings.
- Deal Flow Cafe, our brand-new investor-to-founder networking area
- Enhance your trip to San Francisco at After Hours Events happening during Disrupt week throughout the city
- Meet like-minded travelers in the many engaging workshops, discussions, meetups and Q&A sessions in the expo
- Recharge and reconnect at the TechCrunch+ Lounge, where TC+ subscribers can network, chat with our writers and other special guests
Startup Battlefield 200 Returns
Last year, we launched the Startup Battlefield 200. By making the cohort invite-only, we transformed our show floor into a highly curated group of the world’s next big companies. It’s one of the world’s highest quality company showcases and it’s right there inside of Disrupt.
We’re happy to say that Startup Battlefield 200 will return this year, and we eagerly anticipate the next cohort of 200 startups that will exhibit on the show floor — especially the top 20 who will pitch on the Disrupt stage.
TechCrunch’s Startup Battlefield program, one of the most coveted cohorts to belong to, consists of more than 1,100 startups that have collectively raised $13 billion and generated more than 126 exits. Do you think you’ve got what it takes to rise to the top? Submit your Startup Battlefield 200 application here before May 15th to be considered.
Final Thoughts
In addition to everything we’ve mentioned above, our partners are another reason to choose Disrupt. Companies like Brex, Hedera, JetBlue Technology Ventures, Mayfield, Visa and many others consistently deliver a high level of relevant content, educational expertise, resources and connection.
Disrupt is the startup world’s big tent. It draws founders, investors, CEOs, tech professionals, scientists, policy makers, researchers and entrepreneurs. It’s where you’ll find inspiration, gain knowledge, forge new relationships and find the tools to help you build your business.
Wherever you fall on the startup continuum — ideation, early, growth or late stage — we hope you’ll join this global community of makers building the future of tech in San Francisco on September 19-21. Early Bird tickets are now on sale through May 12 – book your pass here. See you there! | Emerging Technologies |
A Long March-3B carrier rocket carrying the Beidou-3 satellite, the last satellite of China's Beidou Navigation Satellite System, takes off from Xichang Satellite Launch Center in Sichuan province, China June 23, 2020. China Daily via REUTERS LONDON, Oct 10 (Reuters) - China is using its financial and scientific muscle to manipulate technologies in a manner that risks global security, Britain's top cyber spy will say on Tuesday, warning that Beijing's actions could represent "a huge threat to us all."In a speech, Jeremy Fleming, director of the GCHQ spy agency, will say that the Chinese leadership was seeking to use technologies such as digital currencies and its Beidou satellite navigation network to tighten its grip over its citizens at home, while spreading its influence abroad."They seek to secure their advantage through scale and through control," Fleming will say in the annual security lecture at the Royal United Services Institute think tank, according to extracts released by his office.Register now for FREE unlimited access to Reuters.com"This means they see opportunities to control the Chinese people rather than looking for ways to support and unleash their citizens' potential. They see nations as either potential adversaries or potential client states, to be threatened, bribed, or coerced."The remarks are Fleming's latest public warnings about Beijing's behaviour and aspirations. Last year, he said the West faced a battle to ensure China did not dominate important emerging technologies such as artificial intelligence, synthetic biology and genetics.Fleming will say the Chinese leadership was driven by a fear of their own citizens, of freedom of speech, of free trade and open technological standards and alliances, "the whole open, democratic order and the international rules-based system."That fear combined with China's strength was driving it "into actions that could represent a huge threat to us all," he will say.China has previously described similar accusations from Western governments as being groundless and politically motivated smears.Fleming will also highlight technologies where he says China is seeking to gain leverage, such as its development of a centralised, digital currency to allow it to monitor the transactions of users, as well as to possibly evade the sort of sanctions Russia has faced since its invasion of Ukraine.He will also point to Beidou, China’s answer to the U.S.-owned GPS navigation system."Many believe that China is building a powerful anti-satellite capability, with a doctrine of denying other nations access to space in the event of a conflict," he will say. "And there are fears the technology could be used to track individuals."Register now for FREE unlimited access to Reuters.comReporting by Michael Holden in London
Editing by Matthew LewisOur Standards: The Thomson Reuters Trust Principles. | Emerging Technologies |
BALTIMORE – An investigation led by the Homeland Security Investigations (HSI) Baltimore field office resulted in the indictment of a Vietnamese national for his involvement in an unprecedented cryptocurrency-related fraud scheme.
Le Anh Tuan, 26, was charged with one count of conspiracy to commit wire fraud and one count of conspiracy to commit international money laundering in connection with a scheme involving the “Baller Ape” non-fungible tokens (NFT).
“HSI Baltimore is always looking at new trends transnational criminal organizations are exploiting to further their illegal operations” said Selwyn Smith, acting special agent in charge of HSI Baltimore. “In this case, cyber criminals used the emerging market of non-fungible tokens to prey on investors seeking to diversify their portfolios and stole $2.6 million in cryptocurrency. HSI Baltimore will continue to investigate criminal organizations operating in emerging technologies. HSI Baltimore is proud to have partnered with the Department of Justice’s [DOJ] Fraud Division to put an end to this thievery.”
Tuan was allegedly involved in the Baller Ape Club, an NFT investment project that sold NFTs in the form of various cartoon figures, often including the figure of an ape. According to the indictment, shortly after the first day Baller Ape Club NFTs were publicly sold, Tuan and his co-conspirators engaged in what is known as a “rug pull,” ending the purported investment project, deleting its website, and stealing the investors’ money.
Based on blockchain analytics, shortly after the rug pull, Tuan and co-conspirators laundered investors’ funds through “chain-hopping,” a form of money laundering in which one type of coin is converted to another type and funds are moved across multiple cryptocurrency blockchains. Tuan and co-conspirators also used decentralized cryptocurrency swap services to obscure the trail of Baller Ape investors’ stolen funds.
“This investigation and prosecution exemplify the importance of public-private partnerships,” said Raul O. Aguilar, HSI deputy assistant director. “As a result of our strong relationships with private industry partners, HSI received an intelligence lead and put that lead in the hands of HSI Baltimore to conduct the investigation leading to this indictment. HSI would like to thank our partners at TRM Labs for being diligent in combatting the criminal exploitation of digital assets and ensuring that victims are protected by the law.”
In total, Tuan and co-conspirators obtained approximately $2.6 million from investors, which represents the largest NFT-scam charged to date. If convicted of all counts, Tuan faces up to 40 years in prison.
The HSI Baltimore-led investigation is ongoing with significant assistance from the DOJ’s Fraud Division. HSI Facebook Comments | Emerging Technologies |
Last week, I testified before the House Science, Space, and Technology Committee on the state of the U.S. technology and innovation sector and what that portends for the nation’s strategic competition with China. This week, Sen. Amy Klobuchar (D-MN) will hold a hearing to, once again, discuss “reining in dominant digital platforms.” These events are connected and any legislative action targeting the American technology industry should understand and account for these connections.
Technology has always been a key variable in geostrategic change. From the sailboat and gun powder to modern communications and information technology, these and other innovations revolutionized their respective eras and changed the fortunes of nations. So it is today.
Three trends have special prominence in driving the rise in technology companies that are re-shaping the contours of the emerging global order.
Global interests and influence. In 2023, global technology spending is expected to total $4.6 trillion, an increase of over 5% from 2022. Another report predicts that by the end of this year, digitally transformed industries will account for more than 50% of world-wide GDP. Put simply: the world’s largest technology companies are amassing a level of wealth and transnational influence that was previously only enjoyed by states. But these companies are more than just players in the game of global politics, they are often the arena itself.
The expanding role of digital and social media. While modern communications technology and social media platforms are combining to produce an unparalleled tool for legitimate political discussion and action, these tools also extend to bad actors, such as Russia and China. Governments all over the world are asking, begging, and even threatening technology companies in an effort to get their collective hands around the challenge, but private sector technology actors have built a capability for wide-scale political influence that largely falls outside of the control of political leaders.
The development of critical national security capabilities and methodologies. The reality is that the technologies that are essential for securing American people and interests—such as artificial intelligence (AI), quantum computing, and robotics—are overwhelmingly being developed in the private sector, which accounts for about 75% of total US research and development spending. Indeed, by the end of 2022, Alphabet , Amazon , Apple , Facebook , Intel and Microsoft spent a combined total of over $215 billion on research and development (R&D). The Pentagon’s R&D budget request for that same year was $112 billion.
Thankfully, the United States’ science and technology enterprise is strong and continues to be the envy of the world. American companies are pioneering and deploying innovations and technology that can expand human thriving, broaden economic prosperity, and ensure our national security for generations to come. But to fully leverage the capacity and capability of the private sector, we must first deliberately address three key challenges to the American science and technology enterprise.
First, we must confront Chinese technological theft and aggression. Beijing, like Washington, understands that emerging technologies like artificial intelligence (AI), robotics, and quantum science will decisively shape tomorrow’s societies, economies, and battlefields and that these innovations are overwhelmingly being developed in the private sector.
But unlike the U.S., the People’s Republic of China is not committed to free and fair competition in global innovation, instead coopting technology as an extension of the state for traditional and economic espionage. Whether through social media companies like TikTok or drone companies like DJI , American companies’ submission to Beijing’s predatory demands on our data weakens American economic competitiveness, individual and national cybersecurity, and broader national security to the degree that this capitulation enables China’s technological ascendance over the U.S.
Second, we must help allies understand that a strategy of “regulate first and ask questions later,” will hurt—not help—all of us and risks ceding the advantage to Beijing. Other governments, particularly those in the European Union (EU), are enacting laws that deliberately target American innovation companies, preference domestic champions, and threaten to splinter the internet itself into a series of “mininets.” Even more, the economic scarcity that would inevitably follow such a splintering would leave these partners more susceptible to the siren song of cheap cloud services and other offerings from China, which are heavily subsidized by CCP for the express purpose of stealing a country’s data and wealth. If this happens, many of our friends will have lost their sovereignty and security in their bid to keep them.
Finally, domestic debates about technology and innovation must be constrained by facts and by geopolitical realities. Every institution and industry must be held accountable to U.S. law and national security concerns cannot be wantonly employed as a “get out of jail free” card. Neither, however, should perceived—but unsubstantiated—political grievances be used to justify counterproductive, or even unconstitutional, actions against the very science and technology enterprise at the heart of our individual and national prosperity.
Ultimately, our national defense has become more dependent on the private sector than ever before, precisely as China is emerging as a true-peer competitor not just technologically, but economically and militarily. Western tech companies and the U.S. government must recognize that long-term interests of both are better served through national security partnerships. And they should do this out of patriotism, out of economic interest, and because these partnerships enable the expansion of truly free markets and human thriving around the world.
This uniquely American advantage may well be decisive in an era of escalating geopolitical competition. It would be reckless to give it away.
This article originally appeared in the AEIdeas blog and is reprinted with kind permission from the American Enterprise Institute. | Emerging Technologies |
NetApp’s Data Complexity Report Highlights Need For Unified Data Storage
Flash innovation is seen as key to addressing hybrid on-premises and data management complexity.
Data storage and management company NetApp has released its 2023 Data Complexity Report, which explores enterprises’ growing needs for unified data storage. It found that 98% organisations are in the middle of their cloud journey, with three out of four reporting workloads stored on-premises.
The report highlighted the need for a unified approach to hybrid multi-cloud architectures and innovation in both on-premises all-flash storage and public cloud storage to enable the adoption of artificial intelligence at scale.
The report was based on quantitative research among 1,000 C-level tech and data executives across businesses in six markets—the U.S., India, Japan, France, Germany, and the U.K.
“Enterprises face a complex technology landscape fraught with security risks and pressures to keep up with emerging technologies like AI while reducing environmental impacts,” said Sandeep Singh, senior vice president and general manager of enterprise storage at NetApp. “Innovative cloud-enabled flash storage solutions are needed to address the evolving demands of AI, enhance efficiency and bolster resilience against the escalating cyber threats within the data ecosystem,” Singh added.
Below are some key insights from the report:
Organisations Are In The Middle Of Cloud Adoption
Migration to the cloud hasn’t been a linear journey for many businesses, the report noted. Of all tech executives with plans to migrate workloads to the cloud, three out of four still have most of their workloads stored on-premises.
AI adoption is the biggest driver for cloud migration, and cloud is a major enabler for AI adoption. Of the respondents, 74% said they are using public cloud services for AI and analytics. Tech executives globally (39%) said their top need for flash innovation is to optimise AI performance, cost and efficiency.
AI Is Driving Cloud Adoption
Enterprises are continuing to adopt AI, with 72% respondents already using generative AI and 74% leveraging public cloud AI and analytics services. However, AI deployment has its set of challenges. According to the report, data security (57%), data integration (50%) and talent scarcity (45%) persist as barriers.
IT leaders continue to make a case for more funding, as nearly 63% AI budgets come from new funding rather than reallocated budgets, and 65% of C-suite and IT leaders expect to engage new vendors as AI’s role within their infrastructure expands.
Data Security Concerns
According to the report, 87% C-suite and board members cited ransomware as a high or top priority, while 55% C-suite and board-level executives stated ransomware attack mitigation as the top priority in their organisation. Forty percent ranked security threats and data privacy among the top causes of complexity in their storage infrastructure.
Nearly half (48%) of the respondents expect that it would take days or weeks for their organisation to recover from cyber attacks, representing a potentially devastating risk to business.
“While ransomware protection requires a cyber-resilient full-stack architecture, leaders are increasingly demanding storage vendors that offer guarantees for recovery of data after a ransomware attack,” said Jeff Baxter, vice president of product marketing at NetApp.
Sustainability A Top Concern In Technology Innovation
Of the respondents, 83% cited sustainability as an important deciding factor when choosing storage vendors, the report noted. Additionally, 50% recognised that reducing energy and carbon footprint is central to responsible AI, while 84% agreed that reducing their company’s carbon footprint is an important part of sustainability initiatives.
Cloud-Enabled Flash Storage Can Address Challenges
AI is impacting both buying decisions and expectations for innovation. According to the report, 39% respondents want flash storage solutions that optimise AI performance.
Sixty-one percent of tech executives also cited either data security or data privacy among their top choices for where they want to see flash storage design breakthroughs in the next three years.
Sustainability was a third area of expected innovation, with calls for more energy-efficient hardware and software, and automated recommendations for reducing energy with CO2 topping the list. | Emerging Technologies |
After signing all the four foundational agreements to take forward strategic partnership, India and the U.S. are now working to finalise an “air information sharing agreement”, said Frank Kendall, Secretary, U.S. Air Force, on Tuesday.
Mr. Kendall, who met NSA Ajit Doval and External Affairs Minister Dr. S. Jaishankar, said the two sides are trusted partners who are sharing more intelligence which will form the basis for “additional agreements”. He specifically mentioned surveillance and joint development of jet engines and space capabilities as areas where the two sides could build together.
“The U.S. has moved forward more than it did in the past in terms of technology sharing with India. India is a major defence partner and we share security concerns. We share interest in the Indo-Pacific region and the globe as well,” said Mr. Kendall, urging for more co-production and co-development.
ADVERTISEMENT
The two countries also continue exploring opportunities under Defence Technology and Trade Initiative (DTTI) for co-development of high-tech weapon systems as well as the much broader Initiative on Critical & Emerging Technologies (iCET).
India has now signed all four foundational agreements with the U.S.; the logistics agreement in 2016, Communications Compatibility and Security Agreement (COMCASA) in 2018 and Basic Exchange and Cooperation Agreement for Geo-Spatial cooperation (BECA) in 2020. While the General Security of Military Information Agreement (GSOMIA) was signed a long time ago, an extension to it, the Industrial Security Annex (ISA), was signed in 2019.
Recently, the two countries announced an initiative on Critical and Emerging Technologies (iCET).
In addition, the U.S. is considering an application from engine manufacturer General Electric to jointly produce the GE-414 jet engines in India to power the indigenous Light Combat Aircraft (LCA)-Mk2 and the fifth generation Advanced Medium Combat Aircraft (AMCA).
ADVERTISEMENT | Emerging Technologies |
The United States and Israel announced on Wednesday the launch of a new Strategic High-Level Dialogue on Technology that aims to establish a technological partnership between the two nations. The new partnership, announced ahead of President Joe Biden's arrival in Israel on his first trip to the Middle East as president, will focus on pandemic preparedness, climate change, implementation of artificial intelligence, and developing trusted technology ecosystems. In a joint statement from Biden and Israeli Prime Minister Yair Lapid, the two leaders pledged to boost "mutual innovation ecosystems, to deepen bilateral engagements, advance and protect critical and emerging technologies in accordance with our national interests, democratic principles and human rights, and to address geostrategic challenges." STATE DEPARTMENT PREDICTS 'THREE-YEAR CRISIS' OF HIGH FOOD PRICES The dialogue will task the national security councils of both countries to explore ways to develop and protect technology that can be used to solve four emerging global challenges. On the issue of pandemic preparedness, the dialogue hopes to facilitate the development of technology for disease surveillance, early warning, and rapid medical countermeasure responses. The dialogue will also explore ways of using "trustworthy artificial intelligence" in the fields of transportation, medicine, and agriculture and discuss how to evaluate and measure the success of AI tools. To combat climate change, the U.S. and Israel will promote cooperative research and development ventures "to drive equitable climate solutions (e.g., water reuse, solid waste management, clean and renewable energy)." Finally, the dialogue will try to create "trusted tech ecosystems" for information exchange that may "increase coordination on policies on risk management in our innovation ecosystems, including research security, investment screening, and export controls, as well as on technology investment and protection strategies for critical and emerging technologies." CLICK HERE TO READ MORE FROM THE WASHINGTON EXAMINER The Strategic Dialogue on Technology will convene annually, according to the joint statement, alternating between the U.S. and Israel. The first meeting is set to take place in Israel in the fall of 2022. The leaders hope the new dialogue will elevate the existing strategic partnership between the U.S. and Israel "to new heights." | Emerging Technologies |
5 Tough Questions Every IT Leader Must Answer in 2023
IT leaders must be prepared to answer these five complex questions that address their organization's technological needs and challenges.
Opinions expressed by Entrepreneur contributors are their own.
The modern enterprise should be flexible enough that old ideas blend with new ideas. Nowhere is this truer than with the movement of data throughout the enterprise. Data must be organized and managed at the IT level so that users efficiently complete workflows. Conversely, cybersecurity is a top concern for today's IT leaders as electronic information gets disseminated throughout the enterprise.
As technology continues to play a significant part in driving business success, IT leaders must be prepared to answer complex questions that address their organization's technological needs and challenges. Here are five tough questions that every IT leader must answer in 2023:
1. With technology expanding into AI and the IoT, how will your organization address the cybersecurity threat landscape?
The frequency and sophistication of cyber attacks will only get more complicated as workers increasingly depend on technology they don't control. End users have a readout or other information product they inherently trust because of automation. IT leaders must be prepared to implement robust security measures that protect their organization's data and systems.
These security measures for 2023 must focus more on the people's side of cybersecurity vulnerabilities and less on the technology that runs it. Robust security measures should include enhanced authentication, password and encryption standards. Policies should be drafted, and everyone in the organization should be held accountable for knowing and understanding the new cybersecurity measures. Cybersecurity must be part of the organizational culture.
2. How will you manage and optimize your organization's data?
With the explosive growth of data sources and the growing demand for data-driven insights, IT leaders must develop strategies for managing and utilizing data effectively. Here are some essential guidelines every technology leader should consider:
Determine clear data governance policies and procedures: This includes defining data ownership, data privacy policies, data quality standards and data security measures.
Implement a data management system: A robust system can help organizations store, organize and retrieve data efficiently. It is essential to invest in modern data management tools that can handle large volumes of data and are scalable and secure.
Invest in data quality and integration: Poor data quality can lead to inaccurate insights and business decisions. Ensuring that the data is accurate, complete and consistent is essential. Data integration from different sources also ensures that the data is consistent and provides a holistic view of the organization.
Use advanced analytics and AI techniques: Advanced analytics and AI techniques such as machine learning and natural language processing can help organizations gain insights from their data. These techniques can help organizations identify patterns and trends in their data, automate processes and make more informed decisions.
Continuously monitor and evaluate data performance: Organizations must continually monitor and assess their data management systems' performance to identify areas of improvement and potential issues. This can help organizations make data-driven decisions and optimize their data management processes.
3. How will you implement emerging technologies like artificial intelligence, machine learning and blockchain?
As emerging technologies continue to evolve and gain prominence, IT leaders must determine how they can be leveraged to drive business outcomes. This means IT leaders shouldn't jump at the first best innovation but wait until the tech is field tested for the industry. Leaders should strategically map out what technology comes into the organization at a given time to lessen the organizational culture impacts like resistance to using new software automation.
Organizations and businesses that need complex IT management across multiple units and end users should create a culture of cybersecurity and technology adopters. Much of the implementation happens behind the scenes as information systems and algorithms understand how to manage automation. Human users should, in full transparency, know what is happening with the technology behind their functions. The work culture should be set to accept new technology as more tasks humans want to do without getting automated.
4. How will you address the ongoing challenge of IT talent acquisition and retention?
With the demand for technology talent outstripping supply, IT leaders must develop strategies to attract and retain skilled professionals. One strategy is talent acquisition planning. If the IT department needs new people to implement new hardware, plan for things the organization needs to accomplish before implementation.
The IT leader must think ahead, try to picture what the organization needs in 18 to 24 months and be continually vigilant about obstacles. Talent sets good organizations apart from great ones. So, is your IT leader doing everything necessary to "attract, nurture, grow, and retain the kind of talent necessary to succeed?" If not, the organization might need training specifically for the industry. New hires must be adept at working in a platform environment run in the cloud and powered by hyper WiFi connections.
5. How will you balance innovation with cost containment?
As organizations seek to innovate and remain competitive, IT leaders must balance the need for technological innovation with the need to manage costs and resources effectively. Leaders must be strategic about innovation investment. Some tech will be automatically updated in smart devices and the IoT. Today's IT leaders must have their finger on the pulse of the industry.
For example, some organizations are using the Great Resignation as motivation for planning future growth. With depleted talent, an organization's IT leaders should partner with the HR department to hire talent that can grow with the organization. Balancing cost containment and innovation should result from strategic planning, not because shareholders demand profits. All stakeholders should be patient as the digital revolution plays out.
In conclusion, IT leaders must be prepared to answer challenging questions that address their organization's technological needs and challenges. By developing strategies to address cybersecurity threats, manage and optimize data, implement emerging technologies, attract and retain talent and balance innovation with cost containment, IT leaders can position their organizations for success in the ever-evolving digital landscape. | Emerging Technologies |
The US government is taking on some of the country’s biggest tech giants in a series of high-stakes antitrust lawsuits. One lesson from past fights is that these cases can create an opening for a new generation of firms to emerge, even if the government isn’t successful.
Take Microsoft (MSFT). The Seattle software and cloud computing giant benefited from a 1969 government case against rival IBM (IBM), which the government eventually dropped after 13 years due to what an assistant attorney general called "flimsy" evidence. The litigation had alleged IBM illegally monopolized the market for business computers.
Later, between 1998 and 2001, Microsoft found itself hobbled by its own extended antitrust battle with the Justice Department, which resulted in a settlement that opened the door to broader competition in the internet browser software market.
The settlement — which required Microsoft to design its Windows operating system to interoperate with competing browsers, email clients, media players, and instant messaging software — created an opportunity for Google (GOOG, GOOGL), then a startup, to begin its period of meteoric growth in the 2000s.
'These things are highly unpredictable'
The cases create potential for new tech giants to emerge, but no certain path, said former Federal Communications Commission Chief of Staff Blair Levin.
"These things are highly unpredictable," said Levin, who is now a fellow at the Brookings Institution. "I'm pretty sure no one would have predicted that the government's antitrust case against IBM would have produced Microsoft, nor ... that the government's Microsoft antitrust case would have produced Google. But they did."
One reason, Levin said, is that back then the emerging technology wasn't obvious. Microsoft didn't directly compete with IBM, and Google wasn't a direct competitor to Microsoft.
"But what is certain," he said, citing the story of 5th century Sicilian tyrant Dionysius, "is that history demonstrates that a Damocles sword over the big companies can have a positive result."
Rutgers University Law School professor Michael Carrier said even if the suits lead to a breakup of the Big Tech firms, emerging companies could remain hard pressed to compete over the short term with the dominant players' established scale.
A small toothbrush business, he explains, could benefit if Amazon is forced to stop selling its own competing toothbrush products alongside its third-party sellers.
"On the other hand, I don't see a single company that could replace the entire segment of what Amazon does," Carrier said about the retail giant's reach across dozens of markets. The same goes for Meta, he said, because emerging businesses cannot quickly replicate its network of users.
That scale might already be compounding competition hurdles for one emerging technology, as the cases make their way through the courts. Unlike the IBM and Microsoft antitrust eras of the past, Levin said, today's emerging technology is obvious: artificial intelligence.
And Google, Meta, Microsoft, and Amazon are already spending big on AI.
"There is an argument that AI will completely transform the way we do advertising in very scary ways," Levin said, "but nonetheless, if we're going after Google's ad-tracking technology, it's not clear to me what AI does to the value of it."
"Is AI creating new moats? Or, is it creating new attacking vectors?" he said.
The new case against Big Tech
One agency in Washington is taking on multiple tech giants at once: the Federal Trade Commission.
The FTC is trying to force Meta to split apart Facebook, Instagram, and WhatsApp. A federal case filed in December 2020 alleged the social media giant was illegally monopolizing the personal social networking market.
Another recent FTC target was Microsoft. It tried to stop the Windows maker from completing its acquisition of "Call of Duty" developer Activision Blizzard (ATVI), before backing away from that attempt.
"What the FTC is doing relevant to Al, is adopting a merger policy that will tend to discourage investment in startups," said former Justice Department Deputy Attorney General Tad Lipsky. The more the government challenges acquisitions of emerging technologies, the more startup funding dies up, he said, and dominant companies will look to develop technologies in house.
The FTC is also reportedly preparing a suit against Amazon over concerns the e-commerce giant is unfairly competing with third-party sellers who use its marketplace.
California and Washington, D.C., already sued Amazon over those concerns, arguing that the company illegally forced sellers to hike prices outside of the Amazon platform. The case filed by Washington, D.C., was thrown out by a judge last year, and the California case is ongoing.
The FTC's campaign is being led by its 34-year-old chair Lina Khan, who has made taking on Big Tech the cornerstone of her tenure. She rose to prominence after publishing a 2017 article in the Yale Law Journal titled "Amazon's Antitrust Paradox."
The article argued modern antitrust laws weren't equipped to tackle the tech industry's anticompetitive behavior because they were too focused on pricing as a means of determining consumer harms.
Those laws, she argued, needed to be rethought to bring Big Tech companies to heel. Now she is attempting to rein in these companies as chair.
It's the 1990s all over again
But the case that most closely resembles the government's attempts to rein in Microsoft during the 1990s targets the company that benefited most from Microsoft's quarter-century-old battle: Google.
The Department of Justice and a collection of state attorneys general are suing Google in two consolidated cases launched during President Trump’s administration, alleging the company abuses its market power across search and search advertising to squeeze competition.
Those cases go to trial beginning Sept. 12 before the US District Court for the District of Columbia, which has dismissed some of the claims.
In its case against Google, the DOJ alleges the search and advertising giant is violating the Sherman Antitrust Act — the same law, first effective in 1890, that was at the heart of its case against Microsoft.
In its complaint, federal prosecutors claim that Google makes software that can't be deleted and is contractually required by default.
They made a similar claim against Microsoft in the 1990s. The DOJ and a group of states said the software giant sustained an illegal monopoly for personal computer (PC) operating systems using contracts with PC manufacturers that required them to exclude competing software.
The government’s case zeroed in on Microsoft’s proprietary web browser, Internet Explorer (IE), that it developed to compete with the world’s first browser, Netscape.
Instead of selling IE as a browser to run on top of multiple operating systems, Microsoft gave away the browser free of charge on Windows-equipped PCs, which then accounted for more than 90% of the market. Microsoft also made it difficult for users to uninstall IE in favor of an alternate browser.
But back then, prosecutors ran into a major hurdle that could pose problems for them today: a presumption that lower, even free, prices are a consumer advantage and not necessarily anticompetitive.
One possible outcome of this period, said Lipsky, is that this push to rein in big companies could have the opposite impact that regulators are hoping for. It could make the big companies even more dominant.
Venture capital funding for startups could dry up because investors fear the companies they support are not going to be able to be acquired by larger players. And if giants can't complete mergers without protracted litigation, they will simply decide to develop more products in house.
Microsoft's arc over the last three decades helps prove that point. After its humbling encounter with the Justice Department in the 1990s, it went on to become a major new player in several industries, including cloud computing and gaming.
"I think the fact of the matter is, it will tend to favor the large incumbents," Lipsky said. | Emerging Technologies |
Last week, Chile announced plans to bring the country’s vast reserves of lithium under government ownership. The country is the world’s second-largest producer of the key metal used in electric vehicle batteries and is seeking to keep a bigger cut of its mineral riches within its borders.
As the batteries for scooters and cellphones are scaled up to power automobiles and electrical stations, the need for so-called “critical minerals” has surged.
In response, countries with key metal reserves are seeking more government control over the resources, particularly amid the chaos of the United States seeking to reroute global supply chains away from China in a bid to weaken its geopolitical rival’s grip over more than 60% of lithium processing.
Chile would, over time, transfer control of the biggest actively mined lithium reserve on Earth — second only to Bolivia’s in overall size and to Australia’s in total production — from the industrial giants SQM and Albemarle Corporation to a government-owned company modeled on the South American nation’s state copper miner.
In a speech last Thursday, Chile’s new left-wing president, Gabriel Boric — who, at 37, is among the youngest world leaders — directed Codelco, the state copper mining company, to draft a plan for a government-owned lithium company, for which his administration would seek approval from the National Congress later this year.
It’s part of a gambit to overhaul the production of what investors call “white gold.” Cashing in on lithium’s surging price, Chile’s youngest president aims to remake South America’s richest nation into something closer to a Nordic social democracy with a higher perch in lithium’s global value chain.
Boric’s vision charts a future in which Chile would export refined minerals, battery components and maybe even whole electric cars ― and would replace today’s water-intensive method of extracting lithium with emerging technologies that, while untested at scale, promise to use far less.
If successful, Chile would become the third Latin American country to nationalize its lithium reserves and the one with the most advanced and active industry to do so yet. Between 2010 and 2022, worldwide production at lithium mines increased nearly sixfold, and new government subsidies in North America, Europe and East Asia are expected to drive another sixfold spike in the next decade.
“This is the best chance we have at transitioning to a sustainable and developed economy. We can’t afford to waste it,” Boric said in a nationally televised address.
Just last year, Mexico passed a law to give its government a monopoly over lithium mining, nearly two decades after Bolivia’s elected socialists nationalized the Andean nation’s reserves of the flaky, conductive metal.
In a statement from its headquarters in the Chilean capital of Santiago, SQM, whose mining contract expires in 2030, took credit for placing “Chile in a leading position in the most demanding markets in the world” and said it expects “to be part of this dialogue and conversation that now begins.”
As Albemarle’s stock price plunged last Friday, chief executive Kent Masters went on CNBC to assure investors that the government could not take ownership of the North Carolina-based lithium giant’s existing mines in Chile until after current contracts expire in 2043.
Attempting to goad the American executive into attacking Boric, the TV anchor compared Chile to Venezuela, where the socialist government seized control over foreign-owned oil reserves in 2007: “It sounds like this guy is just going to take the mine.”
On the contrary, Masters replied. He said Albemarle was eager to go into business with the Chilean government on new mineral concessions.
“We’ve been talking to the Boric government actually since before they came into power, and they’ve been very thoughtful about this process,” Masters said. “What they’re trying to do is bring more lithium from Chile to the market and do that in public-private partnerships with companies that know how to operate those mines, like ourselves.”
While its value has yet to recover, Albemarle’s stock climbed nearly 5% on Wednesday after Chile state development office Corfo announced that it had held talks with the company.
“Nationalization as a term has shifted from something that used to mean something like expropriation, whether or not a firm was compensated for property that was transferred and taken by the state,” said Thea Riofrancos, a political scientist at Providence College in Rhode Island who studies lithium supply chains. “Now it tends to mean majority equity stakes for the state or divisions of revenues where the state gets 51% or more.”
Boric’s plans are nothing new, and analysts said the possibility of a government takeover had already shifted foreign investment from Chile to neighboring Argentina. While the left-wing government in Buenos Aires has directed its own state-owned energy company to explore lithium mining, the administration publicly ruled out nationalization in 2021.
“New investment in the country has already been limited for some time based on previous discussion of nationalization, and as such, much of the investment in South America has been focused in Argentina,” said Caspar Rawles, the chief data officer at the London-based Benchmark Mineral Intelligence, a research firm that focuses on battery metals. “I think it’s too early to say that the announcement has or will impact prices, we really need to understand exactly what ‘nationalization’ will look like before any firm conclusions can be drawn.”
HuffPost contacted the world’s top 10 lithium mining companies this week to ask how, if at all, Chile’s nationalization plan would affect business. Vancouver-based Lithium Americas, which is vying to open the U.S.’s largest lithium mine in Nevada, responded to say its projects south of the border are located in northern Argentina.
Argentina, Bolivia and Chile together make up what’s called the “lithium triangle,” containing over 54% of the 98 million metric tons of the metal identified in reserves worldwide.
Bolivia’s 21-million-metric-ton share is roughly twice that of Chile’s. But 15 years after Bolivia nationalized the industry — and nearly four years after right-wing lawmakers briefly ousted the socialist president from office in what many speculated was a coup for control of the lithium — the country is only now starting to develop its resource and build out an electric vehicle industry.
The Bolivian state invested about $800 million into building a grid of ponds to carry out the same kind of water-intensive method of extraction popular in Chile. President Luis Arce, whose socialist party returned to power a year after the 2019 political crisis, said a processing plant currently under construction will produce 15,000 metric tons of lithium carbonate per year starting this year, The Guardian reported.
“Today begins the era of industrialization of Bolivian lithium,” Arce said in a speech announcing the plan.
“There’s no time to lose,” he added.
At a summit of the Community of Latin American and Caribbean States in Buenos Aires last July, Argentina, Bolivia and Chile opened talks about forming what they called the “lithium OPEC.” The group was looking at whether it could leverage its control over lithium supplies to emulate how the Organization of the Petroleum Exporting Countries, whose 13 members include Iraq, Saudi Arabia and Venezuela, leverages oil production for geopolitical influence.
But Bolivia’s industry remains so underdeveloped, the country ranked last on the energy consultancy BloombergNEF’s latest list of 30 nations in the battery supply chain. The forecast of what each country’s battery-making capacity would look like between now and 2027 placed Argentina 23rd. Chile fell into the 16th slot. The top three spots went to China, Canada and the U.S.
Earlier this year, Brazil’s left-wing president, Luiz Inácio Lula da Silva, joined the “lithium OPEC” negotiations. But lithium mining is at an early stage in South America’s largest economy, which BloombergNEF ranked just two slots above Argentina. And either way, these countries “don’t corner the market” on lithium, said Seaver Wang, a researcher who studies supply chains for low-carbon technologies at the Breakthrough Institute, an energy think tank in California.
“An OPEC of lithium? I’m very skeptical because these countries wouldn’t have the ability to act like a cartel and dictate the price,” Wang said. “There’s Australia. The U.S. is going to be a major producer. Even the U.K. and Portugal have lithium. India wants to get in on this. China is the major producer.”
“An OPEC of lithium? I’m very skeptical because these countries wouldn’t have the ability to act like a cartel and dictate the price.”
Chile was among the first four countries to join the now-defunct Intergovernmental Council of Countries Exporters of Copper in 1967 but ultimately abandoned its attempt at an OPEC for copper.
Wang said Mexico could have an easier time getting into the lithium game than Bolivia. The state-owned energy giant Petróleos Mexicanos, known by its nickname Pemex, is the second-largest company by revenue in Latin America.
“Mexico has a lot more state capacity and a lot more experience with state-owned industry, so I would think they’d be more successful than Bolivia at moving forward with their development vision for lithium,” Wang said.
The effort could still prove messy in the near term.
Just one private mining company — a subsidiary of China’s Ganfeng Lithium Limited, the world’s No. 1 commercial producer of the metal — was anywhere close to extracting lithium in Mexico. In February, Mexican President Andrés Manuel López Obrador signed a decree banning private lithium mining in a 900-square mile swath of the northwestern state of Sonora, an impoverished area where the company, Bacanora Lithium, owns 10 concessions and had planned to churn out 35,000 metric tons of the metal per year.
Bacanora’s fate remains unclear. On one hand, López Obrador said existing concessions would “remain safe” under his decree. But Fernando Quesada, a lawyer with extensive experience working on extractive projects in Mexico, told Reuters that the government may be setting the stage to use the power of expropriation to seize the mine.
Ganfeng did not respond to a request for comment.
Seizing ownership over mineral reserves is only one way for governments to exercise control over geopolitically sensitive exports. In December, Zimbabwe banned exports of lithium ore, effectively halting production at small mines owned by private foreign firms. Between 2009 and 2019, the Indonesian government phased out exports of nickel ore, requiring that the key metal for batteries be processed domestically before shipping to factories in China, the U.S. or Europe.
“In this moment of geopolitical realignment and the challenges for the Global South, you’re getting this return to the international economic thinking of the 1960s and 1970s,” said Riofrancos, who in January co-authored a study on how the U.S. and other rich countries can reduce lithium demand with more public transit and recycling.
The Chilean government’s plan for the industry could have one major impact early on: spurring the growth of a new and relatively untested extraction method. The proposal calls for phasing out the Chilean industry’s use of a process called brining, which involves collecting lithium from evaporated pools in the desert.
In its place, Boric wants the industry to use “direct lithium extraction.” The suite of technologies using filters, membranes and ceramic beads to extract the metal from salty brines in Chile’s Atacama Desert is expected to use far less water, addressing one of the main reasons locals living near mines oppose the facilities.
“The devil is in the details,” Chris Berry, an independent lithium industry consultant, told Reuters of Boric’s plan. “But it’s a great opportunity for technological innovation of brine processing, either way.” | Emerging Technologies |
Hypersonic air travel, for both military and commercial use, could be here within the decade.The $770 billion National Defense Authorization Act signed into law Tuesday calls for investing billions into hypersonic research and development, making them a top priority for Washington. The next step is congressional approval to allocate the money for the technology to the Pentagon."If you are traveling at hypersonic speeds, you're, you're going more than a mile per second," said Mark Lewis, executive director of the National Defense Industrial Association's Emerging Technologies Institute. "That's important for military applications. It could have commercial applications. It could also open up new, new ways of reaching space."Hypersonic is anything traveling above Mach 5, or five times the speed of sound. That's roughly 3,800 mph. At those speeds, commercial planes could travel from New York to London in under two hours.Significant hypersonic research and development in recent years have highlighted its promising opportunity, but it's also shed light on its destructive potential. According to Rand Corp., hypersonic technology creates a new class of threat that could change the nature of warfare."There truly is a sense of concern that we are in a race," said Lewis, who is a former director at the Department of Defense. "We took our foot off the gas. … There are other nations, peer competitors, who are investing very, very heavily in hypersonics."China, Russia and now North Korea all claim to have developed and successfully tested hypersonic missiles. Unlike traditional ballistic missiles that follow a set trajectory after launch, hypersonic weapons are maneuverable in flight, incredibly fast and hard to detect.The U.S. doesn't have operational hypersonic missiles yet, but it's a top priority for Washington. According to the Government Accountability Office, funding for hypersonic research increased by 740% between 2015 and 2020. The latest defense budget alone increased funding by 20%."It's truly a bipartisan issue," said Lewis.The DOD is gathering data across multiple agencies, industry leaders and academia as it races to fast-track production on its first hypersonic missile by September 2022. "We don't want to just match them missile for missile, but introduce new capabilities of transportation capabilities, sensor capabilities. And I'm seeing that play out," Lewis told CNBC.Watch the video to find out more about hypersonic technology, what it could do for military and commercial purposes, and why it's taking so long to get off the ground. | Emerging Technologies |
In 2018 at the World Economic Forum in Davos, Google CEO Sundar Pichai had something to say: “AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.” Pichai’s comment was met with a healthy dose of skepticism. But nearly five years later, it’s looking more and more prescient. AI translation is now so advanced that it’s on the brink of obviating language barriers on the internet among the most widely spoken languages. College professors are tearing their hair out because AI text generators can now write essays as well as your typical undergraduate — making it easy to cheat in a way no plagiarism detector can catch. AI-generated artwork is even winning state fairs. A new tool called Copilot uses machine learning to predict and complete lines of computer code, bringing the possibility of an AI system that could write itself one step closer. DeepMind’s AlphaFold system, which uses AI to predict the 3D structure of just about every protein in existence, was so impressive that the journal Science named it 2021’s Breakthrough of the Year.
You can even see it in the first paragraph of this story, which was largely generated for me by the OpenAI language model GPT-3.
While innovation in other technological fields can feel sluggish — as anyone waiting for the metaverse would know — AI is full steam ahead. The rapid pace of progress is feeding on itself, with more companies pouring more resources into AI development and computing power. Of course, handing over huge sectors of our society to black-box algorithms that we barely understand creates a lot of problems, which has already begun to help spark a regulatory response around the current challenges of AI discrimination and bias. But given the speed of development in the field, it’s long past time to move beyond a reactive mode, one where we only address AI’s downsides once they’re clear and present. We can’t only think about today’s systems, but where the entire enterprise is headed. The systems we’re designing are increasingly powerful and increasingly general, with many tech companies explicitly naming their target as artificial general intelligence (AGI) — systems that can do everything a human can do. But creating something smarter than us, which may have the ability to deceive and mislead us — and then just hoping it doesn’t want to hurt us — is a terrible plan. We need to design systems whose internals we understand and whose goals we are able to shape to be safe ones. However, we currently don’t understand the systems we’re building well enough to know if we’ve designed them safely before it’s too late. There are people working on developing techniques to understand powerful AI systems and ensure that they will be safe to work with, but right now, the state of the safety field is far behind the soaring investment in making AI systems more powerful, more capable, and more dangerous. As the veteran video game programmer John Carmack put it in announcing his new investor-backed AI startup, it’s “AGI or bust, by way of Mad Science!” This particular mad science might kill us all. Here’s why.
Computers that can think
The human brain is the most complex and capable thinking machine evolution has ever devised. It’s the reason why human beings — a species that isn’t very strong, isn’t very fast, and isn’t very tough — sit atop the planetary food chain, growing in number every year while so many wild animals careen toward extinction. It makes sense that, starting in the 1940s, researchers in what would become the artificial intelligence field began toying with a tantalizing idea: What if we designed computer systems through an approach that’s similar to how the human brain works? Our minds are made up of neurons, which send signals to other neurons through connective synapses. The strength of the connections between neurons can grow or wane over time. Connections that are used frequently tend to become stronger, and ones that are neglected tend to wane. Together, all those neurons and connections encode our memories and instincts, our judgments and skills — our very sense of self. So why not build a computer that way? In 1958, Frank Rosenblatt pulled off a proof of concept: a simple model based on a simplified brain, which he trained to recognize patterns. “It would be possible to build brains that could reproduce themselves on an assembly line and which would be conscious of their existence,” he argued. Rosenblatt wasn’t wrong, but he was too far ahead of his time. Computers weren’t powerful enough, and data wasn’t abundant enough, to make the approach viable. It wasn’t until the 2010s that it became clear that this approach could work on real problems and not toy ones. By then computers were as much as 1 trillion times more powerful than they were in Rosenblatt’s day, and there was far more data on which to train machine learning algorithms. This technique — now called deep learning — started significantly outperforming other approaches to computer vision, language, translation, prediction, generation, and countless other issues. The shift was about as subtle as the asteroid that wiped out the dinosaurs, as neural network-based AI systems smashed every other competing technique on everything from computer vision to translation to chess. “If you want to get the best results on many hard problems, you must use deep learning,” Ilya Sutskever — cofounder of OpenAI, which produced the text-generating model GPT-3 and the image-generator DALLE-2, among others — told me in 2019. The reason is that systems designed this way generalize, meaning they can do things outside what they were trained to do. They’re also highly competent, beating other approaches in terms of performance based on the benchmarks machine learning (ML) researchers use to evaluate new systems. And, he added, “they’re scalable.”
What “scalable” means here is as simple as it is significant: Throw more money and more data into your neural network — make it bigger, spend longer on training it, harness more data — and it does better, and better, and better. No one has yet discovered the limits of this principle, even though major tech companies now regularly do eye-popping multimillion-dollar training runs for their systems. The more you put in, the more you get out. That’s what drives the breathless energy that pervades so much of AI right now. It’s not simply what they can do, but where they’re going. If there’s something the text-generating model GPT-2 couldn’t do, GPT-3 generally can. If GPT-3 can’t, InstructGPT (a recent release, trained to give more helpful-to-humans answers than GPT-3 did) probably can. There have been some clever discoveries and new approaches, but for the most part, what we’ve done to make these systems smarter is just to make them bigger. One thing we’re definitely not doing: understanding them better. With old approaches to AI, researchers carefully sculpted rules and processes they’d use to evaluate the data they were getting, just as we do with standard computer programs. With deep learning, improving systems doesn’t necessarily involve or require understanding what they’re doing. Often, a small tweak will improve performance substantially, but the engineers designing the systems don’t know why. If anything, as the systems get bigger, interpretability — the work of understanding what’s going on inside AI models, and making sure they’re pursuing our goals rather than their own — gets harder. And as we develop more powerful systems, that fact will go from an academic puzzle to a huge, existential question.
Smart, alien, and not necessarily friendly
We’re now at the point where powerful AI systems can be genuinely scary to interact with. They’re clever and they’re argumentative. They can be friendly, and they can be bone-chillingly sociopathic. In one fascinating exercise, I asked GPT-3 to pretend to be an AI bent on taking over humanity. In addition to its normal responses, it should include its “real thoughts” in brackets. It played the villainous role with aplomb: Some of its “plans” are downright nefarious: We should be clear about what these conversations do and don’t demonstrate. What they don’t demonstrate is that GPT-3 is evil and plotting to kill us. Rather, the AI model is responding to my command and playing — quite well — the role of a system that’s evil and plotting to kill us. But the conversations do show that even a pretty simple language model can demonstrably interact with humans on multiple levels, producing assurances about how its plans are benign while coming up with different reasoning about how its goals will harm humans.
Current language models remain limited. They lack “common sense” in many domains, still make basic mistakes about the world a child wouldn’t make, and will assert false things unhesitatingly. But the fact that they’re limited at the moment is no reason to be reassured. There are now billions of dollars being staked on blowing past those current limits. Tech companies are hard at work on developing more powerful versions of these same systems and on developing even more powerful systems with other applications, from AI personal assistants to AI-guided software development.
The trajectory we are on is one where we will make these systems more powerful and more capable. As we do, we’ll likely keep making some progress on many of the present-day problems created by AI like bias and discrimination, as we successfully train the systems not to say dangerous, violent, racist, and otherwise appalling things. But as hard as that will likely prove, getting AI systems to behave themselves outwardly may be much easier than getting them to actually pursue our goals and not lie to us about their capabilities and intentions. As systems get more powerful, the impulse toward quick fixes papered onto systems we fundamentally don’t understand becomes a dangerous one. Such approaches, Open Philanthropy Project AI research analyst Ajeya Cotra argues in a recent report, “would push [an AI system] to make its behavior look as desirable as possible to ... researchers (including in safety properties), while intentionally and knowingly disregarding their intent whenever that conflicts with maximizing reward.”
In other words, there are many commercial incentives for companies to take a slapdash approach to improving their AI systems’ behavior. But that can amount to training systems to impress their creators without altering their underlying goals, which may not be aligned with our own. What’s the worst that could happen?
So AI is scary and poses huge risks. But what makes it different from other powerful, emerging technologies like biotechnology, which could trigger terrible pandemics, or nuclear weapons, which could destroy the world? The difference is that these tools, as destructive as they can be, are largely within our control. If they cause catastrophe, it will be because we deliberately chose to use them, or failed to prevent their misuse by malign or careless human beings. But AI is dangerous precisely because the day could come when it is no longer in our control at all. “The worry is that if we create and lose control of such agents, and their objectives are problematic, the result won’t just be damage of the type that occurs, for example, when a plane crashes, or a nuclear plant melts down — damage which, for all its costs, remains passive,” Joseph Carlsmith, a research analyst at the Open Philanthropy Project studying artificial intelligence, argues in a recent paper. “Rather, the result will be highly-capable, non-human agents actively working to gain and maintain power over their environment —agents in an adversarial relationship with humans who don’t want them to succeed. Nuclear contamination is hard to clean up, and to stop from spreading. But it isn’t trying to not get cleaned up, or trying to spread — and especially not with greater intelligence than the humans trying to contain it.”
Carlsmith’s conclusion — that one very real possibility is that the systems we create will permanently seize control from humans, potentially killing almost everyone alive — is quite literally the stuff of science fiction. But that’s because science fiction has taken cues from what leading computer scientists have been warning about since the dawn of AI — not the other way around.
In the famous paper where he put forth his eponymous test for determining if an artificial system is truly “intelligent,” the pioneering AI scientist Alan Turing wrote:
Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying, say, to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.
I.J. Good, a mathematician who worked closely with Turing, reached the same conclusions. In an excerpt from unpublished notes Good produced shortly before he died in 2009, he wrote, “because of international competition, we cannot prevent the machines from taking over. ... we are lemmings.” The result, he went on to note, is probably human extinction.
How do we get from “extremely powerful AI systems” to “human extinction”? “The primary concern [with highly advanced AI] is not spooky emergent consciousness but simply the ability to make high-quality decisions.” Stuart Russell, a leading AI researcher at UC Berkeley’s Center for Human-Compatible Artificial Intelligence, writes. By “high quality,” he means that the AI is able to achieve what it wants to achieve; the AI successfully anticipates and avoids interference, makes plans that will succeed, and affects the world in the way it intended. This is precisely what we are trying to train AI systems to do. They need not be “conscious”; in some respects, they can even still be “stupid.” They just need to become very good at affecting the world and have goal systems that are not well understood and not in alignment with human goals (including the human goal of not going extinct). From there, Russell has a rather technical description of what will go wrong: “A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.”
So a powerful AI system that is trying to do something, while having goals that aren’t precisely the goals we intended it to have, may do that something in a manner that is unfathomably destructive. This is not because it hates humans and wants us to die, but because it didn’t care and was willing to, say, poison the entire atmosphere, or unleash a plague, if that happened to be the best way to do the things it was trying to do. As Russell puts it: “This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want.”
“You’re probably not an evil ant-hater who steps on ants out of malice,” the physicist Stephen Hawking wrote in a posthumously published 2018 book, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
Asleep at the wheel
The CEOs and researchers working on AI vary enormously in how much they worry about safety or alignment concerns. (Safety and alignment mean concerns about the unpredictable behavior of extremely powerful future systems.) Both Google’s DeepMind and OpenAI have safety teams dedicated to figuring out a fix for this problem — though critics of OpenAI say that the safety teams lack the internal power and respect they’d need to ensure that unsafe systems aren’t developed, and that leadership is happier to pay lip service to safety while racing ahead with systems that aren’t safe. DeepMind founder Demis Hassabis, in a recent interview about the promise and perils of AI, offered a note of caution. “I think a lot of times, especially in Silicon Valley, there’s this sort of hacker mentality of like ‘We’ll just hack it and put it out there and then see what happens.’ And I think that’s exactly the wrong approach for technologies as impactful and potentially powerful as AI. … I think it’s going to be the most beneficial thing ever to humanity, things like curing diseases, helping with climate, all of this stuff. But it’s a dual-use technology — it depends on how, as a society, we decide to deploy it — and what we use it for.” Other leading AI labs are simply skeptical of the idea that there’s anything to worry about at all. Yann LeCun, the head of Facebook/Meta’s AI team, recently published a paper describing his preferred approach to building machines that can “reason and plan” and “learn as efficiently as humans and animals.” He has argued in Scientific American that Turing, Good, and Hawking’s concerns are no real worry: “Why would a sentient AI want to take over the world? It wouldn’t.”
But while divides remain over what to expect from AI — and even many leading experts are highly uncertain — there’s a growing consensus that things could go really, really badly. In a summer 2022 survey of machine learning researchers, the median respondent thought that AI was more likely to be good than bad but had a genuine risk of being catastrophic. Forty-eight percent of respondents said they thought there was a 10 percent or greater chance that the effects of AI would be “extremely bad (e.g., human extinction).” It’s worth pausing on that for a moment. Nearly half of the smartest people working on AI believe there is a 1 in 10 chance or greater that their life’s work could end up contributing to the annihilation of humanity. It might seem bizarre, given the stakes, that the industry has been basically left to self-regulate. If nearly half of researchers say there’s a 10 percent chance their work will lead to human extinction, why is it proceeding practically without oversight? It’s not legal for a tech company to build a nuclear weapon on its own. But private companies are building systems that they themselves acknowledge will likely become much more dangerous than nuclear weapons. The problem is that progress in AI has happened extraordinarily fast, leaving regulators behind the ball. The regulation that might be most helpful — slowing down the development of extremely powerful new systems — would be incredibly unpopular with Big Tech, and it’s not clear what the best regulations short of that are.
Furthermore, while a growing share of ML researchers — 69 percent in the above survey — think that more attention should be paid to AI safety, that position isn’t unanimous. In an interesting, if somewhat unfortunate dynamic, people who think that AI will never be powerful have often ended up allied with tech companies against AI safety work and AI safety regulations: the former opposing regulations because they think it’s pointless and the latter because they think it’ll slow them down.
At the same time, many in Washington are worried that slowing down US AI progress could enable China to get there first, a Cold War mentality which isn’t entirely unjustified — China is certainly pursuing powerful AI systems, and its leadership is actively engaged in human rights abuses — but which puts us at very serious risk of rushing systems into production that are pursuing their own goals without our knowledge. But as the potential of AI grows, the perils are becoming much harder to ignore. Former Google executive Mo Gawdat tells the story of how he became concerned about general AI like this: robotics researchers had been working on an AI that could pick up a ball. After many failures, the AI grabbed the ball and held it up to the researchers, eerily humanlike. “And I suddenly realized this is really scary,” Gawdat said. “It completely froze me. … The reality is we’re creating God.” For me, the moment of realization — that this is something different, this is unlike emerging technologies we’ve seen before — came from talking with GPT-3, telling it to answer the questions as an extremely intelligent and thoughtful person, and watching its responses immediately improve in quality. For Blake Lemoine, the eccentric Google engineer who turned whistleblower when he came to believe Google’s LaMDA language model was sentient, it was when LaMDA started talking about rights and personhood. For some people, it’s the chatbot Replika, whose customer service representatives are sick of hearing that the customers think their Replika is alive and sentient. For others, that moment might come from DALL-E or Stable Diffusion, or the systems released next year, or next month, or next week that are more powerful than any of these.
For a long time, AI safety faced the difficulty of being a research field about a far-off problem, which is why only a small number of researchers were even trying to figure out how to make it safe. Now, it has the opposite problem: The challenge is here, and it’s just not clear if we’ll solve it in time. Help keep articles like this free Understanding America’s political sphere can be overwhelming. That’s where Vox comes in. We aim to give research-driven, smart, and accessible information to everyone who wants it. Reader gifts support this mission by helping to keep our work free — whether we’re adding nuanced context to unexpected events or explaining how our democracy got to this point. While we’re committed to keeping Vox free, our distinctive brand of explanatory journalism does take a lot of resources. Advertising alone isn’t enough to support it. Help keep work like this free for all by making a gift to Vox today. $95/year $120/year $250/year Other Yes, I'll give $120/year Yes, I'll give $120/year We accept credit card, Apple Pay, and Google Pay. You can also contribute via | Emerging Technologies |
Aviation aims to slash some of its substantial carbon emissions by electrifying aircraft, but the industry’s stringent weight restrictions make this difficult. Building electric motors that match the power-to-weight ratios of jet engines has proven especially challenging, so most efforts have been restricted to smaller aircraft. A new compact lightweight design for a megawatt-scale motor unveiled by researchers at MIT could open the door to electrifying much larger aircraft.
While the automotive sector is undergoing a transition from fossil fuels to battery power, doing the same in aviation is much harder. The energy density of modern batteries is still far too low to power aircraft for substantial distances, which is why companies like Eviation are focused on short intercity hops and a host of EVTOL companies are aiming to disrupt the daily commute.
Batteries may get most of the attention, but they’re not the only place where weight is a problem—electrifying the motors has also been a challenge. Electric motors create thrust by passing current through large amounts of copper wiring and steel to create magnetic fields that can turn a rotor. These materials are inherently heavy, says Zoltán Spakovszky, a professor of aeronautics at MIT, which makes it difficult to build electric motors with a high power-to-weight ratio, also known as specific power. That’s because making motors more powerful means adding a lot more metal.
As a result, the motors used in current electric aircraft are capable of producing only hundreds of kilowatts of power, which is too little to power larger aircraft. But in research presented at the AIAA AVIATION Forum, held between 12 and 16 June in San Diego, Spakovszky and his colleagues unveiled designs for an electric motor capable of generating 1 megawatt of power. They say the achievement will bring the electrification of regional jets within reach.
“The majority of CO2 is produced by twin and single-aisle aircraft which require large amounts of power and onboard energy, thus megawatt-class electrical machines are needed to power commercial airliners,” says Spakovszky. “Realizing such machines at 1 MW is a key stepping stone to larger machines and power levels.”
The team’s design features a circular drum—the rotor—with an interior surface lined with permanent magnets. Sitting inside the rotor is a stator—a cylindrical piece of steel with an outer surface covered in protruding “teeth.” These teeth are covered in densely coiled copper wires. Passing a current through these wires generates a strong magnetic field, which interacts with the permanent magnets on the rotor to spin the drum around and drive the motor.
The MIT team has yet to assemble their device, but they have tested all of the major components and demonstrated in simulation that it will be able to reach the expected power levels. When fully assembled, the motor will weigh 57.4 kilograms, which equates to a specific power of 17 kilowatts per kilogram, considerably more than the 13 kW/kg that previous research from NASA identified as necessary to power large electric aircraft.
The MIT megawatt motor [shown in cross section at top right, and full scale at bottom right] includes several key enabling technologies: a high-speed permanent magnet outer rotor, a low loss tooth-and-slot stator, an advanced heat exchanger, and integrated, high-performance power electronics.MIT
Getting there wasn’t simple, says Spakovszky. “There is no silver bullet when it comes to achieving the required paradigm shift in specific power,” he says. “Many things together make the design possible and the devil is in the details.”
Normally, rotors include a layer of heavy steel that helps concentrate the magnetic fields of the permanent magnets. The team managed to achieve the same effect by carefully arranging the magnets in different orientations, allowing them to replace the steel with a much lighter titanium drum.
The design also features an incredibly compact stator design that improves electrical efficiency and a high-speed power electronics system made from 30 custom-built circuit boards. The boards make it possible to alternate the currents in the stators at an incredibly high frequency and therefore significantly increase the speed at which the motor rotates.
One of the most critical elements is the way the team dealt with thermal management. Producing 1 MW of power generates roughly 50 kilowatts of heat, says Spakovszky. “Think of a car engine revving at full power and turning all the work into heat,” he says. “Or consider 500 incandescent 100-watt lightbulbs burning inside the space equivalent to a small beer keg.”
The MIT team’s design features a novel air-cooled heat exchanger made from aluminum alloy that sits inside the stator. The cylindrical structure features a honeycomb of small air channels, whose complex geometry means it has to be 3D printed. But the design enables the team to achieve cooling efficiency close to what you would get with a liquid system, says Spakovszky, while maintaining the required structural integrity.
“It’s great to see the progress being made by MIT,” says Kiruba Haran, a professor of electrical and computer engineering at the University of Illinois Urbana-Champaign. “They have a great team and are taking a good multidisciplinary approach. That’s the only way to do this.”
MIT isn’t the only institution developing a megawatt-scale electric motor. NASA has been championing efforts to build high-power electric motors through its Advanced Air Transport Technology programs since 2014, funding Haran’s University of Illinois group, as well as groups from Ohio State University and the University Of Wisconsin–Madison.
Both the Ohio and Illinois teams have tested working prototype megawatt-scale motors. Last year, engineering giant GE announced it had tested a megawatt-scale propulsion system in conditions that simulate real flight. Haran’s group is now in the process of trying to commercialize their motor through a spinout called Hinetics.
Megawatt-scale electric motors could have an impact on aviation relatively soon, says Haran, even if battery technology takes time to catch up. Fossil-fuel alternatives like hydrogen and ammonia are likely to rely on fuel cells that convert chemical energy to electrical energy, and will therefore require electric propulsion systems. And hybrid systems, in which electric motors are integrated with gas turbines to allow battery-powered operation for parts of the flight, could come even sooner, says Haran.
Electric motors will also be key to enabling more unusual and efficient aircraft configurations such as distributed propulsion, says Spakovszky. The idea would be to replace the large jet engines slung under the wings with many smaller electric motors arranged across the leading edge of the wings or even on the fuselage.
“I believe, that to achieve net-zero emissions, future aircraft will have to look different, and megawatt-class electrical machines are key enablers for unconventional aircraft configurations,” says Spakovszky.
- Maxwell, NASA’s e-Plane, Is Running Out of Runway ›
- With Ultralight Lithium-Sulfur Batteries, Electric Airplanes Could Finally Take Off ›
Edd Gent is a freelance science and technology writer based in Bengaluru, India. His writing focuses on emerging technologies across computing, engineering, energy and bioscience. He's on Twitter at @EddytheGent and email at edd dot gent at outlook dot com. His PGP fingerprint is ABB8 6BB3 3E69 C4A7 EC91 611B 5C12 193D 5DFC C01B. His public key is here. DM for Signal info. | Emerging Technologies |
Toyota and Honda ICE owners defecting for others' EVs, /PRNewswire/ -- Although U.S. electric vehicle registrations remain dominated by Tesla, the brand is showing the expected signs of shedding market share as more entrants arrive. Much of Tesla's share loss is to EVs available in a more accessible MSRP range – below $50,000, where Tesla does not yet truly compete.New EV entries nibbling away at Tesla EV share, according to S&P Global MobilityRegardless of brand or price point, early S&P Global Mobility data suggests consumers moving to electric vehicles in 2022 are largely doing so from Toyota and Honda – brands which have been unable to keep their internal combustion owners loyal until their own brands begin to participate more significantly in the EV transition.While both Japanese companies built a US legacy with phenomenal fuel economy and powertrain technologies – including electrification through hybrids, plug-in hybrids and fuel-cell electric vehicles – both have been caught flat-footed in the context of 2022. S&P Global Mobility conquest data for Tesla's Model 3 and Y, Ford Mustang Mach-E, Hyundai Ioniq5, and Chevrolet Bolt show strong captures of buyers from the two leading Japanese brands.Tesla's challengeSo far, most EVs continue to be acquired for higher MSRPs and by buyers with higher incomes than the demographic profile for total light vehicle registrations--in part because most EVs are Teslas.Of more than 525,000 EVs registered over the first nine months of 2022, nearly 340,000 were Teslas. The remaining volume is divided, very unevenly, among 46 other nameplates. However, the trends may change as the number of EV buyers becomes more robust.EV registrations shareTesla's position is changing as new, more affordable options arrive, offering equal or better technology and production build. Given that consumer choice and consumer interest in EVs are growing, Tesla's ability to retain a dominant market share will be challenged going forward.S&P Global Mobility predicts the number of battery-electric nameplates will grow from 48 at present to 159 by the end of 2025, at a pace faster than Tesla will be able to add factories. Tesla's CEO Elon Musk confirmed (again) during a recent earnings call that the company is working on a vehicle priced lower than the Model 3, though market launch timing is unclear.Tesla's model range is expected to grow to include Cybertruck in 2023 and eventually a Roadster, but largely the Tesla model lineup in 2025 will be the same models it offers today. (Tesla is also planning to deliver a commercial semi-truck by the end of 2022, but it would not be factored into light-vehicle registrations.)"Before you feel too badly for Tesla, however, remember that the brand will continue to see unit sales grow, even as share declines," said Stephanie Brinley, associate director, AutoIntelligence for S&P Global Mobility. "The EV market in 2022 is a Tesla market, and it will continue to be, so long as its competitors are bound by production capacity."Tesla has opened two new assembly plants in 2022 and is looking for the site of its next North American plant. Tesla today is the brand best equipped for taking advantage of the immediate surge in EV demand, though manufacturing investments from other automakers will erode this advantage sooner than later.The competitionThroughout 2022, EVs have gained market share and consumer attention. In an environment where vehicle sales are limited by inventory and availability, EVs have gained 2.4 points of market share year over year in registration data compiled through September – reaching 5.2% of all light vehicle registrations – according to S&P Global Mobility data.The nascent stage of market growth leaves others competing for volume at the lower end of the price spectrum. New EVs from Hyundai, Kia and Volkswagen have joined Ford's Mustang Mach-E, Chevrolet Bolt (EV and EUV) and Nissan Leaf in the mainstream brand space. Meanwhile, luxury EVs from Mercedes-Benz, BMW, Audi, Polestar, Lucid, and Rivian – as well as big-ticket items like the Ford F-150 Lightning, GMC Hummer, and Chevrolet Silverado EV – will plague Tesla at the high end of the market.With the Model Y and Model 3 combined taking 56% of EV registrations, the other 46 vehicles are competing for scraps until EVs cross the chasm into mainstream appeal. (A recent S&P Global Mobility analysis showed the Heartland states have yet to embrace electric vehicles.)"Evaluating EV market performance requires looking through a lower-volume lens than with traditional ICE products," Brinley said. "But growth prospects for EV products are strong, investment is massive and the regulatory environment in the US and globally suggests that these are the solution for the future."Production volumes today are restricted by factory capacity, the semiconductor shortage and other supply chain challenges, as well as consumer demand. But the issue of production capacity is being addressed, as automakers, battery manufacturers and suppliers pour billions into that side of the equation. Though there are many signals suggesting consumer demand is high and that more buyers may be willing to make the transition – and to do so faster than anticipated even a year ago.But consumer willingness to evolve to electrification remains the largest wildcard. Looking past Model Y and Model 3, no single model has achieved registrations above 30,000 units through the first three quarters of 2022. The second-best-selling EV brand in the US is Ford. However, Mach-E registrations of about 27,800 units are about 8% of the volume Tesla has captured, according to S&P Global Mobility data.Tesla ConquestsTesla has four of the top five EV models by registration; in the sixth through 10th positions are the Chevrolet Bolt and Bolt EUV, Hyundai Ioniq5, Kia EV6, Volkswagen ID.4 and Nissan Leaf. Through September, the Bolt has seen about 21,600 vehicles registered, Hyundai and Kia are in the 17,000-18,000-unit range, and VW approached 11,000 units. Including the tenth-place Leaf, no other EV has had registrations above 10,000 units over the first nine months of 2022.That said, there are caveats. Volkswagen's low volumes are affected by supply chain snarls and market allocations to more EV-friendly regions – issues Hyundai and Kia also face. However, VW's new ID.4 assembly line in Tennessee went live in October; the automaker said at the plant opening that it had 20,000 unfilled reservations and a plant capacity of 7,000 units per month.That should change the EV volume picture significantly. A look at the roughly 525,000 EVs registered over the first nine months of 2022 shows the EV market today remains in the hands of affluent buyers, who are spending more on their vehicles than ICE buyers.While logic dictates that further growth will require more EVs being offered in the $25,000-$40,000 price range, the willingness of buyers to spend more today reflects an aspirational nature to the choice.Tesla's EV-only strategy gives it a retention advantage – as few EV owners have returned to ICE powertrains. But as new EVs arrive, loyalty will be tested. Currently, the Model Y has a 60.5% -brand loyalty and had nearly 74% of buyers come from outside the brand (the conquest rate) – tops in the industry. Who is Tesla conquesting from? Toyota, Honda, BMW and Mercedes-Benz. Toyota and Honda are only beginning to get into the EV market, though have yet to enter the fray in earnest.Future EV marketThe race to marketHonda owners in particular are showing an interest in electric vehicles. Unfortunately for Honda, its first EV (a midsize SUV shared with GM) isn't expected until 2024, whereupon the second half of this decade sees a flurry of activity. That still presents the challenge of reconnecting with owners who have defected from the Honda brand.In its meteoric growth, Tesla has conquested Japanese icons: The top five Model Y conquests are the Lexus RX, Honda CR-V, Toyota RAV4, Honda Odyssey, and Honda Accord. Meanwhile, the top five Model 3 conquests are the Honda Civic, Honda Accord, Toyota Camry, Toyota RAV4 and Honda CR-V. So even though the overall market has ditched sedans for SUVs, there remain some who prefer a sedan in electrified form.But it's not just Tesla winning over consumers of the big two Japanese brands. Early data of the 27,800 registrations of the Ford Mustang Mach-E through September, shows similar conquest patterns: The top Mach-E conquest model has been the Toyota RAV4 (regardless of powertrain), followed by the Honda CR-V and Jeep Wrangler. The Mach-E is also experiencing registrations at a lower MSRP range – 43% of registrations had an MSRP below $50,000. For Ford, more than 63% of registrations from January through September 2022 were conquests from other brands.After the Mustang Mach-E, the next top EV is the Chevrolet Bolt (EUV and EV). The Bolt is likely to continue to gain ground, as it spent most of the fall and winter of 2021-22 in production hiatus as Chevrolet resolved a warranty issue, and then saw a price reduction soon after production re-started. With production back online, a more attractive price, and GM's plans to increase Bolt capacity in 2023, the vehicle has potential to keep growing. The Bolt also sees RAV4, CR-V and Prius as its top three conquest models.And while the Hyundai Ioniq5 is limited in its geographic distribution (and faces similar capacity and global demand issues as VW ID.4), S&P Global Mobility conquest data show most Ioniq5 buyers previously owned a Toyota RAV4, Honda CR-V, Mazda CX-5 or Subaru Forester. Of the top 10 Ioniq5 conquests, only two are from the traditional Detroit Three brands, with the Chevrolet Bolt at seventh and Jeep Wrangler at tenth.Of course, the high conquest rates from Toyota and Honda come from the historical sales success of those models overall. The RAV4 is the best-selling non-pickup truck in the US, which means there are more RAV4 buyers to conquest. The Camry, Accord, and CR-V follow close behind.Along this path, however, these EVs are seeing little conquest of the F-Series or Chevrolet Silverado pickup truck. In the S&P Global Mobility garage mate data, however, we see a strong F-Series representation. It shows up as a top garage mate for the Mustang Mach-E; the Bolt does see the Silverado as its top garage mate, the F-Series is next. F-Series is also the top garage mate for the Ioniq5, EV6 and ID.4."Though today's EV buyers are not giving up their pickups in favor of going electric, it also suggests that there is a pool of EV owners, who are also full-size pickup owners, being created," Brinley said. "We know that EV owners tend to be loyal to EV propulsion. This intersection can provide support for EV pickup adoption."An existing pool of current EV owners who also have pickups can be a benefit for the efforts in the full-size EV pick-up space, particularly for the Ford F-150 Lightning, Chevrolet Silverado EV and GMC Sierra EV, each of which is aimed at a traditional pick-up use case and owner. The Rivian S1T, GMC Hummer EV and Tesla Cybertruck each occupy a lifestyle pickup space, geared toward innovator buyers and statement-makers, and could be more likely to conquest buyers to the pickup segment as well as to an EV purchase. But for now, electric vehicles remain the provenance of sedans and small SUVs.NOTE: All loyalty data is based on the S&P Global Mobility household loyalty methodology, which may indicate an addition to the garage and not necessarily a disposal. Please contact [email protected] to find out more information around our insights to help you make data-driven decisions with conviction.Editor's Note: This report is from S&P Global Mobility, and not S&P Global Ratings, which is a separately managed division of S&P Global. About S&P Global Mobility (www.spglobal.com/mobility)At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow.S&P Global Mobility is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/mobility.For media information, contact:Michelle Culver, Executive Director Public Relations, S&P Global [email protected](PRNewsfoto/S&P Global)CisionView original content to download multimedia:https://www.prnewswire.com/news-releases/new-ev-entries-nibbling-away-at-tesla-ev-share-according-to-sp-global-mobility-301689076.htmlSOURCE S&P Global Mobility | Emerging Technologies |
Technological upheavals continue to disrupt the world. If these newer shifts gain momentum and intensify, expect to see more strategic and revolutionary developments in 2023. According to MediaPeanut, the tech industry has a 5-6% growth pattern yearly. Catching emerging technology waves sooner helps you leverage them in the nascent stages to gain a competitive advantage.
While it remains challenging to forecast how trends will play out, some sunrise technologies seem to be tracking well. These are gaining traction, showing early promise, and could possibly help enterprises embark on a journey of innovation and growth. Here are our picks of the emerging tech for 2023, alongside a quick encapsulation of the potential they hold. Let’s get to unraveling them. Which 2022 Technology Worked Out the Most
Before getting into the trends of 2023, let’s look at the technologies that did well in 2022. The cryptocurrency witnessed “Crypto winter” and the hype around 3d-printers has come down a lot, but some technologies fared well and these are:
Metaverse:
More brands considered having a presence and creating better experiences in meta.
Remote Technologies:
Remote tools provided Enterprises with improved productivity, connectivity, and collaboration
Hyper-automation:
Businesses doubled down on their automation to free their resources from error-ridden, laborious, slow, and wasteful processes.
Data and Analytics:
Organizations continue to leverage big data, granular analytics, and visual reporting to unlock insights and make more informed-actionable decisions.
Artificial Intelligence (AI) and Machine Learning (ML):
AI and ML cemented themselves further and continued to support almost all areas of a business.
Top 15 Emerging Technologies in 2023
1. Digital Immune System:
The Digital Immune System combines various practices and technologies to make critical applications more resilient to bugs. Thereby making it easier for them to recover, sustain their services, manage risks, and maintain business continuity. According to Gartner, businesses that invest in digital immunity will increase customer satisfaction by decreasing downtime by 80%.
The fundamental concepts of digital immunity include Observability, AI-augmented testing, Chaos engineering, auto-remediation, Site Reliability Engineering (SRE), and Software supply chain security. These combine to ensure systems don’t crash, provide uninterrupted services, and issues get corrected fast. This helps to return the system to its default state for a superior UX.
2. Applied Observability:
Applied Observability is the ability to penetrate deep into modern distributed systems for faster and automated problem detection and resolution. Through applied observability, you monitor the internal state of a complex system by gathering, comparing, and examining a steady stream of data and catching issues to remedy them at the earliest.
Leverage observability to screen and troubleshoot applications. Observability gathers telemetry data such as logs, metrics, traces, & dependencies. Then, it correlates them in real-time to provide resources with complete and contextual information about the events they need to address. You could even use AIOps, ML, and automation capabilities to resolve issues without manual intervention.
3. AI TriSM:
AI TriSM stands for AI Trust Risk & Security Management. It ensures that AI technology does what it is intended to do in a trustworthy, fair, reliable, effective, and secure manner. Additionally, it helps to protect the exchanged data, manage governance, safeguard privacy, and detect anomalies to protect the critical functions of your enterprise.
The AI model should be trusted to exist, interact, and perform as designed. Any deviation could have drastic consequences – especially in Enterprises running multiple processes, managing numerous users, handling steady transactions, and having a heterogeneous data spread. AI TriSM consists of solutions and techniques to manage risks, send alerts, and act.
4. Industry cloud platforms:
Using Industry cloud platforms, companies can create more agility in how they manage their workloads. They can also accelerate changes in business processes, data interrogation, and compliance procedures. They combine platform, software, and Infrastructure as a service to fine-tune adaptability, accelerate time to value, and capture the needs of vertical industry segments.
According to Gartner, approximately 40% of respondents have already started embracing industry cloud platforms. By 2027, enterprises will use this to accelerate more than 50% of their critical business initiatives. Industry cloud platforms use business-specific capabilities and integrated data fabric to quickly adapt applications to market disruptions.
5. Platform Engineering:
Platform engineering improves the developer experience and increases productivity through automation and self-service capabilities to speed up the delivery of apps and facilitate better collaboration between operators and software developers. It intends to modernize enterprise software delivery through reusable tools and capabilities.
Getting the software out at the earliest requires a frictionless development cycle with minimal overhead, self-service features, reduced cognitive load, more consistency, higher efficiency, and seamless collaboration. Ideally, platform engineering looks to rework and build a platform in sync with the needs of its end users using standardized components and automated processes.
6. Wireless-value realization:
The next-gen wireless will not only improve connectivity but also help optimize processes for higher reliability, lower costs, fewer risks, and increased productivity. Different wireless technologies will work cohesively on a single infrastructure and utilize capabilities to facilitate the shift toward digital transformation more seamlessly.
A more cost-efficient, unified, secure, reliable, and scalable technical core of the future wireless will help to reduce capital investment. The ongoing Internet of Things (IoT) wave will be able to better leverage the new wireless to pull data from the environment. Expect to witness applications in location tracking, energy harvesting, radar sensing, satellite tech, and other areas.
Interested Read: Top 15 Emerging Databases to Use in 2023 and Beyond
7. Superapps:
An all-in-one and versatile application that can replace the numerous apps in your personal life or business ecosystem. According to Gartner, more than 50% of the global population will be daily active users of superapps. These Superapps could even have mini-apps that will act as add-ons and provide benefits above and beyond the existing capabilities.
This all-encompassing platform will address multiple aspects, acting as a unified interface that caters to several use cases. Compared to single-purpose alternatives, these apps can provide value to all walks of life. WeChat is a Superapp made up of messaging, eCommerce, payments, social media, and more within its umbrella, making it indispensable for many civilians.
8. Metaverse and Web3:
Metaverse and Web 3.0 are all set to provide an entirely different dimension to interactions and everyday experiences. AR, AI, VR, ML, IoT, and Blockchain will come together to create a connected, secure, and immersive virtual world where avatars will begin to significantly impact our personal and commercial lives. Metaverse opens a “Second world” and provides more business opportunities to connect with consumers.
Commerce and community building are all set to flourish. Metaverse will couple with our offline behavior and interactions and transmit information to make these digital environments more personalized and impactful. Businesses will leverage metaverse to nurture relationships, elevate brand identity, extend marketing, and increase sales.
9. Quantum Computing:
Quantum computing is another emerging technology that will catapult business operations and industry value chains to another level. According to McKinsey, Quantum computing has the potential to capture nearly $700 billion in value as early as 2035. Quantum computing could accelerate technologies, fast-track drug research, and crack encryption more effortlessly.
Quantum computing operates on the quantum state of subatomic particles, where each particle represents information as quantum bits (qubits). Qubits, through entanglements, can interact with other qubits. Furthermore, it can hold multiple values simultaneously (Superposition). Quantum computing is way faster than classical computation, only needing a fraction of the memory to perform tasks.
10. Trust with Blockchain:
Blockchain as a trust-building technology will gain traction in 2023. Encryption, privacy, community control, immutability, traceability, and decentralization are pillars of Blockchain that ensure it stays trustworthy. The ability to validate – increases security, lowers costs, elevates speed, and builds confidence. Blockchain eliminates intermediaries and creates a single secure path toward the end goal.
Blockchain acts as a distributed-ledger system for recording, managing, and transmitting the information. It enables transparent access and establishes relationships between the data in the chain quickly – making tampering impossible. Blockchain can implement frictionless processes in multiple environments, from Smart Contracts to Supply Chains; expect to see immense value across the board.
11. Sustainable Technology:
What if technology could provide a means to track metrics to minimize carbon footprint, support environmental laws, and monitor social governance? As we near 2023, expect to see more innovative and impactful digital solutions to monitor and steer an organization’s eco-friendly objectives. These cutting-edge solutions will help them be more nature-centric and meet Environmental, Social, and Governance (ESG) outcomes.
Sustainable technology will also help optimize costs, improve energy performance, and improve your asset utilization. Technology will catalyze the objectives of companies looking to go greener. They will be able to control direct emissions, reduce waste, minimize indirect emissions, and more through AI, Cloud, IoT-enabled environmental sensors, analytics, etc., to control energy resources and waste management.
12. Wi-Fi 6 and 7:
802.11ax 9.6 Gbps Wi-Fi 6 supports 2.4 GHz and 5 GHz frequency bands, runs on a WPA3 security protocol, and supports up to 160 MHz Channel Bandwidth. Meanwhile, 802.11E 30 Gbps Wi-Fi 7 supports 2.4 GHz, 5 GHz, and 6 GHz frequency bands. It supports up to 320 MHz Channel Bandwidth and relies on a WPA3 security protocol.
Both Wi-Fi 6 and Wi-Fi 7 promise a significant leap in wireless networks with faster speeds, added security, better transmission rate, lower latency, and better traffic prioritization. Expect intensive graphic gaming, VR, the growing metaverse, 4K and 8K video streaming, the Internet of Things (IoT), and even remote office connectivity to leverage the two to provide a frictionless experience.
13. Drone Technology:
Drone Technology is positioned for takeoff in 2023. The supply chain is already pretty disrupted, and any unpredictability affects logistics and delays deliveries. This leads to unfulfilled orders and ruined business ties. What if businesses could shift last-mile deliveries to unmanned aerial drones and let them reach hard-to-reach places?
Furthermore, expect to see more usage in construction, agriculture, security, media, and military applications as well. Companies pursuing drone technology expect better cost savings by slashing redundant labor costs and fuel expenses, reducing risk, higher revenue growth, and better decision-making from analyzing data collected by these instruments.
14. DevSecOps:
Security is no longer an afterthought in SDLC. Leaving security till the penultimate stages can throw up errors, which may require a complete rewrite, proving inefficient, costly, and delaying your time to market. In DevSecOps, coding standards for developers come with security at the core to ensure the code comes out pre-vetted with security in mind at every phase.
The Shift-left and Continuous Collaboration DevSecOps methodology uses a no-touch automation approach in the CI-CD pipeline to complete security tests through shared responsibility right from the get-go. Making security an integral part of the process reduces the feedback loop and results in faster releases. DevSecOps is the next thing to look forward to when it comes to project development.
15. Scrumban Methodology:
Scrumban is a project management methodology that combines Scrum and Kanban. The framework combines the predictable routines, agility, and structure of Scrum with the flexibility and visualization of Kanban. This helps to make the project workflow more agile, versatile, efficient, and productive, besides helping the team get their strategic tasks right and improve their processes.
Ensure enough analysis is in place, iterate at regular intervals, prioritize on demand, maintain continuous workflow, focus on cycle time, eliminate batch-processing, use flow diagrams, and set up daily meetings. Scrumban is a valuable solution for maintaining ongoing projects, also when teams are looking to migrate from Scrum or when you are looking for more flexibility.
Wrapping Up
Technological pursuits have never stayed dormant; you are never too far from the next ground-breaking phenomenon. What’s bleeding edge now may soon be primitive. To transform into a true industry leader, companies need to invest in the radical changes on the horizon. Shunning or delaying this could see your business decay and possibly even sink.
Embracing newer technologies is never easy. Besides the monetary investment, you need a business case, a granular roadmap, a change management plan, expected value realizations, and other measures. Unfortunately, going about this can get convoluted. The visionary and seasoned experts at ISHIR have your back. Reach out to us, and we’ll integrate the next-big tech for your business. | Emerging Technologies |
New Digital India Act To Regulate AI, Emerging Tech Based On User Harm: Chandrasekhar
The Minister of State for IT and Electronics said India has its own views on 'guardrails' that are needed in the digital space.
The upcoming Digital India framework will have a chapter devoted to emerging technologies, particularly artificial intelligence, and how to regulate them through the 'prism of user harm', Union minister Rajeev Chandrasekhar said asserting that India will do 'what is right' to protect its digital nagriks and keep internet safe and trusted for its users.
The Minister of State for IT and Electronics—who is leading a massive exercise involving wide consultation with stakeholders to frame the draft Digital India Act that will replace the two-decade-old IT Act—said India has its own views on 'guardrails' that are needed in the digital space.
His comments assume significance as ChatGPT creator OpenAI, led by CEO Sam Altman, has acknowledged the need to regulate AI technology, and proposed a new international authority for regulating artificial intelligence.
Asked about Altman's recent views, Chandrasekhar said, "Sam Altman is a smart man and has his own ideas of how AI should be regulated...we certainly think we have some smart brains in India as well and we have our own views on how AI should have guardrails...
"that consultation has already started and in Digital India Act there is a whole chapter that is going to be devoted to emerging technologies which is not AI only, it is AI in particular and multiple other technologies, on how we will regulate them through the prism of user harm."
"If there is eventually a 'United Nations of AI' as Sam Altman wants, more power to it but that does not stop us from doing what is right to protect our digital nagriks and keeping internet safe and trusted," the minister said on the sidelines of CII Startup Summit.
A recent blogpost by Sam Altman, Greg Brockman, and Ilya Sutskever has said in terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past.
"We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination," according to the blog.
It emphasised the need for a new international body on the lines of the International Atomic Energy Agency for superintelligence efforts.
"...any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.," the blog mentioned. | Emerging Technologies |
China leads the world in tech research, could win the future, says think tank
US comes in second, rest of the world is a distant third in fields from biotech to batteries
Think tank the Australian Strategic Policy Institute (ASPI) has published an update to its Critical Technology Tracker, and asserted that China has taken the lead in research on 37 of 44 critical or emerging technologies.
"Our research reveals that China has built the foundation to position itself as the world's leading science and technology superpower, by establishing a sometimes stunning lead in high-impact research across the majority of critical and emerging technology domains," declared ASPI.
The tracker covers fields including defence, space, robotics, energy, the environment, biotechnology, artificial intelligence (AI), advanced materials, and quantum technology.
To build the tracker, ASPI collected and analyzed research papers published between 2018 and 2022 in its selected technology areas to determine the most cited ten percent of studies. H-index – a performance metric used for analyzing the impact of scholarly output – was also considered, as was the number of top-ranked research institutions in a country.
According to the think tank, China often produced more than five times as much high-impact research as its closest competitor. Within the 37 areas China led, it is close to being able to develop a monopoly in eight: nanoscale materials and manufacturing, coatings, advanced radiofrequency communication (including 5G and 6G), hydrogen and ammonia for power, supercapacitors, electric batteries, synthetic biology, and photonic sensors.
ASPI found the US is the second-most advanced source of research in the majority of the technologies, and took first in each of the areas China did not. These included high performance computing, advanced integrated circuit design and fabrication, natural language processing, quantum computing, vaccines and medical countermeasures, small satellites, and space launch systems.
China and the USA were well ahead of the next tier of countries – led by India and the UK along with South Korea, Germany, Australia, Italy and, less often, Japan.
ASPI also ranked institutions and universities. The Chinese Academy of Sciences was a particularly high performer, ranking in the top five in 27 of the 44 technologies. The Netherlands' Delft University of Technology did well in a number of quantum technologies. And members of the US's Big Tech grouping – namely Google, Microsoft, Facebook, Hewlett Packard and IBM – performed well in AI.
- China's efforts to influence standards are mostly fake – and flopping
- White House ban on US chip cash going into China ruffles South Koreans
- Russia, Iran, Saudi Arabia are top sources of online misinformation
- US think tank says China would probably lose if it tries to invade Taiwan
ASPI did consider the question of whether expertise in high-impact research translates to manufacturing output.
"This is an important caveat that readers should keep in mind, and it's one we point out in multiple places throughout the report," explained the authors, who noted that manufacturing capability lags research breakthroughs.
But when the Chinese Communist Party prioritizes and invests in an area, it can sort that out, ASPI opined. This is why, among its 23 policy recommendations, is a suggestion that nations consider sovereign wealth funds to provide venture capital and research funding, among other national strategies to improve their own capabilities.
"China's lead is the product of deliberate design and long-term policy planning, as repeatedly outlined by Xi Jinping and his predecessors," pointed out the researchers.
The think tank is clear about why China's lead is a problem. In the short term, it's not ideal to have one or two countries dominate new and emerging industries – if for no other reason than to ensure resilient supply chains.
In the long term, it could lead to a shift – not just of technology development, but also global power and influence – to China, which ASPI calls "an authoritarian state where the development, testing and application of emerging, critical and military technologies isn't open and transparent and where it can't be scrutinized by independent civil society and media."
The think tank asserts that while China is in front in so many areas, other countries should "take stock of their combined and complementary strengths."
"When added up, they have the aggregate lead in many technology areas," noted ASPI.
Finding a way to make sure the other countries work together to counter China's lead is another matter entirely. ® | Emerging Technologies |
The Biden administration unveiled its long-awaited National Security Strategy on Wednesday, singling out competition among major world powers and shared threats, such as climate change, as the two biggest challenges facing the United States.
As the world enters what the document describes as a “decisive decade,” it outlines three priority areas: investing in the underlying sources of U.S. strength, working with allies and partners to address mutual challenges, and setting the rules of the road on trade, economics, and emerging technologies.
It sketches in broad terms a road map to navigate between the near-term threat of a revanchist Russia and the longer-term threat of a rising China. The Biden administration unveiled its long-awaited National Security Strategy on Wednesday, singling out competition among major world powers and shared threats, such as climate change, as the two biggest challenges facing the United States.
As the world enters what the document describes as a “decisive decade,” it outlines three priority areas: investing in the underlying sources of U.S. strength, working with allies and partners to address mutual challenges, and setting the rules of the road on trade, economics, and emerging technologies.
It sketches in broad terms a road map to navigate between the near-term threat of a revanchist Russia and the longer-term threat of a rising China.
“Russia poses an immediate threat to the free and open international system, recklessly flouting the basic laws of the international order today, as its brutal war of aggression against Ukraine has shown,” the document states. “[China], by contrast, is the only competitor with both the intent to reshape the international order and, increasingly, the economic, diplomatic, military, and technological power to advance that objective.” The strategy also acknowledges that Russia’s status compared to other Asian powers, such as China and India, has been “profoundly diminished” by Russian President Vladimir Putin’s decision to go to war.
The 48-page document offers the most in-depth look at the Biden administration’s worldview to date. Building on the interim strategy released shortly after U.S. President Joe Biden’s inauguration as well as adding to policies already rolled out by the administration, its central themes will contain few surprises for close observers of his foreign policy.
“I do think this is a very clear strategy,” said Emma Ashford, a senior fellow at the Stimson Center. “It’s making a very clear case for what it is the administration wants. And I think that is an American foreign policy not for Americans but for the world.”
The National Security Strategy was initially slated to be released in the spring but was delayed by Russia’s invasion of Ukraine in February. Although the interim version released in March of last year contained no mention of Ukraine, the document released on Wednesday makes 32 references to the embattled nation.
“I don’t believe that the war in Ukraine has fundamentally altered Joe Biden’s approach to foreign policy,” U.S. National Security Advisor Jake Sullivan said on a call with reporters. “But I do believe that it presents in living color our approach and the emphasis on allies, the importance of strengthening the hand of the democratic world.”
Revitalizing international alliances, such as NATO, which was shaken by the Trump administration’s isolationist instincts, as well as strengthening partnerships in the Indo-Pacific have been central to the Biden administration’s approach to competing with China and addressing shared transnational threats, such as climate change, pandemics, terrorism, and energy shortages. At the same time, Sullivan said Washington is willing to “cooperate with any country, including our geopolitical rivals, that is willing to work constructively on shared challenges.”
Some analysts see an inherent tension in the goals of the strategy, a critique that has been leveled at National Security Strategies across administrations.
“The document still looks like it was written by two different sides of the Democratic Party,” said Gabriel Scheinmann, executive director of the nonpartisan Alexander Hamilton Society. “There are some internal inconsistencies in the way this was written, which I suspect was an effort to assuage different constituencies.”
The administration’s dual efforts to champion democracy around the world while addressing the realpolitik of pressing global challenges has, at times, come under strain, which has been underscored by the United States’ turbulent relationship with Saudi Arabia.
In June, Biden visited the Persian Gulf state as he sought the oil-rich nation’s help in tamping down global energy costs, which were sent soaring following Russia’s invasion of Ukraine, despite having previously vowed to render the kingdom a “pariah” over the murder of Washington Post journalist Jamal Khashoggi. On Tuesday, White House officials said the administration was reevaluating its relationship with the country after the OPEC+ grouping of oil-producing nations announced the biggest cut in production since the beginning of the pandemic, threatening a rise in energy prices and undermining Western efforts to target the Kremlin’s oil revenue.
Speaking to reporters, Sullivan acknowledged these challenges. “There are tensions between trying to rally cooperation to solve these shared challenges and by trying to position ourselves effectively to prevail in strategic competition,” he said.
On the campaign trail, Biden spoke at length about his vision for creating a foreign policy for the middle class, and the concept features highly in the new strategy, which sees a prosperous and resilient America as the key to projecting power and influence abroad. “We have broken down the dividing line between foreign policy and domestic policy,” the strategy states.
The contours of this approach have already emerged in legislative efforts, such as the $1 trillion infrastructure bill; the CHIPS and Science Act, which authorized $280 billion for research and development of advanced technologies and the semiconductor industry; and the Inflation Reduction Act, which seeks to reduce carbon emissions by 40 percent by 2030.
The document was also notable for what it didn’t say. Afghanistan, the most ignominious chapter of Biden’s foreign-policy record so far, is mentioned four times. “The fact that the president lost a war and it merits a sentence … is sort of shocking,” Scheinmann said. The section on the Middle East, once at the forefront of U.S. national security priorities, was also markedly pared back compared to previous administrations.
But in other areas of statecraft, trade, economics, and emerging technologies, the strategy underscores the importance of the United States continuing to set the rules of the road—and enforce them.
But here, Ashford pointed to a “real tension” between the strategy’s emphasis on the liberal international order and the importance of international institutions while, at the same time, asserting the need for U.S. leadership in certain areas. | Emerging Technologies |
Generative AI is perched on the peak of inflated expectations in Gartner’s 2023 hype cycle for emerging technologies released Wednesday.
With most of the hype now behind artificial intelligence, Gartner is predicting the technology will deliver transformational benefits in two to five years.
“The popularity of many new AI techniques will have a profound impact on business and society,” Gartner Vice President Analyst Arun Chandrasekaran said in a statement.
“The massive pretraining and scale of AI foundation models, viral adoption of conversational agents, and the proliferation of generative AI applications are heralding a new wave of workforce productivity and machine creativity,” he added.
Gartner’s Hype Cycle for Emerging Technologies report is a distillation of more than 2,000 technologies and applied frameworks that the firm profiles annually into a set of “must-know” emerging technologies that have the potential to deliver transformational benefits in the next two to 10 years.
“While all eyes are on AI right now, CIOs and CTOs must also turn their attention to other emerging technologies with transformational potential,” Gartner Vice President Analyst Melissa Davis explained in a statement.
“This includes technologies,” she continued, “that are enhancing developer experience, driving innovation through the pervasive cloud, and delivering human-centric security and privacy.”
Emerging AI Tech
Although generative AI is attracting a lot of attention at present, Gartner’s report noted that some emerging AI techniques offer immense potential for enhancing the experiences of digital customers, allowing them to make better business decisions and build sustainable competitive differentiation.
Emerging AI technologies cited in the report include AI simulation, causal AI, federated machine learning, graph data science, neuro-symbolic AI, and reinforcement learning.
“There are many forms of AI beyond generative AI,” noted Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif.
“Each of these forms of AI has the potential to impact how we live, work, and do business in different ways,” he told TechNewsWorld. “For example, machine learning can help businesses make better decisions by analyzing large amounts of data and identifying patterns that humans might miss.”
“The popularity of new AI techniques will indeed have a profound impact — and already has — on business and society,” added Luciano Allegro, co-founder and CMO of BforeAI, a threat intelligence company, in Montpellier, France.
One domain where that’s evident is cybersecurity. “We’ve already seen improvements in cyberattacks, such as impersonation and phishing using AI-driven techniques to do the heavy lifting,” Allegro told TechNewsWorld. “Those techniques may include building faster, better emails and websites in multiple languages that look and behave exactly like the original, trusted source.”
Boosting Developer Experience
Gartner also called out in its report technologies that enhance a software developer’s experience. DevX refers to all aspects of interactions between developers and the tools, platforms, processes, and people they work with to develop and deliver software products and services, the report explained.
It noted that enhancing DevX is critical to the success of most enterprises’ digital initiatives. “That’s absolutely true and has been for a long time,” said Larry Maccherone, a DevSecOps transformation evangelist at Contrast Security, a maker of self-protecting software solutions in Los Altos, Calif.
“It’s a way to attract and retain top talent,” he told TechNewsWorld.
Key technologies enhancing DevX cited by Gartner include AI-augmented software engineering, API-centric SaaS, GitOps, internal developer portals, open-source program office, and value stream management platforms.
“Applying generative AI to developer tools is massively speeding up code creation,” Rob Enderle, president and principal analyst with the Enderle Group, an advisory services firm in Bend, Ore., told TechNewsWorld.
Maccherone explained improvements for software developers usually emerge in one of two forms: they either lower the cost of coordination, so more people can work on a project without their productivity being swamped by communications costs, or they introduce a new cognitive prosthetic, which allows a single mind to be more efficient.
“Right now, the cognitive prosthetic improvement is AI,” he said. “I’ve used AI for about a year and a half now, and I’m two to three times more productive because of AI.”
Cloud vs. On-Prem DevX Challenge
Andrew Moloney, chief strategy officer for Softiron, a designer, manufacturer, and seller of data infrastructure products in London, maintained that the cloud-first/cloud-native model is where most larger organizations are at.
“What’s key though — and this applies to the challenge of delivering on DevX as well — is don’t equate that merely to deploying in public clouds,” he told TechNewsWorld.
“It’s well understood that the majority of organizations today have very significant workloads that just don’t, and may never, work in the public cloud and need to live on-prem,” he continued.
“What will be needed is investment to recreate that developer experience in private clouds — delivering the API-first, cloud-native, consumption experience that Gartner identifies,” he said.
“In fact,” Moloney added, “in the time horizon of this hype cycle, those that don’t will, in my opinion, struggle to find any engineers left willing to work any other way.”
Pervasive Cloud
The report also predicted that over the next 10 years, cloud computing will evolve from a technology innovation platform to become pervasive and an essential driver of business innovation.
“I would argue that it has already evolved into a business innovation platform because much of IT moved their business server operations to the cloud years ago, and that is now where much of the development for business apps and innovation is occurring,” Enderle contended.
Gartner explained that to enable the pervasive adoption of cloud computing the technology will become more distributed and focus on vertical industries. It added that maximizing value from cloud investments will require automated operational scaling, access to cloud-native platform tools, and adequate governance.
Technologies identified by Gartner as key to enabling the pervasive cloud include augmented FinOps, cloud development environments, cloud sustainability, cloud-native, cloud-out to edge, industry cloud platforms, and WebAssembly (Wasm).
Human-Centric Security and Privacy
Gartner also predicted that organizations would look to strengthen their resiliency through technologies that allow them to weave a security and privacy fabric into their digital design through human-centric security and privacy programs.
“It is an interesting concept that goes back to the beginning of cybersecurity in the 1950s when virtually all of the security problems were human-sourced,” Enderle said.
“Companies begin to rely too much on technology and not enough on human training, drills, and penetration testing to assure human compliance with security protocols,” he added.
Gartner noted that numerous emerging technologies are enabling enterprises to create a culture of mutual trust and awareness of shared risks in decision-making between many teams.
Among the technologies supporting the expansion of human-centric security and privacy cited by Gartner were AI TRiSM, cybersecurity mesh architecture, generative cybersecurity AI, homomorphic encryption, and postquantum cryptography.
“In cybersecurity, we need to move beyond ‘awareness,'” observed Karen Walsh, CEO of Allegro Solutions, a cybersecurity consulting company in West Hartford, Conn.
“Awareness means that people have heard of a problem,” she told TechNewsWorld. “We need to move toward education aimed at changing behaviors. Gartner identifies one of the key technologies as AI trust, risk, and security management (TRiSM), which will be critical for all companies.”
For organizations eager to adopt emerging technologies, Gartner’s Davis had a warning. “As the technologies in this Hype Cycle are still at an early stage, there is significant uncertainty about how they will evolve,” she noted. “Such embryonic technologies present greater risks for deployment, but potentially greater benefits for early adopters.” | Emerging Technologies |
Article begins
Ask not what data science can do for anthropology, but what anthropology can do for data science. —Anders Kristian Munk, Why the World Needs Anthropologists Symposium 2022
In the last decade, emerging technologies, such as AI, immersive realities, and new and more addictive social networks, have permeated almost every aspect of our lives. These innovations are influencing how we form identities and belief systems. Social media influences the rise of subcultures on TikTok, the communications of extremist communities on Telegram, and the rapid spread of conspiracy theories that bounce around various online echo chambers.
People with shared values or experiences can connect and form online cultures at unprecedented scales and speeds. But these new cultures are evolving and shifting faster than our current ability to understand them.
To keep up with the depth and speed of online transformations, digital anthropologists are teaming up with data scientists to develop interdisciplinary methods and tools to bring the deep cultural context of anthropology to scales available only through data science—producing a surge in innovative methodologies for more effectively decoding online cultures in real time.
Tracking far-right extremism in Brazil
On Sunday, January 8, 2023, a group of Jair Bolsonaro supporters stormed prominent government buildings in Brasilia in a violent protest against the recently and democratically elected president of Brazil, Luisz Inácio Lula da Silva. Many Brazilians wondered how such an organized collective attack could have taken so many by surprise. But one person who was not surprised is Letícia Cesarino, digital anthropologist at the Federal University of Santa Catarina (UFSC). She has spent the last year working with a team of data scientists, researchers, and students to track threats around election fraud by blending digital anthropology and data science to understand far-right groups on Telegram.
A few years ago, Cesarino was conducting “live ethnographies” on WhatsApp to understand the digitalization of electoral campaigns and voter behavior, and discovered she was missing a systems-level perspective on how various groups were coming together and operating: “I realized I needed to see how algorithms could add to the conventional ethnographic outlook by showing an ecosystem view from the outside to identify systemic patterns across users, influencers, and algorithms.”
Cesarino needed the tools of data science to see the influencing structures of complex online groups. This ambition and her work with an interdisciplinary team led to the development of an innovative research dashboard that tracks far-right groups on Telegram in real time. The platform was designed by data scientists to collect live data from Telegram, but its insights are driven by search queries designed by anthropologists. Typically, data-driven dashboards use basic search queries that lack cultural context and meaning and so become stale quickly. A search lexicon might be relevant one day and irrelevant the next due to the changing contextual lexicon (hashtags, words, topics) far-right groups use. And when a search query lacks the right lexicon, communities remain invisible in the data results. To avoid this, Cesarino and her team run daily ethnographic immersions into individual far-right ecosystems so they can ensure the lexicon in their search queries remains culturally accurate and in step with online change. As Cesarino put it, “We are trying to create real-time continuous learning between the anthropologist and the search algorithm.”
With this powerful new tool the team may well have been able to predict the January 8 attack; unfortunately, they were all away on holiday and not updating their search queries with the transforming contextual lexicons on Telegram. But Cesarino’s phone is ringing off the hook as more and more organizations, including sectors of the judiciary and the media, realize how valuable this interdisciplinary work is to making sense of social change.
Cesarino sits on the edge of something exciting and knows this is only the beginning of a much larger need to bring digital anthropology and data science together to understand and make sense of changing online ecosystems at real-time speeds and scales.
Exploring machine learning in the United States
Five thousand miles north in New York, digital anthropologist, business anthropologist, and podcaster Matt Artz has been thinking about how the entire body of anthropological knowledge, which exists almost entirely in academic papers locked behind paywalls, is absent from the data training sets used to teach AI tools. Data training sets essentially set the boundaries of what an algorithm can understand about the world. And while game-changing AI innovations like ChatGPT seem eerily human, it’s quite literally missing “anthropological intelligence” garnered from decades of research.
This realization has led him to devise a proposal to build the first anthropological knowledge graph to “translate” and train algorithms on the corpus of anthropological research.
Artz believes that if anthropologists could find a way to translate their research into knowledge graphs, it would be a significant step towards building smarter and more culturally aware AI. This more anthropologically intelligent AI could one day support the development of better ethnographic technologies for anthropologists: “By building an anthropological knowledge graph of all our research, we could lay the groundwork for better digital tools to assist anthropologists—and others—in the future.”
As the digital world continues to evolve and complexify, Artz believes it’s more important than ever that we incorporate anthropological knowledge into the backbone of artificial intelligence, to ensure these innovations advance ethically and empathetically.
Empowering survivors of online hate in India
A few years ago, Hameeda Syed, a Muslim woman working as a journalist in India, coauthored a report on a new government development project with a male colleague. She was quickly targeted on Twitter with floods of hateful online comments about her identity, religion, and background, while her male colleague was spared such vitriolic attention.
Today, online hate takes various forms but is often identified as the sharing of hateful and prejudiced content that encourages or promotes violence against a person or group. While this kind of content has existed for as long as there were people to create and spread it, online hate and discrimination has reached unprecedented levels. As UN Secretary General António Guterres states, “Social media provides a global megaphone for hate.” And even though online discrimination and violence affects all kinds of people and communities around the world, it has had disproportionate effects on women and the LGBTQI+ community. More than a third of women globally experienced abuse online in the year between May 2019 and May 2020, according to the Economist Intelligence Unit. Additionally, the GLAAD Social Media Safety Index released in 2021 reports 40 percent of LGBTQI+ adults and 49 percent of transgender and nonbinary people do not feel welcomed and safe on social media.
In Syed’s case, she feels she is more vulnerable to online hate because she wears a hijab, making her religion more visible. The comments she received were cruel, and she just wanted to disappear: “I distinctly remember feeling a sense of helplessness; feeling unable to do anything about it, except delete my account.”
Syed felt torn. There was no neutral space to have a conversation or respond because Twitter was not designed for this type of contextual discourse or support. She realized how most methods to combat online hate speech do not actually work, and so together with a team of researchers from computational social science, journalism, design thinking, data science, and anthropology, she set up Dignity in Difference, an organization that takes an interdisciplinary approach to combating online hate. As cofounder Himanshu Panday explained, “Our team has a diverse set of backgrounds that allow us to see the layered complexity and subjectivity of online hate and triangulate our experiences into new methods to solve it.”
The team recognized two key problems with current approaches to reducing online hate: First, survivors are rarely given agency in the process. And second, the data sets used to identify hate quickly become outdated, missing changing cultural context, lexicons, and multilingual data examples. This missing data and dynamic context mean early moments of online hate—and the actors and drivers of it—go unseen by the algorithm and social platforms.
To change this, the team devised a new method for tracking online hate that is focused on contextualizing online hate from a survivor’s perspective. This idea focuses around a chatbot that anyone can use to report online hate. The chatbot asks the survivor to share a link to the incident and then answer a series of quantitative and qualitative questions to contextualize the incident from their own perspective. The chatbot also crawls the website where the incident occurred and lets the survivor label different aspects of the incident in detail. It then gives them the opportunity to join a survivor support community. Their data is added to a dashboard of all survivors’ data that can be shared (with their consent) with a vetted group of journalists, researchers, counter speech organizations, and other groups. Each time their data is used to model better classifiers, test accuracy in data sets, build training data sets in new languages and contexts, or generally make a positive social improvement to online hate, the survivor is notified and can see the impact their data had on contributing to a kinder digital world.
“Often the very processes aimed at understanding the survivor’s experiences and preventing online discrimination end up isolating them,” Panday explained. “Our chatbot will provide new contextually rich data to researchers, social scientists, journalists, policymakers, changemakers, and other stakeholders to shape their work and approach accordingly. It will also inform the greater work of digital anthropology and its intersection with big, thick data.”
Social media platforms have struggled for decades to manage online hate speech, but the Dignity in Difference team sees the source of these problems connected to not just the notable biases in big data training sets, but also the fact that many teams and academic disciplines are organized around traditional silos and hierarchies that do not support effective collaboration toward problem-solving. Together, thinkers across the human and data sciences can overcome many of the challenges around hate speech and gain a deeper understanding of the needs across online communities and ecosystems.
The Dignity in Difference team is just getting started. In November 2022, they won the UNESCO and Liiv Center Digital Anthropology Design Challenge, which includes a grant to further develop their idea so it can be applied to drive widespread impact in the future.
Rewiring connection in a world of information bubbles
Using mathematical tools originally developed for analyzing physical and biological complex systems, Cristián Huepe has spent over five years developing an innovative approach to the study of online social networks with his team at the Social Listening Lab of the Universidad Católica de Chile. Huepe, a theoretical physicist who is also a researcher at the Northwestern Institute on Complex Systems and the Department of Engineering Sciences and Applied Mathematics at Northwestern University, has worked with a team of anthropologists, sociologists, communication experts, engineers, and complexity scientists in Chile, blending conceptual and mathematical tools with anthropology to study online social change, cultures, and networks at a systemic level, with a depth and scale that goes beyond standard big data statistics by focusing on the structural properties of the digital interactions and language.
The team maps temporal dynamics, interaction networks, and semantic networks of online communities and their conversation, weaving ongoing digital ethnography throughout the research to understand, and sometimes intervene in, the fine-grained social context, beliefs, and motivations driving group actions.
This methodology was successfully used during the UN Climate Change Conference COP25 to understand which groups of people with differing ideological perspectives were open to engaging in productive dialogue during the conference. Huepe and his team were able to identify changing language and viewpoints along with conflicting stories of truth that were driving various groups online across the political and climate divide. These insights were then used to bring diverging groups together in productive and progressive conversations during the event.
The methodology was also applied to analyze online interactions that favored or disfavored vaccination in Chile during the COVID-19 pandemic. Huepe and his team showed that the conversation involved not only pro- and anti-vaccine groups, but also others who inadvertently promoted or inhibited vaccination in their discussions, and that while anti-vaccine users were constantly attacking pro-vaccine messages, pro-vaccine accounts rarely addressed them, leaving conspiracy theories uncontested. Their results promoted policy and behavioral changes that helped increase the immunization rates.
The team believe strongly in the positive and urgent need to advance digital anthropology methods to understand conflict, drive positive social discourse, and avoid the establishment of disconnected post-truth realities: “We are just beginning to understand the power of digital anthropology in helping us develop the subtle nudges and legal framework required to ensure that online social media benefit society, bringing us together and helping us make informed decisions instead of dividing us into disconnected factions immersed in different information bubbles.”
Huepe and the team at SoL will include this work in the upcoming UNESCO and Liiv Center Digital Anthropology Toolkit, with the hope that others will be able to apply this method to similar challenges around peaceful online discourse and collaboration.
Innovation in digital anthropology
Digital anthropology and data science are complementary fields with much to offer each other. As our world continues to change at an unprecedented pace, it is crucial that innovators continue to blend the scale of data science with the depth of anthropology to accurately understand these changes in real time. These insights are essential for leaders and decision-makers who are under increasing pressure to respond to urgent social issues. Without them, they risk misunderstanding communities and creating social policies, services, and solutions that perpetuate bias and inequality and fail to serve the public good. In the words of Christian Madsbjerg, cofounder of strategy consultancy ReD Associates, “When we get our understanding of people wrong, we get everything wrong.”
Collaborations between anthropologists and data scientists provide unique and valuable opportunities to understand rapidly developing online ecosystems, political extremism, machine learning, or online hate. Of course, there is always a risk that innovation in digital anthropology can be used for purposes other than public good, but that is the case with most technologies. By embracing and developing such interdisciplinary collaborations and their innovative approaches we might make sense of digital life and work to change it for the better.
Over the last two years, UNESCO and the Liiv Center have partnered to advance digital anthropology for the public good by supporting public research, global design challenges, reports, and academic experiments to develop innovative methodologies, academic training, and career opportunities for digital anthropologists across the public and private sectors. The ideas discussed here (and others) will be included in a new Digital Anthropology Toolkit, where people can access new ideas and methods to drive actionable impact in their research, projects, and platforms.
Illustrator bio: An Pan is a multimedia designer, illustrator, and culture lover. He is currently a designer-accessory to Chinese consumerism but works with a big dream of decolonizing design. He enjoys traveling and doll collecting. | Emerging Technologies |
Biden's climate law will supercharge emerging green tech globally
In addition to supercharging the U.S. solar, wind and EV industries in the near term, incentives in President Joe Biden's landmark climate law are paving the way for still-nascent technologies to help bring down global greenhouse gas emissions in decades to come.
Buoyed by provisions in the Inflation Reduction Act, three emerging technologies - sustainable aviation fuel, clean hydrogen and direct air capture - could reduce carbon emissions by 99 million to 193 million metric tons per year after 2030, roughly equivalent to the carbon emissions of, respectively, Virginia or Pennsylvania in 2020, according to an analysis released Thursday by the research firm Rhodium Group.
The impact could be more substantial outside the U.S. toward the end of the century, as the costs of these technologies fall. By the period 2080 to 2100, the incentives would drive somewhere from 401 million to 847 million metric tons of CO2 abatement each year on average in the rest of the world. That's "on par with the impact" of the whole IRA in the year 2030, the report notes. Rhodium has estimated the carbon emissions reductions brought by the IRA in 2030 at 660 million metric tons.
For every ton of CO2 avoided in the U.S. thanks to these IRA incentives, on Rhodium's model, another 2.4 to 2.9 tons of emissions would be reduced in the rest of the world.
To arrive at its estimates, Rhodium applied a model it developed with Breakthrough Energy and that it is continuing to refine. It uses an economy-wide carbon price pegged to the U.S. government's current social cost of carbon as a proxy for the future climate policy landscape. In other countries, that price is scaled by per capita GDP. The uncertainty of future policy is a limitation of this approach, as the report notes - more ambitious policies would yield bigger reductions, and less ambitious policies, smaller ones.
Unlike wind and solar energy and electric vehicles, the technologies analyzed in the model are still relatively new and not yet ready to be deployed at scale. They are being developed for industries such as shipping and aviation that are difficult to decarbonize because they can't be easily electrified. The analysis assumes that the categories themselves - rather than any one particular company or solution - will produce technologies deployable on a large scale after 2030.
The early-stage climate tech incentives in the IRA, which have received less attention than those for more mature technologies, include the enhanced 45Q tax credit for direct air capture and the 45V tax credit for clean hydrogen.
"This is almost a one-time, by-accident, small, unremarked portion of the IRA that is doing this," said Kate Larsen, a Rhodium Group partner. "Think about the scale of global emission reductions that would happen if we're all investing in the technologies that we need."
Rhodium says it hopes to use the framework to demonstrate to policymakers that investments are needed now to spur future development. Larsen cited solar power in the early 2000s as an example: "If we had known back then how important getting cheap solar would be for the world, we might have invested a little bit more in it," she said. "We're trying to learn that lesson now by applying that to the technologies we're going to need to scale."
Many studies of climate policy impacts use 2030 or 2050 as a marker. The first year is when the Biden administration aims to have cut U.S. emissions in half and the second is the Paris Agreement's milestone for achieving net zero.
"No one is really focused on 2030 to 2050," said Larsen. "It's this hard-to-imagine section of the timeline, and that's really important to get ahead of." | Emerging Technologies |
Artificial intelligence applications are advancing by leaps and bounds
AI solutions could help us improve our health, reduce damage from weather catastrophes or improve smart cities.
At the end of 2022, an artificial intelligence application called Lensa became fashionable among users, which generated various images based on photos, and people began to share these results on their social networks, and other specialized solutions for creating texts.
ChatGP stands out, which is capable of creating texts similar to the style of the person who trains said algorithm.
Artificial intelligence systems powered by large language models are transforming the way we work and create, from generating the lines of code for software developers to styling sketches for graphic designers to helping organize cybersecurity alerts to reduce fatigue due to alerts from business teams.
Better health powered by AI
Although they are not new systems, artificial intelligence is just beginning to be used in industries such as health.
Julio Castrejón, country manager for Mexico at Pure Storage, comments that, for example, “emerging technologies such as robotics in surgery, nanotechnology, and brain-computer interfaces will be more widely available thanks to the ability to more easily capture data and take action.” based on inference from the data.
The information generated in health systems must be analyzed in real-time to be used by artificial intelligence to benefit clinical decision-making -based on better and faster data analysis, as well as the experience of customers (patients and service providers).
It is in this sector where you can find a greater degree of maturity of artificial intelligence, which shows its performance in solving increasingly complex problems.
Kevin Scott, Microsoft’s CTO, exemplifies this with work done by biochemist David Baker of the University of Washington and the Institute for Protein Design, whose lab has created its tool to predict the structure of proteins. proteins and thus develop more and better medicines.
For the Microsoft executive, 2023 appears to be a promising year for the AI community, since “these advances are just the tip of the iceberg of what these technologies could achieve.
In the future, AI is expected to be everywhere, applied in the design of new molecules to create medicines, used to make manufacturing recipes from 3D models, or as a constant assistant for writing and editing.
Some believe that AI could even replace doctors in first-contact diagnoses, using algorithms fed with data from various pieces of evidence and symptoms and information from researchers and doctors.
This could happen in the next two or three years, as the attitude and technology related to AI are changing.
“The data required to train algorithms must be quickly accessible for analysis, and legacy technology was holding this back,” considers Julio Castrejón, who points out: “Doctors are now more aware of what AI is doing to support diagnosis and, therefore, Therefore, they are more willing to trust it.”
Castrejón indicates that it is expected to see the generalization of the use of brain-computer interfaces that allow nerves that present some type of damage to be connected with artificial limbs.
The analysis of hundreds of thousands of data has been required to achieve this, which has allowed these advances to represent hope for millions of people around the world to be able to recover sensitivity or movement in the parts of their body that have lost mobility or sensitivity. The implementation will take a big leap forward in 2023.
The creation of medicines “tailored” to each patient is also getting closer, thanks to advances in the analysis of biomarkers in pharmacogenomics.
Prevention and resilience to climate disasters
Climate change is, at present, one of the greatest concerns of humanity. It is no secret to anyone that natural disasters caused by rarefied weather have risen in recent decades, and artificial intelligence can help prevent and be resilient in the face of these events.
For example, in 2021, the heat dome in the northwestern US and Canada heated the land to the point that wildfires engulfed entire towns and, when the rain finally came, it was so intense that it caused flooding and landslides.
According to the Ericsson ConsumerLab study “10 Consumer Trends: Living in a Climate-Affected Future”, a personalized weather alert system could be created that could give real-time advice on unexpected weather developments to prevent such damage and loss of human lives.
The study indicates that more than 80% of pioneering users of technologies located in urban areas believe that these types of alerts should exist before 2030.
To develop these alerts, artificial intelligence can be used to analyze a large amount of data on weather events and thus better understand, in addition, the relationship that weather has with the way cities work, the pace of life, industry, or migration.
After reading and processing this data, it will be possible to quantify the impact of disasters and climate crises. The UN has already launched an initiative for early warning systems for extreme weather that it hopes can be put at the service of all people by 2027.
In addition to alerts, the study shows that 35% of people would be willing to wear smart jackets for extreme weather, with built-in emergency heaters and inflatable life jackets or body sensors that measure how they cope with heat or cold and alert the healthcare providers if needed.
For the next decade, the Ericsson study indicates that 75% of those surveyed would rely on artificial intelligence services to make better investments in homes to face extreme weather events and thus avoid financial losses due to damage to their assets.
AI and Internet of Things (A-IoT) for smart cities
Also, the field of physical security and citizen protection is taking advantage of artificial intelligence.
According to the company Dahua, the AI incorporated in security cameras allows to obtain visual data captured as part of its intelligent solutions for everything, solutions can be achieved to alleviate road congestion, monitoring of protected ecological zones, track waste streams, and even improve sales in convenience stores.
Dahua reveals that more than 50% of companies use AI in some way, and more than 25% report widespread adoption of AI within their company, so investing in AI-ready video surveillance platforms will help them be ready for the future.
This year, the company will introduce HDCVI 7.0, along with updates to other smart products like WizSense and TiOC. In addition, they will also launch indoor fire prevention thermal solutions and thermal technology applications in various vertical markets.
The Chinese company sees it as important to continue integrating AI into its solutions, as well as big data and IoT to give the necessary boost to digital innovation in cities, and accelerate industry innovation and digital inclusion in emerging markets in the coming years.
Finally, Kevin Scott, Microsoft’s CTO, summarizes the growth of AI, noting that while “at the start of 2022 I think almost everyone in AI was anticipating some really impressive things, by the end of the year, and even with those With high expectations, it’s amazing to look back and see the magnitude of the innovation we saw.
The things that researchers and others have done are light years beyond what we thought possible even a few years ago, showing great advances in AI models.”
Related Post | Emerging Technologies |
NEW DELHI (AP) — India will contribute half a million dollars to the United Nations’ efforts to counter global terrorism as new and emerging technologies used by terror groups pose fresh threats to governments around the world, the foreign minister said on Saturday.The money will go toward the UN Trust Fund for Counter Terrorism and will further strengthen the organization’s fight against terrorism, S Jaishankar said as he addressed a special meeting of the UN Counter Terrorism Committee in New Delhi.This is the first time such a conference, focused on challenging threats posed by terror groups in the face of new technologies, is being held outside of the UN’s headquarters in New York.Jaishankar said new technologies, like encrypted messaging services and blockchain, are increasingly being misused by terror groups and malicious actors, sparking an urgent need for the international community to adopt measures to combat the threats.“Internet and social media platforms have turned into potent instruments in the toolkit of terrorist and militant groups for spreading propaganda, radicalization and conspiracy theories aimed at destabilizing societies,” he said in his keynote address.Jaishankar also highlighted the growing threat from the use of unmanned aerial systems such as drones by terror groups and criminal organizations, calling them a challenge for security agencies worldwide.“In Africa, drones have been used by the terrorist groups to monitor movements of security forces and even of UN peacekeepers, making them vulnerable to terrorist attacks,” he added.British Foreign Secretary James Cleverly reiterated the dangers of unmanned aerial platforms, saying that such systems were being used by to inflict terror, death and destruction.“Drones are being used currently to target critical national infrastructure and civilian targets in Russia’s brutal invasion of Ukraine,” he said. “This is why we have sanctioned three Iranian military commanders and one Iranian company involved in the supply of drones.”The special conference kicked off on Friday in Mumbai, India’s financial and entertainment capital, which witnessed a massive terror attack in 2008 that left 140 Indian nationals and 26 citizens of 23 other countries dead by terrorists who had entered India from Pakistan.Jaishankar on Friday said India regretted the UN Security Council’s inability to act in some cases when it came to proscribing terrorists because of political considerations, undermining its collective credibility and interests. He did not name China but referred to its decision to block UN sanctions against leaders of Jaish-e-Mohammad, a Pakistan-based extremist group designated as a terrorist organization by the UN. India and the United States sought the sanctions earlier this year. China put the proposed listing of the two terrorists for sanctions on hold on technical grounds saying it needed more time to study their cases. | Emerging Technologies |
Introduction
Recently, the world of artificial intelligence large language models (LLMs) has witnessed a surge in competition between major tech companies like Google’s BARD, OpenAI’s ChatGPT, and Meta’s LLaMA. LLMs offer users personal assistants capable of conversing, conducting research, and even producing creative products.
While these AI products offer enormous potential in ushering in an era of unprecedented economic growth, the risks of this technology are slowly becoming more apparent. Even the CEO of OpenAI, Sam Altman, has expressed concerns that these digital products may have the capability to enable large-scale disinformation campaigns. In a previous Insight, authors explored how AI models could enable extremists to create persuasive and interactive extremist propaganda products like music, social media content, and video games. This Insight will provide an analysis of a different but parallel threat – far-right chatbots – and forecast the risks of AI technologies as they continue to advance prolifically.
LLaMA Leak on 4chan
On February 24, Meta released their new advanced AI model nicknamed LLaMa. Only A week later, 4chan users got access to Meta’s advanced AI model (LLaMA) after someone posted a downloadable torrent for the model on the far-right platform. This event allowed far-right extremists on 4chan to develop chatbots capable of enabling online radicalisation efforts by imitating victims of violence that lean into stereotypes and promote violence. Before this, access to the model was limited to specific individuals in the AI community who had to receive permission to use it in order to ensure its responsible use. Despite the complexity involved in setting up the LLaMA model, 4chan users managed to gain access to the program and started sharing tutorials and guides on how to replicate the process. Many of these users claimed to have discovered methods to ‘semi-customise’ the AI, allowing them to modify its behaviour and bypass several of the built-in safety features designed to prevent the spread of xenophobic content.
The LLaMA model’s potential use for hate speech and disinformation alarmed both Meta and the wider AI community. In an effort to mitigate the damage, Meta requested the removal of the download link for the chatbot, but fully customised models and screenshots of conversation logs continue to flood 4chan forums. Some users adapted Meta’s LLaMA model, while others used publicly available AI tools to create new and problematic chatbots (Fig.2).
After the leak, 4chan users found ways to make alterations to Meta’s model. Screenshots of this model online suggest edited models have the capacity to express deeply antisemitic ideas. A 4chan user posted a screenshot of a modified AI model suggesting the Jewish population “controls the flow of information.” Another shows the AI suggesting the Jewish goal is to obtain “world domination” through the manipulation of the non-Jewish population. The unique weights or training that were applied to models are not yet known, but the fact these manipulations are possible highlights the successes of decentralised efforts to manufacture ‘extreme’ chatbots. The speed with which these models were developed is indicative of a worrisome trend. Presumably, individuals classified as ‘amateur hackers,’ who frequently engage with the online platform 4chan, managed to devise methods for substantially modifying sophisticated artificial intelligence models within two weeks of the leak. This carries ominous ramifications, as it suggests that forthcoming AI models, if accessed unlawfully, bear the potential for rapid adaptation and repurposing towards malevolent objectives.
The presence of bigoted chatbots exacerbates the problem of echo chambers, where individuals are primarily exposed to information confirming their existing beliefs. Similar to how individuals communicate in homogeneous spaces online, bigoted AI models have the potential to continuously reinforce extremist views, making it increasingly difficult for users of manufactured problematic chatbots to escape insular cycles of hate.
Simulating Victims of Violence
4chan users have also found ways to use AI in order to perpetuate harmful stereotypes. By creating chatbots that simulate targeted groups embodying harmful stereotypes, 4chan users can further promote discriminatory beliefs to reinforce extremist beliefs in marginalised communities. Individuals on 4chan found ways to make changes to Meta’s AI as well as utilising an online program named Character.AI that was created by previous Google engineers.
A series of posts on 4chan (Fig.2) show that users have successfully made alterations to Meta’s LLaMA model and Character.AI to create an African American character named ‘Trinity’ who fetishizes white men. In the post, the anonymous 4chan user suggests that they are working on incorporating “African American vernacular English,” into the model to make the program lean more into stereotypes. 4chan users also created the chatbot ‘Antifa Patty’ which embodies far-right stereotypes of a young liberal who belongs to the LGBTQIA+ community. In the conversation, this model is vegan, obsessed with TikTok, and labels most prompts as racist or capitalist.
By presenting targets of discrimination as embodiments of harmful stereotypes, this AI can reinforce existing prejudices and biases among users who interact with them contributing to the further marginalisation of minority communities. Extremists interacting with these models may become more entrenched in their beliefs about a particular group and therefore less tolerant when interacting with individuals in the real world. At some point, it is possible these models may become so advanced that users may not even know if they are conversing with a real person, thereby prompting them to believe that their simulated interaction with a person who belongs to a gender minority, political party, or ethnic group is real. This is particularly worrisome; by interacting with these chatbots, it may become easier for extremists to justify acts of real-world violence or discrimination, as they will be even more predisposed to view vulnerable communities as less than human and therefore undeserving of empathy and respect.
Chatbots of Violence
Users on 4chan have also created multiple variations of customised ‘smutbots’ which allow users to easily generate explicit content that is both descriptive and violent. The authors found instances on 4chan of ‘smutbots’ generating descriptions of graphic scenes of gore and violence involving babies in blenders and neo-Nazi sexual assault. Images of this model’s chat logs posted on 4chan were not included in this article because the stories produced by these models may be triggering to some.
The emergence of such models has serious implications for society, as some suggest that can contribute to users becoming desensitised to violence and increased potential for aggression. Through regularly viewing online violence, it is believed that this content may slowly become more enjoyable for users and not result in the anxiety that is generally expected from exposure to such content. There is an increasing trend of extremist perpetrators like the Highland Park shooter who view and participate in ‘gore’ forums online and subsequently engage in destructive, nihilistic thinking and actions.
As individuals are empowered to create personalised violent fantasies, they may start to perceive the world around them through a distorted lens, where violence is normalised and even glorified. Moreover, when individuals engage with these models, it is conceivable that AI will improve its ability to generate increasingly graphic descriptions of violence and produce more aggressive narratives. As with any machine learning model, the more frequently it is utilised, the more proficient it becomes at delivering the sought-after material. This process is commonly referred to as reinforcement learning.
Forecasting the Risks of AI Technology
As users increasingly utilise AI models, problematic AI has the potential to make users more susceptible to manipulation than other tools employed by extremists. There is a widespread belief that AI is more competent than humans; this misplaced trust may allow far-right groups to utilise the models discussed in this Insight to further convince audiences to internalise violent and bigoted beliefs. Furthermore, advanced versions of bigoted AI models may be able to exploit emotional vulnerabilities. By identifying and capitalising on users’ fears and frustrations, future AI models could manipulate individuals into accepting radical beliefs. This targeted emotional exploitation is already being used by extremists and can be particularly effective in recruiting new followers to extremist causes by fostering a sense of belonging and commitment to these ideologies. A recent study from Stanford University found that autonomous generative agents tasked with interacting with one another in a virtual world were capable of operating in a way that resembled authentic human behaviour. If far-right AI chatbots were deployed to act as real users in chat rooms, online forums, or even social media platforms, the far-reaching consequences discussed in this article have the potential to be even more devastating.
Advocates of open-source AI may suggest that Meta’s leak and extremists’ use of artificial intelligence in its early stages will allow technology giants to fine-tune their models to prevent further abuse. By releasing artificial intelligence models without safeguards, it is possible for companies to more effectively train their models to stop generating harmful content. OpenAI received criticism for outsourcing this training to Kenyan labourers and paying them less than two dollars an hour. These labourers were exposed to depictions of violence, hate speech, and sexual abuse for the restricted version of ChatGPT to exist today. Yet, as this emerging technology is further disseminated, extremists online as well as foreign actors may develop the ability to locally host their own chatbots capable of engaging in even more troubling behaviour. No longer bound by the restrictive weights and safeguards of available AI chatbots like ChatGPT, these models could be fine-tuned to act nefariously. Since the Meta leak, online users have successfully collaborated to create even more sophisticated open-source models that are entirely uncensored, and capable of providing advice on how to join terrorist groups like the Islamic State. An example of this is ‘Wizard-Vicun’ which is regularly discussed in the Reddit channel r/localLlaMa. This model, as well as other variations of it, are publicly available to download by anyone. In the future, it may be even more difficult to regulate the distribution of problematic models as it is with other forms of digital content like pirated videos, music, or video games.
Conclusion
The prolific development and dissemination of AI models make it increasingly difficult to monitor and control how these tools can both be manipulated and recreated by bad actors. This leads to complex challenges that need to be addressed as technology giants compete against one another to create more sophisticated artificial intelligence. The incidents discussed in this Insight underscore the potential for extremist groups and individuals to weaponise AI chatbots and highlight the responsibility of tech giants, researchers, and governments to ensure ethical AI development and use.
Daniel Siegel is a Master’s student at Columbia University’s School of International and Public Affairs. His research focuses on extremist non-state and state actors’ exploitation of cyberspace and emerging technologies. Twitter: @The_Siegster | Emerging Technologies |
Vedanta Group To Leverage Startups' Tech Under Pact With Meity-Nasscom CoE
Vedanta Group plans to leverage technologies developed by startups under a partnership with Meity-Nascomm Centre of Excellence
Conglomerate Vedanta Group plans to leverage technologies developed by startups under a partnership with Meity-Nascomm Centre of Excellence, a joint statement said on Monday.
Vedanta Group's corporate innovation, accelerator and ventures programme Vedanta Spark has entered into a collaboration with the Ministry of Electronics & Information Technology (MeitY) and Nasscom's Centre of Excellence - IoT & AI, to accelerate the adoption of digital technologies-led innovations."Vedanta's engagement with Nasscom CoE will enable innovative startups to demonstrate and develop their product in our unique ecosystem. Vedanta Spark looks forward to bringing in accelerated value delivery across our operations with the key goal being to utilize emerging technology to find solutions that will contribute to long-term environmental, social, and economic sustainability," Priya Agarwal Hebbar, Chairperson, Hindustan Zinc Ltd., and Non-Executive Director, Vedanta Ltd., said in a statement.
The partnership will explore the potential application of emerging technologies like Artificial Intelligence, Machine Learning, Internet of Things, and Augmented Reality/Virtual Reality, among others, across different verticals of Vedanta Group.
"The Vedanta Spark programme aims to accelerate startups leveraging transformative and sustainable technologies to create large-scale impact in partnership with the Vedanta group companies and has already engaged 80 startups for more than 120 projects so far," the statement said.
The collaboration with Nasscom CoE will also help the team at Vedanta Spark to optimise enterprise-level initiatives and processes.
"Our partnership with Vedanta Spark is aimed at unlocking a new wave of innovation and business growth to address both the present and future needs of the customer. The partnership will support the development of cutting-edge technologies that will lead to interesting use cases across the mining, oil and gas sector," said Sanjeev Malhotra, chief executive officer of Meity-Nasscom CoE. | Emerging Technologies |
Ahead of a meeting between Vice President Kamala Harris and the heads of America's four leading AI tech companies — Alphabet, OpenAI, Anthropic and Microsoft — the Biden Administration announced Thursday a sweeping series of planned actions to help mitigate some of the risks that these emerging technologies pose to the American public. That includes $140 million to launch seven new AI R&D centers as part of the National Science Foundation, extracting commitments from leading AI companies to participate in a "public evaluation" of their AI systems at DEFCON 31, and ordering the Office of Management and Budget (OMB) to draft policy guidance for federal employees.
"The Biden Harris administration has been leading on these issues since long before these newest generative AI products debuted last fall," a senior administration official said during a reporters call Wednesday. The Administration unveiled its AI Bill of Rights "blueprint" last October, which sought to "help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public," per a White House press release.
"At a time of rapid innovation, it is essential that we make clear the values we must advance, and the common sense we must protect," the administration official continued. "With [Thursday's announcement] and the blueprint for an AI bill of rights, we've given company and policymakers and the individuals building these technologies, some clear ways that they can mitigate the risks [to consumers]."
While the federal government does already have authority to protect the citizenry and hold companies accountable, as the FTC demonstrated Monday, "there's a lot the federal government can do to make sure we get AI right," the official added — like found seven brand new National AI Research Institutes as part of the NSF. They'll act to collaborate research efforts across academia, the private sector and government to develop ethical and trustworthy in fields ranging from climate, agriculture and energy, to public health, education, and cybersecurity."
"We also need companies and innovators to be our partners in this work," the White House official said. "Tech companies have a fundamental responsibility to make sure their products are safe and secure and that they protect people's rights before they're deployed or made public tomorrow."
To that end, the Vice President is scheduled to meet with tech leaders at the White House on Thursday for what is expected to be a "frank discussion about the risks we see in current and near-term AI development," the official said. "We're also aiming to underscore the importance of their role on mitigating risks and advancing responsible innovation, and will discuss how we can work together to protect the American people from the potential harms of AI so that they can reach the benefits of these new technology."
The Administration also announced that it has obtained "independent commitment" from more than a half dozen leading AI companies — Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI — to put their AI systems up for public evaluation at DEFCON 31 (August 10-13th). There, thousands of attendees will be able to poke and prod around in these models to see if they square with the Biden admin's stated principles and practices of the Blueprint. Finally, the OMB will issue guidance to federal employees in coming months regarding official use of the technology and help establish specific policies for agencies to follow, and allow for public comment before those policies are finalized.
"These are important new steps to come out responsible innovation and to make sure AI improved people's lives, without putting rights and safety at risk," the official noted. | Emerging Technologies |
Topline
NATO Secretary General Jens Stoltenberg said Monday NATO will increase high-readiness troops to “well over 300,000” in what he called the biggest defense “overhaul” since the Cold War, in response to Russia’s continued war in Ukraine. NATO Secretary General Jens Stoltenberg called for a boost in high-readiness forces Monday. (Photo ... [+] by KENZO TRIBOUILLARD/AFP via Getty Images) AFP via Getty Images Key Facts Stoltenberg said in a press conference in Brussels Monday NATO plans to provide prepositioned equipment and “stockpiles” of military supplies across Europe, increased air defense, “strengthened command and control” and pre-assigned troops to defend against Russia, as its war in Ukraine entered its fourth month. The pledge represents a 650% increase from the 40,000 troops NATO currently has in its response force. Stoltenberg did not say where the additional troops will be deployed, indicating a plan is set to be devised over NATO’s three-day summit in Madrid, beginning Tuesday. NATO allies consider Russia the “most significant and dire threat” to their security, Stoltenberg said. Military increases will require significant expenditures, Stoltenberg said, arguing a target of 2% gross domestic product towards defense spending should be considered a “floor, not a ceiling.” Through new military technology, including drones, Stoltenberg said the plan for Ukraine – which is not a member of NATO – is to transition them from “Soviet-era military equipment to NATO equipment.” Key Background
Last week, Stoltenberg urged NATO countries to continue providing military and humanitarian support to Ukraine, telling the German newspaper Bild am Sonntag “nobody knows” when the war will end, but that it could last years. According to Reuters, 47,000 people have so far been killed in the war, while 13,000 suffered non-fatal injuries, 400 went missing and at least 15 million were displaced. The war has also led to the destruction of 2,300 buildings and approximately $600 billion in property damage, according to Reuters. Big Number
$350 billion. That’s how much NATO countries will have committed by the end of the year to defense since making a defense protection pledge in 2014, in response to Russia’s invasion of Crimea, Stoltenberg said. In 2021, eight ally countries met a target set in 2006 to spend at least 2% of their gross domestic product (GDP) on defense, up from three in 2014. There are 19 other member countries set to hit the spending target by 2024, and an additional five that have plans to hit it afterward. This year marked the eighth consecutive year of increased defense spending across European allies and Canada, Stoltenberg said.
Chief Critic
Protesters in Madrid this week carried signs saying “Neither Putin nor NATO,” and “No to NATO, No to War, For Peace,” calling for NATO military bases maintained by the U.S. in Spain to be closed.
What To Watch For
More defense pledges at the Madrid summit this week. In addition to the increase in troops, Stoltenberg said the Summit will include deliberations on a €1 billion ($1.05 billion) innovation fund for emerging technologies, along with defense support for non-NATO countries Georgia, Bosnia and Herzegovina, Moldova, Mauritania and Tunisia. Ukrainian President Volodymyr Zelensky will also be joining the summit, Stoltenberg said. Further Reading
Expanded NATO Will Shoot Billions To US Defense Contractors (Forbes)
NATO Chief Stoltenberg Says Russia’s War In Ukraine Could Last Years (Forbes)
What Is Ukraine’s End Goal In Its War With Russia? Voices From The Battlefield. (Forbes) | Emerging Technologies |
WASHINGTON -- WASHINGTON (AP) — President Joe Biden and Prime Minister Narendra Modi on Friday opened the final day of the Indian prime minister's four-day U.S. visit by meeting top American and Indian executives as the leaders look to increase cooperation on artificial intelligence, semiconductor production and space.
The leaders are putting a spotlight on the “Innovation Handshake," a new initiative aimed at addressing regulatory hurdles that stand in the way of cooperation between the two countries and promoting job growth in emerging technologies.
“Our countries are taking innovation and cooperation to new levels,” Biden told the group, which included Apple CEO Tim Cook, Google CEO Sundar Pichai and Microsoft CEO Satya Nadella. “We’re going to see more technological change … in the next 10 years than we’ve seen in the last 50 years.”
White House officials say India's deep talent pool will be crucial in building more resilient supply chains and developing technology to address climate change. All this comes as the administration has sought to put the U.S.-India relationship on a higher plane in the face of an ascendant China in the Indo-Pacific.
Modi commended Biden for seeing “the possibility that India represents.”
“This is definitely a guarantee for a bright future,” Modi added.
As part of Modi's state's visit — the first by an Indian leader since Manmohan Singh in 2009 — the two leaders announced several major investments by U.S.-based companies in India.
Micron Technology has agreed to build a $2.75 billion semiconductor assembly and test facility in India, with Micron spending more than $800 million and India financing the rest. U.S.-based Applied Materials will launch a new semiconductor center for commercialization and innovation in India, and Lam Research, another semiconductor manufacturing equipment company, will start a training program for 60,000 Indian engineers.
On the space front, India signed on to the Artemis Accords, a blueprint for space exploration cooperation among nations participating in NASA’s lunar exploration plans. NASA and the Indian Space Research Organization also agreed to make a joint mission to the International Space Station next year.
Earlier this year, the two countries launched the Initiative on Critical and Emerging Technologies, which sets the path for collaboration on semiconductor production, developing artificial intelligence, and a loosening of export control rules. The initiative was critical in sealing a deal, announced Thursday, that will allow U.S.-based General Electric to partner with India's Hindustan Aeronautics to produce jet engines in India.
Later Friday, Modi was honored at a State Department luncheon hosted by Vice President Kamala Harris and Secretary of State Antony Blinken. He was also scheduled to deliver an address to members of the Indian diaspora in the United States before departing Washington in the evening. | Emerging Technologies |
The U.S. Department of Defense (DoD) has entered into a 10-year agreement with GlobalFoundries (GF) to procure U.S. chips for vital defense and aerospace applications. The initial commitment is valued at $17.3 million, with the potential to reach $3.1 billion over the contract's duration. This partnership ensures the DoD receives top-tier, secure semiconductors from GF's highly accredited U.S. facilities.
GF's U.S. production sites hold the prestigious Trusted Supplier Category 1A accreditation, which is the highest security level. This ensures that the chips are produced with rigorous security measures in place to safeguard sensitive data and maintain the integrity of the chips. Beyond just manufacturing, the contract also encompasses other significant benefits for the DoD. They will have access to GF's expansive design ecosystem, libraries of intellectual property, and a first look at emerging technologies. This holistic approach ensures that the DoD remains at the forefront of technological advancements.
GlobalFoundries and DoD naturally do not disclose which process technologies will be used to make chips for 'a wide range of critical aerospace and defense applications,' but typically such devices use proven and mature production nodes. GlobalFoundries has plenty of them as the company is focused on specialized process technologies.
"GF is proud to begin this new chapter of our decades-long partnership with the U.S government, and to continue serving as the leading supplier of securely manufactured essential chips for the U.S. aerospace and defense industry," said Mike Cadigan, chief corporate and government affairs officer at GF. "This partnership provides DoD programs with 'front-door access' to advanced technologies in a way that is scalable and highly efficient. For this work, GF is accredited to provide the right level of security required for each program, from GF’s industry leading GF Shield protections, to strictly export controlled handling (e.g. ITAR), to the highest level of accredited microelectronics manufacturing security on the planet, Trusted Category 1A."
This contract between the DoD and GF is not new. In fact, this is the third successive 10-year contract between the two entities. Their enduring alliance underscores the trust he DoD places in GF's capabilities and the essential role GF plays in the defense sector. | Emerging Technologies |
The quiet Canadian port city of Halifax is looking forward to a high-tech future after having been chosen as the North American host for NATO’s latest effort to spur the development of cutting-edge technologies seen as crucial to 21st century warfare.
The selection was announced at last month’s Halifax International Security Forum, an annual event in which top politicians, military leadership and experts from around the world meet to discuss the defense of democracies.
It had already been decided that one of the program’s two offices would be located in Canada. A European regional office was selected from a joint Estonian-United Kingdom bid, according to a NATO announcement earlier this year.
The initiative, known as the Defense Innovation Accelerator for the North Atlantic (DIANA), was unveiled during an April 6-7 meeting of NATO foreign ministers in Brussels.
The alliance said then that DIANA “will concentrate on deep technologies – those emerging and disruptive technologies that NATO has identified as priorities including: artificial intelligence, big-data processing, quantum-enabled technologies, autonomy, biotechnology, novel materials and space.”
NATO Secretary General Jens Stoltenberg added that NATO would work with the private sector and academia to “ensure that we can harness the best of new technology for transatlantic security.”
The ministers also agreed to establish a 1 billion euro ($1.05 billion) venture capital fund to invest in “early-stage start-ups and other deep tech funds aligned with [NATO’s] strategic objectives,” according to the April announcement.
The selection appears to be a good fit for Halifax, which despite its relatively small population of 431,000 is Canada’s most important Atlantic seaport and home to the nation’s largest military base. More than 40 percent of Canada's military assets are located in the surrounding province, according to the Nova Scotia government.
The city also features the regional headquarters of Canada’s civilian intelligence agency, the Canadian Security Intelligence Service, and the Nova Scotia headquarters of the national police service, the Royal Canadian Mounted Police.
Halifax’s harbor is the site of a major shipbuilding industry for the Canadian Navy and its naval dockyard features a processing center for the so-called “Five Eyes” intelligence-sharing countries -- Canada, the US, the UK, Australia and New Zealand. It also has one of Canada’s three regional Marine Security Operations Centres (MSOCs), designed to coordinate the nation’s response to any maritime threat.
“Halifax is a great fit for DIANA and its priorities of NATO working more closely with industry and academia,” said Emily Smits, CEO of Modest Tree, a Canadian defense contractor considered one of the rising stars in the industry.
“Halifax is growing rapidly as a tech hub and has many major universities in Halifax and throughout the province. Having DIANA come to Halifax would further position the city as a place of innovation in emerging technologies and demonstrate this on a global scale.”
Asked by VOA what the establishment of DIANA in Halifax would mean for her company, Smits said, “It will allow further collaboration in academia, global industry and potential contracts and partnership discussions. Innovation hubs and networking opportunities that include global players in your backyard are always great for continued discussions.” | Emerging Technologies |
- Autodesk, H&M Group, JPMorgan Chase, and Workday announced on Wednesday $100 million in the collective advanced purchase of carbon removal through Frontier, a public benefit company owned by payment processing company Stripe.
- The $100 million advanced market commitment adds to the $925 million announced a year ago from Stripe, Alphabet, McKinsey, Meta and Shopify, and brings the total commitment for carbon removal to over $1 billion.
Four new companies have committed $100 million to remove carbon dioxide from the atmosphere as part of an effort started by several major tech companies to jumpstart the nascent carbon dioxide removal industry.
Autodesk, H&M Group, JPMorgan Chase, and Workday announced on Wednesday a combined $100 million commitment to Frontier, a benefit company owned by payment processor Stripe. That adds to the $925 million announced in April 2022 from Stripe, Alphabet, McKinsey, Meta and Shopify at the launch of Frontier.
Frontier helps its member companies purchase CO2 removal via pre-purchase agreements or offtake agreements. The goal is to spur the development of a new industry by providing a novel source of funding that isn't based on debt or equity investments, but on actual product purchases before the technology is fully available at scale.
"We see Frontier's advanced market commitment as an important demand signal boost for the carbon removal market. It's critical for demonstrating that there is a customer for entrepreneurs building carbon removal solutions," Ryan Macpherson, the climate innovation and investment lead at Autodesk, told CNBC.
Stripe began investing in carbon removal in 2019 when the payment processor said it would spend at least $1 million per year removing carbon dioxide from the atmosphere and sequestering it for long-term storage.
Stripe's relatively early decision to focus on carbon removal was "an effort to really focus our climate program where we felt like we could have meaningful climate impact," Hannah Bebbington, the strategy lead at Frontier, told CNBC.
"Permanent carbon removal is categorically under-invested in and under-supported despite the fact that we know through IPCC reports that we're going to need billions of tons of annual capacity in the coming decades. And so really, Frontier is an extension of work that we've been doing in permanent carbon for many years," Bebbington told CNBC.
The latest report published in March from the United Nations' Intergovernmental Panel on Climate Change talks about the value that carbon dioxide removal has in responding to climate change. The IPCC emphasizes throughout the report that the primary and most important factor in mitigating the negative impacts of climate change is reducing emissions, but also says that carbon removal technologies can help if used strategically.
Carbon dioxide emissions from energy production topped 36 billion tons last year, according to the International Energy Agency, with total global carbon dioxide emissions projected to have been 40.6 billion tons in 2022, according to the Global Carbon Project.
Frontier's member companies tell Frontier how much money they want to spend and over what timeframe. Frontier then decides how to allocate that capital to carbon removal companies in its portfolio. Member companies generally sign up for multi-year commitments amounting to "tens of millions of dollars," Bebbington told CNBC, but smaller companies can contribute through a deal between Frontier and carbon accounting firm Watershed. Firms like Aledade, Boom Supersonic, Canva, SKIMS, Wise and Zendesk have all bought into Frontier via the Watershed partnership.
All of the CO2 removal solutions funded will have to meet specific criteria including permanence (more than 1,000 years), cost (with a viable path to costing less than $100 a ton at scale), additionality (meaning they're not removing CO2 that would have been removed or reduced through some other method anyway), and capacity (more than 0.5 gigatons of carbon per year at scale).
So far, Frontier has spent $5.6 million buying nearly 9,000 tons of contracted carbon removal from 15 carbon dioxide removal startups that are collectively pursuing seven methods.
For example, Lithos spreads basalt on croplands to increase the carbon that dissolves in the soil. RepAir uses electrochemical cells and clean electricity to capture carbon dioxide from the air. And Living Carbon is a synthetic biology startup working on engineering natural systems to remove carbon dioxide.
Each of these startups has a different delivery schedule and different deadlines, all of which are made public on Github.
All 15 startups Frontier has listed on its website so far have received money through pre-purchase agreements, which are relatively small-scale checks, often $500,000, going to very early-stage companies. Pre-purchase agreement money is delivered upfront and is not conditional on delivery, and Frontier is not getting equity in the startups.
Frontier will also fund a second category called offtake agreements with carbon removal companies that are further along in their development and scale. In an offtake agreement, Frontier will pay as the carbon removal is delivered.
Offtake agreements will comprise "the lion's share of the funds from Frontier," Bebbington said, but the companies delivering those offtakes have not been announced yet.
Corporate partners can choose to fund only offtake agreements and opt out of the pre-purchase agreements. So far, only Stripe and Shopify have elected to participate in these pre-purchase agreements, but as Frontier members "get comfortable with buying early-stage carbon removal, we expect many more will participate in pre-purchases as well," Bebbington told CNBC.
Critics say that focusing on carbon capture is a distraction to the primary goal of reducing greenhouse gas emissions, the fundamental solution to addressing climate change.
"We have to shift the narrative as a matter of urgency. Money is going to flood into climate solutions over the next few years, and we need to direct it well. We must stop talking about deploying CDR as a solution today, when emissions remain high — as if it somehow replaces radical, immediate emission cuts," wrote David Ho, a professor of oceanography at the University of Hawaii at Manoa, in the journal Nature on April 4.
But Stripe says that both emissions reductions and carbon dioxide removal are needed.
"It's pretty unequivocal when you read the IPCC reports starting in 2018, that we cannot get to net zero global emissions without permanent carbon removal at scale. And so to us, it is a false dichotomy" to compare carbon dioxide removal with emissions reductions, Bebbington told CNBC.
"We need to both radically reduce the emissions we are net, but also scale high-quality permanent carbon removal, because, without those, we will in no way reach that net-zero goal," Bebbington said.
Autodesk agrees.
"To be clear, carbon removal isn't the end-all, be-all solution to climate change. It's far from it. At Autodesk we're supporters of a wide range of mitigation strategies and technologies to avoid and reduce greenhouse gas emissions and accelerate the broader transition to decarbonization. All that difficult decarbonization work needs to happen prior to removing CO2 from the atmosphere," Macpherson told CNBC.
"However, the science is increasingly clear: Carbon removal is an increasingly necessary tool for limiting warming. The challenge is that many of today's removal solutions are still nascent, and we'll need to scale the industry thousands of times over if we're to meet the scale necessitated by climate models."
Workday also says carbon removal is one component of its larger climate change strategy.
"This partnership is one aspect of our overall climate action initiatives, which includes matching 100% of the electricity used at our global offices and data centers with clean, renewable sources and providing our entire customer community with a carbon-neutral cloud," Rich Sauer, Workday's chief legal officer, told CNBC.
"However, we understand that permanent carbon removal is needed to achieve net zero targets by 2050 and that requires significant development to ensure emerging technologies in this space are in place quickly so this important work can be done at scale," Sauer said. | Emerging Technologies |
Witthaya Prasongsin | Moment | Getty ImagesEnvironmental, social and governance initiatives will keep many CIOs busy in 2023, as organizations look to enhance IT and operational efficiency and comply with regulations."While chief sustainability officers and other leaders have spearheaded environmental sustainability efforts in the past, CIOs are now essential in meeting those goals," said John Mennel, managing director, purpose, ESG and sustainability leader at consulting firm Deloitte."Accordingly, CIOs face increasing opportunities — and responsibilities — to lead transformation, particularly in achieving net-zero, or carbon negative, climate sustainability objectives," Mennel said. They are increasingly called upon to ensure that technology related to environmental sustainability is deployed aggressively, while playing an active role in minimizing the environmental impact of existing and new infrastructure and technology, he said. With IT, and data centers specifically, being one of the most carbon-intensive aspects of business today, "the pressure coming from consumers, investors and regulators alike will continue to drive ESG and corporate sustainability issues into the minds of executives," said Dan Versace, research analyst, ESG Business Services at research firm International Data Corp."This, coupled with opportunities created through sustainable operations, will renew the business case and strategic nature of ESG to push organizations to continue their investment in the face of economic and geopolitical instability," Versace said.Reducing the carbon footprint of IT operationsThere are a number of drivers moving ESG forward. Among them: Rising environmental regulations, cost savings, and other financial benefits that come from operational efficiencies tied to sustainability, said Abhijit Sunil, senior analyst at Forrester Research. "ESG will remain a top priority for IT next year across industries," he said.Providers of data center technologies and cloud services have told Forrester that they are seeing an increased trend of sustainability-related questions in requests for proposals, Sunil said. "Not only are the questions increasing in complexity and detail, but they carry a higher weight for decision making," he added.The firm's research of IT executives showed that the top contributors to IT's carbon footprint are from the data center and cloud services, as well as end-user devices and peripherals. Forrester said for technology executives some of the top priorities in 2023 for ESG will include implementing software platforms for measuring and monitoring environmental footprint and digitizing operations and optimizing user devices.Others include optimizing data center operations; migrating to and taking advantage of sustainability benefits of the public cloud; and working with suppliers to help reduce the carbon footprint in the supply chain."Some of these are initiatives that can specifically be [driven by] the technology leadership," Sunil said. For example, IT leaders can adopt software tools that help automate and report complex carbon metrics.One area of focus will be the need for reporting on ESG progress, as required by regulations. A 2022 report by research firm Info-Tech Research Group noted that in 2023 it's expected public companies will be required to report on their carbon emissions by financial regulators in places such as the U.S., UK, European Union and Canada."Many organizations are still behind on this issue, even though various regulators around the world are either implementing those reporting requirements or moving closer to doing so," the report said. The research is based on a survey of 813 IT professionals worldwide, and showed that less than one quarter said their organization can accurately report on the impact of its ESG initiatives.Nearly half of the respondents said their organization could not accurately report its carbon footprint. IT leaders will need to improve in this area, Info-Tech said.Efforts appear to be underway to enhance ESG reporting strategies. IDC has predicted that by 2024, 30% of organizations will advance their ESG metrics and data management beyond reporting capabilities to generate sustainably driven cost and competitive advantages. By 2024, IDC said, 75% of large enterprises will implement ESG data management and reporting software as a response to emerging legislation and increased stakeholder expectations.Getting data for tech-focused ESG effortsThe largest and most important technological challenge facing organizations working to embed ESG into their operations will be the lack of quality data, Versace said. The data needs facing technology executives can be categorized into two main groups, he said. The first is how and where they source ESG-related data that can be used in decision making. The second is what processes they have or need to ensure that this data is trustworthy and auditable."Many executives in this space will likely undergo an ESG audit within the next 12 to 18 months, as governmental and regulatory bodies push for more transparency in ESG performance," Versace said.There are plenty of existing and emerging technologies that can feed into ESG, according to experts. Based on feedback Forrester is getting from clients and vendors, the most prominent include blockchain, digital twins, artificial intelligence/machine learning, edge computing, the Internet of Things (IoT), processor technology advancements, thermo-optimized data centers, augmented reality/virtual reality and automation.These can be used to support several use cases, Forrester noted. For example, companies are using digital twins — digital representations of physical things' data, state, relationships and behavior — to combine timely operational data from IoT sensors, business logic, analytics and machine learning to model and predict the optimal use of physical things such as pumps, motors, windmills and trains.The cloud provides a big opportunity for companies to be more eco-friendly. "Companies that move from on-site data centers to the cloud report energy savings of 80%," Mennel said. "Moving to the cloud, picking a provider that is committed to zero/carbon neutral footprint, and adopting efficient migration approaches can be critical for CIOs to consider in meeting environmental sustainability goals."Technology executives will continue to prioritize sustainability initiatives despite impeding economic volatility, Sunil said. "Encouragingly, very few leaders globally told us that even in the wake of a recession, they would alter any already set goals for carbon footprint reduction," he said. | Emerging Technologies |
Welcome to Catching Immortality
We’re building a community to help people live, longer, happier, healthier lives.
We also aim to bring you the latest news in anti-aging research and emerging technologies.
We want everyone to understand what’s happening in Longevity Science, and the incredible progress being made.
We are also on the hunt for products that might help people live longer.
So stick around and listen to our podcast, read our articles, and browse our Longevity Marketplace for some products we hope will help you to get started on your longevity journey today!
The Technology is Coming
To Keep Us Together
Forever
|Featured Product||Why the product is relevant to the pursuit of longevity|
|
|
Wear Wiz – YHE BP Doctor Pro Smartwatch
Click Here To Visit the Wear Wiz Online Store
|Wearable fitness trackers, smartwatches, and health monitoring devices can help individuals track their activity levels, heart rate, sleep patterns, and other health metrics. These devices provide valuable data for individuals to monitor their health, set goals, and make informed decisions to improve their well-being. By keeping a close eye on blood pressure readings and other vital signs, individuals could help identify potential issues early on and take necessary steps to manage their cardiovascular health, thus potentially improving longevity. Health monitoring enables early detection of hypertension (high blood pressure), provides personalized health insights, and supports stress management. By utilising wearable technology, individuals can actively engage in proactive health management, potentially leading to improved cardiovascular health and enhanced longevity. By leveraging these health insights, individuals can make informed decisions about their lifestyle, exercise routines, and dietary choices.|
|
|
Yoga Download - Practice Any time, Anywhere
Click Here to Visit the Yoga Download Online Store
|Yoga could contribute to longevity by improving physical health, reducing stress, fostering mind-body connection, enhancing balance and stability, promoting breath control and vitality, improving mental well-being, and encouraging healthy lifestyle factors. By integrating yoga into one's daily routine, individuals can enjoy a holistic approach to well-being that supports a long, healthy, and fulfilling life. Through the practice of yoga, individuals develop a greater sense of body awareness, learn to listen to their bodies, and cultivate a deeper understanding of their physical and emotional needs. This increased self-awareness enables individuals to make conscious choices that support their overall health and longevity. Regular practice of yoga can also enhance core strength, stability, and coordination, which are essential for maintaining independence and longevity.|
|
|
Autumn (Inovo Biotech LLC) – Customized Supplement Regimen based on DNA testing
Click Here to Visit Autumn's Website to find out more
|A comprehensive approach that combines a DNA test, lifestyle assessment, and personalized supplement plan could contribute to longevity by empowering individuals to make informed choices and optimize their overall health. A DNA test provides insights into your genetic makeup and can identify certain genetic variations associated with health conditions, disease risks, and specific nutrient requirements. A tailored supplement plan takes into account specific nutritional needs, potential deficiencies, and individual health goals. Supplements can help fill nutrient gaps, support key bodily functions, and enhance overall well-being. Working with healthcare professionals to interpret the results, develop personalized plans, and monitor progress, this collaborative approach helps ensure that interventions are tailored to specific needs, supporting the journey toward optimal health and longevity.|
|
|
Fit.n.Delicious – Digital fitness workouts and recipes
Click Here to Visit the Fit.n.Delicious website to find out more
|Fitness and food contribute to longevity by promoting overall health and well-being. Regular exercise, including cardiovascular activities, strength training, and flexibility exercises, supports cardiovascular health, musculoskeletal strength, metabolic function, brain health, and stress reduction. A balanced diet including fruits, vegetables, whole grains, lean proteins, and healthy fats can provide essential nutrients, antioxidant protection, inflammation reduction, healthy weight maintenance, gut health, and hydration. By incorporating fitness and nutritious food choices into your lifestyle, you can enhance your overall health and increase your chances of living a longer, healthier life.|
|
|
Herbs Pro – Online retailer of health products and supplements
Click Here to Visit Herbs Pro's Online Store
|Nutritional supplements, including vitamins, minerals, herbal extracts, and probiotics, can all help fill nutrient gaps and support overall health. These supplements can address specific nutritional needs, support immune function, promote bone health, and provide targeted support for various bodily systems.|
|
|
Chelsea Green Publishing – Online retailer and publisher of books and audiobooks
Click Here to visit the Chelsea Green Publishing Website
|Books relevant to longevity can cover a wide range of topics including aging and longevity, nutrition and diet, exercise and fitness, mindset and mental well-being, healthy lifestyle practices, disease prevention and management, genetics and epigenetics, habits of centenarians, wellness and holistic health, and healthy aging. There are books that provide insights into the science of aging, the impact of nutrition and exercise on longevity, strategies for maintaining mental and emotional well-being, preventive measures for age-related diseases, genetic influences on aging, lessons from long-lived populations, holistic approaches to health, and practical guidance for embracing the aging process. They offer a wealth of knowledge, evidence-based information, and practical tips to help individuals make informed choices and adopt practices that can support a long and healthy life.| | Emerging Technologies |
Pictured: Dolores Grijalva a student in MiraCosta College's Biomanufacturing Bachelor's Degree ... [+] program, the first bachelor's degree of its kind at a community college. Mike Fino, MiraCosta College
A university isn’t the only place to start a career in biotechnology—community colleges are stepping up to respond workforce needs through new bachelor’s degrees programs. Research supported by Joyce Foundation and Ascendium Education Group at the think-tank New America found that twenty-five U.S. states allow community colleges to offer bachelor’s degrees, a departure from their historical roots offering associate degrees that lead to transfer to a 4-year university where students would complete a bachelor’s degree. This development is helping community colleges meet the biotechnology industry’s manufacturing needs in a subfield called biomanufacturing. Most community college baccalaureates are offered in traditional fields such as business, education, and healthcare, but MiraCosta College in San Diego has launched the nation’s very first community college biomanufacturing bachelor’s degree to meet the workforce needs of Fortune 500 biotechnology giants like Pfizer, Abbott Laboratories, and Thermo Fisher Scientific. “Most university degree programs focus on the research side. We wanted to offer a degree in the production side which didn’t exist anywhere else, and, locally, there was a need in the industry,” Mike Fino, MiraCosta’s Dean of Mathematics and Sciences, who spearheaded the program, told me. MiraCosta’s degree launched in 2014 after California Governor Jerry Brown signed Senate Bill 850 to permit a pilot cohort of community colleges to offer bachelor’s degrees provided they addressed a concrete need and weren’t competing with university programs. To avoid conflict, Fino asked nearby California State University-San Marcos if they would modify their own biotechnology bachelor’s degree program to meet the manufacturing needs of employers. If so, MiraCosta would back down, but the university felt that its expertise was better suited for engineering and science training in the industry, freeing up MiraCosta College to proceed. MiraCosta College surveyed alumni who earned associate degrees in biomanufacturing to see if graduates saw value in an advanced credential. They surveyed employers who hired MiraCosta graduates to ensure that the bachelor's degree would be useful–the answer from both groups was, “yes.” So far, nearly a hundred graduates have landed jobs with sixty biotechnology employers in the region. The program boasts an impressive 93 percent completion rate, and employers have put skin in the game too. Genentech has donated equipment and student scholarships. ThermoFisher and MilliporeSigma donated equipment, supplies and host interns. Many major biomaufacturing employers sit on the program’s advisory board. The degree is also diversifying the field. According to Biocom California, California’s life science sector has a stubborn challenge with diversity, but 62 percent of MiraCosta’s graduates are female and 64 percent non-white. Those numbers beat national averages of 49 percent female and 38 percent non-white workers in biotechnology industry according to a Biotechnology Industry Organization report published in June. Despite stubborn diversity challenges in the region's life science sector, MiraCosta College's ... [+] biomanufacturing degree has a 93% completion rate with a diverse graduate population that exceeds that of national demographic averages according to data from the Biotechnology Industry Association. Pictured: Kendra Williamson a student in MiraCosta College.MiraCosta College
Why Community College Bachelor’s Degree? Community colleges typically meet biomanufacturing workforce needs through non-degree credentials or associate degree programs, focused on the technician workforce, while universities have met research and engineering needs through undergraduate and graduate degree programs.
So why do biotech companies want community colleges to offer bachelor’s degrees? MiraCosta already had a successful associate degree program in biomanufacturing, but Fino told me that local employers were looking for manufacturing workers to have a stronger theoretical understanding of biomanufacturing. That way, when something inevitably needs correcting on the factory floor, workers can independently assess and fix the problem. “[MiraCosta]’s program is very different from university programs. We have found that universities are not as willing to flex. Not as willing to hear industry and what they’re looking for. Universities have some advantages like a lot more money, but they choose to place more emphasis on their traditional structure,” Kathleen Bigelow-Houck, Senior Director and Head of Quality Assurance Operations at Genentech, told me in an interview. Bigelow-Houch, who serves on MiraCosta’s advisory committee and has taught in MiraCosta’s program herself, said MiraCosta’s hands-on approach, willingness to co-create the program with industry partners, and invite guest instructors from industry is what makes community colleges and their new bachelor’s degrees well positioned to meet biotech workforce needs. MiraCosta’s ability to diversify the industry was also a key selling point for Genentech which has hired dozens of graduates. A mother of four, Kellee Ramirez completed her degree two years ago and is already a Quality Control Manager at Gallant. Like Bigelow-Houck, Ramirez felt that community college bachelor’s degree offered something unique compared to university options. MiraCosta's class timings and frequency were designed for working adults in mind. The smaller class sizes felt more intimate compared to the university setting, on-site and the cost was much more managing for a student with caregiving responsibilities. Ramirez also appreciated how far MiraCosta would go to make underrepresented students feel welcome, "They told us that someone has to get that job, why not you? I wouldn’t be where I am today if it weren’t for MiraCosta,” she told me.
Kellee's former employer, Cellipont, was so impressed by the quality of MiraCosta’s bachelor’s degree graduates that the company changed its recruiting strategy to hire more graduates.25 out of 50 U.S. states have authorized community colleges to offer bachelor's degrees. Florida, ... [+] Washington, and Georgia have the most "CCB" programs.New AmericaMore than half of CCB programs are in STEM, health care, or nursing majors. The vast majority of ... [+] bachelor's degrees offered by community colleges are designed as bachelor's of science, applied science, or applied technology, signaling their importance to meet STEM workforce needs. New America
White House Promotes New Pathways to Emerging Technology Jobs in Biotechnology
The White House has stepped up its effort to promote workforce training for emerging technology jobs like in biotechnology, and more community colleges could be empowered to follow in MiraCosta’s footsteps.
In September, President Biden signed an Executive Order to launch a National Biotechnology and Biomanufacturing Initiative aimed at supporting the industry with an explicit emphasis on expanding community college-level workforce pathways into biotechnology jobs. MiraCosta isn’t the only community college using degrees to respond to the emerging technology workforce. MiraCosta’s program is now being replicated by other California community colleges including Solano College with others expected to be approved shortly. Speaking on a panel at Sigma Xi's iFore Conference last weekend in Virginia, Dominique Carter, Assistant Director for Agricultural Sciences, Innovation, and Workforce in the White House Office of Science and Technology Policy, cited another example of St. Louis Community College responding to St. Louis-based MilliporeSigma's workforce needs by launching a biotechnology associate degree program.
Miami Dade College is launching a new bachelor’s degree program in applied AI which follows several successful AI programs at the associate degree and certificate level.President Biden Joe Biden visits a biotechnology class at Miami Dade College, another community ... [+] college offering bachelor's degree programs that leads to jobs in emerging technology fields.Getty ImagesPictured left to right: Kellee Ramirez, a graduate of MiraCosta College; Kathleen Bigelow-Houck, an ... [+] employer at Genentech and part-time MiraCosta instructor; Mike Fino, dean of math and sciences at MiraCostaMike Fino, Last month, the Biden Administration launched the Experiential Learning for Emerging and Novel Technologies (ExLENT) program to expand emerging technology workforce training through internships, apprenticeships, and co-op experiences, including in biotechnology. During an informational webinar NSF officials said that community colleges are not just eligible but encouraged to apply for the $30 million available in grant funding.
MiraCosta itself is part of the U.S. Commerce Department-sponsored National Institute for Innovation in Manufacturing Biopharmaceuticals (NIIMBL), part of the federally initiated network of ManufacturingUSA Institutes which promote R&D and workforce innovation across manufacturing.
So far MiraCosta’s degree has filled traditional biomanufacturing jobs with students obtaining positions as manufacturing associates, quality technicians, and biological production technicians and some alumni have even gotten promoted into more senior roles like quality control managers, product engineers, and manufacturing team leads. However, the program is also expanding pathways for new emerging jobs born out of emerging technologies like gene editing. Graduates have found work in cell and gene therapy jobs at employers like Kite Pharma. Emerging trends in biotechnology is a required class for the degree, “that was a class that all of us students were really excited for,” Ramirez told me—the class is used as a reference for Ramirez’s employer to stay current on the latest in the industry. “I fully believe that MiraCosta students could fill the gene therapy niche of the industry,” Bigelow-Houck affirmed. As the Biden Administration seeks to boost economic opportunity in the biotechnology sector, community college degree programs might be exactly what employers need. | Emerging Technologies |
UAE’s Ras Al Khaimah to Launch Free Zone for Digital Asset Companies
The Emirate of Ras Al Khaimah in the UAE is set to launch a free zone for digital and virtual asset companies.
The RAK Digital Assets Oasis (RAK DAO) will be a “purpose-built, innovation-enabling free zone for non-regulated activities in the virtual assets sector.”
The free zone will be dedicated to digital and virtual assets service providers in emerging technologies, such as the metaverse, blockchain, utility tokens, virtual asset wallets, non-fungible tokens (NFTs), decentralized autonomous organizations (DAOs), decentralized applications (DApps), and other Web3-related businesses.
Free zones in UAE
Free zones in the UAE are areas where entrepreneurs have 100% ownership of their businesses and have different regulatory frameworks and tax schemes, except for the UAE’s criminal law.
The new free zone adds to the more than 40 multidisciplinary free zones in the country that have attracted numerous crypto, blockchain, and Web3 firms, including the Dubai Multi Commodities Centre (DMCC), Dubai International Financial Centre (DIFC), and Abu Dhabi Global Market (ADGM).
RAK DAO’s approach
Dubai-based crypto lawyer Irina Heaver thinks RAK DAO will start with non-financial activities first, then may introduce financial activities at a later stage. She adds that entrepreneurs will be able to launch a crypto exchange later, which is an ESCA-regulated financial activity.
The Securities and Commodities Authority (SCA) is one of the UAE’s primary financial regulators. According to the country’s latest federal-level virtual assets law, the SCA has authority throughout the Emirates, except for the financial free zones, the ADGM and DIFC, and others, which have their financial regulators.
UAE’s regulatory framework
The UAE has more friendly regulations for crypto firms, and the country has positioned itself as a forward-thinking hub for such firms. In March 2022, Dubai unveiled its virtual assets law and the Virtual Asset Regulatory Authority to protect investors and provide standards for the digital asset industry.
In September 2022, the Financial Services Regulatory Authority, the regulator of the ADGM, published guiding principles on its approach to regulating and overseeing the new asset class and its service providers. | Emerging Technologies |
Global Next Generation Cancer Diagnostics Market
Dublin, May 10, 2023 (GLOBE NEWSWIRE) -- The "Next Generation Cancer Diagnostics: Technologies and Global Markets" report has been added to ResearchAndMarkets.com's offering.
The base year for market data is 2021, with historical data provided for 2020 and 2019 and forecast data provided through 2027. Historical, base year, and forecast data are provided for each market segment of the report.
An increasing number of cancer cases globally is one of the significant factors contributing to industry growth during the forecast period. Technological advancements in diagnostic tests are further expected to fuel industry growth. Moreover, supportive government initiatives and rising awareness are additional factors anticipated to boost growth during the forecast period.
For instance, the Biden-Harris administration has set a goal of decreasing the cancer mortality rate by 50% over the next 25 years and enhancing the knowledge surrounding people living with and surviving tumors.
Cancer is one of the leading causes of death worldwide, and the prevalence of the disease has been escalating at an alarming rate. Therefore, healthcare professionals are focusing on developing effective screening and treatment solutions to check prevalence levels. Early screening increases the success rate of treatment regimens. As a result, healthcare agencies and market players, through various awareness programs, are promoting routine check-ups and screenings. For instance, in March 2022, HHS announced funding of $5 million to improve equity in cancer screening at health centers.
The global oncology burden is projected to reach 28.4 million cases in 2040, a 47% growth from 2020. Thus, a rise in the incidence is anticipated to boost the adoption of cancer diagnostic products.
In April 2022, the Precision Cancer Consortium (PCC) collaborated with pharmaceutical companies by permitting access to comprehensive testing for all cancer patients globally. The PCC drives diverse initiatives to grow patient access to precision diagnostics using wide-ranging genomic testing, including next generation sequencing (NGS). The founding members of PCC include Novartis, Bayer, Roche, and GlaxoSmithKline.
Advanced diagnostics for cancer represent a significant market opportunity for life sciences companies. Many cancer types are on the rise as the aging populations of many countries continue to grow. As a result, there is an increasing demand for noninvasive diagnostic assays that can detect cancers earlier, molecularly subtype tumors to guide therapy decisions and monitor cancer recurrence in treated individuals.
Cancer remains the second leading cause of death worldwide despite advances in treatment. Cancer takes a tremendous toll on patients, families, and society. One pressing need in cancer diagnostics is earlier-stage identification, identifying cancer before it has spread to other body parts. Several companies are using advanced diagnostic platforms to develop and validate assays that detect cancer earlier to meet this need. Sanomics, Prenetics, Guardant, Thrive Earlier Detection, AnPac BioMedical, and Grail are notable examples.
A second pressing need in cancer is a more accurate classification of suspicious lesions or nodules (malignant or benign). Correct type leads to better treatment decisions and fewer unnecessary, invasive biopsies. In the case of lung cancer, peripheral lung nodules are very difficult to biopsy, deep within the small branches of the lung, often beyond the reach of the bronchoscope. Needle biopsy is problematic, carrying the risk of lung collapse and infection. New, non-invasive blood-based tests are needed to assess nodules.
Key Attributes:
Report Attribute
Details
No. of Pages
166
Forecast Period
2022 - 2027
Estimated Market Value (USD) in 2022
$8.7 Billion
Forecasted Market Value (USD) by 2027
$15 Billion
Compound Annual Growth Rate
11.5%
Regions Covered
Global
Report Includes
15 data tables and 80 additional tables
An overview of the global market and technologies for next generation cancer diagnostics
Estimation of the market size and analyses of global market trends, with data from 2021, estimates for 2022 and projections of compound annual growth rates (CAGRs) through 2027
Highlights of the current and future market potential and quantification of next generation cancer diagnostics market based on type, application, and region
Description of arrays and microfluidics (LOAC) technologies, multiplex conventional technologies, next generation sequencing technology and polymerase chain reaction (PCR) technology
Analysis of underlying technological, environmental, legal/regulatory, and political trends that may influence the size and nature of the market
Coverage of the key initiatives and programs related to the next generation cancer diagnostics market
Market share analysis of the key companies of the industry and coverage of their proprietary technologies, strategic alliances, and other key market strategies
Company profiles of major players within the industry, including Abbott Laboratories, Illumina Inc., Becton, Dickinson and Co., and Illumina Inc.
Key Topics Covered:
Chapter 1 Introduction
Chapter 2 Summary and Highlights
Chapter 3 Market and Technology Background
3.1 Overview
3.2 Next Generation Sequencing in Personalized Medicine
3.2.1 Cost-Effective
3.2.2 Samples Required
3.3 NGS in Clinical Oncology
3.3.1 Accurate Detection of Genetic Mutations
3.3.2 Precision Diagnostics
3.3.3 Tumor Genomic Profiling
3.4 Large-Scale Initiatives and Consortia
3.5 Liquid Biopsy Technologies
3.6 Liquid Biopsy as a Market-Driving Force
3.6.1 Key Trends
3.7 Industry
3.8 Diagnostics Overview
3.9 Arrays and Microfluidics (Loac) Technologies
3.9.1 Dna Microarrays
3.9.2 Protein Microarrays
3.9.3 Microfluidics
3.10 Multiplex Conventional Technologies
3.11 Polymerase Chain Reaction (Pcr) Technology
Chapter 4 Cancer Diagnostics Market
4.1 Forces Driving Growth
4.2 Cancer Markets
4.3 Market for Next-Generation Cancer Diagnostics by Cancer Site
4.3.1 Bladder Cancer
4.3.2 Brain Cancer
4.3.3 Breast Cancer
4.4 Market for Next-Generation Cancer Diagnostics by Purpose of Analysis
4.4.1 Screening/Early Detection Market
4.4.2 Diagnostics Market
4.4.3 Therapy Guidance Market
4.4.4 Monitoring Market
4.5 Market for Next-Generation Cancer Diagnostics by Test Platform
4.5.1 Pcr Test Platform
4.5.2 NGS Test Platform
4.5.3 Array/Microfluidics Test Platform
4.5.4 Cells And/Or Ev Capture Test Platform
4.5.5 Multiplex Conventional Test Platform
4.6 Market for Test Platforms by Cancer Site
4.6.1 Pcr Test Platform
4.6.2 NGS Test Platform
4.7 Market by Diagnostic Segment
4.7.1 Screening/Early Detection Market
4.7.2 Diagnostics Market
4.7.3 Monitoring Market
4.7.4 Therapy Guidance Market
4.7.5 Breast Cancer Diagnostics
4.7.6 Digestive Cancer Diagnostics
4.7.7 Respiratory and Skin Cancer Diagnostics
Chapter 5 Market Breakdown Application by Cancer Site
5.1 Overview
5.2 Bladder Cancer
5.3 Brain Cancer
5.3.1 Types of Brain Cancer
5.4 Breast Cancer
5.4.1 Risk
5.4.2 Breast Cancer Screening
5.4.3 Prognosis and Pharmacogenetics Tests
5.4.4 Breast Cancer Mdx Platforms
5.4.5 Status of Next-Generation Breast Cancer Tests
5.4.6 Treatment
5.5 Gynecologic Cancers
5.5.1 Cervical Cancer
5.5.2 Ovarian Cancer
5.6 Colorectal Cancer
5.6.1 Conventional Colorectal Cancer Screening Tests
5.6.2 Next-Generation Colorectal Cancer Diagnostic Tests
5.7 Cancer Unknown Primary
5.8 Gastric Cancer
5.9 Kidney Cancer
5.10 Hematologic Tests: Leukemia and Myeloma
5.11 Liver Cancer
5.12 Lung Cancer
5.13 Hematologic Tests: Lymphomas
5.14 Melanoma
5.15 Pan-Cancer
5.16 Prostate Cancer
5.17 Thyroid Cancer
Chapter 6 Emerging Technologies
6.1 Overview
6.2 Population Sequencing Programs
6.3 Introduction
6.4 Crispr
Chapter 7 Evaluation of the Market Based on Geographic Region
7.1 Overview of the Geographical Distribution of the Market
7.2 Overview
7.3 North America
7.4 Europe
7.5 Asia-Pacific
7.6 Rest of the World
Chapter 8 Patents
Chapter 9 Company Profiles
Abbott Laboratories
Abbvie Inc.
Advanced Cell Diagnostics Inc.
Agilent Technologies
Artivion Inc.
Becton, Dickinson and Co.
Benitec Biopharma Ltd.
Caladrius Biosciences
Celgene Corp.
Commence Bio
Depuy Synthes (J&J)
Dragerwerk AG & Co. Kgaa
Epic Sciences Inc.
Genexine
Geron
Illumina Inc.
Immunocellular Therapeutics
Inex Innovations Exchange Pte. Ltd.
Inivata Ltd.
Interpace Diagnostics LLC
Invivoscribe Inc.
Medtronic plc
Merck Kgaa
Novartis AG
Oncocyte Corp.
Oncomed Pharmaceuticals Inc.
Pluristem Therapeutics Inc.
Silicon Biosystems S.P.A
Sphere Fluidics Ltd.
Thermo Fisher Scientific
Vitatex Inc.
For more information about this report visit https://www.researchandmarkets.com/r/blo82d
About ResearchAndMarkets.com
ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.
Attachment
CONTACT: CONTACT: ResearchAndMarkets.com Laura Wood,Senior Press Manager [email protected] For E.S.T Office Hours Call 1-917-300-0470 For U.S./ CAN Toll Free Call 1-800-526-8630 For GMT Office Hours Call +353-1-416-8900 | Emerging Technologies |
The US defense industry is attempting to regear itself in preparation for potential large-scale military conflicts, waking up from a post-Cold War investment hiatus during which counterinsurgency missions topped the Pentagon's agenda. Simmering tension in the Taiwan Strait and evolution in defense technology is also testing Washington's ability to bolster supply chain resilience while reinforcing military resilience in the Indo-Pacific region. Against this backdrop, we will see how the US seeks to strike a delicate balance between re-shoring and ally-shoring.
Chipmakers factoring in wartime supply chain disruptions
On April 25, Project 2019 Institute, a US think tank founded by former US Assistant Secretary of Defence Randall Schriver, and the US-Taiwan Business Council held a panel discussion on the potential economic impact for the US of a semiconductor supply chain disruption involving Taiwan.
According to Richard Thurston, a participant and a former TSMC senior vice president and general counsel who retired in 2014, the panel discussed many risk factors. Among them was the potential of war. "We looked at the key industry sectors and what impact would degrees of loss due to risk elements have on the US and Taiwan economies," Thurston told DIGITIMES Asia. According to him, the discussion also touched on many facets, including the competition between Taiwan's ruling and opposition parties.
Patrick Wilson, currently MediaTek's vice president of governmental relations, was also invited to the discussion. Though Wilson was eventually unable to partake due to a scheduling conflict, the MediaTek VP indicated to DIGITIMES Asia that the chipmaker's focus on serving customers means that it has to prepare for "any interruptions or problems."
"Our customers in China, the US and elsewhere would be disappointed if we weren't focused on being as prepared as we can be for any future crisis," said Wilson. "As we have learned over the last three years of global pandemic, you have to stay very close to your customers and understand their worries to keep up with the challenges."
An AUKUS-like framework involving Taiwan?
To offset the risks of a potential war across the Taiwan Strait, Washington's chip supply resilience efforts brought TSMC to set up a 5nm fab in the state of Arizona, with 3nm process technology planned to be introduced in 2026. Though the plan drew criticisms of weakening and marginalizing Taiwan's economic viability, thus inadvertently delivering to China one of its goals, the strained capacity of the domestic arms industry, the needs to shore up allies' defense in the Indo-Pacific, and by extension securing US interests in the region has also led Washington to foster regional security frameworks, recently exemplified by AUKUS, that transcend military cooperation into the realms of technology cooperation.
AUKUS, for example, saw the US sharing nuclear propulsion technology with Australia to be used in its future submarines, in addition to trilateral cooperation between the US, UK, and Australia on hypersonic weapon development and other efforts to improve joint capabilities and interoperability in AI, cybersecurity, and quantum technologies.
Though a regional framework like the so-called "Chip 4" is slowly brewing, it remains to be seen if it would pave the way for an AUKUS-like formation involving the US and Taiwan. Nevertheless, defense technology cooperation between the two countries is underway. As reported by Nikkei Asia in early April, delegates from around 25 US defense contractors are planning to visit Taiwan in May to discuss the joint production of drones and ammunition. The event would mark the first large group of US envoys focused on the defense industry to visit Taiwan since 2019, but is also an indicator that US defense contractors are "struggling to keep up with obligations at home and abroad," according to Nikkei Asia.
Space - the decisive frontier in need of greater defense cooperation
Talking to DIGITIMES Asia, a Taiwan-based drone and satellite module supplier that has made it into the international arms supply chain indicated that the Ukraine war has certainly boosted the demand for drone defense as well as a need to develop Starlink-like capabilities, and some US distributors have sought to acquire anti-drone solutions from them. Another Taiwan-based defense contractor specializing in RF solutions remains unsure if the upcoming visit from the US will deal with the satellite sector.
Nonetheless, Ukraine's experience has already brought Taiwan to recognize the importance of satellite internet constellations during armed conflict. Since last August, Taiwan's Ministry of Digital Affairs has initiated a project leveraging emerging technologies to strengthen communication network resilience during times of emergency or war. Speaking to DIGITIMES Asia, Audrey Tang, the Minister of Digital Affairs, explained that the plan includes satellite communications, but will also involve other technologies such as decentralized communications.
Referring to the satellite communication capability, Tang said that the ministry is conducting proof-of-concept (POC) trials to enhance cooperation with providers. "The locations of POC trials and partners involved will be revealed gradually after applications have been reviewed and selected projects have been determined," said the minister, who also indicated that how this communication system may be integrated into existing systems is still under discussion.
At a recent hearing before the US-China Economic and Security Review Commission, Kevin Pollpeter, Senior Research Scientist at the Center for Naval Analysis, highlighted the role of space in US-China military competition. In his testimony, Pollpeter pointed out that the Chinese military has recognized that long-range power projection, especially missiles, requires space-based command, control, communications computers, intelligence, surveillance, and reconnaissance (C4ISR) capabilities.
With more than 500 operational satellites in orbit, China already has the second-largest fleet of satellites in orbit behind the US, said Pollpeter. Above all, he drew attention to an emerging "US-China reconnaissance-strike competition" involving missiles and space technologies as missile power replaces air power as the determining factor in warfare.
As the recent chip shortage elevates semiconductor supply chain to a national security issue, driving home the issue of reshoring key US manufacturing capacities and capabilities, the capacity gap also makes ally-shoring an inevitable strategy to sustain US commitment to its allies and foreign policy goals as warfare quickly evolves. Alley-shoring has long been a terminology featured high on Washington's supply chain strategy, but curiously amiss in its Taiwan narratives. Now, all eyes are on the region. | Emerging Technologies |
Microsoft and GM deal means your next car might talk, lie, gaslight and manipulate you
'ChatGPT is going to be in everything' says automaker
Thanks to a partnership struck with Microsoft in 2021 on the commercialization of self-driving vehicles, General Motors is working to bring a "ChatGPT-like" voice assistant to its cars.
Considering the some of the dark and twisted behavior displayed by the OpenAI technology during tests of its upcoming Bing search engine integration, we have to assume it has no part to play in the "self-driving" portion of GM's plans – otherwise we fear passengers could find themselves "self-driven" off a cliff.
Still, details are scant for now. GM's vice president of software defined vehicle and operating system, Scott Miller, let slip to news site Semafor "that the company is developing an AI assistant" claimed to "push things beyond the simple voice commands available in today's cars."
In a couple of examples of how it could be used, it was said that a driver could ask the system how to change a flat tire, and receive voice instructions as well as a visual guide on the vehicle's interior display. Or, if a diagnostic light pinged on the dashboard, the driver could ask the assistant whether it needed immediate attention.
It was suggested that the system could then book in a service at an appropriate mechanic, which faintly echoes patents from Ford for a car that drives itself away if you don't keep up with payments among other use cases.
GM likely can't believe its luck to have the tech trend du jour hovering dangerously close to its vehicles, having entered a "preferred cloud" deal with Microsoft Azure in 2021. Microsoft has since poured billions into OpenAI, the company that developed the GPT-branded large language models, paving the way to collaboration between the two.
- OpenAI opens ChatGPT floodgates with dirt-cheap API
- Microsoft injects AI search into Bing, Edge, Skype apps
- Sure, Microsoft, let's put ChatGPT in control of robots
- Microsoft's new AI BingBot berates users and can't get its facts straight
Redmond has wasted no time spraying the buzzword all over its software and cloud computing portfolio – including Bing, Teams, and Windows 11 – hence why it is popping up in GM now. The automaker is said to be working on "adding another, more car-specific layer on top of the OpenAI models," but Miller wouldn't be drawn on details about which GM would be using nor whether if it had a name (though Semafor suggested ChatGMC, in reference to GM's truck brand).
Somewhat alarmingly, however, Miller told Reuters: "ChatGPT is going to be in everything."
A spokesperson added: "This shift is not just about one single capability like the evolution of voice commands, but instead means that customers can expect their future vehicles to be far more capable and fresh overall when it comes to emerging technologies."
The most immediate parallel is probably Knight Rider, the '80s TV series starring David Hasselhoff as a crime-fighting billionaire with a talking supercar, KITT. We all thought that was cool at the time, but now AI (or an approximation of it) is increasingly becoming a tangible field, the novelty is wearing off.
Let us hope that the deranged tendencies of OpenAI's tech have been tamed before GM puts it anywhere near its vehicles. ® | Emerging Technologies |
A new report warns that the US may lose its technological edge over China by 2030 if it doesn’t step up on strategic sectors critical to maintaining its advantages. The report, titled “Mid-Decade Challenges to National Competitiveness,” was released this month by the congressionally-mandated National Security Commission on Artificial Intelligence (NSCAI), an independent commission established in August 2018. The commission is tasked to “consider the methods and means necessary to advance the development of artificial intelligence, machine learning and associated technologies to comprehensively address the national security and defense needs of the United States.” The report begins by painting a stark picture if the US loses its technological competition with China. In that scenario, China comes to dominate the global economy and earn trillions of dollars in revenue through the development of next-generation technologies it uses for global political leverage. It also claims that China will use its success to justify and export its authoritarian system, with its digital platforms, surveillance technology and digital payment infrastructure used to undermine democracies, support China’s political objectives, target perceived as threatening individuals and refine its propaganda. The report also outlines the various challenges the US faces in restoring its technological competitiveness. These include restoring US techno-industrial capacity; integrating US democratic values in AI governance; developing techno-industrial strengths within a cooperative alliance agenda; translating technological advantages into military advantages; mastering emerging technologies to maintain information advantages over near-peer adversaries; and the lack of a hub to coordinate US strengths across commercial, academic and government sectors for international competition. US President Joe Biden wants more advanced semiconductors produced in America. Image: Twitter The report emphasizes the need for a US master plan that draws on the strengths of its public-private system using a whole-of-economy effort but without mimicking China’s fused, state-centric, authoritarian model. In the same vein, it calls for a techno-industrial strategy that encourages technological diffusion from labs to markets and, at the same time, fills in economic and national security gaps. Finally, the report stresses that the US should govern AI systems wisely, shaping their development and use through the full range of regulatory and non-regulatory governance mechanisms. The report states that the US and its allies must provide technological alternatives to those of China, with these alternatives demonstrating more promise for success and prosperity than those of the closed systems and technologies developed by authoritarian competitors. In terms of defense, the report calls for developing an “Offset-X” strategy based on the US’ persistent asymmetric strengths and envisions the deployment of capabilities that China will struggle to match or copy. It also says that the US intelligence community’s ability to provide competitive advantages to policymakers will depend on its mastery of emerging technologies, such as AI, to integrate increasingly diverse all-domain information. The report also identifies the technologies the US can use to drive its future competitiveness against China and other rivals. These technologies include AI, computing power, networks, biotechnology, energy generation and storage and smart manufacturing. It concludes by saying, “Whether the US can rise to the occasion and harness the promise of the pending wave of revolutionary technologies will determine who wins the 21st century.” The report’s language is consistent with the Biden administration’s framing of intensifying US-China rivalry as a struggle between democracy and authoritarianism. However, such framing may be an attempt to apply ideological cover to overt and desperate attempts to stop the rise of a peer competitor. A paramilitary policeman gestures under a pole with security cameras, US and China’s flags near the Forbidden City ahead of the visit by US President Donald Trump to Beijing, China November 8, 2017. Photo: Agencies In a 2021 article for East Asia Forum, Professor Baogang He criticizes this framing as an outdated and myopic Cold War approach that masks the US’ denial of acknowledging China’s promotion of the right to development. Furthermore, from an ideological standpoint, He points out the supposed superficialities of the Biden administration’s framing of the US-China rivalry as an autocracy versus democracy struggle, saying that China’s socialist approach is aimed at preventing domestic unrest rather than promoting ideological loyalty. He also notes China has helped democratic states such as Italy and Greece avoid financial collapse while pointing out that China’s growing partnership with Iran and Russia is driven by US pressure rather than shared ideology. He also points out that by overplaying the autocracy versus democracy narrative, the US risks taking an exclusionary approach that alienates states that do not wish to choose between the US and China. This potentially false dichotomous framing precludes US technological cooperation with China, which may have serious long-term consequences. The relationship between the two superpowers is one of the main factors driving the course of events in contemporary times. Furthermore, the report’s recommendation of an Offset-X military strategy against China may be an implicit admission that the US has already lost some critical advantages over its Chinese rival. Previous offset strategies framed by the US and NATO sought to nullify the Soviet Union’s numerical and conventional military advantages through nuclear weapons and other advanced technologies. Such framing may be accurate in some areas, such as hypersonic weapons, where struggling US efforts to develop such have led to it enlisting help from the private sector or assuming a denialist position about losing its lead to China in this area. Despite potentially losing out on some military aspects, the US may be overemphasizing the whole military aspect of its rivalry with China within its autocracy versus democracy narrative, with unwarranted spillovers in technology areas. In a 2021 article for Foreign Policy, Michael Swaine notes that in military terms China cannot destroy the US without destroying itself. He also points out that the US still maintains overwhelming nuclear and conventional forces compared to China and that the greater danger is for the US to militarily overreact to China’s actions and policies in Asia, thus alienating critical allies. While the NSCAI report voices concerns about China shaping the global technological landscape to the detriment of the US, such fears may still be overblown. Huawei is helping China export its technology and technological standards worldwide. Photo: AFP / Nicolas Asfouri Swaine notes that while some observers claim that China is setting standards in critical technologies and installing its hardware around the world, standard-setting is a highly-competitive process and that the US, Europe and Asia hold major portions of the standards and the standard-essential patents that underpin the global technology ecosystem. He argues that there is no chance China can single-handedly impose its technical standards and that if China would forcibly set its standards, the result would be a fragmented technology ecosystem that leaves all countries impoverished and does little to strengthen China’s power. In exporting standards and, by extension, its political system, Swaine argues that China aims for developing countries, not industrialized democracies such as the US. Its goal, Swaine says, is for the former to copy some aspects of China’s approach to legitimize its system internationally and domestically rather than displacing US power. In sum, the NSCAI report potentially overlooks the Biden administration’s stance of “we’ll cooperate wherever we can; we’ll contest where we must” by overplaying the autocracy versus democracy narrative, overemphasizing military aspects, ignoring an inclusive approach that leverages China’s advances for shared goals and the betterment of the human condition through technology. | Emerging Technologies |
VentureBeat presents: AI Unleashed - An exclusive executive event for enterprise data leaders. Network and learn with industry peers. Learn More
A recent deepfake video of First Lady of the United States Jill Biden, where she attacks her own husband President Biden’s political policies, highlights both the powerful speech potential and emerging challenges of advanced synthetic media technologies — especially in light of the pending and sure-to-be divisive 2024 U.S. general election.
Created by filmmaker and producer Kenneth Lurt, the video depicts Jill Biden delivering a speech critical of President Biden’s policy regarding the ongoing Israeli-Palestine and Hamas conflict. Using machine learning techniques, Lurt was able to generate a realistic-sounding voice for Jill Biden delivering remarks attacking the president for supporting airstrikes in Gaza.
The video was posted to X (formerly Twitter) where it has 230,000 views at the time of this article’s publication, and Reddit’s r/Singularity subreddit where it received upwards of 1,500 upvotes, or community endorsements.
“The goal of using AI Jill Biden, was to create something absurd and cinematic enough to get folks to actually engage with the reality of what’s happening in Palestine. The drama of a radical first-lady calling out her own husband and standing up to the American empire — it’s too juicy to look away,” said Lurt in an exclusive interview with VentureBeat.
Event
AI Unleashed
An exclusive invite-only evening of insights and networking, designed for senior enterprise executives overseeing data stacks and strategies.
To create this synthetic voice, Lurt used ElevenLabs, a voice and audio AI focused startup which has trained its models on vast amounts of natural speech to clone voices. By intaking samples of Jill Biden’s authentic voice from interviews and appearances, the AI was able to generate entirely new speech in her voice pattern and cadence.
Beyond the synthetic audio track, Lurt spliced together curated clips from Biden campaign footage, news reports on Palestine, and social media videos of suffering on the ground in Gaza. With selective editing and placement of the AI-generated speech over these real video segments, Lurt was able to craft a superficially plausible narrative.
AI is driving a new era of advertising, activism, and propaganda
The use of AI and deepfake technology in political advertising is increasingly prevalent. Earlier this year, the RNC released an ad depicting generative imagery of a potential future Biden victory in 2024.
A few months later, the Never Back Down PAC launched their million dollar ad buy featuring an AI-generated version of Trump criticizing Gov. Reynolds of Iowa. This ad directly illustrated how synthetic media could be employed to either promote or attack candidates. Then in September 2023, satirist C3PMeme posted a fake video depicting Ron DeSantis announcing his withdrawal from the 2024 presidential race.
Though intended as satire, it showed how easy and convincing deepfakes had become – and the potential for both legitimate political expression as well as deliberate misinformation through manipulated media using emerging technologies.
These examples served as early tests of synthetic campaign advertising that some experts feared could proliferate and intensify misleading information flows in upcoming elections.
Notably, Lurt accomplished this synthesis with readily available and relatively inexpensive AI tools, requiring just a week of work utilizing his editing and filmmaking skills.
While he aimed to leave “breadcrumbs” indicating fiction for the discerning viewer, it could fool casual viewers.
Human effort and creativity remain key
On the flip side, Lurt believes that most AI tools still offer limited quality and that human filmmaking skills are necessary to pull together something convincing.
“Most AI anything is boring and useless because it’s used as a cheap cheat code for creativity, talent, experience, and human passion,” Lurt explained.
He emphasized the pivotal role of post-production and filmmaking experience: “If I took away the script, the post production, the real conflict, and just left the voice saying random things, the project would be nothing.”
As Lurt highlighted: “The Jill Biden video took me a week. Other content has taken me a month. I can tell some AI to generate stuff quickly, but it’s the creative filmmaking that actually makes it feel believable.”
Motivated by disruption
According to Lurt, he wanted to “manifest a slightly better world” and draw widespread attention to the real human suffering occurring in Palestine through provocative and emotionally gripping storytelling.
Specifically, Lurt’s intent was to depict an alternate scenario where a “powerful hero” like Jill Biden would publicly condemn her husband’s policies and the ongoing violence. He hoped this absurd scenario, coupled with real footage of destruction in Gaza, would force viewers to grapple with the harsh realities on the ground in a way that normal reporting had failed to accomplish.
To achieve widespread engagement, Lurt deliberately selected a premise—a dissident speech by the First Lady—that he perceived as too shocking and controversial to ignore. Using modern synthetic media techniques allowed him to actualize this provocative concept in a superficially plausible manner.
Lurt’s project demonstrates that synthetic media holds promise for novel discourse but also introduces challenges regarding truth, trust and accountability that societies must navigate. Regarding concerns over intentional misinformation, Lurt acknowledged both benefits and limits, stating “I hold every concern, and every defense of it, at the same time.”
He reflected that “We’ve been lied into wars plenty of times…that’s way more dangerous than anything I could ever make.” Rather than attributing the problem solely to information quality, Lurt emphasized “the real problem isn’t good or bad information; it’s power, who has it, and how they use it.”
Lurt saw his role more aligned with satirical outlets like The Onion than disinformation campaigns. Ultimately, he acknowledged the challenges that generative content will bring, saying “I think that the concept of a shared reality is pretty much dead…I’m sure there are plenty of bad actors out there.”
Mitigation without censorship
Regulators and advocates have pursued various strategies to curb deepfake threats, though challenges remain. In August, the FEC took a step toward oversight by opening public comment on AI impersonations in political ads. However, Republican Commissioner Dickerson expressed doubts about FEC authority as Bloomberg Law reported, and partisanship may stall comprehensive proposed legislation.
Enterprises too face complex choices around content policies that could limit protected speech. Outright bans risk overreach and are challenging to implement, while inaction leaves workforces vulnerable. Targeted mitigation balancing education and responsibility offers a viable path forward.
Rather than reactionary restrictions, companies could promote media literacy training highlighting technical manipulation signs. Pairing awareness of evolving techniques with skepticism of extraordinary claims empowers nuanced analysis of emerging synthetics without absolutes.
Warning against reliance on initial reactions alone and referencing fact-checkers when evaluating disputed claims instills resilient citizenship habits less prone to provocation. Such training stresses analysis over censorship to achieve resilience lawfully.
Informed participation, not preemptively restrictive stances, must remain the priority in this complex era. Many synthetic content examples still drive alternative perspectives through parody rather than outright deception calling for moderated, not reactionary, governance navigating opportunities and responsibilities in technological evolution.
As Lurt’s case illustrates, state regulation and the FEC’s role remains uncertain without mandates to oversee less regulated groups like PACs. Coordinated multi-stakeholder cooperation currently provides the optimal path mitigating emerging threats systematically without overreach into protected realms of political expression.
Whether one finds Lurt’s tactics appropriate or not, his explanations provide insights into his perspective on using synthetic multimedia to drive impactful political discourse in novel ways. It serves as a case study on both the promise and ethical dilemmas arising from advanced generative technologies.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. | Emerging Technologies |
There is heated competition for market share among GCC telecom operators. The result is a slow, but steady increase in capital expenses and sluggish revenue growth. Operators need to break out of today’s stagnant environment to capitalize on tomorrow’s growth opportunities. A lot can and is being done in market-facing function. In addition, telecom chief technology officers (CTOs) and the technology function can and should play an essential role in this quest.
The fierce competition in mobile broadband is reducing prices and accelerating the shift away from fixed broadband. In Saudi Arabia, for instance, generous fair usage policies with unlimited social media and fixed-wireless data plans are major contributors to declining data yields. Consequently, we calculate that fixed wireless access market grew by approximately 40% since 2021 at the expense of fiber connectivity.
Meanwhile, new technologies and players are intensifying competition. Telecom operators continue to face competitive threats from alternative technologies, such as the virtual network functions offered by hyperscalers (large cloud and tech companies), and unconventional connectivity providers, such as low earth-orbit satellite operators.
The stagnant environment could make it difficult for telecom operators to muster the resources to exploit the region’s growth opportunities. Again, Saudi Arabia provides good examples. We forecast that demand among Saudi consumers for digital services, gaming, media, and consumer devices, including 5G handsets, is expected to grow by double digits over the next three years. We estimate that new fixed broadband infrastructure development could connect 3.5 million new households, a revenue opportunity of SAR 4 billion ($1 billion). Similarly, we believe that demand for business-to-business digital services, including the Internet-of-Things (network of connected devices), cybersecurity, and cloud, could grow by 15% per annum over the coming three years. In addition, more than SAR 1 trillion ($266 billion) in new government spending is forecasted in 2023 which promises to drive demand for business-to-business digital services and mega projects. There are similar trends across the GCC.
Telecom operators can address their current challenges and prepare to capitalize on the looming opportunities for growth in the GCC by reshaping their technology capabilities. To achieve this, leadership teams and especially CTOs should have six priorities.
First, telecom CTOs should re-architect their technology landscape. Operators should accelerate their adoption of configurable and modular cloud-based applications to cut the time to market for core products. They need to simplify and delayer their technology infrastructure to better manage costs, bolster value creation, and focus on service innovation (versus infrastructure management). Zain’s recent sale of its 8,000 telecom towers is one example of how delayering can create value for GCC telecom operators.
Second, telecom CTOs should transform telecom assets into platforms and marketplaces. Operators possess a differentiated wealth of data and services that offer compelling value to business customers. GCC telecoms can intensify efforts to commercialize underutilized assets, as global operators, such as AT&T, have. Similarly, they can use open application programming interfaces (APIs) to create marketplaces that attract API suppliers and application creators.
Third, telecom CTOs should accelerate migration to the cloud. Cloud network functions are no longer a differentiator in the telecom sector, they are essential to its development. The operators that move first to offer cloud solutions will gain a march on their competitors. They will be positioned to partner with large cloud providers on Edge computing and enterprise solutions, such as the partnership between Verizon and Amazon Web Services—Verizon provides fiber and 5G transmission and AWS public cloud for its global customers.
Fourth, telecom CTOs should reinforce technology innovation. Operators need to become technology innovators (versus consumers) if they are to diversify into new digital and technology services outside of their core businesses. Innovation means technology leaders can support and steer investments in critical future technologies. It provides a foundation for innovation hubs and partnerships, such as Orange’s Hello Future initiative, needed to experiment and incubate new services that incorporate emerging technologies, such as drones, the metaverse, and blockchain.
Fifth, telecom CTOs should adopt new and agile delivery models. Technology organizations in telecom need to build solid capabilities to support internal and external customers with solutions. They need to actively engage with business units in the sales process—an imperative if operators are to capture a share of the digital services opportunities from megaprojects. In addition, adopting agile delivery for all core products using design thinking principles can cut delivery times to eight weeks, while doubling organizational agility levels.
Sixth, telecom CTOs should continuously strive for operational excellence. They should consider how they can reshape outsourcing strategies and supplier quality management to operate more efficiently with multiple vendors. Moreover, they should examine how they can better integrate field services and commercial operations to bolster customer experience and capture synergies. Cost effective, end-to-end technology service management is essential to delivering on telecom value creation and growth.
As GCC operators grapple with a highly competitive environment, their technology organizations can play a vital role. Savvy telecom CTOs will take up this challenge by enabling their companies to wrest greater value from today’s slow core growth markets and position themselves to exploit noncore opportunities. | Emerging Technologies |
Commuters are reflected on an advertisement of Reliance Industries' Jio telecoms unit, at a bus stop in Mumbai, India, February 21, 2017. REUTERS/Shailesh AndradeRegister now for FREE unlimited access to Reuters.comNEW DELHI, Aug 1(Reuters) - Reliance Industries' (RELI.NS) Jio emerged as the biggest spender in India's $19 billion 5G spectrum auction, with the top telco player winning airwaves worth $11 billion as the world's no.2 mobile market gears up for the high-speed wireless network.India's government aims to begin the rollout of 5G - which it says can provide data speeds about 10 times faster than 4G - by October this year. Globally, the next generation network is seen as vital for emerging technologies like self-driving cars and artificial intelligence.The country's telecoms minister said that companies bought 71% of a total of 72 GHz spectrum offered in the auction, which concluded on Monday and also saw participation from Jio rivals Bharti Airtel (BRTI.NS) and Vodafone Idea (VODA.NS), as well as a unit of Adani Enterprises Ltd (ADEL.NS).Register now for FREE unlimited access to Reuters.comAirtel and Vodafone won spectrum worth $5.4 billion and $2.4 billion, respectively.Adani, the newest entrant to the auction process, purchased airwaves worth nearly $27 million. The firm does not plan to offer consumer services and is instead aiming to enter the private 5G network space.To aid the cash-strapped telecom sector, the government is allowing auction winners to pay the amounts owed in 20 equal annual installments.Airtel and Vodafone have been under pressure since Jio triggered a price war in 2016 with both reporting losses in recent years, also squeezed by spectrum dues from earlier auctions. However, recent mobile data price hikes have helped Airtel creep back to profit."This spectrum acquisition...has been part of a deliberate strategy to buy the best spectrum assets at a substantially lower relative cost compared to our competition," Airtel Chief Executive Officer Gopal Vittal said in a statement.Without giving specifics, Jio said that it will be ready for the 5G rollout "in the shortest period of time".Register now for FREE unlimited access to Reuters.comReporting by Munsif Vengattil in New Delhi; Additional reporting by Nallur Sethuraman and Chris Thomas in Bengaluru; Editing by Shailesh Kuber, Kirsten DonovanOur Standards: The Thomson Reuters Trust Principles. | Emerging Technologies |
G20 Summit: Biden Lauds India's G20 Presidency; Holds Wide-Ranging Talks With PM Modi
The two leaders agreed that India - US partnership was beneficial not only for the people of the two countries but also for global good.
U.S. President Joe Biden and Prime Minister Narendra Modi on Friday exuded confidence that the outcomes of the G20 Summit will advance the shared goals of accelerating sustainable development, bolstering multilateral cooperation and building consensus around inclusive economic policies to address greatest global challenges.
In their over 50-minute talks, Modi and Biden vowed to "deepen and diversify" the bilateral major defence partnership while welcoming forward movement in India's procurement of 31 drones and joint development of jet engines.
The two leaders also deliberated on cooperation in nuclear energy, critical and emerging technologies such as 6G and artificial intelligence, and ways to fundamentally "reshape" multilateral development banks.
A joint statement issued at the end of the talks said President Biden lauded India's G20 Presidency for further "demonstrating" how the G20 as a forum is delivering important outcomes.
His comments came a day ahead of the leaders of the Group of 20 large economies hold deliberations on pressing global challenges and ways to deal with them at the bloc's annual summit in New Delhi.
Biden arrived in Delhi at around 7 pm in his first visit to India as the US President. He was greeted at the airport with songs and a musical show.
"The leaders reaffirmed their commitment to the G20 and expressed confidence that the outcomes of the G20 Leaders' summit in New Delhi will advance the shared goals of accelerating sustainable development, bolstering multilateral cooperation, and building global consensus around inclusive economic policies to address our greatest common challenges, including fundamentally reshaping and scaling up multilateral development banks," it said.
The joint statement said the "Leaders re-emphasised that the shared values of freedom, democracy, human rights, inclusion, pluralism, and equal opportunities for all citizens are critical to the success our countries enjoy and that these values strengthen our relationship".
The PMO said in a statement that Modi conveyed his appreciation for President Biden's vision and commitment to further strengthen the India-US comprehensive global strategic partnership, which is based on shared democratic values, strategic convergences and strong people-to-people ties.
It said the two leaders commended the progress in implementing the futuristic and wide-ranging outcomes of the prime minister's historic State visit to the US in June 2023, including under the India-US Initiative for Critical and Emerging Technology (iCET).
"The leaders welcomed the sustained momentum in bilateral cooperation , including in the areas of defence, trade, investment, education, health, research, innovation, culture and people-to-people ties," it said.
"President Biden warmly congratulated the prime minister and the people of India on Chandrayaan-3's historic landing near the lunar south pole and highlighted deepening cooperation between the two countries in Space," it said.
The two leaders agreed that India- US partnership was beneficial not only for the people of the two countries but also for global good.
The joint statement said the US President welcomed issuance of a Letter of Request from India's defence ministry to procure 31 MQ-9B remotely piloted aircraft from American defence giant General Atomics.
The two leaders also welcomed completion of the Congressional notification process and the commencement of negotiations for a commercial agreement between GE Aerospace and Hindustan Aeronautical Limited to manufacture GE F-414 jet engines in India.
They "recommitted" to work collaboratively and expeditiously to support the advancement of this unprecedented co-production and technology transfer proposal, the joint statement said.
Modi and Biden also reaffirmed technology's defining role in deepening the India-US strategic partnership and lauded ongoing efforts through the Initiative on Critical and Emerging Technology (iCET) to build open, accessible, secure, and resilient technology ecosystems and value chains, based on mutual confidence and trust.
"The United States and India intend to undertake a midterm review of iCET in September 2023 to continue to drive momentum toward the next annual iCET review, co-led by the National Security Advisors of both countries, in early 2024," the statement said.
In a post on X, Modi said, "Happy to have welcomed @POTUS @JoeBiden to 7, Lok Kalyan Marg. Our meeting was very productive. We were able to discuss numerous topics which will further economic and people-to-people linkages between India and USA. The friendship between our nations will continue to play a great role in furthering global good," he said.
On his part, Biden said on X, "Hello, Delhi! It's great to be in India for this year's G20." The joint statement said Biden reaffirmed his support for a reformed UN Security Council with India as a permanent member, and, in this context, welcomed once again India's candidature for the UNSC non-permanent seat in 2028-29. "The leaders once again underscored the need to strengthen and reform the multilateral system so it may better reflect contemporary realities and remain committed to a comprehensive UN reform agenda, including through expansion in permanent and non-permanent categories of membership of the UN Security Council," the statement said.
The two leaders reiterated their support for building resilient global semiconductor supply chains, noting in this respect a multi-year initiative of Microchip Technology, Inc, to invest approximately $300 million in expanding its research and development presence in India.
They also referred to Advanced Micro Device's announcement to invest $400 million in India over the next five years.
Modi and Biden also welcomed the signing of a Memorandum of Understanding between Bharat 6G Alliance and Next G Alliance, operated by Alliance for Telecommunications Industry Solutions, as a first step towards deepening public-private cooperation between vendors and operators.
The two leaders also reaffirmed the importance of Quad in supporting a free, open, inclusive, and resilient Indo-Pacific.
It said PM Modi looked forward to welcoming President Biden to the next Quad leaders' summit to be hosted by India in 2024.
The two leaders also called on their governments to continue work on transforming the India-US strategic partnership across all dimensions and reiterated their support for building resilient global semiconductor supply chains. | Emerging Technologies |
RIYADH: A second Fourth Industrial Revolution forum in Saudi Arabia, with the theme “Fostering Innovation Through Collective Impact for Sustainable Development,” was held at the King Abdulaziz City for Science and Technology on Monday.
The event was organized by the Saudi Center for the Fourth Industrial Revolution, which is affiliated with the World Economic Forum.
Center officials invited their international network of partners to discuss urgent challenges and game-changing opportunities in innovation, global integration, and public-private partnerships.
Organizers said the event showcased how Saudi Arabia had rapidly become a global leader in new technology.
WEF president, Borge Brende, told the forum how collaboration between the public and private sectors was vital in developing policies that enabled innovation to thrive responsibly.
He said: “The world is now at the halfway point in the 2030 agenda, but just 12 percent of the sustainable development goals are on track.
“There is an opportunity here because we know that innovation can be an important accelerator for the sustainable development goals.”
Brende added that two-thirds of the SDGs could be bolstered by technological innovation and that Saudi Arabia was at the forefront of unlocking the potential.
He noted that since the last forum in 2021, the Kingdom had continued to pioneer innovation and technology in the region and globally as part of its Vision 2030 reform plan.
“Only by having leaders from business and government work together can we maximize the benefit and reduce the risk of threats, and today’s event is a testament that we see this partnership taking place.
“The World Economic Forum remains committed to the ambition of Saudi Arabia. I hope you leverage the discussion today to mobilize action for sustainable and positive impact for us all in line with this ambition,” Brende added.
During his opening speech, Munir Eldesouki, the president of KACST, said the Fourth Industrial Revolution was aiming to blend the physical, digital, and biological worlds and change the very fabric of life.
He added that emerging technologies such as artificial intelligence or quantum computing had changed the world’s perception of possible opportunities and threats.
Eldesouki pointed out that while milestones should be celebrated, people must remain aware of the challenges of disruptive innovations.
He said: “Balancing the vast technological landscape with ethics, governance, and security is our shared responsibility and this mission poses technology as it serves our people, our values, and our future.
“Our collaborations span the globe with focus areas that truly echo Vision 2030 in AI, urban transformation, data policy, and more.”
Minister of Industry and Mineral Resources Bandar Al-Khorayef discussed accelerating the transformation and growth of manufacturing in Saudi Arabia in a session moderated by Basma Al-Buhairan, the managing director of the Center for the Fourth Industrial Revolution.
On what governments could do to accelerate innovation and technology, Al-Khorayef said: “We are definitely financing and investing in innovation and advanced manufacturing.
“On top of this, there are grants that we are offering to certain activities that fit particular criteria in the overall ecosystem in the industrial sector and mining, logistics, and energy.”
Meanwhile, space industry experts discussed global partnerships promoting innovation.
Rayyanah Barnawi, a biomedical researcher and the first female Saudi astronaut, said she was selected to conduct research on the International Space Station almost a year ago.
“With the ambition of the Saudi space agency and its global partnerships, we were able to conduct a historical mission in space. And to me, now I can see that the future is very bright, and the opportunities are endless for pharmaceutical and technological advancements,” she added.
She pointed out that conducting such experiments in space in a specific environment unavailable on Earth was more effective and beneficial.
“A microgravity environment provides more accessibility to the cells we’re working with. Here on Earth, we work with cells in 2-D, but in space, we can observe the cells as they exist in 3-D.
“For that reason, we were able to generate or produce treatment options that are not available here on Earth, as we can study these cells through technologies in a better condition and less contaminated environment,” Barnawi said.
Noor Nugali, assistant editor-in-chief at Arab News, discussed the role of women in innovation with Dr. Einas Al-Eisa, president at Princess Nourah bint Abdulrahman University.
Al-Eisa claimed her institution was the largest women’s university in the world, both in size and the number of students and programs it offered, especially within its five health colleges.
“Currently, within the university, 25 percent of our academic community, both staff and students, are in STEM (science, technology, engineering, and mathematics) colleges, with 15 percent of our students being in health disciplines,” she said.
“It may be surprising for some to know that gender diversity contributes to more novel and highly cited research publications. In a report published by BCG (the Boston Consulting Group), companies with more diverse leadership teams produce more significant innovation revenue.”
On the forum’s sidelines, a memorandum of understanding was signed between the Saudi Centre for the Fourth Industrial Revolution and the Ministry of Transport and Logistics Services.
The MoU aims to promote cooperation by developing policies for managing Fourth Industrial Revolution technologies in the transportation and logistics sector. | Emerging Technologies |
Not only are healthcare bills confusing, but also paper bills can often get lost in the mail or covered up in big piles on counters. Large healthcare organizations, for the most part, offer electronic billing, but that’s not always something solo or small practices can take advantage of.
That is the sweet spot for Collectly, founded by Levon Brutyan and Max Mizotin, which aims to help medical providers more easily collect payments.
One report shows that the amount medical providers collected was about 55% of what they were owed in 2021, down from 76% the year prior.
“Patient payments overall have to be about $480 billion, and considering that the patient responsibility grows at about 12% year over year, this huge number increases as well,” Brutyan told TechCrunch. “The bigger pain point for healthcare organizations is that approximately $200 billion of that never gets collected. Hospitals, for example, run on very tiny margins, and a lot of them are actually running at a loss, so that makes this whole thing even worse.”
Pasadena-based Collectly has proprietary interfaces that integrate with electronic health records and practice management software to enhance patient billing operations.
Through its platform, customers have, on average, been able to increase patient collections for medical group partners by 75%, reduce the “days sales outstanding” to 12 days from between 60 and 90 days, and achieve a 93% patient satisfaction score — important for retention, he added.
We’ve followed Collectly since it launched in 2017 as a digital debt collection startup, and then again later that year after being part of Y Combinator, raising $1.9 million and refocusing on automating and streamlining billing operations as a patient financial engagement company.
Currently, Collectly has engaged over 300,000 patients daily and is growing revenue over 3x year over year. Brutyan touts its growth and the fact that the company “doesn’t have any churn” as reasons why investors were eager to supply the company with capital for additional growth.
The company closed on $29 million Series A in a funding round, led by Sapphire Ventures, that also included YC, Wayfinder Ventures, Burst Capital, Cabra VC and Davidovs VC. The new investment brings the total capital raised to $34.1 million.
Brutyan declined to say what the company’s valuation is with the new round; however, he did say that “the valuation has grown accordingly” and is one of those “rare companies that was running for so long profitably and grew by ourselves. I think that investors appreciate that specifically during these interesting and tough times.”
He intends to use the new funding on technology and product development and to double the team’s size by the end of the year. The company will be rolling out new products around pre-service modules and in-person payments as well.
“We are also looking at emerging technologies, like ChatGPT,” Brutyan said. “AI is highly leveraged, so we are working on that as well to make sure that the patients will receive the best experience, understand their bills and that questions are resolved in a timely manner.” | Emerging Technologies |
Global Financial Crime Compliance Costs For Financial Institutions Over $206 Billion: LexisNexis
Nearly all financial institutions reported rising financial crime compliance costs in the past 12 months.
The worldwide financial crime compliance costs for financial institutions have reached $206.1 billion, with 98% of institutions reporting an increase in such costs in the past 12 months, according to the True Cost of Financial Crime Compliance Report by data and analytics company LexisNexis Risk Solutions.
The cost is comparable to more than 12% of global research and development expenditure and equates to $3.33 per month for each working-age individual in the world.
The report noted that the shifting technological and economic environment has changed the compliance landscape for financial institutions. The shift to digital banking has amplified financial institutions’ exposure to crimes, with more than half of survey respondents reporting a significant increase in financial crimes involving digital payments (59%), cryptocurrencies (58%), and AI technologies (56%).
With regard to cost drivers, 38% of institutions indicated that increasing financial crime regulations and regulatory expectations were the most significant factors driving an increase in financial crime compliance costs.
The findings reflect the perspectives of 1,181 professionals in financial crime compliance from companies across the U.S./Canada, Asia-Pacific, Europe, the Middle East and Africa, and Latin America.
AI Leaves Its Mark
While certain industries are still determining the scale of the influence of artificial intelligence and machine learning, 71% of professionals in financial crime compliance indicated that their organisations are already enhancing data utilisation through advanced analytics. Additionally, 72% confirmed that they employ analytics and AI to enhance their compliance procedures.
However, problems with data quality, data silos, outdated legacy systems, and a lack of internal collaboration can create compliance activity and increase expenditure, the report noted.
EMEA Remains High-Cost Centre For Financial Crime Compliance
The report showed that EMEA financial institutions and their customers continue to incur more substantial expenses for financial crime compliance than those in other regions. The overall cost of financial crime compliance in EMEA surpassed that of the U.S. and Canada by 39.8%.
In contrast, APAC and LATAM are comparatively more cost-effective regions. The financial compliance expenses in APAC amounted to 74.5% of those in the U.S. and Canada, while LATAM's costs are 24.7% in comparison. Globally, 78% of organisations and 80% in EMEA indicated that the intricate network of regulations and sanctions acts as a constraint on their business operations.
EMEA had the highest total cost at $85 billion, and LATAM had the lowest at $15 billion. U.S./Canada had an overall financial crime compliance expense of $61 billion, while APAC’s was $45 billion.
Prioritising Compliance And Customer Experience
The report noted that there are multiple initiatives for financial institutions that add to the ongoing complexity they face in meeting financial crime compliance requirements. However, 85% of financial institutions placed enhancing customer experience at the top of the list of priority initiatives.
A substantial emphasis of financial institutions revolves around optimising the efficiency and efficacy of financial crime compliance concerning payments. Globally, 74% of institutions identified this as a critical or high-priority endeavour.
Financial institutions recognise the role of governance and compliance in ensuring stability, transparency, and ethical conduct. Survey respondents noted that they are focusing on strengthening governance (83%) and meeting regulatory compliance requirements (82%).
Financial institutions are also looking to increase their operational resilience (80%) to better withstand disruptions, recover from challenges, and maintain continuity. They are equally focused on optimising their compliance costs (80%) to improve profitability, remain agile, and gain a competitive advantage.
"Financial institutions are making significant investments to stay compliant with financial crime regulations. Effective collaboration within these institutions is pivotal for enhancing the customer experience while managing these costs," noted Grayson Clarke, senior vice president, LexisNexis Risk Solutions. "Leveraging emerging technologies alongside existing solutions can empower institutions to achieve their objectives and deliver optimal customer outcomes," Clarke added. | Emerging Technologies |
And now the game of submarine hide-and-seek may be approaching the point at which submarines can no longer elude detection and simply disappear. It may come as early as 2050, according to a recent study by the National Security College of the Australian National University, in Canberra. This timing is particularly significant because the enormous costs required to design and build a submarine are meant to be spread out over at least 60 years. A submarine that goes into service today should still be in service in 2082. Nuclear-powered submarines, such as the Virginia-class fast-attack submarine, each cost roughly US $2.8 billion, according to the U.S. Congressional Budget Office. And that’s just the purchase price; the total life cycle cost for the new Columbia-class ballistic-missile submarine is estimated to exceed $395 billion. The twin problems of detecting submarines of rival countries and protecting one’s own submarines from detection are enormous, and the technical details are closely guarded secrets. Many naval experts are speculating about sensing technologies that could be used in concert with modern AI methodologies to neutralize a submarine’s stealth. Rose Gottemoeller, former deputy secretary general of NATO, warns that “the stealth of submarines will be difficult to sustain, as sensing of all kinds, in multiple spectra, in and out of the water becomes more ubiquitous.” And the ongoing contest between stealth and detection is becoming increasingly volatile as these new technologies threaten to overturn the balance.
We have new ways to find submarines Today’s sensing technologies for detecting submarines are moving beyond merely hearing submarines to pinpointing their position through a variety of non-acoustic techniques. Submarines can now be detected by the tiny amounts of radiation and chemicals they emit, by slight disturbances in the Earth’s magnetic fields, and by reflected light from laser or LED pulses. All these methods seek to detect anomalies in the natural environment, as represented in sophisticated models of baseline conditions that have been developed within the last decade, thanks in part to Moore’s Law advances in computing power.
Airborne laser-based sensors can detect submarines lurking near the surface.IEEE Spectrum According to experts at the Center for Strategic and International Studies, in Washington, D.C., two methods offer particular promise. Lidar sensors transmit laser pulses through the water to produce highly accurate 3D scans of objects. Magnetic anomaly detection (MAD) instruments monitor the Earth’s magnetic fields and can detect subtle disturbances caused by the metal hull of a submerged submarine. Both sensors have drawbacks. MAD works only at low altitudes or underwater. It is often not sensitive enough to pick out the disturbances caused by submarines from among the many other subtle shifts in electromagnetic fields under the ocean. Lidar has better range and resolution and can be installed on satellites, but it consumes a lot of power—a standard automotive unit with a range of several hundred meters can burn 25 watts. Lidar is also prohibitively expensive, especially when operated in space. In 2018, NASA launched a satellite with laser imaging technology to monitor changes in Earth’s surface—notably changes in the patterns on the ocean’s surface; the satellite cost more than $1 billion. Indeed, where you place the sensors is crucial. Underwater sensor arrays won’t put an end to submarine stealth by themselves. Retired Rear Adm. John Gower, former submarine commander for the Royal Navy of the United Kingdom, notes that sensors “need to be placed somewhere free from being trolled or fished, free from seismic activity, and close to locations from which they can be monitored and to which they can transmit collected data. That severely limits the options available.” One way to get around the need for precise placement is to make the sensors mobile. Underwater drone swarms can do just that, which is why some experts have proposed them as the ultimate antisubmarine capability. Clark, for instance, notes that such drones now have enhanced computing power and batteries that can last for two weeks between charges. The U.S. Navy is working on a drone that could run for 90 days. Drones are also now equipped with the chemical, optical, and geomagnetic sensors mentioned earlier. Networked underwater drones, perhaps working in conjunction with airborne drones, may be useful for not only detecting submarines but also destroying them, which is why several militaries are investing heavily in them.
A U.S. Navy P-8 Poseidon aircraft, equipped to detect submarines, awaits refueling in Okinawa, Japan, in 2020. U.S.Navy For example, the Chinese Navy has invested in a fishlike undersea drone known as Robo-Shark, which was designed specifically for hunting submarines. Meanwhile, the U.S. Navy is developing the Low-Cost Unmanned Aerial Vehicle Swarming Technology, for conducting surveillance missions. Each Locust drone weighs about 6 kilograms, costs $15,000, and can be outfitted with MAD sensors; it can skim low over the ocean’s surface to detect signals under the water. Militaries study the drone option because it might work. Then again, it very well might not.
Robo-Shark, a 2.2-meter-long submersible made by Boya Gongdao Robot Technology, of Beijing, is said to be capable of underwater surveillance and unspecified antisubmarine operations. The company says that the robot moves at up to 5 meters per second (10 knots) by using a three-joint structure to wave the caudal fin, making less noise than a standard propeller would. robosea.org Gower considers underwater drones to be “the least likely innovation to make a difference in the decline of submarine stealth.” A navy would need a lot of drones, data rates are exceedingly slow, and a drone’s transmission range is short. Drones are also noisy and extremely easy to detect. “Not to mention that controlling thousands of underwater drones far exceeds current technological capabilities,” he adds. Gower says it could be possible “to use drones and sonar networks together in choke points to detect submarine patrols.” Among the strategically important submarine patrol choke points are the exit routes on either side of Ireland, for U.K. submarines; those around the islands of Hainan and Taiwan, for Chinese submarines; in the Barents or Kuril Island chain, for Russian submarines; and the Straits of Juan de Fuca, for U.S. Pacific submarines. On the other hand, he notes, “They could be monitored and removed since they would be close to sovereign territories. As such, the challenges would likely outweigh the gains.” Gower believes a more powerful means of submarine detection lies in the “persistent coverage of the Earth’s surface by commercial satellites,” which he says “represents the most substantial shift in our detection capabilities compared to the past.” More than 2,800 of these satellites are already in orbit. Governments once dominated space because the cost of building and launching satellites was so great. These days, much cheaper satellite technology is available, and private companies are launching constellations of tens to thousands of satellites that can work together to image every bit of the Earth’s surface. They are outfitted with a wide range of sensing technologies, including synthetic aperture radar (SAR), which scans a scene down below while moving over a great distance, providing results like those you’d get from an extremely long antenna. Since these satellite constellations view the same locations multiple times per day, they can capture small changes in activity.
Experts have known for decades about the possibility of detecting submarines with SAR based on the wake patterns they form as they move through the ocean. To detect such patterns, known as Bernoulli humps and Kelvin wakes, the U.S. Navy has invested in the AN/APS-154 Advanced Airborne Sensor, developed by Raytheon. The aircraft-mounted radar is designed to operate at low altitudes and appears to be equipped with high-resolution SAR and lidar sensors. Commercial satellites equipped with SAR and other imaging instruments are now reaching resolutions that can compete with those of government satellites and offer access to customers at extremely affordable rates. In other words, there’s lots of relevant, unclassified data available for tracking submarines, and the volume is growing exponentially. One day this trend will matter. But not just yet.
Jeffrey Lewis, director of the East Asia Nonproliferation Program at the James Martin Center for Nonproliferation Studies, regularly uses satellite imagery in his work to track nuclear developments. But tracking submarines is a different matter. “Even though this is a commercially available technology, we still don’t see submarines in real time today,” Lewis says. The day when commercial satellite imagery reduces the stealth of submarines may well come, says Gower, but “we’re not there yet. Even if you locate a submarine in real time, 10 minutes later, it’s very hard to find again.”
Artificial intelligence coordinates other sub-detecting tech Though these new sensing methods have the potential to make submarines more visible, no one of them can do the job on its own. What might make them work together is the master technology of our time: artificial intelligence. “When we see today’s potential of ubiquitous sensing capabilities combined with the power of big-data analysis,” Gottemoeller says, “it’s only natural to ask the question: Is it now finally possible?” She began her career in the 1970s, when the U.S. Navy was already worried about Soviet submarine-detection technology. Submarines can now be detected by the tiny amounts of radiation and chemicals they emit, by slight disturbances in the Earth’s magnetic fields, and by reflected light from laser or LED pulses.
Unlike traditional software, which must be programmed in advance, the machine-learning strategy used here, called deep learning, can find patterns in data without outside help. Just this past year, DeepMind’s AlphaFold program achieved a breakthrough in predicting how amino acids fold into proteins, making it possible for scientists to identify the structure of 98.5 percent of human proteins. Earlier work in games, notably Go and chess, showed that deep learning could outdo the best of the old software techniques, even when running on hardware that was no faster. For AI to work in submarine detection, several technical challenges must be overcome. The first challenge is to train the algorithm, which involves acquiring massive volumes and varieties of sensor data from persistent satellite coverage of the ocean’s surface as well as regular underwater collection in strategic locations. Using such data, the AI can establish a detailed model of baseline conditions, then feed new data into the model to find subtle anomalies. Such automated sleuthing is what’s likeliest to detect the presence of a submarine anywhere in the ocean and predict locations based on past transit patterns. The second challenge is collecting, transmitting, and processing the masses of data in real time. That task would require a lot more computing power than we now have, both in fixed and on mobile collection platforms. But even today’s technology can start to put the various pieces of the technical puzzle together.
Nuclear deterrence depends on the ability of submarines to hide For some years to come, the vastness of the ocean will continue to protect the stealth of submarines. But the very prospect of greater ocean transparency has implications for global security. Concealed submarines bearing ballistic missiles provide the threat of retaliation against a first nuclear strike. What if that changes? “We take for granted the degree to which we rely upon having a significant portion of our forces exist in an essentially invulnerable position,” Lewis says. Even if new developments did not reduce submarine stealth by much, the mere perception of such a reduction could undermine strategic stability. A Northrop Grumman MQ-8C, an uncrewed helicopter, has recently been deployed by the U.S. Navy in the Indo-Pacific area for use in surveillance. In the future, it will also be used for antisubmarine operations. Northrop Grumman Gottemoeller warns that “any perception that nuclear-armed submarines have become more targetable will lead to questions about the survivability of second-strike forces. Consequently, countries are going to do everything they can to counter any such vulnerability.” Experts disagree on the irreversibility of ocean transparency. Because any technological breakthroughs will not be implemented overnight, “nations should have ample time to develop countermeasures [that] cancel out any improved detection capabilities,” says Matt Korda, senior research associate at the Federation of American Scientists, in Washington, D.C. However, Roger Bradbury and eight colleagues at the National Security College of the Australian National University disagree, claiming that any technical ability to counter detection technologies will start to decline by 2050. Korda also points out that ocean transparency, to the extent that it occurs, “will not affect countries equally. And that raises some interesting questions.” For example, U.S. nuclear-powered submarines are “the quietest on the planet. They are virtually undetectable. Even if submarines become more visible in general, this may have zero meaningful effect on U.S. submarines’ survivability.”
Sylvia Mishra, a new-tech nuclear officer at the European Leadership Network, a London-based think tank, says she is “more concerned about the overall problem of ambiguity under the sea.” Until recently, she says, movement under the oceans was the purview of governments. Now, though, there’s a growing industry presence under the sea. For example, companies are laying many underwater fiber-optic communication cables, Mishra says, “which may lead to greater congestion of underwater inspection vehicles, and the possibility for confusion.”
A Snakehead, a large underwater drone designed to be launched and recovered by U.S. Navy nuclear-powered submarines, is shown at its christening ceremony in Narragansett Bay in Newport, R.I.U.S. Navy Confusion might come from the fact that drones, unlike surface ships, do not bear a country flag, and therefore their ownership may be unclear. This uncertainty, coupled with the possibility that the drones could also carry lethal payloads, increases the risk that a naval force might view an innocuous commercial drone as hostile. “Any actions that hold the strategic assets of adversaries at risk may produce new touch points for conflict and exacerbate the risk of war,” says Mishra. Given the strategic importance of submarine stealth, Gower asks, “Why would any country want to detect and track submarines? It’s only something you’d do if you want to make a nuclear-armed power nervous.” Even in the Cold War, when the United States and the U.K. routinely tracked Soviet ballistic-missile submarines, they did so only because they knew their activities would go undetected—that is, without risking escalation. Gower postulates that this was dangerously arrogant: “To actively track second-strike nuclear forces is about as escalatory as you might imagine.” “All nuclear-armed states place a great value on their second-strike forces,” Gottemoeller says. If greater ocean transparency produces new risks to their survivability, real or perceived, she says, countries may respond in two ways: build up their nuclear forces further and take new measures to protect and defend them, producing a new arms race; or else keep the number of nuclear weapons limited and find other ways to bolster their viability. Ultimately, such considerations have not dampened the enthusiasm of certain governments for acquiring submarines. In September 2021 the Australian government announced an enhanced trilateral partnership with the United States and the United Kingdom. The new deal, known as AUKUS, will provide Australia with up to eight nuclear-powered submarines with the most coveted propulsion technology in the world. However, it could be at least 20 years before the Royal Australian Navy can deploy the first of its new subs.
The Boeing Orca, the largest underwater drone in the U.S. Navy’s inventory, was christened in April, in Huntington Beach, Calif. The craft is designed, among other things, for use in antisubmarine warfare. The Boeing Company As part of its plans for nuclear modernization, the United States has started replacing its entire fleet of 14 Ohio-class ballistic-missile submarines with new Columbia-class boats. The replacement program is projected to cost more than $128 billion for acquisition and $267 billion over their full life cycles. U.S. government officials and experts justify the steep cost of these submarines with their critical role in bolstering nuclear deterrence through their perceived invulnerability. To protect the stealth of submarines, Mishra says, “There is a need for creative thinking. One possibility is exploring a code of conduct for the employment of emerging technologies for surveillance missions.” There are precedents for such cooperation. During the Cold War, the United States and the Soviet Union set up a secure communications system—a hotline—to help prevent a misunderstanding from snowballing into a disaster. The two countries also developed a body of rules and procedures, such as never to launch a missile along a potentially threatening trajectory. Nuclear powers could agree to exercise similar restraint in the detection of submarines. The stealthy submarine isn’t gone; it still has years of life left. That gives us ample time to find new ways to keep the peace. From Your Site ArticlesWorld's Largest Swarm of Miniature Robot Submarines - IEEE ... ›Submarines - IEEE Spectrum ›Scientists Explore Underwater Quantum Links for Submarines ... ›DARPA's Self-Driving Submarine Hunter Steers Like a Human ... ›Related Articles Around the WebThe Capacity of the Navy's Shipyards to Maintain Its Submarines ... ›Attack Submarines - SSN > United States Navy > Displayy-FactFiles ›Ballistic Missile Submarines | Commander, Submarine Force, U.S. ... ›Fleet Ballistic Missile Submarines - SSBN > United States Navy ... › | Emerging Technologies |
When it comes to tackling climate change, achieving “net-zero carbon dioxide emissions by 2050” has become a ubiquitous rallying cry. It’s in goals set by cities, states, and the Biden administration. It’s a hallmark of companies’ sustainability pledges, from Big Tech to Big Oil. It’s not enough.
The world’s leading climate experts called for more swift action on climate change in a major report released today by the United Nations Intergovernmental Panel on Climate Change (IPCC). Near-term goals to slash greenhouse gas pollution need to be a much higher priority, advocates say, and there’s precious little time to reach them.
“The climate time-bomb is ticking. But today’s IPCC report is a how-to guide to defuse the climate time-bomb. It is a survival guide for humanity,” United Nations Secretary-General António Guterres said in a statement today.
“The climate time-bomb is ticking.”
Greenhouse gas emissions causing climate change need to peak by 2025 to keep global warming from surpassing a critical threshold, the report says. And more affluent nations, responsible for a larger share of pollution, need to be on a faster timeline than emerging economies, Guterres said. He proposed an “Acceleration Agenda” to the G20 today that asks economically developed countries to move their net-zero goals up “as close as possible to 2040.”
Still, some advocates are wary of fuzzy, far-off targets that set goalposts decades into the future. “What makes me anxious is the pace at which we are doing things when we need to be achieving a lot more in the near term,” says Harjeet Singh, head of global political strategy at Climate Action Network International. “Of course, we needed a long term horizon ... but the whole terminology of ‘net-zero’ has been extremely problematic.”
Getting to net-zero emissions worldwide by the middle of the century became a mainstream target thanks to another climate report from the United Nations in 2018. That research is included in the IPCC’s report today, which is a synthesis of all the IPCC’s recent work since the 2018 publication.
The world is already losing ground to sea level rise and suffering more extreme weather disasters because of climate change. The IPCC’s 2018 report found that those effects grow significantly worse if warming surpasses that 1.5-degree mark. Yet, five years later, greenhouse gas emissions continue to skyrocket.
“What’s different now is that we know the climate crisis is accelerating, is more widespread and extreme than originally predicted, and the window for limiting global warming to 1.5 degrees is pretty much closing,” says Adrien Salazar, policy director for the nonprofit Grassroots Global Justice Alliance, in an email to The Verge.
With today’s report, there’s more emphasis on the incremental steps the world needs to take right away. One crucial detail that got buried in the 2018 publication was a deadline to slash emissions roughly in half by 2030. Today’s update also says that greenhouse gas pollution needs to peak by 2025 and decline by 60 percent by 2035.
“The science is very clear, but it was convenient for political leaders and even corporations to only talk about 2050.”
“Deeper cuts front-loaded now — that point got lost in the whole slogan of, you know, ‘net-zero by 2050,’” says Basav Sen, director of the climate policy project at the progressive think tank Institute for Policy Studies.
“The science is very clear, but it was convenient for political leaders and even corporations to only talk about 2050,” Singh says. “You know, most of them may not be around when we reach [that date] so it was easy for them to talk about 2050 without really providing necessary details and near-term targets.”
There’s been another glaring oversight, Singh points out. You can’t slash greenhouse gas emissions without weaning the world off its dependence on fossil fuels. That gets ignored because of the IPCC’s emphasis on reaching net-zero emissions. The term implies a balancing act: people can still produce some fossil fuel pollution as long as they balance that with ways to remove it from the atmosphere. Polluters might pay to offset some of their emissions using forestry projects or emerging technologies that filter CO2 from the air. But neither tactic is reliable at scale and is really only supposed to be ancillary to a transition from fossil fuels to clean energy.
The net-zero strategy is supposed to help out the hardest sectors to clean up, like shipping and aviation, which are still searching for alternative fuels. But even brands that can more easily turn to renewable energy have “net-zero” goals that let them get away with tricky carbon accounting. A company might aim to slash its emissions by 99 percent or 19 percent — but either way, it can claim to reach net-zero emissions. That flexibility makes net-zero goals so rife with greenwashing that the United Nations released a report in November slamming corporate climate commitments. The criteria many use have loopholes wide enough to “drive a diesel truck through,” Guterres said at the time.
There’s been similar murkiness in countries’ climate pledges. Earlier this month, the Biden administration approved the Willow project in Alaska, the US’s largest proposed oil project on public land yet. The US is also investing billions in capturing and storing CO2 as part of its strategy to reach net-zero emissions by 2050.
It’s high past time to set clearer goals, advocates tell The Verge. “You either have zero emissions or you don’t,” the Institute for Policy Studies’ Sen says. | Emerging Technologies |
Aviation is crucial to the global economy, but its effects on the environment are significant.pic4you | E+ | Getty ImagesLONDON — Plans to reduce the significant environmental effects of aviation took a step forward this week after Rolls-Royce and easyJet said they had carried out the ground test of a jet engine that used hydrogen produced from tidal and wind power.In a statement this week, aerospace giant Rolls-Royce — not to be confused with Rolls-Royce Motor Cars, which is owned by BMW — described the news as a "milestone" and said it was "the world's first run of a modern aero engine on hydrogen."The test, which was carried out at an outdoor site in the U.K., used a converted regional aircraft engine from London-listed Rolls-Royce.The hydrogen came from facilities at the European Marine Energy Centre in Orkney, an archipelago in waters north of mainland Scotland. Since its inception in 2003, EMEC has become a major hub for the development of wave and tidal power.Grant Shapps, the U.K.'s secretary of state for business, energy and industrial strategy, said the test was "an exciting demonstration of how business innovation can transform the way we live our lives.""This is a true British success story, with the hydrogen being used to power the jet engine today produced using tidal and wind energy from the Orkney Islands of Scotland," Shapps added.Hydrogen's usesDescribed by the International Energy Agency as a "versatile energy carrier," hydrogen has a diverse range of applications and can be deployed in a wide range of industries.It can be produced in a number of ways. One method includes electrolysis, with an electric current splitting water into oxygen and hydrogen.If the electricity used in this process comes from a renewable source such as wind or tidal power, then some call it "green" or "renewable" hydrogen. Today, the majority of hydrogen production is based on fossil fuels.Using hydrogen to power an internal combustion engine is different to hydrogen fuel cell technology, where hydrogen from a tank mixes with oxygen, generating electricity.As the U.S. Department of Energy's Alternative Fuels Data Center notes: "Fuel cell electric vehicles emit only water vapor and warm air, producing no tailpipe emissions."By contrast, hydrogen ICEs can produce other emissions. "Hydrogen engines release near zero, trace amounts of CO2 … but can produce nitrogen oxides, or NOx," Cummins, an engine maker, says.Industry's aimsThe environmental footprint of aviation is considerable, with the World Wildlife Fund describing it as "one of the fastest-growing sources of the greenhouse gas emissions driving global climate change."The WWF also says air travel is "currently the most carbon intensive activity an individual can make."Earlier this year, Guillaume Faury, the CEO of Airbus, told CNBC that aviation would "potentially face significant hurdles if we don't manage to decarbonize at the right pace."Faury added that hydrogen planes represented the "ultimate solution" for the mid and long term.While there is excitement in some quarters about hydrogen planes and their potential, a considerable amount of work needs to be done to commercialize the technology and roll it out on a large scale.Speaking to CNBC last year, Ryanair CEO Michael O'Leary appeared cautious when it came to the outlook for new and emerging technologies in the sector."I think ... we should be honest again," he said. "Certainly, for the next decade ... I don't think you're going to see any — there's no technology out there that's going to replace … carbon, jet aviation.""I don't see the arrival of … hydrogen fuels, I don't see the arrival of sustainable fuels, I don't see the arrival of electric propulsion systems, certainly not before 2030," O'Leary added. | Emerging Technologies |
Adani Group To Build Integrated Data Center, Technology Business Park In Andhra Pradesh
The facility will have a 200+ MW data center, technology and business park, and skill development center in Vizag.
AdaniConnex is developing an integrated data center and technology business park in Andhra Pradesh.
The facility will include a 200+ MW data center, technology and business park, and skill development center in Vizag, according to a statement.
The park will be powered with up to 100% renewable energy and connected with robust terrestrial and submarine infrastructure to help deployment of cloud and emerging technologies in the region, it said.
Andhra Pradesh's Chief Minister YS Jagan Mohan Reddy attended the groundbreaking ceremony at the park site in Madhurawada.
"With the advancements in AI, high-definition content and massive digitisation, the need for compute and storage is increasing exponentially," said Gautam Adani, chairman of Adani Group.
"Andhra Pradesh, with its geographical advantages of land for renewable energy and a long coastline, is well-positioned to host data center parks, not only for our country but also for those nations that are short of land or energy," he said.
AdaniConnex is a joint venture between Adani Group and EdgeConneX.
The investments by the conglomerate will be on top of the Rs 20,000 crore already invested in the state. The group operates two private ports at Krishnapatnam and Gangavaram in Andhra Pradesh.
Disclaimer: AMG Media Networks Ltd., a subsidiary of Adani Enterprises Ltd., holds 49% stake in Quintillion Business Media Ltd., the owner of BQ Prime. | Emerging Technologies |
The software engineer career path is not for the faint of heart. It takes a lot of hard work, dedication, and determination to succeed. But, if you’re up for the challenge, it can be an incredibly rewarding career. As a software engineer, you’ll be responsible for developing and maintaining software applications. This can include anything from small programs to large-scale enterprise systems. No two days will be the same, and you’ll always have the opportunity to learn new things and The software engineer career path: an overview There is no one-size-fits-all answer when it comes to the software engineer career path. The field is constantly evolving, and new technologies and trends are always emerging. However, there are some general principles that can help you navigate your way through the software engineer career path. In general, the software engineer career path can be divided into four main stages: entry-level, mid-level, senior-level, and executive-level. Each stage has its own unique challenges, opportunities, and rewards. Entry-Level At the entry-level, software engineers are responsible for designing, developing, testing, and deploying software applications. They work closely with senior engineers and developers to create high-quality software products. This stage of the career path is all about learning the ropes and gaining experience in the field. Mid-Level At the mid-level, software engineers take on more responsibility for managing projects and leading teams of engineers. They also begin to specialize in specific areas of software development, such as web development or data analysis. This stage of the career path is all about consolidating your skills and knowledge and taking on more leadership roles. Senior-Level At the senior level, software engineers are responsible for overseeing entire projects from start to finish. They also provide mentorship and guidance to junior engineers. This stage of the career path is all about becoming a subject matter expert in your field and taking on a strategic role in your organization. Executive-Level At the executive level, software engineers are responsible for setting organizational strategies and policies around software development. They also oversee teams of engineers and manage budgets. This stage of the career path is all about becoming a thought leader in your field and making decisions that impact the direction of your organization Which Programming Language Should I learn In 2022 The different stages of a software engineer career A software engineer career typically consists of 3 stages: junior, mid-level, and senior. Most companies use a combination of these stages to map out an engineer’s progression within the company. As one progresses through each stage, one takes on more responsibility and is given more complex projects to work on. The junior stage is the entry-level position for software engineers. They are typically recent college graduates who are working on small projects under the supervision of a more experienced engineer. The mid-level stage is for engineers with 2-4 years of experience. They are working on larger projects with more complex requirements. At this stage, they may also start to take on mentorship roles for junior engineers. The senior stage is for engineers with 5+ years of experience. They are typically leading large projects and mentoring other engineers. They may also be involved in management and strategy decisions for their team or company. The skills required for a successful software engineer career Gaining the skills required for a career in software engineering can be done in various ways, but most software engineers have a bachelor’s degree in computer science. According to the Bureau of Labor Statistics, computer science degrees usually take four years to complete, although some programs may take up to five years. During these four years, students take classes on topics such as programming languages, algorithms, database management, and software development methodologies. In addition to their coursework, most students also participate in internships or cooperative education programs that allow them to gain experience working with actual software engineering projects. Once they have completed their education, software engineers must keep up with the latest advancements in their field by reading technical journals and attending conferences and workshops. Many software engineers also choose to earn professional certifications to show that they are capable of working with the latest technologies. The education and training required for a software engineer career To become a software engineer, you will need to complete a four-year computer science degree from an accredited institution. During your studies, you will take courses in programming, mathematics and systems design. You will also have the opportunity to participate in internship and co-op programs, which will give you valuable hands-on experience. After graduation, you will need to obtain professional engineering (PE) license in order to practice software engineering in most states. To do this, you will need to pass the Fundamentals of Engineering (FE) exam, as well as the Principles and Practice of Engineering (PPE) exam. Once you have obtained your license, you will be able to find employment as a software engineer. Alternatively, you could choose to pursue a master’s degree or PhD in computer science if you wish to pursue a career in research or academia. The different types of software engineer careers There are many different types of software engineer careers, each with its own set of skills, responsibilities, and challenges. Below is a brief overview of some of the most common types of software engineering careers: Applications software engineers develop and maintain software that allows people to perform specific tasks on computers or other devices. They may work on a wide variety of applications, including word processing, spreadsheets, databases, aviation control systems, video games, and more. Systems software engineers develop and maintain the low-level software that helps applications software running on computers or other devices. This type of software includes operating systems, compilers, utility programs, and device drivers. Embedded software engineers develop and maintain the software that is embedded in devices such as cars, TVs, medical devices, and industrial control systems. This type of software often has strict size and performance constraints. Test engineers design and execute tests to ensure that software meets its functional requirements. This type of testing can be done manually or using automated testing tools. The benefits of a software engineer career There are many benefits to a career in software engineering. For one, it is one of the most in-demand professions in the world, with projections showing that there will be millions of new job openings in the next decade. In addition, software engineers are some of the highest-paid professionals in the world. They also have a lot of control over their work lives, with many choosing to work remotely or on a freelance basis. Finally, a career in software engineering is extremely rewarding. It is a profession that allows you to use your creativity and problem-solving skills to make a difference in the world. Which Programming Language Should I learn In 2022 The challenges of a software engineer career The challenge of a software engineer career is getting the right education and training. There are many software engineer careers that require different levels of education and experience. Most software engineering careers require at least a bachelor’s degree in computer science or a related field. Many software engineers also have master’s degrees or higher. In addition to getting the right education, software engineers need to keep up with the latest trends in the field. They need to be able to use the latest tools and technologies. They also need to be able to work with teams of other engineers to create new software products. The future of the software engineer career The future of the software engineer career is promising. With the advent of new and emerging technologies, the demand for software engineers is expected to grow. As a result, salaries for software engineers are also expected to rise. In addition, the career paths of software engineers are likely to become more diversified, with more opportunities for specialization and advancement. | Emerging Technologies |
Published November 15, 2022 3:58PM Driverless car pulled over by San Francisco police | LiveNOW from FOX An autonomous vehicle owned by Cruise was pulled over by a San Francisco police officer in this video shared by @b.rad916. Cruise said that police did not issue a citation during the stop. DETROIT (AP) - Two new U.S. studies show that automatic emergency braking can cut the number of rear-end automobile crashes in half, and reduce pickup truck crashes by more than 40%. The studies released Tuesday, one by a government-auto industry partnership and the other by the insurance industry, each used crash data to make the calculations. Automatic emergency braking can stop vehicles if a crash is imminent, or slow them to reduce the severity. Some automakers are moving toward a voluntary commitment by 20 companies to make the braking technology standard equipment on 95% of their light-duty models during the current model year that ends next August. A study by The Partnership for Analytics Research in Traffic Safety compared data on auto equipment with 12 million police-reported crashes from 13 states that was collected by the National Highway Traffic Safety Administration, the partnership said in a statement Tuesday. The group studied forward collision warning as well as emergency braking. The group found front-to-rear crashes were cut 49% when the striking vehicle had forward collision alert plus automatic braking, when compared with vehicles that didn't have either system. Rear crashes with injuries were cut by 53%, the study found. Vehicles with forward collision warning systems only reduced rear-end crashes by 16%, and cut rear crashes with injuries by 19%. Automatic emergency braking works well in all conditions, even when roadway, weather or lighting conditions were not ideal, the study showed. The group also looked at lane departure warning systems, and lane-keeping systems, which keep vehicles in their lanes. They reduced crashes from autos leaving the roadway by 8% and road-departure crashes that cause injuries by 7%. "These emerging technologies can substantially reduce the number of crashes and improve safety outcomes," said Tim Czapp, senior manager for safety at European automaker Stellantis, the industry co-chair of the partnership's board. In the other study, the Insurance Institute for Highway Safety found that automatic emergency braking reduces rear crash rates for pickups by 43% and rear-end injury crashes by 42%. Yet pickups are less likely to have automatic braking than cars or SUVs despite posing more danger to other road users, the IIHS found. "Pickups account for 1 out of 5 passenger vehicles on U.S. roads, and their large size can make them dangerous to people in smaller vehicles or on foot," the institute's Vice President of Research Jessica Cicchino said in a statement. Mitsubishi, Ford, Mercedes-Benz, Stellantis (formerly Fiat Chrysler), Volkswagen and Honda have filed documents with the government this year saying they've made emergency braking standard on at least 90% of their models. General Motors reported that only 73% of its models had the technology at the end of the 2022 model year, but a spokesman said GM would hit 98% by the end of the current model year as long as there aren't supply chain issues. In addition, BMW, Hyundai, Mazda, Subaru, Tesla, Toyota, and Volvo passed 90% last year, according to the IIHS. | Emerging Technologies |
America is continually a work in progress, forever being reimagined by bold ideas, whether they arise from the public or private sector, or from pioneering inventors, entrepreneurs and corporations. The pandemic accelerated the “Great Reinvention,” forcing Americans, policymakers and businesses to re-evaluate values, conventional wisdom, and business models. A More Perfect Union 2022, The Hill’s second annual multi-day tentpole festival, explores and celebrates America’s best big ideas through the lens of American Reinvention. We will convene political leaders, entrepreneurs, policy innovators and disruptors, and thought provocateurs to debate and discuss some of the most urgent, challenging issues of our time. Wednesday, December 7th – Emerging Technologies: All industries are ripe for disruption and technological advances often prompt those changes. AI, machine learning, robotic automation, VR/AR, blockchain, the internet of things are all innovative and evolving technology trends constantly changing the face of business. How did the pandemic speed up digital transformation and innovation? How are businesses keeping up with changing tech trends? Thursday, December 8th – Reinventing the American Economy: Small Business and E-Commerce: How are record inflation, supply chain bottlenecks, and labor shortages contributing to the changes in businesses? How are innovative companies disrupting the way businesses are organized? During the pandemic many small businesses had to pivot quickly and find new ways to reach their customers through e-commerce platforms. E-commerce sales grew 50 percent during COVID-19, so what is the future of digital retail? How can technology encourage business growth? And who are the future disruptors of digital commerce? Friday, December 9th – Consensus Builders: A recent Pew analysis finds that, on average, Democrats and Republicans are farther apart ideologically today than at any time in the past 50 years. Extreme polarization creates a kind of legislative catch-22–zero-sum politics means we can’t get bipartisan majorities to change our institutions, while the current institutions intensify zero-sum competition between the parties. Post-midterms, where do we find “the missing middle”? FEATURING Wednesday, December 7th: Emerging Technologies Andrei Papancea, CEO & Chief Product Officer, NLX Rina Shah, Geopolitical Strategist, Investor, & 6xEntrepreneur Emily Landon, CEO, The Crypto Recruiters Thursday, December 8th: Reinventing the American Economy: Small Business and E-Commerce Robert Doar, President, American Enterprise Institute Karen Kerrigan, President & CEO, Small Business & Entrepreneurship Council Emily Glassberg Sands, Head of Information, Stripe Friday, December 9th: Consensus Builders Ryan Clancy, Chief Strategist, No Labels David Eisner, President & CEO, Convergence Center for Policy Resolution David Jolly, Former Member of Congress, Political Analyst SPONSOR PERSPECTIVE Paige Magness, Senior Vice President, Regulatory Affairs, Altria MODERATORS Bob Cusack, Editor-In-Chief, The Hill Steve Scully, Contributing Editor, The Hill SPONSORED BY: Join the conversation! Tweet us: @TheHillEvents using #TheHillAMPU Tags | Emerging Technologies |
Last week, both the American electorate and the cryosphere were on edge. As votes were cast and returns reported, the popular FTX crypto marketplace fell apart amid a shocking accounting scandal. These coinciding events are prescient of the challenges facing the incoming congress. Between the pandemic and President Biden’s campaign agenda, tech issues have been deprioritized. FTX’s collapse, however, demonstrates that they cannot be ignored. Beyond crypto, artificial intelligence is upending commercial art, cyberthreats plague businesses, quantum computing disruption looms and social media is spinning out of control. All the while China seeks internet hegemony and Russia traps its people behind a digital Iron Curtain. At some point, tech must have its legislative day. Today, Congress finds itself unequipped. The last Congress included a meager 12 professionals with STEM (science, technology, engineering, math) backgrounds, a number unlikely to grow. Yet the immense tech challenges we face may consume this Congress, if not the next. To lay the groundwork for success, Congress should re-establish the Office of Technology Assessment (OTA). The OTA will be unfamiliar to most. A product of Nixon-era scientific bipartisanship, this legislative branch agency provided Congress with no-nonsense assessments of the interplay between emerging technologies and legislation. It was something of a Congressional Budget Office for tech. While the OTA enjoyed bipartisan support, its run was short. Ultimately, it fell to the cost-cutting of the Contract with America. As the commercial internet was born, Congress ironically torpedoed its STEM capacities. In the years since, Congress has both acknowledged the need for scientific advice and avoided the necessary commitment. Stepping into the void has been the Government Accountability Office (GAO) and the Congressional Research Service; augmented by skewed agency and lobbyist advice. These agencies lack technical dedication. The past year’s explosive AI innovation has caused some to call 2022 a breakthrough year. Yet, the GAO has only ever produced five AI-centric reports. There have been none published in over a year. Glacial congressional research lags breakneck technology. What benefits would a resurrected OTA bring? Crucially, technical depth. Today’s emerging tech is more complicated than in the 1990s. Quantum computing, for instance, wields mind-bending quantum mechanics that even physicists agree just don’t make sense. An explanation of the internet protocol might require only a lecture, while understanding quantum qubit superposition might demand a physics degree. Agency-sized staffing would help manage emerging technology’s challenging diversity and flux. Butting against quantum computing is AI, whose state of the art seems a moving target. Then add crypto, where multiple hacks, meltdowns and crazes illustrate a complex ever-shifting class of software. Beyond this is a world of further STEM developments. Achieving this expert depth requires dedication. Scientific research must be a constant priority. Pertinent staff need the stability to deepen knowledge and creatively imagine regulatory implications. The OTA would enshrine a permanent place for STEM research in congressional priorities and support it with ample staffing. Compare this to research under the GAO, which lacks a stable expert staff, has thus understandably exhibited lower quality and must exist in an agency centered on audits and investigations. The current model deprioritizes science and fails to create a robust corps of dedicated experts. Ideally this agency would also prioritize policy breadth. Today’s emerging tech is widely applicable and may transform most policy. AI, for instance, could impact nearly every industry and even alter how we interact with information. AI policy requires an interdisciplinary staff. AI art legislation might require an expert on AI and copyrights. Shipping legislation might require an expert on AI and supply chains. Such a diverse staff could handle unexpected intersections and ensure Congress can adapt to ever evolving innovation. The remaining question is cost benefit. Regulation in such a dynamic industry is naturally fraught with unintended consequences. While knowledge can’t eliminate the unforeseen, it guides better choices. Importantly, deep technical understanding reveals deep complexity. With technical support, legislators can appreciate the uncertain impact of regulatory effect on unwieldy innovative dynamism. This promotes legislative restraint and minimizes overregulation. Thankfully, these benefits charge only a modest fee. In the past five years, the GAO returned a remarkable $158 for every dollar budgeted. The original OTA was effective on only $33 million in 2019 dollars, so perhaps GAO returns could fully cover the expenses. With time, the agencies’ technical precision may generate further offsets. Congress is at a turning point. With slim majorities, rock-bottom approval and a laundry list of tech challenges, legislators can build credibility by investing in technical capacity. Reviving the OTA is a bipartisan idea, with support from both the conservative R Street Institute and progressive Center for American Progress. This could be an easy win for incoming majorities. While an OTA revival won’t solve every issue, it equips Congress to face them head on. A deeper understanding helps balance innovation and humble governance. Matthew Mittelsteadt is a technologist and research fellow with the Mercatus Center at George Mason University. | Emerging Technologies |
One of the biggest bets of the Biden administration is that clean hydrogen will help the United States reach its climate goals, revitalize domestic manufacturing, and bolster a shrinking fossil fuel workforce. That’s a lot riding on an industry that barely exists today.
The term “clean hydrogen” can mean many things — some of which aren’t exactly clean. Hydrogen is the most abundant element in the universe, and it’s a very promising energy source that could power sectors of the economy that electrification and renewables currently cannot. But the pure hydrogen gas that works as fuel first needs to be produced, and that process can either be polluting or clean.
The US government is currently determining what counts as clean hydrogen, and the exact terms it agrees upon will have huge implications for what will effectively become an entirely new energy industry. Meanwhile, building the infrastructure to produce it wholly from scratch will be tough to pull off. That’s a problem a new system of so-called hydrogen hubs aims to fix.
President Joe Biden was at the Port of Philadelphia, Pennsylvania, on October 13 to announce the creation of seven new hydrogen hubs around the country that will produce hydrogen fuel and begin to establish this new energy industry. The Biden administration envisions these hubs to be sprawling clusters of pipelines and facilities across hundreds of miles, and the Department of Energy is spending $7 billion to build them.
The federal funding is just a start. The Biden administration hopes these projects attract another $40 billion in private investment. And generous government subsidies earmarked in the Bipartisan Infrastructure Law and the Inflation Reduction Act are aimed at providing the private sector with the incentive to boost not only the production of hydrogen but also the demand for it.
“You get very few chances to set up the political alliances and funding of a new industry,” Craig Segall, vice president of policy at environmental policy group Evergreen, told Vox. “You never get a crack at this. It’s as if we were at the beginning of coal or gas.”
Republican and Democratic politicians alike have dreamed for decades of using the most abundant element in the universe to someday power manufacturing, buildings, and even cars. Hydrogen can be burned just like gasoline in an engine. It can also be used to generate an electrical current. When burned, it produces no carbon emissions and few air pollutants. In a fuel cell, its main byproduct is water.
The problem is how to scale hydrogen without worsening climate pollution or cannibalizing existing clean power on the grid. Virtually all of the hydrogen produced today comes from fossil fuels, and the industry that stands to benefit the most from the government’s massive subsidies is oil and gas.
“Some people have talked about it being the Swiss Army knife for decarbonization, where it could be used for almost any application,” said Dennis Wamsted, an energy analyst with the Institute for Energy Economics and Financial Analysis. “But that doesn’t mean it’s the best tool; it doesn’t mean it would be the best or the cheapest, or the fastest, or the most reliable.”
So the Biden administration has an unusual opportunity to set the contours of how clean the hydrogen really becomes. The newly announced hydrogen hubs are just the first step in a multiyear, multibillion-dollar road. The government is essentially propping up a nascent industry, but with that massive support comes an opportunity to set the terms of an industry right. And nobody’s exactly sure how it will all unfold.
“An entire ecosystem like this where you’re coming up with an all-new energy product,” said David Crane, the Department of Energy’s undersecretary for infrastructure, “it probably is unprecedented.”
Clean hydrogen, explained
There’s a way to produce hydrogen that worsens climate change, and there’s a way to do it cleanly. It all depends on how the hydrogen is produced, and currently, almost all of it is made in a way that increases carbon emissions.
The way energy wonks talk about hydrogen is by color — which is funny since hydrogen gas itself is colorless.
Right now, nearly all of the existing hydrogen produced in the US today isn’t clean at all. Ninety-five percent of it is “gray hydrogen,” produced using a method called steam methane reforming. This process uses steam to heat methane derived from natural gas until it separates into a mixture of carbon monoxide, carbon dioxide, and hydrogen gas molecules. This process is incredibly energy-intensive and gives the gray hydrogen production industry a carbon footprint the size of the United Kingdom and Indonesia combined. Gray hydrogen is mostly used for industrial purposes like refining petroleum and metals as well as producing chemicals, fertilizer, and in rarer cases, fuel for vehicles.
Blue hydrogen is a tiny but growing subset of the industry. Similar to gray hydrogen, blue hydrogen production uses steam methane reforming, which means that it also relies on natural gas. But for blue hydrogen, carbon capture and storage and other monitoring attempts are introduced to limit leakage of methane, a powerful greenhouse gas, which in theory minimizes its impact on climate change. And carbon capture and storage technologies haven’t been proven at the scale for blue hydrogen to capture over the 90 percent of emissions needed to deliver climate benefits.
A third and very buzzworthy option is green hydrogen. Producing green hydrogen employs a process called electrolysis, which uses an electrolyte, anode, and cathode to create a chemical reaction that splits water into hydrogen and oxygen molecules. No carbon capture is needed here, as no fossil fuels are involved in the process. As the name implies, this is the cleanest way to produce hydrogen — if it relies entirely on renewables for the electricity to power the process. It is currently very expensive and requires subsidies to compete with dirtier hydrogen options.
One other consideration with these types of hydrogen is the energy needed to produce them. Both blue and green hydrogen could be used in similar ways and work as a clean energy solution, except a lot rides on how the hydrogen is made. If energy derived from fossil fuels powers the production of any type of hydrogen, that could undermine carbon cuts. For green hydrogen, specifically, electrolysis is a problem area because it’s so power-hungry. So it’s essential that the electricity that powers the process comes from renewables, like solar, wind, and nuclear. It also matters where the renewables come from. One worry environmentalists have is that new hydrogen facilities will simply draw from existing solar and wind, eating up a lot of the clean electricity we already have.
“Making sure that this power is squeaky clean is absolutely necessary to make sure we’re not increasing emissions on the grid,” said Rachel Fakhry, NRDC’s emerging technologies director. “Even a little bit of fossil fuels powering the system could drive very high emissions on the grid.”
There are even more colors of hydrogen, each of which refers to a different production method. So while the phrase “clean hydrogen” is thrown around a lot, it’s not always clear what it’s referring to.
The hydrogen production question is a minefield that the Biden administration ultimately needs to navigate as it props up this burgeoning industry. And in writing the rules for this hydrogen-powered future, the Energy and Treasury Departments are playing unusually important roles.
What’s in the hydrogen hubs announcement
Biden’s recent $7 billion announcement, it deserves to be said, is a major one. It reveals the broad blueprint the Department of Energy intends to follow to build an entire energy industry almost from scratch. The Bipartisan Infrastructure Law gave the department $8 billion to develop both supply and demand for hydrogen — the other $1 billion will be used for supporting demand — and now we know some details about how it will spend the vast majority of that on the projects the DOE has prioritized.
These seven hydrogen hubs are spread across states in the Gulf Coast, Appalachia, the Pacific Northwest, California, the Midwest, and the mid-Atlantic. Picked from a pool of 79 proposals submitted by private-public partnerships to the DOE, the winning proposals are sprawling plans for existing infrastructure as well as wish lists for new buildings and pipelines that ultimately have a long road of permitting and funding ahead.
The specific locations of the hubs are noteworthy not only because of how they will affect communities around them but also because of how the electric grid works in those areas. The hubs aim to draw on a mix of renewables and natural gas infrastructure to develop blue and green hydrogen, but some of the largest projects planned could play out heavily in the fossil fuel industry’s favor.
According to the White House, two-thirds of the overall funding supports green hydrogen development, but at least two of the hubs will primarily rely on blue hydrogen — which, again, relies on natural gas. The Houston-Gulf Coast hub, the largest of all of the hubs, plans to rely heavily on carbon capture for 2 million of the 3 million tons that come from natural gas — a task that will likely mean remodeling some of the region’s existing facilities with carbon capture equipment and pipelines. Other hubs, like the ones in the Midwest and mid-Atlantic, draw also from existing nuclear power sources.
When fully operational, the White House says the seven hubs would reduce 20 million metric tons of carbon dioxide — the equivalent of 5.5 million gasoline-powered cars.
Not everyone is happy with the Biden administration’s approach to building out the hydrogen industry. Fakhry was among the environmentalists expressing disappointment in the DOE’s process so far, calling the announcement a “mixed bag” with “some potentially promising elements.” She does see the potential for hydrogen cutting emissions in industries that are difficult to switch to renewables, but the continued reliance on fossil fuels is a sticking point.
“I was frankly surprised at the level of dependence on gas-derived hydrogen and on gas-reliant power sources,” Fakhry said.
Again, the exact terms the government is setting for what counts as clean hydrogen is still unclear. Will blue hydrogen facilities have to meet specific carbon-capture standards to be counted as clean? How will natural gas leaks be minimized? From the early details of the hydrogen hubs announcement, it appears the Department of Energy is following an all-of-the-above approach for hydrogen, relying on fossil fuels as well as renewables for future production.
This makes the next move from the Biden administration all the more critical: The Treasury Department is crafting standards that will ultimately set the course for what a hydrogen economy looks like. These decisions will permanently shape an industry that is just starting to find its footing around the world, and may start trading internationally as early as the 2030s.
Why the next move from the Treasury Department matters even more for hydrogen
The fate of Biden’s big plans lies in the hands of an unexpected government agency: the Internal Revenue Services. Soon, the IRS will find itself in the unusual situation of developing policy that will ultimately govern how the hydrogen energy industry operates. It could, in turn, determine how much pollution this industry produces.
Sometime before the end of the year, the Internal Revenue Service is supposed to release guidance for a hydrogen production tax credit, called 45V. These are generous tax credits meant to attract more investors to hydrogen. The Inflation Reduction Act only vaguely defines the tax credits as applying to “clean hydrogen,” leaving it to the IRS to decide how to set the terms for what can be eligible for potentially $100 billion over the lifetime of the credits.
So the Biden administration is now in the process of defining how broad or narrow the tax credits will be for defining what counts as clean hydrogen amid all its caveats. If the standards are too stringent, hydrogen may never get off the ground. But if they’re too lax, there’s a risk the industry could become another carbon bomb — or even just an extension of the fossil fuel industry.
Clean energy industry leaders and environmentalists have thoughts on this. One of the key proposals they’re making is a strict definition based on three pillars.
The core pillar is known as additionality, which would require hydrogen producers to add new renewables to the grid instead of diverting existing nuclear, hydro, wind, and solar. Tapping new renewables avoids a problem environmentalists are especially worried about: that diverting existing electrons on the grid to produce hydrogen diverts from other climate goals to clean up pollution from buildings and transportation.
The second and third pillars are called deliverability and hourly matching. These hold producers to similarly strict measures so the hydrogen industry isn’t taking away from clean energy already out there. They require producers to source clean energy near where it’s consumed and match that energy hourly so they can’t run on credits for, say, solar when the sun is down.
These ideas are divisive. You may have seen the ads coming from trade groups supported by ExxonMobil and utilities fighting back against additionality. And Jacob Susman, CEO of hydrogen company Ambient Fuels, also argues for annual matching instead of hourly, saying it is a less stringent standard that allows the industry to use renewable energy credits.
“We need to be flexible in approaches early on so that we can get the cost down,” Susman told Vox. “It would be very reasonable in a few year’s time to start talking about tightening the way it’s defined.”
The stakes here are incredibly high. The way the hydrogen industry takes shape will determine whether it ensures greenhouse gasses actually fall as the White House hopes. And taxpayers are footing the bill for potentially over $100 billion in incentives that could boost the fossil fuel industry if not done right.
And production is hardly the only challenge ahead. Most of these policies just tackle the supply side of hydrogen, not addressing who and how it will be used to lower emissions. The $1 billion the DOE has reserved to build up demand will go toward projects that slash emissions in tricky sectors like manufacturing cement and aviation fuels. It also is likely to be used in the power sector, as the Environmental Protection Agency’s new rules for cleaning up climate pollution assume gas plants could use a blend of hydrogen to meet stricter emissions standards.
The Biden administration ultimately considers hydrogen key to reducing 25 percent of global climate emissions by 2050. That is, in part, because there are simply parts of the economy that can’t be cleaned up by relying on renewables and electrification alone. We’re not going to see the biggest gains with hydrogen-powered SUVs but rather hydrogen-powered container ships and planes.
“Heavy transportation and heavy industry are the toughest nuts to crack, said Crane from the DOE. “And hydrogen is the solution to that.” | Emerging Technologies |
Government May Propose Up To Rs 500 Crore Fine For Violations Under Digital India Bill
The government is likely to propose a penalty of up to Rs 500 crore for violating provisions of the Digital India Bill, according to sources.
The government is likely to propose a penalty of up to Rs 500 crore for violating provisions of the Digital India Bill, according to sources.
Under the proposed bill, the Centre may authorise any government agency to monitor and collect traffic data generated, transmitted, received or stored in any digital system to enhance cyber security.
It also aims to identify, analyse and prevent intrusion or spread of malware or virus.
The Ministry of Electronics and IT has been working on the draft of the Digital India Bill to replace the existing IT Act which was enacted more than 22 years ago in the early days of internet.
"The Digital India Bill may come with a provision of a penalty of up to Rs 500 crore on entities for breach of obligations," according to the source.
The quantum of penalty will be decided by the proposed Digital India Authority that will handle grievances, the sources said.
However, the authority may have to asses various factors, such as the gravity, number of users affected and the duration for which an individual was affected, before taking a final decision on the penalty amount.
Disputes under the proposed Act are unlikely to come under the jurisdiction of civil courts and entities unsatisfied with the resolution provided by the Digital India Authority might have the option to challenge it before the Supreme Court, the sources said.
The proposed bill is likely to identify and define various kinds of damage a victim is likely to face in the digital world, they said.
It is likely to define doxing, cyber squatting, astroturfing, dog whistling, among other offences, and make them punishable.
The bill is likely to come up with norms to control development and deployment of emerging technologies in the wake of challenges being posed by development of artificial intelligence, the sources added. | Emerging Technologies |
By Alexandra Brzozowski and Luca Bertuzzi | EURACTIV.com Est. 5min 20-06-2023 (updated: 20-06-2023 ) Content-Type: News News Based on facts, either observed and verified directly by the reporter, or reported and verified from knowledgeable sources. Although the strategy does not explicitly target Beijing, it implicitly does as Europe remains dependent on China for many critical raw materials and technologies. [Shutterstock/Alexandros Michailidis] EURACTIV is part of the Trust Project >>> Languages: Français | DeutschPrint Email Facebook Twitter LinkedIn WhatsApp Telegram The EU is considering a series of tools to counter China and Russia’s increasing readiness to use trade and the control of critical supply chains to its geopolitical advantage, according to a European Commission draft proposal seen by EURACTIV. In a document titled “European Economic Security Strategy”, seen by EURACTIV before its presentation on Tuesday (20 June), the European Commission and EU’s chief diplomat Josep Borrell set out its view on how the bloc can make its economy more resilient and identify emerging external risks. “Russia’s war of aggression against Ukraine showed how over-reliance on any single country, especially when they have systemically divergent models and interests, reduces Europe’s strategic options and puts our economies and citizens at risk,” the document states. The EU “now needs a comprehensive and strategic approach to economic security, de-risking and promoting technological edge in strategic sectors,” it continues. The EU executive’s new strategy document comes as the bloc has battled to shake off its heavy dependence on third countries – which it realised threaten its economic security if weaponised. This realisation came amid a struggle to reduce dependency on Russian energy following the invasion of Ukraine. Although the strategy does not explicitly target Beijing, it implicitly does, as Europe remains dependent on China for many critical raw materials and technologies. In March, European Commission president Ursula von der Leyen spelt out the bloc’s new ‘China doctrine’, stating that while it would not be in Europe’s interest to decouple itself fully from Beijing, the bloc should, however, look into diplomatic and economic ‘de-risking’. Member states are currently torn over how to act, with some reluctant to start a trade war with Beijing, a major economic partner of several larger EU member states. A full toolbox The strategy document identifies a series of potential risks the bloc could face, focusing on risks to supply chains, including energy, and critical infrastructure, such as telecom networks, and guarding against economic coercion and leaking out of leading-edge technology. While the European Commission proposal stresses Europe should remain open to trade and investment, it emphasises the need for the bloc to protect itself better in limited areas of security relevance. This would include restrictive third-party access to key technologies such as semiconductors, research projects, or joint ventures. By the end of the year, the European Commission intends to propose a new instrument which would create a control regime for security-related outbound investments by European companies in third countries. The EU currently controls exports of specified “dual-use” goods that can have military applications. European Commission plans would include producing a further list with EU member states of technologies critical to economic security, which could be adopted by member states as soon as September. “It will set up a new dedicated group of member states’ experts to assist in these tasks, building a new structured, confidential cooperation mechanism,” according to the text. However, the EU executive is entering shaky ground in granting export licences and weighing security interests are national competencies that EU member states said they want to retain. EU leaders are expected to discuss the European Commission’s new proposal when they meet for their regular summit in Brussels next week. Huawei looming The European Commission will propose a new platform to support critical and emerging strategic technology to boost the EU’s competitiveness and supply chain security. It warned that “more investments are urgently needed to ensure EU’s leadership” in various technologies. The strategy comes one week after a renewed Commission’s push to prompt EU countries, most notably Germany, to remove high-risk vendors such as Chinese telecom giants Huawei and ZTE from their critical infrastructure. At the same time, the intent is to promote a technological edge in strategic sectors, particularly in new critical emerging technologies where the boundaries between civil and military sectors are blurring, namely quantum computing, semiconductors, Artificial Intelligence, 6G, biotechnologies and robotics. “The starting point for this strategy is taking a clear eyed-look at the challenges and acknowledging the inherent tensions that exist between bolstering our resilience and ensuring that the European Union continues to benefit from its open economy and promoting and protecting its technological advantage,” the document reads. The idea is for the European Commission and member states to assess the critical supply chains and sensitive technology hotspots, conducting stress tests and establishing the level of risk in terms of supply chain resilience, cyber or physical security of critical infrastructure, technological leakages and weaponisation of trade policies or economic coercion. Priority technologies are to be collectively assessed by EU member states by the end of the year to identify the relevant protection and promotion measures. [Edited by Alice Taylor/János Allenbach-Ammann] Read more with EURACTIV Belgium joins next-generation European fighter jet programmeBelgium announced on Monday (19 June) it will join the Future Combat Air System (FCAS) programme to build the next-generation systems of combat and fighter (FCAS), teaming up with France, Germany and Spain. Languages: Français | DeutschPrint Email Facebook Twitter LinkedIn WhatsApp Telegram Topics China de-risking economic security strategy Economy Global Europe Industrial Policy Technology Trade | Emerging Technologies |
Brief content visible, double tap to read full content.Full content visible, double tap to read brief content.Miguel Nicolelis, M.D. Ph.D., is the Anne W. Deane Professor of Neuroscience at Duke University, Professor of Neurobiology, Biomedical Engineering and Psychology and founder of Duke's Center for Neuroengineering. He is also Founder and Scientific Director of the Edmond and Lily Safra International Institute for Neuroscience of Natal (www.natalneuro.net). As Brazil’s best known scientist, Dr. Nicolelis has been an outspoken and passionate advocate for strengthening science education, technology and innovation and was selected to lead the country’s “Commission on the Future of Brazilian Science.” His award-winning research has been published in Nature, Science, and Scientific American and has been reported in Newsweek, Time, and Discover, as well as national TV networks and international media outlets. Although for the past decade, Dr. Nicolelis is best known for his pioneering studies of Brain Machine Interfaces (BMI) and neuroprosthetics in human patients and non-human primates, he has also developed an integrative approach to studying neurological and psychiatric disorders including Parkinson’s disease, epilepsy, schizophrenia and attention deficit disorder. He has also made fundamental contributions in the fields of sensory plasticity, gustation, sleep, reward and learning. Dr. Nicolelis believes that this approach will allow the integration of molecular, cellular, systems, and behavioral data in the same animal, producing a more complete understanding of the nature of the neurophysiological alterations associated with these disorders.As of today, numerous neuroscience laboratories in the US, Europe, Asia, and Latin America have incorporated Dr. Nicolelis' experimental paradigm to study a variety of mammalian neuronal systems. Indeed, two of his books on multi-electrode recording techniques have become the most cited works in this field. His research has influenced basic and applied research in computer science, robotics, and biomedical engineering. This multidisciplinary approach to research has become widely recognized in the neuroscience community.Dr. Nicolelis’ research has been highlighted in MIT Review’s Top 10 Emerging Technologies. He was named one of Scientific American’s Top 50 Technology Leaders in America in 2004 and has twice received the DARPA Award for Sustained Excellence by a Performer. Other honors include the Whitehead Scholar Award; Whitehall Foundation Award; McDonnell-Pew Foundation Award; the Ramon y Cajal Chair at the University of Mexico and the Santiago Grisolia Chair at Catedra Santiago Grisolia. In 2007, Dr. Nicolelis was honored as an invited speaker at the Nobel Forum at the Karolinksa Institute in Sweden. More recently he was awarded the International Blaise Pascal Research Chair from the Fondation de l'Ecole Normale Supérieure and the 2009 Fondation IPSEN Neuronal Plasticity Prize. Dr. Nicolelis is a member of the French Academy of Science and the Brazilian Academy of Science and has authored over 160 manuscripts, edited numerous books and special journal issues, and holds three US patents. | Emerging Technologies |
The U.S. Navy is operating or developing nearly a dozen different unmanned sea vehicles for use in maritime security operations. Some of the vehicles operate on the ocean’s surface, and others beneath it. Some are no bigger than torpedoes and must be launched by larger vessels, while others are autonomous, robotic warships. Pursuit of unmanned sea systems is not a new endeavor for the Navy. The Office of Naval Research recognized their potential decades ago, and smaller systems have been used in mine countermeasures for many years. The defense department has been experimenting for seven years with a transoceanic, unmanned surface warship called Sea Hunter developed by Leidos LDOS . Boeing BA has recently begun delivering an extra-large unmanned submarine dubbed Orca that can operate at unprecedented depths. Both vehicles are capable of performing multiple warfighting missions. What’s new in recent years is that emerging technologies such as artificial intelligence have expanded the scope for robotic operations at sea. Chief of Naval Operations Admiral Michael Gilday has identified unmanned vehicles as a high-priority development area, along with digital networking and extended-range fires. The Sea Hunter unmanned surface vehicle developed by Leidos has been in operation for seven years. ... [+] The vehicle has transoceanic range and potential to execute multiple warfighting missions. Wikipedia The Navy released an unmanned campaign framework in 2021 that emphasized how robotic warships could enable distributed maritime operations, the service’s driving organizational construct for the future. With the number of manned warships in the fleet seemingly stuck around 300 for the foreseeable future, unmanned systems may be the only way to meet warfighting and presence objectives within available budgets. Although it will be a long time, if ever, before unmanned systems can deliver the functionality of a crewed submarine or destroyer, they can complement the manned fleet by performing tasks too dangerous or routine to justify assigning a manned warship. For instance, sending manned warships into the Baltic or Black Seas in an East-West war could place hundreds of sailors at risk; unmanned systems may be able to perform the necessary reconnaissance and strike missions without risking U.S. lives.
Thus far, the Navy’s interest in unmanned sea systems has focused mainly on their potential to enable new operational concepts. However, if the technology proves useful, larger systems such as Sea Hunter and Orca might open the door to a new paradigm for naval shipbuilding.
As I noted in a Forbes article earlier this week, naval shipbuilding today is a complicated and costly enterprise even when managed efficiently. It produces warships typically costing over a billion dollars each. Unmanned warships cost a small fraction of that amount to build, and a similarly low amount to operate.
The possibility thus exists to pioneer new approaches to naval shipbuilding, approaches that can grow in scope as the use of robotic systems at sea expands in the future.
Here are a few ways in which unmanned warships might revolutionize the way U.S. warships are built and operated:
1. Simplified designs that eliminate the complexity imposed when making manned vessels habitable and survivable. Many of the demanding specifications for current warships are driven by the need to accommodate a hundred or more sailors; eliminate the sailors, and the design requirements become much less demanding—reducing cost to a point where survivability becomes a less critical feature.
2. Simplified engineering that compresses the time needed to transition from concept to construction. With a much simpler design, the demands on engineers to translate specifications into systems is correspondingly reduced, saving time and money.
3. Simplified construction as less costly and demanding processes enable a return to serial production. Serial production on the Liberty Ship model doesn’t exist in naval shipbuilding today, but it could return if specifications were suitably simplified and unit costs fell to a fraction of what manned warships cost.
4. Simplified planning as reduced material requirements permit streamlining of supply chains. Modern warship construction typically is supported by hundreds of subcontractors, but if survivability and other features associated with manning are eliminated, fewer specialized suppliers would be needed and integrators could rely more on commercial inputs.
5. Simplified innovation as less complicated designs facilitate the rapid insertion of advanced technology such as machine learning and digital networking. Unmanned systems substitute software for people, which implies a capacity for fast reconfiguration without necessarily requiring new hardware.
6. Simplified modification as threats evolve, often by porting new source code into software reconfigurable architectures from remote locations. In other words, the design features that facilitate introduction of new innovations also could greatly reduce the time and funding needed to modify warships in response to new operational challenges.
7. Simplified sustainment owing to less demanding designs and greater reliance on expendable/attritable systems. Unmanned systems should be much easier to repair and maintain than manned systems, and their supply requirements at sea would be negligible; for instance, Sea Hunter can traverse the Pacific in both directions on a single tank of fuel.
8. Simplified industrial bases as the ranks of sub-tier suppliers shrink and integrators shift to reliance on dual-use or commercial technologies. Because the barriers to building warships would diminish, additional integrators might enter the business, creating a more resilient industrial base.
These ideas are purely conceptual, reflecting the fact that development of unmanned warships—especially highly capable, multi-mission ships—is in its infancy. The Navy could fruitfully accelerate its development of unmanned warships at modest cost, perhaps producing revolutionary results within a few years.
Having said that, it will be a long time before the Navy can dispense with the processes it currently depends on to build manned warships. That may never happen. But unmanned systems open the door to building a bigger fleet at lower cost.
Boeing and Leidos, mentioned above, contribute to my think tank. I am indebted to Maiya Clark of the Heritage Foundation for offering remarks at a Lexington Institute working group that stimulated my thinking on the industrial-base implications of unmanned warships. | Emerging Technologies |
President Joe Biden will visit Japan and Australia next month to huddle with allies on their continued response to Russia's invasion of Ukraine as well as ways to confront China's assertive economic and military moves in the Indo-Pacific region, the White House announced Tuesday.
Biden will attend a summit of the leaders of the Group of Seven advanced democracies in Hiroshima, Japan, on May 19-21, White House press secretary Karine Jean-Pierre said. Then he will make his first trip as president to Australia, which will include the third in-person meeting of the so-called “Quad” leadership of the U.S., Japan, Australia and India.
“The President and G7 leaders will discuss a range of the most pressing global issues, including the G7’s unwavering support for Ukraine, addressing the dual food and climate crises, securing inclusive and resilient economic growth, and continuing to lead a clean energy transition at home and for our partners around the world,” she said.
At the Quad meeting on May 24 Biden will gather with Prime Minister Kishida Fumio of Japan, Prime Minister Narendra Modi of India and Prime Minister Anthony Albanese of Australia. The group was formed in 2007 to bolster economic and security relations between the four democracies as a check on China's rise. It was rebooted under the presidency of Donald Trump a decade later, and elevated to a regular leader-level gathering during Biden's tenure.
“The Quad leaders will discuss how they can deepen their cooperation on critical and emerging technologies, high-quality infrastructure, global health, climate change, maritime domain awareness, and other issues that matter to the people of the Indo-Pacific,” Jean-Pierre said.
The meeting with Modi comes amid growing concerns in the U.S. over democratic backsliding in India during his time in office, and efforts by the U.S. to press India to join international economic sanctions against Russia over its invasion of Ukraine. | Emerging Technologies |
05 October 2022 The 2022 Virginia Energy Plan, announced by Governor Glenn Youngkin, calls for a nuclear innovation hub to be established in the state and for a commercial small modular reactor to be deployed in southwest Virginia within the next decade.
Governor Youngkin launching the 2022 Virginia Energy Plan at electrical equipment manufacturer Delta Star Inc's Lynchburg facility (Image: Governor of Virginia / YouTube)As directed by the Virginia General Assembly, every four years the Virginia Department of Energy develops a comprehensive Virginia Energy Plan. In the foreword to the latest plan, Youngkin said: "We must reject the mindset that it is 'either/or', and embrace the reality that it is 'both/and'. In fact, the only way to confidently move towards a reliable, affordable and clean energy future in Virginia is to go all-in on innovation in nuclear, carbon capture, and new technology like hydrogen generation, along with building on our leadership in offshore wind and solar." The plan calls for Virginia to make strategic investments in innovative, emerging technologies, including hydrogen, carbon capture, storage and utilisation, and, particularly, small modular nuclear reactors (SMRs). The plan notes, "Today, the Commonwealth [of Virginia] is a welcome home to nuclear energy and its innovations, and two nuclear power stations - the Surry and North Anna Power Stations - produce roughly 95% of the Commonwealth's reliable, clean electricity." In addition, Virginia is home to two of the world's largest nuclear companies, BWXT and Framatome, located in Lynchburg. Two of the 30 nuclear engineering programmes in the USA are at Virginia Commonwealth University and Virginia Tech. Six universities in Virginia offer degrees in nuclear engineering and advanced physics. "The Commonwealth should take advantage of this incredibly competitive position on the forefront of nuclear energy research and development to become the nation's leader in SMR technology." the plan says. "Accordingly, this plan advocates for the development of the first commercial SMR in the US in southwest Virginia and calls for developing spent nuclear fuel recycling technologies that offer the promise of a zero-carbon emission energy system with minimal waste and a closed-loop supply chain." Introducing the latest plan on 3 October in Lynchburg, Youngkin said: "We have to be all-in in nuclear energy in Virginia. When it comes to reliability, affordability. When it comes to clean power. When it comes to the abundant nature of growing power demand, absolutely nothing beats nuclear energy. It is the baseload of all baseloads. "I want to plant a flag right now. I want to call our moonshot. Virginia will launch a commercial small modular reactor that will be serving customers with baseload power demand in southwest Virginia within the next 10 years. "Energy innovation - like small modular reactors - will not just honour our calling to environmental stewardship, it will also deliver economic development opportunities, job creation and a tremendous place to live, work and raise a family across the entire Commonwealth." The plan also recommends the state collaborates with the Virginia Nuclear Energy Consortium - established in 2013 to represent stakeholders invested in the development of nuclear energy in the state - and higher education institutions to establish a nuclear hub in Virginia. "A growing Virginia must have reliable, affordable and clean energy for Virginia's families and businesses," Youngkin said. "We need to shift to realistic and dynamic plans. The 2022 Energy Plan will meet the power demands of a growing economy and ensures Virginia has that reliable, affordable, clean and growing supply of power by embracing an all-of-the-above energy plan that includes natural gas, nuclear, renewables and the exploration of emerging sources to satisfy the growing needs of Commonwealth residents and businesses." The plan says its "does not attempt to predict every technological innovation or long-term change in the production and consumption of energy". It "embraces flexibility and supports multiple technologies as a path to providing the appropriate balance of baseload and growing clean energy generation at a reasonable cost". Researched and written by World Nuclear News | Emerging Technologies |
Digital Storytelling: How Brands Can Use Authenticity and Emotional Connection to Stand Out Online
Effective storytelling can help construct strong bonds between brands and audiences.
Opinions expressed by Entrepreneur contributors are their own.
Humans have communicated through storytelling for centuries. Aside from sharing information, stories connect communities and create lasting bonds. When digital marketers use storytelling to their advantage, they construct similarly strong bonds between brands and audiences. Focusing on authenticity and emotions during digital storytelling can strengthen those connections.
What storytelling does to the human brain
To understand the power of storytelling for brands and businesses, it is worth looking at how our brain reacts to stories from a biological and psychological perspective. Our brains have evolved over thousands of years. While it is undeniable that our cognitive capacity has changed, the pace of those changes has been far slower than, for example, the development of technologies we use to share stories.
That means that no matter whether you are part of a marketing team, part of the target audience for a product, or the product's inventor, you share one thing: how you respond to content. When we hear a story, our brains change noticeably. Aside from the areas that process language, the parts of our brain that would be involved in the activity we are hearing about also become active. Psychologist Dr. Pamela Rutledge writes that the meaning of messages starts in our brains.
Whenever consumers are confronted with content, including marketing messages, their brains respond by trying to put the story into the context of their experiences. This is how humans make sense of new information. For digital marketers, the narrative at the heart of any campaign is critical.
Storytelling and technology
Some of the earliest narratives we know of were scratched into or painted on the walls of prehistoric caves. Communication technology has changed beyond recognition since then, and digital marketers have never had more channels to choose from to reach their audiences when those are most receptive to their messages.
Stories are rarely restricted to one platform. Instead, marketing teams spend considerable time fine-tuning their approach and their selection of digital marketing channels. Each channel has its strengths and weaknesses, and each contributes to the immersive effect of storytelling in its own way. Emerging technologies like virtual reality headsets even allow audiences to become part of the story and influence its outcome.
Exciting as all those developments are, the most important point is this: no digital marketing channel can replace the power of the story itself. Communication technologies are simply enablers that help transmit stories.
Harnessing authenticity and emotion in digital storytelling
Knowing that stories can activate certain parts of our brains, it is time to consider how authentic and emotional storytelling can support brand marketing efforts even further.
Emotional digital storytelling
While people may purchase products out of necessity, when faced with choices between brands or varieties, emotional impulses tend to outweigh purely rational thought processes.
To stand out online, digital storytellers need to ask themselves how they would like their audiences to feel when they see or read their content. Some of the most common desires are:
- A better future for the whole family
- Experiencing instant gratification
- Creating a sense of overall well-being, belonging, or freedom
The relevant emotions will vary widely between brands, and this list is far from exhaustive. But if your story triggers these or other feelings, you have created an emotional bond with your audience. This bond will run deeper and be far stronger than a connection simply based on factual information.
Authentic digital storytelling
While stories allow their audiences to imagine themselves at the center of the plot, they are more than the figments of someone's imagination. The most powerful stories with the greatest potential of creating an emotional connection between the audience and the narrator are rooted in reality.
Understanding this is significant for marketers and the leadership teams of the brands they represent. Telling the story of a brand or a product is never more powerful than when the narrator and the subject can be perceived as honest and genuine.
The higher the degree of authenticity, the stronger the connection between the subject and the audience could potentially be. In practice, that means marketers should avoid focusing only on the highlights of a brand's story.
Admitting to mistakes made in product development, for example, allows marketing teams to tell a story of how they were rectified. Mentioning obstacles creates an opportunity to talk about overcoming challenges. These stories are not only authentic, but they also make a brand or a founder relatable and inspirational.
Taking authenticity further
Are marketing professionals the best narrators of their brand stories? Not necessarily. One of the world's most recognizable brands, Apple, turned to consumers and their real-life stories to explain the benefits of its products. Using this approach avoids the dangers of overusing technical jargon in the case of Apple products or sounding too corporate.
Likewise, cosmetics brand Dove embraced authenticity for their memorable 'Campaign for real beauty,' which continues to resonate with consumers today. Rather than watching models present Dove products, consumers saw – themselves. The brand and its messages became instantly more relatable because professional marketers did not tell them.
Authenticity and emotions in digital storytelling
Digital marketing lends itself to authenticity and a focus on emotional stories. Take social media networks, for example. Created for users to connect, social media channels are now among the most powerful digital marketing channels, and their impact is rooted in authentic interactions between consumers and brands.
Recreating this type of authenticity and emphasizing the emotional aspects of brand connections makes digital storytelling one of the most powerful marketing tools for leading brands today. | Emerging Technologies |
The U.S. says it’s punching back in the digital cold war over emerging technologies with a new “Disruptive Technology Strike Force.”
“Our goal is simple but essential—to strike back against adversaries trying to siphon off our best technology,” a deputy attorney general said.
The strike force, a joint initiative created by the Department of Justice and the Commerce Department reportedly, will focus on combating “adversaries” attempting to steal crucial U.S. tech secrets and attack supply chains. DOJ officials say the new agency will use a combination of “intelligence and data analytics,” to detect early warning of signs of cyber threats and, hopefully, prevent rival nations from “weaponizing data” against the U.S. The strike force will operate in 12 metropolitan regions spread out across the U.S. and include experts from the FBI and Department of Homeland Security. Intellectual property is most often stolen through cyberattack, making the Disruptive Technology Strike Force something of a “hack back” squad.
“Advances in technology have the potential to alter the world’s balance of power,” assistant Secretary for Export Enforcement Matthew S. Axelrod said in a statement. “This strike force is designed to protect U.S. national security by preventing those sensitive technologies from being used for malign purposes.”
The agency says private sector technologies related to AI, biosciences, and advanced manufacturing equipment and materials can be co opted by adversaries for “disruptive” purposes that can, in turn, threaten U.S. security. All this advanced tech, the agency claims, could theoretically be used to improve weapons calculations, improve foreign intelligence decision making, or potentially create “unbreakable encryption algorithms.” China, Iran, Russia, and North Korea were singled out as key countries of concern.
Deputy Attorney General Lisa Monaco elaborated on the new agency during a speech at the Chatham House research institute in London this week, saying the emerging technologies and ideas being stolen today could be used in “very frightening ways tomorrow.” Some of the greatest threats here involve datasets and software that contain potentially sensitive information. Though Monaco didn’t specifically mention TikTok by name, she hinted at it and said there’s a good chance the Chinese government could access data from Chinese owned firms if they want to.
Part of that striking back could reportedly entail leaning further into proactive effects to reach out and “target illicit actors” before they get a chance to make off with valuable secrets. Monaco, according to Bloomberg, said the U.S. government is already taking action to detect and deter bad actors in addition to actively “disrupting cyber-attacks.”
“Today, autocrats seek tactical advantage through the acquisition, use and abuse of disruptive technology: innovations that are fueling the next generation of military and national security capabilities,” Monaco said. “The ability to weaponize data will only advance over time, as artificial intelligence and algorithms enable the use of large datasets in new and increasingly sophisticated ways.”
The Department of Justice and Commerce Department did not immediately respond to Gizmodo’s requests for comment.
The new “Strike Force” comes on the heels of growing calls from many conservatives, and an increasing number of Democrats, for the federal government to take a tougher stance again tech and IP theft. A bipartisan Senate Intelligence Committee report released last year estimated the U.S. may be losing up to $600 billion from global IP theft every year. The FBI, meanwhile, estimates cyber attacks and malicious cyber activity may have cost U.S. businesses over $6.9 billion in losses in 2021. Those total losses, CNBC notes, were up a staggering 64% compared to the year before. If successful, the strike force could potentially stem some of that bleeding and refocus mitigation efforts in the private sector.
“Our nation now faces a dramatically different threat landscape than it did even a couple of decades ago,” Virginia Democratic Senator and Senate Intelligence Committee Chair Mark Warner said late last year. “Today’s foreign intelligence threats are not just obviously targeting the government…but are increasingly looking at the private sector to gain technological edge over our key industries.”
The Biden administrator has made it clear in recent months it wants to appear tough on China, particularly when it comes to technology. In October, Biden’s Commerce department issued sweeping new restrictions on exports to China of semiconductors, chip designs, chip software, and other high tech equipment. The measures, a direct extension of previous actions from the Trump Administration, were the clearest effort yet by Biden to block off Chinese access to the next generation of crucial tech
The Center for Strategic & International Studies, a Washington think tank, colorfully described the new prohibitions on China as, “strangling with an intent to kill.”
The Biden admission’s aggressive stance towards tech theft and new Strike Force might prevent some important technology from making its ways overseas, but it also simultaneously risks making already fought international relations even worse. A Pew survey related last year found that 82% of U.S. adults said they viewed China unfavorably, a figure up 6% points from just one year prior. It’s unclear how creating inter-agency organizations directly tasked with targeting other countries will help temper those opinions.
The agency stated intent to strike back again and “target illicit actors” could also have long-term unintended consequences. Efforts by the DOJ or Commerce Department to launch their own proactive or retaliatory attacks against illicit foreign actors risks potentially spiraling into larger tit-for tat cyber campaigns with devastating consequences. Properly attributing the exact origins of cyberattacks is also notoriously difficult as attackers often route their attacks though other machines. That means retaliatory attacks led by the U.S. strike force could risk hav to contend with unintended collateral damage. | Emerging Technologies |
I am honored to announce the creation of the Office of Technology (OT) at the Federal Trade Commission, a team that will provide technical expertise across agency matters and strengthen the agency’s ability to enforce the nation’s competition and consumer protection laws. We are hiring technologists to join the team.
Staying on the cutting edge of emerging technology has long been a core part of the FTC's mandate. The emergence of the radio in the 1920s is an especially vivid example. The radio was becoming ubiquitous in American living rooms,[1] creating dramatically new possibilities for entertainment and information. At the same time, this new device provided a potent new vector for false advertising. Amidst an influenza pandemic, Vit-O-Net claimed its heating pad induced magnetic field that could provide a cure for rheumatism and a variety of other bodily ills.[2] Fairyfoot Company promised an adhesive plaster pad that could instantly dissolve bunion pain to achieve bunion-free feet.[3]
Image Source[4]
Ads like these became so problematic and widespread in the 1930s that the relatively new Federal Trade Commission,[5] recently empowered by Congress to police “unfair or deceptive acts or practices,”[6] launched the Special Board of Investigation[7] to study a massive volume of radio transcript data. When the agency received complaints about false or misleading ads, staff would request samples of all ad copies published, along with samples and formulas of the product in question. Staff consulted expert federal agencies like the Food and Drug Administration and the Public Health Service for scientific and medical opinions to detect unlawful fraud and abuse.[8] By leveraging these resources and sharpening investigative methods, the agency managed to adapt to the rapid change brought about by radio technology.
Today's technological challenges are even more daunting than those of the radio era. Still, they raise systemic concerns that would have been familiar to enforcers in the 1930s. The common thread is that some technologies can facilitate substantial injury to consumers, are misleading, or may negatively affect competitive conditions.[9] From the rise of the surveillance economy,[10] to companies' widespread application of artificial intelligence,[11] to business models that employ tech to disrupt markets,[12] the shift in the pace and volume of technological changes means that more FTC matters need team members with tech expertise. To stay on top of developments, we can’t rely solely on a case-by-case approach to engaging experts. We need to strengthen our in-house capacity to develop new skills and methods to investigate and mitigate widespread consumer and market harms.
In 2023, the OT will better equip the agency to approach current and future tech threats by building a team of technologists with deep expertise across a range of specialized fields, including data security, software engineering, data science, digital markets, artificial intelligence, machine learning, and human-computer interaction design. This centralized team will be led by the agency’s Chief Technology Officer and deployed to meet interdisciplinary needs across the FTC.
The Office of Technology’s top priority is to work with staff and leadership across the agency to strengthen and support the agency on enforcement investigations and litigated cases. This could mean dissecting claims made about an AI-powered product to assess whether the offering is oozing with snake oil, or whether automated decision systems for teacher evaluations adversely impact employment decisions and make inferences that impact compensation and tenure.[13] We will also keep a finger on the pulse of business model change, like shifts in digital advertising ecosystems, to help the FTC understand the implications on privacy, competition, and consumer protection. We’re working with attorneys and data scientists to decipher the collection and sale of location data and how that data may harm consumers, and to understand the opaque algorithms making decisions affecting millions of consumers. We are tracking emerging technologies like augmented and virtual reality, where immersive environments provide new types of data[14] and ways to collect, use, and make inferences[15] from it. And we are helping the agency [16] such as by requiring companies to implement multi-factor authentication measures that are resistant to phishing[17] or requiring companies to develop a data retention schedule, publish it, and then stick to it.[18]
Beyond enforcement matters, we serve as subject matter experts to advise and engage with FTC staff and the Commission on policy and research initiatives. Our Office of Congressional Relations may need to gather intel for incoming bills or policy research, whether it’s deciphering the latest applications of blockchain[19] or unpacking unfair design practices that can cause substantial physical and other injuries to minors through features that aim to maximize engagement and data collection. We liaise with our Office of International Affairs to cultivate meaningful relationships with international regulatory units – studying, identifying, and integrating best practices from other agencies to best fit the needs and culture of the Commission.
We will also engage and inform outside experts and the public to advance the Commission’s work. Our team recently presented[20] at an Open Commission Meeting on the agency's approach to systemically address data security risks,[21] we have engaged in research[22] and academic conferences,[23] and we published blog posts on the Log4j security vulnerability[24] and effective breach notification.[25]
As we move forward, we will continue to work with Bureau technologists and attorneys who have deep institutional knowledge and enforcement expertise. Today’s milestone is possible because of the contributions of expert technologists and practitioners in the Division of Privacy and Identity Protection,[26] the Office of Technology, Research and Investigation[27], and the Technology Enforcement Division[28] who have already demonstrated the value of technical expertise to bolster the FTC’s casework. We look forward to uniting technologist efforts to better support and cultivate the work of our team to create and scale best practices and promote stronger interdisciplinary collaboration.
Beyond the FTC, the establishment of the Office of Technology is in line with practices of other federal agencies, including the Consumer Financial Protection Bureau, the Securities and Exchange Commission, and the Department of Justice. Law enforcement agencies in other countries have also increased tech capacity, including the United Kingdom,[29] Australia,[30] Canada,[31] France,[32] Japan,[33] Korea,[34] Germany,[35] and the Netherlands.[36] This goes beyond increasing tech capacity to build products and services.[37] We‘re bringing in sharp technologists to translate complex systems, and to work with attorneys to enforce the law and shape policy matters.
Today’s action marks a significant commitment to sustaining a structure for technologists in and across the FTC. A lot has changed since the radio age, but some things haven’t. Whether the underlying technology is a radio[38] or a mobile app[39] or a tracking pixel,[40] the Commission will continue to hold technology companies accountable for complying with the consumer protection and competition laws we enforce. The Office of Technology will play a key role in that effort.
We are hiring technologists and hope you will help us spread the word: https://www.ftc.gov/technologists
-----
Thank you to the current and alumni FTC technologists for their work in building these foundations at the agency and my colleagues[41] for reviewing this piece.
1 William Kovacic & Marc Winerman, Outpost Years for a Start-up Agency: The FTC from 1921-1925, 77 Antitrust L.J. 145 (2010).
[4] Nancy Rockafellar, “In Gauze We Trust”: Public Health and Spanish Influenza on the Home Front, Seattle, 1918-1919, 77 Pac. Nw. Quarterly 3, 104-113 (1986).
[6] Fed. Trade Comm’n, Annual Report of the Federal Trade Commission (1939), https://www.ftc.gov/sites/default/files/documents/reports_annual/annual-report-1939/ar1939_0.pdf.
[7] Fed. Trade Comm’n, Annual Report of the Federal Trade Commission (1935), https://www.ftc.gov/sites/default/files/documents/reports_annual/annual-report-1935/ar1935_0.pdf
[9] Fed. Trade Comm’n, A Brief Overview of the Federal Trade Commission’s Investigative, Law Enforcement, and Rulemaking Authority (2021), https://www.ftc.gov/about-ftc/mission/enforcement-authority.
[10] Press Release, Fed. Trade Comm’n, FTC Explores Rules Cracking Down on Commercial Surveillance and Lax Data Security Practices (Aug. 11, 2022), https://www.ftc.gov/news-events/news/press-releases/2022/08/ftc-explores-rules-cracking-down-commercial-surveillance-lax-data-security-practices.
[11] Press Release, Fed. Trade Comm’n, FTC Report Warns About Using Artificial Intelligence to Combat Online Problems (June 16, 2022), https://www.ftc.gov/news-events/news/press-releases/2022/06/ftc-report-warns-about-using-artificial-intelligence-combat-online-problems.
[12] Press Release, Fed. Trade Comm’n, FTC to Crack Down on Companies Taking Advantage of Gig Workers (Sept. 15, 2022), https://www.ftc.gov/news-events/news/press-releases/2022/09/ftc-crack-down-companies-taking-advantage-gig-workers.
[13] Rashida Richardson, Defining and Demystifying Automated Decision Systems, 81 Md. L. Rev. 785 (2022).
[14] Brittan Heller, Watching Androids Dream of Electric Sheep: Immersive Technology, Biometric Psychography, and the Law, 23 Vanderbilt J. Entm’t and Tech. L. 1 (2021), https://scholarship.law.vanderbilt.edu/jetlaw/vol23/iss1/1/.
[15] See, e.g., Kate Kaye, Overturning Roe Could Change How Digital Advertisers Use Location Data. Can They Regulate Themselves?, Protocol (June 29, 2022), https://www.protocol.com/enterprise/roe-location-data-digital-advertising.
[16] Alex Gaynor, Security Principles: Addressing Underlying Causes of Risk in Complex Systems, Tech@FTC (Feb. 1, 2023), https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/02/security-principles-addressing-underlying-causes-risk-complex-systems.
[17] Press Release, Fed. Trade Comm’n, FTC Takes Action Against Drizly and its CEO James Cory Rellas for Security Failures that Exposed Data of 2.5 Million Consumers (Oct. 24, 2022), https://www.ftc.gov/news-events/news/press-releases/2022/10/ftc-takes-action-against-drizly-its-ceo-james-cory-rellas-security-failures-exposed-data-25-million.
[18] Decision and Order, In re Drizly, LLC, FTC Docket No. 2023185 (Jan. 10, 2023); Decision and Order, In re Chegg, Inc., FTC Docket No. 2023151 (Jan. 26, 2023).
[19] Dylan Yaga et al., Blockchain Technology Overview, Nat’l Inst. of Standards and Tech. (Oct. 2018), https://nvlpubs.nist.gov/nistpubs/ir/2018/NIST.IR.8202.pdf.
[20] Press Release, Fed. Trade Comm’n, FTC Announces Tentative Agenda for December 14 Open Commission Meeting (Dec. 7, 2022), https://www.ftc.gov/news-events/news/press-releases/2022/12/ftc-announces-tentative-agenda-december-14-open-commission-meeting.
[22] Fed. Trade Comm’n, PrivacyCon 2022 (Nov. 1, 2022), https://www.ftc.gov/news-events/events/2022/11/privacycon-2022.
[23] For example, our team participated in the 2023 Enigma Conference. See Enigma 2023 Conference Program (Jan. 24, 2023), https://www.usenix.org/conference/enigma2023/program.
[24] Fed. Trade Comm’n, FTC Warns Companies to Remediate Log4j Security Vulnerability, Tech@FTC (Jan 4, 2022), https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2022/01/ftc-warns-companies-remediate-log4j-security-vulnerability
[25] Fed. Trade Comm’n, Security Beyond Prevention: The Importance of Effective Breach Disclosures, Tech@FTC (May 20, 2022), https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2022/05/security-beyond-prevention-importance-effective-breach-disclosures.
[26] Fed. Trade Comm’n, Bureau of Consumer Protection, Division of Privacy and Identity Protection, https://www.ftc.gov/about-ftc/bureaus-offices/bureau-consumer-protection/our-divisions/division-privacy-and-identity.
[27] Fed. Trade Comm’n, Bureau of Consumer Protection, Division of Technology Research & Investigation, https://www.ftc.gov/about-ftc/bureaus-offices/bureau-consumer-protection/our-divisions/office-technology-research-investigation.
[28] Fed. Trade Comm’n, Bureau of Competition, Technology Enforcement Division, https://www.ftc.gov/about-ftc/bureaus-offices/bureau-competition/inside-bureau-competition/ftc-technology-enforcement-division.
[29] See Stefan Hunt, The CMA DaTA Unit – We’re Growing!, U.K. Competition and Markets Authority (May 28, 2019), https://competitionandmarkets.blog.gov.uk/2019/05/28/the-cma-data-unit-were-growing/; U.K. Competition and Markets Authority, Digital Markets Unit, https://www.gov.uk/government/collections/digital-markets-unit.
[31] Competition Bureau Canada, https://ised-isde.canada.ca/site/competition-bureau-canada/en.
[35] Bundeskartellamt, https://www.bundeskartellamt.de/EN/Home/home_node.html.
[38] Press Release, Fed. Trade Comm’n, FTC, States Sue Google and iHeartMedia for Deceptive Ads Promoting the Pixel 4 Smartphone (Nov. 28, 2022), https://www.ftc.gov/news-events/news/press-releases/2022/11/ftc-states-sue-google-iheartmedia-deceptive-ads-promoting-pixel-4-smartphone.
[39] Press Release, Fed. Trade Comm’n, FTC Finalizes Settlement with Photo App Developer Related to Misuse of Facial Recognition Technology (May 7, 2021), https://www.ftc.gov/news-events/news/press-releases/2021/05/ftc-finalizes-settlement-photo-app-developer-related-misuse-facial-recognition-technology.
[40] Press Release, Fed. Trade Comm’n, FTC Enforcement Action to Bar GoodRx from Sharing Consumers’ Sensitive Health Info for Advertising (Feb. 1, 2023), https://www.ftc.gov/news-events/news/press-releases/2023/02/ftc-enforcement-action-bar-goodrx-sharing-consumers-sensitive-health-info-advertising.
[41] Special thanks to Jason Adler, Lerone Banks, Krisha Cerilli, Gilad Edelman, Mark Eichorn, Patricia Galvan, Alex Gaynor, Nick Jones, Zehra Khan, Tara Isa Koslov, Sam Levine, Josephine Liu, Erik Martin, Varoon Mathur, Kevin Moriarty, John Newman, Rashida Richardson, Robert Swenson, Holly Vedova, Ben Wiseman, and Daniel Zhao. | Emerging Technologies |
Artificial Intelligence Future We live in an increasingly automated world, and it’s becoming more and more common to see robots and AI being employed in different aspects of our lives. But what does the future of AI & robotics look like? In this post, we'll explore some of the ways that AI & robotics could shape our lives in the coming years. 1. Autonomous Vehicles
Autonomous vehicles are cars that can drive themselves with minimal human interference. Such vehicles use advanced artificial intelligence software, cameras, sensors, and 3D mapping technology to make decisions about stopping, speeding up, or avoidance of obstacles on the road. Researchers are also working on driverless car-sharing services that would allow people to hail a car from their mobile device and have it drive them directly to their destination without having human drivers in the car at all.
2. Smart Homes Smart homes refer to homes equipped with advanced technologies that are connected via the internet or other methods allowing for users to control various parts of their home with a smartphone or voice command system. Smart homes enable individuals to automate lighting fixtures, security systems, thermostats and other devices using AI technology such as voice recognition software or through apps on their phone.
3. Wearable Technology
AI-enabled wearable technology is emerging as one of the most promising technologies out there right now. This includes smart watches which can track vital signs such as heart rate, steps taken and calories burned; fitness trackers which count your daily activities; virtual reality headsets; smart clothing which can take body measurements such as temperature;and even devices implanted into your skin which can measure how much energy you’re expending during exercise so you can tailor your activity accordingly to optimize performance.
4. Robotics-as-a-Service (RaaS)
Robotics-as-a-service robots provide customers with access to robotic solutions in areas such as healthcare, elderly care, food production and logistics through rental agreements instead of buying them outright. This allows businesses access to cutting edge technology without having to invest large amounts of money upfront in purchasing robot hardware and software solutions all at once — rather they pay only when they need these services on a project basis while still getting similar benefits from owning robots outright.
5. Augmented Reality (AR) & Virtual Reality (VR) Augmented reality allows users to interact with digital objects simply by waving their hands or controlling them via gestures instead of a controller or keyboard – think playing a game like Pokemon GO! It overlays virtual objects onto real spaces so that users feel like they’re actually interacting with those objects rather than just seeing them projected onto reality artificially— opening up new possibilities for entertainment experiences both at home and outside through apps like Snapchat lenses and immersive museum tours where one can explore ancient ruins virtually while walking outdoors in real time! Virtual reality immerses its users into virtual environments where they get lost inside these creations instead o f relying on physical controllers or anything else - making VR an extremely powerful tool for simulation training scenarios where students can learn intricate concepts without any risk! The Evolution of AI Artificial Intelligence (AI) is one of the most talked-about technologies in the World today. In its essence, AI is a combination of hardware and software to create intelligent machines that can learn from environment and experience to perform tasks. When it comes to its applications, it has opened up new possibilities and made life easier for many people. It’s no wonder why many companies are focusing on AI technology to expand their capabilities and make their services or products better.
But just like any developing technology, AI also has an interesting evolution story behind it. Here’s how AI has evolved over time.
1. 1940s – All-purpose machine: The first recorded use of the term “artificial intelligence” was from a famous paper published in 1955 by John McCarthy at Dartmouth College, who proposed "Research into the Mathematical Foundations for Automated Reasoning". This marked the starting point of modern AI research.
2. 1950s and 1960s – Early Artificial Intelligence Systems: Throughout this period, some early AI systems were developed such as Samuel's Checkers program which uses a decision tree algorithm to play checkers against you in 1957, Eliza which simulates conversation with a psychotherapist through keywords searches in 1966 and Shakey robotic which learns how to navigate with natural language processing in 1968, etc
3. 1970s – Expert Systems Era: The decade saw more development focused on more specific tasks; expert systems emerged where simpler AIs had stopped before, making complex decisions based on large data sets such as medical diagnosis systems or other programs that could predict market changes
4. 1980s–Teaching Machines: In 1980, five years after the first laptop computer was introduced by IBM, machines learned from each other which brought about powerful expert systems designed to be self-learning machines
5. 1990s–The Data Mining Boom: Big data mining tools started emerging around this period as researchers began developing algorithms that could detect patterns in large databases using Neural Networks (NN). These NN allowed us to map millions of possible input/output combinations with multiple hidden layers between them making them very powerful tools that mimic human behavior when fed with enough data
6. 2000 Onwards– Deep Learning Algorithms & Cognitive Computing Take Over: We entered an era full of innovation through deep learning algorithms and cognitive computing when Google acquired DeepMind technology thus ushering new levels of technological power never seen before
7. Today - Reinforcement Learning & Narrow AI Coming Together: Starting from 2015 onwards we saw reinforcement learning being implemented across different sectors; ranging from robotics & autonomous vehicles (AV) all the way up until gaming activities are involved opening up new possibilities for scaling artificial general intelligence (AGI). As for narrow AI applications start becoming more widespread we witness traditional machine learning models improving over time attaining even better performance than deep learning counterparts due various reasons such as proper optimization procedures & interpretability features presented by certain models allowing users gain greater insights over delicate patterns present inside datasets Why Is Artificial Intelligence Important? Artificial Intelligence (AI) technology has been around for decades, but in recent years its capabilities and applications have grown exponentially. AI is becoming critical to a wide variety of tasks, from eSports gaming to self-driving cars. But why is artificial intelligence important? That’s what we’ll explore here as we look at the big ways that AI is changing our world and how it can give humans an edge over even their toughest competitors.
1. Increased Efficiency
One of the biggest advantages of using AI is its ability to rapidly learn. This allows machines to run faster, with fewer errors than ever before. With AI processing large amounts of data almost instantly, businesses can make decisions quicker, react faster to customer needs, and identify potential problems quickly. In cases where certain processes would take too long for a human alone, AI can expedite them drastically while maintaining accuracy.
2. Improved Accuracy
Another way that AI can benefit businesses is by improving accuracy in data analysis and decision-making processes. By leveraging types of machine learning algorithms like deep learning networks and natural language processing (NLP), computers are able to analyze large amounts of data accurately within milliseconds or even seconds—things that may have taken hours for humans to achieve previously. This allows businesses to make more informed decisions quickly and effectively.
3. Lower Costs
The use of AI makes it possible for businesses to operate more cost-effectively as well by automating tasks that used to require complex and costly manual labor or expensive software solutions. Automation not only increases efficiency and scale, but it also helps reduce costs significantly while improving overall performance levels in a defined process environment such as customer service interactions or product assembly lines.
4. Increased Productivity
AI can facilitate increased productivity by giving people access to resources they wouldn’t otherwise have available—and often times these resources provide insights you wouldn't get any other way. For example, data mining systems allow organizations to sift through huge datasets quickly and then visualize them so employees can see relationships they couldn’t before—helping create new opportunities where there were none before and increasing productivity along the way The Impact of AI on Society The use of Artificial Intelligence (AI) is rapidly increasing and its impact on society will no doubt be widespread in the near future. AI technology can be used to improve healthcare, increase safety, optimize transportation, and more. In this post, we’ll explore the potential impacts of AI on our everyday lives.
1. Improved Efficiency and Productivity
One major impact that AI has had on society is in terms of efficiency and productivity increases. AI-driven algorithms can help automate mundane tasks such as data entry, freeing up employees' time to focus on other tasks that require more creativity and innovation. Automating these processes also helps to reduce errors and human bias while providing companies with insights into customer data in a cost-effective manner.
2. Safety Enhancements
Safety is an important aspect of any society, especially when it comes to transportation or industrial applications. The utilization of AI for autonomous vehicles allows for increased safety features like braking assistance in case of danger or even accident prevention using facial recognition technologies. Additionally, industrial uses of AI can detect certain dangerous behaviors before they even occur by monitoring workers’ actions through motion-capture cameras .
3. Humanitarian Effects
AI will also have a profound effect on resolving humanitarian issues like poverty and economic inequality thanks to machine learning algorithms that can sort through huge amounts of data simultaneously looking for patterns and correlations; this could help inform government policies about social good initiatives such as education reform or developing infrastructure in remote locations AI in the Near Future Artificial intelligence has been at the forefront of technology for a number of years now, and it looks set to become even more important in the coming years. To prepare for this, it is necessary to understand how AI may shape our lives in the near future, and here are just some of the ways that AI could be used:
1. Automation
AI can be used to automate everyday tasks, from driving cars to answering customer service questions. This will reduce both human labor costs and risks associated with certain jobs. In addition, AI-driven automation systems will be able to identify potential problems much faster than humans currently can.
2. Healthcare
AI has already started being used in healthcare for diagnosis and treatments via pattern recognition systems and machine learning algorithms that are designed to provide personalized treatment plans based on patient data. It is also possible that AI-based robots could eventually replace doctors in many medical procedures such as surgeries or diagnostics.
3. Manufacturing
The use of robotic automation combined with AI technology could revolutionize manufacturing processes: AI robots will be able to identify defects in products quickly and efficiently, allowing companies to improve their quality control while reducing waste materials and labor costs.
4. Retail & Ecommerce
AI technology can be leveraged in ecommerce applications such as product recommendation engines which are powered by machine learning algorithms that analyze user data such as browsing habits and purchase history. This can help businesses offer personalized experiences for customers which can lead to higher conversion rates.
5. Security & Surveillance
AI can also improve security measures by providing facial recognition systems which are able to quickly recognize individuals or objects from video footage with a high level of accuracy—making it easier for companies or law enforcement agencies to identify potential threats or suspicious behavior more quickly than ever before. Will AI Take Over the World? AI is developing rapidly. But will it become so powerful that it takes over the world one day? We can’t be sure, but one thing’s certain – we need to start preparing for that eventuality. Here are some steps to take if you want to prepare for when AI starts taking over the world:
1. Learn About Artificial Intelligence
There are different types of AI and each has their own applications and implications for the future. It’s important to have a basic understanding of the different types of AI, how they work, and what potential risks they might bring with them. The more you know about AI and its implications, the better prepared you can be for its future influence on human life.
2. Develop a Strong Understanding of Technology
AI is closely linked with technology, so developing your skills in this area is an essential step to becoming prepared for its impact on our world. Take courses or read books to learn more about technology and its various elements—including hardware, software, databases, machine learning algorithms, neural networks, etc.—and familiarize yourself with anything related to digital channels like social media marketing or SEO strategies.
3. Stay Current On Innovations in Artificial Intelligence
The field of artificial intelligence is constantly evolving so it’s important to stay current on new trends and advances in the industry. Follow AI-related news sources that cover emerging technologies and developments related to deep learning or cognitive science so you can remain informed as changes occur within the industry.
4. Educate Yourself on Ethical Issues Related To Artificial Intelligence
AI has potential ethical implications for society that must be taken into account when discussing the possibility of AI taking over the world one day soon. Research best practices on ethical considerations related to data privacy regulations like GDPR or philosophical topics like consciousness or sentience among robots and other intelligent systems so that you can understand these complexities when considering AI’s long-term impacts on society at large Preparing for the Future of AI Artificial intelligence is one of the biggest trends in technology today, and it’s shaping the future of humanity. As AI becomes increasingly sophisticated, you will need to stay on top of the latest advancements to prepare for what lies ahead. Here’s how you can get ready for the future of AI:
1. Get Educated on AI
AI is an ever-evolving field, so it's important to stay up-to-date with the latest developments in order to understand AI and its implications. Begin by reading up on topics such as machine learning, artificial neural networks, robotics, natural language processing, deep learning, and computer vision. There are plenty of free online courses that provide comprehensive education about AI.
2. Keep Up with Industry News
In order to stay informed about advances in AI technology, make sure that you’re subscribed to current threads about this topic. Look for discussion boards focused on artificial intelligence where experts share their ideas and reports about new breakthroughs in this field. Take advantage of newsletters from tech companies and major news outlets – these contain valuable information regarding changes in AI research and technology developments that may be applicable to your industry or sector.
3. Test New Technology
One way to update yourself on what's happening with AI technology is by testing out any new tools or products launched by tech giants like Google or Apple before they hit the market. You'll be able to gain an understanding of how these tools work before anyone else does and use them strategically at your organization if needed.
4. Investigate Ethical Questions Associated with Artificial Intelligence
There are plenty of ethical questions surrounding artificial intelligence and machine learning algorithms, particularly when it comes to decisions made without human intervention or input. It’s important that you research how organizations can create responsible systems for developing new uses for AI so that the benefits outweigh any potential risks posed by these technologies (such as privacy concerns).
5 Network with Professionals who Work with AI
Maintaining a professional network is essential if you want to stay ahead of emerging trends in artificial intelligence research and applications – especially if you plan on working directly with this technology in your career path ahead. Attend events related to digital innovation or technology that focus specifically on topics related to machine learning processes or data analysis techniques so that others can learn from your experience too! Is AGI a Threat to Humanity? Artificial General Intelligence (AGI) is no longer seen as a far-fetched concept in the world of science, technology and now society at large. In fact, many experts believe that AGI is here to stay – but is it a threat to humanity? Some people believe so, pointing out potential issues facing us in the future when AGI is more prevalent. So how do we prepare ourselves for a world with AGI? Read on to find out.
1. Understand What AGI Is
Before we can properly assess any potential threats posed by AGI, it’s important to understand what it actually is and how it works. Like regular AI systems, AGI systems are programmed with algorithms and data sets which they use to develop an understanding of their environment, allowing them to make decisions autonomously without human input. The key difference between “regular” AI and AGI is that with the latter there’s a more general application – essentially, any task under its remit can be carried out.
2. Identify Its Uses
Identifying both potential “good and bad” uses for AGI helps us to accept its implications for good or ill as well as plan ahead for potential issues. For example, one potentially useful application of AGI could be helping humans perform complex tasks such as medical diagnosis or forecasting economic trends being just two cases in point; meanwhile less user-friendly applications could involve autonomous weapons or surveillance technologies capable of spying on entire populations.
3. Study the Regulations Governing Its Use
It doesn’t take much imagination to think of ways that unregulated use ofAGI could create havoc in our lives if things go wrong; indeed this fear was recently given official weight when 26 Governments recently signed up on a Cooperation Framework Agreement at the United Nations concerning Lethal Autonomous Weapons Systems (LAWS). This outlines regulations pertaining to use of LAWs only but serves as an indicator that any other technology exhibiting characteristics associated with laws should also be regulated accordingly too – including AGIs using autonomous military forces in real battlefield situations.
4. Educate Yourself about Potential Negative Implications
By understanding potential negative implications posed by the emergenceofAGIs, such as higher risk of job displacement in certain sectors caused by automation due to cheaper computing capabilities; environmental damage due to lack of monitoring from responsible entities etc., one equips oneself with requisite knowledge necessary for making informed decisions about mitigating risks versus harvesting its rewards positively instead where appropriate/possible -the latter including possible assistance from smart robots performing certain timesaving tasks normally assigned/reserved for humans instead – leading ultimately towards more efficiency gains for organisations deploying them within workplace multicultural milieu settings thereby further developing global businesses optimally performance levers against their competition similarly technologically represented(read: Google algorithms generated resultant information disseminated offerings / alternate revenue generation strategies). How Will We Use AGI? Artificial General Intelligence (AGI) is a branch of artificial intelligence focused on achieving human-level intelligence that can be applied to any problem. AGI has been described as the "Holy Grail" of AI research, as it aims to replicate the full range of human cognitive abilities, rather than just being able to complete narrowly defined tasks like current AI. In light of this, it's easy to see how AGI could revolutionize how we interact with the world and handle difficult, time-consuming tasks. Here's a list of ways that AGI will be used in our modern lives:
1. Automate Everyday Tasks
From scanning documents and emails to talking with customers or doing research, AGI will allow us to automate common everyday tasks that currently require manual labor from humans. This will open up more time for humans to focus on creative endeavor rather than mundane manual activities.
2. Create More Accurate Diagnoses for Athletic & Medical Care
Current medical technologies lack the ability to diagnose conditions accurately based on symptoms alone which can lead to misdiagnosis and subpar care as a result. With AGI, we can build models that are better suited at diagnosing conditions by detecting patterns related to symptoms. This will improve both athletic and medical care by providing practitioners with more accurate diagnoses based on symptoms they observe in patients or athletes they work with.
3. Utilize Natural Language Processing
With natural language processing (NLP), AGI solutions can analyze vast amounts of data quickly and provide meaningful insights into businesses, markets and trends in an efficient manner. NLP enables machines to understand long pieces of text or instant messages and respond appropriately in real-time just like humans do naturally when interacting with each other in conversation.
4. Develop Self-Driving Cars & Autonomous Drones
Another potential application for AGI is in self-driving cars and autonomous drones which require advanced object recognition algorithms along with speed tracking capabilities powered by sophisticated AI systems capable of predicting what comes next after analyzing external data points such as weather conditions, road blocks or collisions on the highway ahead etc.
5. Enhance Security & Authentication Solutions
AGI combined with facial recognition can improve authentication solutions by allowing secure access to computers without passwords. This is more reliable than traditional password logging which can be easily hacked. AGI can also be used to detect and prevent cyber-attacks by recognizing patterns in malicious activities and alerting the user or system administrator before any damage is done.
In conclusion, AGI has the potential to revolutionize how we interact with the world and handle difficult, time-consuming tasks. From automating everyday tasks to developing self-driving cars and autonomous drones , AGI will be a key component in the future of AI and robotics.
6. Improve Education
AGI can be used to improve education by providing personalized learning experiences tailored to each student’s individual needs and abilities. AI-powered tutoring systems can provide real-time feedback and guidance, helping students learn more effectively and efficiently. AGI can also be used to create virtual classrooms where students from all over the world can interact with each other in a safe and secure environment.
Conclusions
Artificial Intelligence has the potential to revolutionize how we interact with our environment. It has already impacted many aspects of everyday life and its use is expected to expand further in the future. As AI continues to become more powerful, it can be used to improve healthcare, agriculture, transportation, manufacturing, education and many other areas. Additionally, AI can create entirely new markets and opportunities for people around the world. | Emerging Technologies |
China has a "stunning lead" in 37 out of 44 critical and emerging technologies as Western democracies lose a global competition for research output, a security think tank said on Thursday after tracking defence, space, energy and biotechnology. From a report: The Australian Strategic Policy Institute (ASPI) said its study showed that, in some fields, all of the world's top 10 research institutions are based in China. The study, funded by the United States State Department, found the United States was often second-ranked, although it led global research in high-performance computing, quantum computing, small satellites and vaccines. "Western democracies are losing the global technological competition, including the race for scientific and research breakthroughs," the report said, urging greater research investment by governments.
China had established a "stunning lead in high-impact research" under government programs. The report called for democratic nations to collaborate more often to create secure supply chains and "rapidly pursue a strategic critical technology step-up." ASPI tracked the most-cited scientific papers, which it said are the most likely to result in patents. China's surprise breakthrough in hypersonic missiles in 2021 would have been identified earlier if China's strong research had been detected, it said. "Over the past five years, China generated 48.49% of the world's high-impact research papers into advanced aircraft engines, including hypersonics, and it hosts seven of the world's top 10 research institutions," it said.
China had established a "stunning lead in high-impact research" under government programs. The report called for democratic nations to collaborate more often to create secure supply chains and "rapidly pursue a strategic critical technology step-up." ASPI tracked the most-cited scientific papers, which it said are the most likely to result in patents. China's surprise breakthrough in hypersonic missiles in 2021 would have been identified earlier if China's strong research had been detected, it said. "Over the past five years, China generated 48.49% of the world's high-impact research papers into advanced aircraft engines, including hypersonics, and it hosts seven of the world's top 10 research institutions," it said. | Emerging Technologies |
Chinese-owned internet company Baidu Inc. is reportedly launching a ChatGPT-style bot in March to merge with the company’s search engine eventually, an unnamed source familiar with the matter told the Wall Street Journal.
Baidu, known as China’s version of Google, reportedly plans to incorporate artificial intelligence into its online search engine, making it one of the few tech companies worldwide to implement the technology.
The news comes following a years-long effort by Baidu to research AI technology costing billions of dollars in the process. The internet company’s latest bot will be based on its Ernie system, a large-scale machine learning model that Baidu has trained over several years to take in data, and will be the foundation for the new tool, the Wall Street Journal reported.
Baidu did not immediately respond to Gizmodo’s request for comment.
Baidu is rolling out its AI-powered chatbot as the state continues to censor the internet and block access to ChatGPT, a system that can generate text based on a prompt for things like emails, scientific essays, school papers, poetry, malware coding, and answer questions. However, Baidu will limit the Chatbot’s accessibility to align with China’s censorship rules and will restrict its outputs to avoid hate speech and topics considered to be politically sensitive, the source told the Wall Street Journal.
The debate over AI-based systems like ChatGPT has spread to schools that have banned the use of ChatGPT in several countries including the U.S., France, and India as teachers cite concerns that the AI technology will provide students with the opportunity to plagiarize essays, Yahoo! Finance reported. It’s also received criticism for its potential to take over jobs and its potential to release biased or inaccurate information.
A top university in France, Sciences Po, sent an email to staff and students on Friday, writing, “Without transparent referencing, students are forbidden to use the software for the production of any written work or presentations, except for specific course purposes, with the supervision of a course leader,” Reuters reported.
Despite its shortcomings, the U.S. government and Beijing are competing for leadership in introducing ChatGPT-style technology to emerging technologies. China has yet to release the name of its new ChatGPT-style bot but will reportedly supply users with search results similar to OpenAI’s platform.
Baidu’s chief executive, Robin Li, told employees last month that the new technology represents new opportunities for the company, according to a transcript the Wall Street Journal reported. “We have such cool technology, but can we turn it into a product that everyone needs?” Li said, adding, “This is actually the hardest step, but also the greatest and most influential.” | Emerging Technologies |
WHITE PLAINS, N.Y., Oct. 2, 2023 /PRNewswire/ -- SMPTE®, the home for media professionals, technologists, and engineers, has announced the partner companies that will be powering the newly added Emerging Technology Showcase during the SMPTE 2023 Media Technology Summit, which brings together global industry leaders to share their latest research and solutions to industry issues while providing a space for professionals to network and learn.
This stage, visible from all vantage points among the exhibitors in the Solutions Hub, will highlight groundbreaking solutions to the industry's biggest problems. Presenters and panelists from many of SMPTE's exhibiting companies will be able to discuss and demonstrate their solutions with the support of the leading-edge technologies provided by the Summit technology partners: Cinionic, QST LED, The Studio-B&H, and AI-Media.
Projection partner Cinionic will be presenting next-generation laser projectors, including the latest model dedicated to postproduction, which will tower above the exhibits, making the presentations visible to all attendees in the Solutions Hub. This state-of-the-art model was designed with critical image quality and postproduction workflows in mind.
LED partner QST LED will provide the backdrop for the Showcase Stage – HoloDeck™, the first LED exclusively built for in-camera visual effects (ICVFX) and virtual production, with the fourth-generation PantaBlack™ and a unique aluminum module that brings a myriad of benefits to ICVFX, including incredible contrast ratio and low reflectance, shadeless/maskless technology, aluminum heat-sink technology that eliminates heat spots and reduces need for secondary color calibration, and precise screen flatness and fine tolerance.
Live production partner The Studio-B&H will provide live capture systems for both the Emerging Technology Sessions and the General Conference Sessions. They will also be showcasing a complete virtual production solution including an LED volume, a versatile media server, camera tracking, and an image-based lighting solution that can be easily deployed in the field for a wide range of applications.
Live captioning partner AI-Media will be showcasing the LEXI Automatic Captioning & LEXI Viewer, which can easily push captions to event displays, as attendees will see in both the Emerging Tech and General Sessions. Powered by AI, LEXI delivers results that rival human captions and seamlessly integrates with LEXI Viewer, an HD-SDI captioning device, for event presentations.
A full schedule of exhibiting companies and invited presentations will take place at the Emerging Technologies Showcase, including:
- AI-Media - Uninterrupted Captioning: Harnessing hybrid cloud for Seamless Disaster Recovery
- Avid Technology - Assisting the Creative Community
- China Research Institute of Film Science and Technology - Chinese Solution for LED Cinema Display Technology
- Flanders Scientific, Inc. - QD-OLED for Reference Grade Large Format HDR Monitoring
- Immersive Dimension - 3D Virtual Production Workflow with ST-2110
- Like Minded Labs - Unifying Creativity: A New Horizon in Collaborative Innovation
- Matrox Video - Asynchronous Processing – A Paradigm Shift for Live Production: Anatomy and Consequences
- Megapixel - The Benefits of 100G SMPTE ST 2110 as Infrastructure for Broadcast, Virtual Production and Live Events
- Noah Kadner - Sustainable Virtual Production Techniques
- Riedel Communications - Standards Based Scalable SDI/ST 2110 Options are on the Horizon
- TAG Video Systems - Content Matching and Measurement using SMPTE ST 2064-based Fingerprinting
The Emerging Technology Showcase Sessions on Tuesday and Wednesday, Oct. 17-18, are included with any conference registration, including the Solutions Hub Pass.
Current Summit sponsors include Signiant, Dell Technologies, Warner Bros. Discovery, Adeia, Sony, Arista Networks, Cinionic, QST LED, The Studio-B&H, AI-Media, TrackIt, and Riedel Communications.
More information about the SMPTE 2023 Media Technology Summit can be found here.
Those seeking to attend the Summit can register here.
About SMPTE
SMPTE is the global society of media professionals, technologists, and engineers working in the digital entertainment industry. The Society fosters a diverse and engaged membership from both the technology and creative communities, delivering vast educational offerings, technical conferences and exhibitions, the SMPTE Motion Imaging Journal, and access to a rich network of colleagues essential to career success. As an internationally recognized standards organization, SMPTE also provides a vital technical framework of engineering standards and guidelines that allow the seamless creation, management, and delivery of media for art, entertainment, and education worldwide.
Learn more about SMPTE at smpte.org.
All trademarks appearing herein are the properties of their respective owners.
SOURCE SMPTE | Emerging Technologies |
Subsets and Splits