An FP Analytics synthesis report, produced with support from the Qatar Ministry of Communications and Information Technology
Productive applications of artificial intelligence (AI) are increasing rapidly in number and scope worldwide, holding tremendous potential but also carrying notable risks. Innovations such as ChatGPT launched AI into the mainstream in 2023, showcasing the technology’s capacity to revolutionize entire industries, from content creation to translation services, while also producing significant repercussions in some instances, such as the displacement of workers and violations of data privacy. As AI continues to demonstrate beneficial uses, from optimizing logistics to enhancing research, new risks also continue to emerge, from discriminatory algorithms to enhanced cyber threats. While there is cause for optimism about the constructive impacts the technology can yield, palpable concern remains as business leaders and policymakers assess how to leverage AI’s positive potential while controlling for significant risks. Today, policymakers and business leaders are grappling with the rapid pace of AI development, the spillover of geopolitical tensions into the AI market, and the concentration of that market in the hands of a few companies and countries. These rapidly changing dynamics necessitate cross-sectoral dialogue and collaboration to harness the opportunities while mitigating the risks.
To address these issues, Foreign Policy, in partnership with the Qatar Ministry of Communications and Information Technology, hosted a high-level discussion under the Chatham House Rule, Strengthening Actionable Intelligence from AI, Big Data, and Advanced Tech, on the sidelines of the 2024 World Economic Forum in Davos, Switzerland. The discussion featured private-sector leaders and industry representatives working at the cutting edge of AI, big data, and advanced technologies. These individuals brought experience from a wide range of sectors, including telecommunications, cybersecurity, education, finance, information technology, and sustainability, generating cross-sectoral insights into the state of AI in a variety of contexts.
The participants conveyed an outlook of promise and potential peril across industries, warned about the increasing commercial and geographic concentration of the AI market, and highlighted the need for forward-thinking regulatory guardrails to scale constructive use cases and minimize potential risks. There was broad consensus that the technology is at a critical pivot point, one that necessitates increased investment, political will, and cooperation between industry and government across borders. As one participant concluded, “This is a shared responsibility. All of us have to invest.”
Perspectives on AI’s Promise and Potential Peril
Participants widely acknowledged the near-ubiquity of AI and associated technologies across their spheres, noting how these tools are driving new business, altering existing operations, and creating entirely new lines of business. For example, a 2023 survey of global business leaders found that nearly a third of companies are already using generative AI tools, such as ChatGPT, for at least one function, with a quarter of C-suite executives saying that they use generative AI in their own work. However, a majority of roundtable participants cited considerable uncertainty about the future of AI, and examples of negative uses today, as reasons for concern about the technology’s trajectory, with particular emphasis on the perverse incentives of profit maximization. One discussant pointed out that while AI is being used for a range of health care applications, such as disease identification and diagnosis, there have also been prominent cases of health care providers using AI to identify individuals for whom it would be profitable to deny care, ultimately worsening health outcomes rather than improving them. Elsewhere, in cybersecurity, AI has enabled real-time detection of cyber threats at an unprecedented pace and scale, but participants warned that it is similarly potent in the hands of attackers, exposing new vulnerabilities and amplifying mis- and disinformation. Addressing AI’s application across the media landscape is particularly pressing given that, this year, nearly half the world’s population will head to the polls, providing fertile opportunities for AI-augmented mis- and disinformation to have real-world impacts. This balance between societal benefit and societal risk played out again and again in the discussion.
Overall, the dialogue conveyed that, in each field, an immense range of potential impacts remains possible, with outcomes hanging in the balance and heavily influenced by how governments and industries choose to approach AI in the coming years. By and large, participants erred on the side of caution, particularly with respect to national security and data privacy. One use case combined surveillance measures with AI to alter consumer behavior and reduce pollution, but such usage raised clear concerns about the risk of misuse; indeed, several of the represented companies expressed caution about working in some countries out of that concern. Inspiring use cases for AI also shone through, however, such as its ability to increase the efficiency and scalability of climate change solutions or to make education more accessible. Recognizing the imperative to maximize AI’s benefits while controlling for its risks, the discussion turned to the larger structural challenges facing the industry in the years ahead.
Contending with the Concentration of AI Power
One of the more significant issues identified by the discussants was the considerable commercial and geographic concentration of the AI market. From a commercial perspective, discussants expressed particular concern that the AI supply chain is heavily concentrated in a few North America-based companies, with select others supplying the critical chips upon which much AI-related computation depends. As one participant put it, “It sounds less like a market and more like a cartel.” This concentration could shape the future direction of AI development, steering applications toward the goals of a few companies while limiting the ability to control for societal risks or to foster competition and broad-based innovation. Another discussant noted that the vast majority of funding to AI-focused startups (as much as 80 percent of Series A funding) is being supplied by these same few companies. The result is a fundamental misalignment between the narrow goals of today’s primary purveyors of AI and the far wider world of potential AI applications that could render the maximum benefit to humanity.
Discussants likewise noted that the concentration of AI capacity and decision-making power in a select few countries could further divide the world between haves and have-nots. The United States and a select few wealthy or powerful states, such as China, the United Kingdom, Japan, and Germany, enjoy a far greater ability to encourage and develop AI-based companies within their territories, while much of the developing world stands to become a consumer of foreign AI-derived products and services. As in the commercial marketplace, these advanced countries are primed to direct AI policy development and use. One participant added that wealthier countries are likely to have the capacity to determine how companies use their citizens’ data and what data may be localized versus intermingled, whereas developing countries with underdeveloped legal and regulatory frameworks may have relatively limited capacity to harness the potential and guard against the risks of AI. Participants also noted that evolving great-power competition among the United States, China, and other countries, as well as market fragmentation stemming from digital sovereignty measures, will further shape these dynamics and remain issues of concern.
Building the Right Guardrails
Acknowledging the promise and potential peril of AI, and in the context of market concentration, the discussion returned repeatedly to the issue of regulation and the need to establish the right guardrails for AI within specific industries, at the country level, and around the world. Tellingly, nearly all participants agreed that existing regulatory efforts related to AI are inadequate, with only a single participant believing that current regulation is headed in the right direction. This overall negative picture emerged despite recognition of the positive accomplishments of several countries, including the United States, the United Kingdom, and members of the European Union. Several participants were complimentary of the United States’ practical, flexible approach to AI regulation, citing its ability to encourage innovation, while others favored the more stringent, prescriptive protections established in the United Kingdom and the European Union, such as through the EU AI Act. The task for policymakers is finding the right balance to encourage innovation and development while maintaining adequate protections against misuse, a task made significantly more challenging by the rapid pace of AI development and its proliferating use cases. Multiple speakers noted the need to accelerate the rate at which governments develop AI regulation, suggesting the need for more adaptive policy, less bureaucracy, and the closer incorporation of technical experts, innovators, and other stakeholders directly into policy formulation. Several participants acknowledged the positive steps taken by the U.S. National Institute of Standards and Technology to create an AI Safety Institute and Consortium, which, in collaboration with industry, will work to develop guidance and benchmarks for evaluation, publish red-teaming guidelines, and improve content authentication, among other activities.
Successful regulation of the AI marketplace also depends on a number of other factors, including more effective international cooperation, the development of baseline definitions and standards, and the mobilization and sustainment of sufficient political will. Participants discussed the lack of an appropriate international body to regulate AI’s operation across borders, including to facilitate the safe management of data, with one speaker suggesting the need for an oversight organization akin to the International Atomic Energy Agency. Likewise, discussants identified establishing a shared definition of what systems and applications constitute “AI” as a critical prerequisite to more effective multilateral cooperation.
Finally, several participants pointed to the mobilization of political will, at the local and international levels, as critical to the success of future regulatory frameworks. They cited the positive outcome of the COP28 climate conference as evidence that, however difficult formulating multilateral agreement may seem, there is potential for incremental, globally beneficial action on seemingly intractable policy challenges. In the case of AI, such cooperation will be critical to ensuring inclusive, sustainable, and safe use for decades to come.
By Phillip Meylan, Affiliate Researcher, FP Analytics; photography by Thomas Oswald. This synthesis report was produced by FP Analytics, the independent research division of The FP Group, with support from the Qatar Ministry of Communications and Information Technology. FP Analytics retained control of this report. Foreign Policy’s editorial team was not involved in the creation of this content.