A synthesis report by FP Analytics with support from Lenovo

Artificial intelligence (AI) is rapidly transforming societies and economies worldwide, becoming a pivotal driver of innovation, growth, and competitiveness. As AI continues to shape our collective future, countries and companies are developing AI at different rates and under fragmented regulatory schemes. Amid this global fragmentation, however, international efforts are emerging to harmonize AI governance frameworks and ensure that AI systems contribute to sustainable development for all. At the recently concluded 80th United Nations General Assembly (UNGA), Member States agreed to establish the United Nations International Scientific Panel on AI and the Global Dialogue on AI Governance. These mechanisms build on and operationalize the Global Digital Compact, a comprehensive framework for digital cooperation and the governance of artificial intelligence, a year after its adoption in September 2024.

Against this backdrop and on the sidelines of UNGA80, Foreign Policy, in partnership with Lenovo, convened a high-level, closed-door roundtable, AI for All: Building Global Capacity to Unlock AI Innovation, bringing together policymakers, academic experts, and industry and civil society leaders to explore practical approaches to fostering AI innovation while ensuring all countries reap the range of potential benefits AI offers. The key takeaways from this roundtable align with the thematic focus and agenda of the forthcoming Global Dialogue on AI Governance, offering insights into the challenges and opportunities to close disparities in AI access and capacity, which particularly affect low- and middle-income countries.

The roundtable brought together a diverse group of participants from governments, multilateral institutions, the private sector, and civil society.

Discussion Topics

Stakeholder coordination and collaboration

  • How can policymakers and industry stakeholders collaborate on establishing global norms and governance frameworks as AI development continues to accelerate? 
  • What practical models of collaboration among governments, industry, civil society, and academia can accelerate AI adoption while avoiding duplication of efforts?

Infrastructure for scaling

  • What foundational enablers should be prioritized to ensure that AI-driven growth is inclusive, especially for emerging markets (e.g., broadband connectivity, digital literacy, and data infrastructure)?
  • How can investments in infrastructure and skills be paired with responsible AI practices to align with public trust and competitiveness while safeguarding against misuse?

Economic components of AI acceleration

  • What role should governments and stakeholders play in designing policies that foster innovation and competitiveness while safeguarding against misuse?
  • How can AI-driven growth strategies and industrial policy foster broader societal benefits, such as job creation, inclusion, and sustainability?

Roundtable participants discussed expanding the AI access and capacity of low- and middle-income countries.

Key Takeaways

Policymakers need to move beyond the binary “to regulate or not to regulate” debate and adopt more tailored approaches to AI governance. As one participant noted, choosing not to regulate AI is itself a regulatory decision. Acknowledging this, participants emphasized the need for localized and adaptive governance approaches. For instance, regulatory sandboxes create controlled environments where policymakers can test AI systems before market release, allowing for targeted policy responses. Participants highlighted that sector-specific regulations already exist in areas like financial services and health care; the challenge is adapting and clarifying their scope in the context of AI. Concrete suggestions included clarifying data protection frameworks for AI’s use of personal data and updating election laws to address AI-generated deepfakes during campaigns. A key unresolved gap identified was the AI liability regime: accountability must be clarified between model developers and deployers, with one participant suggesting the creation of liability safe harbors to encourage transparency in the development of AI models.

Building infrastructure, institutional foundations, and capacity for AI adoption remain global challenges. The lack of AI readiness with respect to governance strategies, talent, and infrastructure reveals the need to build capacity at multiple levels, from skills development and digital public infrastructure to governance institutions. Expanding affordable access to compute infrastructure in low- and middle-income countries will enable a range of local actors to harness AI’s benefits and contribute to its development. In terms of governance capacity, a participant noted that only 28 of the 80 countries assessed had a national AI strategy, and just 20 had regulatory capacity. To strengthen governance capacity, providing governments with the right “vocabulary” for discussing AI will enable them to map issues more clearly and identify immediate gaps. In terms of talent, a participant noted that around 60 percent of the AI-skilled workforce comes from China, India, and the United States, highlighting limited technical capacity elsewhere in the world. Diverse talent pipelines need to be developed to ensure that underrepresented communities are engaged in building AI that is more inclusive and representative.

Countries, especially low- and middle-income ones, should collect, leverage, and safeguard local data to drive innovation and meet local development needs. AI deployment needs to reflect the realities of different communities. A dearth of context-specific data can result in AI applications that produce biased or even harmful outcomes when deployed across diverse settings. Absent locally relevant data, models often default to norms from other contexts, which may not translate well across cultures or sectors. Data collection, however, remains challenging in low- and middle-income countries, particularly for communities whose languages are primarily oral and lack written records. At the same time, participants raised the importance of ensuring ownership over cultural assets that are digitalized and incorporated into AI models. Without proper safeguards, such assets could be commercialized or used by foreign actors without recognition or benefit-sharing. Addressing these issues of linguistic and cultural ownership will be critical to making AI development inclusive and respectful of local identities.

Foreign Policy’s Rishi Iyengar moderated the AI for All Roundtable.

“Small AI” can expand access to the benefits of AI by delivering context-specific solutions without the need for massive data and compute infrastructure. So-called “Small AI” offers a more affordable and sustainable way for countries to harness AI, as these models operate with lower resource requirements than large language models (LLMs) that depend on supercomputers and data centers. In Nigeria, for example, a program that used small AI-powered chatbots condensed training materials that would normally take two years into six weeks. Small AI can be applied in critical sectors such as agriculture and climate and to specialized tasks such as “automated red teaming,” a testing process designed to uncover vulnerabilities in AI systems. Small AI also requires less energy than LLMs, potentially making it a more sustainable approach to AI innovation. The growing momentum around open-source small language models creates new opportunities for innovation that are more inclusive and responsive to local needs.

Governments, multilateral institutions, and civil society should play greater roles in shaping AI. Currently, AI development is largely influenced by the priorities of major technology companies. Yet the future of AI is not predetermined: which models drive the market, who sets the rules, and how benefits and risks are distributed across countries remain open questions. The future of AI can still be shaped by collective action: “AI does not just happen to us,” as one participant noted. Strengthening the capacity of government institutions to evaluate AI models and applications will enable them to become more effective in driving AI development. In addition, policymakers can help steer market incentives to ensure that investments in AI prioritize societal benefit, safeguard jobs, and advance inclusive development. Coherent national strategies and stronger international cooperation are key to ensuring that AI evolves in line with public and societal priorities.

Looking Ahead

A clear throughline from the roundtable is that without more affordable access to compute and data infrastructure, many countries will remain unable to participate meaningfully in AI development, deepening dependence on a small number of global and private-sector players. AI for All therefore means lowering barriers to access by testing and deploying more affordable alternatives, such as Small AI, and increasing representation by using context-specific datasets and developing diverse talent pipelines. Smart adjustments to existing regulatory frameworks and clear delineation of responsibilities around agreed “red lines” are needed to close governance gaps and collectively shape the future of AI. These issues will remain at the center of upcoming global deliberations, including the Global Dialogue on AI Governance and the AI Impact Summit in 2026, where countries are expected to move closer to shared norms and concrete actions for responsible AI development.

By Angeli Juani (Senior Policy and Quantitative Analyst). Photographs by Jonathan Heisler.


This synthesis report from FP Analytics, the independent research division of The FP Group, was produced with financial support from Lenovo. FP Analytics retained control of the findings of this report. Foreign Policy’s editorial team was not involved in the creation of this content. Lenovo did not have influence over the report and does not retain control of it.