Artificial Intelligence (AI) is emerging as one of the transformative forces of the early 21st century. Today, the debate centres on how to govern a technology capable of redefining the economy, power and knowledge. The boundaries between what is human and what is algorithmic appear to be dissolving at a speed that challenges institutions.
AI seems to be weaving itself into all kinds of processes faster than societies can create clear rules for its use, warns the annual AI Index report of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), a research centre at Stanford University dedicated to studying and promoting the development of human-centred AI. The report shows that global investment in AI has exceeded 200 billion dollars (Spherical Insights puts the figure much higher, at around 695 billion dollars), with the United States and China leading the race, while the European Union (EU) seeks to consolidate a regulatory model based on ethical principles and fundamental rights. The dilemma is clear: whoever controls the algorithms may come to control the future, and this technological dispute is also a dispute over values.
Work, economy and inequality in the age of automation
From an economic perspective, the effect of artificial intelligence is ambivalent. The McKinsey Global Institute — the economic and public policy research arm of the international consultancy McKinsey & Company, specialising in measuring the impact of emerging technologies on growth and employment — estimates that by 2030 around 30% of the tasks currently performed by humans could be automated. This transition, while boosting productivity, will profoundly transform the global labour market, demanding new skills and reshaping virtually every sector of the economy.
At the same time, new occupations are emerging, particularly in data science, robotics, software engineering and intelligent systems design. However, according to the Organisation for Economic Co-operation and Development (OECD), the risk lies not only in job losses, but in the unequal redistribution of opportunities, with developing countries falling behind in the integration of digital value chains and automation gains. In Africa, for example, the lack of robust data infrastructures and the shortage of digital skills limit the ability to fully harness the benefits of the technological revolution. Even so, promising examples are beginning to emerge. Startups in Kenya, Nigeria and Rwanda, for instance, are using AI to improve medical diagnostics, predict agricultural patterns and optimise access to microcredit. The continent is not condemned to being merely a consumer, but can become a creator of local data-driven solutions — provided such startups are not entirely dependent on AI engines from the US or China. The major challenge is to ensure that these innovations incorporate as many Africa-based components as possible and help consolidate sustainable local ecosystems in technology, research and development.
Science, education and creativity reach new frontiers
In the scientific field, AI is accelerating human knowledge. DeepMind, a British research company owned by the Alphabet (Google) group and a global reference in the development of learning systems, stood out by creating AlphaFold — a system capable of predicting the three-dimensional structure of proteins with accuracy comparable to laboratory methods.
Published in the journal Nature, this breakthrough is considered a milestone in biomedicine, opening new pathways for the development of therapies and personalised medicines, while promoting more open and collaborative science. According to the European Molecular Biology Laboratory, an intergovernmental life sciences research centre based in Heidelberg (Germany), AlphaFold has already catalogued millions of protein structures accessible to researchers worldwide, accelerating discoveries that previously would have taken decades.
In education, universities such as Stanford and the Massachusetts Institute of Technology (MIT) in the United States, as well as Tsinghua University in China, are integrating generative AI systems into adaptive learning platforms capable of personalising curricula and assessing skills in real time.
The World Economic Forum highlights that these tools have the potential to transform higher education and reduce barriers to accessing knowledge. However, they also expose new ethical dilemmas, including respect for copyright, the risk of algorithmic plagiarism and the erosion of critical thinking.
As UNESCO warned in its 2023 Guidance for Generative AI in Education and Research, “AI must serve learning, not replace it.” Its unreflective adoption may exacerbate digital inequalities and reduce the intellectual autonomy of students and teachers, particularly in developing countries. The question of how copyright can be respected remains open.
In the realm of creativity, the explosion of generative tools — such as ChatGPT, DALL·E, Gemini and Claude — has reshaped the way text, images, music and sound are produced. The International Journal of Communication (an international scientific journal published by the Annenberg School for Communication and Journalism at the University of Southern California) observes that many AI experiments are taking place in cinema, journalism and advertising. AI writes scripts, synthesises voices and creates digital characters indistinguishable from real people. However, AI often makes errors, or “hallucinates”, and human review remains irreplaceable in products that require rigour; in some cases, the use of AI actually reduces productivity because of the time lost to revisions.
On the other hand, many complaints have been raised regarding copyright infringement, authenticity and the veracity of information. The World Intellectual Property Organization (WIPO), a United Nations agency, warns that the boundary between human creation and algorithmic production requires new international rules.

Power and the ethics of automated decision-making
The ethical dimension is perhaps the most urgent in current debates. According to the Stanford HAI AI Index Report, more than 70 countries have already introduced some form of national AI strategy, but few have robust oversight mechanisms. Europe is trying to lead with the AI Act, which classifies systems according to their level of risk, from low-impact systems to prohibited uses, such as mass biometric surveillance.
In the United States, the White House published a “Blueprint for an AI Bill of Rights”, but its practical application remains uncertain. China, meanwhile, is pursuing a model of state control, with regulation focused on social stability. The central problem is that algorithms make decisions that affect lives — from job candidate selection to credit approval or medical triage — often without transparency. The so-called “black box” of AI, in which even the programmers cannot fully explain a model’s decisions (since these result from statistical calculations and probabilities, such as those over word sequences in language models), puts fundamental principles of justice and accountability at risk.
The new geopolitics of artificial intelligence
At the international level, AI is already a factor of power. Major powers are competing for dominance over microchips, data and quantum computing. Control of semiconductor production chains has become strategic, as shown by the trade conflict between the US and China, with restrictions on the export of advanced technology and massive investment in technological autonomy. According to the Center for Security and Emerging Technology (CSET) — an independent research centre based in Washington DC that analyses the impact of emerging technologies — this race will shape future global alliances: whoever controls the digital infrastructure will also control the pace of innovation and the global narrative on security and technological sovereignty.
But there are signs of a search for balance. In 2024, the Global Partnership on AI (GPAI) — an international initiative created in 2020 by the G7 and other strategic partners to promote the responsible development and use of AI — launched initiatives to encourage responsible use of the technology and mitigate inequalities. In parallel, the UN is discussing the creation of an intergovernmental AI panel aimed at minimising risks.
An era of fascination, but also of risks
Artificial intelligence carries an essential ambiguity: it is simultaneously a promise of progress and an existential threat. HAI warns that the lack of solid governance could lead to an unprecedented concentration of technological power, where a handful of corporations control algorithms, data and knowledge — an “algorithmic oligopoly” that could redefine the very concept of digital sovereignty.
This concern is shared by entities such as CSET, which highlights the geopolitical risks of an AI dominated by private giants capable of influencing public policies and even electoral processes through information manipulation. WIPO, for its part, draws attention to the impact on copyright and the boundaries of creation in a scenario where machines also become authors.
Yet there is also a narrative of hope. Some research institutions argue that generative AI, when used ethically, can expand human capacity, democratise knowledge and create new creative economies. GPAI echoes this view, arguing that the challenge is not to halt technological progress, but to ensure it serves human rights and inclusion.
Ultimately, the real dilemma is not technological, but moral. The future of AI will depend on the choices we make today about who holds power, who benefits from it, and the principles under which it is exercised.
Text: Celso Chambisso • Photography: D.R.