Artificial Intelligence and Global Governance: Constructions for the Future of Public Health

In the era of machine learning and artificial intelligence (AI), many question the ethics and efficiency of integrating AI into all sectors of society. Society both vilifies and lauds AI for the problems and solutions it poses for the future of human security, with some critics forecasting “the end of modern society as we know it.”

Regardless of one’s stance on the subject, there is little doubt that AI holds enormous potential to spur a technological, economic, and social revolution, for better or for worse. AI already aids public health professionals in making medical diagnoses, monitoring the spread of infectious disease, and optimizing patient treatment pathways.

Given AI’s revolutionary potential, there is a need for effective global governance to regulate the use of AI in the healthcare sector, alleviate current and future concerns with algorithmic models, and provide necessary resources for integration on a global scale. 

Medical Applications of AI

To realize the potential of AI, one must hold a basic understanding of its core components: machine learning, deep learning, and neural networks. 

Machine learning, broadly, is the ability of a computer to independently identify patterns and make predictions from a given dataset. Neural networks and deep learning are subcategories of machine learning. The former refers to a set of algorithms that loosely mimic the human brain to analyze data and draw conclusions from it, while the latter uses neural networks with three or more layers, which improves accuracy on complex data.
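To make these terms concrete, the sketch below trains a small neural network on a synthetic dataset using scikit-learn. The data, features, and model settings are purely illustrative and are not drawn from any clinical system; the point is only to show a machine-learning model finding patterns in labeled data, with several hidden layers making it a (very small) deep-learning model.

```python
# Minimal, illustrative sketch: a small neural-network classifier learning
# patterns from labeled data. Dataset and parameters are synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for, e.g., anonymized patient measurements.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three hidden layers of artificial "neurons": the depth that distinguishes
# deep learning from a shallow neural network.
model = MLPClassifier(hidden_layer_sizes=(32, 16, 8), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# The trained network now predicts labels for data it has never seen.
print("Held-out accuracy:", model.score(X_test, y_test))
```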

The use of AI in medical diagnostics is particularly important in radiomics, which extracts quantitative features from medical images. Oncological diagnostics often rely on researchers and computer-aided tools to identify potentially cancerous lesions. AI’s growing capabilities promise greater diagnostic accuracy than previous generations of computer-aided detection. Advanced algorithms can perform on par with or better than experienced radiologists in recognizing disease, pointing toward a brighter future for physicians and patients alike.
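As a rough illustration of what radiomics involves, the sketch below computes a few first-order intensity features from a synthetic image array and flags unusually bright pixels, loosely mimicking a computer-aided detection step. The image, threshold, and features are invented for illustration and bear no relation to real clinical imaging pipelines.

```python
# Hypothetical radiomics-style sketch on a synthetic "scan".
import numpy as np

rng = np.random.default_rng(42)
scan = rng.normal(loc=100, scale=15, size=(128, 128))  # synthetic grayscale image
scan[40:60, 70:90] += 60                                # simulated high-intensity region

# First-order radiomic features summarize the intensity distribution.
features = {
    "mean_intensity": float(scan.mean()),
    "std_intensity": float(scan.std()),
    "max_intensity": float(scan.max()),
    "high_intensity_fraction": float((scan > 140).mean()),
}
print(features)

# A crude computer-aided-detection step: flag pixels far above the typical range.
suspicious = scan > scan.mean() + 2 * scan.std()
print("Pixels flagged for review:", int(suspicious.sum()))
```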

AI can also help with disease prevention by monitoring infectious disease outbreaks. Traditional health surveillance relies on population-level statistical models, so it generally predicts patterns for large populations. AI, by contrast, uses advanced algorithms that account for fluctuating socioeconomic and environmental dynamics, generating accurate predictions for local communities. Public health professionals, for example, have used AI to anticipate and minimize the impact of mosquito-borne illnesses in African communities, such as viral encephalitis, the Zika virus, the West Nile virus, dengue, malaria, yellow fever, and chikungunya.
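The sketch below illustrates, in a deliberately simplified way, how such a model might combine local environmental and socioeconomic indicators to estimate outbreak risk for a single district. The feature names, training data, and model choice are hypothetical, chosen only to show the pattern of localized prediction.

```python
# Hypothetical sketch: estimating local outbreak risk from district-level
# indicators. All feature names, values, and labels are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Columns: rainfall (mm), mean temperature (C), population density, access-to-care index
X_train = np.array([
    [120, 27,  850, 0.40],
    [ 30, 31,  200, 0.70],
    [200, 26, 1200, 0.30],
    [ 10, 33,  150, 0.80],
])
y_train = np.array([1, 0, 1, 0])  # 1 = outbreak observed in that district

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Risk estimate for a new district with heavy rainfall and limited care access.
new_district = np.array([[180, 28, 950, 0.35]])
print("Estimated outbreak probability:", model.predict_proba(new_district)[0, 1])
```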

It should be noted that in these case studies, AI-led machinery is not fully autonomous and requires human intervention to transmit data, interpret findings, and make final decisions on patient care. 

However, many AI-led technologies are growing too sophisticated for their users to fully understand, and patients may lack the awareness needed to provide informed consent. As such, officials must reconsider how AI changes the existing principles of patients’ rights.

Existing Limitations

The European Union’s General Data Protection Regulation (GDPR) maintains that physicians must request consent for digital data collection using “clear and plain language” and data subjects must consent in a “freely given, specific, informed, and unambiguous” manner. 

For many AI-led treatment plans, datasets contain sensitive information about genetic predispositions and family history. Physicians, meanwhile, face the challenge of clearly explaining how an algorithm will use a patient’s medical information to produce a diagnosis or treatment plan.

However, certain AI models rely on black-box algorithms whose internal workings cannot be inspected, even by their developers. No legal framework requires physicians to disclose that they cannot fully interpret the reasoning behind an AI system’s diagnoses, effectively leaving patients in the dark.
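The contrast below is a synthetic sketch of that opacity problem: a simple linear model exposes readable feature weights, while a multi-layer network returns a prediction with no human-readable rationale. The data and models are illustrative only and do not represent any specific diagnostic system.

```python
# Illustrative contrast between a transparent and an opaque model on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=1)

# Transparent: each coefficient shows how a feature pushes the prediction.
transparent = LogisticRegression(max_iter=1000).fit(X, y)
print("Readable feature weights:", transparent.coef_[0])

# Opaque: thousands of interacting weights; the output arrives with no
# explanation of why a given case was flagged.
opaque = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=1).fit(X, y)
print("Prediction without rationale:", opaque.predict(X[:1]))
```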

Those familiar with AI have also long criticized the technology for racial discrimination. Physicians and underserved patients in vulnerable areas may lack access to information-sharing platforms, creating gaps in the datasets AI needs to make accurate predictions. These gaps can cause AI to miss diseases or health risks in patients.

Already, data gaps affecting African American populations cause AI to underestimate patient risks for kidney stones, death from heart failure, and other critical medical problems. Certain algorithms associate darker skin colors with a decreased chance of disease survival because foundational datasets exclude information about survivors of color. 
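A synthetic sketch of the underlying mechanism appears below: when one group is barely represented in the training data and its risk profile differs, a model fitted to the pooled data performs markedly worse for that group. The groups, features, and risk rules are entirely invented; the example only demonstrates how representation gaps translate into unequal accuracy.

```python
# Synthetic demonstration: under-representation in training data degrades
# accuracy for the affected group. All data and group definitions are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate synthetic 'patients' whose risk threshold depends on the group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 3))
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > shift * 3).astype(int)
    return X, y

# Group A dominates the training data; Group B is barely represented.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh samples from each group reveal the accuracy gap.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=1.5)
print("Accuracy, well-represented group:", model.score(Xa_test, ya_test))
print("Accuracy, under-represented group:", model.score(Xb_test, yb_test))
```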

Another key issue with global AI integration lies in how nations diverge in their trust of emerging technologies. Leaders from Africa and Latin America present at the World Trade Organization (WTO) have expressed concerns about AI implementation on three major fronts: reduced job availability, the exploitation and destruction of local companies by international technology giants, and the lack of advanced infrastructure in the developing world.

These observations illustrate the inequalities that AI may exacerbate internationally; future summits on technological innovation must consider the differing environments states face. 

Existing Governance and Issues to Consider

To this end, global governance is needed to facilitate the involvement of international governmental organizations (IGOs) in promoting sustainable plans for AI’s integration in healthcare. 

The first of these frameworks must contend with the evolution of patients’ rights and physician responsibilities.

The European Commission’s Artificial Intelligence Act (AIA), proposed in 2021, is currently one of the most advanced international frameworks for regulating AI use. The AIA functions largely through the systematic registration of high-risk AI systems in an EU-wide database. Technologies deemed by the Commission to be at greater risk of misuse or biased outputs face stricter requirements, including human oversight.

The AIA also requires professionals to demonstrate proficiency in understanding AI components and processes as a prerequisite for use. Even so, there is widespread support for going further and explicitly codifying patients’ rights in the AIA.

Human rights organizations have criticized the AIA for lacking a “human-centered approach.” Advocates encourage legislators to reframe digital rights as fundamental human rights, ensuring that matters of dignity and consent remain at the forefront of all future AI-related negotiations. Once the European Commission sufficiently expands these regulations, the AIA may become the global touchstone of future regulatory frameworks. 

The world also needs a legal framework to mitigate other risks of AI implementation, including the loss of diagnostic clarity for patients, data misuse, and systemic algorithmic biases, concerns that align with the push to codify patient rights in the digital age. There have been several attempts to detail digital rights and responsibilities, such as the 2019 Ethics Guidelines published by the European Commission’s High-Level Expert Group on AI (AI HLEG).

The AI HLEG maintains that AI systems should not undermine human autonomy, and urges developers to clearly explain the intentions and constraints of algorithms. Moreover, the AI HLEG calls for careful monitoring of AI systems wherever they are in use; if an AI system is found to negatively impact the well-being of individuals, societies, democracies, or environments, it must be barred entirely from use. 

The Spanish Charter of Digital Rights also touches on this subject, ensuring citizens’ right to appeal AI-based decisions. Additionally, under the Spanish framework, information sharing with third parties cannot occur without express consent from clients or patients. By protecting civilians while promoting technological innovation, the 2019 Ethics Guidelines and the Spanish Charter of Digital Rights provide a thorough foundation from which further international negotiations can grow.

In July 2022, the World Health Organization (WHO) and the International Digital Health and AI Research Collaborative (I-DAIR) signed a joint memorandum outlining an agreement to “harness the digital health revolution” for urgent public health matters. Consequently, individual member states can work in conjunction with WHO standards to sustainably implement AI technologies while receiving needed guidance from the international order.

This initiative rightly recognizes the variance in state needs, obligations, and capabilities while laying a strong foundation for a robust legal tool. Further international collaboration, built on diverse datasets and the protection of patients’ digital rights, is needed to ensure that data biases do not lead to adverse patient outcomes.

AI possesses tremendous potential to revolutionize global healthcare systems. From increasing physician confidence in vital diagnostics to instantly sharing data regarding vector-borne diseases, there is no doubt that AI can save millions of lives—so long as users adhere to proper ethical and technical considerations at all stages of algorithmic processes. 

But there is still much work to be done to ensure that algorithmic errors and inaccessibility do not hinder the global implementation of advanced technologies. These concerns once again emphasize the need for a legally binding framework that states can use to promote information sharing and secure privacy for civilians. Fortunately, there exists a global will within prominent international governmental organizations to establish the conditions on which further negotiations can build. For the future of states, physicians, and patients to be bright, the penchant for change and global goodwill must shine down upon it.
