Journal of E-Governance and Digital Transformation in Africa

Code, Capacity, and Control: AI, Algorithmic Governance, and Political Accountability in African States

DOI:

Abraham Kuol Nyuon, Ph.D.

Associate Professor of Politics, Peace, and Security

 

Corresponding Author: nyuonabraham7@gmail.com; nyuonabraham@gc.uoj.edu.ss

Received: January 7, 2024 | Revised: May 6, 2024 | Accepted: August 11, 2024 | Published: October 18, 2024

 

Abstract

This article examines the dual use of digital systems as instruments of developmental administration and as infrastructures for surveillance, exclusion, and political control. Centering South Sudan without treating it as exceptional, the study situates the case within broader debates in digital governance, AI ethics, ICT4D, and authoritarian technology diffusion. It develops the concept of algorithmic developmental authoritarianism to explain how formal norms, institutional design, and struggles over authority become fused in the deployment of digital systems.

Drawing on comparative analysis of AI adoption in Rwanda, Kenya, Ghana, and South Africa; legal analysis of data protection and AI governance frameworks; technical audit approaches to algorithmic accountability; and interviews with government e-governance officials, civil society digital rights advocates, and AI practitioners, the study advances three linked propositions. First, algorithmic governance combines administrative promise with significant political risk. Second, imported digital infrastructures carry embedded accountability models that shape domestic governance outcomes. Third, effective public oversight of AI deployment remains uneven and politically contested.

The analysis addresses the central question of how African governments deploy AI and machine learning across sectors such as taxation, land administration, policing, and border management, and under what governance conditions, accountability mechanisms, and evidence of effectiveness. It shows that institutions, narratives, and policy frameworks function as political instruments rather than neutral technical arrangements.

The study concludes that reform efforts fail when they prioritize technological adoption without reorganizing underlying power relations, and it identifies institutional entry points for more accountable and transparent digital governance systems.

Keywords: Artificial intelligence; digital governance; accountability; Africa; surveillance; algorithmic; Rwanda; Kenya

1. Introduction

This article begins from a puzzle that is often approached in excessively narrow terms. Much of the relevant literature treats the problem either as a matter of institutional weakness or as a moral drama detached from the organisation of power. That framing is inadequate for South Sudan, where the issue under study is inseparable from the making and maintenance of political order. What appears as failure, omission, or inconsistency often performs a recognisable political function for actors embedded in competitive coalitions, insecure institutions, and externally mediated reform environments (Floridi et al., 2018; Reel et al., 2021).

The article therefore treats the dual use of digital systems as instruments of developmental administration and as infrastructures for surveillance, exclusion, and political control not as an accidental side-effect of fragility but as a structured field of struggle. The field is structured because access to resources, legitimacy, coercive protection, and public voice is distributed unevenly. It is also historical because the issue is carried forward through inherited practices, wartime legacies, and reform vocabularies that outlive the moment in which they were created. The question is not only what went wrong, but how particular arrangements became useful to those who benefit from them and burdensome to those excluded by them (Heeks, 2005; Heeks et al., 2014).

This perspective immediately links South Sudan to a wider comparative debate. The article does not collapse very different cases into one model, but it does insist that the South Sudan material becomes more intelligible when read alongside Rwanda, Kenya, Ghana, South Africa, and the wider African digital governance debate. Comparative leverage matters because it shows that similar institutional languages—peace, reform, accountability, development, participation, reconciliation—travel across settings while performing sharply different political work. Variation lies less in whether the vocabulary exists than in who can authorise it, interpret it, and enforce it (Lipscy, 2020; Budnitsky, 2020).

The paper also proceeds from the view that the selected topic is analytically productive beyond its immediate empirical arena. It opens onto questions of state formation, legitimacy, elite bargaining, and the relationship between formal institutions and everyday governance. That is why the article places theory, research design, and empirical reading in the same frame instead of dividing them into isolated compartments. The intention is not to celebrate conceptual sophistication for its own sake, but to use theory to identify mechanisms that ordinary descriptive accounts frequently miss (Author, 2023; Stahl & Eke, 2023).

The central intervention is captured through the concept of algorithmic developmental authoritarianism. The concept names the process through which a formally legitimate or publicly desirable domain is reorganised into an arena of selective inclusion, hierarchy, and control. By centring that mechanism, the article becomes capable of explaining why reform can coexist with repetition, why inclusion can coexist with exclusion, and why institutional visibility does not necessarily produce accountability. The remainder of the paper develops that claim in dialogue with the article's theoretical, methodological, and policy commitments (Morandín-Ahuerma, 2023; AU, 2024).

2. Theoretical debates and conceptual frame

The theoretical architecture adopted here is deliberately synthetic rather than eclectic. It brings together digital governance theory (Deibert; Zuboff), AI ethics (Floridi; Crawford), developmental ICT4D (Heeks; Foster & Heeks), and work on authoritarian technology diffusion (Greitens), and it treats AI adoption by African states as a site of governance innovation with significant accountability and democratic rights implications. Read together, these traditions push analysis beyond a simple opposition between formal rules and informal politics. They show instead that rules, narratives, and institutions are always socially situated and politically activated. Formal design matters because it authorises some claims and disqualifies others; informal practice matters because it determines how that authorised language is translated, bent, or ignored in concrete struggles over authority (Maly, 2019; Raman et al., 2020).

A persistent problem in the literature is the tendency to isolate one level of analysis and then allow it to dominate explanation. Some accounts privilege discourse and normativity, others foreground institutions, while still others collapse everything into patronage or coercion. The result is partial explanation. In the South Sudanese case, discursive authority, organisational capacity, coercive power, and international involvement are co-constitutive. The article therefore adopts a relational approach in which actors, scales, and repertoires remain analytically connected rather than being treated as separable causes (Floridi et al., 2018; Reel et al., 2021).

Table 1. Conceptual architecture for the article

Problem field: the dual use of digital systems as instruments of developmental administration and as infrastructures for surveillance, exclusion, and political control.
Theoretical anchors: digital governance theory (Deibert; Zuboff); AI ethics (Floridi; Crawford); developmental ICT4D (Heeks; Foster & Heeks); authoritarian technology diffusion (Greitens).
Conceptual intervention: algorithmic developmental authoritarianism.
South Sudan focus: biometric ID; predictive policing; tax and land digitisation.
Comparative leverage: Rwanda, Kenya, Ghana, South Africa, and the wider African digital governance debate.

 

The concept of algorithmic developmental authoritarianism is proposed as a way of naming that relational configuration. It refers to more than symbolic contest or policy drift. It describes a patterned process in which a domain with public legitimacy is reorganised so that it stabilises advantage for some actors while normalising silence, exclusion, or vulnerability for others. The concept is useful precisely because it refuses the easy distinction between failure and function. Arrangements that look normatively deficient may remain politically durable because they distribute benefits, protections, or reputational advantages in ways that elites and intermediaries can recognise (Heeks, 2005; Heeks et al., 2014).

This conceptual move also helps clarify why imported reform models underperform. Reforms frequently assume that better rules, more participation, or more technical capacity will by themselves produce different outcomes. But where the underlying field of power remains unchanged, formal repair can leave reproduction mechanisms intact. The article thus treats reform not only as a technical design challenge but as a contest over who can authorise institutional purpose, whose interpretation prevails when ambiguity appears, and whose losses count as politically acceptable (Lipscy, 2020; Budnitsky, 2020).

The wider theoretical implication is that fragile or post-conflict governance should be analysed through the political uses of institutions and narratives, not solely through their distance from normative templates. This is where the South Sudan material becomes especially revealing. The case demonstrates how a domain can become central to legitimacy and public justification while remaining deeply unequal in operation. That tension—between authorised form and selective practice—is the central theoretical hinge of the article (Author, 2023; Stahl & Eke, 2023).

Figure 1. Author-generated causal pathway for algorithmic developmental authoritarianism.

3. Research questions and analytical expectations

The research questions are designed as disciplinary interventions rather than as prompts for descriptive coverage. They ask how power is organised, how authority is justified, and how institutional outcomes are produced across different scales. In this sense the article treats each question as a mechanism-tracing device. The questions direct attention to causation, strategic interaction, and historical sequencing rather than to the compilation of events or policy language alone (Morandín-Ahuerma, 2023; AU, 2024).

Research question 1 asks: How are African governments deploying AI and machine learning tools — in tax administration, land registry, policing, and border management — and under what governance frameworks, with what accountability mechanisms, and with what empirical evidence of effectiveness? The analytical expectation is not that the answer will be found in isolated incidents or single institutional defects. Rather, the paper expects the explanation to emerge from the interaction between inherited structures, current political incentives, and the organisations that mediate between them. This means the question is read not as a descriptive checklist but as an entry point into the article's broader claim about algorithmic developmental authoritarianism (Maly, 2019; Raman et al., 2020).

Research question 2 asks: How does the importation of AI governance infrastructure from China (facial recognition, social credit adjacent tools), the EU (GDPR-compliant systems), and the United States (private sector dominant) embed different accountability models and political values in African digital governance? Following the article's second proposition, the analytical expectation is that vendor origin is not neutral: imported infrastructures carry embedded accountability models that shape domestic governance outcomes once they are localised, so the question serves as an entry point into the supply-side dimension of algorithmic developmental authoritarianism (Floridi et al., 2018; Reel et al., 2021).

Research question 3 asks: What governance frameworks — at national, AU, and global levels — would enable African states to harness AI's developmental potential while establishing meaningful accountability for algorithmic discrimination, privacy violation, and political surveillance? Following the third proposition, the analytical expectation is that public oversight will remain uneven and politically contested, so workable frameworks must redistribute interpretive authority and create auditable traces rather than merely add procedure (Heeks, 2005; Heeks et al., 2014).


4. Methodological architecture

Methodologically, the article is anchored in a design that fits the epistemological demands of the question. It does not assume that a single method can exhaust the problem. Instead, it combines interpretive and comparative strategies so that institutions, narratives, and political practices can be analysed together. The design combines comparative digital governance analysis of AI adoption in Rwanda, Kenya, Ghana, and South Africa; legal analysis of data protection legislation and AI governance frameworks; technical audit methodology for algorithmic accountability; and interviews with government e-governance officials, civil society digital rights advocates, and AI development practitioners. This mixed architecture is appropriate because the issue under study is simultaneously historical, organisational, and political (Lipscy, 2020; Budnitsky, 2020).

The design privileges process over snapshot. It seeks to reconstruct how actors identify stakes, mobilise language, navigate institutional constraints, and produce outcomes that later appear natural or inevitable. Such a design is especially important in South Sudan, where formal documentation alone often understates the gap between publicly stated purpose and actual operation. Interviews, archival traces, institutional texts, and comparative materials are therefore treated as complementary sources for identifying mechanism chains rather than as isolated pools of evidence (Author, 2023; Stahl & Eke, 2023).
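The technical audit strand of the design can be made concrete with a small illustration. The sketch below computes a demographic-parity (disparate-impact) ratio over a sample of automated decisions, one common first step in an algorithmic accountability audit. The data, group labels, and the frequently cited 0.8 rule-of-thumb threshold are illustrative assumptions, not findings from the cases studied here.

```python
# Illustrative sketch of one step in an algorithmic accountability audit:
# computing a demographic-parity (disparate-impact) ratio for a deployed
# classifier's decisions. All data and group labels are hypothetical.

from collections import defaultdict

def approval_rates(decisions):
    """Return per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's approval rate to the reference
    group's. Values below ~0.8 are a common (and contested) red flag."""
    rates = approval_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical audit sample: (group, system_decision)
sample = [("A", True)] * 40 + [("A", False)] * 60 + \
         [("B", True)] * 70 + [("B", False)] * 30

ratio = disparate_impact_ratio(sample, protected="A", reference="B")
print(round(ratio, 2))  # 0.40 / 0.70
```

An audit in practice would of course require access to the system's decision logs and demographic attributes, which is exactly what the procurement and oversight arrangements discussed later either enable or foreclose.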

Table 2. Research design, evidence, and analytical payoff

Research questions: (1) deployment of AI and machine learning tools in tax administration, land registry, policing, and border management; (2) importation of AI governance infrastructure and its embedded accountability models; (3) governance frameworks at national, AU, and global levels.
Evidence base (all questions): comparative digital governance analysis of AI adoption in Rwanda, Kenya, Ghana, and South Africa, supplemented by legal analysis, technical audit methodology, and interviews.
Analytical payoff (all questions): the mechanism of algorithmic developmental authoritarianism.

 

The comparative dimension serves two purposes. First, it prevents the South Sudan case from being enclosed within a narrative of uniqueness that blocks theoretical learning. Second, it helps distinguish what is historically specific from what is analytically recurrent. By reading South Sudan alongside Rwanda, Kenya, Ghana, South Africa, and the wider African digital governance debate, the article can show both the distinctiveness of the local settlement and the wider pattern in which formally legitimate domains become politically reorganised in conflict-affected or institutionally unequal settings (Morandín-Ahuerma, 2023; AU, 2024).

The design also acknowledges limits. Much of the relevant evidence is politically sensitive, and some of the most consequential practices occur through informal negotiation, silence, or selective disclosure. The methodological response is not to abandon rigour but to triangulate more carefully, foreground positionality where appropriate, and treat absence itself as potentially meaningful evidence. This is particularly important for a paper concerned with how visible institutional form can obscure the power relations that animate it (Maly, 2019; Raman et al., 2020).

5. Analysis

5.1. Administrative promise and political risk in algorithmic rule

Administrative promise and political risk in algorithmic rule becomes analytically central once the article shifts attention from declared purpose to political use. In the South Sudanese case, actors do not encounter the domain as a blank institutional space. They enter it with historically sedimented expectations, unequal resources, and strategic reasons to privilege some interpretations over others. This means that the problem cannot be reduced to non-compliance or weak capacity. It is produced through patterned selection: who is authorised to speak, decide, classify, document, or allocate consequences within the field (Floridi et al., 2018; Reel et al., 2021).

Seen this way, the issue is anchored in a chain of mediation. Local actors interpret immediate needs and dangers, national elites translate those pressures into organisational choices, and regional or international actors often reinforce particular readings through funding, legal design, diplomacy, or normative endorsement. The field thereby acquires a layered quality: everyday practice and high politics are not separate levels but mutually reinforcing sites through which the dual use of digital systems as instruments of developmental administration and as infrastructures for surveillance, exclusion, and political control is organised. The consequence is a recurring divergence between publicly endorsed principles and the distributional realities experienced on the ground (Heeks, 2005; Heeks et al., 2014).

This becomes especially visible in the article's chosen empirical arenas: biometric ID, predictive policing, tax and land digitisation, and the role of foreign technology vendors. Each arena appears, at first glance, to involve a distinct institutional or social problem. Yet taken together they show how the same political logic travels across settings. Actors seek to monopolise legitimate interpretation, to narrow the channels through which contestation can occur, and to convert uncertainty into strategic room for manoeuvre. The domain under study therefore becomes a relay between immediate governance practice and broader settlement maintenance rather than a detached policy sector (Lipscy, 2020; Budnitsky, 2020).

The comparative material strengthens the claim. Across Rwanda, Kenya, Ghana, South Africa, and the wider African digital governance debate, the same general pattern is visible even though the institutional idiom differs. What varies is the repertoire through which actors convert legitimacy into leverage: through law, administrative archives, digital systems, research funding, or public ethics. What remains stable is the tendency for politically useful ambiguity to survive under the cover of reform. That is why the paper treats this subsection not as a descriptive branch of the argument, but as a mechanism-specific demonstration of algorithmic developmental authoritarianism (Author, 2023; Stahl & Eke, 2023).

5.2. Imported infrastructures and embedded accountability models

Imported infrastructures and embedded accountability models extend the same mechanism to the supply side of algorithmic governance. Digital systems do not arrive politically empty. Infrastructure sourced from China (facial recognition and social-credit-adjacent tools), from the EU (GDPR-compliant systems), and from the United States (private-sector-dominant platforms) each carries an embedded accountability model: assumptions about who may inspect the system, who owns the data it generates, and to whom its operators must answer (Morandín-Ahuerma, 2023; AU, 2024).

Once localised, these embedded models interact with the domestic chain of mediation described above. Local actors interpret immediate needs, national elites translate vendor offerings into organisational choices, and external partners reinforce particular readings through funding, legal design, and diplomatic endorsement. The result is that procurement decisions quietly settle accountability questions before any public debate can occur (Maly, 2019; Raman et al., 2020).

This is especially visible where foreign technology vendors operate across the article's empirical arenas of biometric ID, predictive policing, and tax and land digitisation. The vendor relationship narrows the channels through which contestation can occur: contractual confidentiality, proprietary code, and externally hosted data convert uncertainty into strategic room for manoeuvre for those who control the contract (Floridi et al., 2018; Reel et al., 2021).

The comparative material across Rwanda, Kenya, Ghana, and South Africa confirms that vendor origin shapes, but does not determine, outcomes. What remains stable is the tendency for politically useful ambiguity to survive under the cover of technical modernisation, which is why this subsection serves as a supply-side demonstration of algorithmic developmental authoritarianism (Heeks, 2005; Heeks et al., 2014).

Table 3. Multi-scalar analytical terrain

Local. Illustrative arena: biometric ID. Core mechanism: interpretive authority and immediate practice. Reform concern: data protection.
National. Illustrative arena: predictive policing. Core mechanism: institutional translation and selective enforcement. Reform concern: algorithmic audit.
Regional/Global. Illustrative arena: tax and land digitisation. Core mechanism: normative endorsement, funding, or diplomatic leverage. Reform concern: procurement transparency.
Public sphere. Illustrative arena: foreign technology vendors. Core mechanism: visibility, silence, and reputational effect. Reform concern: public oversight.

 

Figure 2. Author-generated field map of actors, institutions, and pressures.

5.3. Designing public oversight for AI deployment

Designing public oversight for AI deployment addresses the reform side of the same mechanism. If algorithmic authority is produced through patterned selection over who may speak, decide, classify, and allocate consequences, then oversight must target that selection rather than simply add procedure (Lipscy, 2020; Budnitsky, 2020).

The analysis identifies four institutional entry points, developed further in Section 6: data protection regimes that create enforceable individual claims; algorithmic audit requirements that open deployed systems to independent technical scrutiny; procurement transparency that exposes vendor terms to public contestation; and standing public oversight bodies that give excluded groups a durable institutional voice (Author, 2023; Stahl & Eke, 2023).

Each entry point matters only insofar as it reaches the empirical arenas of biometric ID, predictive policing, and tax and land digitisation, where the gap between publicly endorsed principles and distributional reality is widest. Oversight that remains at the level of national strategy documents leaves the chain of mediation between local practice, national elites, and external partners untouched (Morandín-Ahuerma, 2023; AU, 2024).

The comparative record across Rwanda, Kenya, Ghana, and South Africa suggests that oversight instruments travel, but their force depends on who can authorise, interpret, and enforce them: the same vocabulary of audit and protection can either discipline algorithmic power or merely decorate it. That is why the paper treats this subsection as a mechanism-specific demonstration of how algorithmic developmental authoritarianism might be interrupted (Maly, 2019; Raman et al., 2020).

6. Policy and scholarly implications

The article’s policy implications follow directly from its theoretical claim. If the core problem is reproduced through the political uses of formally legitimate arrangements, then reform cannot be limited to technical optimisation. Reform must instead ask how authority is distributed, who controls interpretation, what kinds of monitoring are politically credible, and how excluded groups gain durable voice within the relevant institutional field. Without such shifts, improvement at the level of procedure is likely to remain reversible or cosmetic (Floridi et al., 2018; Reel et al., 2021).

This does not imply that technical design is irrelevant. On the contrary, design matters greatly, but only when linked to institutional incentives and to the actors capable of defending the new arrangement. Stronger data protection rules, independent algorithmic audits, transparent procurement, credible public oversight bodies, and clearer drafting rules can matter substantially. The argument is that such instruments work only when they are embedded in coalitions that can protect them against selective implementation and elite capture (Heeks, 2005; Heeks et al., 2014).

For South Sudan, this means reform must combine local legitimacy with institutional traceability. Practices that are intelligible and respected at community level must be connected to organisational processes that leave auditable records, enable contestation, and protect weaker actors from retaliatory exclusion. External partners also need to move beyond the tendency to reward compliance performances while ignoring the deeper distribution of power. The challenge is to support institutional redesign without reproducing the external dependency that often narrows reform to donor-manageable indicators (Lipscy, 2020; Budnitsky, 2020).

Table 4. Institutional and policy implications

For each of the four domains (data protection, algorithmic audit, procurement transparency, and public oversight), the proposed institutional shift is to redistribute interpretive authority, the intended effect is to reduce selective ambiguity, and the accountability logic is to create an auditable public trace.

 

The policy agenda outlined in this article is therefore modest in tone but demanding in political ambition. It does not promise a rapid transition from fragility to coherence. It proposes instead a sequence of institutional shifts tied to data protection, algorithmic audit, procurement transparency, and public oversight. Each shift is evaluated not by whether it sounds normatively attractive in the abstract, but by whether it redistributes interpretive authority, increases accountability, and reduces the room for politically productive ambiguity in the domain under examination (Author, 2023; Stahl & Eke, 2023).
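The "auditable public trace" logic running through this agenda can be sketched in code. The fragment below hash-chains administrative decision records so that silent after-the-fact edits become detectable. It is a minimal illustration of the accountability logic, not a description of any system deployed in the cases discussed, and all record contents are hypothetical.

```python
# Minimal sketch of a tamper-evident decision log. Hash-chaining each
# record to its predecessor makes silent after-the-fact edits detectable
# by anyone who can re-run the verification. Hypothetical illustration.

import hashlib
import json

def append_record(log, record):
    """Append a record, linking it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)  # deterministic encoding
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev_hash, "hash": digest})
    return log

def verify_chain(log):
    """Recompute every link; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_record(log, {"id": 1, "action": "land_title_issued"})
append_record(log, {"id": 2, "action": "tax_assessment_revised"})
print(verify_chain(log))            # True: chain intact
log[0]["record"]["action"] = "land_title_revoked"  # silent tampering
print(verify_chain(log))            # False: tampering detected
```

The design choice illustrated here is the point of the accountability logic in Table 4: a trace is "auditable" only if verification can be performed by actors outside the institution that produced the records.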

7. Conclusion

This article has argued that the dual use of digital systems as instruments of developmental administration and as infrastructures for surveillance, exclusion, and political control should be analysed as a politically organised field rather than as a mere symptom of fragility. By combining its theoretical frame with a comparative and mechanism-oriented design, the paper showed how the South Sudan case illuminates wider debates in African politics, governance, and post-conflict institutional analysis. The concept of algorithmic developmental authoritarianism captures the process through which formal legitimacy and selective political use become bound together (Morandín-Ahuerma, 2023; AU, 2024).

The contribution is scholarly in at least two senses. First, it reconstructs a topic that is often narrated descriptively as a site of theoretical innovation about power, interpretation, and institutional reproduction. Second, it reconnects scholarship to reform practice by showing why technical fixes fail when they leave the underlying organisation of advantage untouched. The South Sudan evidence is therefore not merely illustrative; it is constitutive of the article's broader conceptual claim (Maly, 2019; Raman et al., 2020).

What follows for future research is clear. Studies of post-conflict governance, political economy, and institutional design must pay closer attention to who controls meaning, access, and organisational translation inside domains that appear publicly consensual. Future policy work must do the same. Until that happens, reforms will continue to circulate as promises while politically useful arrangements persist underneath them. The article closes, then, not with a technocratic checklist, but with a call to take power seriously in the analysis and redesign of institutions in South Sudan and beyond (Floridi et al., 2018; Reel et al., 2021).

References

Budnitsky, S. (2020). Russia's great power imaginary and pursuit of digital multipolarity. Internet Policy Review. https://doi.org/10.14763/2020.3.1492
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707. https://doi.org/10.1007/s11023-018-9482-5
Heeks, R. (2005). Health information systems: Failure, success and improvisation. International Journal of Medical Informatics, 75(2), 125-137. https://doi.org/10.1016/j.ijmedinf.2005.07.024
Heeks, R., Foster, C., & Nugroho, Y. (2014). New models of inclusive innovation for development. Innovation and Development, 4(2), 175-185. https://doi.org/10.1080/2157930x.2014.928982
Lipscy, P. Y. (2020). COVID-19 and the politics of crisis. International Organization, 74(S1), E98-E127. https://doi.org/10.1017/s0020818320000375
Maly, I. (2019). [Review of the book The age of surveillance capitalism, by S. Zuboff]. Tilburg University Research Portal. https://research.tilburguniversity.edu/en/publications/review-of-the-book-the-age-of-surveillance-capitalism-s-zuboff-20/
Morandín-Ahuerma, F. (2023). Ten UNESCO recommendations on the ethics of artificial intelligence. https://doi.org/10.31219/osf.io/csyux
Raman, R. S., Shenoy, P., Kohls, K., & Ensafi, R. (2020). Censored Planet: An Internet-wide, longitudinal censorship observatory (pp. 49-66). https://doi.org/10.1145/3372297.3417883
Reel, P. S., Reel, S., Pearson, E. R., Trucco, E., & Jefferson, E. (2021). Using machine learning approaches for multi-omics data analysis: A review. Biotechnology Advances, 49, 107739. https://doi.org/10.1016/j.biotechadv.2021.107739
Stahl, B. C., & Eke, D. (2023). The ethics of ChatGPT – Exploring the ethical issues of an emerging technology. International Journal of Information Management, 74, 102700. https://doi.org/10.1016/j.ijinfomgt.2023.102700
Unknown Author (2023). Guidelines for targeted BSE surveillance. https://doi.org/10.20506/woah.3369
Zhao, T., Wang, S., Ouyang, C., Chen, M., Liu, C., Zhang, J., Long, Y., Wang, F., Xie, Y., Li, J., Fang, W., Grunwald, S., Wong, B. M., Zhang, F., Qian, Z., Xu, Y., Yu, C., Han, W., Sun, T., … Wang, L. (2024). Artificial intelligence for geoscience: Progress, challenges, and perspectives. The Innovation, 5(5), 100691. https://doi.org/10.1016/j.xinn.2024.100691