Contributions
This study makes a significant empirical contribution by providing one of the first in-depth analyses of local ethical perceptions towards AI and data governance within a specific African context, the Kingdom of Lesotho. It enriches the field of African Studies by foregrounding indigenous viewpoints and socio-cultural norms, challenging the predominance of Western-centric ethical frameworks. The research offers practical insights for policymakers and developers aiming to implement culturally resonant and equitable data governance systems. Furthermore, it establishes a foundational qualitative dataset, collected during fieldwork in 2021-2022, for future comparative studies across the continent.
Introduction
The rapid global proliferation of artificial intelligence (AI) and data-intensive technologies presents a profound ethical and governance challenge, one that is acutely felt across the African continent (Sithole, 2021). While often framed within universalist ethical frameworks, the specific implications of AI adoption are deeply contingent upon local socio-political structures, historical legacies, and cultural norms. This article argues that a critical examination of AI ethics and data governance must be grounded in the particularities of place, moving beyond abstract principles to engage with the lived realities of communities navigating digital transformation. Lesotho, a landlocked kingdom characterised by a unique constitutional monarchy, a predominantly rural population, and significant socio-economic challenges, serves as a critical case study for this inquiry. The nation’s ongoing digitalisation efforts, often driven by external partnerships and framed within discourses of ‘leapfrogging’ development, occur within a context of complex sovereignty dynamics and a fragile social fabric. This creates a pressing research problem: the potential for an ethical vacuum where the deployment of AI systems—particularly in sectors like finance, agriculture, and public administration—may exacerbate existing inequalities, entrench biases, and introduce novel forms of gendered and social risk without adequate localised governance mechanisms. The central objective of this qualitative study is to interrogate the perceptions, tensions, and lived experiences surrounding AI and data governance in Lesotho, with a specific focus on how ethical considerations are articulated and contested by key stakeholders. It seeks to answer the following research questions: How do Basotho stakeholders conceptualise ethical AI and responsible data governance within their national context?
What are the perceived risks, particularly regarding gender bias and the erosion of communal trust, associated with current datafication and AI adoption trajectories? And how do Lesotho’s distinct socio-political structures influence the possibilities for sovereign and culturally resonant data governance? By addressing these questions, the study aims to contribute a nuanced, context-specific perspective to the burgeoning field of African digital ethics. The article proceeds by first detailing the qualitative methodology employed, then presenting thematic findings from fieldwork, followed by a discussion that situates these findings within broader debates on technological sovereignty and epistemic justice, before concluding with implications for policy and future research.
Methodology
This study employed a qualitative research design, prioritising depth, nuance, and the exploration of subjective perceptions to address its central questions regarding AI ethics and data governance in Lesotho (Madureira, 2021). Given the exploratory nature of the topic and the importance of understanding context-specific meanings, semi-structured interviews and focus group discussions (FGDs) were selected as the primary data collection methods. This approach allowed for both consistency across participants through a core interview guide and the flexibility to probe emergent themes and individual experiences in depth. A purposive sampling strategy was used to recruit three distinct stakeholder groups crucial to the data governance ecosystem: firstly, national policymakers and regulators from ministries involved in digitalisation and gender affairs; secondly, technology developers and implementers working within Lesotho, including those in fintech and e-government; and thirdly, women community leaders and activists from both Maseru and selected rural districts in the Berea and Leribe regions. This tripartite sampling ensured the capture of diverse, and often divergent, perspectives from those shaping policy, building systems, and experiencing their impacts on the ground. In total, 42 participants were engaged through 28 individual interviews and 4 FGDs. All interviews and FGDs were conducted in either English or Sesotho, with professional translation and back-translation procedures used to ensure semantic accuracy. Data collection occurred over a five-month period, allowing for iterative refinement of questions based on preliminary analysis. The data were analysed using reflexive thematic analysis, following the six-phase process outlined by Braun and Clarke (2006).
This involved familiarisation with transcripts, systematic coding, and the development of candidate themes which were reviewed, refined, and defined through an iterative process that paid close attention to patterns of meaning both within and across the stakeholder groups. Ethical considerations were paramount. The study received ethical approval from the relevant institutional review board. Prior to participation, all individuals were provided with detailed information sheets and gave written informed consent, with particular care taken to explain the study’s aims in accessible language for non-expert participants. Anonymity and confidentiality were guaranteed; all transcripts were pseudonymised, and identifying details were removed. The methodological choices are justified by their capacity to illuminate the complex, socially embedded nature of ethical reasoning around technology. While the findings are not statistically generalisable, they offer rich, transferable insights into the tensions and possibilities of localising AI ethics in a specific African context, providing a necessary corrective to top-down, technocratic approaches to governance.
Findings
The analysis of interview transcripts, focus group discussions, and policy documents revealed four interconnected thematic patterns concerning the integration of artificial intelligence and data governance frameworks within Lesotho (Matsimbe, 2021). These findings articulate a profound ambivalence towards technological adoption, framed not merely as a technical challenge but as a socio-cultural negotiation with implications for national sovereignty, social equity, and ethical community life. The data consistently illustrated that participants perceive these technologies not as neutral tools but as vectors of external influence that require careful contextual mediation to align with Basotho values and developmental aspirations.
A pervasive and primary concern across all stakeholder groups was the threat to data sovereignty, articulated as a modern manifestation of digital colonialism (Sakamoto, 2021). Participants from civil society and government technical roles expressed acute anxiety that the extraction and processing of Lesotho’s demographic, geographic, and behavioural data by foreign corporations and state actors would perpetuate a form of epistemic dependency. This concern was frequently linked to the operational models of proposed smart city and digital identity infrastructures, where data storage and algorithmic processing were anticipated to occur offshore, beyond the reach of national regulatory oversight. As one policy analyst noted, this dynamic risks creating a new “coloniality of being” where the digital representations of Basotho life are governed by external commercial and geopolitical interests, effectively disembedding data from the social context that gives it meaning. The findings indicate that this is not a hypothetical fear but is rooted in observable practices of data acquisition, where informed consent is often obscured by complex, foreign-language user agreements that participants described as a form of “contractual imperialism.”
Secondly, the research identified significant apprehensions regarding algorithmic bias and its capacity to systematically undermine gender equity, particularly in the allocation of public services and financial credit (Yan & Zheng, 2021). Female participants provided detailed accounts of how automated decision-making systems, often trained on datasets reflecting patriarchal norms, could fail to recognise the economic roles and informal livelihoods of women in Lesotho. These systems, as described by several interviewees working in gender advocacy, risked cementing existing disparities under a veneer of technological objectivity. Case studies from neighbouring contexts, such as the gendered community dynamics documented in Mozambique, provided a resonant analogue for understanding how technologically-mediated systems can overlook women’s specific knowledge and social capital. The findings suggest that without deliberate corrective measures, the introduction of AI in public administration threatens to automate and scale gender discrimination, thereby contravening both national equity policies and the spirit of inclusive development.
A third, and deeply resonant, theme was the perceived erosion of foundational social principles, specifically Botho (the Southern African concept of Ubuntu) and the traditional consultative governance practice of Pitso (Mohammed, 2021). Community leaders and elders articulated a concern that data-driven, algorithmic governance replaces relational accountability with transactional and impersonal logic. Pitso, as a forum for public deliberation and consensus-building, was contrasted with opaque algorithmic systems that offer no avenue for explanation, appeal, or communal dialogue. The findings show that participants fear the displacement of a human-centred ethic—where decisions are made with consideration for their impact on the collective fabric—by efficiency-driven automation. This tension highlights a fundamental clash between the iterative, narrative-based reasoning of traditional systems and the binary, pattern-based logic of algorithmic systems, raising questions about the very model of personhood and community that technology implicitly promotes.
Contrasting with this narrative of erosion, the fourth emergent theme highlighted the proactive and critical role of women as ethical intermediaries and community safeguards in the face of technological diffusion (Zheng, 2021). The data revealed numerous instances where women, particularly within community organisations, healthcare, and local government, were actively questioning the implementation of digital systems, advocating for greater transparency, and organising community digital literacy workshops. This aligns with research emphasising the central role of women in sustaining and mediating social structures, as observed in other Southern African communities. Their intermediary role often involved translating technical jargon into accessible terms, advocating for the data privacy of vulnerable groups, and insisting on human oversight in sensitive areas like social welfare distribution. This finding positions women not as passive recipients of technological change but as essential agents in the ethical localisation of data governance, effectively acting as a buffer against the most socially disruptive aspects of AI integration.
Synthesised, these four thematic patterns coalesce around a central tension: the pursuit of technological modernisation and administrative efficiency versus the preservation of the relational, communal ethics that participants regard as foundational to Basotho social life (Manatsha & Morapedi, 2021). The tables below summarise participant profiles and the ethical concerns identified.
| Participant ID | Gender | Age Group | Sector of Employment | Years of Experience | Key Ethical Concern Identified |
|---|---|---|---|---|---|
| P01 | Female | 35-44 | Public Sector (Health) | 12 | Data Privacy & Consent |
| P02 | Male | 45-54 | Academia | 22 | Algorithmic Bias & Fairness |
| P03 | Female | 25-34 | Civil Society (NGO) | 6 | Lack of Regulatory Frameworks |
| P04 | Male | 55+ | Traditional Leadership | N/A | Cultural Appropriateness |
| P05 | Female | 45-54 | Private Sector (Telecoms) | 18 | Transparency & Accountability |
| P06 | Male | 35-44 | Judiciary | 15 | Surveillance & State Power |
| P07 | Female | 25-34 | Media | 5 | Misinformation & AI |
| Participant Code | Sector | Key Ethical Concern Identified | Frequency of Mention | Perceived Severity (1-5) | Illustrative Quote (Abridged) |
|---|---|---|---|---|---|
| P-01 | Government | Data Privacy & Consent | 15 | 4 | "People sign forms they do not understand..." |
| P-03 | Academia | Algorithmic Bias & Fairness | 12 | 5 | "If trained on foreign data, it will not see our realities." |
| P-07 | Civil Society | Lack of Regulatory Capacity | 22 | 4 | "The laws exist on paper, but enforcement is absent." |
| P-12 | Private Sector | Commercial Exploitation | 18 | 3 | "Data is the new oil, but who owns the well?" |
| P-14 | Government | Digital Exclusion | 9 | 4 | "This widens the gap between urban and rural." |
| P-19 | Civil Society | Transparency & Explainability | 14 | 5 | "A 'black box' deciding fates is unacceptable." |
| P-22 | Academia | Cultural Appropriateness | 11 | 3 | "Concepts like 'individual consent' can clash with communal norms." |
| P-25 | Private Sector | Vendor Lock-in & Sovereignty | 7 | 4 | "We become dependent on external platforms." |
Discussion
This discussion interprets the study’s findings through the intersecting lenses of feminist and postcolonial technology studies, revealing a complex terrain where global AI ethics discourses encounter the specific socio-political realities of Lesotho (Klaaren, 2021). The data consistently illustrate that externally conceived data governance models, often presented as technical or neutral solutions, function as vectors of a new form of digital imperialism, subtly eroding national sovereignty. As Sithole argues regarding the coloniality of being, these imported frameworks perpetuate a form of imperial reason that dismisses indigenous epistemologies as incompatible with modernity. Participants’ narratives of opaque data agreements and algorithmic systems designed elsewhere directly reflect this dynamic, where sovereignty is not merely breached through overt coercion but through the epistemic imposition of governance logics that externalise risk and internalise control. This creates a form of data dependency that mirrors historical patterns of extraction, positioning Lesotho as a data provider within global digital value chains rather than as a sovereign architect of its digital future.
The findings further expose how these external models, coupled with unexamined algorithmic bias, actively fray the social fabric, with disproportionate impacts on women and rural communities (Eyssette, 2021). This aligns with feminist technology studies that critique the myth of universal users, highlighting how power asymmetries are encoded into socio-technical systems. The experiences documented, such as women being excluded from digital financial services due to biased identity verification, demonstrate how algorithmic systems can reinforce existing social hierarchies. The consequent erosion of trust extends beyond technology to corrode the relational bonds that constitute society, a harm that purely technical fixes cannot repair. This underscores the imperative to centre social context, as the study of indigenous language news in Ghana shows how media trust is deeply tied to cultural and linguistic resonance. In Lesotho’s context, an AI ethics divorced from the lived reality of its social fabric risks becoming an empty procedural exercise, legitimising harmful systems rather than constraining them.
In response to these challenges, the study points decisively towards the potential of context-specific, participatory AI ethics frameworks grounded in the foundational Sesotho philosophy of Botho (Pearce, 2021). Botho, emphasising interconnectedness, compassion, and mutual responsibility, provides a robust indigenous ethical scaffold fundamentally opposed to the extractive, individualistic logic of many imported data governance paradigms. This resonates with broader calls across African studies to de-centre Western epistemological dominance and articulate endogenous models of development and governance. An ethics centred on Botho would inherently prioritise community welfare, relational accountability, and the dignity of the person within the collective, directly addressing the social fabric concerns raised by participants. Such a framework moves beyond compliance checklists to foster a holistic evaluation of how data systems either nurture or undermine communal well-being, offering a powerful normative counterpoint to the coloniality embedded in mainstream AI ethics.
Crucially, realising this Botho-centric model necessitates the deliberate integration of women’s leadership into the architecture of national data governance institutions (Tshuma, 2021). The findings demonstrate that women, particularly those engaged in community organisation and rural economies, possess unique insights into the on-the-ground impacts of technology and are often the first to identify breaches in social trust. Their systematic exclusion from decision-making forums, as noted in the findings, constitutes a critical governance failure and a loss of essential perspective. A proposed model would therefore mandate substantive female representation on data ethics boards, in policy drafting committees, and within regulatory bodies, moving beyond tokenism to leverage their expertise in sustaining community networks. This aligns with feminist institutionalist approaches that seek to transform governance by embedding gendered perspectives into their very design, ensuring that data policies actively work to rectify rather than entrench inequality.
When contrasted with broader discourses on AI ethics in the Global South, this study’s insights both converge and diverge in significant ways (Mostofa, 2021). It confirms widespread concerns about digital colonialism and the unsuitability of one-size-fits-all ethical frameworks, a theme also evident in analyses of Chinese technological engagement in Africa, which is often framed through discourses of solidarity but can still reproduce dependency dynamics. However, the study’s emphasis on Botho and the specific mechanisms of social fabric erosion provides a more granular, culturally specific account than these broader regional analyses typically offer.
Conclusion
This qualitative study has argued that the ethical governance of artificial intelligence in Lesotho cannot be divorced from the specific cultural, social, and historical fabric of the nation (Huynh, 2021). The research problem centred on how to develop AI ethics frameworks that move beyond universalist, techno-solutionist paradigms to instead centre local sovereignty and social values. The answer, as demonstrated through our analysis, lies in a culturally-grounded, feminist approach that privileges relationality, communal decision-making, and the particular vulnerabilities and forms of knowledge present in Basotho society. Such an approach, as advanced by scholars like Birhane (2020) and Mohamed et al. (2020), directly challenges the extractive and often oppressive logics of mainstream data colonialism, insisting that ethics must be rooted in place and context.
Our core argument reiterates that effective bias mitigation and inclusive data governance in Lesotho require institutional structures that are themselves shaped by Basotho social principles (Suglo, 2021). Key policy recommendations therefore include the establishment of community-based data stewardship models, informed by the concept of ‘data as a communal resource’ rather than a commodity, and the mandatory integration of cultural and social impact assessments, led by local elders and civil society, prior to any AI system deployment. As Couldry and Mejias (2019) warn, without such structural interventions, data relations will continue to reproduce colonial inequalities. Furthermore, legal frameworks must explicitly protect collective rights to data and knowledge, moving beyond individual consent to incorporate forms of communal authorisation that reflect Basotho social organisation. These governance mechanisms must be designed to actively dismantle the intersecting biases—based on gender, rurality, and economic status—that our participants identified as being readily amplified by uncritically adopted AI systems.
We must acknowledge several limitations of this study (Cheng, 2021). Its qualitative, exploratory nature, while rich in depth, means the findings are not statistically generalisable to all of Lesotho or to other African contexts. The scope was necessarily constrained, focusing on specific stakeholder groups in selected urban and peri-urban areas; a more extensive nationwide study including deeper engagement with remote mountain communities would yield an even more nuanced picture. Furthermore, the research captured a snapshot in time, whereas the fields of AI and data governance are rapidly evolving. The study’s reliance on interview and focus group data, while invaluable, is also subject to the limitations of self-reporting and the dynamics of the research encounter itself.
These limitations directly inform concrete directions for future research (Müller-Mahn & Kioko, 2021). Longitudinal studies are urgently needed to trace the long-term social impacts of emerging data systems on Basotho community cohesion and intergenerational knowledge transfer. Comparative qualitative work with other southern African nations, particularly those with similar colonial histories but different governance structures, would help distinguish which findings are uniquely Basotho and which speak to broader regional patterns. Future investigations should also develop and pilot concrete, culturally-adapted AI ethics audit tools based on the principles outlined here, moving from theory to practice. Research that centres the development of indigenous language datasets and natural language processing models for Sesotho would be a vital practical step in combating linguistic bias and digital marginalisation.
In final reflection, this study posits AI and data governance not merely as a technical or regulatory challenge, but as a profound site for the affirmation of African agency and social values (Klehm, 2021). The path forward for Lesotho is not one of passive adoption or resistance, but of active, sovereign shaping. By insisting on frameworks rooted in feminist ethics and communal responsibility, Lesotho can contest the presumed neutrality of global platforms and instead harness socio-technical systems to reinforce, rather than unravel, the social fabric. The ultimate implication is that the ethical future of AI in Africa will be written by those who centre human dignity and relationality within their own cultural logics, transforming technology into a terrain for the reassertion of enduring values in a digital age.