Journal of Intellectual Property and Information Technology Law (JIPIT) https://journal.strathmore.edu/index.php/jipit <p>The <em>Journal of Intellectual Property and Information Technology Law</em> (JIPIT) is an academic journal founded by the <em>Centre for Intellectual Property and Information Technology Law</em> (CIPIT). CIPIT undertook this endeavour to provide a platform for academic research on Intellectual Property and Technology Law, particularly as such topics relate to the Global South. We welcome submissions originating from all geographic regions.</p> Strathmore University en-US Journal of Intellectual Property and Information Technology Law (JIPIT) 2788-6727 Editorial https://journal.strathmore.edu/index.php/jipit/article/view/622 Collins Okoh Nelly Rotich Copyright (c) 2025 Collins Okoh, Dr. Nelly Rotich https://creativecommons.org/licenses/by/4.0 2025-11-15 2025-11-15 5 1 7 10 10.52907/jipit.v5i1.622 Foreword https://journal.strathmore.edu/index.php/jipit/article/view/621 Melissa Omino Copyright (c) 2025 Dr. Melissa Omino https://creativecommons.org/licenses/by/4.0 2025-11-15 2025-11-15 5 1 11 13 10.52907/jipit.v5i1.621 Gaps in Consumer Protection Regimes: Protecting Nigerians from Algorithmic Harms in the Digital Economy https://journal.strathmore.edu/index.php/jipit/article/view/578 <p><em>The adoption of digital technologies, particularly Artificial Intelligence (AI), big data, and algorithmic decision-making, has transformed consumer markets, offering unprecedented convenience and personalization. However, these innovations also present novel and complex challenges, such as dark patterns, personalized advertising, and algorithmic pricing, which undermine consumer autonomy, transparency, and fairness. This paper critically examines how Nigeria’s consumer protection framework, notably the Federal Competition and Consumer Protection Act (FCCPA) 2018 and the Nigeria Data Protection Act 2023 (NDPA), responds to these emerging threats. 
It finds that while existing laws address traditional market imbalances, they are insufficient to regulate opaque and exploitative digital practices. Strengthening the legal and regulatory framework is therefore essential to safeguarding consumer rights in an AI-driven economy. Drawing on regulatory experiences in the European Union, the United States, and India, this paper examines the limitations of Nigeria’s current regulatory regime and offers recommendations to reform consumer protection laws to ensure optimal protection for consumers in the digital marketplace.</em></p> Ihuoma Ilobinso Copyright (c) 2025 Ihuoma Ilobinso https://creativecommons.org/licenses/by/4.0 2025-11-15 2025-11-15 5 1 17 57 10.52907/jipit.v5i1.578 Reinterpreting the Identifiability of Personal Data in the Age of Artificial Intelligence https://journal.strathmore.edu/index.php/jipit/article/view/623 <p><em>This paper analyzes a significant legal issue pertaining to the application of Artificial Intelligence (AI) to personal data, namely the re-identification of natural persons to their personal data. Since the development and use of AI technologies heavily rely on analyzing large amounts of data and identifying links among them, AI may be used to retrace and de-anonymize data about natural persons, creating new personal data protection risks. Using the permissive theory of data protection law, this paper analyzes the identifiability element of the notion of personal data within the data protection laws of Kenya and the European Union (EU) with a view to explaining how it can be interpreted so as to avoid an overly restrictive data protection regime that could hinder beneficial AI innovations and/or deprive individual data subjects of the protection that the legislator intended them to have. 
This paper proposes restricting identifiability to the time of data processing at issue, and limiting the assessment of identifiability to the controller or processor, and persons who are likely to receive the information, rather than the public and the broader community of users. That way, the study offers an option that narrows the definition of personal data to reasonable limits that facilitate responsible AI innovation while maintaining adequate and effective protection of data subjects.</em></p> Josphat Ayamunda Copyright (c) 2025 Josphat Ayamunda https://creativecommons.org/licenses/by/4.0 2025-11-15 2025-11-15 5 1 59 103 10.52907/jipit.v5i1.623 Strengthening the Kenyan Court-Annexed Mediation Against the Threat of Deepfakes and Digital Deception https://journal.strathmore.edu/index.php/jipit/article/view/624 <p><em>This paper explores how Kenya’s mediation framework, particularly the Court-Annexed Mediation (CAM) system, can adapt to an era where fabricated evidence (such as Artificial Intelligence (AI) generated deepfakes and distorted digital documents) threatens the principles of truth, trust, and fairness, which are essential elements in Alternative Dispute Resolution (ADR). This analysis focuses on mediation due to its distinctive vulnerabilities: its inherent informality, the absence of formal evidentiary rules, and the mediator’s non-adjudicative role, all of which make it exceptionally susceptible to manipulation by synthetic media. Relying on Kenya’s Civil Procedure (Court-Annexed Mediation) Rules, 2022, the Evidence Act (Cap. 80), and emerging local jurisprudence, this paper examines how the law treats digital and synthetic evidence in ADR. Further, this paper relies on interdisciplinary literature from cognitive psychology and digital ethics to frame the challenge. 
Thus, the author addresses several sub-themes, such as the ‘illusion of truth’, the cognitive tendency to believe convincing fakes, and the ethical and procedural challenges that arise when fabricated information enters the dispute resolution process. Additionally, the author explores the promises and shortcomings of technologies such as AI-assisted tools for detecting such forgeries, considering their practical (in)accessibility to Kenyan mediators. To this end, this paper argues that the future of mediation in Kenya requires mediators to become more vigilant facilitators, armed with digital awareness and supported by frameworks that enable them to broker peace in an era where appearances no longer guarantee reality. This paper proposes a set of integrated reforms by examining the evolving nature of evidence and human judgment within the facilitative process. It also proposes institutional safeguards to be deployed when handling digital materials.</em></p> Raphael Okochil Copyright (c) 2025 Raphael Okochil https://creativecommons.org/licenses/by/4.0 2025-11-15 2025-11-15 5 1 105 133 10.52907/jipit.v5i1.624 From Privacy Safeguards to Innovation Barrier: Assessing Tanzania’s Personal Data Protection Act in the Age of AI https://journal.strathmore.edu/index.php/jipit/article/view/544 <p><em>The rise of digitization has led to the widespread adoption of Artificial Intelligence (AI) technologies, driving efficiency and innovation across sectors. AI systems rely on vast datasets to enhance accuracy, but their deployment raises concerns over data privacy, misuse, and bias. In response to the growth of technology and the collection of personal data, different jurisdictions have enacted data protection laws. This study focuses on Tanzania’s Personal Data Protection Act (PDPA), which aims to regulate the processing of personal data, and critically examines its implications for AI implementation in Tanzania. 
This study analyses key provisions of the PDPA, such as restrictions on data sharing, privacy safeguards, and the role of Data Protection Commissions (DPCs), in comparison with global and regional data protection frameworks to assess their implications for AI implementation. Findings suggest that, while the PDPA is not an AI-specific law, its stringent data access controls and compliance burdens may limit AI-driven advancements. The study also highlights how excessive personal data restrictions can reduce AI accuracy and fairness, as models require diverse datasets for effective learning. The study recommends potential strategies Tanzania could adopt, including regulatory sandboxes and risk-based compliance approaches, to balance privacy protection with AI innovation. This study advocates for the adoption of AI-specific guidelines that promote ‘privacy by design’ in AI models and introduces flexible policies that support responsible AI implementation. While the PDPA establishes a crucial framework for data governance in Tanzania, its applicability requires continuous assessment to prevent unintended barriers to AI growth.</em></p> Mark-Silas Malekela Tupokigwe Isagah Copyright (c) 2025 Mark Malekela, LL.M., Dr. Tupokigwe Isagah https://creativecommons.org/licenses/by/4.0 2025-11-15 2025-11-15 5 1 135 177 10.52907/jipit.v5i1.544 Authorship, Ownership, and Ethical Issues in AI-generated Research: Implications for Nigerian Academia https://journal.strathmore.edu/index.php/jipit/article/view/520 <p><em>As generative Artificial Intelligence (AI) systems continue to transform academic research, debates over their appropriateness within the academic community continue to garner global attention. These debates are exacerbated in the Global South, where limited access to AI infrastructure and slow adoption of ethical AI guidelines heighten vulnerabilities. 
Previous studies in Nigeria have largely examined the use of AI in academia from an empirical perspective, focusing on assessing students’ and academics’ levels of awareness, attitudes, and perceptions toward tools such as ChatGPT. While these studies provide valuable insights into patterns of use and acceptance, they pay little attention to the doctrinal interpretation of authorship and ownership under copyright law, issues that become increasingly complex when research outputs are generated with or by AI. Drawing on global contexts, this paper aims to fill this gap by critically analyzing how existing copyright principles of authorship and ownership apply to AI-generated academic works in the Nigerian context. The paper finds that Nigerian copyright law remains human-centric, recognizing only works demonstrating human creativity and originality. A distinction is emerging between AI-generated and AI-assisted works: while wholly AI-produced outputs lack protection, those involving meaningful human input, such as prompting or creative direction, may attract authorship and, by extension, ownership. Thus, students or researchers who apply intellectual effort in using AI tools can still be deemed authors. Ultimately, the challenge is not whether AI belongs in academia, but how to shape its presence in ways that uphold human creativity, accountability, and justice. Therefore, Nigerian universities and regulators must develop codes of conduct, establish AI ethics committees, and align with global authorship standards to ensure ethical use of AI while promoting equitable access to AI infrastructure.</em></p> Oluchi Maduka Temiloluwa Oyundoyin Adebayo Adejumo Copyright (c) 2025 Oluchi Maduka, Temiloluwa Oyundoyin, Prof. Adebayo Adejumo https://creativecommons.org/licenses/by/4.0 2025-11-15 2025-11-15 5 1 179 225 10.52907/jipit.v5i1.520