Nigeria’s growing reliance on artificial intelligence has sparked a public debate over the technology’s opacity, with citizens and experts questioning how decisions are made. The issue comes as the federal government expands AI use in sectors like healthcare, finance, and law enforcement, raising concerns about accountability and fairness. A recent report by the Nigerian Tech Council highlighted that over 60% of AI systems used in public services lack clear documentation of their decision-making processes, leaving users in the dark.
AI’s Hidden Mechanisms
The core issue is that many AI systems operate as “black boxes,” meaning their internal logic cannot easily be explained, even by the developers who build them. This lack of transparency has led to mistrust among users, particularly in a country where technology adoption is rapidly increasing. In Lagos, a local software developer, Amina Hassan, said, “I’ve seen cases where people are denied loans or medical treatment based on AI decisions, but no one can explain why.”
Experts argue that without clear explanations, AI can perpetuate biases and errors. A 2023 study by the University of Ibadan found that 45% of AI-driven hiring tools in Nigeria had biases against women and minority groups. “If we don’t understand how these systems work, we can’t hold them accountable,” said Dr. Chidi Okoro, a computer science professor at the university.
Impact on Daily Life
For everyday Nigerians, the lack of AI transparency affects everything from accessing services to legal outcomes. In Kano, a man named Musa Ali was denied a government subsidy after an AI algorithm flagged his application as “high risk.” When he asked for an explanation, he was told the system “could not provide one.”
The problem is especially acute in rural areas, where people have less access to digital literacy and legal recourse. In Enugu, a local community leader, Nkechi Okafor, said, “Many of our people don’t even know what AI is. When something goes wrong, they don’t know who to talk to or how to challenge the decision.”
The issue also affects small businesses. A survey by the Lagos Chamber of Commerce found that 30% of entrepreneurs reported being unfairly penalized by AI systems used in tax and regulatory compliance. “We can’t run our businesses if we don’t know why the system is punishing us,” said Tunde Adeyemi, owner of a small trading company.
Government Response and Public Pressure
The Nigerian government has acknowledged the problem. In a statement, the National Information Technology Development Agency (NITDA) said, “We are working to establish clear guidelines for AI transparency, but the process is complex.”
Civil society groups are pushing for stronger regulations. The Digital Rights Alliance, a coalition of tech and human rights organizations, has called for a national AI accountability framework. “This isn’t just about technology—it’s about justice,” said Bisi Adeyemi, the group’s CEO.
Public pressure is mounting. A recent online petition demanding AI transparency has gathered over 100,000 signatures, with many calling for the government to make AI decision-making processes more accessible and explainable.
Looking Ahead
The next step for Nigeria is to draft and implement a comprehensive AI transparency policy. A draft bill is expected to be introduced in the National Assembly by the end of the year. If passed, it could set a precedent for other African nations grappling with similar challenges.
For now, citizens remain in limbo. As the AI revolution continues, the question remains: how can a society trust systems it does not understand?