Edward Jackson and Brian Rowe

On the outskirts of Dhaka, in the soft blue glow of a bank of computers, young engineers customize software for businesses from around the world. In a village outside Chittagong, a mother logs into an online-money platform to transfer tuition funds to her daughter in a nearby college town.

Both the software company and the mobile-money provider use machine learning and other forms of artificial intelligence (AI) to gain efficiency, targeting and scale in serving their customers’ diverse and evolving needs.

Capable of visual perception, speech recognition and decision-making, AI has given us self-driving vehicles, robots and ‘intelligent software agents’ such as Siri and Alexa. These technologies are designed to continuously ‘learn, adapt and improve’ autonomously.

In Bangladesh and elsewhere in the Global South, AI innovations co-exist with extreme social challenges, especially income inequality, ethnic conflict, gender violence and climate change. The extent to which AI can either amplify or reduce these problems is an increasingly pressing question.

In fact, a growing number of organizations across the globe are trying to understand how AI has already begun to shape economies and societies:

  • The Global Commission on the Future of Work of the International Labour Organization has called for inclusive labour markets and a human-centred agenda that can seize the positive opportunities created by automation while also mitigating its negative effects.
  • The World Economic Forum’s Centre for the Fourth Industrial Revolution promotes new forms of multi-stakeholder collaboration that ‘optimize accountability, transparency, privacy and impartiality’ in machine learning and AI.
  • The non-profit OpenAI builds artificial intelligence applications aimed at delivering widely distributed benefits and long-term safety, and works to avoid uses of AI ‘that harm humanity or unduly concentrate power’.
  • Google’s AI for Social Good platform helps non-profits to integrate insights from Google’s AI research into their programming, to access funding through an open call competition, and to build their knowledge through online training and guides.
  • Late last year, the customer-relationship-management software firm Salesforce became the first global technology company to hire an executive-level Chief Ethical and Humane Use Officer, who is sparking questions around technology and the public good worldwide.

What does all this mean for international development agencies? As the economies of low- and middle-income countries change rapidly, particularly in Sub-Saharan Africa and South Asia where large numbers of the poor reside, are development NGOs keeping pace? What capacities should they build to navigate and fully engage with this emerging, AI-powered world?

There are five spheres of practice in which development NGOs should consider tooling up:

Managing the automation of critical industries: In sectors such as auto or garment manufacturing, widespread AI applications, notably through robotification, could eliminate millions of low- to mid-wage jobs that now support livelihoods and maintain social peace in the Global South. Mitigating the negative impacts of automation while introducing new skills-training and social-protection programs is an important task. Researchers with the International Monetary Fund find that women workers, especially those over the age of 40, are more vulnerable than men to losing their jobs to automation. In fact, the IMF estimates that women could lose as many as 180 million jobs worldwide to changing technologies over the next 20 years. From targeted training subsidies to affordable health care and even provision of a universal basic income, smart public policy will be necessary to ease the transition of both genders to the economy of the future. Direct engagement of the poor and marginalized, and the workers most affected, can help design effective solutions to the labour-market challenges that are already being triggered by AI. Such participatory development processes can help anchor and optimize community, rather than elite, benefits.

Deepening democracy in the era of the surveillance state: As George Soros has underscored, sophisticated surveillance AI is being used by authoritarian regimes to monitor and contain civil-society activists working for democracy and human rights. While the US, Russia and other countries are active in this area, China has become the chief purveyor of surveillance technology inside its own borders as well as overseas.

Banning fully autonomous weapons: Nobel laureates, human rights activists and AI scientists have joined the Campaign to Stop Killer Robots. Among other things, such weapons would decide who lives and who dies, could not be held accountable, and could too easily be used for suppressing civilian dissent.

Expanding program impact through machine learning: Educate Girls, a non-profit in India, used machine learning to analyze large, geo-tagged census and school-enrolment datasets to identify new village clusters where its programs could serve an additional 600,000 girls within its current five-year budget. It was crucial that the micro-level granularity of the data matched that of the non-profit’s programming decisions.
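The Educate Girls approach of matching granular data to granular programming decisions can be sketched in a few lines. The model, features and figures below are entirely hypothetical — a toy stand-in for the far richer predictive model the organization actually used — but they illustrate the core idea: score each village by predicted need, then target the highest-scoring clusters.

```python
# Hypothetical sketch of village-level targeting. The feature names and
# model weights are illustrative only, not drawn from Educate Girls' work.

def predict_out_of_school_girls(village):
    """Toy linear model: estimate unmet need from census-style features."""
    return (0.6 * village["girls_aged_6_14"]
            - 0.8 * village["girls_enrolled"]
            + 0.1 * village["households_below_poverty_line"])

def rank_villages(villages, top_k):
    """Return the names of the top_k villages by predicted need."""
    scored = sorted(villages, key=predict_out_of_school_girls, reverse=True)
    return [v["name"] for v in scored[:top_k]]

# Illustrative village records (all figures invented).
villages = [
    {"name": "A", "girls_aged_6_14": 400, "girls_enrolled": 380,
     "households_below_poverty_line": 120},
    {"name": "B", "girls_aged_6_14": 350, "girls_enrolled": 150,
     "households_below_poverty_line": 200},
    {"name": "C", "girls_aged_6_14": 500, "girls_enrolled": 460,
     "households_below_poverty_line": 90},
]

print(rank_villages(villages, top_k=2))  # village B, with low enrolment, ranks first
```

The design point carried over from the Educate Girls case is that the unit of prediction (the village) matches the unit of programming decision, so a high score translates directly into an operational choice.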

Deploying AI for more effective fundraising: New AI-driven tools can analyze large quantities of data on donor behaviour and preferences, prioritize the most likely prospects for giving, and even send emails to fundraising staff with a pre-drafted, customized message for each potential donor. This automated administrative support frees up fundraisers to invest more time in building the human relationships so essential to the giving process.
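A minimal sketch of this kind of fundraising triage might look as follows. Everything here — the engagement features, the model weights, the threshold and the message template — is an invented illustration, not a description of any particular vendor's product; real tools would train such a model on historical giving data rather than hand-pick weights.

```python
# Hypothetical donor-prioritization sketch: a toy logistic model scores
# prospects, and a template pre-drafts a note for a human fundraiser.
import math

def giving_propensity(donor):
    """Toy logistic model mapping engagement signals to a 0-1 score."""
    z = (1.5 * donor["gave_last_year"]
         + 0.8 * donor["events_attended"]
         + 0.002 * donor["lifetime_giving"]
         - 2.0)
    return 1 / (1 + math.exp(-z))

def draft_outreach(donor):
    """Pre-draft a customized note for a fundraiser to review and send."""
    return (f"Dear {donor['name']}, thank you for your past support. "
            f"May we share an update on the programs you helped fund?")

def prioritize(donors, threshold=0.5):
    """Return (name, score, draft) triples for likely givers, best first."""
    scored = sorted(donors, key=giving_propensity, reverse=True)
    return [(d["name"], round(giving_propensity(d), 2), draft_outreach(d))
            for d in scored if giving_propensity(d) >= threshold]

# Illustrative donor records (all figures invented).
donors = [
    {"name": "Amina", "gave_last_year": 1, "events_attended": 2,
     "lifetime_giving": 500},
    {"name": "Rafiq", "gave_last_year": 0, "events_attended": 0,
     "lifetime_giving": 100},
]

for name, score, draft in prioritize(donors):
    print(name, score, draft)
```

Note that the automation stops at a draft: the human fundraiser reviews and sends the message, which is precisely the division of labour the paragraph above describes.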

It is time for NGOs to build their capacities in artificial intelligence and, with no precedent to follow, they must make this new path by walking it. Purposeful debates and discussions of AI issues and innovations should infuse the sector’s conferences, webinars and podcasts. Recruiting AI advisors and staff will spark further learning inside individual organizations.

And specialization matters. In substantive terms, NGOs working in sustainable agriculture must understand and act on a set of AI issues and innovations that will be distinct from those of, for example, human rights defenders.

Universities and colleges can help, too. Worldwide, there is growing interest in fostering campus-community partnerships to address wicked societal problems. The cluster of issues around AI could be an ideal focal point for such collaboration, with the active support of development NGOs.

To borrow an ice-hockey metaphor, development NGOs must move to where the puck will be, rather than where it is now. However, they must recognize that AI is a new kind of puck. It is not just traveling faster; it is actually accelerating at an exponential pace.

Ultimately, development NGOs must enable and steward the authentic representation of local communities and national institutions in the Global South in shaping and directing the application of AI, and they must do this in ways that minimize harm and that maximize the public benefit.


Carleton University adjunct research professor Edward Jackson and international-development consultant Brian Rowe are studying the interface between artificial intelligence and social justice.

Thanks to Jim Delaney, Jacob Jackson and Ian Smillie for their comments on an earlier version of this paper.