Governing Artificial Intelligence to benefit the UN Sustainable Development Goals
Big Tech's unregulated roll-out of experimental AI poses risks to the achievement of the UN Sustainable Development Goals (SDGs), with developing countries particularly vulnerable. The goal of financial inclusion is threatened by the imperfect and ungoverned design and implementation of AI software that makes important financial decisions affecting customers. Automated decision-making algorithms have displayed evidence of bias, lack ethical governance, and offer limited transparency about the basis for their decisions, causing unfair outcomes and amplifying unequal access to finance. Poverty reduction and sustainable development targets are put at risk by Big Tech's potential exploitation of developing countries, using AI to harvest data and profits. Stakeholder progress toward preventing financial crime and corruption is further threatened by potential misuse of AI. In light of such risks, Big Tech's unscrupulous history means it cannot be trusted to operate without regulatory oversight. The article proposes effective pre-emptive regulatory options to minimize scenarios of AI damaging the SDGs. It explores internationally accepted principles of AI governance and argues for their implementation as regulatory requirements governing AI developers and coders, with compliance verified through algorithmic auditing. Furthermore, it argues that AI governance frameworks must require a benefit to the SDGs. The article contends that proactively anticipating such problems can enable continued AI innovation through well-designed regulations adhering to international principles. It highlights the risk that unregulated AI causing harm to human interests could provoke a public and regulatory backlash, resulting in over-regulation that damages the otherwise beneficial development of AI.