The risks of artificial intelligence (AI) development have become an increasingly relevant topic: these technologies are here to stay and are woven into our daily lives in countless ways.
In this article, we gather key information about AI, the risks that have been associated with it at a social level, and resources to mitigate them.
The AI Revolution: A Global Phenomenon
AI adoption has skyrocketed, as striking statistics from McKinsey’s 2022 technology trends report show:
- A staggering 56% increase in AI adoption globally from 2015 to 2021.
- AI models now train 94.4% faster than they did in 2018.
- The patent landscape has transformed, with 30 times more AI-related patents filed in 2021 compared to 2015.
The revolution that artificial intelligence has brought also extends to the social sphere, since these technologies can support decision-making processes and thereby generate more egalitarian and efficient policies. Beyond that, the responsible use of AI could improve our jobs, health services, educational quality, and more.
Since the potential of AI to improve social well-being first entered public discussion, many governments and entrepreneurs have turned their attention to using it to solve social problems and create more effective public policies.
What are the risks of artificial intelligence?
The use of artificial intelligence systems in decision-making carries certain risks, stemming from the direct or indirect impacts these technologies can have once implemented. Some of these risks include:
- The leakage of personal data, which can compromise people’s well-being.
- Extreme surveillance and subsequent manipulation by private or government organizations with access to the information that feeds AI systems.
- “Echo chambers” or “filter bubbles,” which occur when people are repeatedly exposed to the same ideas, news, and/or facts. This is a common phenomenon among social media users and ends up reinforcing preconceived biases. It is especially dangerous among decision-makers in any area, but even more so among those who work in public policy.
- Underrepresentation of certain groups in models created with artificial intelligence, especially in issues related to access to health and education.
- Having the information but no action plan. It is as important to have the information needed to address a social problem as it is to have a roadmap to solve it.
Fortunately, these risks can be anticipated and mitigated if we ensure that those involved in the development and use of these technologies establish clear protocols for each stage of their life cycle. This approach makes it easier to organize the discussion and identify the specific risks associated with each stage, which matters because, given the iterative nature of AI development, a linear approach is not appropriate.
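To make the idea of stage-specific protocols more concrete, below is a minimal, hypothetical sketch in Python of how a team might encode a risk checklist keyed to the stages of an AI system’s life cycle. The stage names and checklist questions are illustrative assumptions, not the content of any fAIr LAC tool.

```python
# Hypothetical sketch: a risk checklist keyed to life cycle stages.
# Stage names and questions are illustrative assumptions only.
LIFE_CYCLE_CHECKLIST = {
    "design": [
        "Is the use of personal data minimized and justified?",
        "Have the affected groups been identified and consulted?",
    ],
    "development": [
        "Is the training data representative of the target population?",
        "Are model decisions documented and explainable?",
    ],
    "deployment": [
        "Is there a channel to contest automated decisions?",
        "Who is accountable for the system's outcomes?",
    ],
    "monitoring": [
        "Is performance tracked for underrepresented groups?",
        "Is there a plan to retrain or retire the model?",
    ],
}

def open_items(stage: str, answers: dict) -> list:
    """Return the checklist questions for a stage that are not yet resolved."""
    return [q for q in LIFE_CYCLE_CHECKLIST[stage] if not answers.get(q, False)]

# Because AI development is iterative, the same checks are revisited each
# time the team loops back to an earlier stage.
print(open_items("design", {"Is the use of personal data minimized and justified?": True}))
```

The point of such a structure is simply that every iteration through a stage triggers the same explicit review, rather than leaving risk identification to an informal, one-off conversation.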
Mitigating AI Risks: A Proactive Approach
At the regional level, the IDB, through its fAIr LAC initiative, has worked with governments in Latin America and the Caribbean (LAC) to promote the responsible use and adoption of AI. Building on that work with the public and private sectors, fAIr LAC has also focused on developing resources that raise awareness of both the potential benefits and the major risks that come with artificial intelligence. These resources include not only general documents but also specific tools to put the principles behind these technologies into practice.
The most recent resource is fAIr LAC at hand, a set of five open, freely available tools for those who lead projects that use AI and for the teams that develop the solutions and want to comply with ethical and risk-mitigation principles:
- An ethical self-assessment for systems developed in government agencies. This tool is a questionnaire that reviews the most important aspects to take into account to mitigate potential ethical risks, both in the design phase and during the development of an AI solution.
- Its counterpart, a self-assessment for the entrepreneurial ecosystem, aimed at both companies that use AI in their services and those that develop it.
- A manual for anyone in a public entity who is directing a project that uses decision-support systems. It is a practical guide that accompanies those responsible for formulating AI projects through the different decision stages and alerts them to the ethical risks they must take into account during the design, development, and validation of the model, as well as during its subsequent deployment and monitoring.
- A manual for the technical team in charge of developing the model. Using the life cycle of AI systems as an analysis framework, it offers technical guidance to project managers and to model development teams (which we call the technical team) to improve their decision-making and its results when building an AI solution.
- Finally, with the deployment of an AI solution and the need for accountability in mind, the Algorithmic Audit Guide. It highlights the implications and consequences of using automated systems to make or support decisions that affect people, in order to make clear why an audit is needed and what the process entails.
All of this is complemented by the MOOC “How to make responsible use of artificial intelligence?”, aimed at elected officials who are using these tools for the first time to address a public policy issue.
It is important to highlight that these tools follow a living process of iteration and calibration through practice, and always keep the human being at the center. Only in this way can we ensure that we are effectively promoting an ethical use of AI that can improve lives.
Author Bio
Glad you are reading this. I’m Yokesh Shankar, COO at Sparkout Tech and one of the founders of this highly creative space. I work mainly on digital transformation solutions for global challenges. Having worked in Fintech, supply chain, AR/VR, real estate, and other sectors that build on new-age technology, I see this space as a forum to share and seek information. Writing and reading give me more clarity about what I need.