How can we harness AI to tackle the complexity of disaster risk?
To say that artificial intelligence will reshape the way we live and work is to state the obvious. However, reflecting on its promise and perils for one's own area of work is quite another matter.

Earlier this month I had the opportunity to participate in the ITU-hosted ‘AI for Good’ summit in Geneva, where during several sessions we explored the many ways that generative, predictive and integrated AI promises a vast range of benefits for disaster risk reduction and disaster response. Later, in New York, I joined a discussion with students, academics and practitioners at Columbia University's National Centre for Disaster Preparedness, where I was struck – and greatly encouraged – by the focus on building multidisciplinary approaches to managing increasingly complex and systemic risks.

What stayed with me was a sense of convergence: the technological leaps in AI are rising to match the complexity of systemic risk. Below are five reflections on how we might use AI not only to do more, but to do better.

One: Let's start by asking the right questions

The need for deeper dialogue between the producers of solutions and their users is nothing new – but with AI tools the stakes may be higher, and the opportunities more abundant. Problems once deemed intractable – those requiring swift analysis of vast and varied data drawn from scattered sources – are now within reach. But we need to be prudent in how we allocate our resources towards these new possibilities.

We could use the new tools to build an AI-based epidemiological model for earthquakes that rapidly estimates the type and quantity of search-and-rescue and medical needs after a seismic event. We may be able to develop faster ways of alerting lightning-strike-prone communities ahead of electrical storms.
We could find ways to rapidly identify the sources of misinformation and control its spread, avoiding panic during an emergency.

To decide how and where we employ our new AI toolkits, we must articulate the demand from both disaster risk reduction practitioners and at-risk communities, and prioritize the problems that matter most.

Two: We must redefine disaster risk governance for the AI era

Our systems of risk governance were born in a simpler time, so we need to retrofit them for an AI-enabled future. Machine learning and artificial intelligence are going to redefine not only traditional professions but also traditional institutions.

For example, at present the formal institutions of the state hold the sole authority to issue alerts and ask people to evacuate ahead of an impending cyclone. We are already beginning to see competing sources of information that are sometimes more agile and more accurate, and such sources are likely to displace the traditional state institutions that currently hold that authority. We will need to find ways to ensure that decisions are streamlined while institutional accountability remains in place. Authorities must still be held responsible for taking the best possible decisions – whether those decisions are made in data-constrained or data-rich environments. We need to remember that AI is no more than a tool to help us do our jobs better.

Three: AI will become critical infrastructure

Yes – AI holds great promise for disaster risk reduction, and for just about every other sector, in many cases being put to good use keeping complex systems flowing smoothly. But AI itself relies on infrastructure – data centres, energy supplies, digital connectivity – and this too needs to be resilient to physical hazards and climate risk.
AI infrastructure is growing rapidly, spanning multiple geographies across the world. As a result, it will inevitably be exposed to a range of hazards – many of them increasingly frequent and intense. We must make sure that we plan, locate, design and build AI infrastructure to manage these risks – now and into the future. As we come to rely more on AI systems to manage disaster risk, those systems, if themselves compromised by disasters, could trigger complex cascading risks leading to potentially catastrophic systemic failure.

This infrastructure also brings sustainability challenges and, if unmanaged, will create new risks. Data centres consume huge amounts of power and water. As demand for AI grows, we will need more investment in green computing and low-resource solutions – including safeguards so that the environmental costs don't fall on those already bearing the heaviest burdens.

Four: It's time to rethink disaster education for an AI era

Over the past two decades formal disaster risk reduction education has expanded rapidly. In India alone, more than two dozen universities and colleges offer Master's degrees in disaster risk management. But many of the tasks these programmes teach – multi-sectoral policy analysis for disaster risk reduction; hazard, vulnerability and risk assessment; disaster risk reduction planning; early warning systems – are likely to be increasingly performed by AI. Such programmes will need to equip students to use the new tools, and to keep adapting as those tools evolve.

These skills must be taught not only at elite institutions – to avoid knowledge inequality we must make sure that access is widespread.
This is part of a much broader challenge: the communities that stand to gain the most from AI are those currently least served – lacking connectivity, living in data-poor zones, their voices unrepresented and ignored. There are emerging initiatives for public-good AI models trained to serve priority needs in vulnerable regions; these must be supported and encouraged so that we can fill those gaps.

Five: We must keep risk knowledge grounded in people

There is a deeper issue. If there is one single lesson from the practice of disaster risk reduction over the last three decades, it is that disaster risk is socially constructed. It is the behaviour of human beings in social, economic, political and cultural spheres that leads to the accumulation of risk in a society.

To date, AI use cases for disaster risk reduction are heavily weighted towards understanding, observing and predicting hazards. At best, they forecast impacts based on the people, capital assets and economic activity in the path of a hazard, and how vulnerable they are. They stop well short of helping us understand why those people and assets are where they are, and why they are fragile in the first place.

If we are going to use AI to foster the agency of individuals, households, communities and local governments to take actions that reduce risk, we must target not just short-term actions but also long-term development choices. AI can only work with the data it is given, and risk is often under-represented or misrepresented in marginalized areas. This is both a technical and a social issue: we must make sure that community-generated data feeds into AI-supported solutions, and that all people are given agency to act – not just to be analyzed.

We must find ways to use AI to support deeper transformations in our society that lower risk and build resilience for all.
If we fail to do this, our efforts will amount largely to more efficient band-aids.

AI opens up powerful new possibilities for disaster risk reduction. But real progress won't come from algorithms alone. It will come from asking better questions, forging stronger partnerships, and keeping justice, equity and long-term resilience at the core of our innovation.