
In today’s complex and rapidly evolving world, we face the certainty of unpredictable risks. We are living in the era of “datocracy”, in which data has become increasingly strategic: it provides new sources of information that simplify and optimize the underwriting of insurance policies and financing, while at the same time enhancing our understanding of risk, allowing more refined and granular categorizations through AI-based platforms.
Organizations can leverage technology and automated data analysis to pursue organizational and operational resilience in a more structured and conscious manner. However, it is crucial to effectively manage the risks associated with data and its processing by AI. This involves ensuring a “perfect triangulation” among data, AI, and risks through a risk-based and resilience-based approach to prevent everything from devolving into a “Bermuda triangle”.
We are witnessing an enormous growth in data, known as Big Data, which in turn is driving the growth of AI-based decision-making. Analysing Big Data yields insights that improve the processes and technologies, including AI and machine learning, used to combine and examine large data sets. The goal is to identify patterns and develop actionable insights that support faster, better decisions, increasing efficiency, revenue, and profits while helping to manage organizational risks.
Digital technology has changed our lives, and we need to harness its potential. Big Data connects people, things, and systems, helping us organize and optimize in real time. AI affects our lives deeply, offering new strategies and predicting risks, and it will change how we manage risks and make decisions. Inevitably, risk management and organizational resilience frameworks will undergo paradigm shifts: they will adopt preventive measures that, in many cases, are activated automatically and that, thanks to more accurate monitoring and reporting, allow more appropriate and successful decisions to be made.
The automation of risk management and modeling is now increasingly widespread, improving the quality of decisions not only within organizations but also among their various stakeholders, starting with the insurance, consulting, and banking sectors.
In this context, data – processed by AI technology – becomes increasingly strategic as it can simplify and optimize the underwriting of insurance policies as well as loans and, at the same time, improve the understanding of risk, allowing more “refined” and granular categorizations.
Innovative technologies that utilize data not only enhance our capabilities but also actively shape and guide them, for better or worse. It is therefore increasingly essential to ensure a “perfect triangulation”, that is, a calibrated synthesis of Big Data, AI, and risk management. This triangulation serves as a valuable lever for securing a resilient and sustainable future, facilitating the ongoing digitization process, which inevitably affects both processes and ways of working, reshaping their very nature and meaning. To prevent this “perfect triangulation” from becoming a “Bermuda triangle”, we must ensure that technology remains a tool at the service of humanity.
This requires adequate knowledge of AI technology, and of the quality of the data on which it feeds, so that its intrinsic risks can be avoided where possible and managed where not.
The European Union is moving in this direction with the AI Act and the Cyber Resilience Act, which highlight the need for a change in the approach to risk in a context of technological innovation based on AI systems and security guarantees.
It is interesting to note that, until now, organizations have implemented risk management systems that define risk appetite and tolerance only after carrying out identification, measurement, treatment, and the necessary mitigation actions, whereas AI-based models consist of complex functions that offer no visibility into, or understanding of, the logic they follow or the structure of their decision-making process. Furthermore, since machine learning algorithms are trained on input data generated by people, an algorithm’s decision-making process carries the same biases that affect human decisions and is influenced by the culture, viewpoints, and stereotypes of those who “feed” it (in jargon, “garbage in, garbage out”; or, making a bold transposition and invoking the German philosopher Feuerbach, AI is what it “eats”).
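The “garbage in, garbage out” point can be made concrete with a toy sketch (all names and figures below are invented for illustration): a model “trained” on biased historical approval decisions faithfully reproduces that bias in its own decisions.

```python
# Hypothetical illustration of "garbage in, garbage out": a trivial model
# learns the historical approval rate per applicant group, so any bias in
# the historical decisions becomes the model's own behaviour.

def train_rate_model(history):
    """'Train' by counting approvals per group in the historical data."""
    rates = {}
    for group, approved in history:
        n, k = rates.get(group, (0, 0))
        rates[group] = (n + 1, k + (1 if approved else 0))
    # Learned "model": approval probability per group
    return {g: k / n for g, (n, k) in rates.items()}

# Biased history (invented): group B was approved far less often than group A.
history = [("A", True)] * 90 + [("A", False)] * 10 \
        + [("B", True)] * 50 + [("B", False)] * 50

model = train_rate_model(history)
# model["A"] is 0.9 and model["B"] is 0.5: the bias in the data
# has become the model's decision rule.
```

Nothing in the algorithm is “unfair” in itself; the unfairness lives entirely in the data it was fed, which is exactly the point the paragraph above makes.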
The variables introduced by AI and machine learning are therefore poorly suited to traditional risk management, which does not account for the “black box” effect of opaque AI and machine learning models. As a result, organizations will increasingly need to update their risk management systems, factor in considerations such as data ethics, align them with corporate values, and, at the same time, be able to justify and explain the intent behind the use of data processed by AI.
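Even when a model’s internals are opaque, its behaviour can be probed from the outside. The sketch below (a simplified, hypothetical stand-in, not any specific library’s method) uses a permutation-style sensitivity check: flip one input feature at a time and count how often the output changes, revealing which features actually drive the decision.

```python
# Minimal sketch of probing an opaque model from the outside.

def black_box(x):
    # Stand-in for an opaque model; internally it only looks at feature 0.
    return 1 if x[0] > 0.5 else 0

samples = [(0.9, 0.1), (0.2, 0.8), (0.7, 0.7), (0.1, 0.2)]

def sensitivity(model, samples, feature):
    """Fraction of samples whose output changes when one feature is flipped."""
    changed = 0
    for x in samples:
        perturbed = list(x)
        perturbed[feature] = 1.0 - perturbed[feature]  # flip the feature
        changed += model(x) != model(tuple(perturbed))
    return changed / len(samples)

# sensitivity(..., 0) is 1.0: feature 0 drives every decision.
# sensitivity(..., 1) is 0.0: feature 1 is ignored entirely.
```

This kind of black-box probing is one concrete way to start “justifying and explaining” a model one cannot inspect directly.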
It is therefore more necessary than ever to build risk management directly into the design phase of AI models, so that supervision is constant and runs in parallel with both internal development and external procurement of AI across the organization, adopting a “derisking AI by design” approach.
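One minimal, hypothetical sketch of what “derisking by design” can mean in practice: a risk control built into the model itself rather than bolted on afterwards. Here the model refuses inputs outside the range it was trained on, instead of silently extrapolating (the class, its names, and the learned function are all invented for illustration).

```python
# Hypothetical "derisking by design" guard: the model carries its own
# validity envelope and refuses inputs it was never trained to handle.

class GuardedModel:
    def __init__(self, train_inputs):
        # Record the training range as part of the model's design
        self.lo = min(train_inputs)
        self.hi = max(train_inputs)

    def predict(self, x):
        if not (self.lo <= x <= self.hi):
            # The risk control fires before any prediction is made
            raise ValueError(
                f"input {x} outside training range [{self.lo}, {self.hi}]"
            )
        return 2 * x  # stand-in for the learned function

m = GuardedModel([1.0, 2.0, 3.0])
m.predict(2.5)    # returns 5.0: inside the envelope, so prediction proceeds
# m.predict(10.0) would raise ValueError: the control is part of the design
```

The design choice is that the refusal logic ships with the model, so every deployment inherits the control automatically, which is the spirit of supervision being “constant and simultaneous” with development.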
Furthermore, every organization that uses an AI model must be able to ensure the development of a governance system that considers:
Therefore, organizations will need to update their risk management systems to be more flexible, agile, and adaptive. We can boldly say that AI-based technology almost becomes a “magister” – in the Latin meaning of the term, i.e. the one who shows the way – facilitating organizations in:
Technology, throughout history, has helped to improve people’s living conditions. Digital transformation should, therefore, be seen as part of a virtuous cycle—where technology is both shaped by people and, in turn, supports them throughout their lives.
It is essential to remember that technology, as a human creation, must be continually examined through the lens of the human sciences. In this context, the discipline of algor-ethics, focused on the ethical development of artificial intelligence, emerges as a crucial means of ensuring that humans remain at the centre of decision-making processes, serving as a potential counterbalance to the risk of algocracy.



