Monte Carlo Simulation (Python) Based on Fizell (2022)

To understand how to model risks with high variance, I developed a Python script using numpy and scipy to run a Monte Carlo simulation. Instead of relying on a single “average” prediction for future risk exposure (e.g., potential financial loss), the simulation ran 1,000 randomised iterations based on historical volatility.

  • Key Concept Applied: The script used norm.ppf (the percent point function, i.e. the inverse cumulative distribution function) to map uniform random numbers onto a normal distribution with a specified mean and standard deviation, allowing the simulation to capture best- and worst-case tail scenarios.
  • Outcome: The output provided a probability distribution rather than a single number. This allowed me to state with 95% confidence that the potential risk exposure would fall within a specific range, providing a far more defensible metric for stakeholders than a “High/Medium/Low” label.
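To make the percent point function concrete, here is a short illustration of my own (not part of Fizell's script): norm.ppf is the inverse of the cumulative distribution function, so feeding it a probability returns the value at that quantile of the distribution.

```python
from scipy.stats import norm

# norm.ppf is the inverse CDF: probability in -> quantile out.
# For the standard normal, the 97.5th percentile is ~1.96 (the familiar
# two-tailed 95% confidence multiplier).
print(norm.ppf(0.975))                      # ~1.959964

# With loc/scale it maps onto an arbitrary normal distribution: the
# median (p = 0.5) of the N(5000, 1500) daily-loss model is the mean itself.
print(norm.ppf(0.5, loc=5000, scale=1500))  # 5000.0
```

This is exactly the mechanism the simulation below exploits: passing an array of uniform random numbers through norm.ppf yields normally distributed samples (inverse transform sampling).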

Description: This script simulates 1,000 potential outcomes for a financial risk scenario (e.g., cost of a data breach) using historical volatility data. It calculates the 95% confidence interval (Value at Risk).

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# --- CONFIGURATION ---
# Scenario: estimating potential financial loss from a supply chain disruption.
# Based on historical data, we assume daily losses are normally distributed.
simulations = 1000       # Number of iterations
days_to_forecast = 30    # Duration of the risk event (days)
avg_daily_loss = 5000    # Mean daily loss in GBP
std_dev_loss = 1500      # Volatility (standard deviation) in GBP

# --- MONTE CARLO SIMULATION ---
def run_simulation():
    results = []
    for _ in range(simulations):
        # Inverse transform sampling: norm.ppf maps uniform random
        # numbers (0-1) onto the normal curve defined by loc and scale.
        daily_losses = norm.ppf(np.random.rand(days_to_forecast),
                                loc=avg_daily_loss, scale=std_dev_loss)
        # Total loss for this 30-day iteration
        results.append(daily_losses.sum())
    return np.array(results)

# --- EXECUTION & ANALYSIS ---
simulated_costs = run_simulation()

# Key metrics
mean_cost = np.mean(simulated_costs)
worst_case = np.percentile(simulated_costs, 95)  # 95th percentile (Value at Risk)
best_case = np.percentile(simulated_costs, 5)    # 5th percentile

print("--- RISK FORECAST (30 DAYS) ---")
print(f"Mean Expected Cost: £{mean_cost:,.2f}")
print(f"95% Confidence Worst Case: £{worst_case:,.2f}")
print(f"5% Confidence Best Case: £{best_case:,.2f}")

# Optional visualisation:
# plt.hist(simulated_costs, bins=50)
# plt.show()

References:

Fizell, Z. (2022) How to Create a Monte Carlo Simulation using Python. Available at: https://towardsdatascience.com/how-to-create-a-monte-carlo-simulation-using-python-c24634a0978a/

The Role of AI in Risk Management

Based on Kalogiannidis et al. (2024)

1. How does NLP improve the efficiency and accuracy of risk assessment processes? Natural Language Processing (NLP) fundamentally shifts risk assessment from a manual, labour-intensive process to an automated one capable of handling vast datasets. Kalogiannidis et al. (2024) highlight that over 80% of enterprise data is unstructured (e.g., text reports, social media), which traditional quantitative methods often struggle to process. By automating the analysis of this unstructured data, NLP significantly speeds up risk identification, a finding supported by 70.2% of the technology specialists surveyed. Furthermore, NLP reduces the human bias and error inherent in manual qualitative assessments, with 79.2% of respondents agreeing that it improves identification accuracy.
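As a toy illustration of this idea (my own sketch, not the methodology of Kalogiannidis et al., and the risk lexicon is invented), even very simple NLP can triage unstructured text by scoring reports on risk-term frequency so analysts review the riskiest documents first:

```python
import re
from collections import Counter

# Illustrative lexicon; a real deployment would use a curated or learned one.
RISK_TERMS = {"breach", "outage", "fraud", "penalty", "delay"}

def risk_score(report: str) -> int:
    """Count occurrences of risk-related terms in a free-text report."""
    tokens = re.findall(r"[a-z]+", report.lower())
    counts = Counter(tokens)
    return sum(counts[t] for t in RISK_TERMS)

reports = {
    "weekly_ops": "Shipping delay resolved; no penalty incurred this quarter.",
    "incident_07": "Data breach confirmed. Breach scope unknown; outage ongoing.",
}
# Rank reports so the highest-risk document surfaces first
ranked = sorted(reports, key=lambda k: risk_score(reports[k]), reverse=True)
print(ranked)  # ['incident_07', 'weekly_ops']
```

Production systems replace this keyword counting with trained language models, but the workflow is the same: unstructured text in, ranked risk signals out, with no manual reading required for the first pass.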

2. In what ways can AI-powered data analytics enhance risk prediction and support business continuity? AI-powered analytics enables a transition from reactive to proactive risk management. Unlike traditional methods that rely on historical data and static risk factors, AI analytics can detect subtle patterns and anomalies in real-time streams. The study found that 71.5% of respondents agreed AI enhances the accuracy of predicting potential risks, rather than just reporting on past ones. Crucially, for business continuity, these tools allow for the rapid identification of “emerging risks” that have not yet materialised, with 93.5% of professionals noting that it supports a proactive approach.
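The shift from static thresholds to real-time pattern detection can be sketched (again my own illustration, not the study's model) with a rolling z-score: each new observation is compared against the mean and spread of a recent window, so anomalies are flagged as they arrive rather than in hindsight.

```python
import numpy as np

def detect_anomalies(stream, window=20, threshold=3.0):
    """Flag indices where a value deviates more than `threshold`
    standard deviations from the preceding `window` observations."""
    stream = np.asarray(stream, dtype=float)
    flagged = []
    for i in range(window, len(stream)):
        recent = stream[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(stream[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Steady daily metric with one injected shock at index 40
rng = np.random.default_rng(0)
metric = rng.normal(100, 5, size=60)
metric[40] = 180  # emerging-risk spike
print(detect_anomalies(metric))  # includes index 40
```

The rolling window is what makes this proactive: the baseline adapts to recent behaviour, so an “emerging risk” is caught the moment it breaks the pattern, not after a quarterly review of historical data.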

3. Why is it important for businesses to integrate multiple AI technologies, beyond just NLP? While NLP is effective for efficiency, the study’s regression analysis indicates its direct impact on business continuity is only moderate compared to other technologies. In contrast, the integration of AI into Incident Response Planning demonstrated the highest statistical impact on minimising business disruption (coefficient of 0.361). Therefore, a “comprehensive strategy” is required: NLP for data processing, predictive analytics for identifying emerging threats, and AI-driven incident response to enhance resilience during crises. Relying solely on one tool leaves gaps in the Risk Management Process.

References

  • Kalogiannidis, S., Kalfas, D., Papaevangelou, O., Giannarakis, G. and Chatzitheodoridis, F. (2024) ‘The Role of Artificial Intelligence Technology in Predictive Risk Assessment for Business Continuity: A Case Study of Greece’, Risks, 12(2), p. 19. Available at: https://doi.org/10.3390/risks12020019