Luck is one of those concepts that resists precise definition all the more the harder you push on it. In casual conversation it covers everything from winning a raffle to surviving a car accident, from a fortunate career introduction to finding a parking space on a busy street. It feels intuitively real — most people can point to moments in their lives where outcomes diverged dramatically from what probability alone would have predicted, and where that divergence shaped everything that followed. And yet luck, examined carefully, turns out to be extraordinarily difficult to separate from the ordinary operation of probability, incomplete information, and the human tendency to construct narratives around outcomes after they have occurred.
Artificial intelligence engages with this problem from an unusual angle. AI systems do not believe in luck. They do not feel fortunate or unfortunate. But they are required, constantly and across an enormous range of applications, to model the kind of uncertainty that luck describes — to represent, predict, and reason about outcomes that are not fully determined by available information. How they do this, and what the limits of that modelling reveal, tells us something interesting about both artificial intelligence and the nature of luck itself.
What Luck Actually Is, Formally Speaking
Before examining how AI models luck, it helps to establish what luck is in terms that a formal system can engage with. The philosophical literature on luck has converged, broadly, on a definition that involves two components: an outcome that is significantly good or bad for the person experiencing it, and an outcome that was not fully controlled or determined by that person’s choices and actions.
This definition separates luck from skill — the outcomes you can reliably produce through competence — while acknowledging that real-world results almost always involve both. A poker player who wins a hand may have played excellently and been fortunate in the cards dealt, with both contributions present simultaneously and genuinely difficult to disentangle. A scientist who makes a significant discovery may have been rigorous and methodical and also happened to notice something that a slightly different experimental setup would have obscured. Luck and skill coexist in most significant outcomes, and the interesting question is usually about their proportions rather than the presence or absence of either.
For an AI system, this conceptual landscape translates into the mathematics of probability and uncertainty. Luck, formally, is the gap between expected outcomes and actual outcomes — the deviation from what probability predicted. An event with a ten percent probability that actually occurs is not lucky in itself; it is simply one of the ten-in-a-hundred occasions when that event will occur. What makes it feel lucky is the significance of the outcome and the fact that it happened to this particular person on this particular occasion.
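On this formal reading, "luck" is just the residual between what probability predicted and what actually happened. A minimal sketch, with illustrative numbers (the 10% probability and the payoff of 100 are arbitrary assumptions):

```python
import random

random.seed(7)

p = 0.10        # probability the event occurs
payoff = 100    # significance of the outcome to the person

expected = p * payoff                            # what probability predicts: 10.0
actual = payoff if random.random() < p else 0    # what actually happened

# "Luck", formally, is the gap between the two:
luck = actual - expected
print(expected, actual, luck)
```

Across many repetitions the residual averages out to zero; it is only in the single realised instance that the gap feels like fortune or misfortune.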
Randomness in AI Systems: Controlled and Genuine
Artificial intelligence systems encounter randomness in two fundamentally different ways, and understanding the distinction is important for understanding how AI models luck.
The first is controlled randomness — randomness that is deliberately introduced into AI systems to serve specific technical purposes. Training large neural networks, for instance, involves initialising the model’s weights with random values, shuffling training data in random order, and using stochastic gradient descent — an optimisation algorithm that introduces randomness into the process of improving the model’s performance. Without this controlled randomness, neural networks tend to get stuck in suboptimal solutions. The randomness is not a bug or a limitation; it is a feature that makes the system work better.
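The three uses of controlled randomness mentioned above — random weight initialisation, shuffled data, and stochastic gradient steps — can be sketched on a toy one-parameter regression. The data, learning rate, and batch size here are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x + noise
X = rng.uniform(-1, 1, size=(200, 1))
y = 3 * X[:, 0] + rng.normal(0, 0.1, size=200)

w = rng.normal(0, 0.1)   # random weight initialisation
lr, batch = 0.1, 20

for epoch in range(50):
    order = rng.permutation(len(X))          # reshuffle the data each epoch
    for i in range(0, len(X), batch):
        idx = order[i:i + batch]
        pred = w * X[idx, 0]
        grad = 2 * np.mean((pred - y[idx]) * X[idx, 0])  # noisy minibatch gradient
        w -= lr * grad                       # stochastic gradient step

print(w)   # converges close to 3, via deliberately noisy steps
```

Each minibatch gradient is a noisy estimate of the true gradient, and it is exactly that noise which, in full-scale networks, helps the optimiser escape poor local solutions.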
Reinforcement learning — the AI training paradigm used to develop systems that learn through interaction with an environment — relies heavily on controlled randomness to ensure that an agent explores the full range of possible actions rather than immediately converging on whatever approach happens to look best in early training. An AI learning to play chess, navigate a robot through a physical environment, or optimise a logistics network needs to try many different approaches, including ones that look unpromising, to discover strategies that simpler, more deterministic learning processes would never find.
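One of the simplest versions of this exploration pressure is an epsilon-greedy policy on a multi-armed bandit: with small probability the agent tries a random action, otherwise it takes whichever action currently looks best. The reward probabilities and epsilon value below are illustrative assumptions:

```python
import random

random.seed(1)

# Three actions with hidden expected rewards; the agent must discover them.
true_means = [0.2, 0.5, 0.8]
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
epsilon = 0.1   # fraction of the time the agent deliberately acts at random

for step in range(5000):
    if random.random() < epsilon:
        a = random.randrange(3)               # explore: try anything
    else:
        a = estimates.index(max(estimates))   # exploit: best-looking action
    reward = 1 if random.random() < true_means[a] else 0
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]  # running average

print(estimates)   # the truly best action (index 2) typically dominates
```

Without the random exploration term, an agent that gets an early lucky reward from a mediocre action can lock onto it permanently — the controlled randomness is what protects learning from being derailed by early luck.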
The second form of randomness AI systems encounter is genuine uncertainty in the world they are modelling. A system predicting tomorrow’s weather, estimating the likelihood that a loan applicant will default, or assessing the probability that a particular medical symptom indicates a serious condition is dealing with situations where the outcome is not yet determined and where available information is genuinely incomplete. This is the domain where AI modelling of luck becomes most interesting — because it is also the domain that most closely resembles the situations humans describe as lucky or unlucky.
Probabilistic Reasoning: How AI Handles Uncertainty
The primary tool AI systems use to model luck and uncertainty is probabilistic reasoning — representing outcomes not as certain predictions but as distributions of possible results, each assigned a probability based on available evidence.
Bayesian inference, one of the most important frameworks in this domain, provides a systematic method for updating probability estimates as new evidence arrives. A Bayesian AI system begins with a prior probability — its best estimate of an outcome’s likelihood before observing any specific evidence — and updates that estimate systematically as relevant information becomes available. The result is a posterior probability that reflects both the prior and the evidence, weighted appropriately.
This framework handles luck elegantly at the formal level. An event that a well-calibrated Bayesian model assigns a three percent probability to and that then occurs is not, from the model’s perspective, lucky in any deep sense — it was always going to happen approximately three times in every hundred such situations. The model’s job is to assign that probability correctly, not to predict which specific instance will be the one where the event occurs. The feeling of luck attaches to the human experiencing the outcome, not to the probability estimate that preceded it.
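What "well-calibrated" means for that three percent estimate can be checked by simulation: across many such situations, the event should occur close to three percent of the time, and no single occurrence is remarkable from the model's point of view. The trial count below is an arbitrary choice:

```python
import random

random.seed(0)

# A calibrated model assigns 3% to an event; over many comparable
# situations, the event should occur roughly 3% of the time.
p = 0.03
trials = 100_000
hits = sum(random.random() < p for _ in range(trials))

print(hits / trials)   # close to 0.03
```

Each individual hit is exactly the kind of outcome a person would call lucky or unlucky; in aggregate, they are simply the distribution doing what it said it would.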
Modern machine learning systems extend this probabilistic reasoning into domains of enormous complexity. A language model generating text is, at each step, sampling from a probability distribution over possible next words — making choices that are neither fully determined nor fully random, but shaped by learned patterns weighted against controlled stochasticity. A recommendation system is estimating the probability that a particular user will find a particular piece of content valuable, given everything it knows about that user’s history. A fraud detection system is assessing the probability that a particular transaction is genuine, given patterns learned from millions of previous transactions.
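The "neither fully determined nor fully random" character of language-model generation comes from temperature-scaled softmax sampling. A minimal sketch — the vocabulary, scores, and temperature below are all hypothetical stand-ins for a real model's outputs:

```python
import math
import random

random.seed(3)

# Hypothetical scores ("logits") for four candidate next words.
vocab = ["the", "a", "luck", "probability"]
logits = [2.0, 1.5, 0.3, 0.1]
temperature = 0.8   # below 1 sharpens the distribution, above 1 flattens it

# Softmax with temperature turns raw scores into sampling probabilities.
scaled = [l / temperature for l in logits]
m = max(scaled)                               # subtract max for numerical stability
exps = [math.exp(s - m) for s in scaled]
probs = [e / sum(exps) for e in exps]

# Sample: shaped by learned scores, perturbed by controlled randomness.
next_word = random.choices(vocab, weights=probs, k=1)[0]
print(probs, next_word)
```

At temperature approaching zero this collapses into deterministic argmax selection; at high temperature it approaches uniform noise — the dial between skill-like and luck-like behaviour is explicit in the mathematics.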
Where AI Modelling of Luck Breaks Down
The probabilistic framework handles much of what luck describes, but it encounters genuine limits at the edges of the concept — particularly around what philosophers call moral luck and what practitioners call unknown unknowns.
Moral luck refers to the way that factors entirely outside a person’s control affect outcomes for which they are nonetheless held responsible. A driver who runs a red light and reaches the other side safely is lucky. A driver who runs the same red light and kills a pedestrian is unlucky — and also, in most legal and moral frameworks, more culpable, despite the decision being identical. The outcome was determined by a factor — the presence or absence of a pedestrian — that was entirely outside the driver’s knowledge or control.
AI systems can model the probability that a pedestrian will be present. They cannot model the moral significance that attaches to the outcome after the fact, or the way that outcome retrospectively colours the evaluation of the decision that preceded it. This is not a technical limitation that better data or more sophisticated algorithms will resolve — it reflects something about the nature of moral and social reasoning that probabilistic frameworks are not designed to capture.
Unknown unknowns present a related challenge. A probability model can only assign probabilities to outcomes it knows to consider. Events that fall entirely outside the model’s conception of the possibility space — what Nassim Nicholas Taleb famously called Black Swan events — cannot be assigned any probability, and therefore cannot be meaningfully planned for. The AI system that modelled financial risk before the 2008 crisis was not wrong about the probabilities it estimated. It was wrong about which events to include in its probability space. That is a different kind of error, and one that no amount of improved probabilistic reasoning within the existing framework resolves.
What AI’s Engagement with Luck Tells Us
The way artificial intelligence systems model luck illuminates something that the casual concept of luck tends to obscure: that most of what we call luck is actually the operation of probability in conditions of incomplete information, and that the feeling of luckiness or unluckiness is a narrative we construct around outcomes after they occur rather than a property of the outcomes themselves.
AI systems are relentlessly literal about this. They do not feel lucky when a low-probability event occurs in their favour. They update their models, adjust their estimates, and continue reasoning. The outcome is noted; the surprise that a human would experience is absent. In this sense, AI modelling of luck functions as a kind of philosophical corrective — a demonstration that the gap between expected and actual outcomes is simply the normal operation of a probabilistic universe, and that the stories we tell about fortune and misfortune are added by minds that prefer narrative to mathematics.
This does not make luck meaningless. The outcomes that we describe as lucky or unlucky are real, and their significance to the people who experience them is not diminished by the fact that a probability distribution somewhere assigned them a non-zero likelihood. But understanding how AI handles the mathematics behind luck provides a clearer view of what luck actually is — and, perhaps, a slightly more equanimous relationship with the outcomes we cannot control.
