On the Margin of Safety and Value at Risk
Sydwell Rammala

Jan 29
In the controlled chaos of an airport runway, a fraction of a second or a few feet can mean the difference between routine and disaster. The Atlantic's exploration of the uptick in near-collision incidents in American skies paints a quietly alarming picture. Termed “airprox” events, these close encounters between aircraft are on the rise, driven by understaffed control towers, outdated radar systems, and an aviation industry stretched thin. Yet, despite these warnings, regulatory responses have been slow and understated. The incidents are treated more like statistical flukes than red flags waving furiously in plain sight.
These close calls aren’t confined to the sky. On the ground, particularly during thunderstorms, similar patterns emerge. In a detailed piece by Scientific American, the deceptive safety of surviving a lightning strike, or narrowly missing one, creates its own psychological traps. People often interpret these near misses as protective signs or proof that their instincts were correct. Someone might see a bolt strike just yards away and feel reassured rather than alarmed, convinced that standing under that lone tree wasn’t such a bad idea after all. Storm chasers, too, are susceptible to this illusion; repeated brushes with danger that end without consequence can breed overconfidence, making them more likely to push boundaries or downplay escalating risk. The reality, of course, is that proximity to danger doesn’t equal immunity. It’s an illusion of safety born from surviving the arbitrary.
Across domains, from air travel to weather to games of chance, the human tendency to downplay or misinterpret near misses persists. In technical terms, a near miss is simply an event that could have caused harm but didn’t. But instead of prompting introspection or change, these events often go unexamined. When properly understood, they are golden opportunities: rare, cost-free previews of what might go wrong if no one intervenes. In safety science, they’re supposed to be the canaries in the coal mine. But when they’re ignored, normalized, or misunderstood, we risk walking blindly into disaster.
Perhaps nowhere is the psychological power of the near miss more apparent than in the gambling world. Slot machines are notorious for engineering “almost wins”: two cherries land on the payline while the third stops just shy of it. These signals trigger dopamine release and heighten motivation, even though the outcome is entirely random. Researchers have documented how these moments, though technically losses, produce brain activity similar to actual wins.
Virtual Reel Mapping: A Foundation for Illusion
With slot machines, the heart of near-miss design is virtual reel mapping, a technique that distorts players' perception of a game's odds. Each stop on a long virtual reel is mapped to a stop on the shorter physical reel the player actually sees, and developers use clustering to map a disproportionate share of virtual stops to the blanks adjacent to high-value symbols. When those blanks land on the central payline, the desirable symbol sits just above or below it, creating a visual tease that suggests a favorable outcome was narrowly missed.
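A minimal sketch of the idea in Python, where the reel layout, symbol names, and stop weights are illustrative assumptions rather than any manufacturer's actual mapping tables:

```python
import random

# Hypothetical 22-stop physical reel (what the player sees). The stops
# flanking the jackpot, positions 1 and 21, are blanks.
PHYSICAL_REEL = ["JACKPOT"] + ["blank", "cherry", "blank", "bar",
                               "blank", "cherry", "blank"] * 3

# Hypothetical 64-stop virtual reel: each virtual stop maps to one
# physical stop. Clustering means the jackpot's two flanking blanks
# soak up 24 of the 64 virtual stops, while the jackpot itself gets only 2.
VIRTUAL_MAP = (
    [0] * 2                            # JACKPOT itself: 2 of 64 stops
    + [1] * 12 + [21] * 12             # its flanking blanks: 24 of 64
    + [i % 19 + 2 for i in range(38)]  # remaining physical stops, spread evenly
)

def spin_reel() -> int:
    """Draw a virtual stop uniformly at random; return the physical stop shown."""
    return VIRTUAL_MAP[random.randrange(len(VIRTUAL_MAP))]

trials = 100_000
stops = [spin_reel() for _ in range(trials)]
print(f"jackpot on payline:   {stops.count(0) / trials:.2%}")                    # ~3.1%
print(f"jackpot one stop off: {sum(s in (1, 21) for s in stops) / trials:.2%}")  # ~37.5%
```

The draw itself is genuinely uniform; the bias lives entirely in the many-to-one mapping, which is precisely what lets the near miss arise “naturally” from a certifiably random number.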
Legal and Ethical Tensions
The near-miss effect has long been a point of contention in the gambling industry. In 1984, legal counsel for gaming giant IGT acknowledged clustering's psychological power, warning that such practices might constitute false advertising. Despite ethical concerns, the competitive advantage of near misses proved too lucrative for the industry to resist.
A significant legal challenge to near-miss practices occurred in the late 1980s when the Nevada Gaming Control Board scrutinized a Japanese company's slot machines. These devices programmed near-miss outcomes directly onto the central payline, rather than generating them through clustering. While competitors argued the psychological effect was identical, the board deemed this approach illegal. The ruling prohibited secondary software from altering outcomes after random generation, favoring "naturally occurring" near misses generated through virtual reel mapping.
Critics argue that this distinction was arbitrary and perhaps designed to protect American gaming companies from foreign competition. Regardless, the ruling entrenched near-miss techniques within the industry, enabling their continued use while maintaining the guise of regulatory oversight.
The Psychology of Near Misses
Near misses are psychologically compelling because they recast losses as potential wins. Gamblers often interpret these events as proof they are "getting closer" to success, motivating continued play. This phenomenon is supported by theories like:
Prospect Theory: Near misses are framed as missed gains, making losses feel like partial wins and encouraging continued play.
Loss Aversion: The discomfort of narrowly missing a reward drives players to keep playing in an attempt to reverse the loss.
B.F. Skinner, a renowned behaviorist, observed in 1953 that near misses increase the likelihood of continued play without any additional cost to the machine's owner. Unlike a jackpot or even a small win, a near miss doesn’t require a payout, yet it still stimulates the same behavioral effect. Players experience heightened arousal and anticipation, which encourages the decision to play again, even in the absence of an actual reward. This makes near misses a uniquely efficient psychological tool: they trigger the motivational effects of a win while preserving the profitability of a loss. For the gambling industry, this represents a powerful cost-effective mechanism, one that keeps players engaged, spending, and chasing outcomes that remain just out of reach.
On Value-at-Risk
Risk managers in finance grapple with a mirror image of the near-miss problem every day. Instead of planes passing within a whisker of disaster, they watch simulated price paths that dip deep into the loss tail before popping back above water. In Monte-Carlo Value-at-Risk models, those “ghost losses” show up as the worst 1% or 5% outcomes: no cash actually leaves the account, yet the model insists it could. Treating them as statistical curiosities, much like air-traffic officials waving off an “airprox” as a one-in-a-million fluke, breeds the same false comfort. The uncompromising maths behind VaR is only half the story; the other half is whether managers respond or merely note that nothing really happened and move on.
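To make those ghost losses concrete, here is a minimal Monte-Carlo VaR sketch in Python. The portfolio size, drift, volatility, horizon, and the lognormal price model are all assumptions chosen for illustration, not a production risk engine:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Assumed portfolio and market parameters (illustrative only).
portfolio_value = 1_000_000.0   # current mark-to-market, in dollars
mu, sigma = 0.05, 0.20          # annualised drift and volatility
horizon = 10 / 252              # 10 trading days, in years
n_paths = 100_000

# Simulate terminal P&L under geometric Brownian motion.
z = rng.standard_normal(n_paths)
growth = np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)
pnl = portfolio_value * (growth - 1)

# VaR at level alpha is the loss at the alpha-quantile of simulated P&L.
for alpha in (0.05, 0.01):
    var = -np.quantile(pnl, alpha)
    print(f"{1 - alpha:.0%} 10-day VaR: ${var:,.0f}")
```

Every path in that worst tail is a ghost loss: money that left the account only inside the simulation.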
Seen through this lens, a VaR exceedance during back-testing is the financial equivalent of a lightning bolt that strikes the fence instead of the crowd. It costs nothing in the moment, but it shouts about a vulnerability in plain sight: position sizes too large, vol estimates too stale, correlation assumptions too cosy. Just as storm-chasing meteorologists advocate using near strikes to recalibrate safety guidelines, risk desks should treat every model breach, or even clusters of near-breaches, as data points to tighten scenarios, fatten tails, or lengthen look-back windows. Ignored, these statistical events become the prelude to the day liquidity evaporates and the theoretical loss turns painfully real.
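A back-test breach count, the financial analogue of tallying airprox reports, can be sketched just as briefly. The P&L series and the flat VaR estimate below are stand-ins; only the counting rule matters:

```python
import numpy as np

def count_var_breaches(daily_pnl: np.ndarray, daily_var: np.ndarray) -> int:
    """Count days on which the realised loss exceeded that day's VaR.

    daily_pnl: realised profit-and-loss per day (losses are negative)
    daily_var: per-day VaR estimates, expressed as positive loss amounts
    """
    return int(np.sum(daily_pnl < -daily_var))

# Illustrative check: at 99% VaR over 250 trading days we expect about
# 2.5 breaches; materially more suggests the model understates the tail.
rng = np.random.default_rng(0)
pnl = rng.standard_normal(250) * 10_000   # stand-in daily P&L
var99 = np.full(250, 2.326 * 10_000)      # flat normal 99% VaR estimate
print(f"breaches: {count_var_breaches(pnl, var99)} (expected ~2.5)")
```

Clusters of near-breaches deserve the same scrutiny: a loss that stops just short of the VaR line is the lightning bolt that struck the fence.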
Integrating the behavioural and the quantitative closes the loop. Pilots file near-miss reports so engineers can redesign procedures; gamblers who understand virtual-reel mapping learn to walk away; portfolio managers who embed extreme drawdowns into their stress dashboards train themselves to act before a market downdraft forces their hand. The sophisticated Monte-Carlo engine is nothing more than an automated storyteller, spinning thousands of alternate market realities.
In practice, the crossover is straightforward: log every VaR breach the way the FAA logs airprox events, investigate the root cause, and feed the lessons back into the model. Most critically, cultivate a culture that sees tail-risk simulations not as accounting formalities but as near-misses that happened inside the silicon first. Whether the warning comes from radar blips or Monte-Carlo paths, the message is the same: we got lucky this time, let’s not rely on luck again.
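In code, that airprox-style log need be nothing more than a structured record per breach. The field names and the sample entry below are hypothetical, sketched only to show the feedback loop:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VarBreach:
    """One back-test exceedance, filed the way an airprox report would be."""
    day: date
    realised_loss: float            # actual P&L on the breach day
    var_estimate: float             # what the model said the tail loss was
    suspected_cause: str            # e.g. stale vols, crowded positions
    model_change: str = "none yet"  # the lesson fed back into the model

# Hypothetical entry: the point is that every breach gets a root cause
# and a model change, not just a line in a compliance spreadsheet.
breach_log = [
    VarBreach(date(2024, 8, 5), realised_loss=-3_100_000, var_estimate=1_900_000,
              suspected_cause="correlation assumptions too cosy",
              model_change="lengthened look-back window"),
]
print(breach_log[0].model_change)
```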
The Evolution of Near-Miss Design
Advancements in gaming technology have expanded the methods for creating near misses. Early video slots eliminated the need for physical reels, allowing developers to engineer near-miss effects horizontally across multiple virtual reels. By adding extra reels and manipulating symbol distribution, designers crafted "unbalanced" or "asymmetric" reels that heightened the illusion of near misses while complying with legal requirements.
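A sketch of the unbalanced-reel effect, with invented symbol counts standing in for real reel strips:

```python
import random

# Hypothetical unbalanced reels: the jackpot symbol is common on the first
# two reels and rare on the third, so "JACKPOT JACKPOT <miss>" is frequent
# while the full line-up stays rare.
REELS = [
    ["JACKPOT"] * 8 + ["other"] * 24,   # reel 1: jackpot on 8 of 32 stops
    ["JACKPOT"] * 8 + ["other"] * 24,   # reel 2: same weighting
    ["JACKPOT"] * 1 + ["other"] * 31,   # reel 3: jackpot on 1 of 32 stops
]

def spin() -> list:
    return [random.choice(reel) for reel in REELS]

trials = 200_000
results = [spin() for _ in range(trials)]
wins = sum(r == ["JACKPOT"] * 3 for r in results)
near = sum(r[0] == r[1] == "JACKPOT" and r[2] != "JACKPOT" for r in results)
print(f"full jackpot:       {wins / trials:.3%}")   # ~0.2%
print(f"two-reel near miss: {near / trials:.3%}")   # ~6.1%
```

Under these assumed weights, the tantalising two-reel line-up appears roughly thirty times more often than the win itself, even though every individual reel is independently random.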
Today, more sophisticated techniques, such as teaser strips weighted with high-paying symbols during the “spin” phase, further distort players’ perceptions. These visual cues suggest better odds than actually exist, deepening the psychological grip on players.
Enchantment by Design
Near-miss strategies exemplify how gambling machines leverage psychology and technology to foster engagement. Through mechanisms like virtual reel mapping, teaser strips, and asymmetric reels, designers create captivating illusions of control and luck. Supported by a complex regulatory and corporate infrastructure, these designs enchant players, encouraging persistence even in the face of losses. By reconfiguring losses as near wins, the gambling industry has cultivated a powerful and enduring psychological hook. While debates over the ethics and legality of such practices persist, one thing remains clear: near misses are not just a feature of slot machines—they are a cornerstone of their success.
In more serious arenas like aviation or public safety, this fallacy becomes dangerous. Near misses get rationalized or swept under the rug. Air traffic nearly collides, but since nothing happened, the system is deemed resilient. A lightning strike misses, and the risky behavior that preceded it becomes embedded as a safe choice. Over time, these misreadings compound, breeding complacency within individuals and institutions alike. The very events that should spark change instead become misconstrued as evidence that the status quo is working.
The real challenge lies in how we interpret these moments. The near-miss fallacy tells us that surviving a threat means the threat was manageable, or even skillfully avoided. But that’s not always the case. Sometimes we’re just lucky. And when luck is mistaken for competence, risk compounds quietly until the luck finally runs out.
To avoid this trap, we must start treating near misses not as shrugged-off non-events, but as crucial data points. They are the warning shots that don’t cost lives, the early drafts of disasters yet to be finalized. In aviation, that means listening to controllers and pilots who report near misses instead of dismissing them. In storms, it means changing behavior not because you were hurt, but because you could have been.
The comfort of surviving a close call is a powerful feeling, but it should never replace caution or analysis. If we can resist the pull of the near-miss fallacy, we’ll learn to see these moments not as reassurance, but as reminders: it was close, too close, and next time, it might not be.



