The terms we use in production technology are often borrowed or adapted, much like the tools themselves. There is probably an entire study to be done on the way that the entertainment technology industry filters, processes, and reflects back the intellectual and technical trends of wider culture. This is not that study. But it starts there.

The term Extended Reality predates our current use cases, much the way the term Virtual Production has recently been expanded to incorporate LED volumes. Extended Reality was first used in the context of philosophy, but at some point, most likely with the Milgram and Kishino paper in 1994, a more granular idea took root: experience as a range, running from unmediated reality at one end to a fully mediated virtual reality at the other. That model gave us the series of acronyms (AR, VR, XR, MR) that we now use to make sense of the ways digital experiences are infiltrating our lives.

The Milgram & Kishino paper, "A Taxonomy of Mixed Reality Visual Displays," introduced the Virtuality Continuum as a formal model. The continuum runs from consensus reality — the thing our brain constructs that we all agree to accept as real — through to fully virtual environments where all sensory input is mediated. Everything in between is some flavour of Extended Reality. The paper was foundational because it proposed that the important question was not binary (real or virtual) but positional: where on this continuum is a given experience, and what are the implications of that position?
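Milgram and Kishino's positional question can be caricatured in a few lines of code. This is an illustration only: the paper proposes no numeric axis, and the mediation scale below is my own invention for the sketch.

```python
# Toy rendering of the Virtuality Continuum as a single mediation axis:
# 0.0 is unmediated consensus reality, 1.0 is a fully virtual environment.
# The numeric axis is invented for illustration, not taken from the paper.

def classify(mediation: float) -> str:
    """Name the region of the continuum a given experience occupies."""
    if not 0.0 <= mediation <= 1.0:
        raise ValueError("mediation must be between 0 and 1")
    if mediation == 0.0:
        return "reality"
    if mediation == 1.0:
        return "virtual reality"
    # Everything strictly between the endpoints is some flavour of
    # Mixed Reality in Milgram & Kishino's framing.
    return "mixed reality"
```

The point of the model survives even this caricature: the interesting experiences are the ones that return "mixed reality," and the interesting questions are about where on the axis they sit.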

Before getting into what XR means in 2025, a brief and opinionated glossary. These are my interpretations of these terms and I do have an agenda.

Reality — The thing our brain constructs that we all accept as the real world. The things are the things. The colours are the colours. We feel the wind coming off the ocean and the heat from a nearby fire. There are things that itch. This is sometimes called "consensus reality" because the people who are really into Extended Reality spend a lot of time pondering how our brains construct the waking hallucination that we refer to as "the real world."

Virtual Reality (VR) — A very literal term: an environment that is virtually real. A VR system sits between a user and the real world and substitutes digital content for real content. A Virtual Reality experience is therefore explicitly fully immersive; it functions by replacing reality.

Augmented Reality (AR) — The aim is typically to enhance reality, often utilising portals or lenses that offer a digitally modified view of a real space. These add information or other stimuli that supplement what the user is getting through their own senses.

Extended Reality (XR) — This covers everything from the lightest touches of Augmented Reality through to the most fully comprehensive Virtual Reality, and everything in between.

Mixed Reality (MR) — Mixed Reality was originally one of several alternate terms for Augmented Reality. The Milgram & Kishino paper refers to MR as a subclass of VR-related technologies that involve the merging of real and virtual worlds. That framing has drifted as VR became predominantly associated with head-mounted displays and practitioners experimented with the different ways that light and audio can be used to add layers to immersive experiences. Mixed Reality today can be seen as a broad term that employs all available tools to seamlessly merge real and digital elements into a single immersive experience.

Worth noting throughout: these definitions include sound, light, and anything else we can control — including haptics — to more completely render and manage an experience. A definition of these terms that is limited to screens is already incomplete.


The Topology of a Continuum

One of the quirks of the Virtuality Continuum is that its two endpoints are not symmetrical. It is possible to have a purely real experience, with no mediation at all, like when your iPhone battery dies. It is not possible to have a purely virtual experience, because every virtual experience still requires a physical person to have it.

Virtual Reality, much to my surprise, is one of the least tortured terms in the lexicon. We are all pretty sure it involves strapping a box to your face and looking at things that may or may not include video inputs of the room you are in. You may or may not have legs.

This stands in contrast to Augmented Reality, which is now used to refer to everything from futuristic contact lenses to eyeglasses, headsets, smartphone applications, transparent displays, and digital front plates in live broadcast. But each of these fits squarely within the scope of Augmented Reality: they all fuse digital assets with reality, either directly or indirectly.

The Virtual Production Glossary has a stub covering Extended Reality as an open-ended, all-encompassing term, and I think that is precisely the correct way to frame it. But it is also the root cause of significant confusion. Somehow Extended Reality is both everything and this one very specific thing.


What the Words Actually Mean in the Wild

NVIDIA's Omniverse XR is described as "an immersive spatial computing reference application." It is worth noting that the term Spatial Computing does not show up in Google Ngram until 1990, just after the term Virtual Reality started to gain currency. Since that time, VR has arrived in waves in the consumer electronics market, while Spatial Computing has remained primarily an academic term.

Spatial Computing is largely synonymous with Extended Reality, and this is an important link to make in understanding the scope of what we are actually talking about. The Virtuality Continuum is not just a taxonomic exercise for academics — it is a map of the territory that display technology is being asked to animate.

The XR stage in virtual production is a useful fixed point. The XR stage is a room lined with LED panels where camera tracking data drives a real-time game engine render, so that the virtual scene behind the actors responds correctly to camera movement. The technology descends from the process shot: driving scenes in older films placed the actors in a stationary car in front of a projected background. The XR stage applies the same fundamental principle in real time. The LED volume is the projection surface; the game engine renders the background; the camera tracking system is the link between physical and virtual coordinate spaces.
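That link between physical and virtual coordinate spaces can be sketched in a few lines. Real tracking systems feeding a game engine work with full 4x4 transforms plus lens and timing data; this hypothetical sketch reduces the idea to a 2D position and a yaw angle purely to show the shape of the calculation.

```python
import math
from dataclasses import dataclass

# Minimal sketch, not a production pipeline: mapping a tracked physical
# camera pose into the virtual scene's coordinate space. All names and
# the 2D simplification are assumptions for illustration.

@dataclass
class Pose:
    x: float      # metres
    y: float      # metres
    yaw: float    # degrees

def physical_to_virtual(tracked: Pose, calibration: Pose) -> Pose:
    """Apply the stage-to-scene calibration offset to a tracked pose.

    `calibration` expresses where the stage origin sits inside the
    virtual scene. The returned pose drives the game-engine camera,
    so the rendered background shifts in step with the real camera.
    """
    rad = math.radians(calibration.yaw)
    # Rotate the tracked position into the scene's frame, then translate.
    vx = calibration.x + tracked.x * math.cos(rad) - tracked.y * math.sin(rad)
    vy = calibration.y + tracked.x * math.sin(rad) + tracked.y * math.cos(rad)
    return Pose(vx, vy, tracked.yaw + calibration.yaw)

# A camera 2 m forward of the stage origin, with the stage placed 10 m
# into the scene and rotated 90 degrees:
virtual = physical_to_virtual(Pose(2.0, 0.0, 0.0), Pose(10.0, 0.0, 90.0))
```

Everything hard about an XR stage lives in the parts this sketch omits: lens distortion, genlock, latency compensation, and keeping that calibration accurate as the rig moves.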

A Synthetic Space is the environment produced when you extend that same principle beyond the camera — into the room, the building, the city block. At some level this is an idealised space where information display, lighting, sound, reflection, and other environmental parameters are being controlled within a 3D coordinate system in real time, in response to how a space is being used and who is in that space. The XR stage is not the end state. It is an early instance of a broader category.
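What "environmental parameters controlled within a 3D coordinate system, in response to who is in that space" might look like can be reduced to a toy model. The fixture types and the linear falloff rule below are invented for the example; a real system would be event-driven and fixture-protocol-aware.

```python
import math
from dataclasses import dataclass, field

# Illustrative sketch only: a space whose fixtures respond to the
# positions of the people inside it. Names and the falloff rule are
# assumptions made for this example.

@dataclass
class Fixture:
    kind: str                          # e.g. "light", "speaker"
    position: tuple                    # (x, y, z) in metres
    intensity: float = field(default=0.0)   # 0.0 .. 1.0

def respond_to_presence(fixtures, occupants, radius=3.0):
    """Raise each fixture's intensity as occupants come near it.

    Intensity follows a linear falloff to the nearest occupant within
    `radius`; fixtures with nobody nearby fall back to zero.
    """
    for f in fixtures:
        nearest = min(
            (math.dist(f.position, o) for o in occupants),
            default=float("inf"),
        )
        f.intensity = max(0.0, 1.0 - nearest / radius)

# One occupant standing under the first of two ceiling lights:
room = [Fixture("light", (0, 0, 3)), Fixture("light", (6, 0, 3))]
respond_to_presence(room, occupants=[(0, 0, 1.5)])
```

The point of the sketch is the shape of the loop: sense, locate, and continuously re-render the environment itself, not just a screen within it.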


Why Extended Reality Is the Critical Lens

Here is why the terminology matters more than a definitional exercise would suggest, and why it matters specifically now.

The Virtuality Continuum is not a spectrum between two stable endpoints. It is a description of the territory that display technology is progressively annexing. Every advance in display technology — in resolution, in form factor, in viewing angle, in the ability to render light that is not contained within a rectangular frame — moves the practical boundary of what can be synthesised. Every advance in tracking and sensing technology makes that synthetic experience more seamless.

The question is not whether this territory will be annexed. It is who will have the tools to do the annexing, what those tools will cost, and who will have access to them.

The LED display industry is currently in the middle of two manufacturing transitions that will determine the answer to those questions for the next decade. COB (chip-on-board) is a process that places individual LED dies directly onto circuit boards at extraordinary density. COB displays have crossed from expensive and unreliable to cheap and acceptable over the past four or five years, and prices are heading toward parity with consumer LCD panels. COG (chip-on-glass) applies the same assembly process to TFT glass substrates at LCD-panel scale, in the same massive fabs that currently produce LCD panels.

These two approaches are in a race, and the outcome will reshape who makes LED displays, what configurations those displays come in, and what the smallest viable order quantity will be. The 25 to 30 COB module manufacturers operating today are racing to scale. If the major panel companies win that race with COG, a significant portion of the current field will not survive the transition.

This is what happens when infrastructure consolidates. The manufacturing diversity that makes creative flexibility possible contracts toward standardised solutions optimised for volume markets. The window during which that infrastructure is diverse, accessible, and available in non-standard configurations is open now.


The Newton Era

We do not yet have the iPhone of synthetic spaces. What we have is closer to the Newton — a set of devices that pointed clearly in the right direction, established vocabulary, trained a generation of practitioners, and were eventually superseded by the refined synthesis they made possible.

The Las Vegas Sphere, Frameless, Meow Wolf, and others do not yet demonstrate what it means for a space to know that you are in it, to respond to your presence, to merge the programmable and the physical in ways that change what a physical space can be.

That synthesis is coming. The hardware trajectory makes it directionally clear. The question is whether the practitioners developing these environments will work in public — where vocabulary gets established, where failure is documented and becomes infrastructure for the next attempt — or in private, where each attempt starts from scratch and the learning stays proprietary.

XR, understood properly — not as a production workflow but as the entire Virtuality Continuum, the full range from consensus reality to full virtual mediation — is the most critical lens available for understanding what display technology is being asked to do and what it will need to become. It describes the territory. The hardware is the infrastructure. The manufacturing window during which that infrastructure is diverse, accessible, and available in non-standard configurations is open.

Extended Reality is both everything and this one very specific thing. This publication is about both.


Editorial Note

Throughout XRTICLE: LED displays are LED displays. LCD displays are LCD displays. "Direct view LED" is a marketing term popularised after one consumer electronics company used "LED TV" to distinguish its LED-backlit LCD televisions from other LCD televisions in the consumer market. The industry does not benefit from the continued use of "direct view LED."


Further reading: Milgram & Kishino, "A Taxonomy of Mixed Reality Visual Displays" [bit.ly/Milgram_Kishino1994] · Laura Frank, Real-Time Video Content for Virtual Production, Mixed Reality and Live Entertainment, Routledge 2022 · Virtual Production Glossary [bit.ly/VPGlossary]

Matthew Ward co-founded Element Labs, the company behind VersaTILE, VersaTUBE, and the Stealth LED mesh system. He currently works with Fuse Technical Group and is Head of Product at Superlumenary. The Modular Display History series — a 14-part public archive of the LED industry from its origins — begins in the next section of this publication.