Apple Kicks Off the Spatial Computing Age — But What Exactly Is It?

The term “spatial computing” has been around — although without much buzz — for several years. However, it gained traction after Apple announced its Vision Pro last June.

A couple of years ago, if you had mentioned spatial computing in a room of ten people, chances are nearly everyone would have given you a puzzled, "what's that?" kind of look.

However, that's about to change with Apple's unveiling of its mixed reality headset, the Apple Vision Pro, which, according to Apple CEO Tim Cook, is a spatial computer.

While Apple's Vision Pro may, at a stretch, reflect spatial computing at work, the launch alone doesn't comprehensively explain the term: how the technology behind it works, what you and I can do with it, and what the future holds for the sector.

So, let’s get into all that.

How Does Spatial Computing Work?

Spatial computing, in its basic meaning, is the fusion of the digital and physical worlds in a way that allows us to interact with computers in an intuitive and immersive way.

It is an umbrella term that encompasses concepts like virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR).


Unlike conventional computing, which unifies data and logic in a two-dimensional context, spatial computing integrates data, logic, and three-dimensional contextual information, providing a more precise fusion of the physical and digital realms.

The technology blurs the line between virtual and physical reality, providing users with a seamless experience through devices such as the Apple Vision Pro, Microsoft HoloLens, Meta Quest Pro, and Magic Leap. These devices not only show the real world but also place virtual objects within it in a convincing 3D manner.

Image: How the Apple Vision Pro will look for the user. Credit: Apple

For instance, an intern at a manufacturing plant who still needs to consult a technical manual for certain steps could overlay that manual on a nearby wall and pick out the information they need as they work, instead of reaching for a physical copy.

Spatial computing operates through the integration of advanced sensors and cameras that capture comprehensive visual data from the user’s surroundings. This data undergoes processing via sophisticated computer vision algorithms and sensor fusion techniques, resulting in a 3D representation of the physical environment, a process known as spatial mapping.
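To make spatial mapping a little more concrete, the core geometric step — turning a depth-camera pixel into a 3D point in space — can be sketched in a few lines. This is a minimal illustration of the standard pinhole camera model, not code from any real headset SDK, and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are made-up values:

```python
# Minimal sketch: back-projecting one depth pixel into a 3D point,
# the building block of a spatial map. The camera intrinsics below
# are illustrative values, not those of any actual device.

def deproject(u, v, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Convert a pixel (u, v) with a depth reading (in meters) into
    camera-space 3D coordinates using the pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A 2.0 m depth reading at the image center maps to a point directly
# ahead of the sensor:
print(deproject(320, 240, 2.0))  # (0.0, 0.0, 2.0)
```

A headset repeats this for millions of pixels per second and fuses the resulting point clouds (along with motion-sensor data) into the 3D model of the room described above.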

Image: Apple Vision Pro viewed from the rear. Credit: Apple

The device then overlays virtual objects onto this spatial map, facilitating precise positioning and manipulation of digital content within the physical environment.
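The "precise positioning" part boils down to rigid transforms: once the device knows its own pose within the spatial map, any world-anchored point can be re-expressed in camera space each frame, so a virtual object appears to stay put as the user moves. A toy sketch, with made-up poses and no real SDK involved:

```python
# Minimal sketch: keeping a virtual object anchored in the physical world.
# p_cam = R @ p_world + t, where R (rotation) and t (translation) encode
# the device's tracked pose. All values here are illustrative.

def world_to_camera(point, rotation, translation):
    """Apply a rigid transform to a 3D point: p_cam = R @ p_world + t."""
    return tuple(
        sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # no rotation, for simplicity
anchor = (0.0, 0.0, 3.0)                       # virtual object 3 m ahead

# The user steps 1 m forward: the tracked translation changes, and the
# object is rendered 1 m closer, exactly as a physical object would be.
print(world_to_camera(anchor, identity, (0.0, 0.0, -1.0)))  # (0.0, 0.0, 2.0)
```

Real headsets do this with full 6-degree-of-freedom tracking at high frame rates, which is what makes digital content feel pinned to the physical environment.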

As can be seen in the latest mixed-reality headsets, users can interact with their environment and control spatial computing devices through eye-tracking technology, handheld controllers, motion sensors, and voice commands.

A Short History of Spatial Computing Devices

The concept dates back to 2003, when Simon Greenwold, an MIT scholar, published a thesis on spatial computing (PDF). At the time, the term referred mainly to the computational representation and analysis of real spaces.

In 2005, Google took us closer to a realistic feel for what spatial computing could look like with the mobile version of Google Maps, which put a digital model of the physical world in our palms, along with the ability to track a user's location within it.

The year 2006 saw Israeli startup PrimeSense (later acquired by Apple) introduce a depth-sensing device that could enable users to control video games through gestures. This technology was later incorporated into Microsoft’s Kinect, an Xbox 360 accessory, capitalizing on the popularity of motion-controlled games at the time.

All of the above laid the foundation for modern mixed-reality headsets, with tech giants like Meta, Microsoft, Apple, and Sony since launching their own versions of these devices.

Key Use Cases of Spatial Computing

Spatial computing promises varied use cases across the home, the workplace, healthcare, and manufacturing. We delve into some examples here:

Home Use Cases

At home, spatial computing can turn any room into a dynamic, interactive environment. Imagine waking up and seeing your day’s schedule projected onto your bedroom wall or cooking dinner with step-by-step instructions displayed on your kitchen counter. It can also enhance entertainment experiences at home by allowing you to play immersive video games or watch movies in a virtual theater setting.

Work Use Cases

Spatial computing can revolutionize collaboration and productivity in the workplace. Instead of mounting a big projector for demos and presentations in conference rooms, virtual screens can be created anywhere, allowing employees to work in a more flexible digital environment. Complex data can be visualized in 3D, making it easier to understand and analyze. Remote meetings can feel like in-person interactions, with participants represented as life-like avatars in a shared virtual space.


Healthcare Use Cases

In healthcare, spatial computing can be used to create simulations for medical training, with which students can practice medical procedures. Doctors can also leverage the technology to enhance patient care, allowing them to visualize and better understand their patients' medical conditions.


Manufacturing Use Cases

Spatial computing can empower engineers to create and manipulate 3D models in a virtual environment. This can lead to more efficient design processes, as changes can be made quickly and easily without the need for physical prototypes. It can also be used to create realistic training simulations for workers, reducing the cost of physical training setups and improving training outcomes.

What the Future Holds for Spatial Computing

The spatial computing industry was valued at $98 billion in 2023 and is expected to reach $280 billion by 2028, according to a market report. Key players include Microsoft, Meta, Apple, Sony, Qualcomm, Google, Samsung, and Magic Leap.

However, as 2024 gradually takes shape, the emphasis is shifting towards the consumer implications of spatial computing. The proliferation of this technology is anticipated to unlock new possibilities and present challenges for consumers and businesses alike.

While it’s expected to find applications in diverse sectors such as entertainment, education, training, manufacturing, and healthcare, Steven Athwal, Managing Director at Euro Communications Distribution, argues that “marginal improvements to comfort, computing power, and smart AI-enabled optimizations will gradually boost the usefulness of VR and AR for people, as well as making the technology cheaper and more accessible over time.”

Athwal further believes that the future of spatial computing technology is tied to affordability.

“Once the technology is more easily affordable, there is genuine potential for it to become integrated into our everyday lives, and I think that’s when we’ll see the real innovations take off,” he said.

The Bottom Line

While earlier attempts at spatial computing have paved the way, the focus is now on the consumer impact of this technology in 2024. With its Vision Pro, Apple is well-positioned to lead this market, heralding a new era of spatial computing.

Although Athwal considers Apple's Vision Pro the strongest of the current headsets due to its inclusion of "eye-tracking capability as the primary user interface," he still has reservations about how sustainable that approach is.

While the future of spatial computing looks promising, the technology has its limits. Despite claims we have heard that we will eventually swap all our computer use for spatial computing headsets, there may be natural friction in wearing a headset for much of the day.

But perhaps it will sit side by side with our conventional computers and mobile phones.



Franklin Okeke
Technology Journalist

Franklin Okeke is an author and tech journalist with over seven years of IT experience. Coming from a software development background, his writing spans cybersecurity, AI, cloud computing, IoT, and software development. In addition to pursuing a Master's degree in Cybersecurity & Human Factors from Bournemouth University, Franklin has two published books and four academic papers to his name. His writing has been featured in tech publications such as TechRepublic, The Register, Computing, TechInformed, Moonlock and other top technology publications. When he is not reading or writing, Franklin trains at a boxing gym and plays the piano.