
OVER Robotics: Empowering Machines to See the World
2025-12-19
At OVER, our mission has always been to make the physical world digitally accessible, creating a bridge between reality and the digital layer of information that defines our time.
Today, we’re taking that vision one step further.
We’re expanding the OVER ecosystem with a new framework that extends our technology beyond humans to machines.
Welcome to OVER Robotics.
From Mapping the World to Teaching Machines to Perceive It
Over the past two years, OVER has built the largest and richest 3D mapping dataset in existence:
174,000 unique locations mapped, 86.8 million images, and 781 terabytes of data, growing by more than 8,000 new locations every month.
This foundation unlocks an unprecedented opportunity: to give machines the ability to see, understand, and navigate the world.
OVER Robotics focuses on two fundamental pillars of robotics intelligence:
Machine Perception and Simulation.
Machine Perception: Making the World Machine-Readable
Humans and animals have evolved to turn 2D visual input into rich 3D mental representations of the world around them. That ability to perceive depth, dimensions, context, and motion is what allows us to thrive in the physical world.
Now imagine giving that same perceptual intelligence to robots.
By combining OVER’s ever-growing 3D map of the world with our Visual Positioning System (VPS) and Vision Foundation Models, we’re creating a universal layer of machine-readable reality that lets robots understand where they are, what’s around them, and how to reach their destination.
Our Machine Perception framework integrates natively with ROS 2, the open-source framework that powers most modern robotics systems, ensuring compatibility out of the box.
With it, any robot, from autonomous drones to delivery bots to industrial machines, can finally anchor itself to the real world.
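To make that concrete, here is a minimal sketch of how a robot’s ROS 2 stack might consume a globally anchored pose coming from a VPS node. The topic name /over_vps/pose and the geometry_msgs/PoseStamped message type are illustrative assumptions, not the documented over_vps interface; check the repository for the actual topics and message definitions.

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped


class VpsPoseListener(Node):
    """Example consumer of a VPS-provided global pose (illustrative topic name)."""

    def __init__(self):
        super().__init__('vps_pose_listener')
        # Assumed topic: a PoseStamped expressing the robot's pose in a global map frame.
        self.create_subscription(PoseStamped, '/over_vps/pose', self.on_pose, 10)

    def on_pose(self, msg: PoseStamped):
        p = msg.pose.position
        # A navigation stack would fuse this global anchor with local odometry here.
        self.get_logger().info(f'Global pose in "{msg.header.frame_id}": '
                               f'x={p.x:.2f} y={p.y:.2f} z={p.z:.2f}')


def main():
    rclpy.init()
    node = VpsPoseListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()

In a typical setup, this global fix would be fused with wheel or visual odometry so the robot stays anchored to the map even between VPS updates.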
Robotic Simulation: Closing the Sim-to-Real Gap
Video courtesy of GaussGym. Source: https://gauss-gym.com/
The new wave of robotics isn’t about metal and motors; it’s about intelligence.
Mechanical innovation made robots move.
Now, Physical AI is making them think.
The best way to teach robots how to operate in the real world is through simulation: a safe, scalable environment where millions of training hours can be compressed into a single day. But simulated worlds have long suffered from a critical flaw: they’re not real enough.
A robot that learns perfectly in simulation often fails when deployed in reality, a problem known as the Sim-to-Real Gap.
OVER Robotics is closing that gap.
By plugging our 174,000 Gaussian Splats, generated with more than 80,000 hours of GPU computation, into open-source simulation environments like GaussGym (https://gauss-gym.com/) and GWM, we’re turning synthetic worlds into photorealistic, data-rich environments that mirror the complexity of the real world.
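As a rough illustration of what these simulators ingest, the sketch below opens a single Gaussian Splat asset stored in the widely used 3D Gaussian Splatting PLY layout and reports how many Gaussians it contains and the bounds of the scene. The file name scene.ply and the per-splat attribute names are assumptions based on that common export format, not a description of OVER’s internal pipeline or of GaussGym’s loader.

import numpy as np
from plyfile import PlyData  # third-party reader for PLY files

# Assumed input: a Gaussian Splat scene exported in the common 3DGS PLY layout,
# where each "vertex" record is one Gaussian primitive.
ply = PlyData.read('scene.ply')
splats = ply['vertex']

positions = np.stack([splats['x'], splats['y'], splats['z']], axis=-1)
print(f'{positions.shape[0]} Gaussians in the scene')
print(f'scene bounds: {positions.min(axis=0)} to {positions.max(axis=0)}')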
And we’re going further.
Our dataset will fuel the next generation of World Models: AI systems capable of generating infinite, geometrically consistent worlds that extend well beyond what has been captured, finally bridging simulation and reality. The chart below shows the scale of OVER’s 3D Maps dataset compared to the datasets used to train existing AI models.

For an even more detailed comparison, please check our Dataset Technical Report.
The Next Chapter
This is just the beginning.
In the coming weeks, we’ll share our detailed roadmap for OVER Robotics, including release timelines and integrations. But one thing is already here:
The first iteration of our Machine Perception system, integrated with ROS 2, drops today: https://github.com/OVR-Platform/over_vps
At OVER, we believe the next intelligent revolution won’t happen on screens; it will happen in the real world.
And it will be powered by machines that can truly see.
OVER Robotics. Empowering Machines to See the World.
