Spatial Capture
Three approaches to capturing the real world in 3D. Neural radiance fields and Gaussian splats for photorealistic volumetric scenes. Photogrammetry for production-ready textured meshes. Point clouds for raw spatial data at scale. Each method has a different output, a different strength, and a different place in the pipeline.
Three Methods, Three Outputs
NeRF / Gaussian Splat
Volumetric Scene
A neural or point-based representation of a scene learned from photos or video. The model encodes how light behaves in the space — reflections, translucency, specular highlights — and renders photorealistic novel viewpoints in real time.
Immersive walkthroughs, preserving lighting conditions, web-delivered 3D, virtual tours, film previs, archival of spaces as they actually look and feel.
Navigable volumetric scene (WebGL, splat file)
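The core rendering idea behind both NeRFs and Gaussian splats is alpha compositing of color and density samples along each camera ray. A minimal sketch of that compositing step (illustrative only — not any particular implementation; the function name and sample values are made up):

```python
import math

def composite_ray(densities, colors, step):
    """Alpha-composite color samples along one camera ray, front to back.

    densities: per-sample volume density (sigma)
    colors:    per-sample (r, g, b) emitted color
    step:      distance between samples along the ray
    """
    out = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light not yet absorbed
    for sigma, rgb in zip(densities, colors):
        alpha = 1.0 - math.exp(-sigma * step)  # opacity of this sample
        weight = transmittance * alpha
        for i in range(3):
            out[i] += weight * rgb[i]
        transmittance *= 1.0 - alpha
    return out

# A ray crossing empty space, then a dense red region:
pixel = composite_ray(
    densities=[0.0, 0.0, 5.0, 5.0],
    colors=[(0, 0, 0), (0, 0, 0), (1, 0, 0), (1, 0, 0)],
    step=0.5,
)
# Empty samples contribute nothing; the dense region dominates the pixel.
```

This is why these representations capture translucency and view-dependent lighting so well: every pixel is a weighted blend of everything along its ray, not a single surface hit.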
Photogrammetry
Textured Mesh
Reconstructs actual polygon geometry with UV-mapped photo textures from overlapping images. Produces a real 3D mesh — vertices, faces, normals — that can be edited, rigged, animated, 3D printed, or dropped into any standard 3D pipeline.
3D printing, game assets, VFX integration, product visualization, object scanning, anything that needs a manipulable mesh with real-world texture.
Textured 3D mesh (OBJ, FBX, GLB, USDZ)
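"Vertices, faces, UV-mapped textures" maps directly onto what a mesh file stores. A minimal sketch writing a single textured triangle in Wavefront OBJ format, one of the interchange formats listed above (the file name and helper are illustrative):

```python
# A mesh is vertex positions, texture (UV) coordinates, and faces that
# index into both. OBJ indices are 1-based.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
uvs      = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
faces    = [(1, 2, 3)]

def write_obj(path, vertices, uvs, faces):
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")       # geometry
        for u, v in uvs:
            f.write(f"vt {u} {v}\n")          # texture coordinates
        for a, b, c in faces:
            f.write(f"f {a}/{a} {b}/{b} {c}/{c}\n")  # vertex/uv pairs

write_obj("triangle.obj", vertices, uvs, faces)
```

Because the geometry is explicit like this, any standard tool — Blender, a game engine, a slicer — can edit, rig, or print it, which is exactly what volumetric representations can't offer.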
Point Cloud
Raw Spatial Data
Millions of individual 3D coordinates captured by LiDAR or depth sensors, each carrying color and position data. No surfaces, no mesh — just raw points in space. The most direct representation of scanned geometry before any processing or interpretation.
Surveying, architecture, construction documentation, large-scale environments, scientific measurement, forensic archival, and as raw input for mesh reconstruction.
XYZ coordinate data with color (PLY, LAS, E57)
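The "raw points in space" idea is easy to see in the data itself: each record is just a position plus a color. A minimal sketch writing a three-point cloud as ASCII PLY, one of the formats listed above (file name and point values are illustrative):

```python
# Each point: x, y, z position plus 8-bit RGB color. No faces, no surfaces.
points = [
    (0.0, 0.0, 0.0, 255, 0, 0),
    (1.0, 0.0, 0.0, 0, 255, 0),
    (0.0, 1.0, 2.5, 0, 0, 255),
]

def write_ply(path, points):
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

write_ply("scan.ply", points)
```

Real scans hold millions of such records — the format scales, but everything beyond position and color (surfaces, semantics, topology) has to come from later processing.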
How They Relate
These aren't competing technologies — they're complementary tools for different problems. Photogrammetry gives you a mesh you can manipulate in Blender, rig, animate, or send to a 3D printer. NeRFs and Gaussian splats give you a scene you can walk through with photorealistic lighting that a mesh can't replicate. Point clouds give you raw spatial truth — the measured coordinates of a space before any interpretation is applied.
In practice, they often feed into each other. A point cloud can be the starting data for a photogrammetric mesh. A photogrammetric scan can inform a NeRF training set. The same walk-around video footage can produce all three outputs depending on how it's processed. Understanding the strengths and limitations of each method is what determines which tool fits the job.
Quick Comparison

Method                  Output              Formats               Best for
NeRF / Gaussian Splat   Volumetric scene    WebGL, splat file     Photorealistic walkthroughs
Photogrammetry          Textured mesh       OBJ, FBX, GLB, USDZ   Editable, printable assets
Point Cloud             Raw spatial data    PLY, LAS, E57         Measurement at scale
NeRF / Gaussian Splat Captures
Each capture below is a fully navigable volumetric scene. Click and drag to orbit. Scroll to zoom. These are live 3D models running in your browser, not pre-rendered video.