
Next-Gen Wearables

Spatial Computing & AI

Designing, developing, and testing spatial experiences across the most advanced wearable platforms - from AR-powered Snap Spectacles to AI-driven smart glasses. Each device unlocks a unique interaction paradigm, and every project pushes the frontier of what's possible when computing moves from screens to the world around you.

4+ Platforms
AI Voice Assistant
6-DOF Spatial Tracking
Real-Time Translation
Role: Creative Technologist
Category: Spatial UX / AI
Platforms: Spectacles, ER G1, Mentra Live
Focus: HCI & Mixed Reality
AI System: OTTO (Custom)
Stack: TypeScript, OpenRouter
Memory: Supabase Persistent
Network: SPDR Agent System

Over a Decade in Wearables

Always Had a Gadget On

This didn't start with smart glasses. It started with a Casio watch at six years old - the first gadget I ever wore, and I haven't stopped since. From that watch to a Pebble smartwatch in the early 2010s, to early “VR” viewers, to getting my hands on an Oculus DK1 - I've always been the first person in the room testing new wearable tech. Over a decade of strapping on whatever's next and figuring out what it can do.

Before glasses had cameras, I was engineering ways to record POV video - rigging action cameras to headbands and helmets, getting creative with mounting systems and angles. Always experimenting with new rigs, new cameras, new hardware. Not because it was trendy, but because I could see where it was going.

Early Adopter by Nature

I see it coming before most people. That's not a brag - it's a compulsion. When new tech drops, I'm not reading about it, I'm ordering it. I'm not watching demos, I'm building with it. From the first consumer VR headsets to the current generation of AR glasses, the pattern is the same: get the hardware, break it open, figure out what nobody else has tried yet.

The Timeline

  • First Casio watch - 6 years old
  • DIY POV camera rigs - before glasses had cameras
  • Pebble smartwatch - early 2010s
  • Early VR viewers & Oculus DK1
  • Snap Spectacles, Even Realities G1, Mentra Live - now

Snap Spectacles

Spatial AR Development

Spatial Creativity Unlocked

Snap's Spectacles represent a leap forward in wearable AR - full 6-DOF spatial tracking, hand tracking, and world-locked content that persists in physical space. Working with the Lens Studio SDK, each experiment explores a different creative axis: generative AI overlays, real-time 3D rendering, and interactive spatial interfaces that respond to gesture and gaze.

Prototyping in AR

Each session is a rapid prototype - testing concepts that bridge digital and physical in seconds. From spawning interactive 3D objects anchored to real-world surfaces to building generative image pipelines that render directly into spatial view, the Spectacles become a design tool as much as a display device.
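For a sense of scale, a whole prototype can be a single Lens Studio component. Here's a minimal TypeScript sketch that spawns a prefab on tap - the prefab input and tap trigger are illustrative; the real sessions anchor spawns to tracked surfaces and hand positions rather than a flat tap:

```typescript
// Lens Studio (TypeScript component) - spawn a prefab instance on tap.
// Simplified stand-in: real prototypes parent spawns to tracked surfaces
// and hand positions for world-locked placement.
@component
export class TapSpawner extends BaseScriptComponent {
  @input
  spawnPrefab!: ObjectPrefab; // assigned in the Inspector

  onAwake() {
    this.createEvent("TapEvent").bind(() => {
      // Instantiate under the scene root; a world-locked version would
      // parent this to a tracked surface anchor instead of null.
      const obj = this.spawnPrefab.instantiate(null);
      obj.name = "SpawnedObject";
    });
  }
}
```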

00

First Look - Unboxing the Spectacles

Opening and trying Snap Spectacles for the first time - initial reactions to spatial AR, hand tracking, and world-locked content straight out of the box.

01

AR Development Lab

Testing generative AI, OpenAI Realtime integration, and Snap3D spatial rendering through Lens Studio on the Spectacles dev kit. Multiple AR overlays and interaction paradigms running simultaneously in spatial view.

02

Digital Nature

World-locked AR flora growing from the palm - a hand-tracked interaction study exploring how digital organic elements can blend seamlessly with the physical environment through spatial anchoring.

03

Citi Field - Live AR

Location-aware AR experience at Citi Field - spatial overlays anchored to stadium architecture in real-time, demonstrating large-scale environment tracking and world-locked content persistence during a live event.

Even Realities G1

AI-Powered Smart Glasses in the Field

Even Realities G1 in Japan
Field Report

Japan - Live Translation

Wearing the Even Realities G1 through Japan - real-time translation displayed directly in the lens while navigating restaurants, train stations, and local shops.

Breaking the Language Barrier

Traveling through Japan with the Even Realities G1 completely changed the way I experienced a country where I don't speak the language. The glasses provided live translation directly in my field of view - menus, signs, conversations - all rendered as a subtle heads-up display without ever pulling out a phone.

What made it transformative wasn't just convenience - it was immersion. Instead of constantly stopping to type into a translation app, I could stay present. I ordered meals in local restaurants by reading the translated menu through my lens. I navigated subway systems by glancing at kanji signs that instantly resolved into English. I had spontaneous conversations with locals where the glasses bridged the gap in real-time.

The Invisible Interface

The G1's lightweight form factor - nearly indistinguishable from regular glasses - meant I could wear them everywhere without drawing attention. In izakayas, at train platforms, walking through Shibuya at night. The technology disappeared into the experience, which is the holy grail of wearable design.

This field test validated a core thesis of my wearable research: the best spatial computing experiences aren't about spectacle - they're about removing friction. The G1 didn't add a layer of technology between me and Japan. It removed one.

OTTO - AI for Smart Glasses

Mentra Live / SPDR Agent Network

Currently Building

O.T.T.O.

Operational Technician & Tactical Observer - a custom AI assistant designed from the ground up for the Mentra Live smart glasses platform. OTTO is a voice-first, vision-capable agent that lives in your peripheral vision, always ready to observe, remember, and assist.

Part of the larger SPDR agent network, OTTO represents a new paradigm in wearable AI - an always-present assistant that understands spatial context, maintains persistent memory across sessions, and can delegate tasks across a distributed network of specialized agents.

Multi-Model Intelligence

OTTO routes through OpenRouter to access Gemini 2.0 Flash, GPT-4o, Claude Sonnet, and Llama 3.3 - switching models on the fly via voice command. Each model brings different strengths for different tasks: vision analysis, creative generation, code interpretation, and fast inference.
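A minimal sketch of what that routing can look like - OpenRouter exposes an OpenAI-compatible chat endpoint, so switching models is just swapping a string. The model IDs and alias mapping here are illustrative, not OTTO's actual configuration:

```typescript
// Sketch: one chat turn through OpenRouter with a runtime-selected model.
const MODELS: Record<string, string> = {
  fast: "google/gemini-2.0-flash-001",
  vision: "openai/gpt-4o",
  writing: "anthropic/claude-3.5-sonnet",
  cheap: "meta-llama/llama-3.3-70b-instruct",
};

let activeModel = MODELS.fast;

// e.g. triggered by the voice command "switch to vision model"
function switchModel(alias: string): void {
  if (MODELS[alias]) activeModel = MODELS[alias];
}

async function chat(userText: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: activeModel,
      messages: [{ role: "user", content: userText }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```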

Persistent Memory

Backed by Supabase, OTTO remembers facts, visual observations, work logs, and meeting transcripts across sessions. Ask “where did I leave my phone?” and it recalls from past visual observations. Tell it something once and it remembers forever.
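A sketch of that memory layer using supabase-js - the table and column names are assumptions for illustration, and production recall would likely use embeddings with pgvector rather than keyword matching:

```typescript
import { createClient } from "@supabase/supabase-js";

// Sketch: persistent memory on Supabase. "memories" and its columns are
// assumed names, not OTTO's actual schema.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_KEY!,
);

async function remember(kind: string, content: string): Promise<void> {
  await supabase
    .from("memories")
    .insert({ kind, content, created_at: new Date().toISOString() });
}

async function recall(query: string): Promise<string[]> {
  // Naive keyword recall for illustration only.
  const { data } = await supabase
    .from("memories")
    .select("content")
    .ilike("content", `%${query}%`)
    .order("created_at", { ascending: false })
    .limit(5);
  return (data ?? []).map((row) => row.content);
}
```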

Spatial Awareness

Camera integration enables real-time scene understanding - OCR for reading signs and menus, live translation across 20+ languages, and background visual memory that passively captures and catalogs the world around you.
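One way to sketch the point-and-translate path, assuming frames arrive as base64 JPEG and reusing the same OpenAI-compatible OpenRouter endpoint with a vision-capable model:

```typescript
// Sketch: one-shot OCR + translation of a camera frame. The frame source
// and prompt wording are placeholders.
async function translateFrame(
  jpegBase64: string,
  targetLang = "English",
): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-4o",
      messages: [{
        role: "user",
        content: [
          {
            type: "text",
            text: `Read all text in this image and translate it to ${targetLang}. Return only the translation.`,
          },
          {
            type: "image_url",
            image_url: { url: `data:image/jpeg;base64,${jpegBase64}` },
          },
        ],
      }],
    }),
  });
  return (await res.json()).choices[0].message.content;
}
```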

Core Capabilities

Voice-First Interaction

Wake word activation, natural conversation with 10-turn context memory, and concise 1-2 sentence responses optimized for spoken delivery through the glasses speaker.
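The 10-turn window and the spoken-delivery constraint reduce to a small amount of state - a sketch, with the system prompt wording assumed:

```typescript
// Sketch: rolling 10-turn conversation buffer plus a system prompt that
// keeps replies short enough for text-to-speech on the glasses speaker.
type Turn = { role: "user" | "assistant"; content: string };

const MAX_TURNS = 10;
const history: Turn[] = [];

const SYSTEM_PROMPT =
  "You are OTTO, a voice assistant on smart glasses. " +
  "Answer in one or two short sentences suitable for spoken delivery.";

function pushTurn(turn: Turn): void {
  history.push(turn);
  // Keep only the most recent 10 exchanges (20 messages).
  while (history.length > MAX_TURNS * 2) history.shift();
}

function buildMessages(userText: string) {
  pushTurn({ role: "user", content: userText });
  return [{ role: "system", content: SYSTEM_PROMPT }, ...history];
}
```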

Work Mode

Tracks active tasks with periodic frame capture, analyzes activity, detects off-task drift, and generates productivity summaries with visual documentation.
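A sketch of that loop, with captureFrame and analyzeFrame as hypothetical stand-ins for the glasses camera API and a vision-model call:

```typescript
// Sketch: Work Mode's capture-analyze loop. Callback names and the
// cadence are illustrative, not the actual implementation.
let workTimer: ReturnType<typeof setInterval> | null = null;

function startWorkMode(
  task: string,
  captureFrame: () => Promise<string>, // base64 JPEG from the camera
  analyzeFrame: (
    img: string,
    task: string,
  ) => Promise<{ onTask: boolean; note: string }>,
  speak: (msg: string) => void,
  intervalMs = 5 * 60 * 1000,
): void {
  workTimer = setInterval(async () => {
    const frame = await captureFrame();
    const { onTask, note } = await analyzeFrame(frame, task);
    if (!onTask) speak(`You seem off-task from "${task}". ${note}`);
  }, intervalMs);
}

function stopWorkMode(): void {
  if (workTimer !== null) clearInterval(workTimer);
}
```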

Meeting Mode

Full conversation transcription with timestamps, real-time summary generation, and automatic extraction of key topics, decisions, and action items.
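Extraction can be a single structured-output call over the transcript - a sketch, with the JSON shape assumed:

```typescript
// Sketch: turning a timestamped transcript into topics, decisions, and
// action items. A production version would validate the parsed JSON.
interface MeetingSummary {
  topics: string[];
  decisions: string[];
  actionItems: { owner: string; task: string }[];
}

async function summarizeMeeting(
  transcript: string,
  llm: (prompt: string) => Promise<string>, // any chat-completion wrapper
): Promise<MeetingSummary> {
  const prompt =
    "From this meeting transcript, extract JSON with keys " +
    '"topics", "decisions", and "actionItems" (each item: {owner, task}).\n\n' +
    transcript;
  return JSON.parse(await llm(prompt)) as MeetingSummary;
}
```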

Live Translation

Point-and-read OCR with instant translation across 20+ languages - from Japanese menus to Korean street signs to French documents.

Visual Memory

Background photo capture every 2 minutes builds a passive visual log. GPS-tagged observations enable spatial recall - “what was that store I passed earlier?”
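A sketch of the data shape that makes that recall query possible - the storage and distance math are simplified for illustration:

```typescript
// Sketch: the passive visual log behind spatial recall. Each capture is
// GPS-tagged and captioned by a vision model; recall filters on time and
// rough proximity.
interface Observation {
  takenAt: number; // epoch ms
  lat: number;
  lon: number;
  caption: string; // vision-model description of the frame
}

const visualLog: Observation[] = [];

function recallNearby(
  lat: number,
  lon: number,
  withinMinutes: number,
): Observation[] {
  const cutoff = Date.now() - withinMinutes * 60 * 1000;
  return visualLog.filter(
    (o) =>
      o.takenAt >= cutoff &&
      // ~200m in degrees; crude, ignores latitude scaling
      Math.hypot(o.lat - lat, o.lon - lon) < 0.002,
  );
}
```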

Agent Delegation

Connected to the SPDR agent network for task handoff. Delegate research, scheduling, or code tasks to specialized agents while you stay focused on the physical world.

System Architecture

TypeScript Core

Built on Express.js v5 with the Mentra SDK, OTTO runs as a session-aware server that manages connections from multiple pairs of glasses simultaneously. The modular architecture separates AI reasoning, memory management, and voice command handling into discrete, testable modules.
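In outline, the server looks something like this - the Mentra SDK's real entry points differ, and keying sessions off a device-ID header is an assumption for illustration:

```typescript
import express from "express";

// Sketch: session-aware server shape. Not the Mentra SDK's actual API.
interface GlassesSession {
  deviceId: string;
  model: string; // active AI model for this session
  lastSeen: number;
}

const app = express();
app.use(express.json());

const sessions = new Map<string, GlassesSession>();

app.post("/event", (req, res) => {
  const deviceId = String(req.header("x-device-id") ?? "unknown");
  const session =
    sessions.get(deviceId) ?? { deviceId, model: "fast", lastSeen: 0 };
  session.lastSeen = Date.now();
  sessions.set(deviceId, session);
  // Route the event (transcription, button press, frame) to the right module.
  res.json({ ok: true, activeSessions: sessions.size });
});

app.listen(3000, () => console.log("OTTO server listening on :3000"));
```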

Real-Time Dashboard

A web-based monitoring dashboard provides live visibility into OTTO's state - active sessions, current AI model, recent transcriptions, battery levels, work frame captures, and meeting status. Model switching and session inspection happen in real-time from any browser.
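A sketch of the state broadcast, since the stack includes WebSockets - the payload fields mirror the prose above but the names are assumptions:

```typescript
import WebSocket, { WebSocketServer } from "ws";

// Sketch: pushing live dashboard state to every connected browser.
const wss = new WebSocketServer({ port: 8080 });

interface DashboardState {
  activeSessions: number;
  activeModel: string;
  batteryPct: number;
  lastTranscription: string;
}

function broadcastState(state: DashboardState): void {
  const payload = JSON.stringify({ type: "state", ...state });
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(payload);
  }
}
```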

Hardware Integration

Deep integration with the Mentra Live hardware - LED feedback in multiple colors for status indication, spatial audio SFX for interaction cues, camera access for vision tasks, GPS for location-aware features, and VAD (voice activity detection) for natural conversation flow.

SPDR Network

OTTO doesn't operate alone. As a node in the SPDR agent network, it can delegate tasks to specialized agents - research, code generation, scheduling - and track their completion. The network enables a distributed intelligence model where the glasses become a portal to a larger system of AI capabilities.
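A minimal sketch of what that delegation contract could look like - the agent names, endpoint, and task shape are placeholders, not the network's actual protocol:

```typescript
// Sketch: delegate a task to a SPDR agent and poll for completion.
interface DelegatedTask {
  id: string;
  agent: "research" | "code" | "scheduling";
  instruction: string;
  status: "pending" | "running" | "done" | "failed";
  result?: string;
}

async function delegate(
  agent: DelegatedTask["agent"],
  instruction: string,
): Promise<DelegatedTask> {
  const res = await fetch("https://spdr.example/tasks", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ agent, instruction }),
  });
  return (await res.json()) as DelegatedTask;
}

async function pollTask(id: string): Promise<DelegatedTask> {
  const res = await fetch(`https://spdr.example/tasks/${id}`);
  return (await res.json()) as DelegatedTask;
}
```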

The Vision

Beyond the Screen

Every platform tested here - Spectacles, G1, Mentra Live - points to the same future: computing that lives in the world, not on a rectangle in your pocket. The spatial computing paradigm shift isn't about replacing phones - it's about making technology invisible so you can be more present in the physical world while having AI augment your capabilities.

Always Learning

Each device, each field test, each prototype feeds into a growing body of knowledge about how humans interact with spatial computing. From the aesthetic possibilities of Spectacles AR to the practical utility of G1 translation to the deep AI integration of OTTO - every experiment pushes the boundary of what wearable technology can be.

Tech Stack

Snap Spectacles · Lens Studio · Even Realities G1 · Mentra Live · @mentra/sdk · TypeScript · Express.js · OpenRouter · Supabase · GPT-4o · Claude · Gemini · WebSockets · Voice AI · Computer Vision · OCR · GPS Tracking
