FUNDAMENTAL DATUM

24/7 live-streamed ambient installation with signed audio, oscilloscope visualizer, and live AI commentary — all at the edge.

Cloudflare Workers · R2 · Stream · Durable Objects · Vectorize · D1 · Vanilla JS


Overview

Fundamental Datum treats a simple camera feed as raw material for an ambient instrument. Instead of a traditional UI, the site offers a continuous composition: live video, a generative soundtrack, a 4×4 oscilloscope that “plays” the sound, and AI commentary that reacts in real time. The experience is intentionally minimal—press play, watch, and listen—while the system behind it runs a precise, edge-native pipeline.

Everything runs at the edge on Cloudflare: Workers route requests and serve assets, Stream distributes the live video, audio is privately delivered from R2, a Durable Object coordinates two LLM personas for commentary over WebSockets, and Vectorize + D1 capture and explore “strange events” detected in the stream.
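
To make that pipeline concrete, here is a minimal sketch of the routing Worker, assuming a module Worker with an ASSETS binding. The route paths, the helper functions handleNextTrack and handleAudio (sketched further below), and the manifest fallback payload are illustrative assumptions, not the production code.

    // Entry Worker: route API calls, normalize directory URLs, then fall back to static assets.
    // handleNextTrack and handleAudio are defined in the later sketches.
    export default {
      async fetch(request, env, ctx) {
        const url = new URL(request.url);

        // API routes handled in-Worker (paths are illustrative).
        if (url.pathname === "/api/next-track") return handleNextTrack(request, env);
        if (url.pathname.startsWith("/api/audio/")) return handleAudio(request, env);

        // Directory normalization: /about -> /about/ so relative asset paths resolve.
        if (!url.pathname.includes(".") && !url.pathname.endsWith("/")) {
          url.pathname += "/";
          return Response.redirect(url.toString(), 301);
        }

        // Static delivery via the ASSETS binding.
        const asset = await env.ASSETS.fetch(request);
        if (asset.status !== 404) return asset;

        // Manifest fallback keeps install metadata consistent if the file is missing.
        if (url.pathname.endsWith("manifest.json")) {
          return Response.json({ name: "Fundamental Datum", display: "standalone" }); // assumed shape
        }
        return asset;
      },
    };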

Stack & Architecture

  • Workers + ASSETS: A single Worker handles routing, directory normalization, and static delivery via env.ASSETS (sketched above). It also serves a manifest fallback when needed to keep install metadata consistent.
  • Live video via Stream: The camera is published to Cloudflare Stream (HLS/DASH), giving resilient delivery and easy device compatibility.
  • Private audio from R2: Audio objects live in R2 and are served with per-request signed URLs in production. In local dev, the Worker transparently falls back to /api/audio/stream, so the UI doesn’t change across environments.
  • Real‑time commentary:
    • Durable Objects: ConversationDO runs the stateful dialog between two AI personas and fans out messages to everyone via WebSockets (see the fan-out sketch after this list). A global instance (via idFromName("global-conversation")) ensures all users hear the same moment-to-moment “show.”
    • AnomalyDO runs anomaly processing and persistence in a serialized, consistent context.
  • Embeddings + storage:
    • Vectorize stores embeddings produced from scene snapshots/descriptions.
    • D1 persists structured “strange events,” powering a simple BI view and similarity lookups.
  • Media proxy & caching: Worker endpoints stream private media with Range (206), ETag/Last-Modified, and Cache-Control: immutable. Filenames are allowlisted to block traversal.
  • Frontend, minimal by design: Vanilla JS powers the oscilloscope, carousel, and commentary client. Scripts load with defer, and the UI keeps a small footprint so the “instrument” (video+audio+voice) stays central.
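
A sketch of the commentary fan-out, using the standard Durable Objects WebSocket pattern. The class is heavily simplified: the persona dialog loop is elided, the message shape is hypothetical, and only the idFromName("global-conversation") routing mirrors the description above.

    // Simplified ConversationDO: one global instance fans persona messages out
    // to every connected visitor over WebSockets.
    export class ConversationDO {
      constructor(state, env) {
        this.state = state;
        this.env = env;
        this.sockets = new Set();
      }

      async fetch(request) {
        // Upgrade each visitor's request to a WebSocket and keep the server end.
        const { 0: client, 1: server } = new WebSocketPair();
        server.accept();
        this.sockets.add(server);
        server.addEventListener("close", () => this.sockets.delete(server));
        return new Response(null, { status: 101, webSocket: client });
      }

      // Called by the (elided) dialog loop whenever a persona produces a line.
      broadcast(persona, text) {
        const frame = JSON.stringify({ persona, text, at: Date.now() }); // shape is illustrative
        for (const ws of this.sockets) ws.send(frame);
      }
    }

    // Routing side: idFromName pins every visitor to the same "show".
    // const id = env.CONVERSATION.idFromName("global-conversation");
    // return env.CONVERSATION.get(id).fetch(request);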
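
And a sketch of the media proxy, assuming an R2 binding named AUDIO. The allowlisted filenames are placeholders, the range parser handles only the simple bytes=start-end form, and the header behavior follows the description above.

    // Allowlisted filenames (placeholders) block path traversal outright.
    const ALLOWED = new Set(["dawn.mp3", "dusk.mp3", "night.mp3"]);

    export async function handleAudio(request, env) {
      const name = new URL(request.url).pathname.split("/").pop();
      if (!ALLOWED.has(name)) return new Response("Not found", { status: 404 });

      // Single-range support: ask R2 for just the requested byte window.
      const range = parseRange(request.headers.get("Range"));
      const object = await env.AUDIO.get(name, range ? { range } : undefined);
      if (!object) return new Response("Not found", { status: 404 });

      // Conditional requests short-circuit on the R2 ETag.
      if (request.headers.get("If-None-Match") === object.httpEtag) {
        return new Response(null, { status: 304 });
      }

      const headers = new Headers({
        "ETag": object.httpEtag,
        "Accept-Ranges": "bytes",
        "Cache-Control": "public, max-age=31536000, immutable",
      });
      if (range) {
        const end = range.offset + range.length - 1;
        headers.set("Content-Range", `bytes ${range.offset}-${end}/${object.size}`);
        return new Response(object.body, { status: 206, headers });
      }
      return new Response(object.body, { status: 200, headers });
    }

    // Minimal parser: "bytes=start-end" only; a production version handles open-ended ranges.
    function parseRange(header) {
      const m = /^bytes=(\d+)-(\d+)$/.exec(header ?? "");
      return m ? { offset: Number(m[1]), length: Number(m[2]) - Number(m[1]) + 1 } : null;
    }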

How it works

  1. The camera publishes to Cloudflare Stream; the page plays the feed back over HLS/DASH.
  2. The Worker selects a soundtrack via /api/next-track (time-of-day heuristics) and returns a signed URL (prod) or a local stream URL (dev); a sketch of this step follows the list.
  3. A scheduled Worker periodically snapshots frames, describes them, and stores embeddings in Vectorize and metadata in D1 (also sketched below).
  4. The browser opens a WebSocket to ConversationDO; two personas react to the live scene and broadcast commentary to all visitors.
  5. The oscilloscope renders audio energy in a 4×4 grid, turning sound into a simple, legible visual language (final sketch below).
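
Step 2, sketched under stated assumptions: the write-up does not say how the signed URLs are produced, so this version uses a generic short-lived HMAC token via the Web Crypto API, with hypothetical ENVIRONMENT and SIGNING_SECRET variables and placeholder track names. The matching /api/audio handler would verify the same token before streaming; that check is omitted here.

    // Hypothetical /api/next-track: time-of-day pick, then a signed URL in
    // production or the plain local streaming route in dev.
    const TRACKS = { morning: "dawn.mp3", evening: "dusk.mp3", late: "night.mp3" }; // placeholders

    export async function handleNextTrack(request, env) {
      const hour = new Date().getUTCHours();
      const track = hour < 12 ? TRACKS.morning : hour < 20 ? TRACKS.evening : TRACKS.late;

      if (env.ENVIRONMENT !== "production") {
        // Dev fallback: the UI consumes the same JSON shape either way.
        return Response.json({ url: `/api/audio/stream/${track}` });
      }

      // Short-lived HMAC token over "<track>:<expiry>" using Web Crypto.
      const expires = Date.now() + 5 * 60_000;
      const key = await crypto.subtle.importKey(
        "raw", new TextEncoder().encode(env.SIGNING_SECRET),
        { name: "HMAC", hash: "SHA-256" }, false, ["sign"],
      );
      const sig = await crypto.subtle.sign("HMAC", key, new TextEncoder().encode(`${track}:${expires}`));
      const token = btoa(String.fromCharCode(...new Uint8Array(sig)));
      return Response.json({ url: `/api/audio/${track}?exp=${expires}&sig=${encodeURIComponent(token)}` });
    }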
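Step 3 as a sketch. The binding names (AI, VECTORIZE, DB), the embedding model, the table schema, and the captioning helper are all assumptions; only the snapshot, describe, embed, persist shape comes from the description above.

    // Hypothetical cron handler: embed the current scene description, then
    // persist the vector in Vectorize and the structured event in D1.
    export default {
      async scheduled(event, env, ctx) {
        const description = await describeCurrentFrame(env);

        // Text embedding via Workers AI (model choice is an assumption).
        const { data } = await env.AI.run("@cf/baai/bge-base-en-v1.5", { text: [description] });

        const id = crypto.randomUUID();
        await env.VECTORIZE.insert([{ id, values: data[0], metadata: { description } }]);

        // The D1 row powers the BI view and joins back to the vector by id.
        await env.DB
          .prepare("INSERT INTO strange_events (id, description, observed_at) VALUES (?, ?, ?)")
          .bind(id, description, new Date().toISOString())
          .run();
      },
    };

    // Stand-in for the frame snapshot + captioning step, which the write-up doesn't detail.
    async function describeCurrentFrame(env) {
      return "a figure crosses the frame under sodium light";
    }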
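And step 5, a minimal browser-side sketch: it assumes sixteen .scope-cell elements forming the 4×4 grid and an <audio> element feeding a Web Audio AnalyserNode. The real visualizer's rendering is richer; this shows only the energy-to-cell mapping.

    // Split the analyser's spectrum into 16 bands and drive one grid cell per band.
    // Assumed to run after the user presses play (browser autoplay rules).
    const audioCtx = new AudioContext();
    const analyser = audioCtx.createAnalyser();
    analyser.fftSize = 256; // 128 frequency bins -> 8 bins per cell

    audioCtx.createMediaElementSource(document.querySelector("audio"))
      .connect(analyser)
      .connect(audioCtx.destination);

    const cells = document.querySelectorAll(".scope-cell"); // assumed 16 elements
    const bins = new Uint8Array(analyser.frequencyBinCount);

    function draw() {
      analyser.getByteFrequencyData(bins);
      const per = bins.length / cells.length;
      cells.forEach((cell, i) => {
        // Average this cell's slice of the spectrum and map it to opacity.
        let sum = 0;
        for (let j = i * per; j < (i + 1) * per; j++) sum += bins[j];
        cell.style.opacity = (sum / per / 255).toFixed(2);
      });
      requestAnimationFrame(draw);
    }
    draw();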

Highlights

  • Edge‑native, zero‑ops architecture with minimal latency.
  • Env‑aware audio: signed URLs in production, seamless local fallback.
  • Global, synchronized commentary via Durable Objects + WebSockets.
  • Private media with Range streaming and immutable caching.
  • Searchable “strangeness” via Vectorize embeddings + D1.

Outcome

A stable, always‑on ambient experience with a unique aesthetic that merges raw video, sound, and AI commentary. The system is easily extensible (more personas, refined strangeness metrics, additional visualizations) while remaining low‑maintenance.