Talking Blobs: A Non-Anthropomorphic Design Approach for Virtual Companions
2025
Mathias Hradecsni
Andreas Muxel
Talking Blobs explores an alternative design paradigm for virtual companions—one that embraces abstract forms and motion instead of faces, voices, or humanoid features. Rather than mimicking people, these companions communicate through rhythm, presence, and fluid movement in space.
Instead of speech bubbles or facial expressions, a Talking Blob quietly shifts and reshapes in the corner of your vision. A calm shape gently breathes in your periphery, signaling availability. A geometric form bounces softly when a call comes in. A flowing arrow leads the way through an unfamiliar building. These abstract cues create subtle, emotionally resonant interactions without relying on human likeness.
This non-anthropomorphic approach invites a more inclusive and transparent interaction model—one that avoids reinforcing cultural, gendered, or normative stereotypes baked into many current avatar systems.
Scenarios
Soft Attention
A gentle, rounded form hovers at the edge of your vision. It doesn’t interrupt or speak. It just stays close, calm and aware.
Visual Tone for Incoming Calls
Different shapes express different relationships. A round, welcoming form means it’s a friend calling. A crystalline, angular one might suggest it’s someone new. Without any labels, tone is communicated through movement and form.
Playful Reminders
When you pass a wilting plant, the blob stretches and reshapes into the outline of a watering can. This acts as an intuitive, expressive reminder that is hard to miss yet doesn't feel like a notification.
Ambient Navigation
In unfamiliar surroundings, the companion fluidly adapts to guide your movement. Inside a building, it drifts ahead and pauses near the correct door, offering a clear but unobtrusive cue for where to enter.
We built a custom Apple Vision Pro app using RealityKit, rendering real-time 3D point-based blobs directly in the headset. Shape behaviors are designed in TouchDesigner, where we use the new POP (point operator) family to craft expressive particle systems. The point data is streamed via UDP to the Vision Pro, with performance optimized for roughly 1,000 points per frame.
This setup enables quick prototyping and expressive movement without requiring complex assets or rigging.
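To illustrate the streaming side, here is a minimal Python sketch of how point data could be packed into UDP datagrams and sent to the headset. This is an illustrative assumption, not the project's actual wire format: the point count, the flat little-endian float32 layout, and the loopback address and port are all hypothetical choices made for the example.

```python
import socket
import struct

POINT_COUNT = 1000                 # assumption: matches the ~1,000 points per frame budget
HEADSET_ADDR = ("127.0.0.1", 9000)  # hypothetical address and port of the Vision Pro app

def pack_points(points):
    """Pack a list of (x, y, z) float tuples into a flat binary payload.

    Assumed layout (not the project's actual protocol):
    a little-endian uint32 point count, followed by count * 3 float32 values.
    """
    header = struct.pack("<I", len(points))
    body = struct.pack(f"<{len(points) * 3}f", *(c for p in points for c in p))
    return header + body

def send_frame(sock, points, addr=HEADSET_ADDR):
    """Send one frame of points as a single UDP datagram."""
    # 1,000 points * 12 bytes + 4-byte header = 12,004 bytes per frame,
    # which a local network can carry (IP fragmentation applies above the MTU).
    sock.sendto(pack_points(points), addr)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Example frame: points spread along a line
    frame = [(i * 0.001, 0.0, 0.0) for i in range(POINT_COUNT)]
    send_frame(sock, frame)
```

A datagram-per-frame design like this keeps latency low and tolerates occasional packet loss, since the next frame simply replaces the last one, which suits continuously animated shapes better than a reliable stream would.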
Talking Blobs was presented at the 1st International Workshop on Design Fiction for IVAs titled “The Future is Near: Design Fiction Explorations of Virtual Companions,” hosted during IVA 2025 (ACM International Conference on Intelligent Virtual Agents) in Berlin.
As part of the workshop, our paper was accepted and is included in the published adjunct proceedings.