meet.able
Design for collaboration beyond the screen
meet.able: When everyone needs control
2009 · Academic Research → Industry Adoption · Multi-touch Interaction Design

The broken meeting
Picture this: Four designers around a conference table. One person controls the laptop and projector. Everyone else watches. When someone has an idea, they wait for the "driver" to find the right file, open the right tool, make space on screen. The technology that's supposed to help collaboration creates a bottleneck instead.

I watched this pattern repeat across architecture firms and advertising agencies during field research. The fundamental problem: traditional computing interfaces assume single-user control. One mouse. One keyboard. One person in charge.
Multi-touch tables promised to fix this. Microsoft Surface launched in 2007. The iPhone proved multi-touch worked at scale. But nobody had solved the collaborative UX questions: How do multiple people interact simultaneously without chaos? Where do menus go when there's no "front" of the table?
Research: Understanding collaborative space
I started by mapping what we do know about how people work together at tables.

Observational insights from real design meetings:
- People create personal zones (Scott et al.: 87-100% of actions happen in the space directly in front of you)
- Seating arrangements shift fluidly: tight collaboration, then individual work, then regrouping
- Physical materials matter: paper for quick sketching, business cards exchanged, coffee cups everywhere
- The table itself is neutral territory; imposed structure (like fixed menu bars) breaks social flow
The paper paradox:
Everyone had laptops, but designers still grabbed pens for annotation. Paper is robust (works when wet), requires no boot time, accepts any input tool, and never runs out of battery. Digital systems kept trying to replace paper. Wrong goal. The table should augment what paper does well, not eliminate it.
Technical constraints I had to solve for:
- Multiple simultaneous touches (ruled out resistive screens)
- Debris tolerance (coffee cups can't crash the system)
- Natural posture (people rest elbows on table edges)
- Reach ergonomics (Fitts' law: time-to-target grows with distance, as the formula below shows)
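
For context, Fitts' law in its common Shannon formulation (a and b are empirical constants fitted per input device, not values measured in this project):

$$MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)$$

MT is the time to reach a target, D the distance to it, and W its width. The lever an interface controls is the D/W ratio, which is why the RHG later keeps buttons within roughly 2cm of the fingertips.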
The design challenge crystallizes
After synthesizing collaborative coupling patterns, territoriality research, and gesture vocabularies, I faced the core interaction problem:
If six people are working around a table, how does each person access tools without interfering with others?
Bad solutions I rejected:
- Fixed menu bar: Someone always sees it upside down; wastes shared space
- Floating palettes: Clutter; require constant repositioning; unclear ownership
- Voice commands: Noisy; socially awkward; poor for parallel work
- Physical remotes: Back to single-user bottleneck; defeats the multi-touch premise
I needed something personal (everyone has their own access), ergonomic (fast, low-effort), unobtrusive (doesn't disrupt conversation), and spatially neutral (works from any seat).
The breakthrough: Resting Hand Gesture
The solution came from watching people not use the table.
During meetings, people rest their hands flat on surfaces while thinking, listening, explaining. It's a natural, relaxed posture. What if that posture—that moment of presence—became the interaction starting point?
The Resting Hand Gesture (RHG)

Concept: Place your full hand flat → context-sensitive menu buttons appear at your fingertips → tap the one you want.
Why it works:
Ergonomic: Optimized for Fitts' law; buttons appear no more than 2cm from the fingertips. Minimal movement, maximum speed.
Consistent: Same gesture, same finger→function mapping (Search = index finger, always). Builds muscle memory.
Personal: Your hand, your menu, in your space. Six people can invoke simultaneously without conflict.
Low cognitive load: One gesture to remember. No hunting through hierarchical menus.
Socially invisible: Resting your hand looks natural. Doesn't signal "I'm monopolizing the computer now."
Design details that matter
Invocation sensitivity:
Five-blob detection with an area threshold. Too sensitive and you get accidental triggers while gesturing; too strict and the gesture frustrates. The solution: brief persistence after hand lift lets you adjust without losing the menu, and a double-tap pins it for extended use.
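In today's terms, that invocation logic might look like the minimal TypeScript sketch below. The Blob shape, the thresholds, and every name are illustrative assumptions, not the project's implementation:

```typescript
// Hypothetical RHG invocation: five fingertip blobs plus a palm blob open
// the menu; a grace period after hand lift keeps it alive; a double-tap
// pins it. All thresholds are invented for illustration.
interface Blob { x: number; y: number; area: number }

const PALM_AREA_MIN = 3000;   // px^2: blobs above this count as the palm
const LIFT_GRACE_MS = 800;    // menu persists briefly after hand lift
const DOUBLE_TAP_MS = 300;    // max gap between taps that pins the menu

type MenuState = "hidden" | "open" | "pinned";

class RestingHandMenu {
  state: MenuState = "hidden";
  private liftTimer?: ReturnType<typeof setTimeout>;
  private lastTapAt = 0;

  onFrame(blobs: Blob[]): void {
    const fingertips = blobs.filter(b => b.area < PALM_AREA_MIN);
    const palm = blobs.some(b => b.area >= PALM_AREA_MIN);

    if (fingertips.length === 5 && palm) {
      // Full resting hand detected: open the menu (or keep it open).
      clearTimeout(this.liftTimer);
      if (this.state === "hidden") this.state = "open";
    } else if (this.state === "open" && blobs.length === 0) {
      // Hand lifted: keep the menu for a grace period, then hide it.
      this.liftTimer = setTimeout(() => { this.state = "hidden"; }, LIFT_GRACE_MS);
    }
  }

  onTap(now: number): void {
    // A double-tap while the menu is open pins it for extended use.
    if (this.state === "open" && now - this.lastTapAt < DOUBLE_TAP_MS) {
      this.state = "pinned";
    }
    this.lastTapAt = now;
  }
}
```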
Context adaptation:
Buttons change based on what's under your hand. Over a document: annotate, share, archive. Over empty space: search, create, grab (pull distant objects to you). Over another person's zone: collaboration options appear.
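The lookup itself can be a plain mapping from whatever sits under the palm to a button set; the contexts and button names below are hypothetical stand-ins:

```typescript
// Hypothetical context-to-buttons mapping for the RHG menu. Button order
// is fixed so each finger keeps its function within a context (e.g. the
// first entry always lands on the index finger).
type Context = "document" | "empty" | "othersZone";

const MENU_FOR_CONTEXT: Record<Context, string[]> = {
  document:   ["annotate", "share", "archive"],
  empty:      ["search", "create", "grab"],   // grab pulls distant objects to you
  othersZone: ["requestCopy", "offerCopy"],   // illustrative collaboration options
};

function buttonsUnderHand(context: Context): string[] {
  return MENU_FOR_CONTEXT[context];
}
```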
Visual feedback:
Buttons "grow" from fingertips with fluid animation—clear cause-and-effect. Active selections highlighted in your personal color (user identification via chair sensors, like DiamondTouch).

Validation: Testing with fake hardware
I couldn't build a working multi-touch table (student budget), so I turned to rapid prototyping through film:

- Filmed people at a real meeting table against a green screen
- Animated the UI behavior in After Effects
- Composited hands and interface frame by frame
- Published a concept video with annotations
Result: 3,000+ views across the HCI community. The feedback was sharp:
What landed:

"This is the first gestural interface that feels like it respects how humans actually sit at tables." — NUI Group
What needed work:
People rest their hands all the time, creating a real risk of accidental invocation. Suggested mitigations: require an intentional hold (0.5s), add finger-spread detection, or use context-aware activation (only when content is nearby).
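Finger-spread detection, the second suggestion, could separate a deliberately planted hand from a casual rest; this sketch uses the fingertip bounding box as a cheap proxy, with an invented threshold:

```typescript
// Hypothetical finger-spread check: a deliberate RHG fans the fingers out,
// while a casually resting hand keeps them together. The ratio is invented.
interface Point { x: number; y: number }

const MIN_SPREAD_RATIO = 1.6; // fingertip span relative to palm width, assumed

function isDeliberateHand(fingertips: Point[], palmWidth: number): boolean {
  if (fingertips.length !== 5) return false;
  const xs = fingertips.map(p => p.x);
  const ys = fingertips.map(p => p.y);
  // Span of the fingertip bounding box as a proxy for how far fingers fan out.
  const span = Math.max(
    Math.max(...xs) - Math.min(...xs),
    Math.max(...ys) - Math.min(...ys),
  );
  return span / palmWidth >= MIN_SPREAD_RATIO;
}
```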
Impact: The Resting Hand Gesture pattern was subsequently adopted by Adobe for their touch interfaces and influenced multi-user tabletop interaction research in academia and industry.

Project Details
Methodology: Scenario-driven research · Field observation · Literature synthesis · Iterative concept design · Video pre-visualization
Contribution: Resting Hand Gesture as reusable pattern for multi-user surface UIs
Tools: Paper prototyping · After Effects · Motion · Green screen filming
What I learned
Process beats inspiration:
The RHG didn't come from a flash of genius. It emerged from systematic research → field observation → constraint mapping → iteration. Its late arrival (it appeared in sketches only near the end) shows that good ideas need pressure-testing against real problems.
Design for transitions, not replacements:
The most compelling feedback singled out the coffee-cup recognition, business-card scanning, and paper-on-glass tracing. People loved that meet.able integrated physical objects instead of demanding a purely digital workflow.
Show, don't build (when necessary):
The video pre-visualization method validated core interaction concepts without hardware investment. Not a replacement for real user testing, but enough to prove the idea had legs.
Social dynamics trump technical features:
Territoriality, collaborative coupling, proxemics—these human factors mattered more than multi-touch technical specs. The interface had to get out of the way of human collaboration, not showcase itself.

Working this way shaped how I design today: When I build design systems or privacy-first interfaces now, I start the same way—map real workflows, identify constraint conflicts, prototype ruthlessly. The RHG taught me that breakthrough interactions emerge from pressure-testing ideas against messy reality, not from inspired sketches in isolation.