Now validating with robotics teams

Human hands.
Robot brains.

Turn iPhone recordings into robot training data. Anyone can contribute demonstrations. Any robot can learn from them.

$0
contributor cost
99%
cheaper than labs
42
joints tracked
Any
robot morphology
The problem

Robot training data doesn't scale

Teaching robots costs $8,700+ per capture station, requires trained operators, and only works in labs. Real-world diversity is impossible.

$8,700+

Per capture station

Specialized cameras, motion capture rigs, calibrated environments.

1

Lab environment

Fixed setups can't represent the thousands of kitchens where robots need to work.

0

Path to 10K demos

Lab setups fundamentally can't produce the environmental diversity robots need.

How it works

iPhone to robot in four steps

Record a task. We extract 3D hand motion and retarget it to any robot.

Step 01

Record

iPhone LiDAR captures RGB + depth + IMU via Record3D.

Step 02

Extract

AI detects 21 hand joints. LiDAR places them in 3D space.
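The lifting from 2D keypoints to 3D is standard pinhole back-projection. A minimal sketch, assuming camera intrinsics (fx, fy, cx, cy) and a metric depth map; the function name and signature are illustrative, not the pipeline's actual API:

```python
import numpy as np

def backproject(keypoints_2d, depth, fx, fy, cx, cy):
    """Lift 2D hand keypoints into camera-space 3D using a depth map.

    keypoints_2d: (N, 2) pixel coordinates (u, v)
    depth:        (H, W) metric depth in meters (e.g. from LiDAR)
    fx, fy, cx, cy: pinhole intrinsics
    Returns (N, 3) points in camera coordinates.
    """
    pts = []
    for u, v in keypoints_2d:
        z = depth[int(round(v)), int(round(u))]  # depth at the keypoint pixel
        # Invert the pinhole projection: u = fx*x/z + cx, v = fy*y/z + cy
        pts.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return np.array(pts)
```

In practice the depth sample would be filtered around the pixel and occluded joints interpolated; this shows only the geometry.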

Step 03

Track

42 joints smoothed across frames with grip state detection.
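A minimal sketch of the smoothing-plus-grip idea: an exponential moving average over frames, with thumb-to-index pinch distance as the grip proxy. The actual filter, joint indices, and threshold used in the pipeline are assumptions here:

```python
import numpy as np

def smooth_and_detect_grip(joints, alpha=0.3, grip_thresh=0.03,
                           thumb_tip=4, index_tip=8):
    """Exponentially smooth per-frame joint positions and flag grip state.

    joints: (T, 42, 3) — 21 joints per hand, two hands, meters
    Returns (smoothed joints, per-frame bool grip flags for the first hand).
    """
    smoothed = np.empty_like(joints)
    smoothed[0] = joints[0]
    for t in range(1, len(joints)):
        # EMA: new frames nudge the estimate, per-frame jitter averages out
        smoothed[t] = alpha * joints[t] + (1 - alpha) * smoothed[t - 1]
    # Grip when thumb tip and index tip are closer than the threshold
    pinch = np.linalg.norm(
        smoothed[:, thumb_tip] - smoothed[:, index_tip], axis=-1)
    return smoothed, pinch < grip_thresh
```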

Step 04

Retarget

IK maps your motion to any robot morphology.
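The retargeting idea — same demonstrations, different kinematics — can be sketched as damped-least-squares IK over a pluggable forward-kinematics function. A toy planar arm stands in for a real robot model; everything below is illustrative, not Flexa's solver:

```python
import numpy as np

def fk_planar(q, link_lengths):
    """Forward kinematics of a planar serial arm: joint angles -> end-effector xy."""
    angles = np.cumsum(q)
    return np.array([np.sum(link_lengths * np.cos(angles)),
                     np.sum(link_lengths * np.sin(angles))])

def ik_step(q, target, fk, damping=1e-2, eps=1e-5):
    """One damped-least-squares IK step toward `target` with a numeric Jacobian."""
    err = target - fk(q)
    J = np.empty((len(err), len(q)))
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    # (J^T J + lambda*I)^-1 J^T e — damping keeps steps stable near singularities
    return q + np.linalg.solve(J.T @ J + damping * np.eye(len(q)), J.T @ err)

def retarget(trajectory, q0, fk, iters=50):
    """Map a wrist trajectory to joint targets; swap `fk` to change morphology."""
    q, out = np.asarray(q0, dtype=float).copy(), []
    for target in trajectory:
        for _ in range(iters):
            q = ik_step(q, target, fk)
        out.append(q.copy())
    return np.array(out)
```

Swapping `fk` (and joint limits, omitted here) is what makes the same recording drive a 7-DoF arm or a 43-DoF humanoid.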

Results

One iPhone recording.
Robot picks up the mug.

Real iPhone 15 Pro recording → HaMeR hand extraction → trajectory retargeting → Franka Panda grasps and lifts 18.1cm.

Real recording → robot grasp

Hand trajectory drives approach. Expert funnel handles final contact. 18.1cm clean lift.

Validation replay

Same recording, different controller. +17.2cm lift confirms usable grasp signals.

93.7%
hand detection
18.1cm
clean lift
0
collisions
1
iPhone recording
V5 Audit

End-effector distance, height, mug lift, and gripper state over 270 frames

18.1cm
max lift
0.13cm
min distance
0
table collisions
PASS
automated verdict
V1
Wild
V2
Tipped
V3
4cm miss
V4
4cm+ off
V5
18.1cm lift ✓

5 iterations in 1 day · Feb 28 2026 · GroundingDINO + HaMeR on Modal · MuJoCo Panda

Robot-agnostic

One recording, any robot

Swap the IK solver, keep the demonstrations.

Franka Panda

7-DoF arm with parallel gripper. Full pick and place.

Unitree G1

43-DoF humanoid with finger-level retargeting.

Economics

99% cheaper

Contributors use hardware they already own.

Flexa

$0
  • iPhone with LiDAR + Record3D (free)
  • Any environment — kitchen, office, garage
  • No training required
  • Scales to 10,000+ contributors

Lab capture

$8,700+
  • Specialized cameras + mocap rigs
  • Fixed lab environment
  • Trained operators required
  • Dozens of stations, max

Scaling

More data, better robots

Every 10× in demonstrations halves positioning error.

0.036m
@ 100 demos (validated)
~0.018m
@ 1K demos (projected)
~0.009m
@ 10K demos (projected)
Data

Download real pipeline output

Exports in the formats major robotics frameworks consume.

LeRobot
RLDS / TensorFlow
Raw JSON
EgoScale-compatible

LeRobot

HuggingFace-compatible. 270 steps, 14-dim obs + 8-dim actions.

238 KB
Download

RLDS

RT-X / Octo compatible. 270 steps with language instruction.

265 KB
Download

Raw episodes

Full state: qpos, ee_pos, mug_pos, gripper, actions.

216 KB
Download
python
# pip install h5py huggingface_hub
from huggingface_hub import hf_hub_download
import h5py

# Download real Flexa dataset from HuggingFace
path = hf_hub_download("Flexa/pick-mug-v5", "data/data.hdf5", repo_type="dataset")

with h5py.File(path, "r") as f:
    ep = f["episode_0"]
    qpos = ep["observations/qpos_arm"][:]   # (270, 7)
    ee = ep["observations/ee_pos"][:]       # (270, 3)
    actions = ep["actions/target_qpos"][:]  # (270, 7)
    print(f"Lift: {ep.attrs['mug_lift_cm']:.1f}cm")
# → Lift: 17.7cm
Earlier validation — E2E proof & policy training, Feb 2026

100 simulated contributors → pipeline → BC training → 0.036m accuracy. Single demo → 2 working policies in <500 steps.

Trained BC policy

Behavioral cloning on crowdsourced data

Expert reference

Single demonstration for comparison

Policy A · BC

Pick + Lift

Max lift +17.8cm
Min EE→mug 2.45cm
Policy B · Phase-conditioned

Pick + Place

Lift +10.7cm
Transport 30cm
Min EE→mug 2.24cm
1
demo
~500
training steps
2
working policies
<5ms
inference
Contribute

Earn with your iPhone

Record everyday tasks for 20 minutes. Your iPhone has better depth sensors than most robotics labs.

01

Your iPhone

iPhone 12 Pro+ with LiDAR. Record3D is free. Nothing else needed.

02

Record tasks

Cooking, sorting, pouring. Structured prompts guide each session.

03

Get paid

$3–5 per validated task. Beta contributors get early access.

Join the beta

500 spots · iOS app coming soon

Founder

Christian Nyamekye — EE + CS at Dartmouth '26. Works with Boston Dynamics Spot. Built Flexa after watching training data become the #1 bottleneck for robot deployment.

Dartmouth '26 Boston Dynamics EE + CS
iPhone LiDAR Record3D HaMeR GroundingDINO MuJoCo Unitree G1 Franka Panda Modal LeRobot

Need robot training data?

We're validating with robotics teams now.

Get in touch View 3D demo →