Turn iPhone recordings into robot training data. Anyone can contribute demonstrations. Any robot can learn from them.
Teaching robots costs $8,700+ per capture station, requires trained operators, and only works in labs. Real-world diversity is impossible.
Specialized cameras, motion capture rigs, calibrated environments.
Fixed setups can't represent the thousands of kitchens where robots need to work.
Lab setups fundamentally can't produce the environmental diversity robots need.
Record a task. We extract 3D hand motion and retarget it to any robot.
iPhone LiDAR captures RGB + depth + IMU via Record3D.
AI detects 21 hand joints. LiDAR places them in 3D space.
All 42 joints (21 per hand) smoothed across frames, with grip-state detection.
IK maps your motion to any robot morphology.
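The per-frame logic behind the steps above can be sketched in a few lines. This is a minimal illustration, not Flexa's pipeline: the function names, the thumb–index distance threshold, and the workspace offset are all assumptions, and a real system would solve full inverse kinematics rather than a simple frame translation.

```python
import math

def grip_state(thumb_tip, index_tip, close_thresh=0.03):
    """Classify grip from thumb-index fingertip distance in meters.
    Hypothetical heuristic; threshold is illustrative."""
    return "closed" if math.dist(thumb_tip, index_tip) < close_thresh else "open"

def retarget_wrist(wrist_xyz, scale=1.0, offset=(0.4, 0.0, 0.1)):
    """Map a human wrist position into a robot workspace frame.
    A real pipeline runs IK per morphology; this is only the translation step."""
    return tuple(scale * w + o for w, o in zip(wrist_xyz, offset))

# One frame of (hypothetical) extracted hand data, in meters
frame = {"wrist": (0.10, -0.05, 0.30),
         "thumb_tip": (0.12, -0.03, 0.31),
         "index_tip": (0.13, -0.04, 0.31)}

target = retarget_wrist(frame["wrist"])                    # end-effector goal
state = grip_state(frame["thumb_tip"], frame["index_tip"])  # "closed" here
print(target, state)
```

Applied per frame, this yields an end-effector trajectory plus a gripper command stream — the two signals an IK solver needs for any robot morphology.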
Real iPhone 15 Pro recording → HaMeR hand extraction → trajectory retargeting → Franka Panda grasps and lifts 18.1cm.
Hand trajectory drives approach. Expert funnel handles final contact. 18.1cm clean lift.
Same recording, different controller. +17.2cm lift confirms usable grasp signals.
End-effector distance, height, mug lift, and gripper state over 270 frames
5 iterations in 1 day · Feb 28 2026 · GroundingDINO + HaMeR on Modal · MuJoCo Panda
Swap the IK solver, keep the demonstrations.
7-DoF arm with parallel gripper. Full pick and place.
43-DoF humanoid with finger-level retargeting.
Contributors use hardware they already own.
Every 10× increase in demonstrations halves positioning error.
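That scaling claim implies error falls as a power law in dataset size. A quick sketch of the implied curve, anchored (as an assumption) to the 0.036 m figure reported for 100 contributors — the 10-demo baseline of 0.072 m is extrapolated, not measured:

```python
import math

def positioning_error(n_demos, base_error=0.072, base_n=10):
    """Error model implied by 'every 10x demos halves error':
    error = base_error * 0.5 ** log10(n / base_n).
    base_error and base_n are illustrative anchors, not measured values."""
    return base_error * 0.5 ** math.log10(n_demos / base_n)

for n in (10, 100, 1000):
    print(n, round(positioning_error(n), 4))
# 10 -> 0.072, 100 -> 0.036, 1000 -> 0.018
```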
Compatible with every major robotics framework.
```python
# pip install h5py huggingface_hub
from huggingface_hub import hf_hub_download
import h5py

# Download real Flexa dataset from HuggingFace
path = hf_hub_download("Flexa/pick-mug-v5", "data/data.hdf5", repo_type="dataset")

with h5py.File(path, "r") as f:
    ep = f["episode_0"]
    qpos = ep["observations/qpos_arm"][:]   # (270, 7)
    ee = ep["observations/ee_pos"][:]       # (270, 3)
    actions = ep["actions/target_qpos"][:]  # (270, 7)
    print(f"Lift: {ep.attrs['mug_lift_cm']:.1f}cm")  # → Lift: 17.7cm
```
100 simulated contributors → pipeline → BC training → 0.036m accuracy. Single demo → 2 working policies in <500 steps.
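Behavioral cloning itself is just supervised learning from observation–action pairs. A toy, dependency-free sketch of the idea — a 1-D linear policy fit by gradient descent on made-up data, nothing like Flexa's actual trainer:

```python
# Minimal behavioral-cloning sketch: fit action ≈ w * obs + b
# by gradient descent on mean squared error. Toy data, illustrative only.
def bc_fit(obs, acts, lr=0.1, steps=500):
    w, b = 0.0, 0.0
    n = len(obs)
    for _ in range(steps):
        gw = sum((w * o + b - a) * o for o, a in zip(obs, acts)) * 2 / n
        gb = sum((w * o + b - a) for o, a in zip(obs, acts)) * 2 / n
        w -= lr * gw
        b -= lr * gb
    return w, b

obs = [0.0, 0.5, 1.0, 1.5]
acts = [0.1, 0.6, 1.1, 1.6]   # underlying policy: action = obs + 0.1
w, b = bc_fit(obs, acts)
print(round(w, 2), round(b, 2))  # converges near w=1.0, b=0.1
```

With real demonstrations, `obs` would be the recorded robot state and `acts` the retargeted joint targets from the dataset above.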
Behavioral cloning on crowdsourced data
Single demonstration for comparison
Record everyday tasks for 20 minutes. Your iPhone has better depth sensors than most robotics labs.
iPhone 12 Pro+ with LiDAR. Record3D is free. Nothing else needed.
Cooking, sorting, pouring. Structured prompts guide each session.
$3–5 per validated task. Beta contributors get early access.
500 spots · iOS app coming soon
Christian Nyamekye — EE + CS at Dartmouth '26. Works with Boston Dynamics Spot. Built Flexa after watching training data become the #1 bottleneck for robot deployment.
We're validating with robotics teams now.