Licensable, synchronized multi-modal datasets for humanoid robotics. Let engineers preview, verify quality, and license precisely what they need — scene by scene, not whole archives.
Finger-level truth (not approximations)
Millimeter-scale hand/finger kinematics plus pressure/contact signals, so models can learn real grasp dynamics—not just pose trajectories.
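For illustration, a minimal sketch of one way pressure traces can be turned into binary contact labels for grasp modelling; the `contact_labels` function, the thresholds, and the (frames × fingertips) array shape are hypothetical examples, not this dataset's actual schema.

```python
import numpy as np

def contact_labels(pressure: np.ndarray, on: float = 0.5, off: float = 0.2) -> np.ndarray:
    """Binary contact labels from per-frame fingertip pressure.

    pressure: (T, F) array, T frames x F fingertip sensors. Hysteresis
    (separate on/off thresholds) suppresses label chatter near the threshold.
    """
    T, F = pressure.shape
    labels = np.zeros((T, F), dtype=bool)
    state = np.zeros(F, dtype=bool)
    for t in range(T):
        # Stay in contact while above `off`; enter contact only above `on`.
        state = np.where(state, pressure[t] > off, pressure[t] > on)
        labels[t] = state
    return labels

# Synthetic demo: one fingertip ramps into and out of contact.
p = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, 0, 50)])[:, None]
print(contact_labels(p).sum())  # number of frames labelled "in contact"
```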
Synchronized multi-modal ground truth
Motion + multi-view video (and additional signals where applicable) captured in time alignment, enabling strong visual grounding and cross-modal learning.
Coverage across the full realism spectrum
Controlled “factory-grade” precision, large-space motion-with-vision, and truly natural “in-the-wild” behavior—so training data spans clean labels and messy real-world variance.
Built for scale, consistency, and deployment
Repeatable acquisition pipelines, standardized calibration/QA, and dataset structure designed for model training workflows—so you get reliable data, not one-off demos.
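To make that workflow concrete, here is a minimal loading sketch against a hypothetical scene layout; the directory names, `manifest.json`, and `motion.npy` are assumptions for illustration, not the actual on-disk format.

```python
import json
from pathlib import Path

import numpy as np

# Hypothetical scene layout (illustrative only, not the vendor's schema):
#   scene_0001/
#     manifest.json       # per-stream sample rates and time offsets
#     motion.npy          # (T, J, 3) joint positions
#     cam_00/frames/…     # extracted video frames per camera
def load_scene(root: Path) -> dict:
    """Load one scene's manifest, motion track, and per-camera frame paths."""
    manifest = json.loads((root / "manifest.json").read_text())
    motion = np.load(root / "motion.npy")  # time-aligned skeletal track
    cameras = {
        cam.name: sorted((cam / "frames").glob("*.png"))
        for cam in sorted(root.glob("cam_*"))
    }
    return {"manifest": manifest, "motion": motion, "cameras": cameras}
```

A training pipeline would typically wrap a loader like this in a dataset class that samples time-aligned motion/video windows per step.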
120 Hz full-body skeletal data with optical-inertial hybrid rigs. Sub-millisecond synchronization across all sensors.
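As a sketch of how such tightly synchronized streams are typically fused downstream, the snippet below pairs each video frame with the nearest 120 Hz skeleton sample by timestamp; the timestamps are synthetic and `align_nearest` is a hypothetical helper, not part of any delivered tooling.

```python
import numpy as np

def align_nearest(src_ts: np.ndarray, dst_ts: np.ndarray) -> np.ndarray:
    """Index into src_ts of the sample nearest each dst_ts (both sorted, seconds)."""
    idx = np.searchsorted(src_ts, dst_ts)
    idx = np.clip(idx, 1, len(src_ts) - 1)
    left_closer = (dst_ts - src_ts[idx - 1]) < (src_ts[idx] - dst_ts)
    return np.where(left_closer, idx - 1, idx)

skel_ts = np.arange(0, 2, 1 / 120)   # 120 Hz skeletal samples
frame_ts = np.arange(0, 2, 1 / 30)   # 30 fps video frames
pairs = align_nearest(skel_ts, frame_ts)
print(np.abs(skel_ts[pairs] - frame_ts).max())  # worst-case pairing error (s)
```

With sub-millisecond clock sync, residual pairing error is bounded by half the skeleton sample period, roughly 4 ms at 120 Hz.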
Unstructured real-world environments: retail, warehouse, kitchen, and outdoor settings. Portable rig captures for domain diversity.
Consumer-grade capture for scale and behavioral diversity. Covers rare interaction primitives across demographics.
Synchronized multi-camera arrays with calibrated extrinsics enable 3D scene reconstruction and ego/third-person pairing.
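For example, calibrated projection matrices make linear (DLT) triangulation straightforward; the cameras below are synthetic pinhole models for illustration, not calibration values from the dataset.

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
    """Linear (DLT) triangulation of one point from two 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic cameras: identity intrinsics, second camera's center 1 m along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))  # ≈ [0.3, -0.2, 4.0]
```

The same construction extends to additional views by stacking two more rows of A per camera.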
We collaborate with teams pushing the edge of embodied AI.