Teach Robots
with Vision and Language
The open-source framework for Vision-Language-Action robotics. Train and fine-tune VLA models, then deploy them to real robots.
How It Works
Three steps from data to deployment.
Collect
Import data from Open X-Embodiment and LeRobot datasets, HDF5 demonstrations, and ROS bags.
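As a rough illustration, a single HDF5 demonstration can be inspected with a few lines of h5py. The file name and dataset layout below are assumptions for the example, not a format vlarobot prescribes:

```python
# Minimal sketch: reading one HDF5 teleop demonstration with h5py.
# The file name and group/dataset names are hypothetical; adjust them
# to match how your demonstrations were recorded.
import h5py

with h5py.File("demo_0001.hdf5", "r") as f:            # hypothetical file name
    images = f["observations/images/wrist_cam"][:]     # (T, H, W, 3) uint8 frames
    states = f["observations/qpos"][:]                  # (T, dof) joint positions
    actions = f["actions"][:]                           # (T, action_dim) commands

print(f"{len(actions)} timesteps, action dim {actions.shape[1]}")
```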
Train
Fine-tune VLA models with LoRA, QLoRA, or full-parameter training, from the CLI or the web dashboard.
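For the LoRA path, the wiring looks roughly like the sketch below, built on Hugging Face Transformers and PEFT. The checkpoint id and target modules are assumptions, and vlarobot's own CLI may configure this differently:

```python
# Minimal sketch: attaching LoRA adapters to a VLA backbone with Hugging Face PEFT.
# The checkpoint id and target_modules are assumptions for illustration.
from transformers import AutoModelForVision2Seq
from peft import LoraConfig, get_peft_model

base = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b",          # assumed checkpoint id
    trust_remote_code=True,
)
lora = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```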
Deploy
Test in MuJoCo simulation, then deploy to real robots via ROS2 adapters.
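A simulation rollout is just a control loop. The sketch below steps a MuJoCo scene with a placeholder policy; the scene file and the policy stub are stand-ins, not part of vlarobot's API, and the same loop shape applies when a ROS2 adapter replaces the simulator:

```python
# Minimal sketch: stepping a MuJoCo scene with actions from a placeholder policy.
import mujoco
import numpy as np

model = mujoco.MjModel.from_xml_path("franka_scene.xml")  # hypothetical scene file
data = mujoco.MjData(model)

def policy(obs: np.ndarray) -> np.ndarray:
    """Stand-in for a real VLA policy; returns zero actuator commands."""
    return np.zeros(model.nu)

for _ in range(500):
    obs = np.concatenate([data.qpos, data.qvel])  # simple state observation
    data.ctrl[:] = policy(obs)                    # write actuator commands
    mujoco.mj_step(model, data)                   # advance the simulation
```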
Supported Models
Unified interface for major VLA architectures. Adapters in development.
OpenVLA-7B
Discrete-token action prediction with a fused SigLIP + DINOv2 vision encoder and a Llama 2 backbone.
SmolVLA-450M
Flow-matching continuous control. Consumer-GPU friendly.
Dream-VLA-7B
Diffusion language model backbone for parallel action generation.
Pi-0
Flow-matching at 50Hz for high-frequency robot control.
Why vlarobot?
Everything you need for VLA robotics in one framework.
Unified Model Interface
Single API for OpenVLA, SmolVLA, and Dream-VLA. Switch models without changing your code.
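Conceptually, that looks like the sketch below. Every name here (load_policy, the model ids, predict) is a hypothetical stand-in used for illustration, not vlarobot's actual API:

```python
# Illustrative sketch only: the import, model ids, and predict() call are
# hypothetical stand-ins for a unified VLA policy interface.
import numpy as np
from vlarobot import load_policy                          # hypothetical import

camera_frame = np.zeros((224, 224, 3), dtype=np.uint8)    # placeholder RGB observation

for name in ("openvla-7b", "smolvla-450m", "dream-vla-7b"):
    policy = load_policy(name)                            # same call for every architecture
    action = policy.predict(
        image=camera_frame,
        instruction="pick up the red block",
    )
```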
Training Pipeline
Fine-tune VLA models with LoRA, QLoRA, or full-parameter training, via CLI or web dashboard.
Simulation Support
MuJoCo integration for safe testing before deploying to real hardware.
Robot Adapters
Pluggable adapters for Franka, WidowX, and UR5 via ROS2. Extensible to any robot.
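An adapter ultimately boils down to a ROS2 node that turns policy outputs into robot commands. The minimal rclpy sketch below shows the shape; the topic name, rate, and 7-DoF joint layout are assumptions:

```python
# Minimal sketch: a ROS2 node publishing joint commands, roughly the shape a
# robot adapter takes. Topic name, rate, and joint names are assumptions.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState


class PolicyCommandPublisher(Node):
    def __init__(self):
        super().__init__("vla_policy_bridge")
        self.pub = self.create_publisher(JointState, "/joint_commands", 10)  # assumed topic
        self.timer = self.create_timer(0.1, self.tick)                       # 10 Hz control loop

    def tick(self):
        msg = JointState()
        msg.name = [f"joint_{i}" for i in range(7)]   # hypothetical 7-DoF arm
        msg.position = [0.0] * 7                      # replace with the policy's output
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(PolicyCommandPublisher())


if __name__ == "__main__":
    main()
```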
Benchmark Suite
Standardized evaluation framework for comparing VLA models across tasks.
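In spirit, evaluation means rolling a policy out many times per task and aggregating success rates. The sketch below is illustrative only, with hypothetical task ids and a placeholder run_episode:

```python
# Illustrative sketch: aggregating per-task success rates the way a benchmark
# suite might. The task ids and run_episode are hypothetical placeholders.
TASKS = ["lift_block", "open_drawer", "stack_cups"]   # hypothetical task ids

def run_episode(task: str) -> bool:
    """Placeholder: roll the policy out for one episode and report success."""
    return False

results = {}
for task in TASKS:
    successes = [run_episode(task) for _ in range(20)]  # 20 rollouts per task
    results[task] = sum(successes) / len(successes)

for task, rate in results.items():
    print(f"{task}: {rate:.0%} success")
```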
Open Source
Apache 2.0 licensed. Build, fork, and contribute freely.
Trusted by the Robotics Community
Researchers, engineers, and educators building with vlarobot.
“vlarobot unified the fragmented VLA ecosystem into one clean interface. Exactly what the field needed.”
Research Lab
Robotics PhD Candidate
“We went from paper to robot in 2 days. The training pipeline and ROS2 integration saved us weeks.”
Industry Team
Robotics Startup
“My students built their first robot policy in simulation during a single lab session. Incredible for teaching.”
University
Adjunct Professor
Built For Everyone
From research lab to production floor.
Built for Researchers
Reproduce SOTA VLA papers. Benchmark your models against standardized tasks. Share checkpoints and datasets with the community. Access pre-configured training pipelines for OpenVLA, SmolVLA, Dream-VLA, and more.
Ready to Build the Future of Robotics?
Join the community building the open-source standard for VLA robotics.