Open Source • Apache 2.0

Teach Robots
with Vision and Language

The open-source framework for Vision-Language-Action robotics. Train, fine-tune, and deploy VLA models to real robots.

3 VLA Architectures
Open Source · Apache 2.0
Python + TS · Full Stack

How It Works

Three steps from data to deployment.

1

Collect

Import datasets from Open X-Embodiment, LeRobot, HDF5 demonstrations, and ROS bags.

2

Train

Fine-tune VLA models with LoRA, QLoRA, or full training. CLI or web dashboard.

3

Deploy

Test in MuJoCo simulation, then deploy to real robots via ROS2 adapters.
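The collect → train → deploy loop can be sketched end to end in miniature. The snippet below is a toy illustration only: it uses stand-in (observation, action) pairs and a one-parameter linear policy fit by gradient descent, not the real vlarobot API or a VLA model.

```python
# Toy sketch of the collect -> train -> deploy loop.
# Stand-in demonstrations and a 1-D linear policy; the real framework
# would load robot datasets and fine-tune a VLA model instead.

def collect():
    # Pretend demonstrations: (observation, action) pairs with action = 2 * obs.
    return [(o, 2.0 * o) for o in (0.5, 1.0, 1.5, 2.0)]

def train(dataset, lr=0.05, epochs=200):
    # Fit action = w * observation by plain stochastic gradient descent.
    w = 0.0
    for _ in range(epochs):
        for obs, act in dataset:
            pred = w * obs
            w -= lr * (pred - act) * obs  # gradient of the squared error
    return w

def deploy(policy_w, obs):
    # The "deployed policy" maps a new observation to an action.
    return policy_w * obs

data = collect()
w = train(data)
print(round(deploy(w, 3.0), 2))  # converges to w ~ 2.0, so ~6.0
```

In the actual framework, each stage would be a CLI command or dashboard action rather than a function call, but the data flow is the same.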

Supported Models

Unified interface for major VLA architectures. Adapters in development.


OpenVLA-7B

7B params · Discrete actions
Image → SigLIP + DinoV2 → Llama 2 → Actions

Discrete token action prediction via a SigLIP + DinoV2 + Llama 2 backbone.

Recommended GPU: A100 40GB

SmolVLA-450M

450M params · Flow-matching
Image → SigLIP → SmolLM → Flow → Actions

Flow-matching continuous control. Consumer-GPU friendly.

Recommended GPU: RTX 3090

Dream-VLA-7B

7B params · Diffusion
Image → Vision encoder → Diffusion LLM → Actions

Diffusion language model backbone for parallel action generation.

Recommended GPU: A100 40GB

Pi-0

7B params · Flow-matching · 50 Hz
Image → PaliGemma → Flow → Actions

Flow-matching at 50 Hz for high-frequency robot control.

Recommended GPU: A100 80GB

Why vlarobot?

Everything you need for VLA robotics in one framework.

Unified Model Interface

Single API for OpenVLA, SmolVLA, and Dream-VLA. Switch models without changing your code.
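The "switch models without changing your code" idea is the classic shared-interface pattern. The sketch below shows its shape in plain Python with a `typing.Protocol` and two dummy models; the class and method names (`VLAPolicy`, `predict_action`) are illustrative assumptions, not the actual vlarobot API.

```python
from typing import Protocol

class VLAPolicy(Protocol):
    """Hypothetical shared interface: (image, instruction) -> action vector."""
    def predict_action(self, image: bytes, instruction: str) -> list[float]: ...

class StubOpenVLA:
    # Stand-in for a discrete-token model; returns a fixed 7-DoF action.
    def predict_action(self, image: bytes, instruction: str) -> list[float]:
        return [0.0] * 7

class StubSmolVLA:
    # Stand-in for a flow-matching model; same interface, different internals.
    def predict_action(self, image: bytes, instruction: str) -> list[float]:
        return [0.1] * 7

def run_step(policy: VLAPolicy, image: bytes, instruction: str) -> list[float]:
    # Caller code is identical no matter which model backs the policy.
    return policy.predict_action(image, instruction)

for model in (StubOpenVLA(), StubSmolVLA()):
    action = run_step(model, b"frame", "pick up the cube")
    print(type(model).__name__, len(action))
```

Because `run_step` depends only on the interface, swapping OpenVLA for SmolVLA is a one-line change at construction time.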

Training Pipeline

Fine-tune VLA models with LoRA, QLoRA, or full training. CLI and web dashboard.
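The core idea behind LoRA fine-tuning is that the frozen pretrained weight W is augmented with a trainable low-rank update (alpha/r) · B·A, so only the small A and B matrices need gradients. A minimal pure-Python sketch of that forward pass, with toy hypothetical numbers (real training would of course use PyTorch tensors):

```python
# Minimal LoRA forward pass: y = W x + (alpha/r) * B (A x).
# W is frozen; only A (r x d_in) and B (d_out x r) would be trained.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(x, W, A, B, alpha, r):
    base = matvec(W, x)              # frozen pretrained path
    delta = matvec(B, matvec(A, x))  # low-rank trainable path
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Toy 3-in / 2-out layer with rank-1 adapters.
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
A = [[0.5, 0.5, 0.5]]      # r x d_in
B = [[0.0], [0.0]]         # d_out x r, initialized to zero as in LoRA
x = [1.0, 2.0, 3.0]

print(lora_forward(x, W, A, B, alpha=1.0, r=1))  # B = 0 -> exactly the base output
B = [[1.0], [2.0]]
print(lora_forward(x, W, A, B, alpha=1.0, r=1))  # base plus the low-rank update
```

Initializing B to zero means training starts from the pretrained model's behavior; QLoRA applies the same update on top of a quantized W.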

Simulation Support

MuJoCo integration for safe testing before deploying to real hardware.

Robot Adapters

Pluggable adapters for Franka, WidowX, UR5 via ROS2. Extensible to any robot.
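"Pluggable" adapters typically mean a registry: each robot implements one small interface and registers itself under a name, so core code never changes when hardware is added. A self-contained sketch of that pattern (all names here are illustrative, not vlarobot's actual classes, and real adapters would publish over ROS2 rather than return strings):

```python
from abc import ABC, abstractmethod

class RobotAdapter(ABC):
    """Hypothetical base class: every robot exposes the same send_action."""
    @abstractmethod
    def send_action(self, action: list[float]) -> str: ...

ADAPTERS: dict[str, type[RobotAdapter]] = {}

def register(name: str):
    # Decorator so new robots plug in without touching core code.
    def wrap(cls):
        ADAPTERS[name] = cls
        return cls
    return wrap

@register("franka")
class FrankaAdapter(RobotAdapter):
    def send_action(self, action):
        return f"franka <- {len(action)}-DoF command"

@register("widowx")
class WidowXAdapter(RobotAdapter):
    def send_action(self, action):
        return f"widowx <- {len(action)}-DoF command"

adapter = ADAPTERS["franka"]()          # looked up by name, e.g. from a config file
print(adapter.send_action([0.0] * 7))   # prints: franka <- 7-DoF command
```

Extending to a UR5 or a custom arm is then one new subclass plus one `@register` line.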

Benchmark Suite

Standardized evaluation framework for comparing VLA models across tasks.
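A standardized evaluation loop reduces to: run every policy on every task for a fixed number of episodes and report success rates on the same footing. A toy sketch of that harness, with stand-in callables in place of real VLA policies and simulated tasks:

```python
# Toy benchmark harness: same tasks, same episode count for every policy,
# so success rates are directly comparable. Policies here are stand-ins.

def evaluate(policies, tasks, episodes=100):
    results = {}
    for name, policy in policies.items():
        results[name] = {
            task: sum(policy(task, ep) for ep in range(episodes)) / episodes
            for task in tasks
        }
    return results

policies = {
    "always_succeeds": lambda task, ep: True,       # succeeds every episode
    "every_other": lambda task, ep: ep % 2 == 0,    # succeeds half the time
}
scores = evaluate(policies, ["pick_cube", "open_drawer"], episodes=10)
print(scores["always_succeeds"]["pick_cube"])  # 1.0
print(scores["every_other"]["open_drawer"])    # 0.5
```

In a real run, each `policy(task, ep)` call would roll out the model in MuJoCo or on hardware and return whether the task succeeded.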

Open Source

Apache 2.0 licensed. Build, fork, and contribute freely.

Trusted by the Robotics Community

Researchers, engineers, and educators building with vlarobot.


"vlarobot unified the fragmented VLA ecosystem into one clean interface. Exactly what the field needed."

— Research Lab · Robotics PhD Candidate

"We went from paper to robot in 2 days. The training pipeline and ROS2 integration saved us weeks."

— Industry Team · Robotics Startup

"My students built their first robot policy in simulation during a single lab session. Incredible for teaching."

— University · Adjunct Professor

Built For Everyone

From research lab to production floor.

Built for Researchers

Reproduce SOTA VLA papers. Benchmark your models against standardized tasks. Share checkpoints and datasets with the community. Access pre-configured training pipelines for OpenVLA, SmolVLA, Dream-VLA, and more.

3 VLA Architectures · 8 Model Presets · 7-DoF Action Space · Apache 2.0 License

Ready to Build the Future of Robotics?

Join the community building the open-source standard for VLA robotics.