Train and deploy Vision-Language-Action (VLA) models for robotic imitation learning
Physical AI Studio is an end-to-end framework for teaching robots to perform tasks through imitation learning from human demonstrations.
- **End-to-End Pipeline** - From demonstration recording to robot deployment
- **State-of-the-Art Policies** - Native policy implementations such as ACT, Pi0, SmolVLA, GR00T, and Pi0.5, plus the full LeRobot policy zoo
- **Flexible Interface** - Use the Python API, CLI, or GUI
- **Production Export** - Deploy to OpenVINO, ONNX, or Torch for any hardware
- **Standardized Benchmarks** - Evaluate on benchmarks such as LIBERO and PushT
- **Built on Lightning** - PyTorch Lightning for distributed training, mixed precision, and more
For users who prefer a visual interface for the end-to-end workflow:
Run the full application (backend + UI) in a single container using Docker:

```shell
# Clone the repository
git clone https://github.com/open-edge-platform/physical-ai-studio.git
cd physical-ai-studio

# Set up and run the Docker services
cd application/docker
cp .env.example .env
docker compose --profile xpu up  # or use --profile cuda, --profile cpu
```

The application runs at http://localhost:7860. See the Docker README for hardware configuration (Intel XPU, NVIDIA CUDA) and device setup.
If you plan to train Hugging Face Hub-backed policies (for example, SmolVLA, Pi0, and others), configure `HF_TOKEN` to avoid unauthenticated Hub access warnings. See Hugging Face Integration.
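One way to provide the token in the Docker setup is to append it to the `.env` file created above (the token value here is a placeholder, not a real token):

```shell
# Append a Hugging Face token to the Docker env file
# (hf_your_token_here is a placeholder; substitute your own token)
echo 'HF_TOKEN=hf_your_token_here' >> .env
```

Docker Compose reads variables from a `.env` file in the compose directory by default, so the services pick the token up on the next `docker compose up`.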
Run the application in development mode, using the uv package manager and Node.js v24 (we recommend using nvm):

```shell
# Clone the repository
git clone https://github.com/open-edge-platform/physical-ai-studio.git
cd physical-ai-studio

# Install and run the backend
cd application/backend && uv sync --extra xpu  # or --extra cpu, --extra cuda
./run.sh

# In a new terminal: install and run the UI
cd application/ui
nvm use
npm install

# Fetch the API spec from the backend, generate the types, and start the frontend
npm run build:api:download && npm run build:api && npm run start
```

Open http://localhost:3000 in your browser.
If you plan to train Hugging Face Hub-backed policies (for example, SmolVLA, Pi0, and others), configure `HF_TOKEN` in your backend environment. See Hugging Face Integration.
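In development mode, one simple option (an assumption, not the only supported mechanism) is to export the variable in the shell that launches the backend, before running `./run.sh` (the token value is a placeholder):

```shell
# Placeholder token value; substitute your own from huggingface.co/settings/tokens
export HF_TOKEN=hf_your_token_here
```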
For programmatic control over training, benchmarking, and deployment, use the Python API or the CLI.

```shell
pip install physicalai-train
```

### Training

```python
from physicalai.data import LeRobotDataModule
from physicalai.policies import ACT
from physicalai.train import Trainer

datamodule = LeRobotDataModule(repo_id="lerobot/aloha_sim_transfer_cube_human")
model = ACT()
trainer = Trainer(max_epochs=100)
trainer.fit(model=model, datamodule=datamodule)
```
### Benchmark

```python
from physicalai.benchmark import LiberoBenchmark
from physicalai.policies import ACT

policy = ACT.load_from_checkpoint("experiments/lightning_logs/version_0/checkpoints/last.ckpt")
benchmark = LiberoBenchmark(task_suite="libero_10", num_episodes=20)
results = benchmark.evaluate(policy)
print(f"Success rate: {results.aggregate_success_rate:.1f}%")
```
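For reading the metric printed above: a success rate of this kind is the fraction of successful episodes expressed as a percentage. A hypothetical helper (not the library's implementation) makes the arithmetic explicit:

```python
def aggregate_success_rate(successes: list[bool]) -> float:
    """Percentage of episodes that succeeded (hypothetical helper)."""
    if not successes:
        return 0.0
    return 100.0 * sum(successes) / len(successes)

# 20 episodes, 13 successes
episodes = [True] * 13 + [False] * 7
print(f"Success rate: {aggregate_success_rate(episodes):.1f}%")  # Success rate: 65.0%
```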
### Export

```python
from physicalai.export import get_available_backends
from physicalai.policies import ACT

# See available backends
print(get_available_backends())  # ['onnx', 'openvino', 'torch', 'executorch']

# Export to OpenVINO
policy = ACT.load_from_checkpoint("experiments/lightning_logs/version_0/checkpoints/last.ckpt")
policy.export("./policy", backend="openvino")
```
### Inference

```python
from physicalai.inference import InferenceModel

policy = InferenceModel.load("./policy")

obs, info = env.reset()
done = False
while not done:
    action = policy.select_action(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```
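The loop above assumes a Gymnasium-style `env` already exists. As a minimal, self-contained sketch of that reset/step contract, here is a hypothetical stand-in environment and policy (stdlib only, not the physicalai API):

```python
# Hypothetical stand-in objects illustrating the Gymnasium-style
# reset/step contract used by the inference loop above.
class ToyEnv:
    """Counts steps; terminates after 3 steps."""
    def reset(self):
        self.t = 0
        return {"t": self.t}, {}  # (observation, info)

    def step(self, action):
        self.t += 1
        terminated = self.t >= 3   # task-level end condition
        truncated = False          # time-limit cutoff (unused here)
        return {"t": self.t}, 0.0, terminated, truncated, {}

class ToyPolicy:
    """Always returns the same dummy action."""
    def select_action(self, obs):
        return 0

env, policy = ToyEnv(), ToyPolicy()
obs, info = env.reset()
done = False
steps = 0
while not done:
    action = policy.select_action(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    steps += 1
print(steps)  # 3
```

Any environment exposing `reset()` and `step()` with these return shapes can drive an `InferenceModel` the same way.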
### CLI Usage

```shell
# Train
physicalai fit --config configs/physicalai/act.yaml

# Evaluate
physicalai benchmark --config configs/benchmark/libero.yaml --ckpt_path model.ckpt

# Export (Python API only - CLI coming soon)
# Use: policy.export("./policy", backend="openvino")
```

| Resource | Description |
|---|---|
| Library Docs | API reference, guides, and examples |
| Application Docs | GUI setup and usage |
| Contributing | Contributing and development setup |
We welcome contributions! See CONTRIBUTING.md for guidelines.

