RTMO-ORT: A Tiny ONNX Runtime Wrapper for RTMO from MMPose - RTMO in Minutes
- Namas Bhandari
- Aug 30
- 1 min read

I put together rtmo-ort, a tiny wrapper that runs RTMO (MMPose) with pure ONNX Runtime. No training stack, no giant dependencies—just install, grab the models, and run on an image, video, or webcam.
Repo: https://github.com/namas191297/rtmo-ort
PyPI: https://pypi.org/project/rtmo-ort/
What it is
A small Python package with three CLIs: rtmo-image, rtmo-video, rtmo-webcam.
Presets for model size (tiny/small/medium/large) and dataset (coco/crowdpose/body7).
A helper script to download the ONNX models into the expected layout.
It’s meant for quick demos, PoCs, and handing someone a working pose script that “just runs.”
Install & run (quick start)
Option A - pip (CPU):
pip install -U pip
pip install rtmo-ort[cpu]
Option B - pip (GPU):
pip install rtmo-ort[gpu]
Option C - Conda:
conda create -n rtmo-ort python=3.9
conda activate rtmo-ort
pip install -U pip
pip install rtmo-ort[cpu]
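After installing, you can ask ONNX Runtime directly which execution providers your build supports. This is a quick sanity check, not part of rtmo-ort itself; a GPU install should list CUDAExecutionProvider, while the CPU build reports only CPUExecutionProvider:

import onnxruntime as ort

# Print the execution providers this ONNX Runtime build can use.
print(ort.get_available_providers())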
Get the models (required):
Clone the repository from: https://github.com/namas191297/rtmo-ort
Recommended (from repo root):
./get_models.sh
Manual:
Download the .onnx files from the GitHub Releases and place them under models/ with the expected layout:
mkdir -p models/rtmo_s_640x640_coco
curl -L -o models/rtmo_s_640x640_coco/rtmo_s_640x640_coco.onnx \
  "<asset URL from the GitHub Releases page>"
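Once the file is in place, a minimal sketch like the following (using the small/COCO path from the layout above) confirms the model loads in ONNX Runtime before you reach for the CLIs:

import onnxruntime as ort

# Load the downloaded model with the CPU provider; swap in
# "CUDAExecutionProvider" if you installed the GPU extra.
sess = ort.InferenceSession(
    "models/rtmo_s_640x640_coco/rtmo_s_640x640_coco.onnx",
    providers=["CPUExecutionProvider"],
)

# Print the input/output signatures to confirm the export
# matches the expected layout.
for i in sess.get_inputs():
    print("input:", i.name, i.shape, i.type)
for o in sess.get_outputs():
    print("output:", o.name, o.shape, o.type)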
CLI
Run webcam:
rtmo-webcam --model-type small --dataset coco --device cpu
Run image:
rtmo-image --model-type small --dataset coco --input assets/demo.jpg --output out.jpg
Run video:
rtmo-video --model-type medium --dataset coco --input input.mp4 --output out.mp4
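The CLIs also compose well with scripting. As a sketch, here is one way to batch-process a folder of images by shelling out to rtmo-image with the same flags as the quick-start examples (the frames/ and outputs/ paths are made up for illustration):

import pathlib
import subprocess

in_dir = pathlib.Path("frames")    # hypothetical input folder
out_dir = pathlib.Path("outputs")  # hypothetical output folder
out_dir.mkdir(exist_ok=True)

# Run rtmo-image once per .jpg in the folder.
for img in sorted(in_dir.glob("*.jpg")):
    subprocess.run(
        [
            "rtmo-image",
            "--model-type", "small",
            "--dataset", "coco",
            "--input", str(img),
            "--output", str(out_dir / img.name),
        ],
        check=True,
    )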
Why this helps
Low friction: skip framework setup when all you need is inference.
Portable: ONNX Runtime on CPU or GPU, works in minimal environments.
Practical defaults: sensible presets, optional flags when you need them.
Credits
All model and training credit goes to OpenMMLab / MMPose. This project is a thin runner on top of their work (Apache-2.0).