Getting started

  • Installation

  • Data Preparation

  • Body Model Preparation (Optional)

  • Inference

  • Evaluation

  • Training

  • More tutorials

Installation

Please refer to installation.md for installation.

Data Preparation

Please refer to data_preparation.md for data preparation.

Body Model Preparation (Optional)

The body model is not necessary if you only want to obtain 3D keypoints (keypoints3d). If you want to infer SMPL as well, you can prepare the body model as follows.

  • SMPL v1.0.0 is used in our experiments. Please register to get access to the downloads section.

    • Download the male and female models from SMPL and the neutral model from SMPLify.

    • All body models have to be renamed to the SMPL_{GENDER}.pkl format.
      For example, mv basicModel_neutral_lbs_10_207_0_v1.0.0.pkl SMPL_NEUTRAL.pkl

  • Download smpl_mean_params.npz from here.

  • Download gmm_08.zip from the smplify-x repo, or gmm_08.pkl from the openxrlab backup.

Download the above resources and arrange them in the following file structure:

xrmocap
├── xrmocap
├── docs
├── tests
├── tools
├── configs
└── xrmocap_data
    └── body_models
        ├── gmm_08.pkl
        ├── smpl_mean_params.npz
        └── smpl
            ├── SMPL_FEMALE.pkl
            ├── SMPL_MALE.pkl
            └── SMPL_NEUTRAL.pkl
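
If you want to double-check that the files are arranged correctly, the third-party smplx package can load them. This is a minimal, optional sanity-check sketch, assuming smplx (and its PyTorch dependency) is installed; XRMoCap has its own body model loader, so this step is not required:

# Optional sanity check: load the neutral body model with the third-party
# `smplx` package (pip install smplx). Not required by XRMoCap itself.
import smplx

body_model = smplx.create(
    'xrmocap_data/body_models',  # root folder that contains smpl/
    model_type='smpl',
    gender='neutral')            # looks for smpl/SMPL_NEUTRAL.pkl
output = body_model()            # forward pass with default parameters
print(output.vertices.shape)     # torch.Size([1, 6890, 3]) for SMPL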

Inference

We provide a demo script to estimate SMPL parameters for single or multiple people from multi-view, synchronized input images or videos. With this demo script, you only need to choose a method (we currently support two types: optimization-based approaches and end-to-end learning algorithms) and specify a few arguments to get the estimated results.

We assume that the cameras have been calibrated. For more details about camera calibration, please refer to XRPrimer.
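
To make "calibrated" concrete: each view comes with an intrinsic matrix K and world-to-camera extrinsics R, t, so a world point X projects to pixel coordinates via x ~ K(RX + t). Below is a minimal numpy sketch with made-up example values; it is illustrative only, not XRMoCap's camera API:

import numpy as np

# Pinhole projection: x ~ K (R @ X + t), with example calibration values.
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 384.0],
              [0.0, 0.0, 1.0]])     # intrinsic matrix
R = np.eye(3)                       # world-to-camera rotation
t = np.array([0.0, 0.0, 3.0])       # world-to-camera translation
X = np.array([0.1, -0.2, 0.5])      # a 3D point in world coordinates

x = K @ (R @ X + t)                 # homogeneous image coordinates
uv = x[:2] / x[2]                   # pixel coordinates
print(uv)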

Perception Model

Prepare the perception models, including detection, 2D pose estimation, tracking and CamStyle models.

sh scripts/download_weight.sh

Once downloaded, you can find the perception models under the weight directory.

Single Person

Currently, we only provide an optimization-based method for single-person estimation.

  1. Download the body model. Please refer to Body Model Preparation.

  2. Download an example raw .smc file from the HuMMan dataset here.

  3. Place the .smc file in xrmocap_data/humman/.

  4. Run process_smc tool.

python tools/process_smc.py \
	--estimator_config configs/humman_mocap/mview_sperson_smpl_estimator.py \
	--smc_path xrmocap_data/humman/p000455_a000986.smc \
	--output_dir xrmocap_data/humman/p000455_a000986_output \
	--visualize

Multiple People

A small test dataset for a quick demo can be downloaded here. It contains 50 frames from the Shelf sequence, with 5 camera views calibrated and synchronized.

Optimization-based methods

Optimization-based approaches associate 2D keypoints across views and generate 3D keypoints by triangulation or other methods; a minimal sketch of the triangulation step follows. Taking MVPose as an example, it can be run with the steps below.
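
The following is a minimal linear (DLT) triangulation sketch in numpy, showing how one 3D point can be recovered from its 2D observations in several calibrated views. It is illustrative only; XRMoCap ships its own triangulation code:

import numpy as np

def triangulate_point(projections, points_2d):
    """Linear (DLT) triangulation of a single 3D point.

    projections: list of 3x4 projection matrices P = K [R | t], one per view
    points_2d:   list of (u, v) pixel observations, one per view
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        # Each observation contributes two linear constraints on X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Solution: the right singular vector with the smallest singular value.
    _, _, vh = np.linalg.svd(A)
    X = vh[-1]
    return X[:3] / X[3]  # dehomogenize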

  1. Download data and body model

  • download data

mkdir xrmocap_data
wget https://openxrlab-share-mainland.oss-cn-hangzhou.aliyuncs.com/xrmocap/example_resources/Shelf_50.zip -P xrmocap_data
cd xrmocap_data/ && unzip -q Shelf_50.zip && rm Shelf_50.zip && cd ..

  • download body model

In this demo, smplify in the config file is not None, so you will get SMPL models as well. Please download the body model and refer to Body Model Preparation for details.

  2. Run demo

python tools/mview_mperson_topdown_estimator.py \
      --estimator_config 'configs/mvpose_tracking/mview_mperson_topdown_estimator.py' \
      --image_and_camera_param 'xrmocap_data/Shelf_50/image_and_camera_param.txt' \
      --start_frame 300 \
      --end_frame 350 \
      --output_dir 'output/estimation' \
      --enable_log_file

If everything is configured correctly, you will find the results in output_dir.
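
For a quick look at what was exported, the dumped arrays can be opened with numpy. A minimal sketch; the file name below is hypothetical, so list output_dir first to see the actual names:

import numpy as np

# Hypothetical file name -- check output/estimation for the real ones.
data = np.load('output/estimation/pred_keypoints3d.npz', allow_pickle=True)
print(data.files)  # inspect which arrays were saved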

Learning-based methods

Learning-based methods adopt an end-to-end learning scheme and therefore require training before inference. Taking Multi-view Pose Transformer (MvP) as an example, we can download a pretrained MvP model and run it on Shelf_50 as follows:

  1. Install the Deformable package by running the script:

sh scripts/download_install_deformable.sh

  2. Download data and pretrained model

# download data
mkdir -p xrmocap_data
wget https://openxrlab-share-mainland.oss-cn-hangzhou.aliyuncs.com/xrmocap/example_resources/Shelf_50.zip -P xrmocap_data
cd xrmocap_data/ && unzip -q Shelf_50.zip && rm Shelf_50.zip && cd ..

# download pretrained model
mkdir -p weight/mvp
wget https://openxrlab-share-mainland.oss-cn-hangzhou.aliyuncs.com/xrmocap/weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth -P weight/mvp

  3. Run demo with Shelf_50

# Evaluation
sh ./scripts/eval_mvp.sh 1 configs/mvp/shelf_config/mvp_shelf_50.py weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth

If everything is configured correctly, you will see the evaluation results in the terminal.

# Estimation: Visualization of predicted keypoints3d and SMPL
python tools/mview_mperson_end2end_estimator.py \
    --output_dir ./output/estimation \
    --model_dir weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth \
    --estimator_config configs/modules/core/estimation/mview_mperson_end2end_estimator.py \
    --image_and_camera_param ./xrmocap_data/Shelf_50/image_and_camera_param.txt \
    --start_frame 300 \
    --end_frame 350  \
    --enable_log_file

If everything is configured correctly, you will find the estimation results in output_dir.

For detailed tutorials about dataset preparation and downloading model weights and checkpoints for learning-based methods, please refer to the evaluation tutorial and the estimator tutorial.

Evaluation

Perception Model

Prepare the perception models, including detection, 2D pose estimation, tracking and CamStyle models.

sh scripts/download_weight.sh

Evaluate with a single GPU / multiple GPUs

Optimization-based methods

  1. Download data and body model

  • download Shelf dataset and meta-data

# download Shelf dataset (16G)
mkdir xrmocap_data
wget https://www.campar.in.tum.de/public_datasets/2014_cvpr_belagiannis/Shelf.tar.bz2 -P xrmocap_data
cd xrmocap_data/ && tar -xf Shelf.tar.bz2 && rm Shelf.tar.bz2 && cd ..

# download meta-data
mkdir -p xrmocap_data/Shelf
wget https://openxrlab-share-mainland.oss-cn-hangzhou.aliyuncs.com/xrmocap/xrmocap_meta/Shelf/xrmocap_meta_testset_fasterrcnn.zip -P xrmocap_data/Shelf
cd xrmocap_data/Shelf && unzip xrmocap_meta_testset_fasterrcnn.zip && rm xrmocap_meta_testset_fasterrcnn.zip && cd ../..

  • download body model

In this evaluation, smplify in the config file is not None, so you will get SMPL models as well. Please download the body model and refer to Body Model Preparation for details.

  2. Run demo

  • Evaluate on the Shelf dataset and run the tool without tracking.

python tools/mview_mperson_evaluation.py \
      --enable_log_file \
      --evaluation_config configs/mvpose/shelf_config/eval_keypoints3d.py
  • Evaluate on the Shelf dataset and run the tool with tracking.

python tools/mview_mperson_evaluation.py \
      --enable_log_file \
      --evaluation_config configs/mvpose_tracking/shelf_config/eval_keypoints3d.py

More details about dataset preparation and evaluation can be found at MVPose evaluation or MVPose tracking evaluation.

Learning-based methods

  1. Download and install the Deformable package (Skip if you have done this step before)

Run the script:

sh scripts/download_install_deformable.sh

  2. Download the dataset and pretrained model, taking the Shelf dataset as an example:

# download Shelf dataset (16G)
mkdir -p xrmocap_data
wget https://www.campar.in.tum.de/public_datasets/2014_cvpr_belagiannis/Shelf.tar.bz2 -P xrmocap_data
cd xrmocap_data/ && tar -xf Shelf.tar.bz2 && rm Shelf.tar.bz2 && cd ..

# download meta data
mkdir -p xrmocap_data
wget https://openxrlab-share-mainland.oss-cn-hangzhou.aliyuncs.com/xrmocap/xrmocap_meta/Shelf/xrmocap_meta_testset.zip -P xrmocap_data
cd xrmocap_data/ && unzip xrmocap_meta_testset.zip && rm xrmocap_meta_testset.zip && mv xrmocap_meta_testset ./Shelf && cd ..

# download pretrained model
mkdir -p weight/mvp
wget https://openxrlab-share-mainland.oss-cn-hangzhou.aliyuncs.com/xrmocap/weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth -P weight/mvp

  3. Run the evaluation:

sh ./scripts/eval_mvp.sh 8 configs/mvp/shelf_config/mvp_shelf.py weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth

Evaluate with Slurm

If you run XRMoCap on a cluster managed by Slurm, you can use the script scripts/slurm_eval_mvp.sh.

sh ./scripts/slurm_eval_mvp.sh ${PARTITION} 8 configs/mvp/shelf_config/mvp_shelf.py weight/mvp/xrmocap_mvp_shelf-22d1b5ed_20220831.pth

For learning-based methods, more details about dataset preparation, downloading model weights and checkpoints, and evaluation can be found in the evaluation tutorial.

Training

Training is only applicable to learning-based methods.

Training with a single GPU / multiple GPUs

To train a learning-based model such as MvP, first prepare the datasets and pre-trained weights:

  1. Download and install the Deformable package (Skip if you have done this step before)

Run the script:

sh scripts/download_install_deformable.sh

  2. Download the dataset and pretrained models, taking the Shelf dataset as an example:

# download Shelf dataset (16G)
mkdir -p xrmocap_data
wget https://www.campar.in.tum.de/public_datasets/2014_cvpr_belagiannis/Shelf.tar.bz2 -P xrmocap_data
cd xrmocap_data/ && tar -xf Shelf.tar.bz2 && rm Shelf.tar.bz2 && cd ..

# download meta data
mkdir -p xrmocap_data
wget https://openxrlab-share-mainland.oss-cn-hangzhou.aliyuncs.com/xrmocap/xrmocap_meta/Shelf/xrmocap_meta_trainset_pesudo_gt.zip -P xrmocap_data
cd xrmocap_data/ && unzip xrmocap_meta_trainset_pesudo_gt.zip && rm xrmocap_meta_trainset_pesudo_gt.zip && mv xrmocap_meta_trainset_pesudo_gt ./Shelf && cd ..

# download the pretrained 5-view panoptic model to finetune with the Shelf dataset
mkdir -p weight/mvp
wget https://openxrlab-share-mainland.oss-cn-hangzhou.aliyuncs.com/xrmocap/weight/mvp/xrmocap_mvp_panoptic_5view-1b673cdf_20220831.pth -P weight/mvp

  3. Run the training:

sh ./scripts/train_mvp.sh 8 configs/mvp/shelf_config/mvp_shelf.py

Training with Slurm

If you run XRMoCap on a cluster managed by Slurm, you can use the script scripts/slurm_train_mvp.sh.

sh ./scripts/slurm_train_mvp.sh ${PARTITION} 8 configs/mvp/shelf_config/mvp_shelf.py

For learning-based methods, more details about dataset preparation, downloading model weights and checkpoints, and training can be found in the training tutorial.