<p align="center">
<img src="demo/logo.png" width="200" height="100">
</p>
## LightTrack: A Generic Framework for Online Top-Down Human Pose Tracking
### Update 5/16/2019: Add Camera Demo
[[Project Page](http://guanghan.info/projects/LightTrack)] [[Paper](https://arxiv.org/pdf/1905.02822.pdf)] [[Github](http://github.com/Guanghan/lighttrack)]
[[PoseTrack 2017 Leaderboard](https://paperswithcode.com/sota/pose-tracking-on-posetrack2017?p=lighttrack-a-generic-framework-for-online-top)]
**With the provided code, you can easily:**
- **Perform online pose tracking on live webcam.**
- **Perform online pose tracking on arbitrary videos.**
- **Replicate ablation study experiments on PoseTrack'18 Validation Set.**
- **Train models on your own.**
- **Replace pose estimators or improve data association modules for future research.**
#### Real-life Application Scenarios:
- [Surveillance](https://youtu.be/P9Bzs3cSF-w) / [Sport analytics](https://youtu.be/PZVGYmr7Ryk) / Security / Self-driving / Selfie video / Short videos (Douyin, Tiktok, etc.)
## Table of Contents
- [LightTrack](#lighttrack-a-generic-framework-for-online-top-down-human-pose-tracking)
* [Table of Contents](#table-of-contents)
* [Overview](#overview)
* [Prerequisites](#prerequisites)
* [Getting Started](#getting-started)
* [Demo on Live Camera](#demo-on-live-camera)
* [Demo on Arbitrary Videos](#demo-on-arbitrary-videos)
* [Validate on PoseTrack 2018](#validate-on-posetrack-2018)
* [Evaluation on PoseTrack 2018](#evaluation-on-posetrack-2018)
* [Qualitative Results](#qualitative-results-on-posetrack)
* [Quantitative Results](#quantitative-results-on-posetrack)
* [Performance on PoseTrack 2017 Benchmark (Test Set)](#quantitative-results-on-posetrack)
* [Training](#training)
* [Pose Estimation Module](#1-pose-estimation-module)
* [Pose Matching Module: SGCN](#2-pose-matching-module)
* [Limitations](#limitations)
* [Citation](#citation)
* [Reference](#reference)
* [Contact](#contact)
## Overview
LightTrack is an effective, lightweight framework for human pose tracking that is truly online and generic to top-down pose tracking.
The code for [the paper](https://arxiv.org/pdf/1905.02822.pdf) includes the LightTrack framework as well as its replaceable component modules: detector, pose estimator, and matcher. Much of the module code borrows or adapts from [Cascaded Pyramid Networks](https://github.com/chenyilun95/tf-cpn) [[1]], [PyTorch-YOLOv3](https://github.com/eriklindernoren/PyTorch-YOLOv3), [st-gcn](https://github.com/yysijie/st-gcn) and [OpenSVAI](https://doc-opensvai.readthedocs.io/en/latest/) [[3]].

In contrast to **Visual Object Tracking** (VOT) methods, in which the visual features are implicitly represented by kernels or CNN feature maps, we track each human pose by recursively updating the bounding box and its corresponding pose in an explicit manner. The bounding box region of a target is inferred from the explicit features, i.e., the human keypoints. Human keypoints can be considered as a series of special visual features.
The advantages of using pose as explicit features include:
* (1) The explicit features are human-related and interpretable, and have a very strong and stable relationship with the bounding box position. Human pose enforces a direct constraint on the bounding box region.
* (2) The task of pose estimation and tracking requires human keypoints to be predicted in the first place. Re-using the predicted keypoints to track the region of interest is therefore almost free, and this mechanism is what makes online tracking possible.
* (3) Pose features naturally keep the identity of the candidates, which greatly alleviates the burden of data association in the system. Even when data association is necessary, we can re-use the pose features for skeleton-based pose matching.
(Here we adopt **Siamese Graph Convolutional Networks (SGCN)** for efficient identity association.)
**Single Pose Tracking** (SPT) and **Single Visual Object Tracking** (VOT) are thus incorporated into one unified functioning entity, easily implemented by a replaceable single-person human pose estimation module.
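As an illustration of how a tracklet's bounding box can be inferred from keypoints, here is a minimal sketch (not the repository's exact implementation; the 20% enlargement margin is an assumed value):

```python
def bbox_from_keypoints(keypoints, margin=0.2):
    """Infer a bounding box (x_min, y_min, x_max, y_max) from the visible
    keypoints of one person, enlarged by a relative margin so the search
    region in the next frame covers the whole body."""
    xs = [x for x, y, visible in keypoints if visible]
    ys = [y for x, y, visible in keypoints if visible]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    dx = (x_max - x_min) * margin  # enlarge horizontally
    dy = (y_max - y_min) * margin  # enlarge vertically
    return (x_min - dx, y_min - dy, x_max + dx, y_max + dy)
```

This is the sense in which the keypoints serve as explicit features: the tracked region follows directly from them, with no extra feature extraction.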
Below is a simple step-by-step explanation of how the LightTrack framework works.

(1). Detection only at the first frame. Blue bboxes indicate tracklets inferred from keypoints.

(2). Detection at every 10th frame. The red bbox indicates a keyframe detection.

(3). Detection at every 10th frame for multi-person tracking:
* At non-keyframes, IDs are naturally kept for each person;
* At keyframes, IDs are associated via spatial consistency.
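The keyframe step can be sketched as a greedy spatial-consistency match between new detections and the previous frame's tracklets (a simplified illustration under assumed names; the actual system also falls back to SGCN pose matching when spatial overlap is ambiguous):

```python
def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def associate_ids(prev_tracks, detections, thresh=0.3):
    """Greedily give each keyframe detection the ID of the spatially
    closest previous tracklet; unmatched detections get fresh IDs
    (new persons entering the scene).

    prev_tracks: {track_id: box}, detections: [box, ...]
    Returns {detection_index: track_id}."""
    next_id = max(prev_tracks, default=-1) + 1
    assigned, used = {}, set()
    for det_idx, box in enumerate(detections):
        best_id, best_iou = None, thresh
        for tid, tbox in prev_tracks.items():
            if tid in used:
                continue
            score = iou(box, tbox)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is None:  # no tracklet overlaps enough: new person
            best_id = next_id
            next_id += 1
        used.add(best_id)
        assigned[det_idx] = best_id
    return assigned
```

At non-keyframes this matching is skipped entirely, since each pose is propagated from its own tracklet and keeps its ID for free.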
For more technical details, please refer to our arXiv paper.
## Prerequisites
- Set up a Python 3 environment with the provided Anaconda environment file.
```Shell
# This anaconda environment should contain everything needed, including tensorflow, pytorch, etc.
conda env create -f environment.yml
```
### (Optional: set up the environment on your own)
- Install [PyTorch](http://pytorch.org/) 1.0.0 (or higher) and TorchVision (required by the Siamese Graph Convolutional Network).
- Install [TensorFlow](https://www.tensorflow.org/install) 1.12; TensorFlow 2.0 has not been tested yet (required by the human pose estimator).
- Install some other packages:
```Shell
pip install cython opencv-python pillow matplotlib
```
## Getting Started
- Clone this repository and enter the ~~dragon~~ lighttrack folder:
```Shell
git clone https://github.com/Guanghan/lighttrack.git;
# build some necessities
cd lighttrack/lib;
make;
cd ../graph/torchlight;
python setup.py install
# enter lighttrack
cd ../../
```
- If you'd like to train LightTrack, first download the [COCO dataset](http://cocodataset.org/#download) and the [PoseTrack dataset](https://posetrack.net/users/download.php). Note that the COCO script will take a while and dump 21 GB of files into `./data/coco`. With the PoseTrack dataset you can replicate our ablation experiment results on the validation set; to submit your test results to the server, you will need to register at the official website and create entries there.
```Shell
sh data/download_coco.sh
sh data/download_posetrack17.sh
sh data/download_posetrack18.sh
```
### Demo on Live Camera
| PoseTracking Framework | Keyframe Detector | Keyframe ReID Module | Pose Estimator | FPS |
|:----------:|:-----------:|:--------------:|:----------------:|:---------:|
| LightTrack | YOLOv3 | Siamese GCN | MobileNetv1-Deconv | 220* / 15 |
- Download weights.
```Shell
cd weights;
bash ./download_weights.sh # download weights for backbones (only for training), detectors, pose estimators, pose matcher, etc.
cd -;
```
- Perform pose tracking demo on your Webcam.
```Shell
# access virtual environment
source activate py36;
# Perform LightTrack demo (on camera) with light-weight detector and pose estimator
python demo_camera_mobile.py
```
### Demo on Arbitrary Videos
| PoseTracking Framework | Keyframe Detector | Keyframe ReID Module | Pose Estimator | FPS |
|:----------:|:-----------:|:--------------:|:----------------:|:---------:|
| LightTrack | YOLOv3 | Siamese GCN | MobileNetv1-Deconv | 220* / 15 |
- Download demo video.
```Shell
cd data/demo;
bash ./download_demo_video.sh # download the video for demo; you could later replace it with your own video for fun
cd -;
```
- Perform online tracking demo.
```Shell
# access virtual environment
source activate py36;
# Perform LightTrack demo (on arbitrary video) with light-weight detector and pose estimator
python demo_video_mobile.py
```
- After processing, pose tracking results are stored as JSON files in the standardized OpenSVAI format, located at [**data/demo/jsons/**].
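The result JSON can be post-processed with the standard library. The field names below (`candidates`, `track_id`) are assumptions about the OpenSVAI layout made for illustration — check a generated file for the exact schema:

```python
import json

def track_lengths(json_path):
    """Count how many frames each track ID appears in.
    Field names ("candidates", "track_id") are assumed; verify them
    against a JSON file actually produced by the demo."""
    with open(json_path) as f:
        frames = json.load(f)  # assumed: one entry per frame
    counts = {}
    for frame in frames:
        for cand in frame.get("candidates", []):
            tid = cand.get("track_id")
            counts[tid] = counts.get(tid, 0) + 1
    return counts
```

A quick histogram like this is handy for spotting fragmented tracks (many short IDs) before submitting results for evaluation.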
- Visualized images and videos are also generated during processing.