# **Y**ou **O**nly **L**ook **A**t **C**oefficien**T**s
```
██╗   ██╗ ██████╗ ██╗      █████╗  ██████╗████████╗
╚██╗ ██╔╝██╔═══██╗██║     ██╔══██╗██╔════╝╚══██╔══╝
 ╚████╔╝ ██║   ██║██║     ███████║██║        ██║
  ╚██╔╝  ██║   ██║██║     ██╔══██║██║        ██║
   ██║   ╚██████╔╝███████╗██║  ██║╚██████╗   ██║
   ╚═╝    ╚═════╝ ╚══════╝╚═╝  ╚═╝ ╚═════╝   ╚═╝
```
A simple, fully convolutional model for real-time instance segmentation. This is the code for [our paper](https://arxiv.org/abs/1904.02689).
#### ICCV update (v1.1) released! Check out the ICCV trailer here:
[Watch the ICCV trailer on YouTube](https://www.youtube.com/watch?v=0pMfmo8qfpQ)
Read [the changelog](CHANGELOG.md) for details on, well, what changed. The paper has also been updated with Pascal results and an appendix with box mAP.
Some examples from our base model (33.5 fps on a Titan Xp and 29.8 mAP on COCO's `test-dev`):
# Installation
- Set up a Python3 environment.
- Install [PyTorch](http://pytorch.org/) 1.0.1 (or higher) and TorchVision.
- Install some other packages:
```Shell
# Cython needs to be installed before pycocotools
pip install cython
pip install opencv-python pillow pycocotools matplotlib
```
- Clone this repository and enter it:
```Shell
git clone https://github.com/dbolya/yolact.git
cd yolact
```
- If you'd like to train YOLACT, download the COCO dataset and the 2014/2017 annotations. Note that this script will take a while and dump 21 GB of files into `./data/coco` (a quick sanity check for the download is sketched at the end of this section).
```Shell
sh data/scripts/COCO.sh
```
- If you'd like to evaluate YOLACT on `test-dev`, download `test-dev` with this script.
```Shell
sh data/scripts/COCO_test.sh
```
# Evaluation
As of April 5th, 2019, here are our latest models along with their FPS on a Titan Xp and mAP on `test-dev`:
| Image Size | Backbone      | FPS  | mAP  | Weights | Mirror |
|:----------:|:-------------:|:----:|:----:|---------|--------|
| 550        | Resnet50-FPN  | 42.5 | 28.2 | [yolact_resnet50_54_800000.pth](https://drive.google.com/file/d/1yp7ZbbDwvMiFJEq4ptVKTYTI2VeRDXl0/view?usp=sharing) | [Mirror](https://ucdavis365-my.sharepoint.com/:u:/g/personal/yongjaelee_ucdavis_edu/EUVpxoSXaqNIlssoLKOEoCcB1m0RpzGq_Khp5n1VX3zcUw) |
| 550        | Darknet53-FPN | 40.0 | 28.7 | [yolact_darknet53_54_800000.pth](https://drive.google.com/file/d/1dukLrTzZQEuhzitGkHaGjphlmRJOjVnP/view?usp=sharing) | [Mirror](https://ucdavis365-my.sharepoint.com/:u:/g/personal/yongjaelee_ucdavis_edu/ERrao26c8llJn25dIyZPhwMBxUp2GdZTKIMUQA3t0djHLw) |
| 550        | Resnet101-FPN | 33.0 | 29.8 | [yolact_base_54_800000.pth](https://drive.google.com/file/d/1UYy3dMapbH1BnmtZU4WH1zbYgOzzHHf_/view?usp=sharing) | [Mirror](https://ucdavis365-my.sharepoint.com/:u:/g/personal/yongjaelee_ucdavis_edu/EYRWxBEoKU9DiblrWx2M89MBGFkVVB_drlRd_v5sdT3Hgg) |
| 700        | Resnet101-FPN | 23.6 | 31.2 | [yolact_im700_54_800000.pth](https://drive.google.com/file/d/1lE4Lz5p25teiXV-6HdTiOJSnS7u7GBzg/view?usp=sharing) | [Mirror](https://ucdavis365-my.sharepoint.com/:u:/g/personal/yongjaelee_ucdavis_edu/Eagg5RSc5hFEhp7sPtvLNyoBjhlf2feog7t8OQzHKKphjw) |
To evaluate the model, put the corresponding weights file in the `./weights` directory and run one of the following commands.
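For example, assuming the base model's weights were downloaded to `~/Downloads` (adjust the path for your setup), placing them would look like this:
```Shell
# Create the weights directory if it doesn't exist yet and move the checkpoint into it.
mkdir -p weights
mv ~/Downloads/yolact_base_54_800000.pth weights/
```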
## Quantitative Results on COCO
```Shell
# Quantitatively evaluate a trained model on the entire validation set. Make sure you have COCO downloaded as above.
# This should get 29.92 validation mask mAP last time I checked.
python eval.py --trained_model=weights/yolact_base_54_800000.pth
# Output a COCOEval json to submit to the website or to use the run_coco_eval.py script.
# This command will create './results/bbox_detections.json' and './results/mask_detections.json' for detection and instance segmentation respectively.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --output_coco_json
# You can run COCOEval on the files created in the previous command. The performance should match my implementation in eval.py.
python run_coco_eval.py
# To output a COCO json file for test-dev, make sure you have test-dev downloaded from above, then run:
python eval.py --trained_model=weights/yolact_base_54_800000.pth --output_coco_json --dataset=coco2017_testdev_dataset
```
## Qualitative Results on COCO
```Shell
# Display qualitative results on COCO. From here on I'll use a confidence threshold of 0.15.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --display
```
## Benchmarking on COCO
```Shell
# Run just the raw model on the first 1k images of the validation set
python eval.py --trained_model=weights/yolact_base_54_800000.pth --benchmark --max_images=1000
```
## Images
```Shell
# Display qualitative results on the specified image.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --image=my_image.png
# Process an image and save it to another file.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --image=input_image.png:output_image.png
# Process a whole folder of images.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --images=path/to/input/folder:path/to/output/folder
```
## Video
```Shell
# Display a video in real-time. "--video_multiframe" will process that many frames at once for improved performance.
# If you want, use "--display_fps" to draw the FPS directly on the frame.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --video_multiframe=4 --video=my_video.mp4
# Display a webcam feed in real-time. If you have multiple webcams pass the index of the webcam you want instead of 0.
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --video_multiframe=4 --video=0
# Process a video and save it to another file. This uses the same pipeline as the ones above now, so it's fast!
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --video_multiframe=4 --video=input_video.mp4:output_video.mp4
```
As you can tell, `eval.py` can do a ton of stuff. Run it with the `--help` flag to see everything it can do.
```Shell
python eval.py --help
```
# Training
By default, we train on COCO. Make sure to download the entire dataset using the commands above.
- To train, grab an ImageNet-pretrained model and put it in `./weights`.
  - For Resnet101, download `resnet101_reducedfc.pth` from [here](https://drive.google.com/file/d/1tvqFPd4bJtakOlmn-uIA492g2qurRChj/view?usp=sharing).
  - For Resnet50, download `resnet50-19c8e357.pth` from [here](https://drive.google.com/file/d/1Jy3yCdbatgXa5YYIdTCRrSV0S9V5g1rn/view?usp=sharing).
  - For Darknet53, download `darknet53.pth` from [here](https://drive.google.com/file/d/17Y431j4sagFpSReuPNoFcj9h7azDTZFf/view?usp=sharing).
- Run one of the training commands below.
  - Note that you can press ctrl+c while training and it will save an `*_interrupt.pth` file at the current iteration.
  - All weights are saved in the `./weights` directory by default with the file name `<config>_<epoch>_<iteration>.pth`.
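As a sketch, a typical training run with the base configuration looks like the commands below. The flag names are taken from `train.py` in this repository but are worth confirming with `python train.py --help`; the resume checkpoint name is just a placeholder for whatever file your run actually saved.
```Shell
# Train with the base (Resnet101-FPN, 550x550) configuration.
python train.py --config=yolact_base_config

# Lower the batch size if you run out of GPU memory.
python train.py --config=yolact_base_config --batch_size=4

# Resume training from a saved checkpoint; --start_iter=-1 reads the iteration from the file name.
python train.py --config=yolact_base_config --resume=weights/yolact_base_10_32100.pth --start_iter=-1
```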