# `libvmaf`
## Prerequisites
For building, you need the following:
- [Python3](https://www.python.org/download/releases/3.0/) (3.6 or higher)
- [Meson](https://mesonbuild.com/) (0.47 or higher)
- [Ninja](https://ninja-build.org/) (1.7.1 or higher)
- [NASM](https://www.nasm.us/) (for x86 builds only, 2.13.02 or higher)
Follow the steps below to set up a clean virtual environment and install the tools:
```
python3 -m pip install virtualenv
python3 -m virtualenv .venv
source .venv/bin/activate
pip install meson
sudo [package-manager] install nasm ninja doxygen
```
Replace `[package-manager]` with the one for your system: `apt-get` for Ubuntu and Debian, `yum` for CentOS and RHEL, `dnf` for Fedora, `zypper` for openSUSE, or `brew` for macOS (omit `sudo` with `brew`).
## Compile
Run:
```
meson build --buildtype release
```
(Add the `-Denable_float=true` flag in the rare case that you want to use the floating-point feature extractors.)
Build with:
```
ninja -vC build
```
## Test
Build and run tests with:
```
ninja -vC build test
```
## Install
Install the library, headers, and the `vmaf` command line tool:
```
ninja -vC build install
```
This will install the following files:
```
├── bin
│   └── vmaf
├── include
│   └── libvmaf
│       ├── compute_vmaf.h
│       ├── feature.h
│       ├── libvmaf.h
│       ├── model.h
│       ├── picture.h
│       └── version.h
└── lib
    ├── libvmaf.1.dylib
    ├── libvmaf.a
    ├── libvmaf.dylib -> libvmaf.1.dylib
    └── pkgconfig
        └── libvmaf.pc
```
## Documentation
Generate HTML documentation with:
```
ninja -vC build doc/html
```
## VMAF Models
`libvmaf` now has a number of VMAF models built-in. This means that no external VMAF model files are required, since the models are compiled into and read directly from the library. If you do not wish to compile the built-in models into your build, you may disable them with `-Dbuilt_in_models=false`. Previous versions of this library required a `.pkl` model file. Since libvmaf v2.0.0, these `.pkl` model files have been deprecated in favor of `.json` model files. If you have a previously trained `.pkl` model you would like to convert to `.json`, this [Python conversion script](../python/vmaf/script/convert_model_from_pkl_to_json.py) is available.
## `vmaf`
A command line tool called `vmaf` is included as part of the build/installation. See the `vmaf` [README.md](tools/README.md) for details. An older command line tool (`vmafossexec`) is still part of the build but is not part of the installation. `vmafossexec` will be removed in a future version of this library.
## API Walkthrough
Create a `VmafContext` with `vmaf_init()`. `VmafContext` is an opaque type, and `VmafConfiguration` is an options struct used to initialize the context. Be sure to clean up the `VmafContext` with `vmaf_close()` when you are done with it.
```c
int vmaf_init(VmafContext **vmaf, VmafConfiguration cfg);
int vmaf_close(VmafContext *vmaf);
```
Calculating a VMAF score requires a VMAF model. The next step is to create a `VmafModel`. There are a few ways to get a `VmafModel`. Use `vmaf_model_load()` when you would like to load one of the default built-in models. Use `vmaf_model_load_from_path()` when you would like to read a model file from a filesystem. After you are done using the `VmafModel`, clean it up with `vmaf_model_destroy()`.
```c
int vmaf_model_load(VmafModel **model, VmafModelConfig *cfg,
                    const char *version);
int vmaf_model_load_from_path(VmafModel **model, VmafModelConfig *cfg,
                              const char *path);
void vmaf_model_destroy(VmafModel *model);
```
A VMAF score is a fusion of several elementary features, which are specified by a model file. The next step is to register all feature extractors required by your model or models with `vmaf_use_features_from_model()`. If there are auxiliary metrics (e.g. `PSNR`) you would also like to extract, use `vmaf_use_feature()` to register them directly.
```c
int vmaf_use_features_from_model(VmafContext *vmaf, VmafModel *model);
int vmaf_use_feature(VmafContext *vmaf, const char *feature_name,
                     VmafFeatureDictionary *opts_dict);
```
VMAF is a full-reference metric, meaning it is calculated on pairs of reference/distorted pictures. To allocate a `VmafPicture` use `vmaf_picture_alloc`. After allocation, you may fill the buffers with pixel data.
```c
int vmaf_picture_alloc(VmafPicture *pic, enum VmafPixelFormat pix_fmt,
                       unsigned bpc, unsigned w, unsigned h);
```
Read all of your input pictures in a loop with `vmaf_read_pictures()`. When you are done reading pictures, some feature extractors may have internal buffers that still need to be flushed. Call `vmaf_read_pictures()` again with `ref` and `dist` set to `NULL` to flush these buffers. Once the buffers are flushed, all further calls to `vmaf_read_pictures()` are invalid.
```c
int vmaf_read_pictures(VmafContext *vmaf, VmafPicture *ref, VmafPicture *dist,
                       unsigned index);
```
After your pictures have been read, you can retrieve a VMAF score. Use `vmaf_score_at_index()` to get the score at a single index, and use `vmaf_score_pooled()` to get a pooled score across multiple frames.
```c
int vmaf_score_at_index(VmafContext *vmaf, VmafModel *model, double *score,
                        unsigned index);
int vmaf_score_pooled(VmafContext *vmaf, VmafModel *model,
                      enum VmafPoolingMethod pool_method, double *score,
                      unsigned index_low, unsigned index_high);
```
For complete API documentation, see [libvmaf.h](include/libvmaf/libvmaf.h). For an example of using the API to create the `vmaf` command line tool, see [vmaf.c](tools/vmaf.c).
## Contributing a new VmafFeatureExtractor
To write a new feature extractor, please first familiarize yourself with the [VmafFeatureExtractor API](https://github.com/Netflix/vmaf/blob/master/libvmaf/src/feature/feature_extractor.h#L36-L87) documentation.
Create a new `VmafFeatureExtractor` and add it to the build as well as the `feature_extractor_list[]`. See [this diff](https://github.com/Netflix/vmaf/commit/fd3c79697c7e06586aa5b9cda8db0d9aedfd70c5) for an example. Once you do this, your feature extractor can be registered and used inside of `libvmaf` via `vmaf_use_feature()` or `vmaf_use_features_from_model()`. To invoke this feature extractor directly from the command line with `vmaf`, use the `--feature` flag.
`VmafFeatureExtractor` is a feature extraction class with just a few callbacks. If you have preallocations and/or precomputations to make, it is best to do this in the `.init()` callback and store the output in `.priv`. This is a place for custom data which is available for all subsequent callbacks. If you allocate anything in `.init()` be sure to clean it up in the `.close()` callback.
The remainder of your work should take place in the `.extract()` callback. This callback is called for every pair of input pictures. Read the pixel data, make some computations, and then write the output(s) to the `VmafFeatureCollector` via the `vmaf_feature_collector_append()` API. An important thing to know about this callback is that it can be (and probably is being) called in an arbitrary order. If your feature extractor has a temporal requirement (e.g. `motion`), set the `VMAF_FEATURE_EXTRACTOR_TEMPORAL` flag and the `VmafFeatureExtractorContext` will ensure that this callback is executed serially. For an example of a feature extractor with a temporal dependency, see the [motion](https://github.com/Netflix/vmaf/blob/master/libvmaf/src/feature/integer_motion.c) feature extractor.
If the `VMAF_FEATURE_EXTRACTOR_TEMPORAL` flag is set, it is likely that you have buffers that need flushing. If this is the case, `.flush()` is called in a loop until something non-zero is returned.