[Doc] Fix document link (#2457)
cir7 authored Jun 16, 2023
1 parent c43ced9 commit c331b65
Showing 18 changed files with 34 additions and 34 deletions.
2 changes: 1 addition & 1 deletion configs/recognition/mvit/README.md
@@ -21,7 +21,7 @@ In this paper, we study Multiscale Vision Transformers (MViTv2) as a unified arc
1. Models with * in `Inference results` are ported from the repo [SlowFast](https://github.com/facebookresearch/SlowFast/) and tested on our data, and models in `Training results` are trained in MMAction2 on our data.
2. The values in columns named after `reference` are copied from paper, and `reference*` are results using [SlowFast](https://github.com/facebookresearch/SlowFast/) repo and trained on our data.
3. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at [Kinetics400-Validation](https://mycuhk-my.sharepoint.com/:u:/g/personal/1155136485_link_cuhk_edu_hk/EbXw2WX94J1Hunyt3MWNDJUBz-nHvQYhO9pvKqm6g39PMA?e=a9QldB). The corresponding [data list](https://download.openmmlab.com/mmaction/dataset/k400_val/kinetics_val_list.txt) (each line is of the format 'video_id, num_frames, label_index') and the [label map](https://download.openmmlab.com/mmaction/dataset/k400_val/kinetics_class2ind.txt) are also available.
4. MaskFeat fine-tuning experiment is based on pretrain model from [MMSelfSup](https://github.com/open-mmlab/mmselfsup/tree/dev-1.x/projects/maskfeat_video), and the corresponding reference result is based on pretrain model from [SlowFast](https://github.com/facebookresearch/SlowFast/).
4. MaskFeat fine-tuning experiment is based on pretrain model from [MMSelfSup](https://github.com/open-mmlab/mmselfsup/tree/main/projects/maskfeat_video), and the corresponding reference result is based on pretrain model from [SlowFast](https://github.com/facebookresearch/SlowFast/).
5. Due to the different versions of Kinetics-400, our training results are different from the paper.
6. For training efficiency, we currently only provide the MViT-small training results; we do not guarantee the training accuracy of other config files and welcome you to contribute your reproduction results.
7. We use `repeat augment` in MViT training configs following [SlowFast](https://github.com/facebookresearch/SlowFast/). [Repeat augment](https://arxiv.org/pdf/1901.09335.pdf) applies data augmentation multiple times to a single video, which can improve the generalization of the model and relieve the IO stress of loading videos. Note that the actual batch size is `num_repeats` times the `batch_size` in `train_dataloader` (see the sketch below).
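As a rough illustration of that last note, here is a minimal sketch of the effective batch size arithmetic; the values of `num_repeats` and `batch_size` are assumptions, not the ones in the released configs.

```python
# Illustration only: effective samples per training step with repeat augment.
# `num_repeats` and `batch_size` mirror the names used in the note above.
num_repeats = 2    # assumed: augmented clips drawn from each loaded video
batch_size = 16    # assumed: `batch_size` set in `train_dataloader`

effective_batch_size = num_repeats * batch_size
print(effective_batch_size)  # 32 samples pass through the model per step,
                             # while only 16 videos are read from disk
```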
2 changes: 1 addition & 1 deletion configs/recognition/uniformer/README.md
@@ -31,7 +31,7 @@ The models are ported from the repo [UniFormer](https://github.com/Sense-X/UniFo
1. The values in columns named after "reference" are the results of the original repo.
2. The values in `top1/5 acc` are tested on the same data list as the original repo, and the label map is provided by [UniFormer](https://drive.google.com/drive/folders/17VB-XdF3Kfr9ORmnGyXCxTMs86n0L4QL). The full set of videos is available at [Kinetics400](https://pan.baidu.com/s/1t5K0FRz3PGAT-37-3FwAfg) (BaiduYun password: g5kp), which consists of 19787 videos.
3. The values in columns named after "mm-Kinetics" are the testing results on the Kinetics dataset held by MMAction2, which is also used by other models in MMAction2. Due to the differences between various versions of the Kinetics dataset, there is a small gap between `top1/5 acc` and `mm-Kinetics top1/5 acc`. For a fair comparison with other models, we report both results here. Note that we simply report the inference results; since the training set differs between UniFormer and other models, the results are lower than those tested on the author's version.
4. Since the original models for Kinetics-400/600/700 adopt different [label file](https://drive.google.com/drive/folders/17VB-XdF3Kfr9ORmnGyXCxTMs86n0L4QL), we simply map the weight according to the label name. New label map for Kinetics-400/600/700 can be found [here](https://github.com/open-mmlab/mmaction2/tree/dev-1.x/tools/data/kinetics).
4. Since the original models for Kinetics-400/600/700 adopt different [label file](https://drive.google.com/drive/folders/17VB-XdF3Kfr9ORmnGyXCxTMs86n0L4QL), we simply map the weight according to the label name. New label map for Kinetics-400/600/700 can be found [here](https://github.com/open-mmlab/mmaction2/tree/main/tools/data/kinetics).
5. Due to some differences between [SlowFast](https://github.com/facebookresearch/SlowFast) and MMAction2, there are some gaps in their performance.
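Note 4 only states that the weights are remapped by label name. A hedged sketch of that idea follows; the file names and the `cls_head.fc_cls.*` state-dict keys are assumptions for illustration, not the repo's actual conversion script.

```python
import torch

# Reorder a classifier's weights from the original label order to a new label map
# by matching class names. All paths and key names here are assumptions.
orig_labels = [line.strip() for line in open('original_label_map.txt')]
new_labels = [line.strip() for line in open('new_label_map.txt')]
order = [orig_labels.index(name) for name in new_labels]

ckpt = torch.load('uniformer_original.pth', map_location='cpu')
state = ckpt.get('state_dict', ckpt)  # some checkpoints nest weights under 'state_dict'
state['cls_head.fc_cls.weight'] = state['cls_head.fc_cls.weight'][order]
state['cls_head.fc_cls.bias'] = state['cls_head.fc_cls.bias'][order]
torch.save(ckpt, 'uniformer_remapped.pth')
```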

For more details on data preparation, you can refer to [preparing_kinetics](/tools/data/kinetics/README.md).
2 changes: 1 addition & 1 deletion configs/recognition/uniformerv2/README.md
@@ -71,7 +71,7 @@ The models with * are ported from the repo [UniFormerV2](https://github.com/Open
1. The values in columns named after "reference" are the results of the original repo.
2. The values in `top1/5 acc` are tested on the same data list as the original repo, and the label map is provided by [UniFormerV2](https://drive.google.com/drive/folders/17VB-XdF3Kfr9ORmnGyXCxTMs86n0L4QL).
3. The values in columns named after "mm-Kinetics" are the testing results on the Kinetics dataset held by MMAction2, which is also used by other models in MMAction2. Due to the differences between various versions of the Kinetics dataset, there is a small gap between `top1/5 acc` and `mm-Kinetics top1/5 acc`. For a fair comparison with other models, we report both results here. Note that we simply report the inference results; since the training set differs between UniFormer and other models, the results are lower than those tested on the author's version.
4. Since the original models for Kinetics-400/600/700 adopt different [label file](https://drive.google.com/drive/folders/17VB-XdF3Kfr9ORmnGyXCxTMs86n0L4QL), we simply map the weight according to the label name. New label map for Kinetics-400/600/700 can be found [here](https://github.com/open-mmlab/mmaction2/tree/dev-1.x/tools/data/kinetics).
4. Since the original models for Kinetics-400/600/700 adopt different [label file](https://drive.google.com/drive/folders/17VB-XdF3Kfr9ORmnGyXCxTMs86n0L4QL), we simply map the weight according to the label name. New label map for Kinetics-400/600/700 can be found [here](https://github.com/open-mmlab/mmaction2/tree/main/tools/data/kinetics).
5. Due to some differences between [SlowFast](https://github.com/facebookresearch/SlowFast) and MMAction2, there are some gaps between their performances.
6. Kinetics-710 is used for pretraining, which helps improve the performance on other datasets efficiently. You can find more details in the [paper](https://arxiv.org/abs/2211.09552).

2 changes: 1 addition & 1 deletion configs/skeleton/2s-agcn/README.md
@@ -41,7 +41,7 @@ In skeleton-based action recognition, graph convolutional networks (GCNs), which
| | four-stream | | | 90.89 | | | | | | |

1. The **gpus** indicates the number of GPUs we used to get the checkpoint. If you want to use a different number of GPUs or videos per GPU, the best way is to set `--auto-scale-lr` when calling `tools/train.py`; this parameter auto-scales the learning rate according to the actual batch size and the original batch size.
2. For two-stream fusion, we use **joint : bone = 1 : 1**. For four-stream fusion, we use **joint : joint-motion : bone : bone-motion = 2 : 1 : 2 : 1**. For more details about multi-stream fusion, please refer to this [tutorial](/docs/en/advanced_guides/useful_tools.md#multi-stream-fusion).
2. For two-stream fusion, we use **joint : bone = 1 : 1**. For four-stream fusion, we use **joint : joint-motion : bone : bone-motion = 2 : 1 : 2 : 1**. For more details about multi-stream fusion, please refer to this [tutorial](/docs/en/useful_tools.md#multi-stream-fusion).
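A minimal sketch of the weighted score fusion described in note 2. The per-stream score arrays are assumed to have shape `(num_samples, num_classes)`, and the `.npy` file names are placeholders for scores dumped after testing each stream's checkpoint separately.

```python
import numpy as np

# Placeholder files: per-stream class scores saved after testing each stream.
joint = np.load('joint_scores.npy')
bone = np.load('bone_scores.npy')
joint_motion = np.load('joint_motion_scores.npy')
bone_motion = np.load('bone_motion_scores.npy')

# Two-stream fusion: joint : bone = 1 : 1
two_stream = 1.0 * joint + 1.0 * bone

# Four-stream fusion: joint : joint-motion : bone : bone-motion = 2 : 1 : 2 : 1
four_stream = 2.0 * joint + 1.0 * joint_motion + 2.0 * bone + 1.0 * bone_motion

pred = four_stream.argmax(axis=1)  # fused top-1 prediction per sample
```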

## Train

2 changes: 1 addition & 1 deletion configs/skeleton/stgcn/README.md
@@ -63,7 +63,7 @@ Dynamics of human body skeletons convey significant information for human action
| | four-stream | | | 86.19 | | | | | | |

1. The **gpus** indicates the number of GPUs we used to get the checkpoint. If you want to use a different number of GPUs or videos per GPU, the best way is to set `--auto-scale-lr` when calling `tools/train.py`; this parameter auto-scales the learning rate according to the actual batch size and the original batch size.
2. For two-stream fusion, we use **joint : bone = 1 : 1**. For four-stream fusion, we use **joint : joint-motion : bone : bone-motion = 2 : 1 : 2 : 1**. For more details about multi-stream fusion, please refer to this [tutorial](/docs/en/advanced_guides/useful_tools.md#multi-stream-fusion).
2. For two-stream fusion, we use **joint : bone = 1 : 1**. For four-stream fusion, we use **joint : joint-motion : bone : bone-motion = 2 : 1 : 2 : 1**. For more details about multi-stream fusion, please refer to this [tutorial](/docs/en/useful_tools.md#multi-stream-fusion).

## Train

2 changes: 1 addition & 1 deletion configs/skeleton/stgcnpp/README.md
@@ -35,7 +35,7 @@ We present PYSKL: an open-source toolbox for skeleton-based action recognition b
| | four-stream | | | 91.87 | | | | | | |

1. The **gpus** indicates the number of GPUs we used to get the checkpoint. If you want to use a different number of GPUs or videos per GPU, the best way is to set `--auto-scale-lr` when calling `tools/train.py`; this parameter auto-scales the learning rate according to the actual batch size and the original batch size.
2. For two-stream fusion, we use **joint : bone = 1 : 1**. For four-stream fusion, we use **joint : joint-motion : bone : bone-motion = 2 : 1 : 2 : 1**. For more details about multi-stream fusion, please refer to this [tutorial](/docs/en/advanced_guides/useful_tools.md#multi-stream-fusion).
2. For two-stream fusion, we use **joint : bone = 1 : 1**. For four-stream fusion, we use **joint : joint-motion : bone : bone-motion = 2 : 1 : 2 : 1**. For more details about multi-stream fusion, please refer to this [tutorial](/docs/en/useful_tools.md#multi-stream-fusion).

## Train

4 changes: 2 additions & 2 deletions docs/en/get_started/contribution_guide.md
@@ -34,11 +34,11 @@ We use the following tools for linting and formatting:
- [mdformat](https://github.com/executablebooks/mdformat): Mdformat is an opinionated Markdown formatter that can be used to enforce a consistent style in Markdown files.
- [docformatter](https://github.com/myint/docformatter): A formatter to format docstrings.

Style configurations of yapf and isort can be found in [setup.cfg](https://github.com/open-mmlab/mmaction2/blob/1.x/setup.cfg).
Style configurations of yapf and isort can be found in [setup.cfg](https://github.com/open-mmlab/mmaction2/blob/main/setup.cfg).

We use a [pre-commit hook](https://pre-commit.com/) that checks and formats `flake8`, `yapf`, `isort`, `trailing whitespaces`, and `markdown files`,
fixes `end-of-files`, `double-quoted-strings`, `python-encoding-pragma`, and `mixed-line-ending`, and sorts `requirements.txt` automatically on every commit.
The config for a pre-commit hook is stored in [.pre-commit-config](https://github.com/open-mmlab/mmaction2/blob/1.x/.pre-commit-config.yaml).
The config for a pre-commit hook is stored in [.pre-commit-config](https://github.com/open-mmlab/mmaction2/blob/main/.pre-commit-config.yaml).

After you clone the repository, you will need to install and initialize the pre-commit hook.

8 changes: 4 additions & 4 deletions docs/en/get_started/faq.md
@@ -30,7 +30,7 @@ If the contents here do not cover your issue, please create an issue using the [

- **"Why I got the error message 'Please install XXCODEBASE to use XXX' even if I have already installed XXCODEBASE?"**

You got that error message because our project failed to import a function or a class from XXCODEBASE. You can try to run the corresponding line to see what happens. One possible reason is, for some codebases in OpenMMLAB, you need to install mmcv and mmengine before you install them. You could follow this [tutorial](https://mmaction2.readthedocs.io/en/latest/get_started.html#installation) to install them.
You got that error message because our project failed to import a function or a class from XXCODEBASE. You can try to run the corresponding line to see what happens. One possible reason is, for some codebases in OpenMMLAB, you need to install mmcv and mmengine before you install them. You could follow this [tutorial](https://mmaction2.readthedocs.io/en/latest/get_started/installation.html#installation) to install them.

## Data

@@ -48,9 +48,9 @@ If the contents here do not cover your issue, please create an issue using the [

We have pipelines for processing both videos and frames.

**For videos**, We should decode them on the fly in the pipeline, so pairs like `DecordInit & DecordDecode`, `OpenCVInit & OpenCVDecode`, `PyAVInit & PyAVDecode` should be used for this case like [this example](https://github.com/open-mmlab/mmaction2/blob/1.x/configs/recognition/tsn/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb.py#L14-L16).
**For videos**, We should decode them on the fly in the pipeline, so pairs like `DecordInit & DecordDecode`, `OpenCVInit & OpenCVDecode`, `PyAVInit & PyAVDecode` should be used for this case like [this example](https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/tsn/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb.py#L14-L16).

**For Frames**, the image has been decoded offline, so pipeline item `RawFrameDecode` should be used for this case like [this example](https://github.com/open-mmlab/mmaction2/blob/1.x/configs/recognition/trn/trn_imagenet-pretrained-r50_8xb16-1x1x8-50e_sthv1-rgb.py#L17).
**For Frames**, the image has been decoded offline, so pipeline item `RawFrameDecode` should be used for this case like [this example](https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/trn/trn_imagenet-pretrained-r50_8xb16-1x1x8-50e_sthv1-rgb.py#L17).

`KeyError: 'total_frames'` is caused by incorrectly using the `RawFrameDecode` step for videos: when the input is a video, `total_frames` cannot be determined beforehand.
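A minimal sketch contrasting the two cases; the decode-related steps are the ones named above, while the `SampleFrames` parameters are illustrative only.

```python
# Video input: the file is opened and decoded on the fly inside the pipeline.
video_pipeline = [
    dict(type='DecordInit'),       # open the video container
    dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=3),
    dict(type='DecordDecode'),     # decode only the sampled frame indices
]

# Frame input: images were extracted offline, so they are simply read from disk.
frame_pipeline = [
    dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=3),
    dict(type='RawFrameDecode'),   # relies on `total_frames` from the annotation
]
```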

@@ -65,7 +65,7 @@ If the contents here do not cover your issue, please create an issue using the [

- **How to fix stages of backbone when finetuning a model?**

You can refer to [`def _freeze_stages()`](https://github.com/open-mmlab/mmaction2/blob/1.x/mmaction/models/backbones/resnet3d.py#L791) and [`frozen_stages`](https://github.com/open-mmlab/mmaction2/blob/1.x/mmaction/models/backbones/resnet3d.py#L369-L370).
You can refer to [`def _freeze_stages()`](https://github.com/open-mmlab/mmaction2/blob/main/mmaction/models/backbones/resnet3d.py#L791) and [`frozen_stages`](https://github.com/open-mmlab/mmaction2/blob/main/mmaction/models/backbones/resnet3d.py#L369-L370).
Remember to set `find_unused_parameters = True` in config files for distributed training or testing.

In fact, users can set `frozen_stages` to freeze stages in backbones for all models except C3D, since almost all backbones inheriting from `ResNet` and `ResNet3D` support the inner function `_freeze_stages()`.
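A hedged config sketch of the points above; apart from `frozen_stages` and `find_unused_parameters`, the keys and values are illustrative and should follow whatever config you are finetuning.

```python
model = dict(
    backbone=dict(
        type='ResNet3d',     # illustrative backbone; any ResNet/ResNet3D-style backbone applies
        depth=50,
        frozen_stages=2,     # freeze the stem and the first two stages; -1 freezes nothing
    ),
)

# Needed for distributed training or testing when part of the model is frozen.
find_unused_parameters = True
```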
2 changes: 1 addition & 1 deletion docs/en/notes/changelog.md
@@ -188,7 +188,7 @@ Built upon the new [training engine](https://github.com/open-mmlab/mmengine).

- **Unified interfaces**. As a part of the OpenMMLab 2.0 projects, MMAction2 1.x unifies and refactors the interfaces and internal logic of training, testing, datasets, models, evaluation, and visualization. All the OpenMMLab 2.0 projects share the same design in those interfaces and logic to allow the emergence of multi-task/modality algorithms.

- **More documentation and tutorials**. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it [here](https://github.com/open-mmlab/mmaction2/blob/1.x/docs/en/migration.md).
- **More documentation and tutorials**. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it [here](https://github.com/open-mmlab/mmaction2/blob/main/docs/en/migration.md).

**Breaking Changes**

4 changes: 2 additions & 2 deletions docs/en/user_guides/inference.md
@@ -19,7 +19,7 @@ If you use mmaction2 as a 3rd-party package, you need to download the config and
Run 'mim download mmaction2 --config tsn_imagenet-pretrained-r50_8xb32-1x1x8-100e_kinetics400-rgb --dest .' to download the required config.
Run 'wget https://github.com/open-mmlab/mmaction2/blob/dev-1.x/demo/demo.mp4' to download the desired demo video.
Run 'wget https://github.com/open-mmlab/mmaction2/blob/main/demo/demo.mp4' to download the desired demo video.
```

@@ -37,4 +37,4 @@
```python
result = inference_recognizer(model, img_path)
```

`result` is a dictionary containing `pred_scores`.

An action recognition demo can be found in [demo/demo.py](https://github.com/open-mmlab/mmaction2/blob/dev-1.x/demo/demo.py).
An action recognition demo can be found in [demo/demo.py](https://github.com/open-mmlab/mmaction2/blob/main/demo/demo.py).
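Putting the pieces of this page together, here is a self-contained sketch of the high-level API; the local file names are assumptions about where `mim download` and `wget` placed the config, checkpoint, and demo video.

```python
from mmaction.apis import inference_recognizer, init_recognizer

# Assumed local paths: adjust them to the files you actually downloaded.
config_path = 'tsn_imagenet-pretrained-r50_8xb32-1x1x8-100e_kinetics400-rgb.py'
checkpoint_path = 'tsn_kinetics400_checkpoint.pth'  # hypothetical checkpoint file name
img_path = 'demo.mp4'

model = init_recognizer(config_path, checkpoint_path, device='cpu')  # or 'cuda:0'
result = inference_recognizer(model, img_path)
print(result)  # contains `pred_scores`, as described above
```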
2 changes: 1 addition & 1 deletion docs/zh_cn/get_started.md
@@ -169,7 +169,7 @@ MMAction2 can be installed in a CPU-only environment. In CPU mode, you can complete

### Using MMAction2 via Docker

We provide a [Dockerfile](https://github.com/open-mmlab/mmaction2/blob/1.x/docker/Dockerfile) to build an image. Make sure your [Docker version](https://docs.docker.com/engine/install/) is >= 19.03.
We provide a [Dockerfile](https://github.com/open-mmlab/mmaction2/blob/main/docker/Dockerfile) to build an image. Make sure your [Docker version](https://docs.docker.com/engine/install/) is >= 19.03.

```shell
# For example, build an image with PyTorch 1.6.0, CUDA 10.1, and CUDNN 7
2 changes: 1 addition & 1 deletion docs/zh_cn/notes/changelog.md
@@ -52,7 +52,7 @@ Built upon the new [training engine](https://github.com/open-mmlab/mmengine).

- **Unified interfaces**. As a part of the OpenMMLab 2.0 projects, MMAction2 1.x unifies and refactors the interfaces and internal logic of training, testing, datasets, models, evaluation, and visualization. All the OpenMMLab 2.0 projects share the same design in those interfaces and logic to allow the emergence of multi-task/modality algorithms.

- **More documentation and tutorials**. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it [here](https://github.com/open-mmlab/mmaction2/blob/1.x/docs/en/migration.md).
- **More documentation and tutorials**. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it [here](https://github.com/open-mmlab/mmaction2/blob/main/docs/en/migration.md).

**Breaking Changes**

4 changes: 2 additions & 2 deletions docs/zh_cn/user_guides/3_inference.md
@@ -19,7 +19,7 @@ MMAction2 provides high-level Python APIs for performing inference on a given video:
Download the required config: 'mim download mmaction2 --config tsn_imagenet-pretrained-r50_8xb32-1x1x8-100e_kinetics400-rgb --dest .'
Download the required demo video: 'wget https://github.com/open-mmlab/mmaction2/blob/dev-1.x/demo/demo.mp4'
Download the required demo video: 'wget https://github.com/open-mmlab/mmaction2/blob/main/demo/demo.mp4'
```

@@ -35,4 +35,4 @@
```python
model = init_recognizer(config_path, checkpoint_path, device="cpu")  # can also be
result = inference_recognizer(model, img_path)
```

`result` is a dictionary containing `pred_scores`. See [demo/demo.py](https://github.com/open-mmlab/mmaction2/blob/1.x/demo/demo.py) for an action recognition demo.
`result` is a dictionary containing `pred_scores`. See [demo/demo.py](https://github.com/open-mmlab/mmaction2/blob/main/demo/demo.py) for an action recognition demo.
4 changes: 2 additions & 2 deletions projects/README.md
@@ -8,10 +8,10 @@ Here is an [example project](./example_project) about how to add your algorithms

We also provide some documentation listed below:

- [Contribution Guide](https://mmaction2.readthedocs.io/en/dev-1.x/notes/contribution_guide.html)
- [Contribution Guide](https://mmaction2.readthedocs.io/en/latest/get_started/contribution_guide.html)

A guide for new contributors on how to add their projects to MMAction2.

- [Discussions](https://github.com/open-mmlab/mmaction2/discussions)

Welcome to start discussion!
Welcome to start a discussion!