Update README.md
README.md CHANGED
@@ -7,20 +7,35 @@ metrics:
 - bleu
 - meteor
 - rouge
-pipeline_tag: text-
+pipeline_tag: video-text-to-text
 inference: false
 tags:
 - video-captioning
 ---
+<h1 align='center'> SpaceTimeGPT - Video Captioning Model </h1>
 
+<div align="center">
+<a href="https://github.com/Neleac/SpaceTimeGPT">
+<img src="https://img.shields.io/badge/GitHub-Neleac/SpaceTimeGPT-purple.svg">
+</a>
+<img src="https://raw.githubusercontent.com/Neleac/SpaceTimeGPT/main/model.JPG" width="75%" height="75%">
+<p> (partial diagrams from <a href="https://arxiv.org/abs/2103.15691">1</a>, <a href="https://arxiv.org/abs/2102.05095">2</a>, <a href="https://arxiv.org/abs/1706.03762">3</a>) </p>
+</div>
 
-Text Decoder Model: [gpt2](https://huggingface.co/gpt2)
+SpaceTimeGPT is a video description generation model capable of both spatial and temporal reasoning. Given a video, eight frames are sampled and analyzed by the model. The output is a sentence description of the events that occurred in the video, generated using autoregression.
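A minimal sketch of the eight-frame sampling step described above, assuming uniform sampling and `decord` as the video reader (neither detail is specified by the card):

```python
import numpy as np
from decord import VideoReader

def sample_frames(video_path: str, num_frames: int = 8) -> np.ndarray:
    """Sample num_frames evenly spaced frames from a video file."""
    vr = VideoReader(video_path)
    # Evenly spaced indices across the whole clip, matching the
    # eight-frame input analyzed by the model.
    indices = np.linspace(0, len(vr) - 1, num_frames).astype(int)
    return vr.get_batch(indices).asnumpy()  # (num_frames, H, W, 3), uint8
```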
 
+## Architecture and Training
+Vision Encoder: [timesformer-base-finetuned-k600](https://huggingface.co/facebook/timesformer-base-finetuned-k600) \
+Text Decoder: [gpt2](https://huggingface.co/gpt2)
+
+The encoder and decoder are initialized using pretrained weights for video classification and sentence completion, respectively. Encoder-decoder cross attention is used to unify the visual and linguistic domains. The model is fine-tuned end-to-end on the video captioning task.
+
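The pairing described here follows the standard `transformers` encoder-decoder recipe; a hedged sketch of how such a model could be assembled before fine-tuning (not necessarily the author's exact setup):

```python
from transformers import VisionEncoderDecoderModel

# Encoder weights come from video classification, decoder weights from
# GPT-2 language modeling; the decoder's cross-attention layers are newly
# initialized and then learned during end-to-end captioning fine-tuning.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "facebook/timesformer-base-finetuned-k600", "gpt2"
)
```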
+## Dataset and Evaluation
+SpaceTimeGPT is trained on [VATEX](https://eric-xw.github.io/vatex-website/index.html), a large video captioning dataset.
+
+Performance: 67.3 [CIDEr](https://github.com/ramavedantam/cider) on the VATEX test split
+
+Sampling method: 30 $\le$ generated tokens $\le$ 60, beam search with 8 beams
 
 #### Example Inference Code:
 ```python
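# NOTE: the diff hunk ends at the opening fence above, so this body is a
# hedged sketch of typical inference for a TimeSformer + GPT-2
# VisionEncoderDecoderModel checkpoint, not the card's original example.
# Assumed: the standard transformers API, decord for frame reading, and
# the repo id "Neleac/SpaceTimeGPT" (inferred from the GitHub badge above).
import numpy as np
import torch
from decord import VideoReader
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

device = "cuda" if torch.cuda.is_available() else "cpu"

image_processor = AutoImageProcessor.from_pretrained("Neleac/SpaceTimeGPT")
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # decoder's tokenizer
model = VisionEncoderDecoderModel.from_pretrained("Neleac/SpaceTimeGPT").to(device)

# Sample eight evenly spaced frames, as described above.
vr = VideoReader("example.mp4")
indices = np.linspace(0, len(vr) - 1, 8).astype(int)
frames = list(vr.get_batch(indices).asnumpy())

# Generate a caption with the sampling settings quoted above:
# 30 <= generated tokens <= 60, beam search with 8 beams.
pixel_values = image_processor(frames, return_tensors="pt").pixel_values.to(device)
generated_ids = model.generate(
    pixel_values, min_length=30, max_length=60, num_beams=8
)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```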