Add dataset card README
README.md CHANGED
@@ -1,49 +1,45 @@
 ---
-license:
-
-
-
+license: apache-2.0
+task_categories:
+- image-to-text
+- visual-question-answering
+tags:
+- llava
+- vision-language
+- pretrain
+- multimodal
+size_categories:
+- 100K<n<1M
 ---

-##
-
-It is constructed for the pretraining stage for feature alignment in visual instruction tuning.
-We aim to build large multimodal models towards GPT-4-level vision/language capability.
-
-LLaVA Visual Instruct CC3M Pretrain 595K was created in May 2023.
-
-- `blip_laion_cc_sbu_558k_meta.json` contains the metadata for each image: the image file name, the image URL, and the synthetic BLIP caption.
-- `images.zip` contains all raw images of the filtered subset from LAION/CC/SBU. Important notice: at the request of the community, since ~15% of the images in the original LAION/CC/SBU dataset are no longer accessible, we upload `images.zip` so that our work can be better reproduced by the research community. It should not be used for any other purpose. The use of these images must comply with the LAION/CC/SBU license. This may be taken down when requested by the original LAION/CC/SBU dataset owner or by owners of the referenced images.
-
-Usage must comply with the licenses of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE) and [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic captions).
-
-The dataset may be freely used for any purpose, although acknowledgement of
-Google LLC ("Google") as the data source would be appreciated. The dataset is
-provided "AS IS" without any warranty, express or implied. Google disclaims all
-liability for any damages, direct or indirect, resulting from the use of the
-dataset.
-
-https://
-
-## Intended use
-**Primary intended uses:**
-The primary use of LLaVA is research on large multimodal models and chatbots.
-
-**Primary intended users:**
-The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
+# LLaVA-Pretrain Dataset
+
+Pretraining data for LLaVA (Large Language and Vision Assistant).
+
+## Description
+
+This dataset contains the data used in LLaVA's pretraining stage, including:
+- `blip_laion_cc_sbu_558k.json` - Annotation file with 558K image-caption pairs
+- `images/` - Corresponding images
+
+## Usage
+
+```python
+from huggingface_hub import snapshot_download
+
+# Download the dataset
+snapshot_download(
+    repo_id="pppop7/LLaVA-Pretrain",
+    repo_type="dataset",
+    local_dir="./llava_pretrain"
+)
+```
+
+## Related Datasets
+
+- [pppop7/LLaVA-Instruct-150K](https://huggingface.co/datasets/pppop7/LLaVA-Instruct-150K) - Instruction tuning data
+
+## Reference
+
+- [LLaVA Official Repository](https://github.com/haotian-liu/LLaVA)
+- [LLaVA Paper](https://arxiv.org/abs/2304.08485)
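
After the download finishes, the annotation file can be sanity-checked in a few lines. A minimal sketch, assuming the top-level JSON structure is a list of records (the card itself does not state the schema):

```python
import json
from pathlib import Path

# Same local_dir as in the snapshot_download call above
ann_path = Path("./llava_pretrain") / "blip_laion_cc_sbu_558k.json"

with ann_path.open() as f:
    data = json.load(f)

# Assumption: a JSON array of per-image records; the card only says
# "558K image-caption pairs", not how they are keyed.
print(f"{len(data):,} records")
print(sorted(data[0].keys()))
```

If the top level turns out to be an object rather than a list, `data.keys()` will reveal the actual layout.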
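The removed card's notice that roughly 15% of the original LAION/CC/SBU images are no longer accessible suggests one more check: confirm that every annotation record has a matching file under `images/`. A rough sketch, assuming each record stores a relative path under an `image` key (both the key name and the directory layout are assumptions, not stated in either card):

```python
import json
from pathlib import Path

root = Path("./llava_pretrain")

with (root / "blip_laion_cc_sbu_558k.json").open() as f:
    records = json.load(f)

# Hypothetical field name: "image" holding a path relative to images/
missing = [
    r["image"]
    for r in records
    if not (root / "images" / r["image"]).exists()
]
print(f"{len(missing)} of {len(records)} referenced images are missing locally")
```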