pppop7 committed · Commit c931619 · verified · Parent: b6948b1

Add dataset card README

Files changed (1)
  1. README.md +32 -36
README.md CHANGED
@@ -1,49 +1,45 @@
  ---
- license: other
- language:
- - en
- pretty_name: LLaVA Pretrain
  ---

- # LLaVA Visual Instruct Pretrain Dataset Card

- ## Dataset details

- **Dataset type:**
- LLaVA Visual Instruct Pretrain LCS-558K is a subset of the LAION/CC/SBU dataset, filtered for a more balanced concept coverage distribution.
- Captions are also paired with [BLIP synthetic captions](https://github.com/salesforce/BLIP#pre-training-datasets-download) for reference.
- It is constructed for the feature-alignment pretraining stage of visual instruction tuning.
- We aim to build large multimodal models toward GPT-4-level vision/language capability.

- **Dataset date:**
- LLaVA Visual Instruct Pretrain LCS-558K was created in May 2023.

- **Dataset structure:**
- - `blip_laion_cc_sbu_558k.json` contains the synthesized multimodal conversations built from the image-caption pairs by adding randomly selected instructions such as "Describe this image." It is used for pretraining in LLaVA. We use the raw CC-3M caption as the default answer.
- - `blip_laion_cc_sbu_558k_meta.json` contains the metadata for each image: file name, image URL, and synthetic BLIP caption.
- - `images.zip` contains all raw images of the filtered subset from LAION/CC/SBU. Important notice: at the community's request, and because ~15% of the images in the original LAION/CC/SBU dataset are no longer accessible, we upload `images.zip` to help reproduce our work in the research community. It should not be used for any other purpose. The use of these images must comply with the LAION/CC/SBU licenses. The archive may be taken down at the request of the original LAION/CC/SBU dataset owners or the owners of the referenced images.

- **Paper or resources for more information:**
- https://llava-vl.github.io/

- **License:**
- Must comply with the licenses of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE) and [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic captions).

- CC-3M:
- The dataset may be freely used for any purpose, although acknowledgement of
- Google LLC ("Google") as the data source would be appreciated. The dataset is
- provided "AS IS" without any warranty, express or implied. Google disclaims all
- liability for any damages, direct or indirect, resulting from the use of the
- dataset.
 

- **Where to send questions or comments about the dataset:**
- https://github.com/haotian-liu/LLaVA/issues

- ## Intended use
- **Primary intended uses:**
- The primary use of LLaVA is research on large multimodal models and chatbots.

- **Primary intended users:**
- The primary intended users of the dataset are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

  ---
+ license: apache-2.0
+ task_categories:
+ - image-to-text
+ - visual-question-answering
+ tags:
+ - llava
+ - vision-language
+ - pretrain
+ - multimodal
+ size_categories:
+ - 100K<n<1M
  ---

+ # LLaVA-Pretrain Dataset

+ Pretraining data for LLaVA (Large Language and Vision Assistant).

+ ## Description

+ This dataset contains the pretraining data used in the feature-alignment stage of LLaVA training, including:
+ - `blip_laion_cc_sbu_558k.json` - Annotation file with 558K image-caption pairs (record format sketched below)
+ - `images/` - The corresponding images
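
+ Each record pairs one image with a short instruction-following conversation. A minimal sketch of inspecting a record (the `image`/`conversations` field names follow LLaVA's published annotation format; the file path is an assumption about your working directory):

+ ```python
+ import json
+
+ # Load the 558K pretraining annotations (path is an assumption).
+ with open("blip_laion_cc_sbu_558k.json") as f:
+     records = json.load(f)
+
+ sample = records[0]
+ print(sample["image"])  # image path, relative to the images/ directory
+ for turn in sample["conversations"]:
+     # Turns alternate between a "human" instruction and the "gpt" caption answer.
+     print(turn["from"], ":", turn["value"])
+ ```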
 
 

+ ## Usage

+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download the full dataset snapshot to a local directory
+ snapshot_download(
+     repo_id="pppop7/LLaVA-Pretrain",
+     repo_type="dataset",
+     local_dir="./llava_pretrain"
+ )
+ ```
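
+ After downloading, you can pair each annotation with its image file. A minimal sketch, assuming the images sit unpacked under `llava_pretrain/images/` (if the repo ships them as `images.zip`, extract it there first):

+ ```python
+ import json
+ from pathlib import Path
+
+ root = Path("./llava_pretrain")
+
+ # Read the annotation file from the downloaded snapshot.
+ with open(root / "blip_laion_cc_sbu_558k.json") as f:
+     records = json.load(f)
+
+ # Each record's "image" field is relative to the images/ directory.
+ image_path = root / "images" / records[0]["image"]
+ print(image_path, image_path.exists())
+ ```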

+ ## Related Datasets

+ - [pppop7/LLaVA-Instruct-150K](https://huggingface.co/datasets/pppop7/LLaVA-Instruct-150K) - Instruction tuning data

+ ## Reference

+ - [LLaVA Official Repository](https://github.com/haotian-liu/LLaVA)
+ - [LLaVA Paper](https://arxiv.org/abs/2304.08485)