---
size_categories:
- 10K<n<100K
pretty_name: Hugging Face Hub Models with Base Model Metadata
dataset_info:
  features:
  - name: author
    dtype: string
  - name: last_modified
    dtype: timestamp[us, tz=UTC]
  - name: createdAt
    dtype: timestamp[us, tz=UTC]
  - name: downloads
    dtype: int64
  - name: likes
    dtype: int64
  - name: library_name
    dtype: string
  - name: modelId
    dtype: string
  - name: datasets
    sequence: string
  - name: language
    sequence: string
  - name: base_model
    dtype: string
  splits:
  - name: train
    num_bytes: 5408255
    num_examples: 36181
  download_size: 2137676
  dataset_size: 5408255
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- metadata
---

# Dataset Card for Hugging Face Hub Models with Base Model Metadata


## Dataset Details

This dataset contains a subset of the metadata for models hosted on the Hugging Face Hub.
Every model included has `base_model` metadata, i.e. information about the model it was fine-tuned from.
This data can be used to create network graphs showing links between models on the Hub.
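As a minimal sketch of the network-graph use case, the snippet below builds a mapping from base models to their fine-tunes using the `base_model` and `modelId` fields. The rows shown are hypothetical examples of what records in this dataset look like; in practice they would come from loading the dataset itself (e.g. with `datasets.load_dataset`).

```python
from collections import defaultdict

# Hypothetical example rows with the same fields as this dataset
# (in practice, iterate over the loaded `train` split instead).
rows = [
    {"modelId": "user/llama-ft-alpaca", "base_model": "meta-llama/Llama-2-7b-hf"},
    {"modelId": "user/llama-ft-chat", "base_model": "meta-llama/Llama-2-7b-hf"},
    {"modelId": "user/bert-ft-sst2", "base_model": "bert-base-uncased"},
]

# Edge list of the model graph: base model -> models fine-tuned from it.
edges = defaultdict(list)
for row in rows:
    edges[row["base_model"]].append(row["modelId"])

for base, children in edges.items():
    print(f"{base} -> {children}")
```

The same edge list can be handed to a graph library such as `networkx` to compute, for example, which base models have the most derivatives.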

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->



- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

The dataset has a single `train` split with 36,181 examples. Each example describes one model repository on the Hub and has the following fields:

- `author` (string): namespace of the repository owner
- `last_modified` (timestamp, UTC)
- `createdAt` (timestamp, UTC)
- `downloads` (int64)
- `likes` (int64)
- `library_name` (string)
- `modelId` (string): the repository id of the model
- `datasets` (sequence of strings)
- `language` (sequence of strings)
- `base_model` (string): the model this model was fine-tuned from

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

The source data is produced by the creators of model cards for models on the Hub, as well as by tools and deep learning libraries that automatically assign metadata to model repositories.