haneulpark committed
Commit ec60e4b · verified · 1 Parent(s): 04fae7a

Update README.md

Files changed (1)
  1. README.md +15 -14
README.md CHANGED
@@ -91,26 +91,27 @@ then, from within python load the datasets library
  and load the `MolData` datasets, e.g.,
 
  >>> MolData = datasets.load_dataset("maomlab/MolData")
- Downloading readme: 100%|██████████| 5.23k/5.23k [00:00<00:00, 35.1kkB/s]
- Downloading data: 100%|██████████| 34.5k//34.5k/ [00:00<00:00, 155kB/s]
- Downloading data: 100%|██████████| 97.1k/97.1k [00:00<00:00, 587kB/s]
- Generating test split: 100%|██████████| 594/594 [00:00<00:00, 12705.92 examples/s]
- Generating train split: 100%|██████████| 1788/1788 [00:00<00:00, 43895.91 examples/s]
+ Generating train split: 100%|██████████████████████████████| 138547273/138547273 [02:07<00:00, 1088043.12 examples/s]
+ Generating test split: 100%|██████████████████████████████| 17069726/17069726 [00:16<00:00, 1037407.67 examples/s]
+ Generating validation split: 100%|██████████████████████████████| 12728449/12728449 [00:11<00:00, 1093675.24 examples/s]
 
  and inspecting the loaded dataset
 
  >>> MolData
- MolData
  DatasetDict({
- test: Dataset({
- features: ['SMILES', 'Y'],
- num_rows: 594
- })
- train: Dataset({
- features: ['SMILES', 'Y'],
- num_rows: 1788
- })
+ train: Dataset({
+ features: ['SMILES', 'PUBCHEM_CID', 'split', 'AID', 'Y'],
+ num_rows: 138547273
  })
+ test: Dataset({
+ features: ['SMILES', 'PUBCHEM_CID', 'split', 'AID', 'Y'],
+ num_rows: 17069726
+ })
+ validation: Dataset({
+ features: ['SMILES', 'PUBCHEM_CID', 'split', 'AID', 'Y'],
+ num_rows: 12728449
+ })
+ })
 
  ### Use a dataset to train a model
  One way to use the dataset is through the [MolFlux](https://exscientia.github.io/molflux/) package developed by Exscientia.
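
Given the split sizes reported in the updated README (the train split alone has 138,547,273 rows), it can be more practical to stream the dataset than to materialize it in memory. The sketch below is illustrative only: it assumes the split names and columns (`SMILES`, `PUBCHEM_CID`, `split`, `AID`, `Y`) shown in the diff above, and the assay ID used for filtering is a hypothetical placeholder.

```python
import datasets

# Stream the train split so the ~138.5M rows are read lazily rather than
# downloaded and decoded into memory all at once.
train_stream = datasets.load_dataset(
    "maomlab/MolData", split="train", streaming=True
)

# Keep only records for one assay; EXAMPLE_AID is a hypothetical value,
# replace it with a real PubChem AID (and match the column's actual dtype).
EXAMPLE_AID = 1030
assay_rows = train_stream.filter(lambda row: row["AID"] == EXAMPLE_AID)

# Collect a small sample of (SMILES, label) pairs from the filtered stream.
sample = []
for row in assay_rows:
    sample.append((row["SMILES"], row["Y"]))
    if len(sample) == 10:
        break

print(sample[:3])
```

Streaming keeps a first-pass inspection cheap; for actual model training, the non-streaming `DatasetDict` shown in the diff, or the MolFlux workflow linked in the README, can be used instead.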