christopher committed · Commit fa01510 · verified · 1 Parent(s): 3035ec4

Update README.md

Files changed (1): README.md +20 -1
size_categories:
- 1K<n<10K
---
> [!NOTE]
> This dataset card is based on the README file of the authors' GitHub repository: https://github.com/greydanus/mnist1d

# The MNIST-1D Dataset

Most machine learning models get around the same ~99% test accuracy on MNIST. Our dataset, MNIST-1D, is 100x smaller (default sample size: 4000+1000; dimensionality: 40) and does a better job of separating models with and without nonlinearity, and models with and without spatial inductive biases.

MNIST-1D is a core teaching dataset in Simon Prince's [Understanding Deep Learning](https://udlbook.github.io/udlbook/) textbook.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5e70f6048ce3c604d78fe133/VhgTkDsRQ24LVCsup9oMX.png)

## Comparing MNIST and MNIST-1D

| Dataset              | Logistic Regression | MLP  | CNN  | GRU* | Human Expert |
|----------------------|---------------------|------|------|------|--------------|
| MNIST                | 92%                 | 99+% | 99+% | 99+% | 99+%         |
| MNIST-1D             | 32%                 | 68%  | 94%  | 91%  | 96%          |
| MNIST-1D (shuffle**) | 32%                 | 68%  | 56%  | 57%  | ~30%         |

*Training the GRU takes at least 10x the walltime of the CNN.

**The term "shuffle" refers to shuffling the spatial dimension of the dataset, as in [Zhang et al. (2017)](https://arxiv.org/abs/1611.03530).
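The shuffle ablation is straightforward to reproduce: draw one random permutation of the 40 spatial positions and apply it identically to every example, so the values survive but the spatial structure does not. A minimal NumPy sketch on randomly generated stand-in data (not the real dataset, whose inputs live in the `x` column):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for MNIST-1D inputs: 5 examples, 40 spatial positions each
x = rng.normal(size=(5, 40))

# One fixed permutation of the spatial dimension, applied to every example
perm = rng.permutation(x.shape[1])
x_shuffled = x[:, perm]

# Shape is preserved; each row keeps the same values, just reordered
assert x_shuffled.shape == x.shape
assert np.allclose(np.sort(x_shuffled, axis=1), np.sort(x, axis=1))
```

Because the permutation is shared across examples, a model without spatial inductive biases (e.g. an MLP) is unaffected, which is exactly what the table's matching 68% entries show.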

## Dataset Creation

This version of the dataset was created using the pickle file provided by the dataset authors in the original repository: [mnist1d_data.pkl](https://github.com/greydanus/mnist1d/blob/master/mnist1d_data.pkl), and was generated as follows:

```python
import pickle

from datasets import Dataset, DatasetDict

# Load the pickle file from the authors' repository
with open("mnist1d_data.pkl", "rb") as f:
    data = pickle.load(f)

# Key names for the train split are assumed by analogy with the test split
train = Dataset.from_dict({"x": data["x"], "y": data["y"]})
test = Dataset.from_dict({"x": data["x_test"], "y": data["y_test"]})
DatasetDict({"train": train, "test": test}).push_to_hub("christopher/mnist1d")
```

## Dataset Usage

Using the `datasets` library: