SAELens
ArthurConmyGDM committed
Commit 4c740a4 · verified · 1 Parent(s): bcccf5f

Update README.md

Files changed (1)
  1. README.md +11 -11
README.md CHANGED
@@ -9,17 +9,7 @@ library_name: saelens
 
  This is a landing page for **Gemma Scope**, a comprehensive, open suite of sparse autoencoders for Gemma 2 9B and 2B. Sparse Autoencoders are a "microscope" of sorts that can help us break down a model’s internal activations into the underlying concepts, just as biologists use microscopes to study the individual cells of plants and animals.
 
- # Key links:
-
- - Learn more about Gemma Scope in our [Google DeepMind blog post](https://deepmind.google/discover/blog/gemma-scope-helping-safety-researchers-shed-light-on-the-inner-workings-of-language-models).
- - Check out the [interactive Gemma Scope demo](https://www.neuronpedia.org/gemma-scope) made by [Neuronpedia](https://www.neuronpedia.org/).
- - Check out our [Google Colab notebook tutorial](https://colab.research.google.com/drive/17dQFYUYnuKnP6OwQPH9v_GSYUW5aj-Rp?ts=66a77041) for how to use Gemma Scope.
- - Read [the Gemma Scope technical report](https://storage.googleapis.com/gemma-scope/gemma-scope-report.pdf).
- - Check out [Mishax](https://github.com/google-deepmind/mishax), a GDM internal tool that we used in this project to expose the internal activations inside Gemma 2 models.
-
- # Quick start:
-
- You can get started with Gemma Scope by downloading the weights from any of our repositories:
+ **There are no model weights in this repo. If you are looking for them, please visit:**
 
  - https://huggingface.co/google/gemma-scope-2b-pt-res
  - https://huggingface.co/google/gemma-scope-2b-pt-mlp
@@ -31,6 +21,16 @@ You can get started with Gemma Scope by downloading the weights from any of our
  - https://huggingface.co/google/gemma-scope-9b-it-res
  - https://huggingface.co/google/gemma-scope-27b-pt-res
 
+ # Key links:
+
+ - Learn more about Gemma Scope in our [Google DeepMind blog post](https://deepmind.google/discover/blog/gemma-scope-helping-the-safety-community-shed-light-on-the-inner-workings-of-language-models).
+ - Check out the [interactive Gemma Scope demo](https://www.neuronpedia.org/gemma-scope) made by [Neuronpedia](https://www.neuronpedia.org/).
+ - Check out our [Google Colab notebook tutorial](https://colab.research.google.com/drive/17dQFYUYnuKnP6OwQPH9v_GSYUW5aj-Rp?ts=66a77041) for how to use Gemma Scope.
+ - Read [the Gemma Scope technical report](https://storage.googleapis.com/gemma-scope/gemma-scope-report.pdf).
+ - Check out [Mishax](https://github.com/google-deepmind/mishax), a GDM internal tool that we used in this project to expose the internal activations inside Gemma 2 models.
+
+ # Full weight set:
+
  The full list of SAEs we trained, along with their sites and layers, is linked from the following table, adapted from Figure 1 of our technical report:
 
  | <big>Gemma 2 Model</big> | <big>SAE Width</big> | <big>Attention</big> | <big>MLP</big> | <big>Residual</big> | <big>Tokens</big> |
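For orientation, here is a minimal sketch of what getting started with one of the weight repos above amounts to: download one SAE's parameters and apply its JumpReLU encoder to an activation vector. The repo id is taken from the list in the diff; the filename layout (`layer_20/width_16k/average_l0_71/params.npz`) and the array names follow the Colab tutorial linked under "Key links", so treat them as assumptions that may vary across repos, layers, and widths.

```python
# Minimal sketch (not an official snippet): fetch one Gemma Scope SAE and run
# its JumpReLU encoder/decoder with plain NumPy.
import numpy as np
from huggingface_hub import hf_hub_download

# Assumed file layout (layer / width / average L0 / params.npz); check the
# repo's file browser for the exact paths available.
path = hf_hub_download(
    repo_id="google/gemma-scope-2b-pt-res",
    filename="layer_20/width_16k/average_l0_71/params.npz",
)
params = np.load(path)

# Assumed array names: W_enc (d_model, d_sae), b_enc (d_sae,),
# W_dec (d_sae, d_model), b_dec (d_model,), and a per-latent threshold.
W_enc, b_enc = params["W_enc"], params["b_enc"]
W_dec, b_dec = params["W_dec"], params["b_dec"]
threshold = params["threshold"]

def encode(x: np.ndarray) -> np.ndarray:
    # JumpReLU: like ReLU, but a latent only fires once its pre-activation
    # clears a learned per-latent threshold.
    pre = x @ W_enc + b_enc
    return (pre > threshold) * np.maximum(pre, 0.0)

def decode(f: np.ndarray) -> np.ndarray:
    return f @ W_dec + b_dec

# Stand-in for a real layer-20 residual-stream activation from Gemma 2 2B.
x = np.random.randn(W_enc.shape[0]).astype(np.float32)
f = encode(x)
print(f"{int((f != 0).sum())} of {f.size} latents active")
reconstruction = decode(f)
```

In practice you would feed in real residual-stream activations from Gemma 2 (the Colab tutorial shows how to capture them). The `sae_lens` package (this page's `library_name`) also exposes these weights through `SAE.from_pretrained` if you prefer a library interface; check the SAELens docs for the exact release and id strings.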