sayakpaul committed
Commit 034dd44 · 1 Parent(s): bc158f1

Update README.md
Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -11,7 +11,7 @@ pinned: false
 
  *By: [Hila Chefer](https://hila-chefer.github.io) and [Sayak Paul](https://sayak.dev)*
 
- *Website: [atv.github.io])https://atv.github.io)*
+ *Website: [atv.github.io](https://atv.github.io)*
 
  *Abstract: In this tutorial, we explore different ways to leverage attention in vision. From left to right: (i) attention can be used to explain the predictions by the model (e.g., CLIP for an image-text pair) (ii) By manipulating the attention-based explainability maps, one can enforce that the prediction is made based on the right reasons (e.g., foreground vs. background) (iii) The cross-attention maps of multi-modal models can be used to guide generative models (e.g., mitigating neglect in Stable Diffusion).*