wzuidema committed
Commit d0460c0 · unverified · 1 parent: ce5a659

added a second slider and some examples.
Files changed (1): app.py (+24 -5)
app.py CHANGED

@@ -273,9 +273,10 @@ hila = gradio.Interface(
     inputs=["text", layer_slider],
     outputs="html",
 )
+layer_slider2 = gradio.Slider(minimum=0, maximum=12, value=0, step=1, label="Select layer")
 lig = gradio.Interface(
     fn=sentence_sentiment,
-    inputs=["text", layer_slider],
+    inputs=["text", layer_slider2],
     outputs="html",
 )
 
@@ -291,18 +292,36 @@ But how does it arrive at its classification? A range of so-called "attribution
 Two key methods for Transformers are "attention rollout" (Abnar & Zuidema, 2020) and (layer) Integrated Gradient. Here we show:
 
 * Gradient-weighted attention rollout, as defined by [Hila Chefer](https://github.com/hila-chefer)
-[(Transformer-MM_explainability)](https://github.com/hila-chefer/Transformer-MM-Explainability/)
-* Layer IG, as implemented in [Captum](https://captum.ai/)(LayerIntegratedGradients)
+[(Transformer-MM_explainability)](https://github.com/hila-chefer/Transformer-MM-Explainability/), without rollout recursion upto selected layer
+* Layer IG, as implemented in [Captum](https://captum.ai/)(LayerIntegratedGradients), based on gradient w.r.t. selected layer.
 """,
     examples=[
         [
            "This movie was the best movie I have ever seen! some scenes were ridiculous, but acting was great",
-            8
+            8,0
         ],
         [
            "I really didn't like this movie. Some of the actors were good, but overall the movie was boring",
-            8
+            8,0
+        ],
+        [
+            "If the acting had been better, this movie might have been pretty good.",
+            8,0
+        ],
+        [
+            "If he had hated it, he would not have said that he loved it.",
+            8,3
         ],
+        [
+            "If he had hated it, he would not have said that he loved it.",
+            8,9
+        ],
+        [
+            "Attribution methods are very interesting, but unfortunately do not work reliably out of the box.",
+            8,0
+        ],
+
+
     ],
 )
 iface.launch()
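
For reference, the combined interface behind these examples now takes three inputs (the text plus one layer per method), which is why each example row gained a second number. Below is a minimal sketch of that wiring, assuming a single combined gradio.Interface with hypothetical stand-in functions rollout_html and lig_html and guessed slider labels; it is not the app's real hila/lig/sentence_sentiment setup.

```python
import gradio

# Hypothetical stand-ins for the app's real attribution functions;
# only the input/output shapes matter for this sketch.
def rollout_html(text, rollout_layer):
    return f"<p>rollout from layer {rollout_layer}: {text}</p>"

def lig_html(text, ig_layer):
    return f"<p>layer-IG at layer {ig_layer}: {text}</p>"

# One Slider instance per input slot, as in the commit (labels are guesses).
layer_slider = gradio.Slider(minimum=0, maximum=12, value=8, step=1, label="Rollout start layer")
layer_slider2 = gradio.Slider(minimum=0, maximum=12, value=0, step=1, label="Select layer")

def both_methods(text, rollout_layer, ig_layer):
    # Run both attribution views on the same text.
    return rollout_html(text, rollout_layer), lig_html(text, ig_layer)

iface = gradio.Interface(
    fn=both_methods,
    inputs=["text", layer_slider, layer_slider2],
    outputs=["html", "html"],
    # Each example supplies one value per input: text, rollout layer, IG layer.
    examples=[
        ["If he had hated it, he would not have said that he loved it.", 8, 3],
    ],
)

if __name__ == "__main__":
    iface.launch()
```

Giving each interface its own gradio.Slider instance, as the commit does with layer_slider2, keeps the two layer choices independent in the UI.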
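The first method named in the description is Hila Chefer's gradient-weighted attention rollout, restricted so that the rollout recursion only runs from a selected layer upward. The following is a rough sketch of that idea, assuming a generic 12-layer BERT-style sentiment checkpoint (textattack/bert-base-uncased-SST-2 is a stand-in, not necessarily the model in app.py) and reading "without rollout recursion upto selected layer" as skipping the layers below the slider value.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed 12-layer checkpoint for illustration only.
MODEL = "textattack/bert-base-uncased-SST-2"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def rollout_from_layer(text, start_layer=8, target=1):
    enc = tokenizer(text, return_tensors="pt")
    out = model(**enc, output_attentions=True)
    # Gradient of the target-class logit w.r.t. every attention map.
    grads = torch.autograd.grad(out.logits[0, target], out.attentions)
    n = out.attentions[0].shape[-1]
    relevance = torch.eye(n)
    # Apply the rollout recursion only from `start_layer` upward,
    # instead of rolling out all the way from layer 0.
    for attn, grad in list(zip(out.attentions, grads))[start_layer:]:
        # Head-average of the gradient-weighted attention, clamped at 0.
        cam = (grad * attn).clamp(min=0).mean(dim=1)[0]
        relevance = relevance + torch.matmul(cam, relevance)
    return relevance[0]  # relevance of each token for the [CLS] position

print(rollout_from_layer("I really didn't like this movie.", start_layer=8))
```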
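The second method is Layer Integrated Gradients from Captum, taken w.r.t. the selected layer. Below is a sketch under the same assumed checkpoint; mapping slider value 0 to the embedding layer and 1-12 to encoder blocks is a guess, not something the diff specifies.

```python
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "textattack/bert-base-uncased-SST-2"  # assumed checkpoint, as above
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def forward_logits(input_ids, attention_mask):
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

def layer_ig_scores(text, layer_idx=0, target=1):
    enc = tokenizer(text, return_tensors="pt")
    # Guessed slider semantics: 0 = embeddings, 1..12 = encoder blocks.
    layer = (model.bert.embeddings if layer_idx == 0
             else model.bert.encoder.layer[layer_idx - 1])
    lig = LayerIntegratedGradients(forward_logits, layer)
    # Baseline: the same sequence with every token replaced by [PAD].
    baseline = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)
    attrs = lig.attribute(
        inputs=enc["input_ids"],
        baselines=baseline,
        additional_forward_args=(enc["attention_mask"],),
        target=target,
    )
    if isinstance(attrs, tuple):  # some layers expose multiple outputs
        attrs = attrs[0]
    # Sum the hidden dimension to get one attribution score per token.
    return attrs.sum(dim=-1).squeeze(0)

print(layer_ig_scores("I really didn't like this movie.", layer_idx=0))
```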