BhumikaMak committed on
Commit b525512 · verified · 1 Parent(s): 8d948ee

resolved: para formatting

Files changed (1): app.py (+3, -5)
app.py CHANGED
@@ -130,7 +130,7 @@ body {
 with gr.Blocks(css=custom_css) as interface:
 
     gr.HTML("""
-    <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-title">NeuralVista</span>
+    <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-title">NeuralVista</span><br>
 
     A powerful tool designed to help you <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-text">visualize</span> models in action.
     """)
@@ -186,11 +186,9 @@ with gr.Blocks(css=custom_css) as interface:
     )
 
     gr.HTML("""
-    <span style="color: #E6E6FA; font-weight: bold;">Concept Discovery</span> involves identifying interpretable high-level features or concepts within a deep learning model's representation.
-
-    It aims to understand what a model has learned and how these learned features relate to meaningful attributes in the data.
+    <span style="color: #E6E6FA; font-weight: bold;">Concept Discovery</span> involves identifying interpretable high-level features or concepts within a deep learning model's representation. It aims to understand what a model has learned and how these learned features relate to meaningful attributes in the data. <br>
 
-    <span style="color: #E6E6FA; font-weight: bold;">Deep Feature Factorization (DFF)</span> is a technique that decomposes the deep features learned by a model into <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-text">disentangled and interpretable components</span>. It typically involves matrix factorization methods applied to activation maps, enabling the identification of semantically meaningful concepts captured by the model.
+    <span style="color: #E6E6FA; font-weight: bold;">Deep Feature Factorization (DFF)</span> is a technique that decomposes the deep features learned by a model into <span style="color: #E6E6FA; font-weight: bold;">disentangled and interpretable components</span>. It typically involves matrix factorization methods applied to activation maps, enabling the identification of semantically meaningful concepts captured by the model.
 
     Together, these methods enhance model interpretability and provide insights into the decision-making process of neural networks.
     """)