BhumikaMak committed (verified)
Commit 9203d76 · Parent(s): 92546a4

changed text font and colour

Files changed (1): app.py (+9 -7)
app.py CHANGED
@@ -122,11 +122,11 @@ body {
 # Then in the Gradio interface:
 
 with gr.Blocks(css=custom_css) as interface:
-    gr.Markdown("""
-    ## <span id="neural-vista-title">NeuralVista</span>
-    A powerful tool designed to help you visualize models in action.
-
-    """)
+
+    gr.HTML("""
+    <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-title">NeuralVista</span>
+    A powerful tool designed to help you <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-title">visualize</span> models in action.
+    """)
 
     # Default sample
     default_sample = "Sample 1"
@@ -179,9 +179,11 @@ with gr.Blocks(css=custom_css) as interface:
     )
 
     gr.HTML("""
-    <span style="color: purple;">Concept Discovery</span> involves identifying interpretable high-level features or concepts within a deep learning model's representation. It aims to understand what a model has learned and how these learned features relate to meaningful attributes in the data.
+    <span style="color: #E6E6FA; font-weight: bold;">Concept Discovery</span> involves identifying interpretable high-level features or concepts within a deep learning model's representation.
+
+    It aims to understand what a model has learned and how these learned features relate to meaningful attributes in the data.
 
-    <span style="color: purple;">Deep Feature Factorization (DFF)</span> is a technique that decomposes the deep features learned by a model into disentangled and interpretable components. It typically involves matrix factorization methods applied to activation maps, enabling the identification of semantically meaningful concepts captured by the model.
+    <span style="color: #E6E6FA; font-weight: bold;">Deep Feature Factorization (DFF)</span> is a technique that decomposes the deep features learned by a model into <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-title">disentangled and interpretable components</span>. It typically involves matrix factorization methods applied to activation maps, enabling the identification of semantically meaningful concepts captured by the model.
 
     Together, these methods enhance model interpretability and provide insights into the decision-making process of neural networks.
     """)
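The DFF description in the second hunk (matrix factorization applied to activation maps) can be sketched in a few lines. This is a minimal NumPy-only illustration, not the app's actual implementation: the function name, the toy activations, and the use of multiplicative-update NMF are assumptions made for the example.

```python
import numpy as np

def deep_feature_factorization(activations, n_concepts=3, n_iter=200, seed=0):
    """Toy DFF: non-negative matrix factorization of one activation map.

    activations: array of shape (channels, height, width), assumed
    non-negative (e.g. post-ReLU). Returns (W, heatmaps), where W has shape
    (channels, n_concepts) and heatmaps has shape (n_concepts, height, width),
    i.e. one spatial "concept" map per factor.
    """
    c, h, w = activations.shape
    V = activations.reshape(c, h * w)          # flatten spatial dims
    rng = np.random.default_rng(seed)
    W = rng.random((c, n_concepts)) + 1e-4     # non-negative init
    H = rng.random((n_concepts, h * w)) + 1e-4
    eps = 1e-10
    for _ in range(n_iter):                    # Lee-Seung multiplicative updates
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H.reshape(n_concepts, h, w)

# Fake post-ReLU activations: 8 channels over a 4x4 spatial grid.
acts = np.abs(np.random.default_rng(1).normal(size=(8, 4, 4)))
W, heatmaps = deep_feature_factorization(acts, n_concepts=3)
print(W.shape, heatmaps.shape)  # (8, 3) (3, 4, 4)
```

Each row of `heatmaps` can then be upsampled and overlaid on the input image, which is the kind of visualization the app renders.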