changed text font and colour
app.py
CHANGED
@@ -122,11 +122,11 @@ body {
 # Then in the Gradio interface:
 
 with gr.Blocks(css=custom_css) as interface:
-
-
-
-
-
+
+    gr.HTML("""
+        <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-title">NeuralVista</span>
+        A powerful tool designed to help you <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-title">visualize</span> models in action.
+    """)
 
     # Default sample
     default_sample = "Sample 1"
@@ -179,9 +179,11 @@ with gr.Blocks(css=custom_css) as interface:
     )
 
     gr.HTML("""
-        <span style="color:
+        <span style="color: #E6E6FA; font-weight: bold;">Concept Discovery</span> involves identifying interpretable high-level features or concepts within a deep learning model's representation.
+
+        It aims to understand what a model has learned and how these learned features relate to meaningful attributes in the data.
 
-        <span style="color:
+        <span style="color: #E6E6FA; font-weight: bold;">Deep Feature Factorization (DFF)</span> is a technique that decomposes the deep features learned by a model into <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-title">disentangled and interpretable components</span>. It typically involves matrix factorization methods applied to activation maps, enabling the identification of semantically meaningful concepts captured by the model.
 
     Together, these methods enhance model interpretability and provide insights into the decision-making process of neural networks.
     """)
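The first hunk is the standard Gradio pattern of passing a css string to gr.Blocks and targeting injected markup by id. Below is a minimal self-contained sketch of that pattern, assuming only that gradio is installed; the body of custom_css is a hypothetical stand-in, since the diff does not show the app's actual CSS. Note that the commit reuses id="neural-vista-title" on two spans; HTML ids must be unique, so the sketch keeps the id on the title span only.

# Minimal runnable sketch of the pattern in the first hunk, not the full app.
import gradio as gr

# Hypothetical stand-in for the app's custom_css (not shown in the diff);
# it targets the id used in the gr.HTML snippet added by this commit.
custom_css = """
#neural-vista-title {
    font-family: 'Trebuchet MS', sans-serif;
    font-size: 1.4em;
}
"""

with gr.Blocks(css=custom_css) as interface:
    # Inline styles set the colour and weight; the id lets custom_css add more.
    gr.HTML("""
        <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-title">NeuralVista</span>
        A powerful tool designed to help you <span style="color: #E6E6FA; font-weight: bold;">visualize</span> models in action.
    """)

interface.launch()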
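The DFF description in the second hunk corresponds to a small, concrete computation: flatten a convolutional activation map into a channels-by-positions matrix, factorize it with non-negative matrix factorization, and read each factor as a spatial concept heatmap. A sketch under that description, using scikit-learn's NMF on random stand-in activations; the shapes and the concept count k=4 are assumptions, not values from app.py.

# Sketch of DFF as described above, assuming NumPy and scikit-learn.
import numpy as np
from sklearn.decomposition import NMF

# Activation map of one image from a conv layer: C channels on an H x W grid.
C, H, W = 512, 14, 14
activations = np.abs(np.random.randn(C, H, W))  # non-negative, like ReLU outputs

# Flatten the spatial grid: column j holds the C-dim feature vector at position j.
A = activations.reshape(C, H * W)

# Factorize A ~= W_f @ H_f into k non-negative components ("concepts").
k = 4
nmf = NMF(n_components=k, init="nndsvda", max_iter=500)
W_f = nmf.fit_transform(A)   # (C, k): how strongly each channel loads on each concept
H_f = nmf.components_        # (k, H*W): how strongly each concept fires at each position

# Reshaping a row of H_f gives a spatial heatmap for that concept.
concept_heatmaps = H_f.reshape(k, H, W)
print(concept_heatmaps.shape)  # (4, 14, 14)

Upsampled to the input resolution and overlaid on the image, those per-concept heatmaps are the usual way DFF visualizations are rendered.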