resolved: font size
app.py
CHANGED
@@ -116,6 +116,12 @@ body {
     font-weight: bold;
     text-align: center;
 }
+#neural-vista-text {
+    color: purple !important; /* Purple color for the title */
+    font-size: 14px; /* Adjust font size as needed */
+    font-weight: bold;
+    text-align: center;
+}
 
 """
 
@@ -125,7 +131,8 @@ with gr.Blocks(css=custom_css) as interface:
 
     gr.HTML("""
     <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-title">NeuralVista</span>
-
+
+    A powerful tool designed to help you <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-text">visualize</span> models in action.
     """)
 
     # Default sample
@@ -183,7 +190,7 @@ with gr.Blocks(css=custom_css) as interface:
 
     It aims to understand what a model has learned and how these learned features relate to meaningful attributes in the data.
 
-    <span style="color: #E6E6FA; font-weight: bold;">Deep Feature Factorization (DFF)</span> is a technique that decomposes the deep features learned by a model into <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-
+    <span style="color: #E6E6FA; font-weight: bold;">Deep Feature Factorization (DFF)</span> is a technique that decomposes the deep features learned by a model into <span style="color: #E6E6FA; font-weight: bold;" id="neural-vista-text">disentangled and interpretable components</span>. It typically involves matrix factorization methods applied to activation maps, enabling the identification of semantically meaningful concepts captured by the model.
 
     Together, these methods enhance model interpretability and provide insights into the decision-making process of neural networks.
     """)
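For context on the first two hunks: the pattern being edited is a custom CSS rule keyed to an element id, injected through gr.Blocks(css=...) and referenced from gr.HTML. A minimal runnable sketch of that pattern, reconstructed here as an assumption rather than copied from app.py:

# Minimal sketch (assumed, not this Space's actual app.py): inject a CSS
# rule keyed to an element id via gr.Blocks(css=...), then reference the
# id from markup passed to gr.HTML.
import gradio as gr

custom_css = """
#neural-vista-text {
    color: purple !important; /* Purple color for the styled span */
    font-size: 14px;          /* Adjust font size as needed */
    font-weight: bold;
    text-align: center;
}
"""

with gr.Blocks(css=custom_css) as interface:
    # The id attribute is what the #neural-vista-text selector targets.
    gr.HTML('<span id="neural-vista-text">visualize</span>')

if __name__ == "__main__":
    interface.launch()

The !important flag is what lets the rule win over Gradio's theme styles, which is presumably why the commit resolves the font-size issue this way.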
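The description added in the last hunk (matrix factorization applied to activation maps) can be made concrete. Below is a minimal sketch of Deep Feature Factorization using scikit-learn's NMF on the last-stage activations of a torchvision ResNet-18; the function name dff_concepts and all model choices are illustrative assumptions, not part of this Space.

# Minimal DFF sketch (assumed, not from this repo): factorize a CNN's
# activation maps with non-negative matrix factorization so each NMF
# component becomes a spatial heatmap for one "concept".
import numpy as np
import torch
from sklearn.decomposition import NMF
from torchvision import models

def dff_concepts(image_batch: torch.Tensor, n_concepts: int = 4):
    """Factorize last-conv activations of shape (B, C, H, W) into
    n_concepts spatial heatmaps per image."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    # Keep everything up to (and including) the last conv stage.
    features = torch.nn.Sequential(*list(model.children())[:-2])
    with torch.no_grad():
        acts = features(image_batch)          # (B, C, H, W)
    b, c, h, w = acts.shape
    # Stack every spatial position of every image as a row: (B*H*W, C).
    flat = acts.permute(0, 2, 3, 1).reshape(-1, c).numpy()
    flat = np.maximum(flat, 0)                # NMF requires non-negative input
    nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=400)
    weights = nmf.fit_transform(flat)         # (B*H*W, n_concepts)
    # Each column, reshaped per image, is a heatmap for one concept.
    heatmaps = weights.reshape(b, h, w, n_concepts).transpose(0, 3, 1, 2)
    return heatmaps, nmf.components_          # components_: (n_concepts, C)

The heatmaps are what a tool like this Space would upsample and overlay on the input image to show which regions each disentangled component responds to.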