Spaces:
updated dff desc
app.py
CHANGED
@@ -189,12 +189,12 @@ with gr.Blocks(css=custom_css) as interface:
     )

     gr.HTML("""
-        <span style="
-
-        <span style="color: purple; font-weight: bold;">Deep Feature Factorization (DFF)</span> is a technique that decomposes the deep features learned by a model into disentangled and interpretable components. It typically involves matrix factorization methods applied to activation maps, enabling the identification of semantically meaningful concepts captured by the model.
-
-        Together, these methods enhance model interpretability and provide insights into the decision-making process of neural networks.
-        """)
+        <span style="font-family: 'Papyrus', cursive; font-size: 14px;">
+        Concept Discovery is the process of uncovering the hidden, high-level features that a deep learning model has learned. It provides a way to understand the essence of its internal representations, akin to peering into the mind of the model and revealing the meaningful patterns it detects in the data.
+        <br><br>
+        Deep Feature Factorization (DFF) serves as a tool for breaking down these complex features into simpler, more interpretable components. By applying matrix factorization on activation maps, it untangles the intricate web of learned representations, making it easier to comprehend what the model is truly focusing on. Together, these methods bring us closer to understanding the underlying logic of neural networks, shedding light on the often enigmatic decisions they make.
+        </span>
+        """)
     with gr.Row(elem_classes="custom-row"):
         dff_gallery = gr.Gallery(
             label="Deep Feature Factorization",
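The description updated in this commit says DFF applies matrix factorization to activation maps to recover concept components. A minimal sketch of that idea, using non-negative matrix factorization (NMF) from scikit-learn on random stand-in activations — the layer shape, concept count, and variable names here are illustrative assumptions, not this Space's actual code:

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical post-ReLU activations from one conv layer: (channels, height, width).
# ReLU outputs are non-negative, which is what NMF requires.
activations = np.random.rand(64, 7, 7).astype(np.float32)

c, h, w = activations.shape
# Flatten spatial positions into rows: each row is the channel vector at one pixel.
flat = activations.reshape(c, h * w).T            # shape (h*w, channels)

k = 4                                             # number of concepts to extract
nmf = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
concept_maps = nmf.fit_transform(flat)            # (h*w, k): per-pixel concept strengths
concept_vectors = nmf.components_                 # (k, channels): concept directions

# Reshape each concept's spatial strengths back into a heatmap over the image.
heatmaps = concept_maps.T.reshape(k, h, w)
```

Upsampling each of the `k` heatmaps to the input resolution and overlaying them on the image is what produces the kind of concept visualizations a gallery like `dff_gallery` would display.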