BhumikaMak committed on
Commit c602862 · verified · 1 Parent(s): 975eaa4

Update app.py

Files changed (1): app.py +5 -5
app.py CHANGED
@@ -153,8 +153,8 @@ with gr.Blocks(css=custom_css) as interface:
     value=["yolov5"],
     label="Select Model(s)",
 )
-with gr.Row(elem_classes="custom-row"):
-    run_button = gr.Button("Run", elem_classes="custom-button")
+#with gr.Row(elem_classes="custom-row"):
+run_button = gr.Button("Run", elem_classes="custom-button")
 
 
 with gr.Column():
@@ -182,11 +182,11 @@ with gr.Blocks(css=custom_css) as interface:
 )
 
 gr.Markdown("""
-Concept Discovery involves identifying interpretable high-level features or concepts within a deep learning model's representation. It aims to understand what a model has learned and how these learned features relate to meaningful attributes in the data.
+##Concept Discovery involves identifying interpretable high-level features or concepts within a deep learning model's representation. It aims to understand what a model has learned and how these learned features relate to meaningful attributes in the data.
 
-Deep Feature Factorization (DFF) is a technique that decomposes the deep features learned by a model into disentangled and interpretable components. It typically involves matrix factorization methods applied to activation maps, enabling the identification of semantically meaningful concepts captured by the model.
+##Deep Feature Factorization (DFF) is a technique that decomposes the deep features learned by a model into disentangled and interpretable components. It typically involves matrix factorization methods applied to activation maps, enabling the identification of semantically meaningful concepts captured by the model.
 
-Together, these methods enhance model interpretability and provide insights into the decision-making process of neural networks.
+##Together, these methods enhance model interpretability and provide insights into the decision-making process of neural networks.
 
 """)
 with gr.Row(elem_classes="custom-row"):
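The Markdown text in the diff describes Deep Feature Factorization as matrix factorization applied to activation maps. As a rough illustration of that idea (not the implementation used in this app, which would take real activations from a model such as yolov5), the sketch below runs a minimal non-negative matrix factorization over a toy `(channels, height, width)` activation tensor; the function name, shapes, and random data are all assumptions for the example.

```python
# Hypothetical DFF sketch: multiplicative-update NMF on a CNN activation map,
# so each factor yields a spatial "concept" heatmap. Toy data, not app.py code.
import numpy as np

def factorize_activations(activations, n_concepts=3, n_iter=200, eps=1e-9):
    """Factor a (C, H, W) activation tensor into n_concepts spatial heatmaps.

    Flattens activations to a non-negative matrix A (C x HW) and approximates
    A ~= W @ H with NMF; rows of H are per-concept heatmaps over space.
    """
    C, Ht, Wd = activations.shape
    A = np.maximum(activations.reshape(C, Ht * Wd), 0)  # clamp like ReLU output
    rng = np.random.default_rng(0)
    W = rng.random((C, n_concepts)) + eps
    H = rng.random((n_concepts, Ht * Wd)) + eps
    for _ in range(n_iter):
        # Standard multiplicative updates keep W and H non-negative.
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    heatmaps = H.reshape(n_concepts, Ht, Wd)
    return W, heatmaps

# Toy activations standing in for a real model's feature maps.
activations = np.random.default_rng(1).random((16, 8, 8))
W, heatmaps = factorize_activations(activations)
print(heatmaps.shape)  # one spatial map per discovered concept
```

In a full pipeline the heatmaps would be upsampled to image size and overlaid to visualize which regions each concept responds to.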