BhumikaMak committed · verified
Commit 1fb12c9 · 1 Parent(s): c602862

object alignment

Files changed (1)
  1. app.py +8 -5
app.py CHANGED
@@ -65,9 +65,12 @@ body {
     background-position: center;
     background-size: cover;
     background-attachment: fixed;
-    height: 100%; /* Ensure body height is 100% of the viewport */
-    margin: 0;
     overflow-y: auto; /* Allow vertical scrolling */
+    display: flex;
+    justify-content: center;
+    align-items: center;
+    height: 100vh;
+    margin: 0;
 }

 .custom-row {
@@ -182,11 +185,11 @@ with gr.Blocks(css=custom_css) as interface:
     )

     gr.Markdown("""
-    ##Concept Discovery involves identifying interpretable high-level features or concepts within a deep learning model's representation. It aims to understand what a model has learned and how these learned features relate to meaningful attributes in the data.
+    ## Concept Discovery involves identifying interpretable high-level features or concepts within a deep learning model's representation. It aims to understand what a model has learned and how these learned features relate to meaningful attributes in the data.

-    ##Deep Feature Factorization (DFF) is a technique that decomposes the deep features learned by a model into disentangled and interpretable components. It typically involves matrix factorization methods applied to activation maps, enabling the identification of semantically meaningful concepts captured by the model.
+    ## Deep Feature Factorization (DFF) is a technique that decomposes the deep features learned by a model into disentangled and interpretable components. It typically involves matrix factorization methods applied to activation maps, enabling the identification of semantically meaningful concepts captured by the model.

-    ##Together, these methods enhance model interpretability and provide insights into the decision-making process of neural networks.
+    ## Together, these methods enhance model interpretability and provide insights into the decision-making process of neural networks.

     """)
     with gr.Row(elem_classes="custom-row"):
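The DFF description added in this commit mentions matrix factorization applied to activation maps. A minimal sketch of that idea, assuming scikit-learn's `NMF` and a random tensor standing in for real conv-layer activations (the shapes and variable names here are illustrative, not taken from app.py):

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical activation maps from one conv layer: (channels, height, width).
activations = np.random.rand(64, 7, 7).astype(np.float32)

# Flatten the spatial dimensions: each column is the channel vector
# at one spatial position, giving a non-negative matrix A of shape (64, 49).
A = activations.reshape(64, -1)

# DFF-style decomposition A ≈ W @ H via non-negative matrix factorization:
# each row of H is one "concept" heat-map over spatial positions.
k = 3
model = NMF(n_components=k, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(A)          # (64, k): channel loadings per concept
H = model.components_               # (k, 49): concept strength per position
concept_maps = H.reshape(k, 7, 7)   # reshape back into spatial heat-maps
```

Upsampling `concept_maps` to the input resolution and overlaying them on the image is what produces the concept visualizations the Markdown text refers to.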