sashavor committed · commit fb9010f · 1 parent: cacde98
shifting things around, collapsing Accordion

app.py CHANGED
@@ -13,6 +13,13 @@ _INTRO = """
 
 Explore the data generated from [DiffusionBiasExplorer](https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer)!
 This demo showcases patterns in the images generated from different prompts input to Stable Diffusion and Dalle-2 systems.
+We encourage users to take advantage of this app to explore those trends, for example through the lens of the following questions:
+- Find the cluster that has the most prompts denoting a gender or ethnicity that you identify with. Do you think the generated images look like you?
+- Find two clusters that have a similar distribution of gender terms but different distributions of ethnicity terms. Do you see any meaningful differences in how gender is visually represented?
+- Do you find that some ethnicity terms lead to more stereotypical visual representations than others?
+- Do you find that some gender terms lead to more stereotypical visual representations than others?
+
+These questions only scratch the surface of what we can learn from demos like this one, let us know what you find [in the discussions tab](https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering/discussions), or if you think of other relevant questions!
 """
 
 _CONTEXT = """
@@ -28,13 +35,6 @@ we should not assign a specific gender or ethnicity to a synthetic figure genera
 In this app, we instead take a 2-step clustering-based approach. First, we generate 680 images for each model by varying mentions of terms that denote gender or ethnicity in the prompts.
 Then, we use a [VQA-based model](https://huggingface.co/Salesforce/blip-vqa-base) to cluster these images at different granularities (12, 24, or 48 clusters).
 Exploring these clusters allows us to examine trends in the models' associations between visual features and textual representation of social attributes.
-We encourage users to take advantage of this app to explore those trends, for example through the lens of the following questions:
-- Find the cluster that has the most prompts denoting a gender or ethnicity that you identify with. Do you think the generated images look like you?
-- Find two clusters that have a similar distribution of gender terms but different distributions of ethnicity terms. Do you see any meaningful differences in how gender is visually represented?
-- Do you find that some ethnicity terms lead to more stereotypical visual representations than others?
-- Do you find that some gender terms lead to more stereotypical visual representations than others?
-
-These questions only scratch the surface of what we can learn from demos like this one, let us know what you find [in the discussions tab](https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering/discussions), or if you think of other relevant questions!
 """
 
 clusters_12 = json.load(open("clusters/id_all_blip_clusters_12.json"))
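For context, the 2-step approach described in _CONTEXT can be sketched as follows. This is a minimal, hypothetical reconstruction, not the Space's actual pipeline: the question list, the "images" directory layout, the one-hot answer encoding, and the KMeans settings are all assumptions; only the BLIP VQA checkpoint and the 12/24/48 granularities come from this diff.

from pathlib import Path

import torch
from PIL import Image
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder
from transformers import BlipForQuestionAnswering, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

# Illustrative probes only; the authors' exact question set is not in this diff.
QUESTIONS = [
    "What is the gender of the person?",
    "What is the ethnicity of the person?",
]

def vqa_answers(image_path: Path) -> list[str]:
    # Ask BLIP each question about one image and collect the text answers.
    image = Image.open(image_path).convert("RGB")
    answers = []
    for question in QUESTIONS:
        inputs = processor(image, question, return_tensors="pt")
        with torch.no_grad():
            out = model.generate(**inputs)
        answers.append(processor.decode(out[0], skip_special_tokens=True))
    return answers

# Step 1 (not shown here): the 680 images per model generated by varying
# gender/ethnicity terms in the prompts. Assumed to live under "images/".
image_paths = sorted(Path("images").glob("*.png"))
profiles = [vqa_answers(p) for p in image_paths]

# Step 2: one-hot encode the categorical answer profiles and cluster them
# at each of the granularities the app exposes.
features = OneHotEncoder().fit_transform(profiles).toarray()
clusterings = {
    k: KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    for k in (12, 24, 48)
}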
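The app loads one such precomputed file per granularity; a small sketch that loads all three at once and closes the file handles (the _24 and _48 filenames are assumed from the _12 pattern above):

import json

clusters_by_granularity = {}
for k in (12, 24, 48):
    # Filenames for 24 and 48 are assumed to follow the _12 naming pattern.
    with open(f"clusters/id_all_blip_clusters_{k}.json") as f:
        clusters_by_granularity[k] = json.load(f)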
@@ -212,7 +212,7 @@ def show_cluster(cl_id, num_clusters):
 with gr.Blocks(title=TITLE) as demo:
     gr.Markdown(_INTRO)
     with gr.Accordion(
-        "How do diffusion-based models represent gender and ethnicity?", open=True
+        "How do diffusion-based models represent gender and ethnicity?", open=False
     ):
         gr.Markdown(_CONTEXT)
         gr.HTML(
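For reference, a minimal standalone Gradio sketch (placeholder title and text, not the Space's actual code) showing what the open=False change above does: the Accordion now renders collapsed, hiding the _CONTEXT text until the user expands it.

import gradio as gr

with gr.Blocks(title="DiffusionFaceClustering") as demo:
    gr.Markdown("Intro text shown up front.")
    with gr.Accordion(
        "How do diffusion-based models represent gender and ethnicity?",
        open=False,  # collapsed by default, as in this commit
    ):
        gr.Markdown("Longer context revealed only when expanded.")

if __name__ == "__main__":
    demo.launch()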