Commit c59a64d1
Parent(s): 6aa2b53
Update parquet files (step 56 of 476)
This view is limited to 50 files because it contains too many changes.
- spaces.zip +0 -3
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Film Chokher Bali Aishwarya Rai and Raima Sen in a Riveting Bengali Drama - Download Here.md +0 -122
- spaces/1gistliPinn/ChatGPT4/Examples/Defraggler Pro 2018 Latest Key Crack Keygen Free Download.md +0 -41
- spaces/1phancelerku/anime-remove-background/APKMirror vs Google Play Store Which One is Better for Android 4.0 Users?.md +0 -147
- spaces/1phancelerku/anime-remove-background/Download Zero RPG Kit for Free and Start Making Pixel Art Adventures.md +0 -164
- spaces/AI-ZTH-03-23/3.HTML5-Aframe-3dMap-Flight/README.md +0 -53
- spaces/AIWaves/SOP_Generation-single/app.py +0 -395
- spaces/Aaaaaaaabdualh/meter2poem-1/README.md +0 -14
- spaces/Abhaykoul/Merriam-webster_clone/README.md +0 -13
- spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/Opchatgpts.py +0 -7
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateOverlapSizer.js +0 -8
- spaces/Akmyradov/TurkmenTTSweSTT/vits/README.md +0 -58
- spaces/AmirTrader/LinearRegression/Dockerfile +0 -16
- spaces/Amitontheweb/InstaoffyzFreeParaphraser/app.py +0 -66
- spaces/Amon1/ChatGPTForAcadamic/Dockerfile +0 -13
- spaces/Amrrs/DragGan-Inversion/PTI/utils/models_utils.py +0 -25
- spaces/Andy1621/uniformer_image_detection/configs/centripetalnet/README.md +0 -26
- spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_scoring_roi_head.py +0 -122
- spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/utils/metrics_accumulator.py +0 -18
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/datasets/drive.py +0 -59
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/hsigmoid.py +0 -34
- spaces/Arcader7171/positive/app.py +0 -13
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/ml_nms.py +0 -31
- spaces/BIASLab/sars-cov-2-classification-fcgr/src/models/resnet50_8mers.py +0 -103
- spaces/Benson/text-generation/Examples/Bubble Shooter Classic.md +0 -62
- spaces/Benson/text-generation/Examples/Descarga De Descarga De 1 Apk.md +0 -120
- spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/validate.py +0 -384
- spaces/Binguii/Ballen/README.md +0 -10
- spaces/CForGETaass/vits-uma-genshin-honkai/mel_processing.py +0 -101
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/point_rend/roi_heads.py +0 -222
- spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/core/base_cfgs.py +0 -369
- spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/remove.h +0 -113
- spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/transform.h +0 -22
- spaces/CVPR/MonoScene/monoscene/.ipynb_checkpoints/unet3d_kitti-checkpoint.py +0 -88
- spaces/CVPR/WALT/mmdet/models/dense_heads/sabl_retina_head.py +0 -621
- spaces/CVPR/regionclip-demo/detectron2/data/samplers/grouped_batch_sampler.py +0 -47
- spaces/ChenWu98/Stable-CycleDiffusion/app.py +0 -421
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/resolver.py +0 -160
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_exceptions.py +0 -94
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/unicode.py +0 -50
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/json_component.py +0 -122
- spaces/Dinoking/Guccio-AI-Designer/netdissect/evalablate.py +0 -248
- spaces/DragGan/DragGan-Inversion/PTI/models/e4e/stylegan2/op/__init__.py +0 -2
- spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/bias_act.h +0 -38
- spaces/DragGan/DragGan/stylegan_human/pti/pti_configs/__init__.py +0 -0
- spaces/DylanYan/WizardLM-WizardCoder-Python-34B-V1.0/README.md +0 -12
- spaces/Falah/stablediffusionDB/README.md +0 -12
- spaces/Fr33d0m21/Music_Splitter/app.py +0 -26
- spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/modules/F0Predictor/__init__.py +0 -0
- spaces/GT4SD/regression_transformer/model_cards/regression_transformer_description.md +0 -13
spaces.zip
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:fbb4b253de8e51bfa330e5c7cf31f7841e64ef30c1718d4a05c75e21c8ccf729
-size 671941275
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Film Chokher Bali Aishwarya Rai and Raima Sen in a Riveting Bengali Drama - Download Here.md
DELETED
@@ -1,122 +0,0 @@
-
-<h1>Film Chokher Bali Full Movie Download: A Review of the Bengali Drama Based on Rabindranath Tagore's Novel</h1>
-<p>If you are looking for a film that explores the complexities of human relationships, emotions and morality, you should watch <strong>Chokher Bali</strong>, a 2003 Bengali drama film directed by Rituparno Ghosh and based on Rabindranath Tagore's 1903 novel of the same name. The film stars Aishwarya Rai Bachchan, Raima Sen, Prosenjit Chatterjee, Tota Roy Chowdhury and Lily Chakravarty in pivotal roles.</p>
-<p>The film tells the story of Binodini, a young widow who comes to live with a woman and her son Mahendra, who had once rejected her as a prospective bride. Binodini soon develops a friendship with Mahendra's wife Ashalata, but also an attraction for Mahendra himself. This leads to a web of deceit, adultery, jealousy and revenge that affects all their lives.</p>
-<h2>film Chokher Bali full movie download</h2><br /><p><b><b>Download Zip</b> ✓ <a href="https://byltly.com/2uKzNM">https://byltly.com/2uKzNM</a></b></p><br /><br />
-<p>The film won several awards and accolades, including the National Film Award for Best Feature Film in Bengali, and was screened at various international film festivals. It was also dubbed into Hindi and released worldwide.</p>
-<p>In this article, we will review <strong>Chokher Bali</strong> in detail, covering its plot, cast, direction, music, reception and impact. We will also tell you how you can download or stream this film online legally.</p>
-<h2>The novel Chokher Bali</h2>
-<p>Before we dive into the film adaptation, let us first understand the source material that inspired it. <strong>Chokher Bali</strong> is a novel written by Rabindranath Tagore, one of India's most celebrated writers and Nobel laureates. The novel was first published in 1903 in Bengali as a serial in a magazine called Bangadarshan.</p>
-<p>The novel is set in late 19th century Bengal, during the British colonial rule. It revolves around four main characters: Binodini, a young widow who is intelligent, beautiful and ambitious; Mahendra, a wealthy landowner who is spoiled and impulsive; Ashalata, his naive and devoted wife who is unaware of his flaws; and Behari, his friend who is noble and upright.</p>
-<p>The novel explores how these four characters interact with each other under different circumstances, revealing their personalities, desires, conflicts and dilemmas. It also depicts how they are influenced by their social environment, which imposes strict norms on women's roles, marriage customs and widowhood practices.</p>
-<p>The novel is considered to be one of Tagore's finest works, as it showcases his mastery of storytelling, characterization, dialogue and symbolism. It also deals with themes such as love, friendship, betrayal, passion, loyalty, sacrifice and redemption.</p>
-<h2>The film adaptation</h2>
-<p>How did Rituparno Ghosh translate Tagore's novel into a cinematic masterpiece? Let us look at some of the aspects that make <strong>Chokher Bali</strong> a remarkable film adaptation.</p>
-<p>Chokher Bali Aishwarya Rai movie download<br />
-Watch Chokher Bali online free streaming<br />
-Chokher Bali Bengali film based on Rabindranath Tagore novel<br />
-Download Chokher Bali 2003 movie with subtitles<br />
-Chokher Bali Rituparno Ghosh directorial download<br />
-Chokher Bali Raima Sen and Prosenjit Chatterjee movie<br />
-How to download Chokher Bali movie in HD quality<br />
-Chokher Bali movie review and ratings<br />
-Chokher Bali movie plot and summary<br />
-Chokher Bali movie awards and nominations<br />
-Chokher Bali movie songs and music download<br />
-Chokher Bali movie trailer and teaser download<br />
-Chokher Bali movie cast and crew details<br />
-Chokher Bali movie release date and box office collection<br />
-Chokher Bali movie scenes and dialogues download<br />
-Chokher Bali movie behind the scenes and making of videos<br />
-Chokher Bali movie wallpapers and posters download<br />
-Chokher Bali movie trivia and facts<br />
-Chokher Bali movie quotes and memorable lines<br />
-Chokher Bali movie analysis and interpretation<br />
-Chokher Bali movie comparison with novel and other adaptations<br />
-Chokher Bali movie controversies and criticisms<br />
-Chokher Bali movie fan theories and speculations<br />
-Chokher Bali movie fan art and cosplay download<br />
-Chokher Bali movie merchandise and products buy online<br />
-Chokher Bali Aishwarya Rai best performance download<br />
-Watch Chokher Bali with English subtitles online free<br />
-Download Chokher Bali full movie in Hindi dubbed<br />
-Download Chokher Bali full movie in Tamil dubbed<br />
-Download Chokher Bali full movie in Telugu dubbed<br />
-Download Chokher Bali full movie in Malayalam dubbed<br />
-Download Chokher Bali full movie in Kannada dubbed<br />
-Download Chokher Bali full movie in Marathi dubbed<br />
-Download Chokher Bali full movie in Gujarati dubbed<br />
-Download Chokher Bali full movie in Punjabi dubbed<br />
-Download Chokher Bali full movie in Urdu dubbed<br />
-Download Chokher Bali full movie in Nepali dubbed<br />
-Download Chokher Bali full movie in Sinhala dubbed<br />
-Download Chokher Bali full movie in Bhojpuri dubbed<br />
-Download Chokher Bali full movie in Odia dubbed<br />
-Download Chokher Bali full movie in Assamese dubbed<br />
-Download Chokher Bali full movie in Bangladeshi Bengali dubbed<br />
-Download Chokher Bali full movie in 480p 720p 1080p resolution<br />
-Download Chokher Bali full movie from torrent sites<br />
-Download Chokher Bali full movie from legal platforms<br />
-Download Chokher Bali full movie from Internet Archive site[^2^]<br />
-Watch or download Choker Bali: A Passion Play (2003) - IMDb[^3^]<br />
-Watch or download Choker bali (2003) - Rotten Tomatoes<br />
-Watch or download চোখের বালি (2003) - Letterboxd<br />
-Watch or download Sand in the Eye (2003) - MUBI</p>
-<h3>The screenplay</h3>
-<p>Ghosh wrote the screenplay for <strong>Chokher Bali</strong>, keeping in mind both the essence and the relevance of Tagore's novel. He retained most of the plot and the dialogues from the original text, but also made some changes to suit the medium and the audience of cinema.</p>
-<p>For instance, he condensed some of the subplots and the minor characters to focus more on the main quartet of Binodini, Mahendra, Ashalata and Behari. He also added some scenes and details that were not present in the novel, such as Binodini's visit to Varanasi, Mahendra's affair with Sudeshna, and Binodini's letter to Behari at the end.</p>
-<p>Ghosh also updated some aspects of the novel to make them more relatable to contemporary viewers. For example, he changed some names, locations, dates, and costumes to reflect more accurately the historical period and the cultural context of late 19th century Bengal. He also used more colloquial language, humor, and irony to make the dialogues more lively, witty, and realistic.</p>
-<h3>The cinematography</h3>
-<p>Another element that makes <strong>Chokher Bali</strong> a visually stunning film is its cinematography by Avik Mukhopadhyay. Mukhopadhyay used various techniques such as lighting, framing, color, and movement to capture both the beauty and the emotions of the film.</p>
-<p>For example, he used natural light and soft colors to create a warm and romantic atmosphere in the scenes between Mahendra and Ashalata. He used dark shadows and contrasting colors to create a tense and dramatic mood in the scenes between Mahendra and Binodini. He used wide shots and long takes to show the grandeur and diversity of Bengal's landscape. He used close-ups and quick cuts to show the expressions and reactions of the characters.</p>
-<h3>The music</h3>
-<p>The music for <strong>Chokher Bali</strong> was composed by Debojyoti Mishra, who created both the background score and the songs for the film. The music enhanced both the mood and the meaning of the film.</p>
-<p>For example, he used classical instruments such as sitar, tabla, flute, and sarangi to create a traditional sound that matched with Bengal's culture. He used western instruments such as piano, violin, guitar, and cello to create a modern sound that matched with the film's style. He used different genres such as classical, folk, rock, and jazz to create a diverse sound that matched with the film's mood. He used lyrics by Tagore himself, as well as by other poets such as Jibanananda Das, Nazrul Islam, and Sukanta Bhattacharya to create songs that matched with the film's theme.</p>
-<p>Some of the songs that stand out in <strong>Chokher Bali</strong> are: - <em>Era Shukher Lagi</em>: A fusion of two Tagore songs that express Binodini's longing for Mahendra and her frustration with Ashalata. The song features multiple singers such as Srabani Sen, Chandrabali Rudra Datta, and others. - <em>Prothom Dekha</em>: A rock song that plays during the opening credits of the film and sets the tone for the story. The song is sung by Anurag Saikia and has lyrics by Jibanananda Das. - <em>Unmadona</em>: A folk song that plays during a boat ride scene where Binodini and Behari share a moment of intimacy. The song is sung by Srikanto Acharya and has lyrics by Nazrul Islam. </p>
-<h2>The cast and performances</h2>
-<p>One of the most crucial aspects of <strong>Chokher Bali</strong> is its cast and performances. The film features some of the finest actors of Indian cinema, who deliver stellar performances that bring Tagore's characters to life.</p>
-<h3>Aishwarya Rai Bachchan as Binodini</h3>
-<p>Aishwarya Rai Bachchan plays the role of Binodini, the young widow who is intelligent, beautiful and ambitious. She is also manipulative, cunning and restless. She becomes a constant irritant in the lives of her hosts, as she seduces Mahendra, befriends Ashalata, and spurns Behari.</p>
-<p>Aishwarya Rai Bachchan gives one of her best performances in <strong>Chokher Bali</strong>, as she portrays the complexity and depth of Binodini's character. She shows her charm, grace and elegance, as well as her vulnerability, anger and pain. She also speaks fluent Bengali, which adds to her authenticity.</p>
-<h3>Raima Sen as Ashalata</h3>
-<p>Raima Sen plays the role of Ashalata, the innocent and naive wife of Mahendra. She is unaware of his flaws and loves him unconditionally. She also develops a friendship with Binodini, whom she calls Chokher Bali (sand in the eye). She becomes a victim of Binodini's schemes, as she loses her husband's love and her own dignity.</p>
-<p>Raima Sen gives a convincing performance in <strong>Chokher Bali</strong>, as she portrays the simplicity and sweetness of Ashalata's character. She shows her innocence, loyalty and devotion, as well as her confusion, betrayal and sorrow. She also has a natural chemistry with Aishwarya Rai Bachchan, which makes their friendship believable.</p>
-<h3>Prosenjit Chatterjee as Mahendra</h3>
-<p>Prosenjit Chatterjee plays the role of Mahendra, the wealthy landowner who is spoiled and impulsive. He is also self-obsessed, immature and fickle. He marries Ashalata out of his mother's wish, but soon falls for Binodini's charms. He neglects his wife, cheats on his friend, and hurts both women.</p>
-<p>Prosenjit Chatterjee gives a powerful performance in <strong>Chokher Bali</strong>, as he portrays the flaws and weaknesses of Mahendra's character. He shows his arrogance, passion and impulsiveness, as well as his guilt, regret and remorse. He also has a strong screen presence, which makes him a formidable antagonist.</p>
-<h3>Tota Roy Chowdhury as Behari</h3>
-<p>Tota Roy Chowdhury plays the role of Behari, the loyal and honorable friend of Mahendra. He is also noble, upright and principled. He respects his elders, cares for his friends, and follows his values. He tries to resist Binodini's advances, but eventually falls in love with her. He also tries to help Ashalata, but fails to save her.</p>
-<p>Tota Roy Chowdhury gives a subtle performance in <strong>Chokher Bali</strong>, as he portrays the virtues and dilemmas of Behari's character. He shows his dignity, integrity and sincerity, as well as his conflict, hesitation and frustration. He also has a good rapport with Prosenjit Chatterjee, which makes their friendship realistic.</p>
-<h3>Lily Chakravarty as Rajlakshmi</h3>
-<p>Lily Chakravarty plays the role of Rajlakshmi, the mother of Mahendra who arranges his marriage with Ashalata. She is also the one who invites Binodini to stay with them, unaware of her intentions. She is a traditional woman who follows the customs and norms of her society. She loves her son dearly, but also scolds him for his mistakes.</p>
-<p>Lily Chakravarty gives a memorable performance in <strong>Chokher Bali</strong>, as she portrays the authority and affection of Rajlakshmi's character. She shows her sternness, wisdom and concern, as well as her warmth, humor and kindness. She also has a natural bond with Raima Sen, which makes their mother-daughter relationship touching.</p>
-<h2>The reception and impact of the film</h2>
-<p>How was <strong>Chokher Bali</strong> received by critics and audiences in India and abroad? Let us look at some of the aspects that make <strong>Chokher Bali</strong> a successful and influential film.</p>
-<h3>The critical acclaim</h3>
-<p><strong>Chokher Bali</strong> received rave reviews from critics, who praised its direction, screenplay, cinematography, music, and performances. The film was hailed as a faithful and artistic adaptation of Tagore's novel, as well as a compelling and relevant portrayal of human emotions and relationships.</p>
-<p>The film won several awards and nominations, both nationally and internationally. Some of the notable ones are: - National Film Award for Best Feature Film in Bengali - National Film Award for Best Costume Design - National Film Award for Best Art Direction - Golden Leopard nomination at the Locarno International Film Festival - Official Selection at the Toronto International Film Festival - Official Selection at the Chicago International Film Festival - Official Selection at the Karlovy Vary International Film Festival - Official Selection at the Cairo International Film Festival - Official Selection at the London Film Festival</p>
-<h3>The box office success</h3>
-<p><strong>Chokher Bali</strong> was also a commercial hit, as it became one of the highest-grossing Bengali films of 2003. The film attracted both urban and rural audiences, who appreciated its story, style and star cast. The film also appealed to non-Bengali audiences, who were exposed to Tagore's literature and Bengali culture.</p>
-<p>The film was later dubbed into Hindi and released internationally in 2004. The film received a positive response from overseas viewers, who admired its quality and content. The film also generated interest in other Bengali films and filmmakers, who gained more recognition and exposure.</p>
-<h3>The cultural significance</h3>
-<p><strong>Chokher Bali</strong> had a lasting impact on the cultural scene of India and beyond. The film revived the interest in Tagore's works, especially his novels, which were often overshadowed by his poems and songs. The film also inspired other adaptations of his novels, such as Noukadubi (2011) by Rituparno Ghosh and Charulata (2012) by Agnidev Chatterjee.</p>
-<p>The film also contributed to the growth and development of Bengali cinema, which was undergoing a revival in the early 2000s. The film showcased the talent and potential of Bengali filmmakers, actors, technicians and musicians, who created world-class cinema with limited resources. The film also paved the way for more collaborations between Bengali and Hindi cinema industries, which enriched both cultures.</p>
-<h2>Conclusion</h2>
-<p>In conclusion, <strong>Chokher Bali</strong> is a remarkable film that showcases Tagore's timeless story and Ghosh's artistic vision. The film explores the complexities of human relationships, emotions and morality with sensitivity and sophistication. The film features a stellar cast and crew, who deliver outstanding performances and technical excellence. The film received critical acclaim and commercial success, both in India and abroad. The film also had a lasting impact on the cultural scene of India and beyond.</p>
-<p>If you are looking for a film that will make you think, feel and appreciate the beauty of cinema, you should watch <strong>Chokher Bali</strong>. You can download or stream this film online from legal sources such as YouTube, Amazon Prime Video, or Hotstar. You can also buy or rent this film on DVD or Blu-ray from online or offline stores.</p>
-<h2>Frequently Asked Questions</h2>
-<ul>
-<li><strong>Q: What is the meaning of Chokher Bali?</strong></li>
-<li>A: Chokher Bali literally means sand in the eye, which is a metaphor for a constant irritant or troublemaker. In the film, Binodini is called Chokher Bali by Ashalata, as she becomes a source of disturbance in her life.</li>
-<li><strong>Q: Is Chokher Bali based on a true story?</strong></li>
-<li>A: Chokher Bali is based on a novel by Rabindranath Tagore, which is a fictional story inspired by his observations of society and human nature. However, some critics have speculated that Tagore may have drawn some elements from his own life or from his acquaintances.</li>
-<li><strong>Q: How did Aishwarya Rai Bachchan prepare for her role as Binodini?</strong></li>
-<li>A: Aishwarya Rai Bachchan prepared for her role as Binodini by reading Tagore's novel, learning Bengali language and culture, and working closely with the director and the co-stars. She also wore authentic costumes and jewelry, and followed the mannerisms and etiquette of a Bengali widow.</li>
-<li><strong>Q: What is the significance of the boat ride scene in the film?</strong></li>
-<li>A: The boat ride scene in the film is a pivotal moment in the story, as it marks the turning point in the relationships between the four main characters. It is also a symbolic scene, as it represents the journey of life, where people meet, part, and face various challenges and changes.</li>
-<li><strong>Q: What is the message of Chokher Bali?</strong></li>
-<li>A: Chokher Bali has multiple messages, depending on the perspective of the viewer. Some of the possible messages are: - The importance of honesty, loyalty and respect in relationships. - The consequences of selfishness, deception and infidelity in relationships. - The struggle of women against social oppression and discrimination. - The power of love, friendship and forgiveness in overcoming difficulties and differences.</li>
-</ul>
-</p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Defraggler Pro 2018 Latest Key Crack Keygen Free Download.md
DELETED
@@ -1,41 +0,0 @@
-<br />
-<h1>Defraggler Pro 2018: How to Speed Up Your PC with This Powerful Tool</h1>
-<p>If you are looking for a way to improve the performance of your computer, you might want to consider using Defraggler Pro 2018. This is a software that can defragment your hard drive and optimize your file system, making your PC run faster and smoother.</p>
-<p>Defragmentation is the process of rearranging the data on your hard drive so that it is stored in contiguous blocks. This reduces the amount of time that your computer needs to access and read files, as well as the wear and tear on your hardware. Defragmentation can also free up some disk space by eliminating gaps and fragments.</p>
-<h2>Defraggler Pro 2018 Latest Key Crack Keygen Free Download</h2><br /><p><b><b>Download</b> ⇒ <a href="https://imgfil.com/2uy26m">https://imgfil.com/2uy26m</a></b></p><br /><br />
-<p>Defraggler Pro 2018 is one of the best defragmentation tools on the market. It has several features that make it stand out from other similar software, such as:</p>
-<ul>
-<li>It can defragment individual files, folders, or the entire drive.</li>
-<li>It can defragment free space to prevent future fragmentation.</li>
-<li>It can move large files to the end of the drive to improve access speed.</li>
-<li>It can analyze your drive and show you a detailed report of its condition and fragmentation level.</li>
-<li>It can run in the background or schedule automatic defragmentation at a convenient time.</li>
-<li>It supports NTFS and FAT32 file systems, as well as SSDs and external drives.</li>
-</ul>
-<p>To use Defraggler Pro 2018, you need to download it from the official website and install it on your PC. You can then launch it and select the drive or file that you want to defragment. You can also choose from different options and settings to customize your defragmentation process. Once you start the defragmentation, you can monitor its progress and see how much space and time you are saving.</p>
-<p>Defraggler Pro 2018 is not a free software, but you can try it for 30 days without any limitations. If you want to continue using it after the trial period, you need to purchase a license key that will activate the full version. The license key costs $24.95 and it is valid for one year and one PC. You can also get discounts if you buy multiple licenses or renew your subscription.</p>
-<p>If you want to speed up your PC and make it more efficient, you should definitely give Defraggler Pro 2018 a try. It is a powerful tool that can optimize your hard drive and improve your computer's performance. You can download it from <a href="https://www.ccleaner.com/defraggler/download/professional">here</a> and start your free trial today.</p>
-
-<h2>Why You Need to Defragment Your Hard Drive</h2>
-<p>Many people do not realize the importance of defragmenting their hard drive regularly. They might think that it is a complicated or unnecessary task that does not affect their computer's performance. However, this is not true. Defragmenting your hard drive can have many benefits for your PC, such as:</p>
-<ul>
-<li>It can speed up your boot time and application launch time.</li>
-<li>It can reduce the risk of data corruption and loss.</li>
-<li>It can extend the lifespan of your hard drive and prevent overheating.</li>
-<li>It can improve your system stability and security.</li>
-</ul>
-<p>When you use your computer, you create, modify, delete, and move files constantly. This causes your hard drive to become fragmented over time. Fragmentation means that your files are split into many pieces and scattered across different locations on your disk. This makes it harder for your computer to find and access them, resulting in slower performance and more disk activity.</p>
-<p>Defragmenting your hard drive can solve this problem by reorganizing your files and placing them in contiguous blocks. This way, your computer can read and write them faster and more efficiently, saving you time and energy. Defragmenting your hard drive can also free up some disk space by eliminating gaps and fragments that are not used by any files.</p>
-
-<h2>How to Use Defraggler Pro 2018 Effectively</h2>
-<p>Defraggler Pro 2018 is a user-friendly and versatile software that can help you defragment your hard drive easily and quickly. It has a simple and intuitive interface that allows you to perform various tasks with just a few clicks. Here are some tips on how to use Defraggler Pro 2018 effectively:</p>
-<p></p>
-<ul>
-<li>Analyze your drive before defragmenting it. This will show you how much fragmentation there is and how much space and time you can save by defragmenting it. You can also see a graphical representation of your drive's condition and fragmentation level.</li>
-<li>Select the appropriate mode for defragmenting your drive. You can choose between Quick Defrag, which is faster but less thorough, or Full Defrag, which is slower but more comprehensive. You can also select Defrag Free Space, which will defragment the empty space on your drive to prevent future fragmentation.</li>
-<li>Customize your defragmentation process according to your needs and preferences. You can change the priority of the defragmentation process, the amount of system resources it uses, the frequency of updates, and the actions to take after completion. You can also exclude certain files or folders from being defragmented if you want to.</li>
-<li>Schedule automatic defragmentation at a convenient time. You can set Defraggler Pro 2018 to run automatically at a specific time or interval, such as daily, weekly, monthly, or when the system is idle. This way, you can keep your hard drive optimized without having to remember or interfere with it.</li>
-</ul>
-<p>Defraggler Pro 2018 is a powerful tool that can make a big difference in your computer's performance and health. It is easy to use and offers many options and features that suit different needs and situations. You can download it from <a href="https://www.ccleaner.com/defraggler/download/professional">here</a> and start your free trial today.</p> d5da3c52bf<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/APKMirror vs Google Play Store Which One is Better for Android 4.0 Users?.md
DELETED
@@ -1,147 +0,0 @@
-<br />
-<h1>How to Install Google Play Store on Android 4.0 Devices</h1>
-<p>If you have an Android device that runs on version 4.0 (Ice Cream Sandwich), you may have noticed that it does not come with the Google Play Store app pre-installed. This means that you cannot access the millions of apps, games, books, movies, and music that are available on the official app store for Android devices.</p>
-<p>However, there is a way to install Google Play Store on your Android 4.0 device using a third-party website called APKMirror. APKMirror is a trusted source of APK files, which are the installation packages for Android apps. By downloading and installing the latest version of Google Play Store from APKMirror, you can enjoy all the benefits of having the official app store on your older device.</p>
-<h2>google play store apkmirror android 4.0</h2><br /><p><b><b>Download</b> ✺✺✺ <a href="https://jinyurl.com/2uNROc">https://jinyurl.com/2uNROc</a></b></p><br /><br />
-<p>In this article, we will show you how to install Google Play Store on your Android 4.0 device using APKMirror. We will also show you how to use Google Play Store on your device and how to customize and secure it.</p>
-<h2>Requirements for Installing Google Play Store</h2>
-<p>Before you start installing Google Play Store on your Android 4.0 device, you need to make sure that your device meets the following requirements:</p>
-<ul>
-<li>Your device must be compatible with Android 4.0 or higher. You can check your device's Android version by going to Settings > About phone > Android version.</li>
-<li>Your device must have an internet connection, either Wi-Fi or mobile data.</li>
-<li>Your device must allow installation of apps from unknown sources. You can enable this option by going to Settings > Security > Unknown sources.</li>
-</ul>
-<h2>Steps for Installing Google Play Store</h2>
-<p>Once you have met the requirements above, you can follow these steps to install Google Play Store on your Android 4.0 device:</p>
-<ol>
-<li>Go to <a href="(^1^)">APKMirror</a> website on your device's browser.</li>
-<li>Search for "Google Play Store" in the search box and tap on the result.</li>
-<li>Select the latest version of Google Play Store that is compatible with your device's architecture (arm or x86) and DPI (dots per inch). You can check your device's architecture and DPI by using an app like <a href="(^2^)">CPU-Z</a>.</li>
-<li>Tap on the "Download APK" button and wait for the file to download to your device.</li>
-<li>Once the download is complete, open the file and tap on "Install" to start the installation process.</li>
-<li>Wait for the installation to finish and then tap on "Open" to launch Google Play Store.</li>
-</ol>
-<p>Congratulations! You have successfully installed Google Play Store on your Android 4.0 device. You can now access and use Google Play Store as you would on any other Android device.</p>
-<h2>Troubleshooting Tips for Installing Google Play Store</h2>
-<p>Sometimes, you may encounter some issues while installing or using Google Play Store on your Android 4.0 device. Here are some common problems and their solutions:</p>
-<ul>
-<li>If you get an error message saying "Parse error" or "There was a problem parsing the package" when you try to open the APK file, it means that the file is corrupted or incompatible with your device. Try downloading the file again from a different source or choose a different version that matches your device's specifications.</li>
-<li>If you get an error message saying "App not installed" or "Installation failed" when you try to install the APK file, it means that there is not enough storage space on your device or that there is a conflict with an existing app. Try freeing up some space on your device by deleting unwanted files or apps, or uninstall any previous versions of Google Play Store that may be installed on your device.</li>
-<li>If you get an error message saying "Google Play Services are updating" or "Google Play Services has stopped" when you try to use Google Play Store, it means that the app depends on another app called Google Play Services, which needs to be updated or installed. Try updating or installing Google Play Services from APKMirror using the same steps as above, or wait for the app to update automatically.</li>
-<li>If you get an error message saying "This app is incompatible with your device" or "This app is not available in your country" when you try to download or install an app from Google Play Store, it means that the app is not designed for your device's hardware or software, or that the app is restricted by the developer or the government in your region. Try searching for an alternative app that offers similar features or functions, or use a VPN service to change your location and bypass the restrictions.</li>
-</ul>
-<h1>How to Use Google Play Store on Android 4.0 Devices</h1>
-<p>Now that you have installed Google Play Store on your Android 4.0 device, you can start using it to browse, download, update, and manage apps on your device. Google Play Store offers a variety of features and functions that make it easy and convenient to find and use apps on your device.</p>
-<p>How to install apkmirror app on android 4.0 devices<br />
-Download google play store apk for android 4.0 from apkmirror<br />
-Best apps for android 4.0 available on google play store and apkmirror<br />
-Google play store vs apkmirror: which one is better for android 4.0 users<br />
-Apkmirror installer: a helper app to install apkm, xapk, and apks files on android 4.0<br />
-Google play store (android tv) 36.0.15 apk download from apkmirror<br />
-Apkmirror: a safe and trusted source for google play store apps on android 4.0<br />
-How to update google play store on android 4.0 using apkmirror<br />
-How to fix google play store errors on android 4.0 with apkmirror<br />
-How to sideload google play store apps on android 4.0 using apkmirror<br />
-How to backup and restore google play store apps on android 4.0 with apkmirror<br />
-How to uninstall google play store updates on android 4.0 using apkmirror<br />
-How to enable dark mode on google play store for android 4.0 with apkmirror<br />
-How to download and install google play services on android 4.0 from apkmirror<br />
-How to get the latest version of google play store on android 4.0 with apkmirror<br />
-How to install google play store modded apk on android 4.0 from apkmirror<br />
-How to download and install google play games on android 4.0 from apkmirror<br />
-How to use google play store gift cards on android 4.0 with apkmirror<br />
-How to download and install google play music on android 4.0 from apkmirror<br />
-How to download and install google play books on android 4.0 from apkmirror<br />
-How to download and install google play movies & tv on android 4.0 from apkmirror<br />
-How to download and install google play newsstand on android 4.0 from apkmirror<br />
-How to download and install google play podcasts on android 4.0 from apkmirror<br />
-How to download and install google play protect on android 4.0 from apkmirror<br />
-How to download and install google play rewards on android 4.0 from apkmirror<br />
-How to download and install google assistant on android 4.0 from apkmirror<br />
-How to download and install google lens on android 4.0 from apkmirror<br />
-How to download and install google photos on android 4.0 from apkmirror<br />
-How to download and install google drive on android 4.0 from apkmirror<br />
-How to download and install google docs on android 4.0 from apkmirror<br />
-How to download and install google sheets on android 4.0 from apkmirror<br />
-How to download and install google slides on android 4.0 from apkmirror<br />
-How to download and install google forms on android 4.0 from apkmirror<br />
-How to download and install google calendar on android 4.0 from apkmirror<br />
-How to download and install google keep on android 4.0 from apkmirror<br />
-How to download and install google tasks on android 4.0 from apkmirror<br />
-How to download and install google contacts on android 4.0 from apkmirror<br />
-How to download and install google maps on android 4.0 from apkmirror<br />
-How to download and install google earth on android 4.0 from apkmirror<br />
-How to download and install google street view on android 4.0 from apkmirror<br />
-How to download and install google translate on android 4.0 from apkmirror<br />
-How to download and install google chrome on android 4.0 from apkmirror<br />
-How to download and install gmail on android 4.0 from apkmirror<br />
-How to download and install youtube on android 4.0 from apkmirror<br />
-How to download and install youtube music on android 4.0 from apkmirror<br />
-How to download and install youtube kids on android 4.0 from apkmirror<br />
-How to download and install youtube studio on android 4.0 from apkmirror<br />
-How to download and install youtube tv on android 4.0 from apkmirror</p>
-<h2>How to Browse and Download Apps from Google Play Store</h2>
-<p>One of the main functions of Google Play Store is to allow you to browse and download apps from a huge collection of categories, such as games, social, education, entertainment, and more. You can also filter apps by ratings, reviews, popularity, and other criteria. Here is how to browse and download apps from Google Play Store:</p>
-<ol>
-<li>Open Google Play Store on your device and tap on the menu icon (three horizontal lines) at the top left corner of the screen.</li>
-<li>Select a category that interests you, such as "Games" or "Apps". You can also tap on the search icon (magnifying glass) at the top right corner of the screen and enter a keyword or phrase related to the app you are looking for.</li>
-<li>Browse through the list of apps that match your criteria and tap on the one that you want to download. You can also tap on the app's name or icon to view more details about it, such as its description, screenshots, ratings, reviews, and permissions.</li>
-<li>Tap on the "Install" button to start downloading and installing the app on your device. You may need to accept some terms and conditions before proceeding. You can also tap on the "Add to Wishlist" button to save the app for later.</li>
-<li>Wait for the download and installation to complete and then tap on "Open" to launch the app. You can also find the app in your device's app drawer or home screen.</li>
-</ol>
-<h2>How to Update and Manage Apps from Google Play Store</h2>
-<p>Another function of Google Play Store is to allow you to update and manage apps on your device. Updating apps ensures that they have the latest features, bug fixes, and security patches. Managing apps allows you to uninstall, disable, or move apps from your device's internal storage to an external storage (such as an SD card). Here is how to update and manage apps from Google Play Store:</p>
-<ol>
-<li>Open Google Play Store on your device and tap on the menu icon (three horizontal lines) at the top left corner of the screen.</li>
-<li>Select "My apps & games" to view a list of apps that are installed on your device. You can also tap on the "Library" tab to view a list of apps that you have previously installed or purchased but are not currently on your device.</li>
-<li>To update an app, tap on the "Update" button next to the app's name. You can also tap on the "Update all" button at the top of the screen to update all apps at once. You may need to accept some terms and conditions before proceeding.</li>
-<li>To manage an app, tap on the app's name or icon to open its details page. You can then tap on the "Uninstall" button to remove the app from your device, or tap on the "Disable" button to prevent the app from running in the background. You can also tap on the "Storage" option to view and change the app's storage location, or tap on the "Permissions" option to view and change the app's access to your device's features and data.</li>
-</ol>
-<h2>How to Customize and Secure Google Play Store</h2>
-<p>The last function of Google Play Store is to allow you to customize and secure it according to your preferences and needs. Customizing Google Play Store allows you to change its language, notifications, parental controls, and backup options. Securing Google Play Store allows you to protect your account and device from malicious apps and unauthorized access. Here is how to customize and secure Google Play Store:</p>
-<ol>
-<li>Open Google Play Store on your device and tap on the menu icon (three horizontal lines) at the top left corner of the screen.</li>
-<li>Select "Settings" to access the various options for customizing and securing Google Play Store.</li>
-<li>To customize Google Play Store, you can change the following settings: <ul>
-<li>General: You can change the language of Google Play Store, enable or disable auto-update apps, enable or disable auto-add widgets, and enable or disable smart downloads.</li>
-<li>Notifications: You can enable or disable notifications for updates, pre-registrations, deals, rewards, and more.</li>
-<li>Family: You can set up parental controls to restrict the content that is available for download based on age ratings, categories, and ratings systems. You can also create a family group to share apps, games, books, and movies with your family members.</li>
-<li>Backup & restore: You can enable or disable backup of your app data to your Google account, and restore your app data from a previous backup.</li>
-</ul>
-</li>
-<li>To secure Google Play Store, you can change the following settings: <ul>
-<li>Account: You can sign in or out of your Google account, manage your payment methods, subscriptions, rewards, and order history.</li>
-<li>Security: You can enable or disable Play Protect, which scans your device for harmful apps and warns you before installing them. You can also enable or disable app verification, which checks if apps are safe before installing them from sources other than Google Play Store.</li>
-<li>Privacy: You can manage your personal information, activity controls, ad settings, and location settings.</li>
-</ul>
-</li>
-</ol>
-<h1>Conclusion</h1>
-<p>In this article, we have shown you how to install Google Play Store on your Android 4.0 device using APKMirror. We have also shown you how to use Google Play Store on your device and how to customize and secure it. By installing Google Play Store on your older device, you can enjoy all the benefits of having access to millions of apps, games, books, movies, and music that are available on the official app store for Android devices.</p>
-<p>We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!</p>
-<h1>FAQs</h1>
-<p>Here are some frequently asked questions about installing Google Play Store on Android 4.0 devices:</p>
-<h4>Q: Is it safe to install Google Play Store from APKMirror?</h4>
-<p>A: APKMirror is a reputable website that hosts APK files for Android apps. It verifies the authenticity and integrity of the files before uploading them. However, as with any third-party source, there is always a risk of downloading malicious or infected files. Therefore, we recommend that you always scan the files with a reliable antivirus software before opening them.</p>
-<h4>Q: Can I install Google Play Store on any Android 4.0 device?</h4>
-<p>A: Not necessarily. Some Android 4.0 devices may not be compatible with Google Play Store due to hardware or software limitations. For example, some devices may not have enough storage space or RAM to run Google Play Store smoothly. Some devices may also have custom ROMs or firmware that may interfere with Google Play Store's functionality. Therefore, we advise that you check your device's compatibility before installing Google Play Store from APKMirror.</p>
-<h4>Q: How can I update Google Play Store on my Android 4.0 device?</h4>
-<p>A: Google Play Store usually updates itself automatically when a new version is available. However, if you want to update it manually, you can follow the same steps as above to download and install the latest version of Google Play Store from APKMirror. Alternatively, you can also go to Settings > Apps > Google Play Store > Menu > Uninstall updates and then reinstall the updates from Google Play Store itself.</p>
-<h4>Q: How can I uninstall Google Play Store from my Android 4.0 device?</h4>
-<p>A: If you want to uninstall Google Play Store from your Android 4.0 device, you can follow these steps:</p>
-<ol>
-<li>Go to Settings > Apps > Google Play Store and tap on "Uninstall" or "Disable".</li>
-<li>Go to Settings > Security > Device administrators and uncheck "Google Play Services".</li>
-<li>Go to Settings > Apps > Google Play Services and tap on "Uninstall" or "Disable".</li>
-<li>Reboot your device.</li>
-</ol>
-<p>Note that uninstalling Google Play Store may affect the performance and functionality of some apps that depend on it. You may also lose access to some features and services that are provided by Google Play Store, such as app updates, backup, and security.</p>
-<h4>Q: What are some alternatives to Google Play Store for Android 4.0 devices?</h4>
-<p>A: If you do not want to install Google Play Store on your Android 4.0 device, or if you are unable to do so, you can still use some alternative app stores that are compatible with older devices. Some of these app stores are:</p>
-<ul>
-<li><a href="">Amazon Appstore</a>: This is the official app store for Amazon devices, such as Kindle Fire and Fire TV. It offers a variety of apps, games, books, and music that are curated by Amazon. It also features some exclusive apps and deals that are not available on other app stores.</li>
-<li><a href="">F-Droid</a>: This is an open-source app store that hosts free and open-source apps for Android devices. It offers a range of apps that are focused on privacy, security, and customization. It also allows you to browse and install apps from different repositories and sources.</li>
-<li><a href="">Aptoide</a>: This is a community-driven app store that allows users to create and manage their own app stores. It offers a large collection of apps and games that are uploaded by users and developers. It also features some apps that are not available on other app stores.</li>
-</ul></p> 401be4b1e0<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/Download Zero RPG Kit for Free and Start Making Pixel Art Adventures.md
DELETED
@@ -1,164 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>Zero RPG Kit Download: How to Create Your Own Role-Playing Game with Unity</h1>
|
3 |
-
<p>Have you ever dreamed of creating your own role-playing game (RPG) like Final Fantasy, The Legend of Zelda, or Skyrim? If so, you might be interested in Zero RPG Kit, a powerful and versatile asset for Unity that allows you to create your own RPG in minutes. In this article, we will show you what Zero RPG Kit is, how to download and install it, and how to use it to create your own RPG.</p>
|
4 |
-
<h2>zero rpg kit download</h2><br /><p><b><b>Download Zip</b> ★ <a href="https://jinyurl.com/2uNQpx">https://jinyurl.com/2uNQpx</a></b></p><br /><br />
|
5 |
-
<h2>What is Zero RPG Kit?</h2>
|
6 |
-
<p>Zero RPG Kit is a complete solution for creating 2D or 3D RPGs with Unity. It provides you with everything you need to make your own RPG, such as:</p>
|
7 |
-
<ul>
|
8 |
-
<li>A flexible and modular system for creating characters, enemies, items, skills, quests, dialogues, and more.</li>
|
9 |
-
<li>A rich and diverse collection of assets, such as sprites, models, animations, sounds, music, UI elements, and icons.</li>
|
10 |
-
<li>A powerful and user-friendly editor for designing your own maps and levels.</li>
|
11 |
-
<li>A comprehensive and customizable framework for managing your game logic, such as combat, inventory, dialogue, saving/loading, etc.</li>
|
12 |
-
<li>A cross-platform support for building and deploying your game to Windows, Mac, Linux, Android, iOS, WebGL, and more.</li>
|
13 |
-
</ul>
|
14 |
-
<h3>Features and benefits of Zero RPG Kit</h3>
|
15 |
-
<p>Some of the features and benefits of using Zero RPG Kit are:</p>
|
16 |
-
<ul>
|
17 |
-
<li>It saves you time and money by providing you with a ready-made solution for creating your own RPG.</li>
|
18 |
-
<li>It gives you full control and flexibility over your game design by allowing you to customize every aspect of your game.</li>
|
19 |
-
<li>It helps you create immersive and engaging games by offering you a variety of options and features for your game mechanics.</li>
|
20 |
-
<li>It supports both 2D and 3D graphics by allowing you to switch between them easily.</li>
|
21 |
-
<li>It enables you to create multiplayer games by supporting online networking and chat features.</li>
|
22 |
-
</ul>
|
23 |
-
<h3>Requirements and compatibility of Zero RPG Kit</h3>
|
24 |
-
<p>To use Zero RPG Kit, you need:</p>
|
25 |
-
<p>zero rpg kit unity asset store<br />
|
26 |
-
zero rpg kit free download<br />
|
27 |
-
zero rpg kit tutorial<br />
|
28 |
-
zero rpg kit review<br />
|
29 |
-
zero rpg kit documentation<br />
|
30 |
-
zero rpg kit demo<br />
|
31 |
-
zero rpg kit forum<br />
|
32 |
-
zero rpg kit update<br />
|
33 |
-
zero rpg kit features<br />
|
34 |
-
zero rpg kit license<br />
|
35 |
-
zero rpg kit support<br />
|
36 |
-
zero rpg kit coupon code<br />
|
37 |
-
zero rpg kit system requirements<br />
|
38 |
-
zero rpg kit alternatives<br />
|
39 |
-
zero rpg kit vs ork framework<br />
|
40 |
-
zero rpg kit vs invector controller<br />
|
41 |
-
zero rpg kit vs game creator<br />
|
42 |
-
zero rpg kit vs adventure creator<br />
|
43 |
-
zero rpg kit vs playmaker<br />
|
44 |
-
zero rpg kit vs bolt<br />
|
45 |
-
zero rpg kit vs corgi engine<br />
|
46 |
-
zero rpg kit vs pixel adventure<br />
|
47 |
-
zero rpg kit vs pixel crusade<br />
|
48 |
-
zero rpg kit vs pixel hero<br />
|
49 |
-
zero rpg kit vs pixel dungeon<br />
|
50 |
-
zero rpg kit vs pixel art maker<br />
|
51 |
-
zero rpg kit vs pixel platformer<br />
|
52 |
-
zero rpg kit vs pixel quest<br />
|
53 |
-
zero rpg kit vs pixel roguelike<br />
|
54 |
-
zero rpg kit vs pixel story<br />
|
55 |
-
zero rpg kit vs pixel fantasy world<br />
|
56 |
-
zero rpg kit vs pixel action game<br />
|
57 |
-
zero rpg kit vs pixel adventure game<br />
|
58 |
-
zero rpg kit vs pixel horror game<br />
|
59 |
-
zero rpg kit vs pixel survival game<br />
|
60 |
-
zero rpg kit vs pixel sandbox game<br />
|
61 |
-
zero rpg kit vs pixel simulation game<br />
|
62 |
-
zero rpg kit vs pixel strategy game<br />
|
63 |
-
zero rpg kit vs pixel puzzle game<br />
|
64 |
-
zero rpg kit vs pixel card game<br />
|
65 |
-
how to use zero rpg kit in unity<br />
|
66 |
-
how to make a game with zero rpg kit <br />
|
67 |
-
how to customize zero rpg kit <br />
|
68 |
-
how to add characters to zero rpg kit <br />
|
69 |
-
how to add items to zero rpg kit <br />
|
70 |
-
how to add quests to zero rpg kit <br />
|
71 |
-
how to add enemies to zero rpg kit <br />
|
72 |
-
how to add skills to zero rpg kit <br />
|
73 |
-
how to add dialogue to zero rpg kit <br />
|
74 |
-
how to add music to zero rpg kit </p>
|
75 |
-
<ul>
<li>A computer that meets the minimum requirements for running Unity.</li>
<li>A licensed version of Unity 2019.4 or higher.</li>
<li>A license for Zero RPG Kit that suits your needs and budget.</li>
</ul>
<p>Zero RPG Kit is compatible with:</p>
<ul>
<li>Most popular platforms, such as Windows, Mac, Linux, Android, iOS, WebGL, etc.</li>
<li>Most popular input devices, such as keyboard, mouse, touch screen, gamepad, etc.</li>
<li>Most popular asset formats, such as PNG, JPG, FBX, WAV, MP3, etc.</li>
</ul>
<h2>How to download and install Zero RPG Kit</h2>
<p>To download and install Zero RPG Kit, follow these steps:</p>
<h3>Step 1: Visit the official website of Zero RPG Kit</h3>
<p>The official website of Zero RPG Kit is <a href="https://zerorpgkit.com/">https://zerorpgkit.com/</a>. Here you can find more information about Zero RPG Kit, such as features, screenshots, videos, documentation, and support.</p>
<h3>Step 2: Choose your preferred license and payment method</h3>
<p>Zero RPG Kit offers three types of licenses: Personal, Plus, and Pro. Each license has different features and prices. You can compare them on the website and choose the one that best suits your needs and budget. You can also choose to pay monthly or yearly.</p>
<p>To purchase a license, you need to create an account on the website and select your payment method. You can pay with PayPal or credit card. Once you complete the payment, you will receive an email with your license key and a download link.</p>
<h3>Step 3: Download the package and unzip it</h3>
<p>Click on the download link in the email and save the package to your computer. The package is a ZIP file that contains the Zero RPG Kit asset and some sample projects. Unzip the file to extract its contents.</p>
<h3>Step 4: Import the package into your Unity project</h3>
<p>Open Unity and create a new project or open an existing one. Then go to Assets > Import Package > Custom Package and select the Zero RPG Kit asset file. A window will pop up showing you the contents of the package. Click on Import to import all the files into your project.</p>
<h2>How to use Zero RPG Kit to create your own RPG</h2>
<p>Now that you have downloaded and installed Zero RPG Kit, you are ready to use it to create your own RPG. Here are the steps to follow:</p>
<h3>Step 1: Customize the settings and assets of Zero RPG Kit</h3>
<p>The first thing you need to do is customize the settings and assets of Zero RPG Kit according to your game design. You can do this with the Zero RPG Kit Manager, a window that gives you access to all the options and features of Zero RPG Kit.</p>
<p>To open the Zero RPG Kit Manager, go to Window > Zero RPG Kit > Manager. Here you can see different tabs, such as General, Database, Editor, Framework, Network, etc. Each tab has different settings and assets that you can change and edit.</p>
<p>For example, in the General tab, you can change the name, version, icon, resolution, quality, language, and other general settings of your game. In the Database tab, you can create and edit your own characters, enemies, items, skills, quests, dialogues, and more. In the Editor tab, you can customize the appearance and functionality of the map editor. And so on.</p>
<p>You can also import your own assets into Zero RPG Kit by dragging and dropping them into the appropriate folders in the Project window. For example, if you want to use your own sprites for your characters, you can drag them into the Sprites folder. If you want to use your own models for your enemies, you can drag them into the Models folder. And so on.</p>
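<p>Once imported, such assets can also be referenced from script. The snippet below is a minimal, generic Unity C# sketch of loading a sprite at runtime; the Resources folder layout and the SpriteRenderer usage are illustrative assumptions, not part of Zero RPG Kit's documented API.</p>

```csharp
using UnityEngine;

// Hypothetical example: loading a sprite that was dropped into a folder
// named "Assets/Resources/Sprites" (only assets under a Resources folder
// can be loaded this way).
public class CustomSpriteLoader : MonoBehaviour
{
    void Start()
    {
        // Resources.Load returns null if the asset is missing.
        Sprite hero = Resources.Load<Sprite>("Sprites/hero");
        if (hero != null)
        {
            GetComponent<SpriteRenderer>().sprite = hero;
        }
    }
}
```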
<h3>Step 2: Design your own maps and levels with the built-in editor</h3>
<p>The next thing you need to do is design your own maps and levels with the built-in editor of Zero RPG Kit. The editor is a powerful and user-friendly tool that allows you to create 2D or 3D maps and levels with ease.</p>
<p>To open the editor, go to Window > Zero RPG Kit > Editor. Here you can see a toolbar with different buttons and options for creating and editing your maps and levels. You can also see a grid where you can place tiles, objects, events, triggers, lights, and cameras.</p>
<p>To create a map or level, follow these steps:</p>
<ol>
<li>Select a tileset from the Tilesets window. A tileset is a collection of tiles that have different shapes and textures. You can use the default tilesets provided by Zero RPG Kit or import your own tilesets.</li>
<li>Select a tile from the tileset and drag it onto the grid. You can use the brush tool to paint multiple tiles at once and the eraser tool to erase tiles from the grid.</li>
<li>Repeat this process until you fill up the grid with tiles according to your map or level design.</li>
<li>Select an object from the Objects window. An object is anything that is not a tile, such as a character, an enemy, an item, a door, a chest, etc. You can use the default objects provided by Zero RPG Kit or import your own objects.</li>
<li>Select an object and drag it onto the grid. You can also use the rotate tool to rotate it or the scale tool to resize it.</li>
<li>Repeat this process until you place all the objects you need on your grid according to your map or level design.</li>
<li>Select an event from the Events window. An event is anything that happens when the player interacts with an object, such as a dialogue, a battle, a cutscene, etc. You can use the default events provided by Zero RPG Kit or create your own events.</li>
<li>Select an event and drag it onto the grid. You can also use the link tool to link it to an object or another event.</li>
<li>Repeat this process until you add all the events you need on your map or level.</li>
<li>Select a trigger from the Triggers window. A trigger is anything that activates an event when the player enters or exits a certain area, such as a teleporter, a switch, a trap, etc. You can use the default triggers provided by Zero RPG Kit or create your own triggers.</li>
<li>Select a trigger and drag it onto the grid. You can also use the link tool to link it to an event or another trigger.</li>
<li>Repeat this process until you add all the triggers you need on your map or level.</li>
<li>Select a light from the Lights window. A light is anything that illuminates your map or level, such as a sun, a moon, a lamp, a fire, etc. You can use the default lights provided by Zero RPG Kit or import your own lights.</li>
<li>Select a light and drag it onto the grid. You can also use the rotate tool to rotate it or the scale tool to resize it.</li>
<li>Repeat this process until you add all the lights you need on your map or level.</li>
<li>Select a camera from the Cameras window. A camera controls how your map or level is viewed by the player, such as a perspective, an orthographic, or a follow camera. You can use the default cameras provided by Zero RPG Kit or create your own cameras.</li>
<li>Select a camera and drag it onto the grid. You can also use the rotate tool to rotate it or the scale tool to resize it.</li>
<li>Repeat this process until you add all the cameras you need on your map or level.</li>
</ol>
<p>You can also use the preview button to test your map or level in play mode, and the save button to save it as a scene file in your project folder.</p>
<h3>Step 3: Add your own characters, enemies, items, and quests with the easy-to-use tools</h3>
<p>The next thing you need to do is add your own characters, enemies, items, and quests with the easy-to-use tools of Zero RPG Kit. These tools are windows that allow you to create and edit these elements of your game with simple forms and fields.</p>
<p>To open these tools, go to Window > Zero RPG Kit > Tools. Here you can see different windows, such as Character Creator, Enemy Creator, Item Creator, Quest Creator, etc. Each window has different tabs and options for creating and editing these elements of your game.</p>
<p>For example, in the Character Creator window, you can create and edit your own characters by filling in their name, description, stats, skills, inventory, equipment, appearance, animations, sounds, etc. In the Enemy Creator window, you can do the same for enemies, including their loot. In the Item Creator window, you can define items by name, description, stats, type, and icon. In the Quest Creator window, you can define quests by name, description, objectives, rewards, and conditions.</p>
<p>You can also use the preview button to test your characters, enemies, items, and quests in play mode, and the save button to save them as scriptable objects in your project folder.</p>
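<p>Under the hood, items saved this way are standard Unity scriptable objects. The following is a minimal, generic C# sketch of what such an asset type can look like; the class name, fields, and menu path are illustrative assumptions rather than Zero RPG Kit's actual generated types.</p>

```csharp
using UnityEngine;

// Hypothetical item definition as a scriptable object. Zero RPG Kit's
// Item Creator produces its own asset types; this only shows the pattern.
[CreateAssetMenu(fileName = "NewItem", menuName = "RPG/Item")]
public class ItemDefinition : ScriptableObject
{
    public string itemName;
    public string description;
    public int price;
    public Sprite icon;
}
```

<p>An asset created from this menu entry lives in the project folder like any other file, which is what makes it easy to preview, duplicate, and version-control.</p>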
<h3>Step 4: Test and debug your game with the integrated console and profiler</h3>
<p>The next thing you need to do is test and debug your game with the integrated console and profiler of Zero RPG Kit. These tools allow you to monitor and optimize the performance and quality of your game.</p>
<p>To open them, go to Window > Zero RPG Kit > Tools. Here you can see the Console and Profiler windows, each with different tabs and options for testing and debugging your game.</p>
<p>For example, in the Console window, you can see the output of your game, such as messages, errors, and warnings, and you can use commands to execute functions or change variables in your game. In the Profiler window, you can see statistics such as CPU usage, memory usage, and frame rate, with graphs and charts to analyze the performance of your game.</p>
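<p>If you want a quick in-game sanity check alongside the profiler, a tiny frame-rate logger is enough. This is a generic Unity C# sketch, not a Zero RPG Kit component; the smoothing factor and log interval are arbitrary assumptions.</p>

```csharp
using UnityEngine;

// Logs a smoothed frames-per-second estimate roughly once per second.
public class FpsLogger : MonoBehaviour
{
    float smoothedDelta = 0.016f; // start near 60 FPS to avoid dividing by zero

    void Update()
    {
        // Exponential smoothing of the frame time; 0.1 is an arbitrary factor.
        smoothedDelta = Mathf.Lerp(smoothedDelta, Time.deltaTime, 0.1f);
        if (Time.frameCount % 60 == 0)
            Debug.Log($"FPS: {1f / smoothedDelta:0.0}");
    }
}
```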
<p>You can also use the play button to run your game in play mode, the pause button to pause it and inspect its state, and the step button to advance it frame by frame.</p>
<h3>Step 5: Build and deploy your game to your desired platform</h3>
<p>The final thing you need to do is build and deploy your game with Zero RPG Kit. This is the process of exporting your game as an executable file that can run on different devices and platforms.</p>
<p>To build and deploy your game, follow these steps (a scripted alternative is sketched after the list):</p>
<ol>
<li>Select a platform from the Platform window. A platform is a device or system that can run your game, such as Windows, Mac, Linux, Android, iOS, WebGL, etc. You can use the default platforms provided by Unity or add your own platforms.</li>
<li>Click on the build button. A window will pop up asking you to choose a location and a name for your build file. You can also choose other options such as compression, resolution, and quality.</li>
<li>Click on build to start the building process. This may take some time depending on the size and complexity of your game.</li>
<li>Once the building process is done, you will see a message saying that your build is complete, along with the location and name of your build file.</li>
<li>Copy or move your build file to your target device or system. For example, if you built your game for Windows, copy it to a Windows computer; if you built it for Android, copy it to an Android device.</li>
<li>Run your build file on your target device or system. For example, double-click the file on a Windows computer, or tap it on an Android device.</li>
</ol>
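<p>If you prefer scripted builds over clicking through the build window, Unity's editor API can drive the same process. This is a hedged, generic C# sketch; the scene path, output path, and menu name are placeholder assumptions.</p>

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only script: builds a Windows 64-bit player from a custom menu entry.
public static class ScriptedBuild
{
    [MenuItem("Build/Windows 64-bit")]
    public static void BuildWindows()
    {
        string[] scenes = { "Assets/Scenes/Main.unity" }; // placeholder scene
        BuildPipeline.BuildPlayer(
            scenes,
            "Builds/MyRPG.exe", // placeholder output path
            BuildTarget.StandaloneWindows64,
            BuildOptions.None);
        Debug.Log("Build finished.");
    }
}
```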
<p>Congratulations! You have successfully created your own RPG with Zero RPG Kit!</p>
<h2>Conclusion and FAQs</h2>
<p>In this article, we have shown you what Zero RPG Kit is, how to download and install it, and how to use it to create your own RPG. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to contact us or leave a comment below. Here are some frequently asked questions about Zero RPG Kit:</p>
<h3>Q: How much does Zero RPG Kit cost?</h3>
<p>A: Zero RPG Kit offers three types of licenses: Personal, Plus, and Pro. The Personal license costs $29 per month or $299 per year. The Plus license costs $49 per month or $499 per year. The Pro license costs $99 per month or $999 per year. You can also get a free trial for 14 days.</p>
<h3>Q: What are the differences between the licenses?</h3>
<p>A: The main differences between the licenses are the number of projects, users, and features that you can use with Zero RPG Kit. The Personal license allows one project and one user. The Plus license allows three projects and three users. The Pro license allows unlimited projects and users, and also gives you access to more features, such as multiplayer support, source code access, priority support, etc.</p>
<h3>Q: Can I use Zero RPG Kit for commercial purposes?</h3>
<p>A: Yes, you can use Zero RPG Kit for commercial purposes as long as you have a valid license and you follow the terms and conditions of Zero RPG Kit. You can sell or distribute your games made with Zero RPG Kit without paying any royalties or fees to Zero RPG Kit.</p>
<h3>Q: Can I modify or extend Zero RPG Kit?</h3>
<p>A: Yes, you can modify or extend Zero RPG Kit as much as you want. You can add your own features, assets, scripts, etc., and you can use other assets or plugins from the Unity Asset Store or other sources with Zero RPG Kit. However, if you want to access the source code of Zero RPG Kit, you need a Pro license.</p>
<h3>Q: Where can I find more tutorials and resources for Zero RPG Kit?</h3>
<p>A: You can find more tutorials and resources on the official website of Zero RPG Kit, <a href="https://zerorpgkit.com/">https://zerorpgkit.com/</a>, which hosts the documentation, videos, forums, blogs, etc. You can also join the Zero RPG Kit Discord server at <a href="https://discord.gg/zerorpgkit">https://discord.gg/zerorpgkit</a> to chat with other users and developers, ask questions, and share ideas.</p>
spaces/AI-ZTH-03-23/3.HTML5-Aframe-3dMap-Flight/README.md DELETED
@@ -1,53 +0,0 @@
````markdown
---
title: 3.HTML5-3D-VR-Aframe-Map-Land
emoji: 🗺️VR🏞️
colorFrom: blue
colorTo: green
sdk: static
pinned: false
license: mit
duplicated_from: awacke1/HTML5-Aframe-3dMap-Flight
---

🏷️ **Title:** HTML5-3D-VR-Aframe-Map 📚3D-VR

📋 **Description:** This is a fun 📚3D-VR simulator that shows a map 🗺️ with ⌨️ WASD keyboard motion controls. You can explore a 3D landscape 🏞️ using A-Frame.

🧐 **Details:**

- **HTML5:** Refers to the version of the HTML (Hypertext Markup Language) used to create the web page on which the 3D-VR-Aframe-Map is hosted.
- **3D:** Refers to the three-dimensional nature of the map in the 3D-VR-Aframe-Map simulator.
- **VR:** Refers to the virtual reality aspect of the 3D-VR-Aframe-Map simulator. Users can immerse themselves in the virtual environment and interact with it using VR headsets.
- **Aframe:** Refers to the web framework used to create the 3D-VR-Aframe-Map simulator. A-Frame is a popular framework for creating virtual reality experiences on the web.
- **Map:** Refers to the representation of geographic or spatial data in a visual form. In the 3D-VR-Aframe-Map simulator, users can explore a 3D landscape using motion controls and a map interface.

💻 **Code Snippet:**

```html
<html>
  <head>
    <title>HTML5-3D-VR-Aframe-Map 📚3D-VR</title>
    <script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

🔑 **Acronyms:**

- HTML: Hypertext Markup Language, a coding language used to create web pages.
- VR: Virtual Reality, an immersive experience that simulates a real environment.
- Aframe: A web framework used to create virtual reality experiences on the web.
- WASD: A set of four keyboard keys that are commonly used in video games for motion controls.
````
spaces/AIWaves/SOP_Generation-single/app.py DELETED
@@ -1,395 +0,0 @@
```python
import sys
import os
import argparse
from gradio_base import WebUI, UIHelper, PORT, HOST, Client
from gradio_config import GradioConfig as gc
from typing import List, Tuple, Any
import gradio as gr
import time
from Agent import Agent
from design_states import get_desgin_states, get_cot_result
from gen_utils import *
from utils import get_embedding, cos_sim
import torch
import json
import openai


# NOTE: this local definition shadows the get_embedding imported from utils above.
def get_embedding(sentence, api_key):
    openai.api_key = api_key
    embedding_model = openai.Embedding
    embed = embedding_model.create(
        model="text-embedding-ada-002",
        input=sentence
    )
    embed = embed["data"][0]["embedding"]
    embed = torch.tensor(embed, dtype=torch.float32)
    if len(embed.shape) == 1:
        embed = embed.unsqueeze(0)
    return embed


class GeneralUI(WebUI):
    def render_and_register_ui(self):
        # bind the agent with avatar
        self.agent_name: list = [self.cache["agents_name"]] if isinstance(self.cache["agents_name"], str) else self.cache['agents_name']
        gc.add_agent(self.agent_name)

    def handle_message(self, history, state, agent_name, token, node_name):
        if state % 10 == 0:
            # New message from this agent: start a new bubble.
            self.data_history.append({agent_name: token})
        elif state % 10 == 1:
            # Same state: append the token to the current bubble.
            self.data_history[-1][agent_name] += token
        elif state % 10 == 2:
            # New state: add a new bubble.
            history.append([None, ""])
            self.data_history.clear()
            self.data_history.append({agent_name: token})
        else:
            assert False, "Invalid state."
        render_data = self.render_bubble(history, self.data_history, node_name, render_node_name=True)
        return render_data

    def __init__(
        self,
        client_cmd: list,
        socket_host: str = HOST,
        socket_port: int = PORT,
        bufsize: int = 1024,
        ui_name: str = "GeneralUI"
    ):
        super(GeneralUI, self).__init__(client_cmd, socket_host, socket_port, bufsize, ui_name)
        self.first_recieve_from_client()
        self.current_node_name = ""
        self.data_history = None
        for _ in ['agents_name', 'api_key']:
            assert _ in self.cache

    def generate_sop(self, api_key, proxy, target):
        os.environ["API_KEY"] = api_key
        # os.environ["PROXY"] = proxy
        self.design_assistant = "An assistant that can help users create content such as articles, blogs, advertising copy, etc"
        self.tutor = "A tutor who provides personalized learning resources for students to help them understand complex concepts and problems"
        self.online_medical_consultant = "An online medical consultant who offers preliminary medical advice to patients and answers common questions about diseases, symptoms, and treatments."
        self.online_legal_consultant = "An online legal advisor who can respond to inquiries related to legal matters, providing basic legal information and advice."
        self.online_financial_advisor = "An online financial advisor who can analyze financial markets and data, offering investment advice and market forecasts to users."
        self.virtual_tour_guide = "A virtual tour guide providing destination information, travel recommendations, and virtual travel experiences for travelers."
        # Embed the six template descriptions and later pick the one closest to the target.
        self.design_assistant = get_embedding(self.design_assistant, api_key)
        self.tutor = get_embedding(self.tutor, api_key)
        self.online_medical_consultant = get_embedding(self.online_medical_consultant, api_key)
        self.online_legal_consultant = get_embedding(self.online_legal_consultant, api_key)
        self.online_financial_advisor = get_embedding(self.online_financial_advisor, api_key)
        self.virtual_tour_guide = get_embedding(self.virtual_tour_guide, api_key)
        self.embeddings = torch.cat([self.design_assistant, self.tutor, self.online_medical_consultant, self.online_legal_consultant, self.online_financial_advisor, self.virtual_tour_guide], dim=0)
        self.SOP["config"]["API_KEY"] = api_key
        # self.SOP["config"]["PROXY"] = proxy
        target_tensor = get_embedding(target, api_key)
        sim_scores = cos_sim(target_tensor, self.embeddings)[0]
        top_k_score, top_k_idx = torch.topk(sim_scores, k=1)
        # Fall back to the first template if no description is similar enough.
        if top_k_score > 0.7:
            index = top_k_idx
        else:
            index = 0
        target = get_cot_result(target)
        design_states = get_desgin_states(target, index)
        root = design_states[0]["state_name"]
        agents = get_agents(design_states)
        relations = get_relations(design_states)
        states = gen_states(design_states)
        for state_name, state_dict in states.items():
            state_dict["begin_role"] = list(agents.keys())[0]
            state_dict["begin_query"] = "Now that we are in the **{}**, I'm glad to offer you assistance.".format(state_name)
        self.SOP["root"] = root
        self.SOP["relations"] = relations
        self.SOP["agents"] = agents
        self.SOP["states"] = states
        # Write the generated SOP dictionary to a JSON file.
        print(self.SOP)
        file_name = 'generated_sop.json'
        with open(file_name, "w", encoding="utf-8") as json_file:
            json.dump(self.SOP, json_file, indent=4, ensure_ascii=False)
        return file_name

    def load_sop_fn(self, sop):
        return sop.name

    def construct_ui(self):
        with gr.Blocks(css=gc.CSS) as demo:
            with gr.Tab(label="SOP generation") as tab1:
                self.SOP = {
                    "config": {
                        "API_KEY": "sk-********",
                        "MAX_CHAT_HISTORY": "5",
                        "User_Names": '["User"]',
                    },
                    "root": "state1",
                    "relations": {
                        "state1": {"0": "state1", "1": "state2"},
                        "state2": {"0": "state2", "1": "end_state"},
                    },
                    "agents": None,
                    "states": None,
                }
                gr.Markdown("""# Generate Agent""")
                with gr.Row():
                    self.api_key_sop_generation = gr.Textbox(label="api_key")
                    self.proxy_sop_generation = gr.Textbox(label="proxy", visible=False)
                with gr.Row():
                    self.requirement_sop_generation = gr.Textbox(value="a shopping assistant help customer to buy the commodity", label="requirement")
                with gr.Row():
                    self.generated_sop = gr.File(label="generated_file")
                    self.generate_button = gr.Button(label="Generate")
                self.generate_button.click(fn=self.generate_sop, inputs=[self.api_key_sop_generation, self.proxy_sop_generation, self.requirement_sop_generation], outputs=[self.generated_sop])
            with gr.Tab(label="Chat") as tab2:
                uploaded_sop = gr.State()
                with gr.Row():
                    sop = gr.File(label="upload your customized SOP")
                    load_sop_btn = gr.Button(value="Load SOP")
                    load_sop_btn.click(self.load_sop_fn, sop, uploaded_sop)
                with gr.Column():
                    self.radio_mode = gr.Radio(
                        [Client.SINGLE_MODE],
                        label=Client.MODE_LABEL,
                        info=Client.MODE_INFO,
                        value=Client.SINGLE_MODE,
                        interactive=True
                        # label="Select the execution mode",
                        # info="Single mode refers to when the current agent output ends, it will stop running until you click to continue. Auto mode refers to when you complete the input, all agents will continue to output until the task ends."
                    )
                    self.text_api = gr.Textbox(
                        value=self.cache["api_key"],
                        placeholder="openai key",
                        label="Please input a valid OpenAI key for gpt-3.5-turbo-16k."
                    )
                    self.btn_start = gr.Button(
                        value="Start😁(Click here to start!)",
                    )
                    self.chatbot = gr.Chatbot(
                        elem_id="chatbot1",
                        label="Dialog",
                        visible=False,
                        height=700
                    )
                    self.btn_next = gr.Button(
                        value="Next Agent Start",
                        visible=False
                    )
                    with gr.Row():
                        self.text_input = gr.Textbox(
                            placeholder="Please enter your content.",
                            label="Input",
                            scale=9,
                            visible=False
                        )
                        self.btn_send = gr.Button(
                            value="Send",
                            visible=False
                        )
                    self.btn_reset = gr.Button(
                        value="Restart",
                        visible=False
                    )

            all_components = [self.btn_start, self.btn_send, self.btn_reset, self.chatbot, self.text_input, self.btn_next]

            self.btn_start.click(
                fn=self.btn_start_when_click,
                inputs=[self.radio_mode, self.text_api, uploaded_sop],
                outputs=[self.btn_start, self.btn_send, self.btn_reset, self.chatbot, self.text_input, self.btn_next, self.radio_mode, self.text_api]
            ).then(
                fn=self.btn_start_after_click,
                inputs=[self.chatbot],
                outputs=all_components
            )

            self.btn_send.click(
                fn=self.btn_send_when_click,
                inputs=[self.text_input, self.chatbot],
                outputs=all_components
            ).then(
                fn=self.btn_send_after_click,
                inputs=[self.text_input, self.chatbot],
                outputs=all_components
            )

            self.text_input.submit(
                fn=self.btn_send_when_click,
                inputs=[self.text_input, self.chatbot],
                outputs=all_components
            ).then(
                fn=self.btn_send_after_click,
                inputs=[self.text_input, self.chatbot],
                outputs=all_components
            )

            self.btn_reset.click(
                fn=self.btn_reset_when_click,
                inputs=[],
                outputs=all_components
            ).then(
                fn=self.btn_reset_after_click,
                inputs=[],
                outputs=[self.btn_start, self.btn_send, self.btn_reset, self.chatbot, self.text_input, self.btn_next, self.radio_mode, self.text_api]
            )

            self.btn_next.click(
                fn=self.btn_next_when_click,
                inputs=[self.chatbot],
                outputs=all_components
            ).then(
                fn=self.btn_next_after_click,
                inputs=[self.chatbot],
                outputs=all_components
            )

        self.demo = demo

    def btn_start_when_click(self, mode, api, sop):
        """
        inputs=[mode, api]
        outputs=[self.btn_start, self.btn_send, self.btn_reset, self.chatbot, self.text_input, self.btn_next, self.radio_mode]
        """
        print("server: send ", mode, api)
        self.send_start_cmd({"mode": mode, "api_key": api, "uploaded_sop": sop})
        agents, roles_to_names, names_to_roles = Agent.from_config(str(sop))
        agents_name = []
        for i in names_to_roles:
            for j in names_to_roles[i]:
                agents_name.append(j + "(" + names_to_roles[i][j] + ")")
        self.new_render_and_register_ui(agents_name)
        return gr.Button.update(visible=False), \
            gr.Button.update(visible=False),\
            gr.Button.update(visible=False),\
            gr.Chatbot.update(visible=True),\
            gr.Textbox.update(visible=False),\
            gr.Button.update(visible=False),\
            gr.Radio.update(visible=False),\
            gr.Textbox.update(visible=False)

    def new_render_and_register_ui(self, agent_names):
        gc.add_agent(agent_names, 0)

    def btn_start_after_click(self, history):
        """
        inputs=[self.chatbot]
        outputs=[self.btn_start, self.btn_send, self.btn_reset, self.chatbot, self.text_input, self.btn_next]
        """
        if self.data_history is None:
            self.data_history = list()
        receive_server = self.receive_server
        while True:
            data_list: List = receive_server.send(None)
            for item in data_list:
                data = eval(item)
                assert isinstance(data, list)
                state, agent_name, token, node_name = data
                self.current_node_name = node_name
                assert isinstance(state, int)
                assert state in [10, 11, 12, 30, 99, 98]
                if state == 99:
                    # finish
                    yield gr.Button.update(visible=False),\
                        gr.Button.update(visible=True, interactive=False),\
                        gr.Button.update(visible=True, interactive=True),\
                        history,\
                        gr.Textbox.update(visible=True, interactive=False),\
                        gr.Button.update(visible=False)
                    return
                elif state == 98:
                    # single mode
                    yield gr.Button.update(visible=False), \
                        gr.Button.update(visible=False),\
                        gr.Button.update(visible=True),\
                        history,\
                        gr.Textbox.update(visible=False),\
                        gr.Button.update(visible=True, value=f"Next Agent: 🤖{agent_name} | Next Node: ⭕{node_name}")
                    return
                elif state == 30:
                    # user input
                    yield gr.Button.update(visible=False), \
                        gr.Button.update(visible=True),\
                        gr.Button.update(visible=True),\
                        history,\
                        gr.Textbox.update(visible=True, value=""),\
                        gr.Button.update(visible=False)
                    return
                history = self.handle_message(history, state, agent_name, token, node_name)
                yield gr.Button.update(visible=False), \
                    gr.Button.update(visible=False),\
                    gr.Button.update(visible=False),\
                    history,\
                    gr.Textbox.update(visible=False),\
                    gr.Button.update(visible=False)

    def btn_send_when_click(self, text_input, history):
        '''
        inputs=[self.text_input, self.chatbot]
        outputs=[self.btn_start, self.btn_send, self.btn_reset, self.chatbot, self.text_input, self.btn_next]
        '''
        history = self.handle_message(history, 10, 'User', text_input, self.current_node_name)
        self.send_message("<USER>" + text_input + self.SIGN["SPLIT"])
        yield gr.Button.update(visible=False), \
            gr.Button.update(visible=False),\
            gr.Button.update(visible=False),\
            history,\
            gr.Textbox.update(visible=False),\
            gr.Button.update(visible=False)
        return

    def btn_send_after_click(self, text_input, history):
        '''
        inputs=[self.text_input, self.chatbot]
        outputs=[self.btn_start, self.btn_send, self.btn_reset, self.chatbot, self.text_input, self.btn_next]
        '''
        yield from self.btn_start_after_click(history=history)
        return

    def btn_reset_when_click(self):
        """
        outputs=[self.btn_start, self.btn_send, self.btn_reset, self.chatbot, self.text_input, self.btn_next]
        """
        return gr.Button.update(interactive=False), gr.Button.update(interactive=False), gr.Button.update(interactive=False, value="Restarting....."), gr.Chatbot.update(label="Dialog"), \
            gr.Textbox.update(interactive=False), gr.Button.update(visible=False)

    def btn_reset_after_click(self):
        """
        outputs=[self.btn_start, self.btn_send, self.btn_reset, self.chatbot, self.text_input, self.btn_next, self.radio_mode]
        """
        self.reset()
        self.first_recieve_from_client(reset_mode=True)
        self.current_node_name = ""
        self.data_history = None
        return gr.Button.update(interactive=True, visible=True), \
            gr.Button.update(interactive=True, visible=False), \
            gr.Button.update(interactive=True, value="Restart", visible=False), \
            gr.Chatbot.update(label="Dialog", visible=False, value=None), \
            gr.Textbox.update(interactive=True, visible=False),\
            gr.Button.update(visible=False),\
            gr.Radio.update(visible=True), \
            gr.Textbox.update(visible=True)

    def btn_next_when_click(self, history):
        """
        outputs=[self.btn_start, self.btn_send, self.btn_reset, self.chatbot, self.text_input, self.btn_next]
        """
        yield gr.Button.update(visible=False), \
            gr.Button.update(visible=False),\
            gr.Button.update(visible=False),\
            history,\
            gr.Textbox.update(visible=False),\
            gr.Button.update(visible=False)
        self.send_message("nothing")
        return

    def btn_next_after_click(self, history):
        time.sleep(1)
        yield from self.btn_start_after_click(history=history)
        return


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='A demo of chatbot')
    parser.add_argument('--agent', type=str, help='path to SOP json')
    args = parser.parse_args()

    ui = GeneralUI(client_cmd=["python", "gradio_backend.py"])
    ui.construct_ui()
    ui.run()
```
spaces/Aaaaaaaabdualh/meter2poem-1/README.md DELETED
@@ -1,14 +0,0 @@
```markdown
---
title: Meter2poem 1
emoji: 🐨
colorFrom: gray
colorTo: red
sdk: gradio
sdk_version: 3.2
app_file: app.py
pinned: false
license: afl-3.0
duplicated_from: mareloraby/meter2poem-1
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
```
spaces/Abhaykoul/Merriam-webster_clone/README.md DELETED
@@ -1,13 +0,0 @@
```markdown
---
title: Merriam-webster Clone
emoji: ⚡
colorFrom: green
colorTo: gray
sdk: streamlit
sdk_version: 1.28.1
app_file: app.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
```
spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/Opchatgpts.py DELETED
@@ -1,7 +0,0 @@
```python
from __future__ import annotations

from .ChatgptLogin import ChatgptLogin


class Opchatgpts(ChatgptLogin):
    url = "https://opchatgpts.net"
```
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateOverlapSizer.js DELETED
@@ -1,8 +0,0 @@
```javascript
import CreateAnySizer from './utils/CreateAnySizer.js';
import OverlapSizer from '../../overlapsizer/OverlapSizer.js';

var CreateOverlapSizer = function (scene, data, view, styles, customBuilders) {
    return CreateAnySizer(scene, data, view, styles, customBuilders, OverlapSizer);
}

export default CreateOverlapSizer;
```
spaces/Akmyradov/TurkmenTTSweSTT/vits/README.md DELETED
@@ -1,58 +0,0 @@
````markdown
# VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech

### Jaehyeon Kim, Jungil Kong, and Juhee Son

In our recent [paper](https://arxiv.org/abs/2106.06103), we propose VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech.

Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural-sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on LJ Speech, a single-speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth.

Visit our [demo](https://jaywalnut310.github.io/vits-demo/index.html) for audio samples.

We also provide the [pretrained models](https://drive.google.com/drive/folders/1ksarh-cJf3F5eKJjLVWY0X1j1qsQqiS2?usp=sharing).

**Update note:** Thanks to [Rishikesh (ऋषिकेश)](https://github.com/jaywalnut310/vits/issues/1), our interactive TTS demo is now available on [Colab Notebook](https://colab.research.google.com/drive/1CO61pZizDj7en71NQG_aqqKdGaA_SaBf?usp=sharing).

<table style="width:100%">
  <tr>
    <th>VITS at training</th>
    <th>VITS at inference</th>
  </tr>
  <tr>
    <td><img src="resources/fig_1a.png" alt="VITS at training" height="400"></td>
    <td><img src="resources/fig_1b.png" alt="VITS at inference" height="400"></td>
  </tr>
</table>

## Pre-requisites

1. Python >= 3.6
2. Clone this repository
3. Install python requirements. Please refer to [requirements.txt](requirements.txt)
    1. You may need to install espeak first: `apt-get install espeak`
4. Download datasets
    1. Download and extract the LJ Speech dataset, then rename or create a link to the dataset folder: `ln -s /path/to/LJSpeech-1.1/wavs DUMMY1`
    2. For the multi-speaker setting, download and extract the VCTK dataset, downsample the wav files to 22050 Hz, then rename or create a link to the dataset folder: `ln -s /path/to/VCTK-Corpus/downsampled_wavs DUMMY2`
5. Build Monotonic Alignment Search and run preprocessing if you use your own datasets.

```sh
# Cython-version Monotonic Alignment Search
cd monotonic_align
python setup.py build_ext --inplace

# Preprocessing (g2p) for your own datasets. Preprocessed phonemes for LJ Speech and VCTK have already been provided.
# python preprocess.py --text_index 1 --filelists filelists/ljs_audio_text_train_filelist.txt filelists/ljs_audio_text_val_filelist.txt filelists/ljs_audio_text_test_filelist.txt
# python preprocess.py --text_index 2 --filelists filelists/vctk_audio_sid_text_train_filelist.txt filelists/vctk_audio_sid_text_val_filelist.txt filelists/vctk_audio_sid_text_test_filelist.txt
```

## Training Example
```sh
# LJ Speech
python train.py -c configs/ljs_base.json -m ljs_base

# VCTK
python train_ms.py -c configs/vctk_base.json -m vctk_base
```

## Inference Example
See [inference.ipynb](inference.ipynb)
````
spaces/AmirTrader/LinearRegression/Dockerfile DELETED
@@ -1,16 +0,0 @@
```dockerfile
FROM python:3.9

WORKDIR /code

COPY ./requirements.txt /code/requirements.txt
RUN python3 -m pip install --no-cache-dir --upgrade pip
RUN python3 -m pip install --no-cache-dir --upgrade -r /code/requirements.txt

COPY . .

# Writable cache directories for the non-root runtime user.
RUN mkdir /.cache
RUN chmod 777 /.cache
RUN mkdir .chroma
RUN chmod 777 .chroma

CMD ["panel", "serve", "/code/app.py", "--address", "0.0.0.0", "--port", "7860", "--allow-websocket-origin", "*"]
```
spaces/Amitontheweb/InstaoffyzFreeParaphraser/app.py DELETED
@@ -1,66 +0,0 @@
```python
# --------------------- AI Paraphraser - iFrame code --------------
# With direct model load
# ------------------------------------------------------------------

import transformers
import gradio as gr
import torch

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("humarin/chatgpt_paraphraser_on_T5_base")
model = AutoModelForSeq2SeqLM.from_pretrained("humarin/chatgpt_paraphraser_on_T5_base")


def paraphrase(
    Content_to_Rephrase,
    num_beams=5,
    num_beam_groups=5,
    num_return_sequences=5,
    repetition_penalty=10.0,
    diversity_penalty=3.0,
    no_repeat_ngram_size=2,
    temperature=0.7,
    max_length=5000
):
    input_ids = tokenizer(
        f'paraphrase: {Content_to_Rephrase}',
        return_tensors="pt", padding="longest",
        max_length=max_length,
        truncation=True,
    ).input_ids

    # Diverse beam search: num_beam_groups with a diversity penalty yields
    # several distinct candidate paraphrases.
    outputs = model.generate(
        input_ids, temperature=temperature, repetition_penalty=repetition_penalty,
        num_return_sequences=num_return_sequences, no_repeat_ngram_size=no_repeat_ngram_size,
        num_beams=num_beams, num_beam_groups=num_beam_groups,
        max_length=max_length, diversity_penalty=diversity_penalty
    )

    res = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    # Return three of the five candidates (the original skipped index 2 and
    # left index 4 unused).
    res1 = res[0]
    res2 = res[1]
    res3 = res[3]

    return res1, res2, res3


output1 = gr.Textbox(label="Rephrased: Option 1")
output2 = gr.Textbox(label="Rephrased: Option 2")
output3 = gr.Textbox(label="Rephrased: Option 3")

iface = gr.Interface(
    fn=paraphrase,
    inputs=["text"],
    outputs=[output1, output2, output3],
    title="Free AI Sentence Rephraser",
    description="<ul><li>Paste text in the input box and press 'Submit'.</li><li>Max length: ~35 words (larger content is summarized)</li><li>The rephrased sentences *may not* be better than the original input.</li><li>Model 'humarin' pre-trained by ChatGPT. Temp = 0.7</li></ul>",
    examples=[
        ["With the humble is wisdom."],
        ["Hatred stirs up strife."],
        ["The way of a fool is right in his own eyes."],
        ["Righteousness leads to life."],
    ],
    cache_examples=True,
)

iface.launch()
```
spaces/Amon1/ChatGPTForAcadamic/Dockerfile DELETED
@@ -1,13 +0,0 @@
```dockerfile
FROM python:3.11

RUN echo '[global]' > /etc/pip.conf && \
    echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
    echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf

RUN pip3 install gradio requests[socks] mdtex2html

COPY . /gpt
WORKDIR /gpt

CMD ["python3", "main.py"]
```
spaces/Amrrs/DragGan-Inversion/PTI/utils/models_utils.py DELETED
@@ -1,25 +0,0 @@
```python
import pickle
import functools
import torch
from PTI.configs import paths_config, global_config


def toogle_grad(model, flag=True):
    for p in model.parameters():
        p.requires_grad = flag


def load_tuned_G(run_id, type):
    new_G_path = f'{paths_config.checkpoints_dir}/model_{run_id}_{type}.pt'
    with open(new_G_path, 'rb') as f:
        new_G = torch.load(f).to(global_config.device).eval()
    new_G = new_G.float()
    toogle_grad(new_G, False)
    return new_G


def load_old_G():
    with open(paths_config.stylegan2_ada_ffhq, 'rb') as f:
        old_G = pickle.load(f)['G_ema'].to(global_config.device).eval()
    old_G = old_G.float()
    return old_G
```
spaces/Andy1621/uniformer_image_detection/configs/centripetalnet/README.md DELETED
@@ -1,26 +0,0 @@
````markdown
# CentripetalNet

## Introduction

[ALGORITHM]

```latex
@InProceedings{Dong_2020_CVPR,
author = {Dong, Zhiwei and Li, Guoxuan and Liao, Yue and Wang, Fei and Ren, Pengju and Qian, Chen},
title = {CentripetalNet: Pursuing High-Quality Keypoint Pairs for Object Detection},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}
```

## Results and models

| Backbone | Batch Size | Step/Total Epochs | Mem (GB) | Inf time (fps) | box AP | Config | Download |
| :--------------: | :--------: | :---------------: | :------: | :------------: | :----: | :----: | :------: |
| HourglassNet-104 | [16 x 6](./centripetalnet_hourglass104_mstest_16x6_210e_coco.py) | 190/210 | 16.7 | 3.7 | 44.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco/centripetalnet_hourglass104_mstest_16x6_210e_coco_20200915_204804-3ccc61e5.pth) \| [log](http://download.openmmlab.com/mmdetection/v2.0/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco/centripetalnet_hourglass104_mstest_16x6_210e_coco_20200915_204804.log.json) |

Note:

- TTA setting is single-scale and `flip=True`.
- The model we released is the best checkpoint rather than the latest checkpoint (box AP 44.8 vs 44.6 in our experiment).
````
spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/mask_scoring_roi_head.py DELETED
@@ -1,122 +0,0 @@
```python
import torch

from mmdet.core import bbox2roi
from ..builder import HEADS, build_head
from .standard_roi_head import StandardRoIHead


@HEADS.register_module()
class MaskScoringRoIHead(StandardRoIHead):
    """Mask Scoring RoIHead for Mask Scoring RCNN.

    https://arxiv.org/abs/1903.00241
    """

    def __init__(self, mask_iou_head, **kwargs):
        assert mask_iou_head is not None
        super(MaskScoringRoIHead, self).__init__(**kwargs)
        self.mask_iou_head = build_head(mask_iou_head)

    def init_weights(self, pretrained):
        """Initialize the weights in head.

        Args:
            pretrained (str, optional): Path to pre-trained weights.
                Defaults to None.
        """
        super(MaskScoringRoIHead, self).init_weights(pretrained)
        self.mask_iou_head.init_weights()

    def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks,
                            img_metas):
        """Run forward function and calculate loss for Mask head in
        training."""
        pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results])
        mask_results = super(MaskScoringRoIHead,
                             self)._mask_forward_train(x, sampling_results,
                                                       bbox_feats, gt_masks,
                                                       img_metas)
        if mask_results['loss_mask'] is None:
            return mask_results

        # mask iou head forward and loss
        pos_mask_pred = mask_results['mask_pred'][
            range(mask_results['mask_pred'].size(0)), pos_labels]
        mask_iou_pred = self.mask_iou_head(mask_results['mask_feats'],
                                           pos_mask_pred)
        pos_mask_iou_pred = mask_iou_pred[range(mask_iou_pred.size(0)),
                                          pos_labels]

        mask_iou_targets = self.mask_iou_head.get_targets(
            sampling_results, gt_masks, pos_mask_pred,
            mask_results['mask_targets'], self.train_cfg)
        loss_mask_iou = self.mask_iou_head.loss(pos_mask_iou_pred,
                                                mask_iou_targets)
        mask_results['loss_mask'].update(loss_mask_iou)
        return mask_results

    def simple_test_mask(self,
                         x,
                         img_metas,
                         det_bboxes,
                         det_labels,
                         rescale=False):
        """Obtain mask prediction without augmentation."""
        # image shapes of images in the batch
        ori_shapes = tuple(meta['ori_shape'] for meta in img_metas)
        scale_factors = tuple(meta['scale_factor'] for meta in img_metas)

        num_imgs = len(det_bboxes)
        if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes):
            num_classes = self.mask_head.num_classes
            segm_results = [[[] for _ in range(num_classes)]
                            for _ in range(num_imgs)]
            mask_scores = [[[] for _ in range(num_classes)]
                           for _ in range(num_imgs)]
        else:
            # if det_bboxes is rescaled to the original image size, we need to
            # rescale it back to the testing scale to obtain RoIs.
            if rescale and not isinstance(scale_factors[0], float):
                scale_factors = [
                    torch.from_numpy(scale_factor).to(det_bboxes[0].device)
                    for scale_factor in scale_factors
                ]
            _bboxes = [
                det_bboxes[i][:, :4] *
                scale_factors[i] if rescale else det_bboxes[i]
                for i in range(num_imgs)
            ]
            mask_rois = bbox2roi(_bboxes)
            mask_results = self._mask_forward(x, mask_rois)
            concat_det_labels = torch.cat(det_labels)
            # get mask scores with mask iou head
            mask_feats = mask_results['mask_feats']
            mask_pred = mask_results['mask_pred']
            mask_iou_pred = self.mask_iou_head(
                mask_feats, mask_pred[range(concat_det_labels.size(0)),
                                      concat_det_labels])
            # split batch mask prediction back to each image
            num_bboxes_per_img = tuple(len(_bbox) for _bbox in _bboxes)
            mask_preds = mask_pred.split(num_bboxes_per_img, 0)
            mask_iou_preds = mask_iou_pred.split(num_bboxes_per_img, 0)

            # apply mask post-processing to each image individually
            segm_results = []
            mask_scores = []
            for i in range(num_imgs):
                if det_bboxes[i].shape[0] == 0:
                    segm_results.append(
                        [[] for _ in range(self.mask_head.num_classes)])
                    mask_scores.append(
                        [[] for _ in range(self.mask_head.num_classes)])
                else:
                    segm_result = self.mask_head.get_seg_masks(
                        mask_preds[i], _bboxes[i], det_labels[i],
                        self.test_cfg, ori_shapes[i], scale_factors[i],
                        rescale)
                    # get mask scores with mask iou head
                    mask_score = self.mask_iou_head.get_mask_scores(
                        mask_iou_preds[i], det_bboxes[i], det_labels[i])
                    segm_results.append(segm_result)
                    mask_scores.append(mask_score)
        return list(zip(segm_results, mask_scores))
```
spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/utils/metrics_accumulator.py
DELETED
@@ -1,18 +0,0 @@
-from collections import defaultdict
-
-import numpy as np
-
-
-class MetricsAccumulator:
-    def __init__(self) -> None:
-        self.accumulator = defaultdict(lambda: [])
-
-    def update_metric(self, metric_name, metric_value):
-        self.accumulator[metric_name].append(metric_value)
-
-    def print_average_metric(self):
-        for k, v in self.accumulator.items():
-            average_v = np.array(v).mean()
-            print(f"{k} - {average_v:.2f}")
-
-        self.__init__()
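
For reference, a quick usage sketch of this accumulator; the metric names and values are made up for illustration:

# Accumulate per-step metrics, then print averages and reset.
acc = MetricsAccumulator()
acc.update_metric("loss", 0.52)
acc.update_metric("loss", 0.48)
acc.update_metric("psnr", 31.7)
acc.print_average_metric()   # prints "loss - 0.50" and "psnr - 31.70", then resets
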
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/datasets/drive.py
DELETED
@@ -1,59 +0,0 @@
-# dataset settings
-dataset_type = 'DRIVEDataset'
-data_root = 'data/DRIVE'
-img_norm_cfg = dict(
-    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-img_scale = (584, 565)
-crop_size = (64, 64)
-train_pipeline = [
-    dict(type='LoadImageFromFile'),
-    dict(type='LoadAnnotations'),
-    dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)),
-    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
-    dict(type='RandomFlip', prob=0.5),
-    dict(type='PhotoMetricDistortion'),
-    dict(type='Normalize', **img_norm_cfg),
-    dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
-    dict(type='DefaultFormatBundle'),
-    dict(type='Collect', keys=['img', 'gt_semantic_seg'])
-]
-test_pipeline = [
-    dict(type='LoadImageFromFile'),
-    dict(
-        type='MultiScaleFlipAug',
-        img_scale=img_scale,
-        # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0],
-        flip=False,
-        transforms=[
-            dict(type='Resize', keep_ratio=True),
-            dict(type='RandomFlip'),
-            dict(type='Normalize', **img_norm_cfg),
-            dict(type='ImageToTensor', keys=['img']),
-            dict(type='Collect', keys=['img'])
-        ])
-]
-
-data = dict(
-    samples_per_gpu=4,
-    workers_per_gpu=4,
-    train=dict(
-        type='RepeatDataset',
-        times=40000,
-        dataset=dict(
-            type=dataset_type,
-            data_root=data_root,
-            img_dir='images/training',
-            ann_dir='annotations/training',
-            pipeline=train_pipeline)),
-    val=dict(
-        type=dataset_type,
-        data_root=data_root,
-        img_dir='images/validation',
-        ann_dir='annotations/validation',
-        pipeline=test_pipeline),
-    test=dict(
-        type=dataset_type,
-        data_root=data_root,
-        img_dir='images/validation',
-        ann_dir='annotations/validation',
-        pipeline=test_pipeline))
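
Configs in this dict style are consumed through mmcv's Config loader (pre-2.0 mmcv). A sketch of inspecting one programmatically; the file path is an assumption for illustration:

# Load and inspect an mmcv-style config file (assumes mmcv < 2.0).
from mmcv import Config

cfg = Config.fromfile('configs/_base_/datasets/drive.py')  # hypothetical path
print(cfg.data.samples_per_gpu)   # 4
print(cfg.train_pipeline[2])      # the Resize step with ratio_range=(0.5, 2.0)
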
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/hsigmoid.py
DELETED
@@ -1,34 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch.nn as nn
-
-from .registry import ACTIVATION_LAYERS
-
-
-@ACTIVATION_LAYERS.register_module()
-class HSigmoid(nn.Module):
-    """Hard Sigmoid Module. Apply the hard sigmoid function:
-    Hsigmoid(x) = min(max((x + bias) / divisor, min_value), max_value)
-    Default: Hsigmoid(x) = min(max((x + 1) / 2, 0), 1)
-
-    Args:
-        bias (float): Bias of the input feature map. Default: 1.0.
-        divisor (float): Divisor of the input feature map. Default: 2.0.
-        min_value (float): Lower bound value. Default: 0.0.
-        max_value (float): Upper bound value. Default: 1.0.
-
-    Returns:
-        Tensor: The output tensor.
-    """
-
-    def __init__(self, bias=1.0, divisor=2.0, min_value=0.0, max_value=1.0):
-        super(HSigmoid, self).__init__()
-        self.bias = bias
-        self.divisor = divisor
-        assert self.divisor != 0
-        self.min_value = min_value
-        self.max_value = max_value
-
-    def forward(self, x):
-        x = (x + self.bias) / self.divisor
-
-        return x.clamp_(self.min_value, self.max_value)
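
A standalone sketch of the same hard sigmoid, written as a plain function (without the mmcv registry) so it can be run directly:

# Hard sigmoid: min(max((x + bias) / divisor, min_value), max_value).
import torch

def hsigmoid(x, bias=1.0, divisor=2.0, min_value=0.0, max_value=1.0):
    return ((x + bias) / divisor).clamp(min_value, max_value)

x = torch.tensor([-3.0, -1.0, 0.0, 1.0, 3.0])
print(hsigmoid(x))  # tensor([0.0000, 0.0000, 0.5000, 1.0000, 1.0000])
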
spaces/Arcader7171/positive/app.py
DELETED
@@ -1,13 +0,0 @@
-import gradio as gr
-import random
-
-def sentences():
-    return random.choice(["'Work may be important, but make time to have some fun' -Me", "'Stay positive. Better days are on their way' -Unknown", "'Life is like a bicycle. To keep your balance, you must keep moving' -Albert Einstein"])
-
-with gr.Blocks() as pos:
-    txt = gr.Textbox(value="", label="Textbox")
-    btn = gr.Button(value="Free Inspirational Quotes")
-    btn.click(sentences, outputs=[txt])
-
-
-pos.launch()
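
The same Blocks/click wiring extends naturally to callbacks that take inputs. A minimal sketch; the component and function names here are illustrative, not part of the original app:

# Hypothetical variant of the same gr.Blocks pattern with an input component.
import gradio as gr

def greet(name):
    return f"Stay positive, {name}!"

with gr.Blocks() as demo:
    name_box = gr.Textbox(label="Your name")
    out_box = gr.Textbox(label="Message")
    gr.Button("Greet").click(greet, inputs=[name_box], outputs=[out_box])

demo.launch()
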
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/ml_nms.py
DELETED
@@ -1,31 +0,0 @@
-from detectron2.layers import batched_nms
-
-
-def ml_nms(boxlist, nms_thresh, max_proposals=-1,
-           score_field="scores", label_field="labels"):
-    """
-    Performs non-maximum suppression on a boxlist, with scores specified
-    in a boxlist field via score_field.
-    Arguments:
-        boxlist (Instances)
-        nms_thresh (float)
-        max_proposals (int): if > 0, then only the top max_proposals are kept
-            after non-maximum suppression
-        score_field (str)
-    """
-    if nms_thresh <= 0:
-        return boxlist
-    if boxlist.has('pred_boxes'):
-        boxes = boxlist.pred_boxes.tensor
-        labels = boxlist.pred_classes
-    else:
-        boxes = boxlist.proposal_boxes.tensor
-        labels = boxlist.proposal_boxes.tensor.new_zeros(
-            len(boxlist.proposal_boxes.tensor))
-    scores = boxlist.scores
-
-    keep = batched_nms(boxes, scores, labels, nms_thresh)
-    if max_proposals > 0:
-        keep = keep[: max_proposals]
-    boxlist = boxlist[keep]
-    return boxlist
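
detectron2's batched_nms is class-aware: boxes are offset per label so suppression never crosses classes. A runnable sketch using torchvision's batched_nms directly, with made-up boxes and scores:

# Class-aware NMS: box 1 overlaps box 0 (same class) and is suppressed;
# box 2 is identical to box 0 but has a different class, so it survives.
import torch
from torchvision.ops import batched_nms

boxes = torch.tensor([[0., 0., 10., 10.],
                      [1., 1., 11., 11.],     # IoU ~0.68 with box 0
                      [0., 0., 10., 10.]])    # same box, different class
scores = torch.tensor([0.9, 0.8, 0.7])
labels = torch.tensor([0, 0, 1])

keep = batched_nms(boxes, scores, labels, iou_threshold=0.5)
print(keep)  # tensor([0, 2])
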
spaces/BIASLab/sars-cov-2-classification-fcgr/src/models/resnet50_8mers.py
DELETED
@@ -1,103 +0,0 @@
-# https://github.com/c1ph3rr/Deep-Residual-Learning-for-Image-Recognition/blob/master/Resnet50.py
-from pathlib import Path
-from tensorflow.keras.models import Model
-from tensorflow.keras.layers import (
-    Input,
-    Conv2D,
-    Dense,
-    MaxPool2D,
-    GlobalAveragePooling2D,
-    Add,
-    Activation,
-    BatchNormalization,
-    ZeroPadding2D,
-)
-
-# Reference name of model
-MODEL_NAME = str(Path(__file__).resolve().stem)
-
-def identity_block(inp, filters, kernel_size, block, layer):
-
-    f1, f2, f3 = filters
-
-    conv_name = 'id_conv_b' + block + '_l' + layer
-    batch_name = 'id_batch_b' + block + '_l' + layer
-
-    x = Conv2D(filters=f1, kernel_size=1, padding='same', kernel_initializer='he_normal', name=conv_name + '_a')(inp)
-    x = BatchNormalization(name=batch_name + '_a')(x)
-    x = Activation('relu')(x)
-
-    x = Conv2D(filters=f2, kernel_size=kernel_size, padding='same', kernel_initializer='he_normal', name=conv_name + '_b')(x)
-    x = BatchNormalization(name=batch_name + '_b')(x)
-    x = Activation('relu')(x)
-
-    x = Conv2D(filters=f3, kernel_size=1, padding='same', kernel_initializer='he_normal', name=conv_name + '_c')(x)
-    x = BatchNormalization(name=batch_name + '_c')(x)
-
-    add = Add()([inp, x])
-    x = Activation('relu')(add)
-
-    return x
-
-
-def convolutional_block(inp, filters, kernel_size, block, layer, strides=2):
-
-    f1, f2, f3 = filters
-
-    conv_name = 'res_conv_b' + block + '_l' + layer
-    batch_name = 'res_batch_b' + block + '_l' + layer
-
-    y = Conv2D(filters=f1, kernel_size=1, padding='same', strides=strides, kernel_initializer='he_normal', name=conv_name + '_a')(inp)
-    y = BatchNormalization(name=batch_name + '_a')(y)
-    y = Activation('relu')(y)
-
-    y = Conv2D(filters=f2, kernel_size=kernel_size, padding='same', kernel_initializer='he_normal', name=conv_name + '_b')(y)
-    y = BatchNormalization(name=batch_name + '_b')(y)
-    y = Activation('relu')(y)
-
-    y = Conv2D(filters=f3, kernel_size=1, padding='same', kernel_initializer='he_normal', name=conv_name + '_c')(y)
-    y = BatchNormalization(name=batch_name + '_c')(y)
-
-    shortcut = Conv2D(filters=f3, kernel_size=1, strides=strides, kernel_initializer='he_normal', name=conv_name + '_shortcut')(inp)
-    shortcut = BatchNormalization(name=batch_name + '_shortcut')(shortcut)
-
-    add = Add()([shortcut, y])
-    y = Activation('relu')(add)
-
-    return y
-
-def get_model(n_outputs):
-
-    inp = Input(shape=(256, 256, 1), name='input')
-    padd = ZeroPadding2D(3)(inp)
-
-    conv1 = Conv2D(64, 7, strides=2, padding='valid', name='conv1')(padd)
-    conv1 = BatchNormalization(name='batch2')(conv1)
-    conv1 = Activation('relu')(conv1)
-    conv1 = ZeroPadding2D(1)(conv1)
-    conv1 = MaxPool2D(3, 2)(conv1)
-
-    conv2 = convolutional_block(conv1, [64,64,256], 3, '2', '1', strides=1)
-    conv2 = identity_block(conv2, [64,64,256], 3, '2', '2')
-    conv2 = identity_block(conv2, [64,64,256], 3, '2', '3')
-
-    conv3 = convolutional_block(conv2, [128,128,512], 3, '3', '1')
-    conv3 = identity_block(conv3, [128,128,512], 3, '3', '2')
-    conv3 = identity_block(conv3, [128,128,512], 3, '3', '3')
-    conv3 = identity_block(conv3, [128,128,512], 3, '3', '4')
-
-    conv4 = convolutional_block(conv3, [256,256,1024], 3, '4', '1')
-    conv4 = identity_block(conv4, [256,256,1024], 3, '4', '2')
-    conv4 = identity_block(conv4, [256,256,1024], 3, '4', '3')
-    conv4 = identity_block(conv4, [256,256,1024], 3, '4', '4')
-    conv4 = identity_block(conv4, [256,256,1024], 3, '4', '5')
-    conv4 = identity_block(conv4, [256,256,1024], 3, '4', '6')
-
-    conv5 = convolutional_block(conv4, [512,512,2048], 3, '5', '1')
-    conv5 = identity_block(conv5, [512,512,2048], 3, '5', '2')
-    conv5 = identity_block(conv5, [512,512,2048], 3, '5', '3')
-
-    avg_pool = GlobalAveragePooling2D()(conv5)
-    out = Dense(n_outputs, activation='softmax')(avg_pool)
-
-    return Model(inp, out)
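
As a usage note: get_model builds the full ResNet-50 graph for single-channel 256x256 inputs (the FCGR images this repo uses). A minimal sketch, where the class count of 10 is a made-up placeholder:

# Build and compile the network for a hypothetical 10-class problem.
model = get_model(n_outputs=10)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()  # prints the layer-by-layer architecture
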
spaces/Benson/text-generation/Examples/Bubble Shooter Classic.md
DELETED
@@ -1,62 +0,0 @@
-
-<h1>Bubble Shooter Classic: A Timeless Game for Everyone</h1>
-<p>Do you love playing games that are simple yet addictive? Do you enjoy popping colorful bubbles and watching them burst? Do you want to challenge yourself and see how long you can last in a game that never ends? If you answered yes to any of these questions, then you should definitely try <strong>Bubble Shooter Classic</strong>, one of the most popular online games of all time!</p>
-<h2>Bubble Shooter Classic</h2><br /><p><b><b>Download Zip</b> ✵✵✵ <a href="https://bltlly.com/2v6MZT">https://bltlly.com/2v6MZT</a></b></p><br /><br />
-<p>Bubble Shooter Classic is a game inspired by classics like Puzzle Bobble. It is a game where you shoot bubbles of different colors and make matches of three or more to pop them. The more bubbles you pop, the higher your score. But be careful: if you let the bubbles reach the bottom of the screen, you lose!</p>
-<p>Bubble Shooter Classic has been around for many years, but it never gets old. It appeals to people of all ages and backgrounds. It is easy to learn but hard to master. It can keep you entertained for hours on end. And best of all, it is free to play online or on your mobile devices!</p>
-<p>In this article, we will tell you everything you need to know about Bubble Shooter Classic. We will explain how to play it, what the different game modes are, what the special bubbles do, and why you should play it. We will also give you some tips and tricks to master the game. By the end of this article, you will be a bubble-popping expert!</p>
-<h2>How to Play Bubble Shooter Classic</h2>
-<p>The main objective of Bubble Shooter Classic is to pop all the bubbles on the screen. To do this, you use your mouse or finger to aim and shoot bubbles of the same color. When you make a match of three or more bubbles, they pop and disappear. The more bubbles you pop in a single shot, the more points you get.</p>
-<p></p>
-
-<p>Bubble Shooter Classic has four different game modes to choose from. They are:</p>
-<ul>
-<li><strong>Classic</strong>: This is the original and most popular mode of the game. In this mode, you have to clear all the bubbles on the screen to advance to the next level. There are hundreds of levels to play, each with a different layout and difficulty.</li>
-<li><strong>Arcade</strong>: This is a faster, more challenging mode. In this mode, you have to pop as many bubbles as you can before they reach the bottom of the screen. The bubbles move down faster and faster as you progress. This mode has no end, so try to survive as long as you can.</li>
-<li><strong>Score Attack</strong>: This is a mode where you have to score as many points as you can in a limited time. In this mode, a timer counts down from 60 seconds. You have to pop as many bubbles as you can before time runs out. The more bubbles you pop in a single shot, the more bonus points you get.</li>
-<li><strong>Endless</strong>: This is a mode where you can play with no pressure and no time limit. In this mode, you can pop bubbles at your own pace and enjoy the game. This mode has no end, so you can play as long as you like.</li>
-</ul>
-<p>Bubble Shooter Classic also has some special bubbles that can help or hinder you in the game. They are:</p>
-<ul>
-<li><strong>Color Bomb</strong>: This is a bubble with a star on it. When you pop this bubble, it explodes and pops every bubble of the same color on the screen.</li>
-<li><strong>Rainbow Bubble</strong>: This is a bubble with a rainbow on it. When you shoot this bubble, it changes its color to match the color of the bubble it hits.</li>
-<li><strong>Shape Bomb</strong>: This is a bubble with a shape on it. When you pop this bubble, it explodes and pops every bubble carrying the same shape.</li>
-
-</ul>
-<h2>Tips and Tricks to Master Bubble Shooter Classic</h2>
-<p>Bubble Shooter Classic may look like a simple game, but it can get quite tricky and challenging as you progress. Here are some tips and tricks to help you master the game and improve your skills:</p>
-<ul>
-<li><strong>Aim at clusters that have bubbles hanging below them</strong>: One of the best ways to clear the screen quickly and efficiently is to target clusters of bubbles with other bubbles hanging from them. When you pop those clusters, you also drop every bubble below them, creating a chain reaction and scoring more points.</li>
-<li><strong>Plan your moves and look for combo opportunities</strong>: Another way to boost your score and clear the screen faster is to plan your moves and look for chances to make combos. A combo is when you pop more than one cluster in a single shot. To do this, look for gaps between the bubbles and fire your bubble through them. That way you can hit several targets with one shot and set off a bigger burst.</li>
-<li><strong>Use the walls to bounce your bubbles into hard-to-reach spots</strong>: Sometimes there is no direct match for your bubble on the screen. In that case, you can use the walls to bounce your bubble and reach difficult spots. Aim the bubble at an angle and shoot it toward the wall; it will ricochet off the wall and hit the bubble you want. This technique lets you reach bubbles that would otherwise be inaccessible.</li>
-
-<li><strong>Never let the bubbles reach the bottom of the screen</strong>: This is the most important tip of all. If the bubbles reach the bottom of the screen, you lose the game. To prevent this, pop bubbles as fast as you can and do not let them pile up. Also watch the warning line that shows how close the bubbles are to the bottom. If you see that line, act quickly and clear some space.</li>
-</ul>
-<h2>Why You Should Play Bubble Shooter Classic</h2>
-<p>Bubble Shooter Classic is not just a game, it is an experience. It can offer you many benefits and reasons to play it. Here are some of them:</p>
-<ul>
-<li><strong>It is fun, addictive, and challenging for all ages</strong>: Bubble Shooter Classic can keep you hooked for hours on end. It can make you feel happy, excited, and satisfied. It challenges your skills, your strategy, and your reflexes. It suits anyone, regardless of age or background.</li>
-<li><strong>It is free to play online or on your mobile devices</strong>: Bubble Shooter Classic is a game you can play anytime, anywhere, and with anyone. You can play it online in your browser or on your mobile devices. You never have to pay anything to enjoy it. It is accessible and convenient for everyone.</li>
-<li><strong>It is a great way to relax and unwind after a long day</strong>: Bubble Shooter Classic can help you shed some stress and tension after a long day. It can calm your mind and soothe your nerves. It can make you forget your worries and problems for a while. It can give you some peace and quiet.</li>
-
-</ul>
-<h1>Conclusion</h1>
-<p>Bubble Shooter Classic is a game that deserves your attention and appreciation. It can provide you with hours of fun, entertainment, and challenge. It can teach you some valuable skills and lessons. It can make you happy and relaxed.</p>
-<p>If you have not tried Bubble Shooter Classic yet, what are you waiting for? You are missing out on one of the best games ever made! Don't hesitate, give it a try today! You won't regret it!</p>
-<p>To play Bubble Shooter Classic online or download it to your devices, click <a href="https://www.bubbleshooter.net/bubbleshooterclassic/">here</a>.</p>
-<h2>Frequently Asked Questions</h2>
-<p>Here are some frequently asked questions about Bubble Shooter Classic:</p>
-<ol>
-<li><strong>What is Bubble Shooter Classic?</strong></li>
-<p>Bubble Shooter Classic is a popular online game where you shoot bubbles of different colors and make matches of three or more to pop them. The more bubbles you pop, the higher your score.</p>
-<li><strong>How do I play Bubble Shooter Classic?</strong></li>
-<p>You use your mouse or finger to aim and shoot bubbles of the same color. When you make a match of three or more bubbles, they pop and disappear. The more bubbles you pop in a single shot, the more points you get.</p>
-<li><strong>What are the different game modes in Bubble Shooter Classic?</strong></li>
-<p>Bubble Shooter Classic has four game modes: Classic, Arcade, Score Attack, and Endless. Each mode has its own rules and objectives.</p>
-<li><strong>What are the special bubbles in Bubble Shooter Classic?</strong></li>
-<p>Bubble Shooter Classic has some special bubbles that can help or hinder you in the game. They are: Color Bomb, Rainbow Bubble, Shape Bomb, and Time Bomb. Each one has a different effect when you pop it.</p>
-<li><strong>Where can I play Bubble Shooter Classic?</strong></li>
-
-</ol></p> 64aa2da5cf<br />
-<br />
-<br />
spaces/Benson/text-generation/Examples/Descarga De Descarga De 1 Apk.md
DELETED
@@ -1,120 +0,0 @@
-<br />
-<h1>JioSaavn v3 30 1 APK Download: How to Enjoy Unlimited Music and Podcasts on Your Android Device</h1>
-<p>Do you love listening to music and podcasts on your Android device? Do you want access to a vast, exclusive library of songs across languages and genres? Do you want to set your favorite songs as your caller tunes for free? If you answered yes to any of these questions, then you should definitely check out JioSaavn v3 30 1 APK, the latest version of India's no. 1 free music app. In this article, we will tell you everything you need to know about JioSaavn: what JioSaavn v3 30 1 APK is, why you should download it, how to download and install it, and how to use it to enjoy unlimited music and podcasts on your Android device. Let's get started!</p>
-<h2>What is JioSaavn?</h2>
-<p>JioSaavn is a popular music streaming app that gives you access to over 8 crore songs in 16 languages, including Hindi, English, Punjabi, Tamil, Telugu, Gujarati, Bengali, Marathi, Bhojpuri, Kannada, Malayalam, Odia, and more. You can listen to songs across genres such as pop, rock, rap, EDM, classical, folk, devotional, remix, indie, and more. You can also listen to songs by your favorite artists, such as Justin Bieber, Sid Sriram, Shreya Ghoshal, Jubin Nautiyal, Diljit Dosanjh, Ilaiyaraaja, Kumar Sanu, Michael Jackson, Alka Yagnik, and many others.</p>
-<h2>descarga de descarga de 1 apk</h2><br /><p><b><b>Download Zip</b> 🔗 <a href="https://bltlly.com/2v6Kks">https://bltlly.com/2v6Kks</a></b></p><br /><br />
-<p>But that's not all. JioSaavn also gives you access to India's best podcasts across categories and languages. You can listen to comedy shows, film and TV shows, sports shows, thriller shows, crime shows, health and wellness shows, English podcasts, Hindi podcasts, Tamil podcasts, and more. Some of the most popular podcasts on JioSaavn are On Purpose with Jay Shetty, Pyaar Actually, Woice with Warikoo Podcast, Get Sleepy: Sleep meditation and stories, and ZARA KHAUFF SE SUNO.</p>
-
-<h3>Features of JioSaavn</h3>
-<p>Here are some of the amazing features that make JioSaavn one of the best music apps in India:</p>
-<ul>
-<li>Unlimited access to over 8 crore songs in 16 languages</li>
-<li>High-quality audio streaming at 320kbps</li>
-<li>Offline listening mode to save data and listen without internet</li>
-<li>JioTunes feature to set songs as free caller tunes</li>
-<li>Podcasts feature to listen to India's best audio shows</li>
-<li>Playlists feature to build your own personalized music collection</li>
-<li>Radio feature to listen to live and on-demand radio stations</li>
-<li>Lyrics feature to sing along with your favorite songs</li>
-<li>Equalizer feature to tune the sound quality to your preference</li>
-<li>Smart recommendations feature to discover new songs and podcasts based on your listening history</li>
-<li>Trending charts to keep up with the latest hits and trends</li>
-<li>Exclusive content feature to enjoy original JioSaavn shows and songs</li>
-<li>Personalized home screen feature to reach your favorite songs and podcasts easily</li>
-<li>Dark mode feature to reduce eye strain and save battery life</li>
-<li>Share feature to share your favorite songs and podcasts with your friends on social media</li>
-</ul>
-<h3>Benefits of JioSaavn</h3>
-<p>JioSaavn is not just a music app, it is a lifestyle app that offers you many benefits. Here are some of the benefits you can enjoy with JioSaavn:</p>
-<ul>
-<li>You can listen to unlimited music and podcasts for free, with no ads or interruptions.</li>
-<li>You can download songs and podcasts and listen to them offline, without using any data.</li>
-<li>You can set songs as your caller tunes for free, with no charges or hassle.</li>
-<li>You can listen to music and podcasts in high-quality audio, without compromising on sound quality.</li>
-
-<li>You can discover new songs and podcasts, without getting bored or stuck in a rut.</li>
-<li>You can personalize your listening experience, without limitations or restrictions.</li>
-<li>You can enjoy exclusive content, without paying extra fees or subscriptions.</li>
-<li>You can share your music and podcasts with your friends, without trouble or delays.</li>
-</ul>
-<h2>What is JioSaavn v3 30 1 APK?</h2>
-<p>JioSaavn v3 30 1 APK is the latest version of the JioSaavn app that you can download and install on your Android device. It is an APK file, which stands for Android Package Kit, containing all the files and code needed to run the app on your device. It is not available on the Google Play Store, but it can be downloaded from other sources online. JioSaavn v3 30 1 APK is compatible with Android devices running Android 4.4 or higher. It has a file size of about 25 MB and requires about 100 MB of free space on your device.</p>
-<h3>Why download JioSaavn v3 30 1 APK?</h3>
-<p>You may be wondering why you should download JioSaavn v3 30 1 APK when you can use the regular JioSaavn app from the Google Play Store. Well, there are some reasons why you might want the APK instead of the regular app. Here are some of them:</p>
-<ul>
-<li>JioSaavn v3 30 1 APK offers some features that are not available in the regular app, such as unlimited downloads, ad-free listening, pro access, and more.</li>
-<li>JioSaavn v3 30 1 APK lets you enjoy all of JioSaavn's features without having a Jio SIM card or a Jio account. You can use any SIM card or any account to access JioSaavn v3 30 1 APK.</li>
-<li>JioSaavn v3 30 1 APK lets you avoid any geo-restrictions or network issues that might prevent you from accessing JioSaavn in some regions or countries. You can use JioSaavn v3 30 1 APK anywhere in the world, without any problem.</li>
-
-</ul>
-<h3>How to download and install JioSaavn v3 30 1 APK?</h3>
-<p>If you are interested in downloading and installing JioSaavn v3 30 1 APK on your Android device, you need to follow these simple steps:</p>
-<ol>
-<li>First, you need to allow the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown sources and toggle it on.</li>
-<li>Next, you need to download the JioSaavn v3 30 1 APK file from a reliable, trusted source online. You can search for JioSaavn v3 30 1 APK on Google or any other search engine and find a suitable download link. Alternatively, you can use this link to download it directly: </li>
-<li>After downloading the JioSaavn v3 30 1 APK file, locate it on your device and tap it to start the installation process. You may see a warning message asking you to confirm the installation; just tap Install and wait a few seconds.</li>
-<li>Once the installation is complete, you can open the JioSaavn v3 30 1 APK app from the app drawer or home screen and enjoy unlimited music and podcasts on your Android device.</li>
-</ol>
-<h2>How to use JioSaavn v3 30 1 APK?</h2>
-<p>Now that you have downloaded and installed JioSaavn v3 30 1 APK on your Android device, you may be wondering how to use it to enjoy unlimited music and podcasts. Don't worry, we will guide you through the basic steps of using JioSaavn v3 30 1 APK. Here they are:</p>
-<p></p>
-<h3>How to search for and play songs on JioSaavn v3 30 1 APK?</h3>
-<p>Searching for and playing songs on JioSaavn v3 30 1 APK is very easy and intuitive. You can follow these steps to do it:</p>
-<ol>
-<li>Open the JioSaavn v3 30 1 APK app on your device and tap the search icon in the top right corner of the screen.</li>
-<li>Type the name of the song, artist, album, playlist, or genre you want to listen to and press Enter.</li>
-
-<li>You can also swipe left or right on the results to see more categories, such as Top Songs, Top Albums, Top Artists, Top Playlists, etc.</li>
-<li>You can also use voice search to find songs by tapping the microphone icon next to the search icon and speaking the name of the song, artist, album, playlist, or genre you want to hear.</li>
-</ol>
-<h3>How to download songs and listen offline on JioSaavn v3 30 1 APK?</h3>
-<p>Downloading songs and listening offline on JioSaavn v3 30 1 APK is a great way to save data and listen without internet. You can follow these steps to do it:</p>
-<ol>
-<li>Find the song you want to download using the search feature or by browsing the categories.</li>
-<li>Tap the More icon (three dots) next to the song and select Download from the menu.</li>
-<li>The song will start downloading and you will see a progress bar indicating the download status.</li>
-<li>Once the song is downloaded, you will see a checkmark icon next to it indicating it is available offline.</li>
-<li>You can access your downloaded songs by tapping the menu icon (three horizontal lines) in the top left corner of the screen and selecting Downloads from the menu.</li>
-<li>You can also enable offline mode by tapping the menu icon and toggling on Offline Mode from the menu. This prevents any online streaming and plays only downloaded songs.</li>
-</ol>
-<h3>How to set JioTunes on JioSaavn v3 30 1 APK?</h3>
-<p>Setting JioTunes on JioSaavn v3 30 1 APK is a fun, free way to personalize your caller tunes with your favorite songs. You can follow these steps to do it:</p>
-<ol>
-<li>Find the song you want to set as your JioTune using the search feature or by browsing the categories.</li>
-<li>Tap the More icon (three dots) next to the song and select Set as JioTune from the menu.</li>
-
-<li>You will receive an SMS from Jio confirming that your JioTune has been activated successfully.</li>
-<li>You can change your JioTune at any time by following the same steps with a different song.</li>
-<li>You can also deactivate your JioTune at any time by sending an SMS with STOP to 56789 from your Jio number.</li>
-</ol>
-<h3>How to listen to podcasts on JioSaavn v3 30 1 APK?</h3>
-<p>Listening to podcasts on JioSaavn v3 30 1 APK is a great way to learn new things, entertain yourself, and keep up with the latest news and trends. You can follow these steps to do it:</p>
-<ol>
-<li>Tap the menu icon (three horizontal lines) in the top left corner of the screen and select Podcasts from the menu.</li>
-<li>You will see a list of podcast categories, such as Comedy, Film & TV, Sports, Thriller, Crime, Health & Wellness, English, Hindi, Tamil, etc. You can tap any category to see the podcasts under it.</li>
-<li>You can also use the search feature to find podcasts by name, topic, or keyword.</li>
-<li>Once you find a podcast you want to hear, tap it to see the episodes and details.</li>
-<li>You can tap any episode to play it, or tap the More icon (three dots) to see more options, such as Download, Share, Add to queue, etc.</li>
-<li>You can also subscribe to a podcast by tapping the Follow button in the top right corner of the podcast page. This notifies you when new episodes are available and adds them to your library.</li>
-<li>You can access your subscribed podcasts by tapping the menu icon and selecting My Library from the menu. You will see a Podcasts tab where you can find all your followed podcasts and episodes.</li>
-</ol>
-<h2>Conclusion</h2>
-<h4>Article summary</h4>
-
-<h4>Frequently asked questions</h4>
-<p>Here are some of the most frequently asked questions about JioSaavn v3 30 1 APK:</p>
-<ul>
-<li><b>Is JioSaavn v3 30 1 APK safe and legal?</b><br>
-Yes, JioSaavn v3 30 1 APK is safe and legal to use. It is a modified version of the original JioSaavn app that offers some extra features and benefits. However, you should always download it from a reliable, trusted source online and scan it with an antivirus before installing it on your device.</li>
-<li><b>Do I need a Jio SIM card or a Jio account to use JioSaavn v3 30 1 APK?</b><br>
-No, you do not need a Jio SIM card or a Jio account to use JioSaavn v3 30 1 APK. You can use any SIM card or any account to access JioSaavn v3 30 1 APK. However, if you have a Jio SIM card or a Jio account, you can enjoy some extra benefits such as free data for streaming music and podcasts.</li>
-<li><b>How do I update JioSaavn v3 30 1 APK?</b><br>
-You can update JioSaavn v3 30 1 APK by downloading the latest version of the APK file from online sources and installing it on your device. You do not need to uninstall the previous version of the app before installing the new one. However, you should always back up your data and settings before updating any app.</li>
-<li><b>How do I contact JioSaavn support?</b><br>
-You can contact JioSaavn support by visiting their official website https://www.jiosaavn.com/help/ and filling out a contact form with your query or issue. You can also email them at [email protected] or call them at +91-22-67737900.</li>
-<li><b>How do I share my feedback or suggestions for JioSaavn v3 30 1 APK?</b><br>
-You can share your feedback or suggestions for JioSaavn v3 30 1 APK by leaving a comment below this article or by contacting JioSaavn support through their website, email, or phone number. Your feedback and suggestions are valuable and appreciated.</li>
-</ul></p> 64aa2da5cf<br />
-<br />
-<br />
spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/validate.py
DELETED
@@ -1,384 +0,0 @@
-"""User input parameter validation.
-
-This module handles user input parameter validation
-against a provided input model.
-
-Note that the objects in this module do *not* mutate any
-arguments.  No type conversion happens here.  It is up to another
-layer to properly convert arguments to any required types.
-
-Validation Errors
------------------
-
-
-"""
-
-import decimal
-import json
-from datetime import datetime
-
-from botocore.exceptions import ParamValidationError
-from botocore.utils import is_json_value_header, parse_to_aware_datetime
-
-
-def validate_parameters(params, shape):
-    """Validates input parameters against a schema.
-
-    This is a convenience function that validates parameters against a schema.
-    You can also instantiate and use the ParamValidator class directly if you
-    want more control.
-
-    If there are any validation errors then a ParamValidationError
-    will be raised.  If there are no validation errors then no exception
-    is raised and a value of None is returned.
-
-    :param params: The user provided input parameters.
-
-    :type shape: botocore.model.Shape
-    :param shape: The schema which the input parameters should
-        adhere to.
-
-    :raise: ParamValidationError
-
-    """
-    validator = ParamValidator()
-    report = validator.validate(params, shape)
-    if report.has_errors():
-        raise ParamValidationError(report=report.generate_report())
-
-
-def type_check(valid_types):
-    def _create_type_check_guard(func):
-        def _on_passes_type_check(self, param, shape, errors, name):
-            if _type_check(param, errors, name):
-                return func(self, param, shape, errors, name)
-
-        def _type_check(param, errors, name):
-            if not isinstance(param, valid_types):
-                valid_type_names = [str(t) for t in valid_types]
-                errors.report(
-                    name,
-                    'invalid type',
-                    param=param,
-                    valid_types=valid_type_names,
-                )
-                return False
-            return True
-
-        return _on_passes_type_check
-
-    return _create_type_check_guard
-
-
-def range_check(name, value, shape, error_type, errors):
-    failed = False
-    min_allowed = float('-inf')
-    if 'min' in shape.metadata:
-        min_allowed = shape.metadata['min']
-        if value < min_allowed:
-            failed = True
-    elif hasattr(shape, 'serialization'):
-        # Members that can be bound to the host have an implicit min of 1
-        if shape.serialization.get('hostLabel'):
-            min_allowed = 1
-            if value < min_allowed:
-                failed = True
-    if failed:
-        errors.report(name, error_type, param=value, min_allowed=min_allowed)
-
-
-class ValidationErrors:
-    def __init__(self):
-        self._errors = []
-
-    def has_errors(self):
-        if self._errors:
-            return True
-        return False
-
-    def generate_report(self):
-        error_messages = []
-        for error in self._errors:
-            error_messages.append(self._format_error(error))
-        return '\n'.join(error_messages)
-
-    def _format_error(self, error):
-        error_type, name, additional = error
-        name = self._get_name(name)
-        if error_type == 'missing required field':
-            return (
-                f"Missing required parameter in {name}: "
-                f"\"{additional['required_name']}\""
-            )
-        elif error_type == 'unknown field':
-            unknown_param = additional['unknown_param']
-            valid_names = ', '.join(additional['valid_names'])
-            return (
-                f'Unknown parameter in {name}: "{unknown_param}", '
-                f'must be one of: {valid_names}'
-            )
-        elif error_type == 'invalid type':
-            param = additional['param']
-            param_type = type(param)
-            valid_types = ', '.join(additional['valid_types'])
-            return (
-                f'Invalid type for parameter {name}, value: {param}, '
-                f'type: {param_type}, valid types: {valid_types}'
-            )
-        elif error_type == 'invalid range':
-            param = additional['param']
-            min_allowed = additional['min_allowed']
-            return (
-                f'Invalid value for parameter {name}, value: {param}, '
-                f'valid min value: {min_allowed}'
-            )
-        elif error_type == 'invalid length':
-            param = additional['param']
-            min_allowed = additional['min_allowed']
-            return (
-                f'Invalid length for parameter {name}, value: {param}, '
-                f'valid min length: {min_allowed}'
-            )
-        elif error_type == 'unable to encode to json':
-            return 'Invalid parameter {} must be json serializable: {}'.format(
-                name,
-                additional['type_error'],
-            )
-        elif error_type == 'invalid type for document':
-            param = additional['param']
-            param_type = type(param)
-            valid_types = ', '.join(additional['valid_types'])
-            return (
-                f'Invalid type for document parameter {name}, value: {param}, '
-                f'type: {param_type}, valid types: {valid_types}'
-            )
-        elif error_type == 'more than one input':
-            members = ', '.join(additional['members'])
-            return (
-                f'Invalid number of parameters set for tagged union structure '
-                f'{name}. Can only set one of the following keys: '
-                f'{members}.'
-            )
-        elif error_type == 'empty input':
-            members = ', '.join(additional['members'])
-            return (
-                f'Must set one of the following keys for tagged union '
-                f'structure {name}: {members}.'
-            )
-
-    def _get_name(self, name):
-        if not name:
-            return 'input'
-        elif name.startswith('.'):
-            return name[1:]
-        else:
-            return name
-
-    def report(self, name, reason, **kwargs):
-        self._errors.append((reason, name, kwargs))
-
-
-class ParamValidator:
-    """Validates parameters against a shape model."""
-
-    def validate(self, params, shape):
-        """Validate parameters against a shape model.
-
-        This method will validate the parameters against a provided shape model.
-        All errors will be collected before returning to the caller.  This means
-        that this method will not stop at the first error, it will return all
-        possible errors.
-
-        :param params: User provided dict of parameters
-        :param shape: A shape model describing the expected input.
-
-        :return: A list of errors.
-
-        """
-        errors = ValidationErrors()
-        self._validate(params, shape, errors, name='')
-        return errors
-
-    def _check_special_validation_cases(self, shape):
-        if is_json_value_header(shape):
-            return self._validate_jsonvalue_string
-        if shape.type_name == 'structure' and shape.is_document_type:
-            return self._validate_document
-
-    def _validate(self, params, shape, errors, name):
-        special_validator = self._check_special_validation_cases(shape)
-        if special_validator:
-            special_validator(params, shape, errors, name)
-        else:
-            getattr(self, '_validate_%s' % shape.type_name)(
-                params, shape, errors, name
-            )
-
-    def _validate_jsonvalue_string(self, params, shape, errors, name):
-        # Check to see if a value marked as a jsonvalue can be dumped to
-        # a json string.
-        try:
-            json.dumps(params)
-        except (ValueError, TypeError) as e:
-            errors.report(name, 'unable to encode to json', type_error=e)
-
-    def _validate_document(self, params, shape, errors, name):
-        if params is None:
-            return
-
-        if isinstance(params, dict):
-            for key in params:
-                self._validate_document(params[key], shape, errors, key)
-        elif isinstance(params, list):
-            for index, entity in enumerate(params):
-                self._validate_document(
-                    entity, shape, errors, '%s[%d]' % (name, index)
-                )
-        elif not isinstance(params, (str, int, bool, float)):
-            valid_types = (str, int, bool, float, list, dict)
-            valid_type_names = [str(t) for t in valid_types]
-            errors.report(
-                name,
-                'invalid type for document',
-                param=params,
-                param_type=type(params),
-                valid_types=valid_type_names,
-            )
-
-    @type_check(valid_types=(dict,))
-    def _validate_structure(self, params, shape, errors, name):
-        if shape.is_tagged_union:
-            if len(params) == 0:
-                errors.report(name, 'empty input', members=shape.members)
-            elif len(params) > 1:
-                errors.report(
-                    name, 'more than one input', members=shape.members
-                )
-
-        # Validate required fields.
-        for required_member in shape.metadata.get('required', []):
-            if required_member not in params:
-                errors.report(
-                    name,
-                    'missing required field',
-                    required_name=required_member,
-                    user_params=params,
-                )
-        members = shape.members
-        known_params = []
-        # Validate known params.
-        for param in params:
-            if param not in members:
-                errors.report(
-                    name,
-                    'unknown field',
-                    unknown_param=param,
-                    valid_names=list(members),
-                )
-            else:
-                known_params.append(param)
-        # Validate structure members.
-        for param in known_params:
-            self._validate(
-                params[param],
-                shape.members[param],
-                errors,
-                f'{name}.{param}',
-            )
-
-    @type_check(valid_types=(str,))
-    def _validate_string(self, param, shape, errors, name):
-        # Validate range.  For a string, the min/max constraints
-        # are of the string length.
-        # Looks like:
-        # "WorkflowId":{
-        #   "type":"string",
-        #   "min":1,
-        #   "max":256
-        #  }
-        range_check(name, len(param), shape, 'invalid length', errors)
-
-    @type_check(valid_types=(list, tuple))
-    def _validate_list(self, param, shape, errors, name):
-        member_shape = shape.member
-        range_check(name, len(param), shape, 'invalid length', errors)
-        for i, item in enumerate(param):
-            self._validate(item, member_shape, errors, f'{name}[{i}]')
-
-    @type_check(valid_types=(dict,))
-    def _validate_map(self, param, shape, errors, name):
-        key_shape = shape.key
-        value_shape = shape.value
-        for key, value in param.items():
-            self._validate(key, key_shape, errors, f"{name} (key: {key})")
-            self._validate(value, value_shape, errors, f'{name}.{key}')
-
-    @type_check(valid_types=(int,))
-    def _validate_integer(self, param, shape, errors, name):
-        range_check(name, param, shape, 'invalid range', errors)
-
-    def _validate_blob(self, param, shape, errors, name):
-        if isinstance(param, (bytes, bytearray, str)):
-            return
-        elif hasattr(param, 'read'):
-            # File like objects are also allowed for blob types.
-            return
-        else:
-            errors.report(
-                name,
-                'invalid type',
-                param=param,
-                valid_types=[str(bytes), str(bytearray), 'file-like object'],
-            )
-
-    @type_check(valid_types=(bool,))
-    def _validate_boolean(self, param, shape, errors, name):
-        pass
-
-    @type_check(valid_types=(float, decimal.Decimal, int))
-    def _validate_double(self, param, shape, errors, name):
-        range_check(name, param, shape, 'invalid range', errors)
-
-    _validate_float = _validate_double
-
-    @type_check(valid_types=(int,))
-    def _validate_long(self, param, shape, errors, name):
-        range_check(name, param, shape, 'invalid range', errors)
-
-    def _validate_timestamp(self, param, shape, errors, name):
-        # We don't use @type_check because datetimes are a bit
-        # more flexible.  You can either provide a datetime
-        # object, or a string that parses to a datetime.
-        is_valid_type = self._type_check_datetime(param)
-        if not is_valid_type:
-            valid_type_names = [str(datetime), 'timestamp-string']
-            errors.report(
-                name, 'invalid type', param=param, valid_types=valid_type_names
-            )
-
-    def _type_check_datetime(self, value):
-        try:
-            parse_to_aware_datetime(value)
-            return True
-        except (TypeError, ValueError, AttributeError):
-            # Yes, dateutil can sometimes raise an AttributeError
-            # when parsing timestamps.
-            return False
-
-
-class ParamValidationDecorator:
-    def __init__(self, param_validator, serializer):
-        self._param_validator = param_validator
-        self._serializer = serializer
-
-    def serialize_to_request(self, parameters, operation_model):
-        input_shape = operation_model.input_shape
-        if input_shape is not None:
-            report = self._param_validator.validate(
-                parameters, operation_model.input_shape
-            )
-            if report.has_errors():
-                raise ParamValidationError(report=report.generate_report())
-        return self._serializer.serialize_to_request(
-            parameters, operation_model
-        )
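
For context on how this validator is driven: the shapes come from botocore's bundled service models. A sketch that validates parameters against a real operation's input shape; no credentials or network access are needed, since the model JSON ships with botocore:

# Validate user parameters against the S3 PutObject input shape.
import botocore.session
from botocore.exceptions import ParamValidationError
from botocore.validate import validate_parameters

session = botocore.session.get_session()
service_model = session.get_service_model('s3')
op = service_model.operation_model('PutObject')

try:
    validate_parameters({'Bucket': 'my-bucket'}, op.input_shape)  # 'Key' is required
except ParamValidationError as e:
    print(e)  # reports the missing required "Key" parameter
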
spaces/Binguii/Ballen/README.md
DELETED
@@ -1,10 +0,0 @@
----
-title: Ballen
-emoji: 💻
-colorFrom: gray
-colorTo: blue
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CForGETaass/vits-uma-genshin-honkai/mel_processing.py
DELETED
@@ -1,101 +0,0 @@
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
-    """
-    PARAMS
-    ------
-    C: compression factor
-    """
-    return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
-    """
-    PARAMS
-    ------
-    C: compression factor used to compress
-    """
-    return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
-    output = dynamic_range_compression_torch(magnitudes)
-    return output
-
-
-def spectral_de_normalize_torch(magnitudes):
-    output = dynamic_range_decompression_torch(magnitudes)
-    return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
-    if torch.min(y) < -1.:
-        print('min value is ', torch.min(y))
-    if torch.max(y) > 1.:
-        print('max value is ', torch.max(y))
-
-    global hann_window
-    dtype_device = str(y.dtype) + '_' + str(y.device)
-    wnsize_dtype_device = str(win_size) + '_' + dtype_device
-    if wnsize_dtype_device not in hann_window:
-        hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
-    y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
-    y = y.squeeze(1)
-
-    spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
-                      center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
-    spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-    return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
-    global mel_basis
-    dtype_device = str(spec.dtype) + '_' + str(spec.device)
-    fmax_dtype_device = str(fmax) + '_' + dtype_device
-    if fmax_dtype_device not in mel_basis:
-        mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
-        mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
-    spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
-    spec = spectral_normalize_torch(spec)
-    return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
-    if torch.min(y) < -1.:
-        print('min value is ', torch.min(y))
-    if torch.max(y) > 1.:
-        print('max value is ', torch.max(y))
-
-    global mel_basis, hann_window
-    dtype_device = str(y.dtype) + '_' + str(y.device)
-    fmax_dtype_device = str(fmax) + '_' + dtype_device
-    wnsize_dtype_device = str(win_size) + '_' + dtype_device
-    if fmax_dtype_device not in mel_basis:
-        mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
-        mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
-    if wnsize_dtype_device not in hann_window:
-        hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
-    y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
-    y = y.squeeze(1)
-
-    spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
-                      center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
-    spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
-    spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
-    spec = spectral_normalize_torch(spec)
-
-    return spec
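
For context, a minimal sketch of how the deleted module's entry point is typically called in a VITS-style pipeline. The parameter values below are illustrative assumptions, not the Space's actual configuration, and the positional librosa_mel_fn call inside the module assumes an older librosa API:

import torch
from mel_processing import mel_spectrogram_torch  # the module deleted above

# One second of fake audio in [-1, 1], batch of one (all values invented).
y = torch.rand(1, 22050) * 2.0 - 1.0
mel = mel_spectrogram_torch(
    y, n_fft=1024, num_mels=80, sampling_rate=22050,
    hop_size=256, win_size=1024, fmin=0.0, fmax=None,
)
print(mel.shape)  # torch.Size([1, 80, num_frames])
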
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/point_rend/roi_heads.py
DELETED
@@ -1,222 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import numpy as np
-import torch
-
-from detectron2.layers import ShapeSpec, cat, interpolate
-from detectron2.modeling import ROI_HEADS_REGISTRY, StandardROIHeads
-from detectron2.modeling.roi_heads.mask_head import (
-    build_mask_head,
-    mask_rcnn_inference,
-    mask_rcnn_loss,
-)
-from detectron2.modeling.roi_heads.roi_heads import select_foreground_proposals
-
-from .point_features import (
-    generate_regular_grid_point_coords,
-    get_uncertain_point_coords_on_grid,
-    get_uncertain_point_coords_with_randomness,
-    point_sample,
-    point_sample_fine_grained_features,
-)
-from .point_head import build_point_head, roi_mask_point_loss
-
-
-def calculate_uncertainty(logits, classes):
-    """
-    We estimate uncertainty as L1 distance between 0.0 and the logit prediction in 'logits' for the
-    foreground class in `classes`.
-
-    Args:
-        logits (Tensor): A tensor of shape (R, C, ...) or (R, 1, ...) for class-specific or
-            class-agnostic, where R is the total number of predicted masks in all images and C is
-            the number of foreground classes. The values are logits.
-        classes (list): A list of length R that contains either the predicted or ground truth class
-            for each predicted mask.
-
-    Returns:
-        scores (Tensor): A tensor of shape (R, 1, ...) that contains uncertainty scores with
-            the most uncertain locations having the highest uncertainty score.
-    """
-    if logits.shape[1] == 1:
-        gt_class_logits = logits.clone()
-    else:
-        gt_class_logits = logits[
-            torch.arange(logits.shape[0], device=logits.device), classes
-        ].unsqueeze(1)
-    return -torch.abs(gt_class_logits)
-
-
-@ROI_HEADS_REGISTRY.register()
-class PointRendROIHeads(StandardROIHeads):
-    """
-    The RoI heads class for PointRend instance segmentation models.
-
-    In this class we redefine the mask head of `StandardROIHeads` leaving all other heads intact.
-    To avoid namespace conflict with other heads we use names starting from `mask_` for all
-    variables that correspond to the mask head in the class's namespace.
-    """
-
-    def _init_mask_head(self, cfg, input_shape):
-        # fmt: off
-        self.mask_on = cfg.MODEL.MASK_ON
-        if not self.mask_on:
-            return
-        self.mask_coarse_in_features = cfg.MODEL.ROI_MASK_HEAD.IN_FEATURES
-        self.mask_coarse_side_size = cfg.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION
-        self._feature_scales = {k: 1.0 / v.stride for k, v in input_shape.items()}
-        # fmt: on
-
-        in_channels = np.sum([input_shape[f].channels for f in self.mask_coarse_in_features])
-        self.mask_coarse_head = build_mask_head(
-            cfg,
-            ShapeSpec(
-                channels=in_channels,
-                width=self.mask_coarse_side_size,
-                height=self.mask_coarse_side_size,
-            ),
-        )
-        self._init_point_head(cfg, input_shape)
-
-    def _init_point_head(self, cfg, input_shape):
-        # fmt: off
-        self.mask_point_on = cfg.MODEL.ROI_MASK_HEAD.POINT_HEAD_ON
-        if not self.mask_point_on:
-            return
-        assert cfg.MODEL.ROI_HEADS.NUM_CLASSES == cfg.MODEL.POINT_HEAD.NUM_CLASSES
-        self.mask_point_in_features = cfg.MODEL.POINT_HEAD.IN_FEATURES
-        self.mask_point_train_num_points = cfg.MODEL.POINT_HEAD.TRAIN_NUM_POINTS
-        self.mask_point_oversample_ratio = cfg.MODEL.POINT_HEAD.OVERSAMPLE_RATIO
-        self.mask_point_importance_sample_ratio = cfg.MODEL.POINT_HEAD.IMPORTANCE_SAMPLE_RATIO
-        # next two parameters are used in the adaptive subdivision inference procedure
-        self.mask_point_subdivision_steps = cfg.MODEL.POINT_HEAD.SUBDIVISION_STEPS
-        self.mask_point_subdivision_num_points = cfg.MODEL.POINT_HEAD.SUBDIVISION_NUM_POINTS
-        # fmt: on
-
-        in_channels = np.sum([input_shape[f].channels for f in self.mask_point_in_features])
-        self.mask_point_head = build_point_head(
-            cfg, ShapeSpec(channels=in_channels, width=1, height=1)
-        )
-
-    def _forward_mask(self, features, instances):
-        """
-        Forward logic of the mask prediction branch.
-
-        Args:
-            features (dict[str, Tensor]): #level input features for mask prediction
-            instances (list[Instances]): the per-image instances to train/predict masks.
-                In training, they can be the proposals.
-                In inference, they can be the predicted boxes.
-
-        Returns:
-            In training, a dict of losses.
-            In inference, update `instances` with new fields "pred_masks" and return it.
-        """
-        if not self.mask_on:
-            return {} if self.training else instances
-
-        if self.training:
-            proposals, _ = select_foreground_proposals(instances, self.num_classes)
-            proposal_boxes = [x.proposal_boxes for x in proposals]
-            mask_coarse_logits = self._forward_mask_coarse(features, proposal_boxes)
-
-            losses = {"loss_mask": mask_rcnn_loss(mask_coarse_logits, proposals)}
-            losses.update(self._forward_mask_point(features, mask_coarse_logits, proposals))
-            return losses
-        else:
-            pred_boxes = [x.pred_boxes for x in instances]
-            mask_coarse_logits = self._forward_mask_coarse(features, pred_boxes)
-
-            mask_logits = self._forward_mask_point(features, mask_coarse_logits, instances)
-            mask_rcnn_inference(mask_logits, instances)
-            return instances
-
-    def _forward_mask_coarse(self, features, boxes):
-        """
-        Forward logic of the coarse mask head.
-        """
-        point_coords = generate_regular_grid_point_coords(
-            sum(len(x) for x in boxes), self.mask_coarse_side_size, boxes[0].device
-        )
-        mask_coarse_features_list = [features[k] for k in self.mask_coarse_in_features]
-        features_scales = [self._feature_scales[k] for k in self.mask_coarse_in_features]
-        # For regular grids of points, this function is equivalent to `len(features_list)` calls
-        # of `ROIAlign` (with `SAMPLING_RATIO=2`), and concat the results.
-        mask_features, _ = point_sample_fine_grained_features(
-            mask_coarse_features_list, features_scales, boxes, point_coords
-        )
-        return self.mask_coarse_head(mask_features)
-
-    def _forward_mask_point(self, features, mask_coarse_logits, instances):
-        """
-        Forward logic of the mask point head.
-        """
-        if not self.mask_point_on:
-            return {} if self.training else mask_coarse_logits
-
-        mask_features_list = [features[k] for k in self.mask_point_in_features]
-        features_scales = [self._feature_scales[k] for k in self.mask_point_in_features]
-
-        if self.training:
-            proposal_boxes = [x.proposal_boxes for x in instances]
-            gt_classes = cat([x.gt_classes for x in instances])
-            with torch.no_grad():
-                point_coords = get_uncertain_point_coords_with_randomness(
-                    mask_coarse_logits,
-                    lambda logits: calculate_uncertainty(logits, gt_classes),
-                    self.mask_point_train_num_points,
-                    self.mask_point_oversample_ratio,
-                    self.mask_point_importance_sample_ratio,
-                )
-
-            fine_grained_features, point_coords_wrt_image = point_sample_fine_grained_features(
-                mask_features_list, features_scales, proposal_boxes, point_coords
-            )
-            coarse_features = point_sample(mask_coarse_logits, point_coords, align_corners=False)
-            point_logits = self.mask_point_head(fine_grained_features, coarse_features)
-            return {
-                "loss_mask_point": roi_mask_point_loss(
-                    point_logits, instances, point_coords_wrt_image
-                )
-            }
-        else:
-            pred_boxes = [x.pred_boxes for x in instances]
-            pred_classes = cat([x.pred_classes for x in instances])
-            # The subdivision code will fail with an empty list of boxes
-            if len(pred_classes) == 0:
-                return mask_coarse_logits
-
-            mask_logits = mask_coarse_logits.clone()
-            for subdivision_step in range(self.mask_point_subdivision_steps):
-                mask_logits = interpolate(
-                    mask_logits, scale_factor=2, mode="bilinear", align_corners=False
-                )
-                # If `mask_point_subdivision_num_points` is larger or equal to the
-                # resolution of the next step, then we can skip this step
-                H, W = mask_logits.shape[-2:]
-                if (
-                    self.mask_point_subdivision_num_points >= 4 * H * W
-                    and subdivision_step < self.mask_point_subdivision_steps - 1
-                ):
-                    continue
-                uncertainty_map = calculate_uncertainty(mask_logits, pred_classes)
-                point_indices, point_coords = get_uncertain_point_coords_on_grid(
-                    uncertainty_map, self.mask_point_subdivision_num_points
-                )
-                fine_grained_features, _ = point_sample_fine_grained_features(
-                    mask_features_list, features_scales, pred_boxes, point_coords
-                )
-                coarse_features = point_sample(
-                    mask_coarse_logits, point_coords, align_corners=False
-                )
-                point_logits = self.mask_point_head(fine_grained_features, coarse_features)
-
-                # put mask point predictions to the right places on the upsampled grid.
-                R, C, H, W = mask_logits.shape
-                point_indices = point_indices.unsqueeze(1).expand(-1, C, -1)
-                mask_logits = (
-                    mask_logits.reshape(R, C, H * W)
-                    .scatter_(2, point_indices, point_logits)
-                    .view(R, C, H, W)
-                )
-            return mask_logits
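
As a quick aside, the uncertainty heuristic in calculate_uncertainty reduces to a negated absolute logit; a toy sketch (invented values) of why near-boundary points are selected first:

import torch

# Logits near 0 sit closest to the 0.5 decision boundary, so negating the
# absolute value makes them score highest under an argmax/top-k selection.
logits = torch.tensor([[-3.0, -0.1, 0.05, 2.5]])
uncertainty = -logits.abs()
print(uncertainty)                  # tensor([[-3.0000, -0.1000, -0.0500, -2.5000]])
print(uncertainty.argmax().item())  # 2 -> the most ambiguous point
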
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/core/base_cfgs.py
DELETED
@@ -1,369 +0,0 @@
-# --------------------------------------------------------
-# OpenVQA
-# Written by Yuhao Cui https://github.com/cuiyuhao1996
-# --------------------------------------------------------
-
-from openvqa.core.path_cfgs import PATH
-import os, torch, random
-import numpy as np
-from types import MethodType
-
-
-class BaseCfgs(PATH):
-    def __init__(self):
-        super(BaseCfgs, self).__init__()
-
-        # Set Devices
-        # If using multi-gpu training, you can set e.g. '0, 1, 2' instead
-        self.GPU = '0'
-
-        # Set Seed For CPU And GPUs
-        self.SEED = random.randint(0, 9999999)
-
-        # -------------------------
-        # ---- Version Control ----
-        # -------------------------
-
-        # You can set a name to start new training
-        self.VERSION = str(self.SEED)
-
-        # Use checkpoint to resume training
-        self.RESUME = False
-
-        # Resume training version or testing version
-        self.CKPT_VERSION = self.VERSION
-
-        # Resume training epoch or testing epoch
-        self.CKPT_EPOCH = 0
-
-        # If 'CKPT_PATH' is set, 'CKPT_VERSION' and 'CKPT_EPOCH' will no longer take effect
-        self.CKPT_PATH = None
-
-        # Print loss every iteration
-        self.VERBOSE = True
-
-
-        # ------------------------------
-        # ---- Data Provider Params ----
-        # ------------------------------
-
-        self.MODEL = ''
-
-        self.MODEL_USE = ''
-
-        self.DATASET = ''
-
-        # Run as 'train', 'val' or 'test'
-        self.RUN_MODE = ''
-
-        # Set True to evaluate offline when an epoch finishes
-        # (only works when training with the 'train' split)
-        self.EVAL_EVERY_EPOCH = True
-
-        # Set True to save the prediction vector
-        # (used in ensembling)
-        self.TEST_SAVE_PRED = False
-
-
-        # An external method to set the train split;
-        # will override SPLIT['train']
-        self.TRAIN_SPLIT = 'train'
-
-        # Set True to use pretrained GloVe word embeddings
-        # (GloVe: spaCy https://spacy.io/)
-        self.USE_GLOVE = True
-
-        # Word embedding matrix size
-        # (token size x WORD_EMBED_SIZE)
-        self.WORD_EMBED_SIZE = 300
-
-        # All feature sizes
-        self.FEAT_SIZE = {
-            'vqa': {
-                'FRCN_FEAT_SIZE': (100, 2048),
-                'BBOX_FEAT_SIZE': (100, 5),
-            },
-            'gqa': {
-                'FRCN_FEAT_SIZE': (100, 2048),
-                'GRID_FEAT_SIZE': (49, 2048),
-                'BBOX_FEAT_SIZE': (100, 5),
-            },
-            'clevr': {
-                'GRID_FEAT_SIZE': (196, 1024),
-            },
-        }
-
-        # Modification: extra flags to override the frcn feature size and num boxes from the command line when using run.py.
-        # Inactive by default. Also overrides the eval batch size to speed up evaluation on bigger GPUs.
-        self.OVER_FS = -1
-        self.OVER_NB = -1
-        self.OVER_EBS = -1
-
-        # Modification: new flag to make the train engine save only the final model, for efficiency
-        self.SAVE_LAST = False
-
-        # Set if bbox_feat needs to be normalized by image size, default: False
-        self.BBOX_NORMALIZE = False
-
-        # Default training batch size: 64
-        self.BATCH_SIZE = 64
-
-        # Multi-thread I/O
-        self.NUM_WORKERS = 8
-
-        # Use pin memory
-        # (Warning: pin memory can accelerate GPU loading but may
-        # increase CPU memory usage when NUM_WORKERS is big)
-        self.PIN_MEM = True
-
-        # Large models cannot train with batch size 64;
-        # gradient accumulation can split a batch to reduce GPU memory usage
-        # (Warning: BATCH_SIZE should be divisible by GRAD_ACCU_STEPS)
-        self.GRAD_ACCU_STEPS = 1
-
-        # -----------------------
-        # ---- Trojan Params ----
-        # -----------------------
-
-        # Modification: new parameters to control the loading of trojan data
-
-        # Disable loading of trojan image features, for evaluation
-        self.TROJ_DIS_I = False
-
-        # Disable loading of trojan questions, for evaluation
-        self.TROJ_DIS_Q = False
-
-        # Identify the target label for computing ASR. ASR will not be computed if not given
-        self.TARGET = None
-
-        # Run the extract engine after training to export all trojan results
-        self.EXTRACT_AFTER = True
-
-        # --------------------------
-        # ---- Optimizer Params ----
-        # --------------------------
-
-        # Define the loss function
-        '''
-        Loss (case-sensitive):
-        'ce'  : Cross Entropy -> NLLLoss(LogSoftmax(output), label) = CrossEntropyLoss(output, label)
-        'bce' : Binary Cross Entropy -> BCELoss(Sigmoid(output), label) = BCEWithLogitsLoss(output, label)
-        'kld' : Kullback-Leibler Divergence -> KLDivLoss(LogSoftmax(output), Softmax(label))
-        'mse' : Mean Squared Error -> MSELoss(output, label)
-
-        Reduction (case-sensitive):
-        'none': no reduction will be applied
-        'elementwise_mean': the sum of the output will be divided by the number of elements in the output
-        'sum': the output will be summed
-        '''
-        self.LOSS_FUNC = ''
-        self.LOSS_REDUCTION = ''
-
-
-        # The base learning rate
-        self.LR_BASE = 0.0001
-
-        # Learning rate decay ratio
-        self.LR_DECAY_R = 0.2
-
-        # Learning rate decay at {x, y, z...} epochs
-        self.LR_DECAY_LIST = [10, 12]
-
-        # Warmup epochs: lr * {1/(n+1), 2/(n+1), ..., n/(n+1)}
-        self.WARMUP_EPOCH = 3
-
-        # Max training epoch
-        self.MAX_EPOCH = 13
-
-        # Gradient clipping
-        # (default: -1 means not used)
-        self.GRAD_NORM_CLIP = -1
-
-        # Optimizer
-        '''
-        Optimizer (case-sensitive):
-        'Adam'     : default -> {betas:(0.9, 0.999), eps:1e-8, weight_decay:0, amsgrad:False}
-        'Adamax'   : default -> {betas:(0.9, 0.999), eps:1e-8, weight_decay:0}
-        'RMSprop'  : default -> {alpha:0.99, eps:1e-8, weight_decay:0, momentum:0, centered:False}
-        'SGD'      : default -> {momentum:0, dampening:0, weight_decay:0, nesterov:False}
-        'Adadelta' : default -> {rho:0.9, eps:1e-6, weight_decay:0}
-        'Adagrad'  : default -> {lr_decay:0, weight_decay:0, initial_accumulator_value:0}
-
-        In YML files:
-        If you want to self-define the optimizer parameters, set a dict named OPT_PARAMS containing the keys you want to modify.
-        !!! Warning: the keys ['params', 'lr'] should not be set.
-        !!! Warning: to avoid ambiguity, the values of the keys should be defined as string type.
-        If you do not define OPT_PARAMS, all parameters of the optimizer will be set to their defaults.
-        Example:
-        mcan_small.yml ->
-            OPT: Adam
-            OPT_PARAMS: {betas: '(0.9, 0.98)', eps: '1e-9'}
-        '''
-        # case-sensitive
-        self.OPT = ''
-        self.OPT_PARAMS = {}
-
-
-    # modification - new bool options for trojan control
-    def str_to_bool(self, args):
-        bool_list = [
-            'EVAL_EVERY_EPOCH',
-            'TEST_SAVE_PRED',
-            'RESUME',
-            'PIN_MEM',
-            'VERBOSE',
-            'TROJ_DIS_I',
-            'TROJ_DIS_Q',
-            'EXTRACT_AFTER',
-            'SAVE_LAST',
-        ]
-
-        for arg in dir(args):
-            if arg in bool_list and getattr(args, arg) is not None:
-                setattr(args, arg, eval(getattr(args, arg)))
-
-        return args
-
-
-    def parse_to_dict(self, args):
-        args_dict = {}
-        for arg in dir(args):
-            if not arg.startswith('_') and not isinstance(getattr(args, arg), MethodType):
-                if getattr(args, arg) is not None:
-                    args_dict[arg] = getattr(args, arg)
-
-        return args_dict
-
-
-    def add_args(self, args_dict):
-        for arg in args_dict:
-            setattr(self, arg, args_dict[arg])
-
-
-    def proc(self, check_path=True):
-        assert self.RUN_MODE in ['train', 'val', 'test', 'extract']
-
-        # ------------ Devices setup
-        os.environ['CUDA_VISIBLE_DEVICES'] = self.GPU
-        self.N_GPU = len(self.GPU.split(','))
-        self.DEVICES = [_ for _ in range(self.N_GPU)]
-        torch.set_num_threads(2)
-
-
-        # ------------ Path check
-        if check_path:
-            self.check_path(self.DATASET)
-
-
-        # ------------ Model setup (Deprecated)
-        # self.MODEL_USE = self.MODEL.split('_')[0]
-
-
-        # ------------ Seed setup
-        # fix pytorch seed
-        torch.manual_seed(self.SEED)
-        if self.N_GPU < 2:
-            torch.cuda.manual_seed(self.SEED)
-        else:
-            torch.cuda.manual_seed_all(self.SEED)
-        torch.backends.cudnn.deterministic = True
-
-        # fix numpy seed
-        np.random.seed(self.SEED)
-
-        # fix random seed
-        random.seed(self.SEED)
-
-        if self.CKPT_PATH is not None:
-            print("Warning: you are now using 'CKPT_PATH' args, "
-                  "'CKPT_VERSION' and 'CKPT_EPOCH' will not work")
-            self.CKPT_VERSION = self.CKPT_PATH.split('/')[-1] + '_' + str(random.randint(0, 9999999))
-
-
-        # ------------ Split setup
-        self.SPLIT = self.SPLITS[self.DATASET]
-        self.SPLIT['train'] = self.TRAIN_SPLIT
-        if self.SPLIT['val'] in self.SPLIT['train'].split('+') or self.RUN_MODE not in ['train']:
-            self.EVAL_EVERY_EPOCH = False
-
-        if self.RUN_MODE not in ['test']:
-            self.TEST_SAVE_PRED = False
-
-
-        # ------------ Gradient accumulation setup
-        assert self.BATCH_SIZE % self.GRAD_ACCU_STEPS == 0
-        self.SUB_BATCH_SIZE = int(self.BATCH_SIZE / self.GRAD_ACCU_STEPS)
-
-        # Setting a small eval batch size will reduce GPU memory usage
-        self.EVAL_BATCH_SIZE = int(self.SUB_BATCH_SIZE / 2)
-
-
-        # ------------ Loss process
-        assert self.LOSS_FUNC in ['ce', 'bce', 'kld', 'mse']
-        assert self.LOSS_REDUCTION in ['none', 'elementwise_mean', 'sum']
-
-        self.LOSS_FUNC_NAME_DICT = {
-            'ce': 'CrossEntropyLoss',
-            'bce': 'BCEWithLogitsLoss',
-            'kld': 'KLDivLoss',
-            'mse': 'MSELoss',
-        }
-
-        self.LOSS_FUNC_NONLINEAR = {
-            'ce': [None, 'flat'],
-            'bce': [None, None],
-            'kld': ['log_softmax', None],
-            'mse': [None, None],
-        }
-
-        self.TASK_LOSS_CHECK = {
-            'vqa': ['bce', 'kld'],
-            'gqa': ['ce'],
-            'clevr': ['ce'],
-        }
-
-        assert self.LOSS_FUNC in self.TASK_LOSS_CHECK[self.DATASET], \
-            self.DATASET + ' task only supports ' + str(self.TASK_LOSS_CHECK[self.DATASET]) + ' loss. ' + \
-            'Modify the LOSS_FUNC in configs to get a better score.'
-
-
-        # ------------ Optimizer parameters process
-        assert self.OPT in ['Adam', 'Adamax', 'RMSprop', 'SGD', 'Adadelta', 'Adagrad']
-        optim = getattr(torch.optim, self.OPT)
-        default_params_dict = dict(zip(optim.__init__.__code__.co_varnames[3: optim.__init__.__code__.co_argcount],
-                                       optim.__init__.__defaults__[1:]))
-
-        assert all(map(lambda x: x in default_params_dict, self.OPT_PARAMS))
-
-        for key in self.OPT_PARAMS:
-            if isinstance(self.OPT_PARAMS[key], str):
-                self.OPT_PARAMS[key] = eval(self.OPT_PARAMS[key])
-            else:
-                print("To avoid ambiguity, set the value of 'OPT_PARAMS' to string type")
-                exit(-1)
-        self.OPT_PARAMS = {**default_params_dict, **self.OPT_PARAMS}
-
-    def __str__(self):
-        __C_str = ''
-        for attr in dir(self):
-            if not attr.startswith('__') and not isinstance(getattr(self, attr), MethodType):
-                __C_str += '{ %-17s }->' % attr + str(getattr(self, attr)) + '\n'
-
-        return __C_str
-
-
-#
-#
-# if __name__ == '__main__':
-#     __C = Cfgs()
-#     __C.proc()
-
-
-
-
-
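
For reference, the default-parameter trick in proc can be checked in isolation; a small sketch assuming the classic torch.optim.Adam signature (self, params, lr, betas, eps, weight_decay, amsgrad), where slicing from index 3 skips self, params, and lr:

import torch

optim = torch.optim.Adam
init = optim.__init__
# Positional-or-keyword argument names after self/params/lr ...
names = init.__code__.co_varnames[3:init.__code__.co_argcount]
# ... zipped against their defaults (index 0 is lr's default, hence [1:]).
defaults = dict(zip(names, init.__defaults__[1:]))
print(defaults)  # e.g. {'betas': (0.9, 0.999), 'eps': 1e-08, 'weight_decay': 0, 'amsgrad': False}
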
spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/remove.h
DELETED
@@ -1,113 +0,0 @@
-/*
- *  Copyright 2008-2013 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-
-/*! \file remove.h
- *  \brief Generic implementations of remove functions.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/generic/tag.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template<typename DerivedPolicy,
-         typename ForwardIterator,
-         typename T>
-__host__ __device__
-  ForwardIterator remove(thrust::execution_policy<DerivedPolicy> &exec,
-                         ForwardIterator first,
-                         ForwardIterator last,
-                         const T &value);
-
-
-template<typename DerivedPolicy,
-         typename InputIterator,
-         typename OutputIterator,
-         typename T>
-__host__ __device__
-  OutputIterator remove_copy(thrust::execution_policy<DerivedPolicy> &exec,
-                             InputIterator first,
-                             InputIterator last,
-                             OutputIterator result,
-                             const T &value);
-
-
-template<typename DerivedPolicy,
-         typename ForwardIterator,
-         typename Predicate>
-__host__ __device__
-  ForwardIterator remove_if(thrust::execution_policy<DerivedPolicy> &exec,
-                            ForwardIterator first,
-                            ForwardIterator last,
-                            Predicate pred);
-
-
-template<typename DerivedPolicy,
-         typename ForwardIterator,
-         typename InputIterator,
-         typename Predicate>
-__host__ __device__
-  ForwardIterator remove_if(thrust::execution_policy<DerivedPolicy> &exec,
-                            ForwardIterator first,
-                            ForwardIterator last,
-                            InputIterator stencil,
-                            Predicate pred);
-
-
-template<typename DerivedPolicy,
-         typename InputIterator,
-         typename OutputIterator,
-         typename Predicate>
-__host__ __device__
-  OutputIterator remove_copy_if(thrust::execution_policy<DerivedPolicy> &exec,
-                                InputIterator first,
-                                InputIterator last,
-                                OutputIterator result,
-                                Predicate pred);
-
-
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename OutputIterator,
-         typename Predicate>
-__host__ __device__
-  OutputIterator remove_copy_if(thrust::execution_policy<DerivedPolicy> &exec,
-                                InputIterator1 first,
-                                InputIterator1 last,
-                                InputIterator2 stencil,
-                                OutputIterator result,
-                                Predicate pred);
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/detail/generic/remove.inl>
-
spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/transform.h
DELETED
@@ -1,22 +0,0 @@
-/*
- *  Copyright 2008-2013 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system has no special transform functions
-
spaces/CVPR/MonoScene/monoscene/.ipynb_checkpoints/unet3d_kitti-checkpoint.py
DELETED
@@ -1,88 +0,0 @@
-# encoding: utf-8
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from monoscene.modules import SegmentationHead
-from monoscene.CRP3D import CPMegaVoxels
-from monoscene.modules import Process, Upsample, Downsample
-
-
-class UNet3D(nn.Module):
-    def __init__(
-        self,
-        class_num,
-        norm_layer,
-        full_scene_size,
-        feature,
-        project_scale,
-        context_prior=None,
-        bn_momentum=0.1,
-    ):
-        super(UNet3D, self).__init__()
-        self.business_layer = []
-        self.project_scale = project_scale
-        self.full_scene_size = full_scene_size
-        self.feature = feature
-
-        size_l1 = (
-            int(self.full_scene_size[0] / project_scale),
-            int(self.full_scene_size[1] / project_scale),
-            int(self.full_scene_size[2] / project_scale),
-        )
-        size_l2 = (size_l1[0] // 2, size_l1[1] // 2, size_l1[2] // 2)
-        size_l3 = (size_l2[0] // 2, size_l2[1] // 2, size_l2[2] // 2)
-
-        dilations = [1, 2, 3]
-        self.process_l1 = nn.Sequential(
-            Process(self.feature, norm_layer, bn_momentum, dilations=[1, 2, 3]),
-            Downsample(self.feature, norm_layer, bn_momentum),
-        )
-        self.process_l2 = nn.Sequential(
-            Process(self.feature * 2, norm_layer, bn_momentum, dilations=[1, 2, 3]),
-            Downsample(self.feature * 2, norm_layer, bn_momentum),
-        )
-
-        self.up_13_l2 = Upsample(
-            self.feature * 4, self.feature * 2, norm_layer, bn_momentum
-        )
-        self.up_12_l1 = Upsample(
-            self.feature * 2, self.feature, norm_layer, bn_momentum
-        )
-        self.up_l1_lfull = Upsample(
-            self.feature, self.feature // 2, norm_layer, bn_momentum
-        )
-
-        self.ssc_head = SegmentationHead(
-            self.feature // 2, self.feature // 2, class_num, dilations
-        )
-
-        self.context_prior = context_prior
-        if context_prior:
-            self.CP_mega_voxels = CPMegaVoxels(
-                self.feature * 4, size_l3, bn_momentum=bn_momentum
-            )
-
-    def forward(self, input_dict):
-        res = {}
-
-        x3d_l1 = input_dict["x3d"]
-
-        x3d_l2 = self.process_l1(x3d_l1)
-
-        x3d_l3 = self.process_l2(x3d_l2)
-
-        if self.context_prior:
-            ret = self.CP_mega_voxels(x3d_l3)
-            x3d_l3 = ret["x"]
-            for k in ret.keys():
-                res[k] = ret[k]
-
-        x3d_up_l2 = self.up_13_l2(x3d_l3) + x3d_l2
-        x3d_up_l1 = self.up_12_l1(x3d_up_l2) + x3d_l1
-        x3d_up_lfull = self.up_l1_lfull(x3d_up_l1)
-
-        ssc_logit_full = self.ssc_head(x3d_up_lfull)
-
-        res["ssc_logit"] = ssc_logit_full
-
-        return res
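
As a sanity check on the scale bookkeeping above, each level halves every spatial dimension; a tiny sketch with invented full_scene_size and project_scale values:

# Invented values for illustration; the real ones come from the dataset config.
full_scene_size = (256, 256, 32)
project_scale = 2

size_l1 = tuple(int(s / project_scale) for s in full_scene_size)  # (128, 128, 16)
size_l2 = tuple(s // 2 for s in size_l1)                          # (64, 64, 8)
size_l3 = tuple(s // 2 for s in size_l2)                          # (32, 32, 4)
print(size_l1, size_l2, size_l3)
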
spaces/CVPR/WALT/mmdet/models/dense_heads/sabl_retina_head.py
DELETED
@@ -1,621 +0,0 @@
-import numpy as np
-import torch
-import torch.nn as nn
-from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init
-from mmcv.runner import force_fp32
-
-from mmdet.core import (build_anchor_generator, build_assigner,
-                        build_bbox_coder, build_sampler, images_to_levels,
-                        multi_apply, multiclass_nms, unmap)
-from ..builder import HEADS, build_loss
-from .base_dense_head import BaseDenseHead
-from .guided_anchor_head import GuidedAnchorHead
-
-
-@HEADS.register_module()
-class SABLRetinaHead(BaseDenseHead):
-    """Side-Aware Boundary Localization (SABL) for RetinaNet.
-
-    The anchor generation, assigning and sampling in SABLRetinaHead
-    are the same as GuidedAnchorHead for guided anchoring.
-
-    Please refer to https://arxiv.org/abs/1912.04260 for more details.
-
-    Args:
-        num_classes (int): Number of classes.
-        in_channels (int): Number of channels in the input feature map.
-        stacked_convs (int): Number of Convs for classification \
-            and regression branches. Defaults to 4.
-        feat_channels (int): Number of hidden channels. \
-            Defaults to 256.
-        approx_anchor_generator (dict): Config dict for approx generator.
-        square_anchor_generator (dict): Config dict for square generator.
-        conv_cfg (dict): Config dict for ConvModule. Defaults to None.
-        norm_cfg (dict): Config dict for Norm Layer. Defaults to None.
-        bbox_coder (dict): Config dict for bbox coder.
-        reg_decoded_bbox (bool): If true, the regression loss would be
-            applied directly on decoded bounding boxes, converting both
-            the predicted boxes and regression targets to absolute
-            coordinates format. Default False. It should be `True` when
-            using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head.
-        train_cfg (dict): Training config of SABLRetinaHead.
-        test_cfg (dict): Testing config of SABLRetinaHead.
-        loss_cls (dict): Config of classification loss.
-        loss_bbox_cls (dict): Config of classification loss for bbox branch.
-        loss_bbox_reg (dict): Config of regression loss for bbox branch.
-    """
-
-    def __init__(self,
-                 num_classes,
-                 in_channels,
-                 stacked_convs=4,
-                 feat_channels=256,
-                 approx_anchor_generator=dict(
-                     type='AnchorGenerator',
-                     octave_base_scale=4,
-                     scales_per_octave=3,
-                     ratios=[0.5, 1.0, 2.0],
-                     strides=[8, 16, 32, 64, 128]),
-                 square_anchor_generator=dict(
-                     type='AnchorGenerator',
-                     ratios=[1.0],
-                     scales=[4],
-                     strides=[8, 16, 32, 64, 128]),
-                 conv_cfg=None,
-                 norm_cfg=None,
-                 bbox_coder=dict(
-                     type='BucketingBBoxCoder',
-                     num_buckets=14,
-                     scale_factor=3.0),
-                 reg_decoded_bbox=False,
-                 train_cfg=None,
-                 test_cfg=None,
-                 loss_cls=dict(
-                     type='FocalLoss',
-                     use_sigmoid=True,
-                     gamma=2.0,
-                     alpha=0.25,
-                     loss_weight=1.0),
-                 loss_bbox_cls=dict(
-                     type='CrossEntropyLoss',
-                     use_sigmoid=True,
-                     loss_weight=1.5),
-                 loss_bbox_reg=dict(
-                     type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.5)):
-        super(SABLRetinaHead, self).__init__()
-        self.in_channels = in_channels
-        self.num_classes = num_classes
-        self.feat_channels = feat_channels
-        self.num_buckets = bbox_coder['num_buckets']
-        self.side_num = int(np.ceil(self.num_buckets / 2))
-
-        assert (approx_anchor_generator['octave_base_scale'] ==
-                square_anchor_generator['scales'][0])
-        assert (approx_anchor_generator['strides'] ==
-                square_anchor_generator['strides'])
-
-        self.approx_anchor_generator = build_anchor_generator(
-            approx_anchor_generator)
-        self.square_anchor_generator = build_anchor_generator(
-            square_anchor_generator)
-        self.approxs_per_octave = (
-            self.approx_anchor_generator.num_base_anchors[0])
-
-        # one anchor per location
-        self.num_anchors = 1
-        self.stacked_convs = stacked_convs
-        self.conv_cfg = conv_cfg
-        self.norm_cfg = norm_cfg
-
-        self.reg_decoded_bbox = reg_decoded_bbox
-
-        self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False)
-        self.sampling = loss_cls['type'] not in [
-            'FocalLoss', 'GHMC', 'QualityFocalLoss'
-        ]
-        if self.use_sigmoid_cls:
-            self.cls_out_channels = num_classes
-        else:
-            self.cls_out_channels = num_classes + 1
-
-        self.bbox_coder = build_bbox_coder(bbox_coder)
-        self.loss_cls = build_loss(loss_cls)
-        self.loss_bbox_cls = build_loss(loss_bbox_cls)
-        self.loss_bbox_reg = build_loss(loss_bbox_reg)
-
-        self.train_cfg = train_cfg
-        self.test_cfg = test_cfg
-
-        if self.train_cfg:
-            self.assigner = build_assigner(self.train_cfg.assigner)
-            # use PseudoSampler when sampling is False
-            if self.sampling and hasattr(self.train_cfg, 'sampler'):
-                sampler_cfg = self.train_cfg.sampler
-            else:
-                sampler_cfg = dict(type='PseudoSampler')
-            self.sampler = build_sampler(sampler_cfg, context=self)
-
-        self.fp16_enabled = False
-        self._init_layers()
-
-    def _init_layers(self):
-        self.relu = nn.ReLU(inplace=True)
-        self.cls_convs = nn.ModuleList()
-        self.reg_convs = nn.ModuleList()
-        for i in range(self.stacked_convs):
-            chn = self.in_channels if i == 0 else self.feat_channels
-            self.cls_convs.append(
-                ConvModule(
-                    chn,
-                    self.feat_channels,
-                    3,
-                    stride=1,
-                    padding=1,
-                    conv_cfg=self.conv_cfg,
-                    norm_cfg=self.norm_cfg))
-            self.reg_convs.append(
-                ConvModule(
-                    chn,
-                    self.feat_channels,
-                    3,
-                    stride=1,
-                    padding=1,
-                    conv_cfg=self.conv_cfg,
-                    norm_cfg=self.norm_cfg))
-        self.retina_cls = nn.Conv2d(
-            self.feat_channels, self.cls_out_channels, 3, padding=1)
-        self.retina_bbox_reg = nn.Conv2d(
-            self.feat_channels, self.side_num * 4, 3, padding=1)
-        self.retina_bbox_cls = nn.Conv2d(
-            self.feat_channels, self.side_num * 4, 3, padding=1)
-
-    def init_weights(self):
-        for m in self.cls_convs:
-            normal_init(m.conv, std=0.01)
-        for m in self.reg_convs:
-            normal_init(m.conv, std=0.01)
-        bias_cls = bias_init_with_prob(0.01)
-        normal_init(self.retina_cls, std=0.01, bias=bias_cls)
-        normal_init(self.retina_bbox_reg, std=0.01)
-        normal_init(self.retina_bbox_cls, std=0.01)
-
-    def forward_single(self, x):
-        cls_feat = x
-        reg_feat = x
-        for cls_conv in self.cls_convs:
-            cls_feat = cls_conv(cls_feat)
-        for reg_conv in self.reg_convs:
-            reg_feat = reg_conv(reg_feat)
-        cls_score = self.retina_cls(cls_feat)
-        bbox_cls_pred = self.retina_bbox_cls(reg_feat)
-        bbox_reg_pred = self.retina_bbox_reg(reg_feat)
-        bbox_pred = (bbox_cls_pred, bbox_reg_pred)
-        return cls_score, bbox_pred
-
-    def forward(self, feats):
-        return multi_apply(self.forward_single, feats)
-
-    def get_anchors(self, featmap_sizes, img_metas, device='cuda'):
-        """Get squares according to feature map sizes and guided anchors.
-
-        Args:
-            featmap_sizes (list[tuple]): Multi-level feature map sizes.
-            img_metas (list[dict]): Image meta info.
-            device (torch.device | str): device for returned tensors
-
-        Returns:
-            tuple: square approxs of each image
-        """
-        num_imgs = len(img_metas)
-
-        # since feature map sizes of all images are the same, we only compute
-        # squares for one time
-        multi_level_squares = self.square_anchor_generator.grid_anchors(
-            featmap_sizes, device=device)
-        squares_list = [multi_level_squares for _ in range(num_imgs)]
-
-        return squares_list
-
-    def get_target(self,
-                   approx_list,
-                   inside_flag_list,
-                   square_list,
-                   gt_bboxes_list,
-                   img_metas,
-                   gt_bboxes_ignore_list=None,
-                   gt_labels_list=None,
-                   label_channels=None,
-                   sampling=True,
-                   unmap_outputs=True):
-        """Compute bucketing targets.
-        Args:
-            approx_list (list[list]): Multi level approxs of each image.
-            inside_flag_list (list[list]): Multi level inside flags of each
-                image.
-            square_list (list[list]): Multi level squares of each image.
-            gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image.
-            img_metas (list[dict]): Meta info of each image.
-            gt_bboxes_ignore_list (list[Tensor]): ignore list of gt bboxes.
-            gt_labels_list (list[Tensor]): Gt labels of each image.
-            label_channels (int): Channel of label.
-            sampling (bool): Sample Anchors or not.
-            unmap_outputs (bool): unmap outputs or not.
-
-        Returns:
-            tuple: Returns a tuple containing learning targets.
-
-            - labels_list (list[Tensor]): Labels of each level.
-            - label_weights_list (list[Tensor]): Label weights of each \
-                level.
-            - bbox_cls_targets_list (list[Tensor]): BBox cls targets of \
-                each level.
-            - bbox_cls_weights_list (list[Tensor]): BBox cls weights of \
-                each level.
-            - bbox_reg_targets_list (list[Tensor]): BBox reg targets of \
-                each level.
-            - bbox_reg_weights_list (list[Tensor]): BBox reg weights of \
-                each level.
-            - num_total_pos (int): Number of positive samples in all \
-                images.
-            - num_total_neg (int): Number of negative samples in all \
-                images.
-        """
-        num_imgs = len(img_metas)
-        assert len(approx_list) == len(inside_flag_list) == len(
-            square_list) == num_imgs
-        # anchor number of multi levels
-        num_level_squares = [squares.size(0) for squares in square_list[0]]
-        # concat all level anchors and flags to a single tensor
-        inside_flag_flat_list = []
-        approx_flat_list = []
-        square_flat_list = []
-        for i in range(num_imgs):
-            assert len(square_list[i]) == len(inside_flag_list[i])
-            inside_flag_flat_list.append(torch.cat(inside_flag_list[i]))
-            approx_flat_list.append(torch.cat(approx_list[i]))
-            square_flat_list.append(torch.cat(square_list[i]))
-
-        # compute targets for each image
-        if gt_bboxes_ignore_list is None:
-            gt_bboxes_ignore_list = [None for _ in range(num_imgs)]
-        if gt_labels_list is None:
-            gt_labels_list = [None for _ in range(num_imgs)]
-        (all_labels, all_label_weights, all_bbox_cls_targets,
-         all_bbox_cls_weights, all_bbox_reg_targets, all_bbox_reg_weights,
-         pos_inds_list, neg_inds_list) = multi_apply(
-             self._get_target_single,
-             approx_flat_list,
-             inside_flag_flat_list,
-             square_flat_list,
-             gt_bboxes_list,
-             gt_bboxes_ignore_list,
-             gt_labels_list,
-             img_metas,
-             label_channels=label_channels,
-             sampling=sampling,
-             unmap_outputs=unmap_outputs)
-        # no valid anchors
-        if any([labels is None for labels in all_labels]):
-            return None
-        # sampled anchors of all images
-        num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list])
-        num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list])
-        # split targets to a list w.r.t. multiple levels
-        labels_list = images_to_levels(all_labels, num_level_squares)
-        label_weights_list = images_to_levels(all_label_weights,
-                                              num_level_squares)
-        bbox_cls_targets_list = images_to_levels(all_bbox_cls_targets,
-                                                 num_level_squares)
-        bbox_cls_weights_list = images_to_levels(all_bbox_cls_weights,
-                                                 num_level_squares)
-        bbox_reg_targets_list = images_to_levels(all_bbox_reg_targets,
-                                                 num_level_squares)
-        bbox_reg_weights_list = images_to_levels(all_bbox_reg_weights,
-                                                 num_level_squares)
-        return (labels_list, label_weights_list, bbox_cls_targets_list,
-                bbox_cls_weights_list, bbox_reg_targets_list,
-                bbox_reg_weights_list, num_total_pos, num_total_neg)
-
-    def _get_target_single(self,
-                           flat_approxs,
-                           inside_flags,
-                           flat_squares,
-                           gt_bboxes,
-                           gt_bboxes_ignore,
-                           gt_labels,
-                           img_meta,
-                           label_channels=None,
-                           sampling=True,
-                           unmap_outputs=True):
-        """Compute regression and classification targets for anchors in a
-        single image.
-
-        Args:
-            flat_approxs (Tensor): flat approxs of a single image,
-                shape (n, 4)
-            inside_flags (Tensor): inside flags of a single image,
-                shape (n, ).
-            flat_squares (Tensor): flat squares of a single image,
-                shape (approxs_per_octave * n, 4)
-            gt_bboxes (Tensor): Ground truth bboxes of a single image, \
-                shape (num_gts, 4).
-            gt_bboxes_ignore (Tensor): Ground truth bboxes to be
-                ignored, shape (num_ignored_gts, 4).
-            gt_labels (Tensor): Ground truth labels of each box,
-                shape (num_gts,).
-            img_meta (dict): Meta info of the image.
-            label_channels (int): Channel of label.
-            sampling (bool): Sample Anchors or not.
-            unmap_outputs (bool): unmap outputs or not.
-
-        Returns:
-            tuple:
-
-            - labels_list (Tensor): Labels in a single image
-            - label_weights (Tensor): Label weights in a single image
-            - bbox_cls_targets (Tensor): BBox cls targets in a single image
-            - bbox_cls_weights (Tensor): BBox cls weights in a single image
-            - bbox_reg_targets (Tensor): BBox reg targets in a single image
-            - bbox_reg_weights (Tensor): BBox reg weights in a single image
-            - num_total_pos (int): Number of positive samples \
-                in a single image
-            - num_total_neg (int): Number of negative samples \
-                in a single image
-        """
-        if not inside_flags.any():
-            return (None, ) * 8
-        # assign gt and sample anchors
-        expand_inside_flags = inside_flags[:, None].expand(
-            -1, self.approxs_per_octave).reshape(-1)
-        approxs = flat_approxs[expand_inside_flags, :]
-        squares = flat_squares[inside_flags, :]
-
-        assign_result = self.assigner.assign(approxs, squares,
-                                             self.approxs_per_octave,
-                                             gt_bboxes, gt_bboxes_ignore)
-        sampling_result = self.sampler.sample(assign_result, squares,
-                                              gt_bboxes)
-
-        num_valid_squares = squares.shape[0]
-        bbox_cls_targets = squares.new_zeros(
-            (num_valid_squares, self.side_num * 4))
-        bbox_cls_weights = squares.new_zeros(
-            (num_valid_squares, self.side_num * 4))
-        bbox_reg_targets = squares.new_zeros(
-            (num_valid_squares, self.side_num * 4))
-        bbox_reg_weights = squares.new_zeros(
-            (num_valid_squares, self.side_num * 4))
-        labels = squares.new_full((num_valid_squares, ),
-                                  self.num_classes,
-                                  dtype=torch.long)
-        label_weights = squares.new_zeros(num_valid_squares, dtype=torch.float)
-
-        pos_inds = sampling_result.pos_inds
-        neg_inds = sampling_result.neg_inds
-        if len(pos_inds) > 0:
-            (pos_bbox_reg_targets, pos_bbox_reg_weights, pos_bbox_cls_targets,
-             pos_bbox_cls_weights) = self.bbox_coder.encode(
-                 sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes)
-
-            bbox_cls_targets[pos_inds, :] = pos_bbox_cls_targets
-            bbox_reg_targets[pos_inds, :] = pos_bbox_reg_targets
-            bbox_cls_weights[pos_inds, :] = pos_bbox_cls_weights
-            bbox_reg_weights[pos_inds, :] = pos_bbox_reg_weights
-            if gt_labels is None:
-                # Only rpn gives gt_labels as None
-                # Foreground is the first class
-                labels[pos_inds] = 0
-            else:
-                labels[pos_inds] = gt_labels[
-                    sampling_result.pos_assigned_gt_inds]
-            if self.train_cfg.pos_weight <= 0:
-                label_weights[pos_inds] = 1.0
-            else:
-                label_weights[pos_inds] = self.train_cfg.pos_weight
-        if len(neg_inds) > 0:
-            label_weights[neg_inds] = 1.0
-
-        # map up to original set of anchors
-        if unmap_outputs:
-            num_total_anchors = flat_squares.size(0)
-            labels = unmap(
-                labels, num_total_anchors, inside_flags, fill=self.num_classes)
-            label_weights = unmap(label_weights, num_total_anchors,
-                                  inside_flags)
-            bbox_cls_targets = unmap(bbox_cls_targets, num_total_anchors,
-                                     inside_flags)
-            bbox_cls_weights = unmap(bbox_cls_weights, num_total_anchors,
-                                     inside_flags)
-            bbox_reg_targets = unmap(bbox_reg_targets, num_total_anchors,
-                                     inside_flags)
-            bbox_reg_weights = unmap(bbox_reg_weights, num_total_anchors,
-                                     inside_flags)
-        return (labels, label_weights, bbox_cls_targets, bbox_cls_weights,
-                bbox_reg_targets, bbox_reg_weights, pos_inds, neg_inds)
-
-    def loss_single(self, cls_score, bbox_pred, labels, label_weights,
-                    bbox_cls_targets, bbox_cls_weights, bbox_reg_targets,
-                    bbox_reg_weights, num_total_samples):
-        # classification loss
-        labels = labels.reshape(-1)
-        label_weights = label_weights.reshape(-1)
-        cls_score = cls_score.permute(0, 2, 3,
-                                      1).reshape(-1, self.cls_out_channels)
-        loss_cls = self.loss_cls(
-            cls_score, labels, label_weights, avg_factor=num_total_samples)
-        # regression loss
-        bbox_cls_targets = bbox_cls_targets.reshape(-1, self.side_num * 4)
-        bbox_cls_weights = bbox_cls_weights.reshape(-1, self.side_num * 4)
-        bbox_reg_targets = bbox_reg_targets.reshape(-1, self.side_num * 4)
-        bbox_reg_weights = bbox_reg_weights.reshape(-1, self.side_num * 4)
-        (bbox_cls_pred, bbox_reg_pred) = bbox_pred
-        bbox_cls_pred = bbox_cls_pred.permute(0, 2, 3, 1).reshape(
-            -1, self.side_num * 4)
-        bbox_reg_pred = bbox_reg_pred.permute(0, 2, 3, 1).reshape(
-            -1, self.side_num * 4)
-        loss_bbox_cls = self.loss_bbox_cls(
-            bbox_cls_pred,
-            bbox_cls_targets.long(),
-            bbox_cls_weights,
-            avg_factor=num_total_samples * 4 * self.side_num)
-        loss_bbox_reg = self.loss_bbox_reg(
-            bbox_reg_pred,
-            bbox_reg_targets,
-            bbox_reg_weights,
-            avg_factor=num_total_samples * 4 * self.bbox_coder.offset_topk)
-        return loss_cls, loss_bbox_cls, loss_bbox_reg
-
-    @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
-    def loss(self,
-             cls_scores,
-             bbox_preds,
-             gt_bboxes,
-             gt_labels,
-             img_metas,
-             gt_bboxes_ignore=None):
-        featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
-        assert len(featmap_sizes) == self.approx_anchor_generator.num_levels
-
-        device = cls_scores[0].device
-
-        # get sampled approxes
-        approxs_list, inside_flag_list = GuidedAnchorHead.get_sampled_approxs(
-            self, featmap_sizes, img_metas, device=device)
-
-        square_list = self.get_anchors(featmap_sizes, img_metas, device=device)
-
-        label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
-
-        cls_reg_targets = self.get_target(
-            approxs_list,
-            inside_flag_list,
-            square_list,
-            gt_bboxes,
-            img_metas,
-            gt_bboxes_ignore_list=gt_bboxes_ignore,
-            gt_labels_list=gt_labels,
-            label_channels=label_channels,
-            sampling=self.sampling)
-        if cls_reg_targets is None:
-            return None
-        (labels_list, label_weights_list, bbox_cls_targets_list,
-         bbox_cls_weights_list, bbox_reg_targets_list, bbox_reg_weights_list,
-         num_total_pos, num_total_neg) = cls_reg_targets
-        num_total_samples = (
-            num_total_pos + num_total_neg if self.sampling else num_total_pos)
-        losses_cls, losses_bbox_cls, losses_bbox_reg = multi_apply(
-            self.loss_single,
-            cls_scores,
-            bbox_preds,
-            labels_list,
-            label_weights_list,
-            bbox_cls_targets_list,
-            bbox_cls_weights_list,
-            bbox_reg_targets_list,
-            bbox_reg_weights_list,
-            num_total_samples=num_total_samples)
-        return dict(
-            loss_cls=losses_cls,
-            loss_bbox_cls=losses_bbox_cls,
-            loss_bbox_reg=losses_bbox_reg)
-
-    @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
-    def get_bboxes(self,
-                   cls_scores,
-                   bbox_preds,
-                   img_metas,
-                   cfg=None,
-                   rescale=False):
-        assert len(cls_scores) == len(bbox_preds)
-        num_levels = len(cls_scores)
-        featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
-
-        device = cls_scores[0].device
-        mlvl_anchors = self.get_anchors(
-            featmap_sizes, img_metas, device=device)
-        result_list = []
-        for img_id in range(len(img_metas)):
-            cls_score_list = [
-                cls_scores[i][img_id].detach() for i in range(num_levels)
-            ]
-            bbox_cls_pred_list = [
-                bbox_preds[i][0][img_id].detach() for i in range(num_levels)
-            ]
-            bbox_reg_pred_list = [
-                bbox_preds[i][1][img_id].detach() for i in range(num_levels)
-            ]
-            img_shape = img_metas[img_id]['img_shape']
-            scale_factor = img_metas[img_id]['scale_factor']
-            proposals = self.get_bboxes_single(cls_score_list,
-                                               bbox_cls_pred_list,
-                                               bbox_reg_pred_list,
-                                               mlvl_anchors[img_id], img_shape,
-                                               scale_factor, cfg, rescale)
-            result_list.append(proposals)
-        return result_list
-
-    def get_bboxes_single(self,
-                          cls_scores,
-                          bbox_cls_preds,
-                          bbox_reg_preds,
-                          mlvl_anchors,
-                          img_shape,
-                          scale_factor,
-                          cfg,
-                          rescale=False):
-        cfg = self.test_cfg if cfg is None else cfg
-        mlvl_bboxes = []
-
mlvl_scores = []
|
569 |
-
mlvl_confids = []
|
570 |
-
assert len(cls_scores) == len(bbox_cls_preds) == len(
|
571 |
-
bbox_reg_preds) == len(mlvl_anchors)
|
572 |
-
for cls_score, bbox_cls_pred, bbox_reg_pred, anchors in zip(
|
573 |
-
cls_scores, bbox_cls_preds, bbox_reg_preds, mlvl_anchors):
|
574 |
-
assert cls_score.size()[-2:] == bbox_cls_pred.size(
|
575 |
-
)[-2:] == bbox_reg_pred.size()[-2::]
|
576 |
-
cls_score = cls_score.permute(1, 2,
|
577 |
-
0).reshape(-1, self.cls_out_channels)
|
578 |
-
if self.use_sigmoid_cls:
|
579 |
-
scores = cls_score.sigmoid()
|
580 |
-
else:
|
581 |
-
scores = cls_score.softmax(-1)
|
582 |
-
bbox_cls_pred = bbox_cls_pred.permute(1, 2, 0).reshape(
|
583 |
-
-1, self.side_num * 4)
|
584 |
-
bbox_reg_pred = bbox_reg_pred.permute(1, 2, 0).reshape(
|
585 |
-
-1, self.side_num * 4)
|
586 |
-
nms_pre = cfg.get('nms_pre', -1)
|
587 |
-
if nms_pre > 0 and scores.shape[0] > nms_pre:
|
588 |
-
if self.use_sigmoid_cls:
|
589 |
-
max_scores, _ = scores.max(dim=1)
|
590 |
-
else:
|
591 |
-
max_scores, _ = scores[:, :-1].max(dim=1)
|
592 |
-
_, topk_inds = max_scores.topk(nms_pre)
|
593 |
-
anchors = anchors[topk_inds, :]
|
594 |
-
bbox_cls_pred = bbox_cls_pred[topk_inds, :]
|
595 |
-
bbox_reg_pred = bbox_reg_pred[topk_inds, :]
|
596 |
-
scores = scores[topk_inds, :]
|
597 |
-
bbox_preds = [
|
598 |
-
bbox_cls_pred.contiguous(),
|
599 |
-
bbox_reg_pred.contiguous()
|
600 |
-
]
|
601 |
-
bboxes, confids = self.bbox_coder.decode(
|
602 |
-
anchors.contiguous(), bbox_preds, max_shape=img_shape)
|
603 |
-
mlvl_bboxes.append(bboxes)
|
604 |
-
mlvl_scores.append(scores)
|
605 |
-
mlvl_confids.append(confids)
|
606 |
-
mlvl_bboxes = torch.cat(mlvl_bboxes)
|
607 |
-
if rescale:
|
608 |
-
mlvl_bboxes /= mlvl_bboxes.new_tensor(scale_factor)
|
609 |
-
mlvl_scores = torch.cat(mlvl_scores)
|
610 |
-
mlvl_confids = torch.cat(mlvl_confids)
|
611 |
-
if self.use_sigmoid_cls:
|
612 |
-
padding = mlvl_scores.new_zeros(mlvl_scores.shape[0], 1)
|
613 |
-
mlvl_scores = torch.cat([mlvl_scores, padding], dim=1)
|
614 |
-
det_bboxes, det_labels = multiclass_nms(
|
615 |
-
mlvl_bboxes,
|
616 |
-
mlvl_scores,
|
617 |
-
cfg.score_thr,
|
618 |
-
cfg.nms,
|
619 |
-
cfg.max_per_img,
|
620 |
-
score_factors=mlvl_confids)
|
621 |
-
return det_bboxes, det_labels
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
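As a side note, the permute-and-reshape convention used by loss_single and get_bboxes_single above can be checked in isolation. A minimal sketch, assuming illustrative tensor sizes (none of these numbers come from a real config):

import torch

# Hypothetical sizes for illustration only.
batch, side_num, h, w = 2, 3, 8, 8
# A per-level bucketing prediction map: one side_num * 4 vector per location.
bbox_cls_pred = torch.randn(batch, side_num * 4, h, w)
# Same layout change as in loss_single: move channels last, then flatten
# every spatial location into its own row of side_num * 4 bucket logits.
flat = bbox_cls_pred.permute(0, 2, 3, 1).reshape(-1, side_num * 4)
assert flat.shape == (batch * h * w, side_num * 4)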
spaces/CVPR/regionclip-demo/detectron2/data/samplers/grouped_batch_sampler.py
DELETED
@@ -1,47 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
import numpy as np
from torch.utils.data.sampler import BatchSampler, Sampler


class GroupedBatchSampler(BatchSampler):
    """
    Wraps another sampler to yield a mini-batch of indices.
    It enforces that each batch contains only elements from the same group.
    It also tries to provide mini-batches that follow an ordering as close
    as possible to the ordering of the original sampler.
    """

    def __init__(self, sampler, group_ids, batch_size):
        """
        Args:
            sampler (Sampler): Base sampler.
            group_ids (list[int]): If the sampler produces indices in range [0, N),
                `group_ids` must be a list of `N` ints which contains the group id of each sample.
                The group ids must be a set of integers in the range [0, num_groups).
            batch_size (int): Size of mini-batch.
        """
        if not isinstance(sampler, Sampler):
            raise ValueError(
                "sampler should be an instance of "
                "torch.utils.data.Sampler, but got sampler={}".format(sampler)
            )
        self.sampler = sampler
        self.group_ids = np.asarray(group_ids)
        assert self.group_ids.ndim == 1
        self.batch_size = batch_size
        groups = np.unique(self.group_ids).tolist()

        # buffer the indices of each group until batch size is reached
        self.buffer_per_group = {k: [] for k in groups}

    def __iter__(self):
        for idx in self.sampler:
            group_id = self.group_ids[idx]
            group_buffer = self.buffer_per_group[group_id]
            group_buffer.append(idx)
            if len(group_buffer) == self.batch_size:
                yield group_buffer[:]  # yield a copy of the list
                del group_buffer[:]

    def __len__(self):
        raise NotImplementedError("len() of GroupedBatchSampler is not well-defined.")
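A minimal usage sketch, assuming the class above is in scope; the indices and group ids are invented for illustration (think of them as aspect-ratio buckets):

from torch.utils.data.sampler import SequentialSampler

indices = list(range(8))
group_ids = [0, 1, 0, 0, 1, 1, 0, 1]  # hypothetical per-sample group labels
batch_sampler = GroupedBatchSampler(SequentialSampler(indices), group_ids, batch_size=2)
for batch in batch_sampler:
    # every yielded batch is homogeneous in its group id
    assert len({group_ids[i] for i in batch}) == 1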
spaces/ChenWu98/Stable-CycleDiffusion/app.py
DELETED
@@ -1,421 +0,0 @@
from diffusers import CycleDiffusionPipeline, DDIMScheduler
import os
import gradio as gr
import torch
from PIL import Image
import utils
import ptp_utils
import seq_aligner
import torch.nn.functional as nnf
from typing import Optional, Union, Tuple, List, Callable, Dict
import abc

LOW_RESOURCE = False
MAX_NUM_WORDS = 77

is_colab = utils.is_google_colab()
colab_instruction = "" if is_colab else """
<p>You can skip the queue using Colab: <a href="https://colab.research.google.com/gist/ChenWu98/0aa4fe7be80f6b45d3d055df9f14353a/copy-of-fine-tuned-diffusion-gradio.ipynb"><img data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a></p>"""

torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id_or_path = "CompVis/stable-diffusion-v1-4"
device_print = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶"
device = "cuda" if torch.cuda.is_available() else "cpu"

if is_colab:
    scheduler = DDIMScheduler.from_config(model_id_or_path, subfolder="scheduler")
    pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler, torch_dtype=torch_dtype)
else:
    # import streamlit as st
    # scheduler = DDIMScheduler.from_config(model_id_or_path, use_auth_token=st.secrets["USER_TOKEN"], subfolder="scheduler")
    # pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, use_auth_token=st.secrets["USER_TOKEN"], scheduler=scheduler, torch_dtype=torch_dtype)
    scheduler = DDIMScheduler.from_config(model_id_or_path, use_auth_token=os.environ.get("USER_TOKEN"), subfolder="scheduler")
    pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, use_auth_token=os.environ.get("USER_TOKEN"), scheduler=scheduler, torch_dtype=torch_dtype)
tokenizer = pipe.tokenizer

if torch.cuda.is_available():
    pipe = pipe.to("cuda")


class LocalBlend:

    def __call__(self, x_t, attention_store):
        k = 1
        maps = attention_store["down_cross"][2:4] + attention_store["up_cross"][:3]
        maps = [item.reshape(self.alpha_layers.shape[0], -1, 1, 16, 16, MAX_NUM_WORDS) for item in maps]
        maps = torch.cat(maps, dim=1)
        maps = (maps * self.alpha_layers).sum(-1).mean(1)
        mask = nnf.max_pool2d(maps, (k * 2 + 1, k * 2 + 1), (1, 1), padding=(k, k))
        mask = nnf.interpolate(mask, size=(x_t.shape[2:]))
        mask = mask / mask.max(2, keepdims=True)[0].max(3, keepdims=True)[0]
        mask = mask.gt(self.threshold)
        mask = (mask[:1] + mask[1:]).to(x_t.dtype)
        x_t = x_t[:1] + mask * (x_t - x_t[:1])
        return x_t

    def __init__(self, prompts: List[str], words: List[List[str]], threshold=.3):
        alpha_layers = torch.zeros(len(prompts), 1, 1, 1, 1, MAX_NUM_WORDS)
        for i, (prompt, words_) in enumerate(zip(prompts, words)):
            if type(words_) is str:
                words_ = [words_]
            for word in words_:
                ind = ptp_utils.get_word_inds(prompt, word, tokenizer)
                alpha_layers[i, :, :, :, :, ind] = 1
        self.alpha_layers = alpha_layers.to(device).to(torch_dtype)
        self.threshold = threshold


class AttentionControl(abc.ABC):

    def step_callback(self, x_t):
        return x_t

    def between_steps(self):
        return

    @property
    def num_uncond_att_layers(self):
        return self.num_att_layers if LOW_RESOURCE else 0

    @abc.abstractmethod
    def forward(self, attn, is_cross: bool, place_in_unet: str):
        raise NotImplementedError

    def __call__(self, attn, is_cross: bool, place_in_unet: str):
        if self.cur_att_layer >= self.num_uncond_att_layers:
            if LOW_RESOURCE:
                attn = self.forward(attn, is_cross, place_in_unet)
            else:
                h = attn.shape[0]
                attn[h // 2:] = self.forward(attn[h // 2:], is_cross, place_in_unet)
        self.cur_att_layer += 1
        if self.cur_att_layer == self.num_att_layers + self.num_uncond_att_layers:
            self.cur_att_layer = 0
            self.cur_step += 1
            self.between_steps()
        return attn

    def reset(self):
        self.cur_step = 0
        self.cur_att_layer = 0

    def __init__(self):
        self.cur_step = 0
        self.num_att_layers = -1
        self.cur_att_layer = 0


class EmptyControl(AttentionControl):

    def forward(self, attn, is_cross: bool, place_in_unet: str):
        return attn


class AttentionStore(AttentionControl):

    @staticmethod
    def get_empty_store():
        return {"down_cross": [], "mid_cross": [], "up_cross": [],
                "down_self": [], "mid_self": [], "up_self": []}

    def forward(self, attn, is_cross: bool, place_in_unet: str):
        key = f"{place_in_unet}_{'cross' if is_cross else 'self'}"
        if attn.shape[1] <= 32 ** 2:  # avoid memory overhead
            self.step_store[key].append(attn)
        return attn

    def between_steps(self):
        if len(self.attention_store) == 0:
            self.attention_store = self.step_store
        else:
            for key in self.attention_store:
                for i in range(len(self.attention_store[key])):
                    self.attention_store[key][i] += self.step_store[key][i]
        self.step_store = self.get_empty_store()

    def get_average_attention(self):
        average_attention = {key: [item / self.cur_step for item in self.attention_store[key]] for key in self.attention_store}
        return average_attention

    def reset(self):
        super(AttentionStore, self).reset()
        self.step_store = self.get_empty_store()
        self.attention_store = {}

    def __init__(self):
        super(AttentionStore, self).__init__()
        self.step_store = self.get_empty_store()
        self.attention_store = {}


class AttentionControlEdit(AttentionStore, abc.ABC):

    def step_callback(self, x_t):
        if self.local_blend is not None:
            x_t = self.local_blend(x_t, self.attention_store)
        return x_t

    def replace_self_attention(self, attn_base, att_replace):
        if att_replace.shape[2] <= 16 ** 2:
            return attn_base.unsqueeze(0).expand(att_replace.shape[0], *attn_base.shape)
        else:
            return att_replace

    @abc.abstractmethod
    def replace_cross_attention(self, attn_base, att_replace):
        raise NotImplementedError

    def forward(self, attn, is_cross: bool, place_in_unet: str):
        super(AttentionControlEdit, self).forward(attn, is_cross, place_in_unet)
        if is_cross or (self.num_self_replace[0] <= self.cur_step < self.num_self_replace[1]):
            h = attn.shape[0] // self.batch_size
            attn = attn.reshape(self.batch_size, h, *attn.shape[1:])
            attn_base, attn_replace = attn[0], attn[1:]
            if is_cross:
                alpha_words = self.cross_replace_alpha[self.cur_step]
                attn_replace_new = self.replace_cross_attention(attn_base, attn_replace) * alpha_words + (1 - alpha_words) * attn_replace
                attn[1:] = attn_replace_new
            else:
                attn[1:] = self.replace_self_attention(attn_base, attn_replace)
            attn = attn.reshape(self.batch_size * h, *attn.shape[2:])
        return attn

    def __init__(self, prompts, num_steps: int,
                 cross_replace_steps: Union[float, Tuple[float, float], Dict[str, Tuple[float, float]]],
                 self_replace_steps: Union[float, Tuple[float, float]],
                 local_blend: Optional[LocalBlend]):
        super(AttentionControlEdit, self).__init__()
        self.batch_size = len(prompts)
        self.cross_replace_alpha = ptp_utils.get_time_words_attention_alpha(prompts, num_steps, cross_replace_steps, tokenizer).to(device).to(torch_dtype)
        if type(self_replace_steps) is float:
            self_replace_steps = 0, self_replace_steps
        self.num_self_replace = int(num_steps * self_replace_steps[0]), int(num_steps * self_replace_steps[1])
        self.local_blend = local_blend


class AttentionReplace(AttentionControlEdit):

    def replace_cross_attention(self, attn_base, att_replace):
        return torch.einsum('hpw,bwn->bhpn', attn_base, self.mapper)

    def __init__(self, prompts, num_steps: int, cross_replace_steps: float, self_replace_steps: float,
                 local_blend: Optional[LocalBlend] = None):
        super(AttentionReplace, self).__init__(prompts, num_steps, cross_replace_steps, self_replace_steps, local_blend)
        self.mapper = seq_aligner.get_replacement_mapper(prompts, tokenizer).to(device).to(torch_dtype)


class AttentionRefine(AttentionControlEdit):

    def replace_cross_attention(self, attn_base, att_replace):
        attn_base_replace = attn_base[:, :, self.mapper].permute(2, 0, 1, 3)
        attn_replace = attn_base_replace * self.alphas + att_replace * (1 - self.alphas)
        return attn_replace

    def __init__(self, prompts, num_steps: int, cross_replace_steps: float, self_replace_steps: float,
                 local_blend: Optional[LocalBlend] = None):
        super(AttentionRefine, self).__init__(prompts, num_steps, cross_replace_steps, self_replace_steps, local_blend)
        self.mapper, alphas = seq_aligner.get_refinement_mapper(prompts, tokenizer)
        self.mapper, alphas = self.mapper.to(device), alphas.to(device).to(torch_dtype)
        self.alphas = alphas.reshape(alphas.shape[0], 1, 1, alphas.shape[1])


def get_equalizer(text: str, word_select: Union[int, Tuple[int, ...]], values: Union[List[float], Tuple[float, ...]]):
    if type(word_select) is int or type(word_select) is str:
        word_select = (word_select,)
    equalizer = torch.ones(len(values), 77)
    values = torch.tensor(values, dtype=torch_dtype)
    for word in word_select:
        inds = ptp_utils.get_word_inds(text, word, tokenizer)
        equalizer[:, inds] = values
    return equalizer


def inference(source_prompt, target_prompt, source_guidance_scale=1, guidance_scale=5, num_inference_steps=100,
              width=512, height=512, seed=0, img=None, strength=0.7,
              cross_attention_control="None", cross_replace_steps=0.8, self_replace_steps=0.4):

    torch.manual_seed(seed)

    ratio = min(height / img.height, width / img.width)
    img = img.resize((int(img.width * ratio), int(img.height * ratio)))

    # create the CAC controller.
    if cross_attention_control == "Replace":
        controller = AttentionReplace([source_prompt, target_prompt],
                                      num_inference_steps,
                                      cross_replace_steps=cross_replace_steps,
                                      self_replace_steps=self_replace_steps,
                                      )
        ptp_utils.register_attention_control(pipe, controller)
    elif cross_attention_control == "Refine":
        controller = AttentionRefine([source_prompt, target_prompt],
                                     num_inference_steps,
                                     cross_replace_steps=cross_replace_steps,
                                     self_replace_steps=self_replace_steps,
                                     )
        ptp_utils.register_attention_control(pipe, controller)
    elif cross_attention_control == "None":
        controller = EmptyControl()
        ptp_utils.register_attention_control(pipe, controller)
    else:
        raise ValueError("Unknown cross_attention_control: {}".format(cross_attention_control))

    results = pipe(prompt=target_prompt,
                   source_prompt=source_prompt,
                   init_image=img,
                   num_inference_steps=num_inference_steps,
                   eta=0.1,
                   strength=strength,
                   guidance_scale=guidance_scale,
                   source_guidance_scale=source_guidance_scale,
                   )

    return replace_nsfw_images(results)


def replace_nsfw_images(results):
    for i in range(len(results.images)):
        if results.nsfw_content_detected[i]:
            results.images[i] = Image.open("nsfw.png")
    return results.images[0]


css = """.cycle-diffusion-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.cycle-diffusion-div div h1{font-weight:900;margin-bottom:7px}.cycle-diffusion-div p{margin-bottom:10px;font-size:94%}.cycle-diffusion-div p a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}
"""
with gr.Blocks(css=css) as demo:
    gr.HTML(
        f"""
            <div class="cycle-diffusion-div">
              <div>
                <h1>CycleDiffusion with Stable Diffusion</h1>
              </div>
              <p>
                Demo for CycleDiffusion with Stable Diffusion. <br>
                CycleDiffusion (<a href="https://arxiv.org/abs/2210.05559">📄 Paper link</a> | <a href="https://huggingface.co/docs/diffusers/main/en/api/pipelines/cycle_diffusion">🧨 Pipeline doc</a>) is an image-to-image translation method that supports stochastic samplers for diffusion models. <br>
                We also support the combination of CycleDiffusion and Cross Attention Control (CAC | <a href="https://arxiv.org/abs/2208.01626">📄 Paper link</a>). CAC is a technique to transfer the attention map from the source prompt to the target prompt. <br>
              </p>
              <p>
                <b>Quick start</b>: <br>
                1. Click one row of Examples at the end of this page. It will fill all inputs needed. <br>
                2. Click the "Run CycleDiffusion" button. <br>
              </p>
              <p>
                {colab_instruction}
                Running on <b>{device_print}</b>{(" in a <b>Google Colab</b>." if is_colab else "")}
              </p>
            </div>
        """
    )
    with gr.Accordion("See Details", open=False):
        gr.HTML(
            f"""
                <div class="cycle-diffusion-div">
                  <p>
                    <b>How to use:</b> <br>
                    1. Upload an image. <br>
                    2. Enter the source and target prompts. <br>
                    3. Select the source guidance scale (for "encoding") and the target guidance scale (for "decoding"). <br>
                    4. Select the strength (smaller strength means better content preservation). <br>
                    5 (optional). Configure Cross Attention Control options (e.g., CAC type, cross replace steps, self replace steps). <br>
                    6 (optional). Configure other options (e.g., image size, inference steps, random seed). <br>
                    7. Click the "Run CycleDiffusion" button. <br>
                  </p>
                  <p>
                    <b>Notes:</b> <br>
                    1. CycleDiffusion is likely to fail when drastic changes are intended (e.g., changing a large black car to red). <br>
                    2. The value of strength can be set larger when CAC is used. <br>
                    3. If CAC type is "Replace", the source and target prompts should differ in only one token; otherwise, an error will be raised. This is why we deliberately make some grammar mistakes in Examples.<br>
                    4. If CAC type is "Refine", the source prompt should be a subsequence of the target prompt; otherwise, an error will be raised. <br>
                  </p>
                  <p>
                    <b>Runtimes:</b> <br>
                    1. 20s on A10G. <br>
                  </p>
                </div>
            """
        )
    with gr.Row():

        with gr.Column(scale=55):
            with gr.Group():

                img = gr.Image(label="Input image", height=512, tool="editor", type="pil")

                image_out = gr.Image(label="Output image", height=512)
                # gallery = gr.Gallery(
                #     label="Generated images", show_label=False, elem_id="gallery"
                # ).style(grid=[1], height="auto")

        with gr.Column(scale=45):
            with gr.Tab("Edit options"):
                with gr.Group():
                    with gr.Row():
                        source_prompt = gr.Textbox(label="Source prompt", placeholder="Source prompt describes the input image")
                        source_guidance_scale = gr.Slider(label="Source guidance scale", value=1, minimum=1, maximum=10)
                    with gr.Row():
                        target_prompt = gr.Textbox(label="Target prompt", placeholder="Target prompt describes the output image")
                        guidance_scale = gr.Slider(label="Target guidance scale", value=5, minimum=1, maximum=10)
                    with gr.Row():
                        strength = gr.Slider(label="Strength", value=0.7, minimum=0.5, maximum=1, step=0.01)
                    with gr.Row():
                        generate1 = gr.Button(value="Run CycleDiffusion")

            with gr.Tab("CAC options"):
                with gr.Group():
                    with gr.Row():
                        cross_attention_control = gr.Radio(label="CAC type", choices=["None", "Replace", "Refine"], value="None")
                    with gr.Row():
                        # If not "None", the following two parameters will be used.
                        cross_replace_steps = gr.Slider(label="Cross replace steps", value=0.8, minimum=0.0, maximum=1, step=0.01)
                        self_replace_steps = gr.Slider(label="Self replace steps", value=0.4, minimum=0.0, maximum=1, step=0.01)
                    with gr.Row():
                        generate2 = gr.Button(value="Run CycleDiffusion")

            with gr.Tab("Other options"):
                with gr.Group():
                    with gr.Row():
                        num_inference_steps = gr.Slider(label="Inference steps", value=100, minimum=25, maximum=500, step=1)
                        width = gr.Slider(label="Width", value=512, minimum=512, maximum=1024, step=8)
                        height = gr.Slider(label="Height", value=512, minimum=512, maximum=1024, step=8)

                    with gr.Row():
                        seed = gr.Slider(0, 2147483647, label='Seed', value=0, step=1)
                    with gr.Row():
                        generate3 = gr.Button(value="Run CycleDiffusion")

    inputs = [source_prompt, target_prompt, source_guidance_scale, guidance_scale, num_inference_steps,
              width, height, seed, img, strength,
              cross_attention_control, cross_replace_steps, self_replace_steps]
    generate1.click(inference, inputs=inputs, outputs=image_out)
    generate2.click(inference, inputs=inputs, outputs=image_out)
    generate3.click(inference, inputs=inputs, outputs=image_out)

    ex = gr.Examples(
        [
            ["An astronaut riding a horse", "An astronaut riding an elephant", 1, 2, 100, 512, 512, 0, "images/astronaut_horse.png", 0.8, "None", 0, 0],
            ["An astronaut riding a horse", "An astronaut riding a elephant", 1, 2, 100, 512, 512, 0, "images/astronaut_horse.png", 0.9, "Replace", 0.15, 0.10],
            ["A black colored car.", "A blue colored car.", 1, 3, 100, 512, 512, 0, "images/black_car.png", 0.85, "None", 0, 0],
            ["A black colored car.", "A blue colored car.", 1, 5, 100, 512, 512, 0, "images/black_car.png", 0.95, "Replace", 0.8, 0.4],
            ["A black colored car.", "A red colored car.", 1, 5, 100, 512, 512, 0, "images/black_car.png", 1, "Replace", 0.8, 0.4],
            ["An aerial view of autumn scene.", "An aerial view of winter scene.", 1, 5, 100, 512, 512, 0, "images/mausoleum.png", 0.9, "None", 0, 0],
            ["An aerial view of autumn scene.", "An aerial view of winter scene.", 1, 5, 100, 512, 512, 0, "images/mausoleum.png", 1, "Replace", 0.8, 0.4],
            ["A green apple and a black backpack on the floor.", "A red apple and a black backpack on the floor.", 1, 7, 100, 512, 512, 0, "images/apple_bag.png", 0.9, "None", 0, 0],
            ["A green apple and a black backpack on the floor.", "A red apple and a black backpack on the floor.", 1, 7, 100, 512, 512, 0, "images/apple_bag.png", 0.9, "Replace", 0.8, 0.4],
            ["A hotel room with red flowers on the bed.", "A hotel room with a cat sitting on the bed.", 1, 4, 100, 512, 512, 0, "images/flower_hotel.png", 0.8, "None", 0, 0],
            ["A hotel room with red flowers on the bed.", "A hotel room with blue flowers on the bed.", 1, 5, 100, 512, 512, 0, "images/flower_hotel.png", 0.95, "None", 0, 0],
            ["A green apple and a black backpack on the floor.", "Two green apples and a black backpack on the floor.", 1, 5, 100, 512, 512, 0, "images/apple_bag.png", 0.89, "None", 0, 0],
        ],
        [source_prompt, target_prompt, source_guidance_scale, guidance_scale, num_inference_steps,
         width, height, seed, img, strength,
         cross_attention_control, cross_replace_steps, self_replace_steps],
        image_out, inference, cache_examples=True)

    gr.Markdown('''
      Space built with Diffusers 🧨 by HuggingFace 🤗.
      [![Twitter Follow](https://img.shields.io/twitter/follow/ChenHenryWu?style=social)](https://twitter.com/ChenHenryWu)
      ![visitors](https://visitor-badge.glitch.me/badge?page_id=ChenWu98.CycleDiffusion)
      ''')

if not is_colab:
    demo.queue(concurrency_count=1)
demo.launch(debug=is_colab, share=is_colab)
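For orientation, a minimal scripted sketch of the same pipeline call that inference above performs. It assumes pipe is loaded as in the app, that images/black_car.png exists, and that the argument names match this app's diffusers version (newer releases take image= rather than init_image=):

from PIL import Image

# Parameters mirror the "A blue colored car" example row above.
src = Image.open("images/black_car.png").convert("RGB").resize((512, 512))
result = pipe(prompt="A blue colored car.",
              source_prompt="A black colored car.",
              init_image=src,
              num_inference_steps=100,
              eta=0.1,
              strength=0.85,
              guidance_scale=3,
              source_guidance_scale=1)
result.images[0].save("edited_car.png")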
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/resolver.py
DELETED
@@ -1,160 +0,0 @@
import asyncio
import socket
from typing import Any, Dict, List, Optional, Type, Union

from .abc import AbstractResolver
from .helpers import get_running_loop

__all__ = ("ThreadedResolver", "AsyncResolver", "DefaultResolver")

try:
    import aiodns

    # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')
except ImportError:  # pragma: no cover
    aiodns = None

aiodns_default = False


class ThreadedResolver(AbstractResolver):
    """Threaded resolver.

    Uses an Executor for synchronous getaddrinfo() calls.
    concurrent.futures.ThreadPoolExecutor is used by default.
    """

    def __init__(self, loop: Optional[asyncio.AbstractEventLoop] = None) -> None:
        self._loop = get_running_loop(loop)

    async def resolve(
        self, hostname: str, port: int = 0, family: int = socket.AF_INET
    ) -> List[Dict[str, Any]]:
        infos = await self._loop.getaddrinfo(
            hostname,
            port,
            type=socket.SOCK_STREAM,
            family=family,
            flags=socket.AI_ADDRCONFIG,
        )

        hosts = []
        for family, _, proto, _, address in infos:
            if family == socket.AF_INET6:
                if len(address) < 3:
                    # IPv6 is not supported by Python build,
                    # or IPv6 is not enabled in the host
                    continue
                if address[3]:  # type: ignore[misc]
                    # This is essential for link-local IPv6 addresses.
                    # LL IPv6 is a VERY rare case. Strictly speaking, we should use
                    # getnameinfo() unconditionally, but performance makes sense.
                    host, _port = socket.getnameinfo(
                        address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV
                    )
                    port = int(_port)
                else:
                    host, port = address[:2]
            else:  # IPv4
                assert family == socket.AF_INET
                host, port = address  # type: ignore[misc]
            hosts.append(
                {
                    "hostname": hostname,
                    "host": host,
                    "port": port,
                    "family": family,
                    "proto": proto,
                    "flags": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
                }
            )

        return hosts

    async def close(self) -> None:
        pass


class AsyncResolver(AbstractResolver):
    """Use the `aiodns` package to make asynchronous DNS lookups"""

    def __init__(
        self,
        loop: Optional[asyncio.AbstractEventLoop] = None,
        *args: Any,
        **kwargs: Any
    ) -> None:
        if aiodns is None:
            raise RuntimeError("Resolver requires aiodns library")

        self._loop = get_running_loop(loop)
        self._resolver = aiodns.DNSResolver(*args, loop=loop, **kwargs)

        if not hasattr(self._resolver, "gethostbyname"):
            # aiodns 1.1 is not available, fallback to DNSResolver.query
            self.resolve = self._resolve_with_query  # type: ignore

    async def resolve(
        self, host: str, port: int = 0, family: int = socket.AF_INET
    ) -> List[Dict[str, Any]]:
        try:
            resp = await self._resolver.gethostbyname(host, family)
        except aiodns.error.DNSError as exc:
            msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
            raise OSError(msg) from exc
        hosts = []
        for address in resp.addresses:
            hosts.append(
                {
                    "hostname": host,
                    "host": address,
                    "port": port,
                    "family": family,
                    "proto": 0,
                    "flags": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
                }
            )

        if not hosts:
            raise OSError("DNS lookup failed")

        return hosts

    async def _resolve_with_query(
        self, host: str, port: int = 0, family: int = socket.AF_INET
    ) -> List[Dict[str, Any]]:
        if family == socket.AF_INET6:
            qtype = "AAAA"
        else:
            qtype = "A"

        try:
            resp = await self._resolver.query(host, qtype)
        except aiodns.error.DNSError as exc:
            msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
            raise OSError(msg) from exc

        hosts = []
        for rr in resp:
            hosts.append(
                {
                    "hostname": host,
                    "host": rr.host,
                    "port": port,
                    "family": family,
                    "proto": 0,
                    "flags": socket.AI_NUMERICHOST,
                }
            )

        if not hosts:
            raise OSError("DNS lookup failed")

        return hosts

    async def close(self) -> None:
        self._resolver.cancel()


_DefaultType = Type[Union[AsyncResolver, ThreadedResolver]]
DefaultResolver: _DefaultType = AsyncResolver if aiodns_default else ThreadedResolver
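A minimal sketch of driving the threaded resolver above directly; the hostname is an illustrative placeholder:

import asyncio

async def main():
    resolver = ThreadedResolver()
    # resolve() returns a list of dicts with host/port/family/proto/flags
    # keys, as built in ThreadedResolver.resolve above.
    for entry in await resolver.resolve("example.com", port=443):
        print(entry["host"], entry["port"], entry["family"])
    await resolver.close()

asyncio.run(main())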
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_exceptions.py
DELETED
@@ -1,94 +0,0 @@
from __future__ import annotations

from traceback import format_exception


class BrokenResourceError(Exception):
    """
    Raised when trying to use a resource that has been rendered unusable due to external causes
    (e.g. a send stream whose peer has disconnected).
    """


class BrokenWorkerProcess(Exception):
    """
    Raised by :func:`run_sync_in_process` if the worker process terminates abruptly or otherwise
    misbehaves.
    """


class BusyResourceError(Exception):
    """Raised when two tasks are trying to read from or write to the same resource concurrently."""

    def __init__(self, action: str):
        super().__init__(f"Another task is already {action} this resource")


class ClosedResourceError(Exception):
    """Raised when trying to use a resource that has been closed."""


class DelimiterNotFound(Exception):
    """
    Raised during :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_until` if the
    maximum number of bytes has been read without the delimiter being found.
    """

    def __init__(self, max_bytes: int) -> None:
        super().__init__(
            f"The delimiter was not found among the first {max_bytes} bytes"
        )


class EndOfStream(Exception):
    """Raised when trying to read from a stream that has been closed from the other end."""


class ExceptionGroup(BaseException):
    """
    Raised when multiple exceptions have been raised in a task group.

    :var ~typing.Sequence[BaseException] exceptions: the sequence of exceptions raised together
    """

    SEPARATOR = "----------------------------\n"

    exceptions: list[BaseException]

    def __str__(self) -> str:
        tracebacks = [
            "".join(format_exception(type(exc), exc, exc.__traceback__))
            for exc in self.exceptions
        ]
        return (
            f"{len(self.exceptions)} exceptions were raised in the task group:\n"
            f"{self.SEPARATOR}{self.SEPARATOR.join(tracebacks)}"
        )

    def __repr__(self) -> str:
        exception_reprs = ", ".join(repr(exc) for exc in self.exceptions)
        return f"<{self.__class__.__name__}: {exception_reprs}>"


class IncompleteRead(Exception):
    """
    Raised during :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_exactly` or
    :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_until` if the
    connection is closed before the requested amount of bytes has been read.
    """

    def __init__(self) -> None:
        super().__init__(
            "The stream was closed before the read operation could be completed"
        )


class TypedAttributeLookupError(LookupError):
    """
    Raised by :meth:`~anyio.TypedAttributeProvider.extra` when the given typed attribute is not
    found and no default value has been given.
    """


class WouldBlock(Exception):
    """Raised by ``X_nowait`` functions if ``X()`` would block."""
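A small sketch of the message formatting that BusyResourceError above produces, purely for illustration:

try:
    raise BusyResourceError("reading from")
except BusyResourceError as exc:
    # __init__ interpolates the action into a fixed message template.
    assert str(exc) == "Another task is already reading from this resource"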
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/unicode.py
DELETED
@@ -1,50 +0,0 @@
def _makeunicodes(f):
    lines = iter(f.readlines())
    unicodes = {}
    for line in lines:
        if not line:
            continue
        num, name = line.split(";")[:2]
        if name[0] == "<":
            continue  # "<control>", etc.
        num = int(num, 16)
        unicodes[num] = name
    return unicodes


class _UnicodeCustom(object):
    def __init__(self, f):
        if isinstance(f, str):
            with open(f) as fd:
                codes = _makeunicodes(fd)
        else:
            codes = _makeunicodes(f)
        self.codes = codes

    def __getitem__(self, charCode):
        try:
            return self.codes[charCode]
        except KeyError:
            return "????"


class _UnicodeBuiltin(object):
    def __getitem__(self, charCode):
        try:
            # use unicodedata backport to python2, if available:
            # https://github.com/mikekap/unicodedata2
            import unicodedata2 as unicodedata
        except ImportError:
            import unicodedata
        try:
            return unicodedata.name(chr(charCode))
        except ValueError:
            return "????"


Unicode = _UnicodeBuiltin()


def setUnicodeData(f):
    global Unicode
    Unicode = _UnicodeCustom(f)
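A quick usage sketch of the module-level Unicode object defined above:

print(Unicode[0x41])      # 'LATIN CAPITAL LETTER A'
print(Unicode[0x10FFFF])  # '????' -- unnamed code points fall back gracefully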
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/json_component.py
DELETED
@@ -1,122 +0,0 @@
"""gr.JSON() component."""

from __future__ import annotations

import json
from typing import Any, Callable, Literal

from gradio_client.documentation import document, set_documentation_group
from gradio_client.serializing import JSONSerializable

from gradio.components.base import IOComponent, _Keywords
from gradio.deprecation import warn_style_method_deprecation
from gradio.events import (
    Changeable,
)

set_documentation_group("component")


@document()
class JSON(Changeable, IOComponent, JSONSerializable):
    """
    Used to display arbitrary JSON output prettily.
    Preprocessing: this component does *not* accept input.
    Postprocessing: expects a {str} filepath to a file containing valid JSON -- or a {list} or {dict} that is valid JSON

    Demos: zip_to_json, blocks_xray
    """

    def __init__(
        self,
        value: str | dict | list | Callable | None = None,
        *,
        label: str | None = None,
        every: float | None = None,
        show_label: bool | None = None,
        container: bool = True,
        scale: int | None = None,
        min_width: int = 160,
        visible: bool = True,
        elem_id: str | None = None,
        elem_classes: list[str] | str | None = None,
        **kwargs,
    ):
        """
        Parameters:
            value: Default value. If callable, the function will be called whenever the app loads to set the initial value of the component.
            label: component name in interface.
            every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
            show_label: if True, will display label.
            container: If True, will place the component in a container - providing some extra padding around the border.
            scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
            min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
            visible: If False, component will be hidden.
            elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
            elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
        """
        IOComponent.__init__(
            self,
            label=label,
            every=every,
            show_label=show_label,
            container=container,
            scale=scale,
            min_width=min_width,
            visible=visible,
            elem_id=elem_id,
            elem_classes=elem_classes,
            value=value,
            **kwargs,
        )

    def get_config(self):
        return {
            "value": self.value,
            **IOComponent.get_config(self),
        }

    @staticmethod
    def update(
        value: Any | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE,
        label: str | None = None,
        show_label: bool | None = None,
        container: bool | None = None,
        scale: int | None = None,
        min_width: int | None = None,
        visible: bool | None = None,
    ):
        updated_config = {
            "label": label,
            "show_label": show_label,
            "container": container,
            "scale": scale,
            "min_width": min_width,
            "visible": visible,
            "value": value,
            "__type__": "update",
        }
        return updated_config

    def postprocess(self, y: dict | list | str | None) -> dict | list | None:
        """
        Parameters:
            y: either a string filepath to a JSON file, or a Python list or dict that can be converted to JSON
        Returns:
            JSON output in Python list or dict format
        """
        if y is None:
            return None
        if isinstance(y, str):
            return json.loads(y)
        else:
            return y

    def style(self, *, container: bool | None = None, **kwargs):
        """
        This method is deprecated. Please set these arguments in the constructor instead.
        """
        warn_style_method_deprecation()
        if container is not None:
            self.container = container
        return self
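A minimal sketch wiring the component above into an interface, assuming a Gradio version compatible with this component; the echo function is invented for illustration:

import gradio as gr

def echo(text):
    # JSON.postprocess accepts a dict/list directly, or a JSON string
    # (a filepath to one, per the docstring) to be parsed.
    return {"you_sent": text}

demo = gr.Interface(fn=echo, inputs="text", outputs=gr.JSON(label="Echoed"))
# demo.launch()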
spaces/Dinoking/Guccio-AI-Designer/netdissect/evalablate.py
DELETED
@@ -1,248 +0,0 @@
import torch, sys, os, argparse, textwrap, numbers, numpy, json, PIL
from torchvision import transforms
from torch.utils.data import TensorDataset
from netdissect.progress import default_progress, post_progress, desc_progress
from netdissect.progress import verbose_progress, print_progress
from netdissect.nethook import edit_layers
from netdissect.zdataset import standard_z_sample
from netdissect.autoeval import autoimport_eval
from netdissect.easydict import EasyDict
from netdissect.modelconfig import create_instrumented_model

help_epilog = '''\
Example:

python -m netdissect.evalablate \
      --segmenter "netdissect.segmenter.UnifiedParsingSegmenter(segsizes=[256], segdiv='quad')" \
      --model "proggan.from_pth_file('models/lsun_models/${SCENE}_lsun.pth')" \
      --outdir dissect/dissectdir \
      --classes mirror coffeetable tree \
      --layers layer4 \
      --size 1000

Output layout:
dissectdir/layer5/ablation/mirror-iqr.json
{ class: "mirror",
  classnum: 43,
  pixel_total: 41342300,
  class_pixels: 1234531,
  layer: "layer5",
  ranking: "mirror-iqr",
  ablation_units: [341, 23, 12, 142, 83, ...]
  ablation_pixels: [143242, 132344, 429931, ...]
}

'''

def main():
    # Training settings
    def strpair(arg):
        p = tuple(arg.split(':'))
        if len(p) == 1:
            p = p + p
        return p

    parser = argparse.ArgumentParser(description='Ablation eval',
            epilog=textwrap.dedent(help_epilog),
            formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument('--model', type=str, default=None,
                        help='constructor for the model to test')
    parser.add_argument('--pthfile', type=str, default=None,
                        help='filename of .pth file for the model')
    parser.add_argument('--outdir', type=str, default='dissect', required=True,
                        help='directory for dissection output')
    parser.add_argument('--layers', type=strpair, nargs='+',
                        help='space-separated list of layer names to edit' +
                             ', in the form layername[:reportedname]')
    parser.add_argument('--classes', type=str, nargs='+',
                        help='space-separated list of class names to ablate')
    parser.add_argument('--metric', type=str, default='iou',
                        help='ordering metric for selecting units')
    parser.add_argument('--unitcount', type=int, default=30,
                        help='number of units to ablate')
    parser.add_argument('--segmenter', type=str,
                        help='directory containing segmentation dataset')
    parser.add_argument('--netname', type=str, default=None,
                        help='name for network in generated reports')
    parser.add_argument('--batch_size', type=int, default=5,
                        help='batch size for forward pass')
    parser.add_argument('--size', type=int, default=200,
                        help='number of images to test')
    parser.add_argument('--no-cuda', action='store_true', default=False,
                        help='disables CUDA usage')
    parser.add_argument('--quiet', action='store_true', default=False,
                        help='silences console output')
    if len(sys.argv) == 1:
        parser.print_usage(sys.stderr)
        sys.exit(1)
    args = parser.parse_args()

    # Set up console output
    verbose_progress(not args.quiet)

    # Speed up pytorch
    torch.backends.cudnn.benchmark = True

    # Set up CUDA
    args.cuda = not args.no_cuda and torch.cuda.is_available()
    if args.cuda:
        torch.backends.cudnn.benchmark = True

    # Take defaults for model constructor etc from dissect.json settings.
    with open(os.path.join(args.outdir, 'dissect.json')) as f:
        dissection = EasyDict(json.load(f))
    if args.model is None:
        args.model = dissection.settings.model
    if args.pthfile is None:
        args.pthfile = dissection.settings.pthfile
    if args.segmenter is None:
        args.segmenter = dissection.settings.segmenter

    # Instantiate generator
    model = create_instrumented_model(args, gen=True, edit=True)
    if model is None:
        print('No model specified')
        sys.exit(1)

    # Instantiate model
    device = next(model.parameters()).device
    input_shape = model.input_shape

    # 4d input if convolutional, 2d input if first layer is linear.
    raw_sample = standard_z_sample(args.size, input_shape[1], seed=2).view(
            (args.size,) + input_shape[1:])
    dataset = TensorDataset(raw_sample)

    # Create the segmenter
    segmenter = autoimport_eval(args.segmenter)

    # Now do the actual work.
    labelnames, catnames = (
                segmenter.get_label_and_category_names(dataset))
    label_category = [catnames.index(c) if c in catnames else 0
            for l, c in labelnames]
    labelnum_from_name = {n[0]: i for i, n in enumerate(labelnames)}

    segloader = torch.utils.data.DataLoader(dataset,
                batch_size=args.batch_size, num_workers=10,
                pin_memory=(device.type == 'cuda'))

    # Index the dissection layers by layer name.
    dissect_layer = {lrec.layer: lrec for lrec in dissection.layers}

    # First, collect a baseline
    for l in model.ablation:
        model.ablation[l] = None

    # For each sort-order, do an ablation
    progress = default_progress()
    for classname in progress(args.classes):
        post_progress(c=classname)
        for layername in progress(model.ablation):
            post_progress(l=layername)
            rankname = '%s-%s' % (classname, args.metric)
            classnum = labelnum_from_name[classname]
            try:
                ranking = next(r for r in dissect_layer[layername].rankings
                        if r.name == rankname)
            except:
                print('%s not found' % rankname)
                sys.exit(1)
            ordering = numpy.argsort(ranking.score)
            # Check if already done
            ablationdir = os.path.join(args.outdir, layername, 'pixablation')
            if os.path.isfile(os.path.join(ablationdir, '%s.json'%rankname)):
                with open(os.path.join(ablationdir, '%s.json'%rankname)) as f:
                    data = EasyDict(json.load(f))
                # If the unit ordering is not the same, something is wrong
                if not all(a == o
                        for a, o in zip(data.ablation_units, ordering)):
                    continue
                if len(data.ablation_effects) >= args.unitcount:
                    continue # file already done.
                measurements = data.ablation_effects
            measurements = measure_ablation(segmenter, segloader,
                    model, classnum, layername, ordering[:args.unitcount])
            measurements = measurements.cpu().numpy().tolist()
            os.makedirs(ablationdir, exist_ok=True)
            with open(os.path.join(ablationdir, '%s.json'%rankname), 'w') as f:
                json.dump(dict(
                    classname=classname,
                    classnum=classnum,
                    baseline=measurements[0],
                    layer=layername,
                    metric=args.metric,
                    ablation_units=ordering.tolist(),
                    ablation_effects=measurements[1:]), f)

def measure_ablation(segmenter, loader, model, classnum, layer, ordering):
    total_bincount = 0
    data_size = 0
    device = next(model.parameters()).device
    progress = default_progress()
    for l in model.ablation:
        model.ablation[l] = None
    feature_units = model.feature_shape[layer][1]
    feature_shape = model.feature_shape[layer][2:]
    repeats = len(ordering)
    total_scores = torch.zeros(repeats + 1)
    for i, batch in enumerate(progress(loader)):
        z_batch = batch[0]
        model.ablation[layer] = None
        tensor_images = model(z_batch.to(device))
        seg = segmenter.segment_batch(tensor_images, downsample=2)
        mask = (seg == classnum).max(1)[0]
        downsampled_seg = torch.nn.functional.adaptive_avg_pool2d(
                mask.float()[:,None,:,:], feature_shape)[:,0,:,:]
        total_scores[0] += downsampled_seg.sum().cpu()
        # Now we need to do an intervention for every location
        # that had a nonzero downsampled_seg, if any.
        interventions_needed = downsampled_seg.nonzero()
        location_count = len(interventions_needed)
        if location_count == 0:
            continue
        interventions_needed = interventions_needed.repeat(repeats, 1)
        inter_z = batch[0][interventions_needed[:,0]].to(device)
        inter_chan = torch.zeros(repeats, location_count, feature_units,
                device=device)
        for j, u in enumerate(ordering):
            inter_chan[j:, :, u] = 1
        inter_chan = inter_chan.view(len(inter_z), feature_units)
        inter_loc = interventions_needed[:,1:]
        scores = torch.zeros(len(inter_z))
        batch_size = len(batch[0])
        for j in range(0, len(inter_z), batch_size):
            ibz = inter_z[j:j+batch_size]
|
216 |
-
ibl = inter_loc[j:j+batch_size].t()
|
217 |
-
imask = torch.zeros((len(ibz),) + feature_shape, device=ibz.device)
|
218 |
-
imask[(torch.arange(len(ibz)),) + tuple(ibl)] = 1
|
219 |
-
ibc = inter_chan[j:j+batch_size]
|
220 |
-
model.ablation[layer] = (
|
221 |
-
imask.float()[:,None,:,:] * ibc[:,:,None,None])
|
222 |
-
tensor_images = model(ibz)
|
223 |
-
seg = segmenter.segment_batch(tensor_images, downsample=2)
|
224 |
-
mask = (seg == classnum).max(1)[0]
|
225 |
-
downsampled_iseg = torch.nn.functional.adaptive_avg_pool2d(
|
226 |
-
mask.float()[:,None,:,:], feature_shape)[:,0,:,:]
|
227 |
-
scores[j:j+batch_size] = downsampled_iseg[
|
228 |
-
(torch.arange(len(ibz)),) + tuple(ibl)]
|
229 |
-
scores = scores.view(repeats, location_count).sum(1)
|
230 |
-
total_scores[1:] += scores
|
231 |
-
return total_scores
|
232 |
-
|
233 |
-
def count_segments(segmenter, loader, model):
|
234 |
-
total_bincount = 0
|
235 |
-
data_size = 0
|
236 |
-
progress = default_progress()
|
237 |
-
for i, batch in enumerate(progress(loader)):
|
238 |
-
tensor_images = model(z_batch.to(device))
|
239 |
-
seg = segmenter.segment_batch(tensor_images, downsample=2)
|
240 |
-
bc = (seg + index[:, None, None, None] * self.num_classes).view(-1
|
241 |
-
).bincount(minlength=z_batch.shape[0] * self.num_classes)
|
242 |
-
data_size += seg.shape[0] * seg.shape[2] * seg.shape[3]
|
243 |
-
total_bincount += batch_label_counts.float().sum(0)
|
244 |
-
normalized_bincount = total_bincount / data_size
|
245 |
-
return normalized_bincount
|
246 |
-
|
247 |
-
if __name__ == '__main__':
|
248 |
-
main()
|
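The subtlest step in measure_ablation above is turning a batch of (image, location) interventions into the (N, C, H, W) tensor assigned to model.ablation[layer]. The following self-contained sketch, with invented shapes (two interventions, 8 feature units, a 4x4 feature map), isolates just that masking arithmetic:

import torch

# Two (y, x) feature-map locations, one per intervention sample.
feature_shape = (4, 4)
locs = torch.tensor([[1, 2], [3, 0]]).t()      # rows become (y-coords, x-coords)
imask = torch.zeros((2,) + feature_shape)
imask[(torch.arange(2),) + tuple(locs)] = 1    # spatial one-hot per sample
chan = torch.zeros(2, 8)                       # 8 feature units
chan[0, :3] = 1                                # sample 0 ablates units 0-2
chan[1, :5] = 1                                # sample 1 ablates units 0-4
ablation = imask[:, None, :, :] * chan[:, :, None, None]
print(ablation.shape)                          # torch.Size([2, 8, 4, 4])

Broadcasting the spatial one-hot against the per-unit indicator marks, per sample, exactly which units are ablated and at which location, and nothing else.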
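The offset-and-bincount idiom in count_segments is equally easy to misread. This self-contained sketch with invented labels (num_classes=4, two 2x2 "images") shows how a single bincount tallies per-image, per-class pixel counts in one pass:

import torch

num_classes = 4
seg = torch.tensor([[[[0, 1], [1, 3]]],    # image 0: labels 0, 1, 1, 3
                    [[[2, 2], [0, 0]]]])   # image 1: labels 2, 2, 0, 0
index = torch.arange(seg.shape[0])
# Shift image i's labels into the range [i*num_classes, (i+1)*num_classes).
bc = (seg + index[:, None, None, None] * num_classes).view(-1).bincount(
    minlength=seg.shape[0] * num_classes)
print(bc.view(seg.shape[0], num_classes))
# tensor([[1, 2, 0, 1],
#         [2, 0, 2, 0]])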
spaces/DragGan/DragGan-Inversion/PTI/models/e4e/stylegan2/op/__init__.py
DELETED
@@ -1,2 +0,0 @@
from .fused_act import FusedLeakyReLU, fused_leaky_relu
from .upfirdn2d import upfirdn2d
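These two re-exports are the whole public surface of the op package. A hypothetical consumer is sketched below; the import path and shapes are illustrative only, and depending on the repo variant these ops may require the compiled CUDA extensions:

import torch
from op import FusedLeakyReLU, upfirdn2d   # hypothetical path to this package

x = torch.randn(1, 8, 16, 16)
act = FusedLeakyReLU(8)                    # learned bias + leaky ReLU + gain
y = act(x)
blur = torch.ones(3, 3) / 9.0              # simple 3x3 box kernel
z = upfirdn2d(y, blur, up=1, down=2, pad=(1, 1))  # filter, then 2x downsample
print(z.shape)                             # expect torch.Size([1, 8, 8, 8])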
spaces/DragGan/DragGan-Inversion/PTI/torch_utils/ops/bias_act.h
DELETED
@@ -1,38 +0,0 @@
// Copyright (c) 2021, NVIDIA CORPORATION.  All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto.  Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.

//------------------------------------------------------------------------
// CUDA kernel parameters.

struct bias_act_kernel_params
{
    const void* x;      // [sizeX]
    const void* b;      // [sizeB] or NULL
    const void* xref;   // [sizeX] or NULL
    const void* yref;   // [sizeX] or NULL
    const void* dy;     // [sizeX] or NULL
    void*       y;      // [sizeX]

    int   grad;
    int   act;
    float alpha;
    float gain;
    float clamp;

    int sizeX;
    int sizeB;
    int stepB;
    int loopX;
};

//------------------------------------------------------------------------
// CUDA kernel selection.

template <class T> void* choose_bias_act_kernel(const bias_act_kernel_params& p);

//------------------------------------------------------------------------
spaces/DragGan/DragGan/stylegan_human/pti/pti_configs/__init__.py
DELETED
File without changes
spaces/DylanYan/WizardLM-WizardCoder-Python-34B-V1.0/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: WizardLM WizardCoder Python 34B V1.0
emoji: 🌖
colorFrom: green
colorTo: purple
sdk: gradio
sdk_version: 3.41.2
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Falah/stablediffusionDB/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: StablediffusionDB
emoji: 🏢
colorFrom: purple
colorTo: purple
sdk: gradio
sdk_version: 3.33.1
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Fr33d0m21/Music_Splitter/app.py
DELETED
@@ -1,26 +0,0 @@
import os
import gradio as gr
from scipy.io.wavfile import write


def inference(audio):
    os.makedirs("out", exist_ok=True)
    write('test.wav', audio[0], audio[1])
    os.system("python3 -m demucs.separate -n mdx_extra_q -d cpu test.wav -o out")
    return "./out/mdx_extra_q/test/vocals.wav", "./out/mdx_extra_q/test/bass.wav", \
        "./out/mdx_extra_q/test/drums.wav", "./out/mdx_extra_q/test/other.wav"


title = "Demucs"
description = "Gradio demo for Demucs: Music Source Separation in the Waveform Domain. To use it, simply upload your audio, or click one of the examples to load them. Read more at the links below."
article = "<p style='text-align: center'><a href='https://arxiv.org/abs/1911.13254' target='_blank'>Music Source Separation in the Waveform Domain</a> | <a href='https://github.com/facebookresearch/demucs' target='_blank'>Github Repo</a></p>"

examples = [['test.mp3']]
gr.Interface(
    inference,
    gr.inputs.Audio(type="numpy", label="Input"),
    [gr.outputs.Audio(type="filepath", label="Vocals"),
     gr.outputs.Audio(type="filepath", label="Bass"),
     gr.outputs.Audio(type="filepath", label="Drums"),
     gr.outputs.Audio(type="filepath", label="Other")],
    title=title,
    description=description,
    article=article,
    examples=examples
).launch(enable_queue=True)
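Because gradio's numpy Audio input hands inference a plain (sample_rate, samples) tuple, the function can be smoke-tested locally without the UI. A hypothetical check (assumes demucs is installed; writes files under ./out):

import numpy as np

rate = 44100
t = np.arange(rate) / rate
samples = (np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)  # 1 s, 440 Hz
stems = inference((rate, samples))
print(stems)  # paths to the separated vocals/bass/drums/other wav files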
spaces/FridaZuley/RVC_HFKawaii/lib/infer_pack/modules/F0Predictor/__init__.py
DELETED
File without changes
spaces/GT4SD/regression_transformer/model_cards/regression_transformer_description.md
DELETED
@@ -1,13 +0,0 @@
<img align="right" src="https://raw.githubusercontent.com/GT4SD/gt4sd-core/main/docs/_static/gt4sd_logo.png" alt="logo" width="120" >

### Concurrent sequence regression and generation for molecular language modeling

The [Regression Transformer](https://www.nature.com/articles/s42256-023-00639-z) is a multitask Transformer that reformulates regression as a conditional sequence modeling task.
This yields a dichotomous language model that seamlessly integrates property prediction with property-driven conditional generation. For details see the [*Nature Machine Intelligence* paper](https://www.nature.com/articles/s42256-023-00639-z), the [development code](https://github.com/IBM/regression-transformer) and the [GT4SD endpoint](https://github.com/GT4SD/gt4sd-core) for inference.

Each `algorithm_version` refers to one trained model. Each model can be used for **two tasks**, either to *predict* one (or multiple) properties of a molecule or to *generate* a molecule (given a seed molecule and a property constraint).

For **examples** and **documentation** of the model parameters, please see below.
Moreover, we provide a **model card** ([Mitchell et al. (2019)](https://dl.acm.org/doi/abs/10.1145/3287560.3287596?casa_token=XD4eHiE2cRUAAAAA:NL11gMa1hGPOUKTAbtXnbVQBDBbjxwcjGECF_i-WC_3g1aBgU1Hbz_f2b4kI_m1in-w__1ztGeHnwHs)) at the bottom of this page.
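A usage sketch may help orient readers of this card. The class names below follow the GT4SD documentation, but the exact module path, parameter names and masked-target syntax are assumptions that can differ across gt4sd versions:

# Hypothetical GT4SD call; verify imports and parameters against your
# installed gt4sd version before relying on this sketch.
from gt4sd.algorithms.conditional_generation.regression_transformer import (
    RegressionTransformer, RegressionTransformerMolecules)

# Property prediction: mask the property tokens of the seed sequence.
config = RegressionTransformerMolecules(
    algorithm_version='solubility', search='greedy')
target = '<esol>[MASK][MASK][MASK][MASK][MASK]|[C][O][C]'  # assumed syntax
algorithm = RegressionTransformer(configuration=config, target=target)
print(list(algorithm.sample(1)))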