parquet-converter committed
Commit 81f8b3b · 1 Parent(s): 5a7c1cc

Update parquet files (step 39 of 476)

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Internet Download Manager with Crack.rar How to Install and Use IDM with Crack.md +0 -151
  2. spaces/1gistliPinn/ChatGPT4/Examples/Carmen Serban Cu El Numai Cu El Zippy Flight Floyd Aerea Unlimited.md +0 -6
  3. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Criminal Case Save the World! Mod APK - The Ultimate Adventure Game for Crime Lovers.md +0 -115
  4. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download !NEW! 1o5 Version Please Open Via Salary.md +0 -49
  5. spaces/1phancelerku/anime-remove-background/CarX Street v0.8.6 Mod Apk The Ultimate Street Racing Game with Unlimited Cash.md +0 -107
  6. spaces/1phancelerku/anime-remove-background/Enjoy PUBG MOBILE on PC Mac with BlueStacks Emulator.md +0 -205
  7. spaces/AI-Dashboards/ScrabbleSolverWordThesaurus/backupapp.py +0 -35
  8. spaces/AP123/ai-avatars/train_dreambooth.py +0 -881
  9. spaces/Abdullah-Habib/Rabbit_or_Hare/README.md +0 -13
  10. spaces/AgentVerse/agentVerse/agentverse/agents/simulation_agent/reflection.py +0 -227
  11. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Rings.js +0 -38
  12. spaces/Aloento/9Nine-PITS/README.md +0 -13
  13. spaces/Ameaou/academic-chatgpt3.1/crazy_functions/批量总结PDF文档.py +0 -166
  14. spaces/Amon1/ChatGPTForAcadamic/crazy_functions/读文章写摘要.py +0 -70
  15. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stochastic_karras_ve.md +0 -33
  16. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/optimization/open_vino.md +0 -108
  17. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky_v22/test_kandinsky_inpaint.py +0 -295
  18. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_instruction_pix2pix.py +0 -385
  19. spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/shared_heads/res_layer.py +0 -77
  20. spaces/Andy1621/uniformer_image_detection/mmdet/utils/profiling.py +0 -39
  21. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/testing.py +0 -140
  22. spaces/Anthony7906/MengHuiMXD_GPT/modules/shared.py +0 -55
  23. spaces/Apk/anything-v3.0/utils.py +0 -6
  24. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/req/req_set.py +0 -82
  25. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/markers.py +0 -152
  26. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/abc.py +0 -137
  27. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/actions.py +0 -207
  28. spaces/Awesimo/jojogan/e4e/models/encoders/psp_encoders.py +0 -200
  29. spaces/BAAI/AltDiffusion/js/index.js +0 -186
  30. spaces/Benson/text-generation/Examples/Cmo Descargar Y Jugar Entre Nosotros En El PC.md +0 -101
  31. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distro/__main__.py +0 -4
  32. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/pangomarkup.py +0 -83
  33. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/style.py +0 -796
  34. spaces/BraydenMoore/MARCI-NFL-Betting/Source/Test/xgboost_ML.py +0 -59
  35. spaces/BrianL/CoE197-Fil-DialectTranslator/app.py +0 -36
  36. spaces/CVPR/LIVE/thrust/thrust/binary_search.h +0 -1902
  37. spaces/ClearLove443/Robby-chatbot/modules/utils.py +0 -105
  38. spaces/Cong723/gpt-academic-public/crazy_functions/总结word文档.py +0 -127
  39. spaces/Curranj/FlowerDiffusion/app.py +0 -72
  40. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/BufrStubImagePlugin.py +0 -73
  41. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/subset/__main__.py +0 -6
  42. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Model3D-98fc2b2c.css +0 -1
  43. spaces/DaleChen/AutoGPT/autogpt/commands/web_playwright.py +0 -80
  44. spaces/DamianMH/Mlove/Dockerfile +0 -21
  45. spaces/Datasculptor/DescriptionGPT/detic/modeling/meta_arch/d2_deformable_detr.py +0 -308
  46. spaces/Datasculptor/StyleGAN-NADA/e4e/options/__init__.py +0 -0
  47. spaces/Dinoking/Guccio-AI-Designer/decomposition.py +0 -402
  48. spaces/Disguised/anime_character_recognizer/app.py +0 -20
  49. spaces/Dorado607/ChuanhuChatGPT/modules/config.py +0 -269
  50. spaces/Drac77/hakurei-waifu-diffusion/app.py +0 -3
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Download Internet Download Manager with Crack.rar How to Install and Use IDM with Crack.md DELETED
@@ -1,151 +0,0 @@
- <br />
- <h1>Sri Lalitha Sahasranamam Lyrics In Tamil Pdf Downloadl</h1>
- <p>If you are a devotee of Goddess Lalitha, the Divine Mother, you might be interested in downloading Sri Lalitha Sahasranamam lyrics in Tamil pdf. Sri Lalitha Sahasranamam is a sacred Hindu text that contains the thousand names of Goddess Lalitha, who is also known as Lalita Devi, Tripura Sundari, Shodashi, Rajarajeshwari, and many other names. In this article, we will tell you what is Sri Lalitha Sahasranamam, how to download it in Tamil pdf format, and how to chant it for maximum benefits.</p>
- <h2>What is Sri Lalitha Sahasranamam?</h2>
- <p>Sri Lalitha Sahasranamam is a part of the Brahmanda Purana, one of the 18 major Puranas in Hinduism. It is a hymn that praises Goddess Lalitha as the supreme power and creator of the universe. It describes her various attributes, qualities, forms, manifestations, and deeds. It also reveals her secret names that can grant various boons and blessings to her devotees.</p>
- <h2>Sri Lalitha Sahasranamam Lyrics In Tamil Pdf Downloadl</h2><br /><p><b><b>Download Zip</b> &#10145; <a href="https://byltly.com/2uKyK6">https://byltly.com/2uKyK6</a></b></p><br /><br />
- <h3>The origin and meaning of Sri Lalitha Sahasranamam</h3>
- <p>According to the legend, Sri Lalitha Sahasranamam was revealed by Lord Hayagriva, an incarnation of Lord Vishnu, to Sage Agastya, one of the seven great sages in Hinduism. Lord Hayagriva told Sage Agastya the story of how Goddess Lalitha incarnated as the daughter of Himalaya, the king of mountains, and married Lord Shiva, the destroyer of evil. He also narrated how she fought and killed a powerful demon named Bhandasura, who was created from the ashes of Kamadeva, the god of love. He then taught him the thousand names of Goddess Lalitha that can please her and invoke her grace.</p>
- <p>The meaning of Sri Lalitha Sahasranamam is "the thousand names of Sri Lalitha". The word "Sri" means auspiciousness, wealth, beauty, grace, and respect. The word "Lalitha" means playful, charming, delightful, graceful, and lovely. The word "Sahasranama" means thousand names. Each name of Goddess Lalitha has a deep meaning and significance that reflects her various aspects and powers. Some of her names are:</p>
- <ul>
- <li>Srimata: The mother of all</li>
- <li>Sri Maharajni: The great queen</li>
- <li>Srimat Simhasaneswari: The one who sits on the lion throne</li>
- <li>Chidagni Kunda Sambhuta: The one who emerged from the fire of consciousness</li>
- <li>Deva Karya Samudyata: The one who is ready for the divine work</li>
- <li>Udyad Bhanu Sahasrabha: The one who shines like a thousand rising suns</li>
- <li>Raga Swarupa Pasa Dhyaa: The one who holds the rope of attachment</li>
- <li>Krodha Kara Ankusa Jwala: The one who holds the goad of anger</li>
- <li>Mano Rupa Eksu Kodanda: The one who holds the bow of mind</li>
- <li>Pancha Tanmatra Sayaka: The one who holds the arrows of five senses</li>
- <li>Nija Runa Prabha Pura Majjat Brahmhanda Mandala: The one whose red radiance pervades the entire universe</li>
- <li>Campaka Asoka Punnaga Saugandhika Lasat Kacha: The one whose hair is fragrant with flowers like campaka, asoka, punnaga, and saugandhika</li>
- <li>Kuru Vinda Mani Sreni Kanat Kotira Mandita: The one whose forehead is adorned with a row of rubies</li>
- <li>Astami Chandra Vibhraja Dalikasthala Sobhita: The one whose crescent moon on her forehead beautifies her face</li>
- <li>Mukha Chandra Kalankabha Mriganabhi Viseshaka: The one whose nose stud resembles a spot on the moon or a deer's eye</li>
- </ul>
- <p>And so on...</p>
- <h3>The benefits and significance of Sri Lalitha Sahasranamam</h3>
- <p>Sri Lalitha Sahasranamam is not just a hymn but a powerful mantra that can bestow various benefits to those who recite it with devotion and faith. Some of the benefits are:</p>
- <p>Sri Lalitha Sahasranamam Tamil Script Pdf Free Download<br />
- Lalitha Sahasranamam Lyrics in Tamil with Meaning Pdf<br />
- Sri Lalitha Sahasranama Stotram Tamil Pdf Austin Hindu Temple<br />
- Lalitha Sahasranamam in Tamil Pdf Download New Scientist<br />
- Sri Lalitha Sahasranamam Stotram in Tamil Bhaktinidhi<br />
- Lalitha Sahasranamam Tamil Pdf Free Download Aanmeegam<br />
- Sri Lalitha Sahasranama Stotram Tamil Lyrics Pdf<br />
- Lalitha Sahasranamam in Tamil with Audio Pdf Download<br />
- Sri Lalitha Sahasranamam Tamil Script Austin Hindu Temple Pdf<br />
- Lalitha Sahasranamam Lyrics in Tamil Wikipedia Pdf<br />
- Sri Lalitha Sahasranama Stotram Tamil Translation Pdf<br />
- Lalitha Sahasranamam in Tamil by Bombay Sisters Pdf Download<br />
- Sri Lalitha Sahasranamam Tamil Script with Meaning Pdf<br />
- Lalitha Sahasranamam Lyrics in Tamil Font Pdf<br />
- Sri Lalitha Sahasranama Stotram Tamil Mp3 Free Download Pdf<br />
- Lalitha Sahasranamam in Tamil by MS Subbulakshmi Pdf Download<br />
- Sri Lalitha Sahasranamam Tamil Script with Audio Pdf<br />
- Lalitha Sahasranamam Lyrics in Tamil and English Pdf<br />
- Sri Lalitha Sahasranama Stotram Tamil Book Pdf<br />
- Lalitha Sahasranamam in Tamil by Sivananda Vijayalakshmi Pdf Download<br />
- Sri Lalitha Sahasranamam Tamil Script with Commentary Pdf<br />
- Lalitha Sahasranamam Lyrics in Tamil Youtube Pdf<br />
- Sri Lalitha Sahasranama Stotram Tamil Video Download Pdf<br />
- Lalitha Sahasranamam in Tamil by Nitya Santhoshini Pdf Download<br />
- Sri Lalitha Sahasranamam Tamil Script with Explanation Pdf<br />
- Lalitha Sahasranamam Lyrics in Tamil Printable Pdf<br />
- Sri Lalitha Sahasranama Stotram Tamil Online Read Pdf<br />
- Lalitha Sahasranamam in Tamil by Priya Sisters Pdf Download<br />
- Sri Lalitha Sahasranamam Tamil Script with Benefits Pdf<br />
- Lalitha Sahasranamam Lyrics in Tamil for Beginners Pdf<br />
- Sri Lalitha Sahasranama Stotram Tamil Karaoke Download Pdf<br />
- Lalitha Sahasranamam in Tamil by Anuradha Paudwal Pdf Download<br />
- Sri Lalitha Sahasranamam Tamil Script with Phala Sruthi Pdf<br />
- Lalitha Sahasranamam Lyrics in Tamil for Recitation Pdf<br />
- Sri Lalitha Sahasranama Stotram Tamil Notes Download Pdf<br />
- Lalitha Sahasranamam in Tamil by Uma Mohan Pdf Download<br />
- Sri Lalitha Sahasranamam Tamil Script with Namavali Pdf<br />
- Lalitha Sahasranamam Lyrics in Tamil for Meditation Pdf<br />
- Sri Lalitha Sahasranama Stotram Tamil Parayanam Download Pdf<br />
- Lalitha Sahasranamam in Tamil by Sowmya Narayanan Pdf Download</p>
- <ul>
- <li>It grants peace, happiness, prosperity, health, wealth, fame, success, and protection to the devotees.</li>
- <li>It removes all kinds of obstacles, troubles, fears, sins, diseases, curses, and enemies from the devotees.</li>
- <li>It fulfills all kinds of desires, wishes, aspirations, and goals of the devotees.</li>
- <li>It awakens the latent spiritual power and wisdom in the devotees.</li>
- <li>It elevates the mind and soul of the devotees to higher levels of consciousness and bliss.</li>
- <li>It connects the devotees with Goddess Lalitha and makes them eligible for her grace and blessings.</li>
- </ul>
- <p>The significance of Sri Lalitha Sahasranamam is that it reveals the true nature and glory of Goddess Lalitha as the supreme reality and source of everything. It also teaches us how to worship her with love and devotion. It also helps us to understand ourselves better as we are reflections of her divine attributes. It also guides us to attain liberation from the cycle of birth and death by merging with her supreme self.</p>
- <h2>How to download Sri Lalitha Sahasranamam lyrics in Tamil pdf?</h2>
- <p>If you want to download Sri Lalitha Sahasranamam lyrics in Tamil pdf format for your convenience and ease of reading, you can follow these steps:</p>
- <h3>The sources and steps to download Sri Lalitha Sahasranama lyrics in Tamil pdf</h3>
- <ul>
- <li>Go to Google.com or any other search engine.</li>
- <li>Type "Sri Lalitha Sahasranama lyrics in Tamil pdf" or any similar keywords in the search box.</li>
- <li>You will get many results that offer free downloads or online reading of Sri Lalitha Sahasranama lyrics in Tamil pdf format.</li>
- <li>Select any reliable and authentic source that suits your preference.</li>
- <li>Click on the link or button that says "Download" or "Read Online".</li>
- <li>You will be directed to a page where you can either save or open the file on your device.</li>
- <li>You can also print or share the file with others if you wish.</li>
- </ul>
- <h3>The tips and precautions to download Sri Lalitha Sahasranama lyrics in Tamil pdf</h3>
- <ul>
- <li>Make sure you have a good internet connection and enough storage space on your device before downloading.</li>
- <li>Make sure you have a pdf reader or viewer installed on your device before opening.</li>
- <li>Make sure you download from a trusted and verified source that does not contain any malware or viruses.</li>
- <li>Make sure you respect the copyright laws and do not distribute or sell the file without permission.</li>
- <li>Make sure you recite or read Sri Lalitha Sahasranama lyrics in Tamil pdf with reverence and devotion.</li>
- </ul>
- <h2>How to chant Sri Lalitha Sahasranamam?</h2>
- <p>Chanting Sri Lalitha Sahasranamam is a simple and effective way to worship Goddess Lalitha and receive her grace and blessings. However, there are some guidelines and rules that one should follow to chant it properly and correctly. Here are some of them:</p>
- <h3>The best time and place to chant Sri Lalitha Sahasranamam</h3>
- <ul>
- <li>The best time to chant Sri Lalitha Sahasranamam is in the morning or evening, preferably during the Brahma Muhurta (the auspicious time before sunrise) or the Sandhya (the twilight time).</li>
- <li>The best place to chant Sri Lalitha Sahasranamam is in a clean, quiet, and sacred place, preferably in front of an idol or picture of Goddess Lalitha or in a temple dedicated to her.</li>
- <li>One should take a bath, wear clean clothes, and apply sandalwood paste or kumkum on the forehead before chanting.</li>
- <li>One should sit on a mat or a cloth facing east or north, with a straight spine and a calm mind.</li>
- <li>One should light a lamp or a candle and offer some flowers, fruits, or sweets to Goddess Lalitha before chanting.</li>
- </ul>
- <h3>The procedure and rules to chant Sri Lalitha Sahasranamam</h3>
- <ul>
- <li>One should chant Sri Lalitha Sahasranamam with devotion, concentration, and understanding.</li>
- <li>One should chant Sri Lalitha Sahasranamam with a clear and loud voice, pronouncing each word and syllable correctly and distinctly.</li>
- <li>One should chant Sri Lalitha Sahasranamam without any interruptions, distractions, or mistakes.</li>
- <li>One should chant Sri Lalitha Sahasranamam with a rosary or a mala made of rudraksha, crystal, lotus seed, or any other sacred material.</li>
- <li>One should chant Sri Lalitha Sahasranamam 108 times or any multiple of 9 times.</li>
- <li>One should chant Sri Lalitha Sahasranamam after invoking Goddess Lalitha with her dhyana (meditation), panchapuja (five-fold worship), and mula mantra (root mantra).</li>
- <li>One should chant Sri Lalitha Sahasranamam by following the order of the names as given in the text.</li>
- <li>One should chant Sri Lalitha Sahasranamam by offering a flower or a leaf to Goddess Lalitha after each name.</li>
- </ul>
- <h3>The effects and experiences of chanting Sri Lalitha Sahasranamam</h3>
- <ul>
- <li>Chanting Sri Lalitha Sahasranamam can have various positive effects and experiences on the physical, mental, emotional, and spiritual levels of the chanter.</li>
- <li>Chanting Sri Lalitha Sahasranamam can purify the body, mind, and soul of the chanter from all impurities and negativities.</li>
- <li>Chanting Sri Lalitha Sahasranamam can enhance the health, vitality, beauty, intelligence, creativity, and memory of the chanter.</li>
- <li>Chanting Sri Lalitha Sahasranamam can attract the love, affection, respect, admiration, and support of others towards the chanter.</li>
- <li>Chanting Sri Lalitha Sahasranamam can increase the wealth, prosperity, abundance, success, and happiness of the chanter.</li>
- <li>Chanting Sri Lalitha Sahasranamam can protect the chanter from all kinds of dangers, enemies, evils, and misfortunes.</li>
- <li>Chanting Sri Lalitha Sahasranamam can fulfill the desires, wishes, aspirations, and goals of the chanter.</li>
- <li>Chanting Sri Lalitha Sahasranamam can awaken the latent spiritual power and wisdom in the chanter.</li>
- <li>Chanting Sri Lalitha Sahasranamam can connect the chanter with Goddess Lalitha and make him or her eligible for her grace and blessings.</li>
- <li>Chanting Sri Lalitha Sahasranamam can elevate the mind and soul of the chanter to higher levels of consciousness and bliss.</li>
- </ul>
- <h2>Conclusion</h2>
- you can follow the steps and tips given in this article. You can also chant Sri Lalitha Sahasranama with devotion and faith to receive her grace and blessings. We hope you enjoyed reading this article and learned something new and useful. Thank you for your time and attention.</p>
- <h3>FAQs</h3>
- <p>Here are some frequently asked questions about Sri Lalitha Sahasranama and their answers.</p>
- <ol>
- <li>What is the meaning of Lalitha?</li>
- <p>Lalitha means playful, charming, delightful, graceful, and lovely. It is one of the names of Goddess Lalitha, who is also known as Lalita Devi, Tripura Sundari, Shodashi, Rajarajeshwari, and many other names.</p>
- <li>Who wrote Sri Lalitha Sahasranama?</li>
- <p>Sri Lalitha Sahasranama was revealed by Lord Hayagriva, an incarnation of Lord Vishnu, to Sage Agastya, one of the seven great sages in Hinduism. It is a part of the Brahmanda Purana, one of the 18 major Puranas in Hinduism.</p>
- <li>How many times should one chant Sri Lalitha Sahasranama?</li>
- <p>One should chant Sri Lalitha Sahasranama 108 times or any multiple of 9 times. One can also chant it as many times as one wishes or as per one's convenience and availability of time.</p>
- <li>What are the benefits of chanting Sri Lalitha Sahasranama?</li>
- <p>Chanting Sri Lalitha Sahasranama can bestow various benefits to the chanter such as peace, happiness, prosperity, health, wealth, fame, success, protection, fulfillment of desires, spiritual awakening, and liberation.</p>
- <li>What are the rules to chant Sri Lalitha Sahasranama?</li>
- <p>Some of the rules to chant Sri Lalitha Sahasranama are to chant it with devotion, concentration, and understanding; to chant it with a clear and loud voice; to chant it without any interruptions or mistakes; to chant it with a rosary or a mala; to chant it after invoking Goddess Lalitha with her dhyana, panchapuja, and mula mantra; to chant it by following the order of the names; and to chant it by offering a flower or a leaf to Goddess Lalitha after each name.</p>
- </ol>
- </p> 0a6ba089eb<br />
- <br />
- <br />
 
spaces/1gistliPinn/ChatGPT4/Examples/Carmen Serban Cu El Numai Cu El Zippy Flight Floyd Aerea Unlimited.md DELETED
@@ -1,6 +0,0 @@
- <br />
- <p>take the latest episodes, select the episode you want to download and click on the download button. you have to seek tips from the airport staff to make your travel to have a safe and memorable holiday. carmen serban cu el numai cu el zippy flight floyd aerea unlimited ->>> download 100 movies with complete script. </p>
- <h2>Carmen Serban Cu El Numai Cu El Zippy flight floyd aerea unlimited</h2><br /><p><b><b>DOWNLOAD</b> - <a href="https://imgfil.com/2uxYhM">https://imgfil.com/2uxYhM</a></b></p><br /><br />
- <p>carmen serban cu el numai cu el zippy was very popular in its latest period, because it was a functional airport, but it requires staff and people to solve problems. If you are in a difficult area, a workplace, you can go into a difficult area, behind the wheel. I have been behind the wheel many times, so I had better go to this area and know what happens from one thing to another. But it was a great pleasure, because there were many people you could remember, and they talked about who knew what to do, what they needed, if they needed it. But very well, in general. carmen serban was at that moment in the Iași airport, and I had better meet him here and meet him before setting off on foot. Because that is how the community was, it was a kind of friendship, and it wanted to find out what you expect when you talk to him. If you talk to people about the flight that did not keep him away, while it is a plane that might rain or fly, or from who knows who. Some fly beyond words like that, because such a thing has not happened in so many years. Whether it happens or not, it is a kind of adventure. Some of these people did not know how to take over a plane, if they did not have technology, if they did not have the technology to take over the plane.</p> 899543212b<br />
- <br />
- <br />
 
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Criminal Case Save the World! Mod APK - The Ultimate Adventure Game for Crime Lovers.md DELETED
@@ -1,115 +0,0 @@
-
- <h1>Criminal Case World Mod Apk: A Guide for Crime Solvers</h1>
- <p>If you are a fan of detective stories and hidden object games, you might have heard of Criminal Case, one of the most popular and addictive games on Facebook. But did you know that there is a mod apk version of the game that gives you unlimited energy and hints, as well as access to all the cases and features? In this article, we will tell you everything you need to know about Criminal Case World Mod Apk, including what it is, how to download and install it, why you should play it, and some tips and tricks to help you solve crimes faster and easier.</p>
- <h2>What is Criminal Case World Mod Apk?</h2>
- <h3>A brief introduction to the game and its features</h3>
- <p>Criminal Case is a hidden object game that puts you in the role of a detective who investigates murder cases in different locations around the world. You have to search for clues in crime scenes, examine evidence in the lab, interrogate suspects and witnesses, and bring the killer to justice. Along the way, you will also meet various characters, such as your partner, your boss, your forensic team, and other police officers.</p>
- <h2>criminal case world mod apk</h2><br /><p><b><b>Download File</b> &middot;&middot;&middot; <a href="https://urlin.us/2uT0ND">https://urlin.us/2uT0ND</a></b></p><br /><br />
- <p>Criminal Case World Mod Apk is a modified version of the game that gives you some advantages over the original one. For example, you will have unlimited energy and hints, which means you can play as long as you want without waiting for them to refill. You will also be able to unlock all the cases and features in the game, such as new locations, new outfits, new pets, new trophies, and more. You will also be able to play with your friends who are also using the mod apk version.</p>
- <h3>How to download and install the mod apk</h3>
- <p>To download and install Criminal Case World Mod Apk, you will need an Android device that meets the minimum requirements of the game. You will also need to enable unknown sources in your device settings, so that you can install apps from outside the Google Play Store. Here are the steps to follow:</p>
- <ol>
- <li>Go to [this link](^1^) and download the mod apk file.</li>
- <li>Locate the file in your device storage and tap on it to start the installation process.</li>
- <li>Follow the instructions on the screen and wait for the installation to finish.</li>
- <li>Launch the game from your app drawer or home screen.</li>
- <li>Enjoy playing Criminal Case World Mod Apk with unlimited energy and hints.</li>
- </ol>
- <h2>Why Play Criminal Case World Mod Apk?</h2>
- <h3>The benefits of playing with unlimited energy and hints</h3>
- <p>One of the main reasons why you should play Criminal Case World Mod Apk is that you will never run out of energy or hints while playing. Energy is used to enter crime scenes and mini-games, while hints are used to highlight objects or areas that are relevant to the investigation. In the original game, both energy and hints are limited and take time to regenerate. This can be frustrating if you want to play more or if you are stuck on a difficult scene. With the mod apk version, you can play without any interruptions or limitations. You can also use hints more freely to help you find clues faster and easier.</p>
- <h3>The challenges and rewards of solving murder cases</h3>
- <p>Another reason why you should play Criminal Case World Mod Apk is that you will experience the thrill and satisfaction of solving murder cases. Each case has a unique story, a different setting, and a diverse cast of characters. You will have to use your observation skills, your logic, and your intuition to find the evidence, analyze it, and deduce the killer. You will also have to face some twists and turns along the way, such as false leads, red herrings, and unexpected revelations. Solving cases will not only test your intelligence, but also your morality and your empathy.</p>
- <p>criminal case save the world mod apk unlimited money<br />
- criminal case world edition mod apk latest version<br />
- criminal case world edition mod apk android 1<br />
- criminal case world edition mod apk happymod<br />
- criminal case world edition mod apk revdl<br />
- criminal case world edition mod apk rexdl<br />
- criminal case world edition mod apk download for pc<br />
- criminal case world edition mod apk offline<br />
- criminal case world edition mod apk unlimited energy<br />
- criminal case world edition mod apk unlimited stars<br />
- criminal case world edition mod apk unlimited hints<br />
- criminal case world edition mod apk free shopping<br />
- criminal case world edition mod apk no ads<br />
- criminal case world edition mod apk 2023<br />
- criminal case world edition mod apk 2022<br />
- criminal case save the world hack apk download<br />
- criminal case save the world cheat apk<br />
- criminal case save the world premium apk<br />
- criminal case save the world pro apk<br />
- criminal case save the world cracked apk<br />
- criminal case save the world full apk<br />
- criminal case save the world unlocked apk<br />
- criminal case save the world mega mod apk<br />
- criminal case save the world god mode apk<br />
- criminal case save the world vip mod apk<br />
- how to install criminal case world mod apk<br />
- how to play criminal case world mod apk<br />
- how to update criminal case world mod apk<br />
- how to get criminal case world mod apk<br />
- how to download criminal case world mod apk on ios<br />
- best site to download criminal case world mod apk<br />
- best way to download criminal case world mod apk<br />
- best source for criminal case world mod apk<br />
- best alternative for criminal case world mod apk<br />
- best features of criminal case world mod apk<br />
- benefits of using criminal case world mod apk<br />
- advantages of using criminal case world mod apk<br />
- disadvantages of using criminal case world mod apk<br />
- risks of using criminal case world mod apk<br />
- reviews of criminal case world mod apk</p>
- <p>As you solve cases, you will also earn rewards, such as stars, coins, cash, and experience points. Stars are used to unlock new scenes and mini-games, as well as to perform certain actions, such as examining evidence or interrogating suspects. Coins are used to buy items in the shop, such as clothes, accessories, pets, and boosters. Cash is used to buy premium items, such as energy refills, hints, or special outfits. Experience points are used to level up and unlock new features and cases.</p>
- <h3>The fun and excitement of playing with friends</h3>
- <p>A third reason why you should play Criminal Case World Mod Apk is that you will have more fun and excitement by playing with your friends. You can connect your game account to your Facebook account and invite your friends who are also using the mod apk version to join you. You can then team up with them to solve cases together, or compete with them to see who can score higher or rank higher in the leaderboards. You can also chat with them, send them gifts, ask them for help, or help them in return.</p>
- <p>Playing with friends will not only make the game more enjoyable, but also more social and interactive. You can share your opinions, your theories, your strategies, and your emotions with your friends. You can also learn from them, challenge them, support them, and congratulate them. Playing with friends will also motivate you to play more and improve your skills.</p>
- <h2>Tips and Tricks for Criminal Case World Mod Apk</h2>
- <h3>How to rank up and earn stars faster</h3>
- <p>If you want to rank up and earn stars faster in Criminal Case World Mod Apk, here are some tips and tricks that you can follow:</p>
- <ul>
- <li>Play the scenes that have higher scores or higher star ratings. These scenes will give you more points and more stars per energy spent.</li>
- <li>Play the scenes that have fewer objects or smaller areas. These scenes will be easier to complete and will take less time.</li>
- <li>Play the scenes that have bonus items or lucky cards. These items will give you extra points or extra hints when you find them.</li>
- <li>Use boosters and power-ups before or during the scenes. Boosters are items that enhance your performance in the scenes, such as increasing your score multiplier or slowing down the timer. Power-ups are items that help you find objects or clues in the scenes, such as revealing their locations or magnifying them.</li>
- <li>Repeat the scenes that you have already completed. This will help you memorize the objects and their locations, as well as improve your speed and accuracy.</li>
- </ul>
- <h3>How to use boosters and power-ups effectively</h3>
- <p>Boosters and power-ups are very useful items that can help you solve cases faster and easier in Criminal Case World Mod Apk. However, they are also limited and costly, so you need to use them wisely. Here are some tips on how to use boosters and power-ups effectively:</p>
- <ul>
- <li>Use boosters before entering a scene or a mini-game. Boosters last for a certain amount of time or until you exit the scene or the mini-game. Therefore, it is better to use them before starting rather than during or after.</li>
- <li>Use power-ups only when necessary or when they have a significant impact. Power-ups are consumed immediately after using them. Therefore, it is better to use them only when you are stuck or when they can make a big difference in your score or your progress.</li>
- <li>Choose the right booster or power-up for the right scene or mini-game. Different boosters and power-ups have different effects and purposes. Therefore, it is better to use the ones that match the type of scene or mini-game that you are playing.</li>
- <li>Save some boosters and power-ups for later cases or harder scenes or mini-games. Boosters and power-ups become more scarce and more expensive as you progress in the game. Therefore, it is better to save some of them for later cases or harder scenes or mini-games, where they can be more helpful and valuable.</li>
- </ul>
- <h3>How to find clues and evidence easily</h3>
- <p>Clues and evidence are essential items that can help you solve cases and identify the killer in Criminal Case World Mod Apk. However, they are not always easy to find or recognize in the scenes or the mini-games. Here are some tips on how to find clues and evidence easily:</p>
- <ul>
- <li>Look for objects or areas that are highlighted, circled, or marked in some way. These are usually clues or evidence that are relevant to the investigation.</li>
- <li>Look for objects or areas that are out of place, unusual, or suspicious. These are also likely to be clues or evidence that can provide some information or insight.</li>
- <li>Look for objects or areas that are related to the victim, the suspects, the witnesses, or the motive. These can also be clues or evidence that can link them to the crime or the killer.</li>
91
- <li>Use hints if you are stuck or unsure. Hints will show you the location or the name of an object or an area that is a clue or evidence.</li>
92
- <li>Pay attention to the comments and suggestions of your partner, your boss, your forensic team, and other police officers. They will often give you hints or directions on where to look for clues or evidence.</li>
93
- </ul>
94
- <h2>Conclusion</h2>
95
- <p>Criminal Case World Mod Apk is a great game for anyone who loves detective stories and hidden object games. It offers unlimited energy and hints, as well as access to all the cases and features in the game. It also allows you to play with your friends who are also using the mod apk version. It is a fun and exciting way to test your intelligence, your morality, and your empathy as you solve murder cases around the world. If you want to download and install Criminal Case World Mod Apk, just follow the steps that we have provided in this article. And if you want to rank up and earn stars faster, use boosters and power-ups effectively, and find clues and evidence easily, just follow the tips and tricks that we have shared with you. We hope that this article has been helpful and informative for you. Now, what are you waiting for? Grab your magnifying glass and your badge, and start solving crimes with Criminal Case World Mod Apk!</p>
96
- <h2>FAQs</h2>
97
- <h3>Q1: Is Criminal Case World Mod Apk safe to use?</h3>
98
- <p>A1: Yes, Criminal Case World Mod Apk is safe to use as long as you download it from a trusted source and scan it with an antivirus program before installing it. However, you should be aware that using mod apk versions of games may violate their terms of service and may result in your account being banned or suspended by the developers. Therefore, use it at your own risk and discretion.</p>
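The advice above — download only from a trusted source and scan the file before installing — can be complemented by checking the file's SHA-256 checksum against the one published by the download page. A minimal sketch (the file path and expected digest below are placeholders, not values from this article):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in fixed-size chunks so large APKs don't load into memory at once.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded file's digest with the one the source published."""
    return sha256_of(path) == expected_sha256.lower()
```

Only install the file if `verify_download` returns `True`; a mismatch means the file was corrupted or tampered with in transit.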
99
- <h3>Q2: How can I update Criminal Case World Mod Apk?</h3>
100
- <p>A2: To update Criminal Case World Mod Apk, you will need to download the latest version of the mod apk file from [this link] and install it over the existing one. You do not need to uninstall the previous version first. However, you should back up your game data before updating, in case something goes wrong during the process.</p>
101
- <h3>Q3: How can I get more friends to play with?</h3>
102
- <p>A3: To get more friends to play with in Criminal Case World Mod Apk, you can invite your existing Facebook friends who are also using the mod apk version to join you. You can also join online communities and groups of Criminal Case players who are looking for new friends and partners. You can also add random players who appear in your game as potential friends.</p>
103
- <h3>Q4: How can I report a bug or a problem with the game?</h3>
104
- <p>A4: To report a bug or a problem with Criminal Case World Mod Apk, you can contact the developers or the support team through their official website [here]. You can also leave a comment or a review on [this page] where you downloaded the mod apk file. Please provide as much detail as possible about the issue that you encountered, such as when it happened, what you were doing, what device you were using, what error message you received, etc.</p>
105
- <h3>Q5: How can I contact the developers or the support team?</h3>
106
- <p>A5: To contact the developers or the support team of Criminal Case World Mod Apk, you can use one of the following methods:</p>
107
- <ul>
108
- <li>Email: [email protected]</li>
109
- <li>Facebook: https://www.facebook.com/CriminalCaseGame/</li>
110
- <li>Twitter: https://twitter.com/CriminalCase_PS</li>
111
- <li>Website: https://www.prettysimplegames.com/</li>
112
- </ul>
113
- <p>We hope that this article has answered all your questions about Criminal Case World Mod Apk. If you have any other questions, feel free to contact us through any of the methods above. Thank you for reading and happy crime solving!</p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download !NEW! 1o5 Version Please Open Via Salary.md DELETED
@@ -1,49 +0,0 @@
1
-
2
- <h1>What is 1o5 version and why you should download it</h1>
3
- <p>Have you ever wondered how you can pay salaries and suppliers online without hassle or fees? If so, you might want to check out <strong>1o5 version</strong>, a new way to make payments with your phone using Google Pay or WhatsApp.</p>
4
- <p>1o5 version is a solution that integrates directly with your Sage software, allowing you to send and receive money instantly, securely, and conveniently. You can also earn rewards, discover offers, and understand your spending with Google Pay. Or you can enjoy private messaging, voice and video calls, and group chats with WhatsApp.</p>
5
- <p>In this article, we will show you how to download 1o5 version on your device, how to open it via salary*, and what benefits you can get from using it. We will also answer some frequently asked questions about this innovative payment method.</p>
7
- <h2>How to download 1o5 version on your device</h2>
8
- <p>Downloading 1o5 version is easy and free. All you need is a smartphone or tablet that supports Google Pay or WhatsApp. Here are the steps to follow:</p>
9
- <ol>
10
- <li>Go to the Google Play Store or the App Store and search for "Google Pay: Save and Pay" or "WhatsApp Messenger". </li>
11
- <li>Tap on the app icon and then tap on "Install".</li>
12
- <li>Open the app and follow the instructions to set up your account and link your bank card.</li>
13
- </ol>
14
- <p>Congratulations! You have successfully downloaded 1o5 version on your device. Now you are ready to open it via salary*.</p>
15
- <h2>How to open 1o5 version via salary*</h2>
16
- <p>Opening 1o5 version via salary* is simple and fast. All you need is a Sage account that supports salary and supplier payments. Here are the steps to follow:</p>
17
- <ol>
18
- <li>Log in to your Sage account and go to the "Salary and Supplier Payments" section.</li>
19
- <li>Choose the option to pay your staff or suppliers with Google Pay or WhatsApp.</li>
20
- <li>Enter the amount, the recipient's phone number, and a reference.</li>
21
- <li>Confirm the payment and send it.</li>
22
- </ol>
23
- <p>That's it! You have successfully opened 1o5 version via salary* and made a payment with your phone. You will receive a confirmation message and a receipt for your transaction.</p>
24
- <h2>Benefits of using 1o5 version via salary*</h2>
26
- <p>Using 1o5 version via salary* has many benefits for you and your business. Here are some of them:</p>
27
- <ul>
28
- <li>You can pay your staff and suppliers quickly, securely, and conveniently with your phone. No need to write checks, use cash, or log in to multiple platforms.</li>
29
- <li>You can save costs on your transactions and avoid fees or charges. Google Pay and WhatsApp are free to use and do not charge any extra fees for sending or receiving money.</li>
30
- <li>You can automate your processes and get full visibility and control of your payments. You can schedule recurring payments, set reminders, and track your payment history with Sage.</li>
31
- </ul>
32
- <p>As you can see, using 1o5 version via salary* can help you save money and time, improve your cash flow, and streamline your operations.</p>
33
- <h2>FAQs about 1o5 version via salary*</h2>
34
- <p>You may have some questions about 1o5 version via salary*. Here are some of the most common ones:</p>
35
- <h3>What is salary*?</h3>
36
- <p>Salary* is a service that allows you to pay salaries and suppliers online with Sage. You can choose from various payment methods, such as bank transfer, debit card, credit card, PayPal, Google Pay, or WhatsApp. You can also access real-time reports, analytics, and insights on your payments.</p>
37
- <h3>Is 1o5 version safe to use?</h3>
38
- <p>Yes, 1o5 version is safe to use. Google Pay and WhatsApp use advanced encryption and security features to protect your personal and financial information. They also comply with the Payment Card Industry Data Security Standard (PCI DSS) and the General Data Protection Regulation (GDPR). Sage also uses secure servers and encryption to safeguard your data.</p>
39
- <h3>How can I track my payments with 1o5 version?</h3>
40
- <p>You can track your payments with 1o5 version by logging in to your Sage account and going to the "Salary and Supplier Payments" section. There you can see the status, date, amount, recipient, and reference of each payment. You can also download or print receipts for your records.</p>
41
- <h3>What if I have a problem with my payment?</h3>
42
- <p>If you have a problem with your payment, you can contact the customer support team of Google Pay or WhatsApp, depending on which app you used. They will help you resolve the issue as soon as possible. You can also contact Sage support if you need assistance with your Sage account or software.</p>
43
- <h3>How can I get more information about 1o5 version?</h3>
44
- <p>If you want to get more information about 1o5 version, you can visit the official websites of Google Pay or WhatsApp, or read their FAQs. You can also visit the Sage website or read their blog for more tips and insights on how to use salary* effectively.</p>
45
- <h2>Conclusion</h2>
46
- <p>In conclusion, 1o5 version is a new way to pay salaries and suppliers online with your phone using Google Pay or WhatsApp. It is easy to download, simple to open via salary*, and beneficial for your business. It can help you save money and time, improve your cash flow, and streamline your operations.</p>
47
- <p>Why not give it a try today? Download 1o5 version on your device and open it via salary*. You will be amazed by how convenient and rewarding it is to make payments with your phone.</p>
spaces/1phancelerku/anime-remove-background/CarX Street v0.8.6 Mod Apk The Ultimate Street Racing Game with Unlimited Cash.md DELETED
@@ -1,107 +0,0 @@
1
-
2
- <h1>CarX Street Mod APK 0.8.6 Download: A Guide for Android Users</h1>
3
- <p>If you are a fan of racing games, you might have heard of CarX Street, a dynamic open world game that lets you become a street racer the way you want. In this game, you can customize your car, challenge other racers, and explore the city of Sunset City. But what if you want to enjoy the game with more features and unlimited resources? That's where CarX Street Mod APK 0.8.6 comes in handy.</p>
4
- <p>In this article, we will tell you what CarX Street is, how to download and install CarX Street Mod APK 0.8.6 on your Android device, why you should play it, and some tips and tricks to help you become a legend of the streets.</p>
6
- <h2>What is CarX Street?</h2>
7
- <p>CarX Street is a racing game developed by CarX Technologies, the creators of CarX Drift Racing and CarX Highway Racing. It was released in February 2022 for Android and iOS devices, and has received positive reviews from players and critics alike.</p>
8
- <p>CarX Street is different from other racing games because it gives you more freedom and control over your car and your racing style. You can choose from over 50 cars, each with its own characteristics and customization options. You can also tune your car's performance, appearance, and sound to suit your preferences.</p>
9
- <p>But CarX Street is not just about racing. It's also about exploring the vast and vibrant city of Sunset City, where you can find various events, challenges, and secrets. You can also interact with other racers, join clubs, or create your own club and invite your friends.</p>
48
- <h3>Features of CarX Street</h3>
49
- <p>Some of the features that make CarX Street an amazing game are:</p>
50
- <ul>
51
- <li>Realistic physics and graphics that create an immersive racing experience.</li>
52
- <li>A dynamic open world that changes according to the time of day, weather, and traffic.</li>
53
- <li>A variety of game modes, such as sprint, circuit, drift, drag, time attack, and more.</li>
54
- <li>A rich story mode that follows your journey as a street racer in Sunset City.</li>
55
- <li>A multiplayer mode that allows you to compete with other players online or offline.</li>
56
- <li>A social system that lets you chat with other racers, join clubs, or create your own club.</li>
57
- <li>A garage system that lets you store and manage your cars.</li>
58
- <li>A workshop system that lets you customize and upgrade your cars.</li>
59
- </ul>
60
- <h3>How to download and install CarX Street Mod APK 0.8.6 on Android?</h3>
61
- <p>If you want to play CarX Street with more features and unlimited resources, you can download and install CarX Street Mod APK 0.8.6 on your Android device. Here are the steps to do so:</p>
62
- <ol>
63
- <li>Download the CarX Street Mod APK 0.8.6 file from a trusted source, such as [PlayMods].</li>
64
- <li>Go to your device's settings and enable the installation of apps from unknown sources.</li>
65
- <li>Locate the downloaded file in your device's storage and tap on it to install it.</li>
66
- <li>Wait for the installation process to finish and launch the game.</li>
67
- <li>Enjoy playing CarX Street Mod APK 0.8.6 with unlimited money, gold, diamonds, fuel, and more.</li>
68
- </ol>
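Before tapping the downloaded file in step 3, you can sanity-check that it is at least a structurally valid APK: an APK is a ZIP archive that must contain an `AndroidManifest.xml` entry. This is a hedged sketch of a structure check only — it says nothing about whether the file is safe — and the file name you pass in is your own:

```python
import zipfile

def looks_like_apk(path: str) -> bool:
    """Rough structural check: an APK is a ZIP archive
    that contains an AndroidManifest.xml entry."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return "AndroidManifest.xml" in zf.namelist()
```

If this returns `False`, the download is truncated or not an APK at all, and installing it will fail.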
69
- <h2>Why should you play CarX Street Mod APK 0.8.6?</h2>
70
- <p>CarX Street Mod APK 0.8.6 is not just a regular racing game. It's a game that offers you more fun, excitement, and customization than ever before. Here are some of the benefits of playing CarX Street Mod APK 0.8.6:</p>
71
- <h3>Benefits of playing CarX Street Mod APK 0.8.6</h3>
72
- <ul>
73
- <li>You can access all the cars, parts, and upgrades without spending any real money.</li>
74
- <li>You can unlock all the game modes, events, and challenges without any restrictions.</li>
75
- <li>You can enjoy the game without worrying about running out of fuel, money, gold, diamonds, or other resources.</li>
76
- <li>You can modify your car's performance, appearance, and sound to your liking.</li>
77
- <li>You can explore the city of Sunset City without any limits or boundaries.</li>
78
- <li>You can compete with other players online or offline with an advantage.</li>
79
- </ul>
80
- <h3>Tips and tricks for playing CarX Street Mod APK 0.8.6</h3>
81
- <p>If you want to become a legend of the streets, you need to master the skills and strategies of racing in CarX Street Mod APK 0.8.6. Here are some tips and tricks to help you out:</p>
82
- <ul>
83
- <li>Choose the right car for each game mode and event. Different cars have different strengths and weaknesses, such as speed, acceleration, handling, drift, and durability.</li>
84
- <li>Tune your car's performance according to the track and the weather conditions. You can adjust your car's engine, transmission, suspension, brakes, tires, and more.</li>
85
- <li>Customize your car's appearance and sound to suit your personality and style. You can change your car's color, paint, decals, wheels, exhaust, lights, and more.</li>
86
- <li>Learn how to control your car's speed, steering, braking, and drifting. You can use different control options, such as tilt, touch, or buttons.</li>
87
- <li>Use the nitro boost wisely. You can activate it by tapping on the screen or by performing drifts or stunts. Nitro boost can help you gain speed, overtake opponents, or escape from danger.</li>
88
- <li>Watch out for traffic, obstacles, and police. They can slow you down, damage your car, or arrest you.</li>
89
- </ul>
90
- <h2>Conclusion</h2>
91
- <p>CarX Street Mod APK 0.8.6 is a game that will make you feel the thrill of street racing like never before. You can customize your car, challenge other racers, and explore the city of Sunset City with unlimited resources and features. If you are looking for a racing game that is realistic, dynamic, and fun, you should download and install CarX Street Mod APK 0.8.6 on your Android device today.</p>
92
- <h3>FAQs</h3>
93
- <p>Here are some frequently asked questions about CarX Street Mod APK 0.8.6:</p>
94
- <ol>
95
- <li><b>Is CarX Street Mod APK 0.8.6 safe to download and install?</b></li>
96
- <p>Yes, CarX Street Mod APK 0.8.6 is safe to download and install as long as you get it from a trusted source like [PlayMods]. However, you should be aware that modded apps may not be compatible with the official version of the game or the latest updates. You should also back up your data before installing the modded app, in case something goes wrong.</p>
97
- <li><b>What are the requirements to play CarX Street Mod APK 0.8.6?</b></li>
98
- <p>To play CarX Street Mod APK 0.8.6, you need an Android device that has at least 4 GB of RAM, 2 GB of free storage space, and Android 5.0 or higher. You also need a stable internet connection to play online or offline.</p>
99
- <li><b>Can I play CarX Street Mod APK 0.8.6 with my friends?</b></li>
100
- <p>Yes, you can play CarX Street Mod APK 0.8.6 with your friends, either online or offline. You can join or create a club and invite your friends to join you. You can also chat with them, send them gifts, and challenge them to races.</p>
101
- <li><b>How can I get more money, gold, diamonds, and other resources in CarX Street Mod APK 0.8.6?</b></li>
102
- <p>You don't need to worry about getting more resources in CarX Street Mod APK 0.8.6, because you will have unlimited amounts of them from the start. You can use them to buy new cars, parts, upgrades, and more.</p>
103
- <li><b>Where can I find more information about CarX Street Mod APK 0.8.6?</b></li>
104
- <p>If you want to learn more about CarX Street Mod APK 0.8.6, you can visit the official website of CarX Technologies, the developer of the game. You can also follow their social media accounts on Facebook, Twitter, Instagram, and YouTube for the latest news and updates.</p>
105
- </ol>
spaces/1phancelerku/anime-remove-background/Enjoy PUBG MOBILE on PC Mac with BlueStacks Emulator.md DELETED
@@ -1,205 +0,0 @@
1
- <br />
2
- <h1>How to Download PUBG Mobile Emulator for PC</h1>
3
- <p>If you are a fan of <strong>PUBG Mobile</strong>, the popular battle royale game for mobile devices, you might be wondering how you can play it on your PC. After all, playing on a bigger screen with better graphics and controls can enhance your gaming experience and give you an edge over your opponents. Fortunately, there is a way to do that: by using a <strong>PUBG Mobile emulator</strong>.</p>
4
- <p>A PUBG Mobile emulator is a software application that allows you to run PUBG Mobile on your PC. It simulates the Android environment and lets you access the Google Play Store and download the game. However, not all emulators are created equal. Some are faster, smoother, and more compatible than others. So, how do you choose the best PUBG Mobile emulator for your PC? And how do you download and install it? In this article, we will answer these questions and more. We will also review some of the best emulators available in the market and compare their features and performance.</p>
6
- <h2>What is PUBG Mobile Emulator?</h2>
7
- <p>A <strong>PUBG Mobile emulator</strong> is a program that allows you to play PUBG Mobile on your PC. It works by creating a virtual Android device on your computer, where you can install and run apps from the Google Play Store. An emulator acts as a bridge between your PC and your mobile game, enabling you to enjoy the best of both worlds.</p>
8
- <p>There are many reasons why you might want to use a PUBG Mobile emulator. For instance, you might have a low-end or old smartphone that cannot run the game smoothly or at all. Or, you might prefer playing on a larger screen with higher resolution and frame rate. Or, you might want to use a keyboard and mouse instead of touch controls for more accuracy and responsiveness. Whatever your reason, a PUBG Mobile emulator can help you achieve it.</p>
9
- <h2>Why Use PUBG Mobile Emulator?</h2>
10
- <p>Using a PUBG Mobile emulator has many benefits and advantages over playing on your smartphone. Here are some of them:</p>
11
- <ul>
59
- <li><strong>Better graphics and performance</strong>: Playing on a PC allows you to enjoy higher graphics settings, resolution, frame rate, and sound quality than on a mobile device. You can also adjust these settings according to your preference and system specifications.</li>
60
- <li><strong>Bigger screen</strong>: Playing on a PC gives you a wider field of view and more details than playing on a small screen. You can also see more enemies, items, vehicles, and map features.</li>
61
- <li><strong>Better controls</strong>: Playing on a PC lets you use a keyboard and mouse instead of touch controls, which can be more precise, comfortable, and customizable. You can also use a gamepad or other peripherals if you prefer. You can also map the keys and buttons to your liking and create macros and shortcuts for faster actions.</li>
62
- <li><strong>More features and options</strong>: Playing on a PC gives you access to more features and options than playing on a mobile device. For example, you can record and stream your gameplay, chat with your friends, use cheats and mods, and customize your game settings.</li>
63
- <li><strong>Free and legal</strong>: Playing on a PC does not cost you anything, as long as you have a compatible emulator and a Google account. You can download and install PUBG Mobile for free from the Google Play Store. Moreover, using an emulator is not illegal or against the game's terms of service, as long as you do not use any hacks or unfair advantages.</li>
64
- </ul>
65
- <p>As you can see, using a PUBG Mobile emulator can enhance your gaming experience and make it more fun and enjoyable. However, not all emulators are the same. Some are better than others in terms of compatibility, performance, stability, and features. Therefore, you need to choose the best PUBG Mobile emulator for your PC.</p>
66
- <h2>How to Choose the Best PUBG Mobile Emulator?</h2>
67
- <p>There are many factors and criteria that you need to consider when choosing the best PUBG Mobile emulator for your PC. Here are some of them:</p>
68
- <ul>
69
- <li><strong>Compatibility</strong>: The emulator should be compatible with your PC's operating system, hardware, and software. It should also be compatible with the latest version of PUBG Mobile and support all its modes and features.</li>
70
- <li><strong>Performance</strong>: The emulator should run PUBG Mobile smoothly and without lag, stuttering, or crashing. It should also have low CPU and memory usage and support high graphics settings and frame rate.</li>
71
- <li><strong>Stability</strong>: The emulator should be stable and reliable, without any bugs, errors, or glitches. It should also have regular updates and fixes to ensure its functionality and security.</li>
72
- <li><strong>Features</strong>: The emulator should have features that enhance your gameplay and convenience, such as keyboard and mouse support, gamepad support, key mapping, macros, screen recording, streaming, chat, cheats, mods, etc.</li>
73
- <li><strong>User-friendliness</strong>: The emulator should be easy to download, install, set up, and use. It should also have a simple and intuitive interface and design that makes it easy to navigate and customize.</li>
74
- <li><strong>Reputation</strong>: The emulator should have a good reputation among users and reviewers. It should also have positive feedback, ratings, reviews, and testimonials from its users.</li>
75
- </ul>
76
- <p>Based on these criteria, we have selected the best PUBG Mobile emulator for your PC: <strong>GameLoop</strong>.</p>
77
- <h2>The Best PUBG Mobile Emulator: GameLoop</h2>
78
- <p><strong>GameLoop</strong> is the official emulator for PUBG Mobile developed by Tencent Games, the same company that created the game. It is designed specifically for PUBG Mobile and optimized for its performance and features. It is also one of the most popular and widely used emulators for PUBG Mobile in the world.</p>
79
- <p>GameLoop has many advantages over other emulators for PUBG Mobile; its key features are covered in the sections below.</p>
80
- <h3>How to Download and Install GameLoop Emulator?</h3>
81
- <p>Downloading and installing GameLoop emulator is very easy and straightforward. Here are the steps that you need to follow:</p>
82
- <ol>
83
- <li>Go to the official website of GameLoop at <a href="https://gameloop.fun/">https://gameloop.fun/</a>.</li>
84
- <li>Click on the "Download" button on the homepage to download the installer file.</li>
85
- <li>Run the installer file and follow the instructions on the screen to install GameLoop on your PC.</li>
86
- <li>Launch GameLoop from your desktop or start menu.</li>
87
- <li>On the Game Center tab, search for PUBG Mobile or browse through the categories to find it.</li>
88
- <li>Click on the "Install" button to download and install PUBG Mobile on GameLoop.</li>
89
- <li>Once the installation is complete, click on the "Play" button to launch PUBG Mobile on GameLoop.</li>
90
- </ol>
91
- <h3>How to Play PUBG Mobile on GameLoop Emulator?</h3>
92
- <p>Playing PUBG Mobile on GameLoop emulator is very similar to playing it on your smartphone. However, there are some tips and tricks that you can use to optimize your gameplay and make it more enjoyable. Here are some of them:</p>
93
- <ul>
94
- <li><strong>Adjust your graphics settings</strong>: You can adjust your graphics settings according to your PC's specifications and your preference. You can choose from low, medium, high, ultra, or extreme, and enable or disable anti-aliasing, shadows, and other effects. Higher graphics settings make the game look more realistic and detailed but consume more resources and can hurt performance; lower settings make the game run faster and smoother at reduced visual quality.</li>
95
- <li><strong>Use keyboard and mouse controls</strong>: You can use your keyboard and mouse to control your character, aim, shoot, move, and perform other actions in the game. You can also customize your key mapping and sensitivity settings to suit your preference and style. Keyboard and mouse controls will give you more accuracy and responsiveness than touch controls, especially in combat situations.</li>
96
- <li><strong>Enable smart mode</strong>: You can enable smart mode in GameLoop to automatically adjust your graphics settings and frame rate according to your network speed and PC performance. This will help you avoid lag, stuttering, or freezing issues that might affect your gameplay.</li>
97
- <li><strong>Use turbo mode</strong>: You can use turbo mode in GameLoop to boost your game performance and speed up your loading time. Turbo mode will allocate more CPU and memory resources to the game and reduce the background processes that might interfere with the game.</li>
98
- <li><strong>Use game booster</strong>: You can use game booster in GameLoop to optimize your PC settings and improve your game performance. Game booster will scan your PC and detect any issues that might affect your game, such as outdated drivers, junk files, malware, etc. It will then fix these issues and enhance your PC's performance.</li>
99
- </ul>
100
- <h3>What are the Features of GameLoop Emulator?</h3>
101
- <p>GameLoop emulator has many features that make it one of the best emulators for PUBG Mobile. Here are some of them:</p>
102
- <ul>
103
- <li><strong>Official support</strong>: GameLoop is the official emulator for PUBG Mobile, which means that it has full support from Tencent Games and PUBG Corporation. It also means that it is compatible with all the updates, patches, events, and features of PUBG Mobile.</li>
104
- <li><strong>High compatibility</strong>: GameLoop is compatible with most Windows PC systems, from Windows 7 to Windows 10. It also supports both 32-bit and 64-bit versions of Windows. It can run PUBG Mobile on low-end or old PCs as well as high-end or new PCs.</li>
105
- <li><strong>High performance</strong>: GameLoop is optimized for PUBG Mobile and delivers high performance and stability. It can run PUBG Mobile at 60 FPS or higher on most PCs. It also has low CPU and memory usage and supports high graphics settings.</li>
106
- <li><strong>High security</strong>: GameLoop is secure and safe to use. It does not contain any viruses, malware, spyware, or adware. It also does not use any hacks or cheats that might get you banned from PUBG Mobile.</li>
107
- <li><strong>Multiple languages</strong>: GameLoop supports multiple languages, including English, Chinese, Korean, Spanish, Portuguese, Arabic, Turkish, etc. You can choose your preferred language from the settings menu.</li>
108
- <li><strong>Multiple games</strong>: GameLoop is not only for PUBG Mobile. It also supports other popular mobile games, such as Call of Duty: Mobile, Free Fire, Clash of Clans, Among Us, etc. You can download and play these games from the Game Center tab.</li>
109
- <li><strong>Multiple features</strong>: GameLoop has many features that enhance your gameplay and convenience, such as keyboard and mouse support, gamepad support, key mapping, macros, screen recording, streaming, chat, cheats, mods, etc.</li>
110
- </ul>
111
- <p>As you can see, GameLoop is a powerful and versatile emulator that can provide you with the best PUBG Mobile experience on your PC. However, if you want to try other emulators, there are some alternatives that you can consider.</p>
112
- <h2>Other PUBG Mobile Emulators to Consider</h2>
113
- <p>GameLoop is not the only emulator that can run PUBG Mobile on your PC. There are other emulators that have their own strengths and weaknesses. Here are some of them:</p>
114
- <h3>BlueStacks Emulator</h3>
115
- <p><strong>BlueStacks</strong> is one of the oldest and most popular emulators for Android games and apps. It has a large user base and a wide range of games and apps that it supports. It also has many features and options that make it user-friendly and customizable.</p>
116
- <p>However, BlueStacks is not very optimized for PUBG Mobile. It can run the game, but not as smoothly or as fast as GameLoop. It also has higher CPU and memory usage and lower graphics quality. Moreover, BlueStacks is not officially supported by Tencent Games or PUBG Corporation, which means that it might have compatibility or security issues in the future.</p>
117
- <h3>Tencent Gaming Buddy (AKA Gameloop) Emulator</h3>
118
- <p><strong>Tencent Gaming Buddy</strong> is the predecessor of GameLoop. It is the original emulator for PUBG Mobile developed by Tencent Games. It is still available for download and use, but it is no longer updated or maintained by Tencent Games.</p>
119
- <p>Tencent Gaming Buddy is similar to GameLoop in many aspects, such as compatibility, performance, stability, and features. However, it is not as advanced or as refined as GameLoop. It also does not support the latest version of PUBG Mobile or its new modes and features. Therefore, it is recommended to use GameLoop instead of Tencent Gaming Buddy for PUBG Mobile.</p>
120
- <h3>Comparison Table of PUBG Mobile Emulators</h3>
121
- <p>To help you compare and choose the best PUBG Mobile emulator for your PC, here is a table that summarizes and compares the main features and performance of GameLoop, BlueStacks, and Tencent Gaming Buddy:</p>
122
- <table>
123
- <tr>
124
- <th>Emulator</th>
125
- <th>Compatibility</th>
126
- <th>Performance</th>
127
- <th>Stability</th>
128
- <th>Features</th>
129
- <th>User-friendliness</th>
130
- <th>Reputation</th>
131
- </tr>
132
- <tr>
133
- <td>GameLoop</td>
134
- <td>High</td>
135
- <td>High</td>
136
- <td>High</td>
137
- <td>High</td>
138
- <td>High</td>
139
- <td>High</td>
140
- </tr>
141
- <tr>
142
- <td>BlueStacks</td>
143
- <td>Medium</td>
144
- <td>Medium</td>
145
- <td>Medium</td>
146
- <td>Medium</td>
147
- <td>Medium</td>
148
- <td>High</td>
149
- </tr>
150
- <tr>
151
- <td>Tencent Gaming Buddy (AKA Gameloop)</td>
152
- <td>Medium</td>
153
- <td>Medium</td>
154
- <td>Medium</td>
155
- <td>Medium</td>
156
- <td>Medium</td><td>Medium</td>
157
- </tr>
158
- </table>
159
- <h2>Conclusion</h2>
160
- <p>PUBG Mobile is one of the most popular and exciting mobile games in the world. It offers a thrilling and immersive battle royale experience that you can enjoy with your friends or solo. However, playing on a mobile device might not be the best way to experience PUBG Mobile. You might face issues such as low graphics quality, small screen size, poor controls, battery drain, overheating, etc.</p>
161
- <p>That is why using a PUBG Mobile emulator can be a great solution. A PUBG Mobile emulator allows you to play PUBG Mobile on your PC, which can improve your gameplay and convenience. You can enjoy better graphics and performance, bigger screen, better controls, more features and options, and more.</p>
162
- <p>However, not all PUBG Mobile emulators are the same. Some are better than others in terms of compatibility, performance, stability, features, user-friendliness, and reputation. Therefore, you need to choose the best PUBG Mobile emulator for your PC.</p>
163
- <p>In this article, we have reviewed and compared some of the best PUBG Mobile emulators available in the market. We have also provided a step-by-step guide on how to download and install GameLoop emulator, which is the official and best emulator for PUBG Mobile. We have also given some tips and tricks on how to play PUBG Mobile on GameLoop emulator and optimize your gameplay.</p>
164
- <p>We hope that this article has helped you learn how to download PUBG Mobile emulator for PC and enjoy PUBG Mobile on a bigger and better platform. If you have any questions or feedback, please feel free to leave a comment below. Happy gaming!</p>
165
- <h2>FAQs</h2>
166
- <p>Here are some frequently asked questions and answers about PUBG Mobile emulator for PC:</p>
167
- <ul>
168
- <li><strong>Q: Is using a PUBG Mobile emulator illegal or cheating?</strong></li>
169
- <li><strong>A: No, using a PUBG Mobile emulator is not illegal or cheating. It is allowed and supported by Tencent Games and PUBG Corporation, as long as you do not use any hacks or unfair advantages that might ruin the game for other players.</strong></li>
170
- <li><strong>Q: Can I play with my friends who are using mobile devices?</strong></li>
171
- <li><strong>A: Yes, you can play with your friends who are using mobile devices. However, you will only be matched with other players who are using emulators or cross-platform mode. This is to ensure fair and balanced gameplay for everyone.</strong></li>
172
- <li><strong>Q: Which PUBG Mobile emulator is the best for low-end or old PCs?</strong></li>
173
- <li><strong>A: GameLoop is the best PUBG Mobile emulator for low-end or old PCs. It can run PUBG Mobile smoothly and without lag on most PCs. It also has low CPU and memory usage and supports high graphics settings.</strong></li>
174
- <li><strong>Q: How can I update PUBG Mobile on my emulator?</strong></li>
175
- <li><strong>A: You can update PUBG Mobile on your emulator by following these steps:</strong>
176
- <ol>
177
- <li>Launch GameLoop from your desktop or start menu.</li>
178
- <li>On the Game Center tab, find PUBG Mobile and click on the "Update" button if available.</li>
179
- <li>Wait for the update to download and install.</li>
180
- <li>Click on the "Play" button to launch PUBG Mobile on GameLoop.</li>
181
- </ol>
182
- </li>
183
- <li><strong>Q: How can I fix PUBG Mobile emulator errors or issues?</strong></li>
184
- <li><strong>A: If you encounter any errors or issues while using PUBG Mobile emulator, you can try these solutions:</strong>
185
- <ul>
186
- <li>Restart your PC and your emulator.</li>
187
- <li>Update your PC drivers and software.</li>
188
- <li>Update your emulator and PUBG Mobile to the latest version.</li>
189
- <li>Clear your emulator cache and data.</li>
190
- <li>Change your emulator settings and compatibility mode.</li>
191
- <li>Contact the emulator support team or visit their official website or forum for help.</li>
192
- </ul>
193
- </li>
194
- </ul>
195
- <h2>References</h2>
196
- <p>Here are some sources and links that we used in this article:</p>
197
- <ul>
198
- <li><a href="https://gameloop.fun/">https://gameloop.fun/</a>: The official website of the GameLoop emulator.</li>
199
- <li><a href="https://www.bluestacks.com/">https://www.bluestacks.com/</a>: The official website of the BlueStacks emulator.</li>
200
- <li><a href="https://www.pubgmobile.com/en-US/home.shtml">https://www.pubgmobile.com/en-US/home.shtml</a>: The official website of PUBG Mobile.</li>
201
- <li><a href="https://www.youtube.com/watch?v=3w0yqAdJ1iY">https://www.youtube.com/watch?v=3w0yqAdJ1iY</a>: A video tutorial on how to download and install the GameLoop emulator.</li>
202
- <li><a href="https://www.androidauthority.com/best-pubg-mobile-emulators-1017527/">https://www.androidauthority.com/best-pubg-mobile-emulators-1017527/</a>: An article that reviews and compares different PUBG Mobile emulators.</li>
203
- </ul>
204
- <br />
205
- <br />
 
spaces/AI-Dashboards/ScrabbleSolverWordThesaurus/backupapp.py DELETED
@@ -1,35 +0,0 @@
1
- import streamlit as st
2
- import itertools
3
- from nltk.corpus import wordnet
4
-
5
- def get_synonyms(word):
6
- synonyms = set()
7
- for syn in wordnet.synsets(word):
8
- for lemma in syn.lemmas():
9
- synonyms.add(lemma.name())
10
- return list(synonyms)
11
-
12
- def generate_words(letters, length=None):
13
- permutations = set()
14
- for i in range(1, len(letters) + 1):
15
- for p in itertools.permutations(letters, i):
16
- word = "".join(p)
17
- if length is None or len(word) == length:
18
- permutations.add(word)
19
- return permutations
20
-
21
- st.title("Scrabble Helper")
22
-
23
- letters = st.text_input("Enter the letters you have:")
24
- word_length = st.number_input("Enter the word length (optional):", min_value=0, value=0, step=1)
25
-
26
- if letters:
27
- st.header("Generated Words")
28
- words = generate_words(letters, length=word_length if word_length > 0 else None)
29
- st.write(words)
30
-
31
- st.header("Thesaurus Lookup")
32
- selected_word = st.selectbox("Select a word to look up synonyms:", [""] + sorted(words))
33
- if selected_word:
34
- synonyms = get_synonyms(selected_word)
35
- st.write(synonyms)
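The `generate_words` helper above enumerates every permutation of every subset of the rack, which grows factorially with rack size. A minimal standalone sketch of the same idea (function name reused from the file above):

```python
import itertools

def generate_words(letters, length=None):
    # Collect unique strings formed from any ordered subset of the rack,
    # optionally filtered to a fixed word length.
    words = set()
    for i in range(1, len(letters) + 1):
        for p in itertools.permutations(letters, i):
            word = "".join(p)
            if length is None or len(word) == length:
                words.add(word)
    return words

print(sorted(generate_words("cat", length=3)))
# → ['act', 'atc', 'cat', 'cta', 'tac', 'tca']
```

Note that none of these candidates are checked against a dictionary; the app relies on the WordNet lookup step to surface real words.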
 
 
spaces/AP123/ai-avatars/train_dreambooth.py DELETED
@@ -1,881 +0,0 @@
1
- import argparse
2
- import itertools
3
- import math
4
- import os
5
- from pathlib import Path
6
- from typing import Optional
7
- import subprocess
8
- import sys
9
- import gc
10
- import random
11
-
12
- import torch
13
- import torch.nn.functional as F
14
- import torch.utils.checkpoint
15
- from torch.utils.data import Dataset
16
-
17
- from accelerate import Accelerator
18
- from accelerate.logging import get_logger
19
- from accelerate.utils import set_seed
20
- from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel
21
- from diffusers.optimization import get_scheduler
22
- from huggingface_hub import HfFolder, Repository, whoami
23
- from PIL import Image
24
- from torchvision import transforms
25
- from tqdm.auto import tqdm
26
- from transformers import CLIPTextModel, CLIPTokenizer
27
-
28
-
29
- logger = get_logger(__name__)
30
-
31
-
32
- def parse_args():
33
- parser = argparse.ArgumentParser(description="Simple example of a training script.")
34
- parser.add_argument(
35
- "--pretrained_model_name_or_path",
36
- type=str,
37
- default=None,
38
- #required=True,
39
- help="Path to pretrained model or model identifier from huggingface.co/models.",
40
- )
41
- parser.add_argument(
42
- "--tokenizer_name",
43
- type=str,
44
- default=None,
45
- help="Pretrained tokenizer name or path if not the same as model_name",
46
- )
47
- parser.add_argument(
48
- "--instance_data_dir",
49
- type=str,
50
- default=None,
51
- #required=True,
52
- help="A folder containing the training data of instance images.",
53
- )
54
- parser.add_argument(
55
- "--class_data_dir",
56
- type=str,
57
- default=None,
58
- #required=False,
59
- help="A folder containing the training data of class images.",
60
- )
61
- parser.add_argument(
62
- "--instance_prompt",
63
- type=str,
64
- default=None,
65
- help="The prompt with identifier specifying the instance",
66
- )
67
- parser.add_argument(
68
- "--class_prompt",
69
- type=str,
70
- default="",
71
- help="The prompt to specify images in the same class as provided instance images.",
72
- )
73
- parser.add_argument(
74
- "--with_prior_preservation",
75
- default=False,
76
- action="store_true",
77
- help="Flag to add prior preservation loss.",
78
- )
79
- parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.")
80
- parser.add_argument(
81
- "--num_class_images",
82
- type=int,
83
- default=100,
84
- help=(
85
- "Minimal class images for prior preservation loss. If not have enough images, additional images will be"
86
- " sampled with class_prompt."
87
- ),
88
- )
89
- parser.add_argument(
90
- "--output_dir",
91
- type=str,
92
- default="",
93
- help="The output directory where the model predictions and checkpoints will be written.",
94
- )
95
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
96
- parser.add_argument(
97
- "--resolution",
98
- type=int,
99
- default=512,
100
- help=(
101
- "The resolution for input images, all the images in the train/validation dataset will be resized to this"
102
- " resolution"
103
- ),
104
- )
105
- parser.add_argument(
106
- "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution"
107
- )
108
- parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder")
109
- parser.add_argument(
110
- "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader."
111
- )
112
- parser.add_argument(
113
- "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images."
114
- )
115
- parser.add_argument("--num_train_epochs", type=int, default=1)
116
- parser.add_argument(
117
- "--max_train_steps",
118
- type=int,
119
- default=None,
120
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
121
- )
122
- parser.add_argument(
123
- "--gradient_accumulation_steps",
124
- type=int,
125
- default=1,
126
- help="Number of updates steps to accumulate before performing a backward/update pass.",
127
- )
128
- parser.add_argument(
129
- "--gradient_checkpointing",
130
- action="store_true",
131
- help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
132
- )
133
- parser.add_argument(
134
- "--learning_rate",
135
- type=float,
136
- default=5e-6,
137
- help="Initial learning rate (after the potential warmup period) to use.",
138
- )
139
- parser.add_argument(
140
- "--scale_lr",
141
- action="store_true",
142
- default=False,
143
- help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
144
- )
145
- parser.add_argument(
146
- "--lr_scheduler",
147
- type=str,
148
- default="constant",
149
- help=(
150
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
151
- ' "constant", "constant_with_warmup"]'
152
- ),
153
- )
154
- parser.add_argument(
155
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
156
- )
157
- parser.add_argument(
158
- "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
159
- )
160
- parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
161
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
162
- parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
163
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
164
- parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
165
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
166
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
167
- parser.add_argument(
168
- "--hub_model_id",
169
- type=str,
170
- default=None,
171
- help="The name of the repository to keep in sync with the local `output_dir`.",
172
- )
173
- parser.add_argument(
174
- "--logging_dir",
175
- type=str,
176
- default="logs",
177
- help=(
178
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
179
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
180
- ),
181
- )
182
- parser.add_argument(
183
- "--mixed_precision",
184
- type=str,
185
- default="no",
186
- choices=["no", "fp16", "bf16"],
187
- help=(
188
- "Whether to use mixed precision. Choose "
189
- "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10 "
190
- "and an Nvidia Ampere GPU."
191
- ),
192
- )
193
-
194
- parser.add_argument(
195
- "--save_n_steps",
196
- type=int,
197
- default=1,
198
- help=("Save the model every n global_steps"),
199
- )
200
-
201
-
202
- parser.add_argument(
203
- "--save_starting_step",
204
- type=int,
205
- default=1,
206
- help=("The step from which it starts saving intermediary checkpoints"),
207
- )
208
-
209
- parser.add_argument(
210
- "--stop_text_encoder_training",
211
- type=int,
212
- default=1000000,
213
- help=("The step at which the text_encoder is no longer trained"),
214
- )
215
-
216
-
217
- parser.add_argument(
218
- "--image_captions_filename",
219
- action="store_true",
220
- help="Get captions from filename",
221
- )
222
-
223
-
224
- parser.add_argument(
225
- "--dump_only_text_encoder",
226
- action="store_true",
227
- default=False,
228
- help="Dump only text encoder",
229
- )
230
-
231
- parser.add_argument(
232
- "--train_only_unet",
233
- action="store_true",
234
- default=False,
235
- help="Train only the unet",
236
- )
237
-
238
- parser.add_argument(
239
- "--cache_latents",
240
- action="store_true",
241
- default=False,
242
- help="Cache the VAE latents and text encoder outputs",
243
- )
244
-
245
- parser.add_argument(
246
- "--Session_dir",
247
- type=str,
248
- default="",
249
- help="Current session directory",
250
- )
251
-
252
-
253
-
254
-
255
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
256
-
257
- args = parser.parse_args()
258
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
259
- if env_local_rank != -1 and env_local_rank != args.local_rank:
260
- args.local_rank = env_local_rank
261
-
262
- #if args.instance_data_dir is None:
263
- # raise ValueError("You must specify a train data directory.")
264
-
265
- #if args.with_prior_preservation:
266
- # if args.class_data_dir is None:
267
- # raise ValueError("You must specify a data directory for class images.")
268
- # if args.class_prompt is None:
269
- # raise ValueError("You must specify prompt for class images.")
270
-
271
- return args
272
-
273
-
274
- class DreamBoothDataset(Dataset):
275
- """
276
- A dataset to prepare the instance and class images with the prompts for fine-tuning the model.
277
- It pre-processes the images and the tokenizes prompts.
278
- """
279
-
280
- def __init__(
281
- self,
282
- instance_data_root,
283
- instance_prompt,
284
- tokenizer,
285
- args,
286
- class_data_root=None,
287
- class_prompt=None,
288
- size=512,
289
- center_crop=False,
290
- ):
291
- self.size = size
292
- self.center_crop = center_crop
293
- self.tokenizer = tokenizer
294
- self.image_captions_filename = None
295
-
296
- self.instance_data_root = Path(instance_data_root)
297
- if not self.instance_data_root.exists():
298
- raise ValueError("Instance images root doesn't exist.")
299
-
300
- self.instance_images_path = list(Path(instance_data_root).iterdir())
301
- self.num_instance_images = len(self.instance_images_path)
302
- self.instance_prompt = instance_prompt
303
- self._length = self.num_instance_images
304
-
305
- if args.image_captions_filename:
306
- self.image_captions_filename = True
307
-
308
- if class_data_root is not None:
309
- self.class_data_root = Path(class_data_root)
310
- self.class_data_root.mkdir(parents=True, exist_ok=True)
311
- self.class_images_path = list(self.class_data_root.iterdir())
312
- random.shuffle(self.class_images_path)
313
- self.num_class_images = len(self.class_images_path)
314
- self._length = max(self.num_class_images, self.num_instance_images)
315
- self.class_prompt = class_prompt
316
- else:
317
- self.class_data_root = None
318
-
319
- self.image_transforms = transforms.Compose(
320
- [
321
- transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR),
322
- transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size),
323
- transforms.ToTensor(),
324
- transforms.Normalize([0.5], [0.5]),
325
- ]
326
- )
327
-
328
- def __len__(self):
329
- return self._length
330
-
331
- def __getitem__(self, index):
332
- example = {}
333
- path = self.instance_images_path[index % self.num_instance_images]
334
- instance_image = Image.open(path)
335
- if not instance_image.mode == "RGB":
336
- instance_image = instance_image.convert("RGB")
337
-
338
- instance_prompt = self.instance_prompt
339
-
340
- if self.image_captions_filename:
341
- filename = Path(path).stem
342
- pt=''.join([i for i in filename if not i.isdigit()])
343
- pt=pt.replace("_"," ")
344
- pt=pt.replace("(","")
345
- pt=pt.replace(")","")
346
- pt=pt.replace("-","")
347
- instance_prompt = pt
348
- sys.stdout.write(" " +instance_prompt+" ")
349
- sys.stdout.flush()
350
-
351
-
352
- example["instance_images"] = self.image_transforms(instance_image)
353
- example["instance_prompt_ids"] = self.tokenizer(
354
- instance_prompt,
355
- padding="do_not_pad",
356
- truncation=True,
357
- max_length=self.tokenizer.model_max_length,
358
- ).input_ids
359
-
360
- if self.class_data_root:
361
- class_image = Image.open(self.class_images_path[index % self.num_class_images])
362
- if not class_image.mode == "RGB":
363
- class_image = class_image.convert("RGB")
364
- example["class_images"] = self.image_transforms(class_image)
365
- example["class_prompt_ids"] = self.tokenizer(
366
- self.class_prompt,
367
- padding="do_not_pad",
368
- truncation=True,
369
- max_length=self.tokenizer.model_max_length,
370
- ).input_ids
371
-
372
- return example
373
-
374
-
375
-
376
- class PromptDataset(Dataset):
377
- "A simple dataset to prepare the prompts to generate class images on multiple GPUs."
378
-
379
- def __init__(self, prompt, num_samples):
380
- self.prompt = prompt
381
- self.num_samples = num_samples
382
-
383
- def __len__(self):
384
- return self.num_samples
385
-
386
- def __getitem__(self, index):
387
- example = {}
388
- example["prompt"] = self.prompt
389
- example["index"] = index
390
- return example
391
-
392
- class LatentsDataset(Dataset):
393
- def __init__(self, latents_cache, text_encoder_cache):
394
- self.latents_cache = latents_cache
395
- self.text_encoder_cache = text_encoder_cache
396
-
397
- def __len__(self):
398
- return len(self.latents_cache)
399
-
400
- def __getitem__(self, index):
401
- return self.latents_cache[index], self.text_encoder_cache[index]
402
-
403
- def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None):
404
- if token is None:
405
- token = HfFolder.get_token()
406
- if organization is None:
407
- username = whoami(token)["name"]
408
- return f"{username}/{model_id}"
409
- else:
410
- return f"{organization}/{model_id}"
411
-
412
- def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict:
413
- """
414
- Starts from base starting dict and then adds the remaining key values from updater replacing the values from
415
- the first starting/base dict with the second updater dict.
416
-
417
- For later: how does d = {**d1, **d2} replace collision?
418
-
419
- :param starting_dict:
420
- :param updater_dict:
421
- :return:
422
- """
423
- new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict
424
- new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict
425
- return new_dict
426
-
427
- def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace:
428
- """
429
-
430
- ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x
431
- :param args1:
432
- :param args2:
433
- :return:
434
- """
435
- # - the merged args
436
- # The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}.
437
- merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2))
438
- args = argparse.Namespace(**merged_key_values_for_namespace)
439
- return args
440
-
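`merge_two_dicts` and `merge_args` above follow the same collision rule as Python's `{**d1, **d2}` unpacking, answering the docstring's question: the updater's value wins. A minimal check (the field names here are made up for illustration):

```python
import argparse

def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict:
    # Copy the base dict, then overwrite with the updater's keys on collision.
    new_dict = starting_dict.copy()
    new_dict.update(updater_dict)
    return new_dict

def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace:
    # vars() exposes a Namespace's fields as a dict; rebuild a Namespace from the merge.
    return argparse.Namespace(**merge_two_dicts(vars(args1), vars(args2)))

defaults = argparse.Namespace(learning_rate=5e-6, max_train_steps=None)
overrides = argparse.Namespace(max_train_steps=800)
merged = merge_args(defaults, overrides)
print(merged.learning_rate, merged.max_train_steps)  # 5e-06 800
```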
441
- def run_training(args_imported):
442
- args_default = parse_args()
443
- args = merge_args(args_default, args_imported)
444
- print(args)
445
- logging_dir = Path(args.output_dir, args.logging_dir)
446
- i=args.save_starting_step
447
- accelerator = Accelerator(
448
- gradient_accumulation_steps=args.gradient_accumulation_steps,
449
- mixed_precision=args.mixed_precision,
450
- log_with="tensorboard",
451
- logging_dir=logging_dir,
452
- )
453
-
454
- # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate
455
-     # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models.
-     # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate.
-     if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1:
-         raise ValueError(
-             "Gradient accumulation is not supported when training the text encoder in distributed training. "
-             "Please set gradient_accumulation_steps to 1. This feature will be supported in the future."
-         )
-
-     if args.seed is not None:
-         set_seed(args.seed)
-
-     if args.with_prior_preservation:
-         class_images_dir = Path(args.class_data_dir)
-         if not class_images_dir.exists():
-             class_images_dir.mkdir(parents=True)
-         cur_class_images = len(list(class_images_dir.iterdir()))
-
-         if cur_class_images < args.num_class_images:
-             torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32
-             pipeline = StableDiffusionPipeline.from_pretrained(
-                 args.pretrained_model_name_or_path, torch_dtype=torch_dtype
-             )
-             pipeline.set_progress_bar_config(disable=True)
-
-             num_new_images = args.num_class_images - cur_class_images
-             logger.info(f"Number of class images to sample: {num_new_images}.")
-
-             sample_dataset = PromptDataset(args.class_prompt, num_new_images)
-             sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size)
-
-             sample_dataloader = accelerator.prepare(sample_dataloader)
-             pipeline.to(accelerator.device)
-
-             for example in tqdm(
-                 sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process
-             ):
-                 with torch.autocast("cuda"):
-                     images = pipeline(example["prompt"]).images
-
-                 for i, image in enumerate(images):
-                     image.save(class_images_dir / f"{example['index'][i] + cur_class_images}.jpg")
-
-             del pipeline
-             if torch.cuda.is_available():
-                 torch.cuda.empty_cache()
-
-     # Handle the repository creation
-     if accelerator.is_main_process:
-         if args.push_to_hub:
-             if args.hub_model_id is None:
-                 repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token)
-             else:
-                 repo_name = args.hub_model_id
-             repo = Repository(args.output_dir, clone_from=repo_name)
-
-             with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore:
-                 if "step_*" not in gitignore:
-                     gitignore.write("step_*\n")
-                 if "epoch_*" not in gitignore:
-                     gitignore.write("epoch_*\n")
-         elif args.output_dir is not None:
-             os.makedirs(args.output_dir, exist_ok=True)
-
-     # Load the tokenizer
-     if args.tokenizer_name:
-         tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name)
-     elif args.pretrained_model_name_or_path:
-         tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
-
-     # Load models and create wrapper for stable diffusion
-     if args.train_only_unet:
-         if os.path.exists(str(args.output_dir + "/text_encoder_trained")):
-             text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder_trained")
-         elif os.path.exists(str(args.output_dir + "/text_encoder")):
-             text_encoder = CLIPTextModel.from_pretrained(args.output_dir, subfolder="text_encoder")
-         else:
-             text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
-     else:
-         text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
-     vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
-     unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet")
-
-     vae.requires_grad_(False)
-     if not args.train_text_encoder:
-         text_encoder.requires_grad_(False)
-
-     if args.gradient_checkpointing:
-         unet.enable_gradient_checkpointing()
-         if args.train_text_encoder:
-             text_encoder.gradient_checkpointing_enable()
-
-     if args.scale_lr:
-         args.learning_rate = (
-             args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
-         )
-
-     # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs
-     if args.use_8bit_adam:
-         try:
-             import bitsandbytes as bnb
-         except ImportError:
-             raise ImportError(
-                 "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`."
-             )
-
-         optimizer_class = bnb.optim.AdamW8bit
-     else:
-         optimizer_class = torch.optim.AdamW
-
-     params_to_optimize = (
-         itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters()
-     )
-     optimizer = optimizer_class(
-         params_to_optimize,
-         lr=args.learning_rate,
-         betas=(args.adam_beta1, args.adam_beta2),
-         weight_decay=args.adam_weight_decay,
-         eps=args.adam_epsilon,
-     )
-
-     noise_scheduler = DDPMScheduler.from_config(args.pretrained_model_name_or_path, subfolder="scheduler")
-
-     train_dataset = DreamBoothDataset(
-         instance_data_root=args.instance_data_dir,
-         instance_prompt=args.instance_prompt,
-         class_data_root=args.class_data_dir if args.with_prior_preservation else None,
-         class_prompt=args.class_prompt,
-         tokenizer=tokenizer,
-         size=args.resolution,
-         center_crop=args.center_crop,
-         args=args,
-     )
-
-     def collate_fn(examples):
-         input_ids = [example["instance_prompt_ids"] for example in examples]
-         pixel_values = [example["instance_images"] for example in examples]
-
-         # Concat class and instance examples for prior preservation.
-         # We do this to avoid doing two forward passes.
-         if args.with_prior_preservation:
-             input_ids += [example["class_prompt_ids"] for example in examples]
-             pixel_values += [example["class_images"] for example in examples]
-
-         pixel_values = torch.stack(pixel_values)
-         pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
-
-         input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids
-
-         batch = {
-             "input_ids": input_ids,
-             "pixel_values": pixel_values,
-         }
-         return batch
-
-     train_dataloader = torch.utils.data.DataLoader(
-         train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn
-     )
-
-     # Scheduler and math around the number of training steps.
-     overrode_max_train_steps = False
-     num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
-     if args.max_train_steps is None:
-         args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
-         overrode_max_train_steps = True
-
-     lr_scheduler = get_scheduler(
-         args.lr_scheduler,
-         optimizer=optimizer,
-         num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
-         num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
-     )
-
-     if args.train_text_encoder:
-         unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
-             unet, text_encoder, optimizer, train_dataloader, lr_scheduler
-         )
-     else:
-         unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
-             unet, optimizer, train_dataloader, lr_scheduler
-         )
-
-     weight_dtype = torch.float32
-     if args.mixed_precision == "fp16":
-         weight_dtype = torch.float16
-     elif args.mixed_precision == "bf16":
-         weight_dtype = torch.bfloat16
-
-     # Move text_encoder and vae to gpu.
-     # For mixed precision training we cast the text_encoder and vae weights to half-precision
-     # as these models are only used for inference, keeping weights in full precision is not required.
-     vae.to(accelerator.device, dtype=weight_dtype)
-     if not args.train_text_encoder:
-         text_encoder.to(accelerator.device, dtype=weight_dtype)
-
-     if args.cache_latents:
-         latents_cache = []
-         text_encoder_cache = []
-         for batch in tqdm(train_dataloader, desc="Caching latents"):
-             with torch.no_grad():
-                 batch["pixel_values"] = batch["pixel_values"].to(accelerator.device, non_blocking=True, dtype=weight_dtype)
-                 batch["input_ids"] = batch["input_ids"].to(accelerator.device, non_blocking=True)
-                 latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist)
-                 if args.train_text_encoder:
-                     text_encoder_cache.append(batch["input_ids"])
-                 else:
-                     text_encoder_cache.append(text_encoder(batch["input_ids"])[0])
-         train_dataset = LatentsDataset(latents_cache, text_encoder_cache)
-         train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, collate_fn=lambda x: x, shuffle=True)
-
-         del vae
-         # if not args.train_text_encoder:
-         #     del text_encoder
-         if torch.cuda.is_available():
-             torch.cuda.empty_cache()
-
-     # We need to recalculate our total training steps as the size of the training dataloader may have changed.
-     num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
-     if overrode_max_train_steps:
-         args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
-     # Afterwards we recalculate our number of training epochs
-     args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
-
-     # We need to initialize the trackers we use, and also store our configuration.
-     # The trackers initialize automatically on the main process.
-     if accelerator.is_main_process:
-         accelerator.init_trackers("dreambooth", config=vars(args))
-
-     def bar(prg):
-         br = '|' + '█' * prg + ' ' * (25 - prg) + '|'
-         return br
-
-     # Train!
-     total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
-
-     logger.info("***** Running training *****")
-     logger.info(f"  Num examples = {len(train_dataset)}")
-     logger.info(f"  Num batches each epoch = {len(train_dataloader)}")
-     logger.info(f"  Num Epochs = {args.num_train_epochs}")
-     logger.info(f"  Instantaneous batch size per device = {args.train_batch_size}")
-     logger.info(f"  Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
-     logger.info(f"  Gradient Accumulation steps = {args.gradient_accumulation_steps}")
-     logger.info(f"  Total optimization steps = {args.max_train_steps}")
-     # Only show the progress bar once on each machine.
-     progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
-     global_step = 0
-
-     for epoch in range(args.num_train_epochs):
-         unet.train()
-         if args.train_text_encoder:
-             text_encoder.train()
-         for step, batch in enumerate(train_dataloader):
-             with accelerator.accumulate(unet):
-                 # Convert images to latent space
-                 with torch.no_grad():
-                     if args.cache_latents:
-                         latents_dist = batch[0][0]
-                     else:
-                         latents_dist = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist
-                     latents = latents_dist.sample() * 0.18215
-
-                 # Sample noise that we'll add to the latents
-                 noise = torch.randn_like(latents)
-                 bsz = latents.shape[0]
-                 # Sample a random timestep for each image
-                 timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
-                 timesteps = timesteps.long()
-
-                 # Add noise to the latents according to the noise magnitude at each timestep
-                 # (this is the forward diffusion process)
-                 noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
-
-                 # Get the text embedding for conditioning
-                 if args.cache_latents:
-                     if args.train_text_encoder:
-                         encoder_hidden_states = text_encoder(batch[0][1])[0]
-                     else:
-                         encoder_hidden_states = batch[0][1]
-                 else:
-                     encoder_hidden_states = text_encoder(batch["input_ids"])[0]
-
-                 # Predict the noise residual
-                 model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
-
-                 # Get the target for loss depending on the prediction type
-                 if noise_scheduler.config.prediction_type == "epsilon":
-                     target = noise
-                 elif noise_scheduler.config.prediction_type == "v_prediction":
-                     target = noise_scheduler.get_velocity(latents, noise, timesteps)
-                 else:
-                     raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
-
-                 if args.with_prior_preservation:
-                     # Chunk the noise and model_pred into two parts and compute the loss on each part separately.
-                     model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
-                     target, target_prior = torch.chunk(target, 2, dim=0)
-
-                     # Compute instance loss
-                     loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean()
-
-                     # Compute prior loss
-                     prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean")
-
-                     # Add the prior loss to the instance loss.
-                     loss = loss + args.prior_loss_weight * prior_loss
-                 else:
-                     loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
-
-                 accelerator.backward(loss)
-                 if accelerator.sync_gradients:
-                     params_to_clip = (
-                         itertools.chain(unet.parameters(), text_encoder.parameters())
-                         if args.train_text_encoder
-                         else unet.parameters()
-                     )
-                     accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
-                 optimizer.step()
-                 lr_scheduler.step()
-                 optimizer.zero_grad()
-
-             # Checks if the accelerator has performed an optimization step behind the scenes
-             if accelerator.sync_gradients:
-                 progress_bar.update(1)
-                 global_step += 1
-
-             fll = round((global_step * 100) / args.max_train_steps)
-             fll = round(fll / 4)
-             pr = bar(fll)
-
-             logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
-             progress_bar.set_postfix(**logs)
-             progress_bar.set_description_str("Progress:" + pr)
-             accelerator.log(logs, step=global_step)
-
-             if global_step >= args.max_train_steps:
-                 break
-
-             if args.train_text_encoder and global_step == args.stop_text_encoder_training and global_step >= 30:
-                 if accelerator.is_main_process:
-                     print(" Freezing the text_encoder ...")
-                     frz_dir = args.output_dir + "/text_encoder_frozen"
-                     if os.path.exists(frz_dir):
-                         subprocess.call('rm -r ' + frz_dir, shell=True)
-                     os.mkdir(frz_dir)
-                     pipeline = StableDiffusionPipeline.from_pretrained(
-                         args.pretrained_model_name_or_path,
-                         unet=accelerator.unwrap_model(unet),
-                         text_encoder=accelerator.unwrap_model(text_encoder),
-                     )
-                     pipeline.text_encoder.save_pretrained(frz_dir)
-
-             if args.save_n_steps >= 200:
-                 if global_step < args.max_train_steps and global_step + 1 == i:
-                     ckpt_name = "_step_" + str(global_step + 1)
-                     save_dir = Path(args.output_dir + ckpt_name)
-                     save_dir = str(save_dir)
-                     save_dir = save_dir.replace(" ", "_")
-                     if not os.path.exists(save_dir):
-                         os.mkdir(save_dir)
-                     inst = save_dir[16:]
-                     inst = inst.replace(" ", "_")
-                     print(" SAVING CHECKPOINT: " + args.Session_dir + "/" + inst + ".ckpt")
-                     # Create the pipeline using the trained modules and save it.
-                     if accelerator.is_main_process:
-                         pipeline = StableDiffusionPipeline.from_pretrained(
-                             args.pretrained_model_name_or_path,
-                             unet=accelerator.unwrap_model(unet),
-                             text_encoder=accelerator.unwrap_model(text_encoder),
-                         )
-                         pipeline.save_pretrained(save_dir)
-                         frz_dir = args.output_dir + "/text_encoder_frozen"
-                         if args.train_text_encoder and os.path.exists(frz_dir):
-                             subprocess.call('rm -r ' + save_dir + '/text_encoder/*.*', shell=True)
-                             subprocess.call('cp -f ' + frz_dir + '/*.* ' + save_dir + '/text_encoder', shell=True)
-                         chkpth = args.Session_dir + "/" + inst + ".ckpt"
-                         subprocess.call('python /content/diffusers/scripts/convert_diffusers_to_original_stable_diffusion.py --model_path ' + save_dir + ' --checkpoint_path ' + chkpth + ' --half', shell=True)
-                         subprocess.call('rm -r ' + save_dir, shell=True)
-                         i = i + args.save_n_steps
-
-     accelerator.wait_for_everyone()
-
-     # Create the pipeline using the trained modules and save it.
-     if accelerator.is_main_process:
-         if args.dump_only_text_encoder:
-             txt_dir = args.output_dir + "/text_encoder_trained"
-             if not os.path.exists(txt_dir):
-                 os.mkdir(txt_dir)
-             pipeline = StableDiffusionPipeline.from_pretrained(
-                 args.pretrained_model_name_or_path,
-                 unet=accelerator.unwrap_model(unet),
-                 text_encoder=accelerator.unwrap_model(text_encoder),
-             )
-             pipeline.text_encoder.save_pretrained(txt_dir)
-
-         elif args.train_only_unet:
-             pipeline = StableDiffusionPipeline.from_pretrained(
-                 args.pretrained_model_name_or_path,
-                 unet=accelerator.unwrap_model(unet),
-                 text_encoder=accelerator.unwrap_model(text_encoder),
-             )
-             pipeline.save_pretrained(args.output_dir)
-             txt_dir = args.output_dir + "/text_encoder_trained"
-             subprocess.call('rm -r ' + txt_dir, shell=True)
-
-         else:
-             pipeline = StableDiffusionPipeline.from_pretrained(
-                 args.pretrained_model_name_or_path,
-                 unet=accelerator.unwrap_model(unet),
-                 text_encoder=accelerator.unwrap_model(text_encoder),
-             )
-             frz_dir = args.output_dir + "/text_encoder_frozen"
-             pipeline.save_pretrained(args.output_dir)
-             if args.train_text_encoder and os.path.exists(frz_dir):
-                 subprocess.call('mv -f ' + frz_dir + '/*.* ' + args.output_dir + '/text_encoder', shell=True)
-                 subprocess.call('rm -r ' + frz_dir, shell=True)
-
-         if args.push_to_hub:
-             repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True)
-
-     accelerator.end_training()
-     del pipeline
-     torch.cuda.empty_cache()
-     gc.collect()
-
- if __name__ == "__main__":
-     pass
-     # main()
spaces/Abdullah-Habib/Rabbit_or_Hare/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Rabbit Or Hare
- emoji: 📊
- colorFrom: pink
- colorTo: blue
- sdk: gradio
- sdk_version: 3.35.2
- app_file: app.py
- pinned: false
- license: apache-2.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AgentVerse/agentVerse/agentverse/agents/simulation_agent/reflection.py DELETED
@@ -1,227 +0,0 @@
- from __future__ import annotations
-
- """
- An agent based upon Observation-Planning-Reflection architecture.
- """
-
- from logging import getLogger
-
- from abc import abstractmethod
- from typing import List, Set, Union, NamedTuple, TYPE_CHECKING
-
- from pydantic import BaseModel, Field, validator
-
- from agentverse.llms import BaseLLM
- from agentverse.memory import BaseMemory, ChatHistoryMemory
- from agentverse.message import Message
- from agentverse.output_parser import OutputParser
-
- from agentverse.message import Message
- from agentverse.agents.base import BaseAgent
-
- from datetime import datetime as dt
- import datetime
-
- # from . import agent_registry
- from string import Template
-
- from agentverse.agents import agent_registry
- from agentverse.agents.base import BaseAgent
-
- logger = getLogger(__file__)
-
- if TYPE_CHECKING:
-     from agentverse.environments.base import BaseEnvironment
-
-
- @agent_registry.register("reflection")
- class ReflectionAgent(BaseAgent):
-     async_mode: bool = (True,)
-     current_time: str = (None,)
-     environment: BaseEnvironment = None
-     step_cnt: int = 0
-
-     manipulated_memory: str = Field(
-         default="", description="one fragment used in prompt construction"
-     )
-
-     @validator("current_time")
-     def convert_str_to_dt(cls, current_time):
-         if not isinstance(current_time, str):
-             raise ValueError("current_time should be str")
-         return dt.strptime(current_time, "%Y-%m-%d %H:%M:%S")
-
-     def step(self, current_time: dt, env_description: str = "") -> Message:
-         """
-         Call this method at each time frame
-         """
-         self.current_time = current_time
-
-         self.manipulated_memory = self.memory_manipulator.manipulate_memory()
-
-         prompt = self._fill_prompt_template(env_description)
-
-         parsed_response, reaction, target = None, None, None
-         for i in range(self.max_retry):
-             try:
-                 response = self.llm.agenerate_response(prompt)
-                 parsed_response = self.output_parser.parse(response)
-
-                 if "say(" in parsed_response.return_values["output"]:
-                     reaction, target = eval(
-                         "self._" + parsed_response.return_values["output"].strip()
-                     )
-                 elif "act(" in parsed_response.return_values["output"]:
-                     reaction, target = eval(
-                         "self._" + parsed_response.return_values["output"].strip()
-                     )
-                 elif "do_nothing(" in parsed_response.return_values["output"]:
-                     reaction, target = None, None
-                 else:
-                     raise Exception(
-                         f"no valid parsed_response detected, "
-                         f"cur response {parsed_response.return_values['output']}"
-                     )
-                 break
-
-             except Exception as e:
-                 logger.error(e)
-                 logger.warn("Retrying...")
-                 continue
-
-         if parsed_response is None:
-             logger.error(f"{self.name} failed to generate valid response.")
-
-         if reaction is None:
-             reaction = "Keep doing last action ..."
-
-         message = Message(
-             content="" if reaction is None else reaction,
-             sender=self.name,
-             receiver=self.get_receiver()
-             if target is None
-             else self.get_valid_receiver(target),
-         )
-
-         self.step_cnt += 1
-
-         return message
-
-     async def astep(self, current_time: dt, env_description: str = "") -> Message:
-         """Asynchronous version of step"""
-         # use environment's time to update agent's time
-         self.current_time = current_time
-         # Before the agent step, we check current status,
-         # TODO add this func after
-         # self.check_status_passive()
-
-         self.manipulated_memory = self.memory_manipulator.manipulate_memory()
-
-         prompt = self._fill_prompt_template(env_description)
-
-         parsed_response, reaction, target = None, None, None
-         for i in range(self.max_retry):
-             try:
-                 response = await self.llm.agenerate_response(prompt)
-                 parsed_response = self.output_parser.parse(response)
-
-                 if "say(" in parsed_response.return_values["output"]:
-                     reaction, target = eval(
-                         "self._" + parsed_response.return_values["output"].strip()
-                     )
-                 elif "act(" in parsed_response.return_values["output"]:
-                     reaction, target = eval(
-                         "self._" + parsed_response.return_values["output"].strip()
-                     )
-                 elif "do_nothing(" in parsed_response.return_values["output"]:
-                     reaction, target = None, None
-                 else:
-                     raise Exception(
-                         f"no valid parsed_response detected, "
-                         f"cur response {parsed_response.return_values['output']}"
-                     )
-
-                 break
-
-             except Exception as e:
-                 logger.error(e)
-                 logger.warn("Retrying...")
-                 continue
-
-         if parsed_response is None:
-             logger.error(f"{self.name} failed to generate valid response.")
-
-         if reaction is None:
-             reaction = "Keep doing last action ..."
-
-         message = Message(
-             content="" if reaction is None else reaction,
-             sender=self.name,
-             receiver=self.get_receiver()
-             if target is None
-             else self.get_valid_receiver(target),
-         )
-
-         self.step_cnt += 1
-
-         return message
-
-     def _act(self, description=None, target=None):
-         if description is None:
-             return ""
-         if target is None:
-             reaction_content = f"{self.name} performs action: '{description}'."
-         else:
-             reaction_content = (
-                 f"{self.name} performs action to {target}: '{description}'."
-             )
-         # self.environment.broadcast_observations(self, target, reaction_content)
-         return reaction_content, target
-
-     def _say(self, description, target=None):
-         if description is None:
-             return ""
-         if target is None:
-             reaction_content = f"{self.name} says: '{description}'."
-         else:
-             reaction_content = f"{self.name} says to {target}: '{description}'."
-         # self.environment.broadcast_observations(self, target, reaction_content)
-         return reaction_content, target
-
-     def get_valid_receiver(self, target: str) -> set():
-         all_agents_name = []
-         for agent in self.environment.agents:
-             all_agents_name.append(agent.name)
-
-         if not (target in all_agents_name):
-             return {"all"}
-         else:
-             return {target}
-
-     def _fill_prompt_template(self, env_description: str = "") -> str:
-         """Fill the placeholders in the prompt template
-
-         In the conversation agent, the following placeholders are supported:
-         - ${agent_name}: the name of the agent
-         - ${env_description}: the description of the environment
-         - ${role_description}: the description of the role of the agent
-         - ${chat_history}: the chat history of the agent
-         """
-         input_arguments = {
-             "agent_name": self.name,
-             "role_description": self.role_description,
-             "chat_history": self.memory.to_string(add_sender_prefix=True),
-             "current_time": self.current_time,
-             "env_description": env_description,
-         }
-         return Template(self.prompt_template).safe_substitute(input_arguments)
-
-     def add_message_to_memory(self, messages: List[Message]) -> None:
-         self.memory.add_message(messages)
-
-     def reset(self, environment: BaseEnvironment) -> None:
-         """Reset the agent"""
-         self.environment = environment
-         self.memory.reset()
-         self.memory_manipulator.agent = self
-         self.memory_manipulator.memory = self.memory
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/rings/Rings.js DELETED
@@ -1,38 +0,0 @@
- import Base from '../base/Base.js';
- import { Circle } from '../utils/Geoms.js'
- import Yoyo from '../utils/Yoyo.js';
-
-
- class Rings extends Base {
-     constructor(scene, config) {
-         super(scene, config);
-         this.type = 'rexSpinnerRings';
-     }
-
-     buildShapes() {
-         for (var i = 0; i < 2; i++) {
-             this.addShape(new Circle());
-         }
-     }
-
-     updateShapes() {
-         var centerX = this.centerX;
-         var centerY = this.centerY;
-         var radius = this.radius;
-         var lineWidth = Math.ceil(radius / 25);
-         var maxRingRadius = radius - lineWidth;
-
-         var shapes = this.getShapes();
-         for (var i = 0, cnt = shapes.length; i < cnt; i++) {
-             var ring = shapes[i];
-             var t = (this.value + (i / cnt)) % 1;
-             var alpha = Yoyo(t);
-             ring
-                 .lineStyle(lineWidth, this.color, alpha)
-                 .setRadius(t * maxRingRadius)
-                 .setCenterPosition(centerX, centerY)
-         }
-     }
- }
-
- export default Rings;
spaces/Aloento/9Nine-PITS/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: 9Nine PITS
- emoji: 🚀
- colorFrom: red
- colorTo: purple
- sdk: gradio
- sdk_version: 3.23.0
- app_file: app.py
- pinned: false
- license: agpl-3.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Ameaou/academic-chatgpt3.1/crazy_functions/批量总结PDF文档.py DELETED
@@ -1,166 +0,0 @@
- from toolbox import update_ui
- from toolbox import CatchException, report_execption, write_results_to_file
- import re
- import unicodedata
- fast_debug = False
- from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-
- def is_paragraph_break(match):
-     """
-     Decide, from the given regex match, whether a newline marks a paragraph break.
-     If the character before the newline is a sentence-ending mark (period, exclamation mark, question mark)
-     and the next character is uppercase, the newline is more likely a paragraph break.
-     The length of the preceding content is also used to judge whether the paragraph is already long enough.
-     """
-     prev_char, next_char = match.groups()
-
-     # Sentence-ending marks
-     sentence_endings = ".!?"
-
-     # Minimum paragraph length threshold
-     min_paragraph_length = 140
-
-     if prev_char in sentence_endings and next_char.isupper() and len(match.string[:match.start(1)]) > min_paragraph_length:
-         return "\n\n"
-     else:
-         return " "
-
- def normalize_text(text):
-     """
-     Normalize the text by converting ligatures and other special symbols to their basic forms,
-     e.g. the ligature "fi" becomes "f" and "i".
-     """
-     # Normalize the text, decomposing ligatures
-     normalized_text = unicodedata.normalize("NFKD", text)
-
-     # Replace other special characters
-     cleaned_text = re.sub(r'[^\x00-\x7F]+', '', normalized_text)
-
-     return cleaned_text
-
- def clean_text(raw_text):
-     """
-     Clean and format the raw text extracted from a PDF.
-     1. Normalize the raw text.
-     2. Join words hyphenated across lines, e.g. "Espe-\ncially" becomes "Especially".
-     3. Use heuristics to decide whether each newline is a paragraph break and replace it accordingly.
-     """
-     # Normalize the text
-     normalized_text = normalize_text(raw_text)
-
-     # Join words hyphenated across lines
-     text = re.sub(r'(\w+-\n\w+)', lambda m: m.group(1).replace('-\n', ''), normalized_text)
-
-     # Locate the newlines in the original text from the surrounding characters
-     newlines = re.compile(r'(\S)\n(\S)')
-
-     # Replace each newline with a space or a paragraph separator according to the heuristic
-     final_text = re.sub(newlines, lambda m: m.group(1) + is_paragraph_break(m) + m.group(2), text)
-
-     return final_text.strip()
-
- def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
-     import time, glob, os, fitz
-     print('begin analysis on:', file_manifest)
-     for index, fp in enumerate(file_manifest):
-         with fitz.open(fp) as doc:
-             file_content = ""
-             for page in doc:
-                 file_content += page.get_text()
-             file_content = clean_text(file_content)
-             print(file_content)
-
-         prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index == 0 else ""
-         i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
-         i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
-         chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-
-         if not fast_debug:
-             msg = '正常'
-             # ** gpt request **
-             gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-                 inputs=i_say,
-                 inputs_show_user=i_say_show_user,
-                 llm_kwargs=llm_kwargs,
-                 chatbot=chatbot,
-                 history=[],
-                 sys_prompt="总结文章。"
-             )  # with countdown timeout
-
-             chatbot[-1] = (i_say_show_user, gpt_say)
-             history.append(i_say_show_user); history.append(gpt_say)
-             yield from update_ui(chatbot=chatbot, history=history, msg=msg)  # refresh the UI
-             if not fast_debug: time.sleep(2)
-
-     all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
-     i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
-     chatbot.append((i_say, "[Local Message] waiting gpt response."))
-     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-
-     if not fast_debug:
-         msg = '正常'
-         # ** gpt request **
-         gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-             inputs=i_say,
-             inputs_show_user=i_say,
-             llm_kwargs=llm_kwargs,
-             chatbot=chatbot,
-             history=history,
-             sys_prompt="总结文章。"
-         )  # with countdown timeout
-
-         chatbot[-1] = (i_say, gpt_say)
-         history.append(i_say); history.append(gpt_say)
-         yield from update_ui(chatbot=chatbot, history=history, msg=msg)  # refresh the UI
-         res = write_results_to_file(history)
-         chatbot.append(("完成了吗?", res))
-         yield from update_ui(chatbot=chatbot, history=history, msg=msg)  # refresh the UI
-
-
- @CatchException
- def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-     import glob, os
-
-     # Basic info: plugin features and contributors
-     chatbot.append([
-         "函数插件功能?",
-         "批量总结PDF文档。函数插件贡献者: ValeriaWong,Eralien"])
-     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-
-     # Try importing the dependencies; if any are missing, suggest how to install them
-     try:
-         import fitz
-     except:
-         report_execption(chatbot, history,
-                          a=f"解析项目: {txt}",
-                          b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-
-     # Clear the history to avoid input overflow
-     history = []
-
-     # Check the input argument; exit directly if none is given
-     if os.path.exists(txt):
-         project_folder = txt
-     else:
-         if txt == "": txt = '空空如也的输入栏'
-         report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-
-     # Collect the list of files to process
-     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]  # + \
-     # [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
-     # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
-     # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
-
-     # If no files were found
-     if len(file_manifest) == 0:
-         report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex或.pdf文件: {txt}")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-
-     # Start the actual task
166
- yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
spaces/Amon1/ChatGPTForAcadamic/crazy_functions/读文章写摘要.py DELETED
@@ -1,70 +0,0 @@
- from predict import predict_no_ui
- from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down
- fast_debug = False
- 
- 
- def 解析Paper(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt):
-     import time, glob, os
-     print('begin analysis on:', file_manifest)
-     for index, fp in enumerate(file_manifest):
-         with open(fp, 'r', encoding='utf-8') as f:
-             file_content = f.read()
- 
-         prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
-         i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
-         i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
-         chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
-         print('[1] yield chatbot, history')
-         yield chatbot, history, '正常'
- 
-         if not fast_debug:
-             msg = '正常'
-             # ** gpt request **
-             gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # with timeout countdown
- 
-             print('[2] end gpt req')
-             chatbot[-1] = (i_say_show_user, gpt_say)
-             history.append(i_say_show_user); history.append(gpt_say)
-             print('[3] yield chatbot, history')
-             yield chatbot, history, msg
-             print('[4] next')
-             if not fast_debug: time.sleep(2)
- 
-     all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
-     i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
-     chatbot.append((i_say, "[Local Message] waiting gpt response."))
-     yield chatbot, history, '正常'
- 
-     if not fast_debug:
-         msg = '正常'
-         # ** gpt request **
-         gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature, history=history) # with timeout countdown
- 
-         chatbot[-1] = (i_say, gpt_say)
-         history.append(i_say); history.append(gpt_say)
-         yield chatbot, history, msg
-         res = write_results_to_file(history)
-         chatbot.append(("完成了吗?", res))
-         yield chatbot, history, msg
- 
- 
- 
- @CatchException
- def 读文章写摘要(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
-     history = [] # clear the history to avoid overflowing the input
-     import glob, os
-     if os.path.exists(txt):
-         project_folder = txt
-     else:
-         if txt == "": txt = '空空如也的输入栏'
-         report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
-         yield chatbot, history, '正常'
-         return
-     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] # + \
-                     # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
-                     # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
-     if len(file_manifest) == 0:
-         report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
-         yield chatbot, history, '正常'
-         return
-     yield from 解析Paper(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stochastic_karras_ve.md DELETED
@@ -1,33 +0,0 @@
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
- 
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
- 
- http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- specific language governing permissions and limitations under the License.
- -->
- 
- # Stochastic Karras VE
- 
- [Elucidating the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) is by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine. This pipeline implements the stochastic sampling tailored to variance expanding (VE) models.
- 
- The abstract from the paper:
- 
- *We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of an existing ImageNet-64 model from 2.07 to near-SOTA 1.55.*
- 
- <Tip>
- 
- Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
- 
- </Tip>
- 
- ## KarrasVePipeline
- [[autodoc]] KarrasVePipeline
-     - all
-     - __call__
- 
- ## ImagePipelineOutput
- [[autodoc]] pipelines.ImagePipelineOutput
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/optimization/open_vino.md DELETED
@@ -1,108 +0,0 @@
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
- 
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
- 
- http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- specific language governing permissions and limitations under the License.
- -->
- 
- 
- # How to use OpenVINO for inference
- 
- 🤗 [Optimum](https://github.com/huggingface/optimum-intel) provides Stable Diffusion pipelines compatible with OpenVINO. You can now easily perform inference with OpenVINO Runtime on a variety of Intel processors ([see](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html) the full list of supported devices).
- 
- ## Installation
- 
- Install 🤗 Optimum Intel with the following command:
- 
- ```
- pip install --upgrade-strategy eager optimum["openvino"]
- ```
- 
- The `--upgrade-strategy eager` option is needed to ensure [`optimum-intel`](https://github.com/huggingface/optimum-intel) is upgraded to its latest version.
- 
- 
- ## Stable Diffusion
- 
- ### Inference
- 
- To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionPipeline` with `OVStableDiffusionPipeline`. If you want to load a PyTorch model and convert it to the OpenVINO format on the fly, set `export=True`.
- 
- ```python
- from optimum.intel import OVStableDiffusionPipeline
- 
- model_id = "runwayml/stable-diffusion-v1-5"
- pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
- prompt = "sailing ship in storm by Rembrandt"
- image = pipeline(prompt).images[0]
- 
- # Don't forget to save the exported model
- pipeline.save_pretrained("openvino-sd-v1-5")
- ```
- 
- To further speed up inference, the model can be statically reshaped:
- 
- ```python
- # Define the shapes related to the inputs and desired outputs
- batch_size, num_images, height, width = 1, 1, 512, 512
- 
- # Statically reshape the model
- pipeline.reshape(batch_size, height, width, num_images)
- # Compile the model before inference
- pipeline.compile()
- 
- image = pipeline(
-     prompt,
-     height=height,
-     width=width,
-     num_images_per_prompt=num_images,
- ).images[0]
- ```
- 
- If you want to change any parameters such as the output height or width, you'll need to statically reshape your model once again.
- 
- <div class="flex justify-center">
-     <img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/openvino/stable_diffusion_v1_5_sail_boat_rembrandt.png">
- </div>
- 
- 
- ### Supported tasks
- 
- | Task                                 | Loading Class                        |
- |--------------------------------------|--------------------------------------|
- | `text-to-image`                      | `OVStableDiffusionPipeline`          |
- | `image-to-image`                     | `OVStableDiffusionImg2ImgPipeline`   |
- | `inpaint`                            | `OVStableDiffusionInpaintPipeline`   |
- 
- You can find more examples in the optimum [documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion).
- 
- 
- ## Stable Diffusion XL
- 
- ### Inference
- 
- ```python
- from optimum.intel import OVStableDiffusionXLPipeline
- 
- model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id, export=True)
- prompt = "sailing ship in storm by Rembrandt"
- image = pipeline(prompt).images[0]
- ```
- 
- To further speed up inference, the model can be statically reshaped as shown above.
- You can find more examples in the optimum [documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion-xl).
- 
- ### Supported tasks
- 
- | Task                                 | Loading Class                        |
- |--------------------------------------|--------------------------------------|
- | `text-to-image`                      | `OVStableDiffusionXLPipeline`        |
- | `image-to-image`                     | `OVStableDiffusionXLImg2ImgPipeline` |
- 
- 
- 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/kandinsky_v22/test_kandinsky_inpaint.py DELETED
@@ -1,295 +0,0 @@
- # coding=utf-8
- # Copyright 2023 HuggingFace Inc.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- 
- import gc
- import random
- import unittest
- 
- import numpy as np
- import torch
- from PIL import Image
- 
- from diffusers import (
-     DDIMScheduler,
-     KandinskyV22InpaintPipeline,
-     KandinskyV22PriorPipeline,
-     UNet2DConditionModel,
-     VQModel,
- )
- from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
- from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
- 
- from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
- 
- 
- enable_full_determinism()
- 
- 
- class Dummies:
-     @property
-     def text_embedder_hidden_size(self):
-         return 32
- 
-     @property
-     def time_input_dim(self):
-         return 32
- 
-     @property
-     def block_out_channels_0(self):
-         return self.time_input_dim
- 
-     @property
-     def time_embed_dim(self):
-         return self.time_input_dim * 4
- 
-     @property
-     def cross_attention_dim(self):
-         return 32
- 
-     @property
-     def dummy_unet(self):
-         torch.manual_seed(0)
- 
-         model_kwargs = {
-             "in_channels": 9,
-             # Out channels is double in channels because predicts mean and variance
-             "out_channels": 8,
-             "addition_embed_type": "image",
-             "down_block_types": ("ResnetDownsampleBlock2D", "SimpleCrossAttnDownBlock2D"),
-             "up_block_types": ("SimpleCrossAttnUpBlock2D", "ResnetUpsampleBlock2D"),
-             "mid_block_type": "UNetMidBlock2DSimpleCrossAttn",
-             "block_out_channels": (self.block_out_channels_0, self.block_out_channels_0 * 2),
-             "layers_per_block": 1,
-             "encoder_hid_dim": self.text_embedder_hidden_size,
-             "encoder_hid_dim_type": "image_proj",
-             "cross_attention_dim": self.cross_attention_dim,
-             "attention_head_dim": 4,
-             "resnet_time_scale_shift": "scale_shift",
-             "class_embed_type": None,
-         }
- 
-         model = UNet2DConditionModel(**model_kwargs)
-         return model
- 
-     @property
-     def dummy_movq_kwargs(self):
-         return {
-             "block_out_channels": [32, 64],
-             "down_block_types": ["DownEncoderBlock2D", "AttnDownEncoderBlock2D"],
-             "in_channels": 3,
-             "latent_channels": 4,
-             "layers_per_block": 1,
-             "norm_num_groups": 8,
-             "norm_type": "spatial",
-             "num_vq_embeddings": 12,
-             "out_channels": 3,
-             "up_block_types": [
-                 "AttnUpDecoderBlock2D",
-                 "UpDecoderBlock2D",
-             ],
-             "vq_embed_dim": 4,
-         }
- 
-     @property
-     def dummy_movq(self):
-         torch.manual_seed(0)
-         model = VQModel(**self.dummy_movq_kwargs)
-         return model
- 
-     def get_dummy_components(self):
-         unet = self.dummy_unet
-         movq = self.dummy_movq
- 
-         scheduler = DDIMScheduler(
-             num_train_timesteps=1000,
-             beta_schedule="linear",
-             beta_start=0.00085,
-             beta_end=0.012,
-             clip_sample=False,
-             set_alpha_to_one=False,
-             steps_offset=1,
-             prediction_type="epsilon",
-             thresholding=False,
-         )
- 
-         components = {
-             "unet": unet,
-             "scheduler": scheduler,
-             "movq": movq,
-         }
- 
-         return components
- 
-     def get_dummy_inputs(self, device, seed=0):
-         image_embeds = floats_tensor((1, self.text_embedder_hidden_size), rng=random.Random(seed)).to(device)
-         negative_image_embeds = floats_tensor((1, self.text_embedder_hidden_size), rng=random.Random(seed + 1)).to(
-             device
-         )
-         # create init_image
-         image = floats_tensor((1, 3, 64, 64), rng=random.Random(seed)).to(device)
-         image = image.cpu().permute(0, 2, 3, 1)[0]
-         init_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((256, 256))
-         # create mask
-         mask = np.zeros((64, 64), dtype=np.float32)
-         mask[:32, :32] = 1
- 
-         if str(device).startswith("mps"):
-             generator = torch.manual_seed(seed)
-         else:
-             generator = torch.Generator(device=device).manual_seed(seed)
-         inputs = {
-             "image": init_image,
-             "mask_image": mask,
-             "image_embeds": image_embeds,
-             "negative_image_embeds": negative_image_embeds,
-             "generator": generator,
-             "height": 64,
-             "width": 64,
-             "num_inference_steps": 2,
-             "guidance_scale": 4.0,
-             "output_type": "np",
-         }
-         return inputs
- 
- 
- class KandinskyV22InpaintPipelineFastTests(PipelineTesterMixin, unittest.TestCase):
-     pipeline_class = KandinskyV22InpaintPipeline
-     params = ["image_embeds", "negative_image_embeds", "image", "mask_image"]
-     batch_params = [
-         "image_embeds",
-         "negative_image_embeds",
-         "image",
-         "mask_image",
-     ]
-     required_optional_params = [
-         "generator",
-         "height",
-         "width",
-         "latents",
-         "guidance_scale",
-         "num_inference_steps",
-         "return_dict",
-         "guidance_scale",
-         "num_images_per_prompt",
-         "output_type",
-         "return_dict",
-     ]
-     test_xformers_attention = False
- 
-     def get_dummy_components(self):
-         dummies = Dummies()
-         return dummies.get_dummy_components()
- 
-     def get_dummy_inputs(self, device, seed=0):
-         dummies = Dummies()
-         return dummies.get_dummy_inputs(device=device, seed=seed)
- 
-     def test_kandinsky_inpaint(self):
-         device = "cpu"
- 
-         components = self.get_dummy_components()
- 
-         pipe = self.pipeline_class(**components)
-         pipe = pipe.to(device)
- 
-         pipe.set_progress_bar_config(disable=None)
- 
-         output = pipe(**self.get_dummy_inputs(device))
-         image = output.images
- 
-         image_from_tuple = pipe(
-             **self.get_dummy_inputs(device),
-             return_dict=False,
-         )[0]
- 
-         image_slice = image[0, -3:, -3:, -1]
-         image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
- 
-         assert image.shape == (1, 64, 64, 3)
- 
-         expected_slice = np.array(
-             [0.50775903, 0.49527195, 0.48824543, 0.50192237, 0.48644906, 0.49373814, 0.4780598, 0.47234827, 0.48327848]
-         )
- 
-         assert (
-             np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-         ), f" expected_slice {expected_slice}, but got {image_slice.flatten()}"
-         assert (
-             np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
-         ), f" expected_slice {expected_slice}, but got {image_from_tuple_slice.flatten()}"
- 
-     def test_inference_batch_single_identical(self):
-         super().test_inference_batch_single_identical(expected_max_diff=3e-3)
- 
- 
- @slow
- @require_torch_gpu
- class KandinskyV22InpaintPipelineIntegrationTests(unittest.TestCase):
-     def tearDown(self):
-         # clean up the VRAM after each test
-         super().tearDown()
-         gc.collect()
-         torch.cuda.empty_cache()
- 
-     def test_kandinsky_inpaint(self):
-         expected_image = load_numpy(
-             "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
-             "/kandinskyv22/kandinskyv22_inpaint_cat_with_hat_fp16.npy"
-         )
- 
-         init_image = load_image(
-             "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
-         )
-         mask = np.zeros((768, 768), dtype=np.float32)
-         mask[:250, 250:-250] = 1
- 
-         prompt = "a hat"
- 
-         pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
-             "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
-         )
-         pipe_prior.to(torch_device)
- 
-         pipeline = KandinskyV22InpaintPipeline.from_pretrained(
-             "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
-         )
-         pipeline = pipeline.to(torch_device)
-         pipeline.set_progress_bar_config(disable=None)
- 
-         generator = torch.Generator(device="cpu").manual_seed(0)
-         image_emb, zero_image_emb = pipe_prior(
-             prompt,
-             generator=generator,
-             num_inference_steps=5,
-             negative_prompt="",
-         ).to_tuple()
- 
-         output = pipeline(
-             image=init_image,
-             mask_image=mask,
-             image_embeds=image_emb,
-             negative_image_embeds=zero_image_emb,
-             generator=generator,
-             num_inference_steps=100,
-             height=768,
-             width=768,
-             output_type="np",
-         )
- 
-         image = output.images[0]
- 
-         assert image.shape == (768, 768, 3)
- 
-         assert_mean_pixel_difference(image, expected_image)
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_instruction_pix2pix.py DELETED
@@ -1,385 +0,0 @@
- # coding=utf-8
- # Copyright 2023 HuggingFace Inc.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- 
- import gc
- import random
- import unittest
- 
- import numpy as np
- import torch
- from PIL import Image
- from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
- 
- from diffusers import (
-     AutoencoderKL,
-     DDIMScheduler,
-     EulerAncestralDiscreteScheduler,
-     LMSDiscreteScheduler,
-     PNDMScheduler,
-     StableDiffusionInstructPix2PixPipeline,
-     UNet2DConditionModel,
- )
- from diffusers.image_processor import VaeImageProcessor
- from diffusers.utils import floats_tensor, load_image, slow, torch_device
- from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
- 
- from ..pipeline_params import (
-     IMAGE_TO_IMAGE_IMAGE_PARAMS,
-     TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS,
-     TEXT_GUIDED_IMAGE_VARIATION_PARAMS,
- )
- from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin
- 
- 
- enable_full_determinism()
- 
- 
- class StableDiffusionInstructPix2PixPipelineFastTests(
-     PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, unittest.TestCase
- ):
-     pipeline_class = StableDiffusionInstructPix2PixPipeline
-     params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - {"height", "width", "cross_attention_kwargs"}
-     batch_params = TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS
-     image_params = IMAGE_TO_IMAGE_IMAGE_PARAMS
-     image_latents_params = IMAGE_TO_IMAGE_IMAGE_PARAMS
- 
-     def get_dummy_components(self):
-         torch.manual_seed(0)
-         unet = UNet2DConditionModel(
-             block_out_channels=(32, 64),
-             layers_per_block=2,
-             sample_size=32,
-             in_channels=8,
-             out_channels=4,
-             down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
-             up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
-             cross_attention_dim=32,
-         )
-         scheduler = PNDMScheduler(skip_prk_steps=True)
-         torch.manual_seed(0)
-         vae = AutoencoderKL(
-             block_out_channels=[32, 64],
-             in_channels=3,
-             out_channels=3,
-             down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
-             up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
-             latent_channels=4,
-         )
-         torch.manual_seed(0)
-         text_encoder_config = CLIPTextConfig(
-             bos_token_id=0,
-             eos_token_id=2,
-             hidden_size=32,
-             intermediate_size=37,
-             layer_norm_eps=1e-05,
-             num_attention_heads=4,
-             num_hidden_layers=5,
-             pad_token_id=1,
-             vocab_size=1000,
-         )
-         text_encoder = CLIPTextModel(text_encoder_config)
-         tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
- 
-         components = {
-             "unet": unet,
-             "scheduler": scheduler,
-             "vae": vae,
-             "text_encoder": text_encoder,
-             "tokenizer": tokenizer,
-             "safety_checker": None,
-             "feature_extractor": None,
-         }
-         return components
- 
-     def get_dummy_inputs(self, device, seed=0):
-         image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device)
-         image = image.cpu().permute(0, 2, 3, 1)[0]
-         image = Image.fromarray(np.uint8(image)).convert("RGB")
-         if str(device).startswith("mps"):
-             generator = torch.manual_seed(seed)
-         else:
-             generator = torch.Generator(device=device).manual_seed(seed)
-         inputs = {
-             "prompt": "A painting of a squirrel eating a burger",
-             "image": image,
-             "generator": generator,
-             "num_inference_steps": 2,
-             "guidance_scale": 6.0,
-             "image_guidance_scale": 1,
-             "output_type": "numpy",
-         }
-         return inputs
- 
-     def test_stable_diffusion_pix2pix_default_case(self):
-         device = "cpu"  # ensure determinism for the device-dependent torch.Generator
-         components = self.get_dummy_components()
-         sd_pipe = StableDiffusionInstructPix2PixPipeline(**components)
-         sd_pipe = sd_pipe.to(device)
-         sd_pipe.set_progress_bar_config(disable=None)
- 
-         inputs = self.get_dummy_inputs(device)
-         image = sd_pipe(**inputs).images
-         image_slice = image[0, -3:, -3:, -1]
-         assert image.shape == (1, 32, 32, 3)
-         expected_slice = np.array([0.7526, 0.3750, 0.4547, 0.6117, 0.5866, 0.5016, 0.4327, 0.5642, 0.4815])
- 
-         assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
- 
-     def test_stable_diffusion_pix2pix_negative_prompt(self):
-         device = "cpu"  # ensure determinism for the device-dependent torch.Generator
-         components = self.get_dummy_components()
-         sd_pipe = StableDiffusionInstructPix2PixPipeline(**components)
-         sd_pipe = sd_pipe.to(device)
-         sd_pipe.set_progress_bar_config(disable=None)
- 
-         inputs = self.get_dummy_inputs(device)
-         negative_prompt = "french fries"
-         output = sd_pipe(**inputs, negative_prompt=negative_prompt)
-         image = output.images
-         image_slice = image[0, -3:, -3:, -1]
- 
-         assert image.shape == (1, 32, 32, 3)
-         expected_slice = np.array([0.7511, 0.3642, 0.4553, 0.6236, 0.5797, 0.5013, 0.4343, 0.5611, 0.4831])
- 
-         assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
- 
-     def test_stable_diffusion_pix2pix_multiple_init_images(self):
-         device = "cpu"  # ensure determinism for the device-dependent torch.Generator
-         components = self.get_dummy_components()
-         sd_pipe = StableDiffusionInstructPix2PixPipeline(**components)
-         sd_pipe = sd_pipe.to(device)
-         sd_pipe.set_progress_bar_config(disable=None)
- 
-         inputs = self.get_dummy_inputs(device)
-         inputs["prompt"] = [inputs["prompt"]] * 2
- 
-         image = np.array(inputs["image"]).astype(np.float32) / 255.0
-         image = torch.from_numpy(image).unsqueeze(0).to(device)
-         image = image / 2 + 0.5
-         image = image.permute(0, 3, 1, 2)
-         inputs["image"] = image.repeat(2, 1, 1, 1)
- 
-         image = sd_pipe(**inputs).images
-         image_slice = image[-1, -3:, -3:, -1]
- 
-         assert image.shape == (2, 32, 32, 3)
-         expected_slice = np.array([0.5812, 0.5748, 0.5222, 0.5908, 0.5695, 0.7174, 0.6804, 0.5523, 0.5579])
- 
-         assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
- 
-     def test_stable_diffusion_pix2pix_euler(self):
-         device = "cpu"  # ensure determinism for the device-dependent torch.Generator
-         components = self.get_dummy_components()
-         components["scheduler"] = EulerAncestralDiscreteScheduler(
-             beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
-         )
-         sd_pipe = StableDiffusionInstructPix2PixPipeline(**components)
-         sd_pipe = sd_pipe.to(device)
-         sd_pipe.set_progress_bar_config(disable=None)
- 
-         inputs = self.get_dummy_inputs(device)
-         image = sd_pipe(**inputs).images
-         image_slice = image[0, -3:, -3:, -1]
- 
-         slice = [round(x, 4) for x in image_slice.flatten().tolist()]
-         print(",".join([str(x) for x in slice]))
- 
-         assert image.shape == (1, 32, 32, 3)
-         expected_slice = np.array([0.7417, 0.3842, 0.4732, 0.5776, 0.5891, 0.5139, 0.4052, 0.5673, 0.4986])
- 
-         assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
- 
-     def test_inference_batch_single_identical(self):
-         super().test_inference_batch_single_identical(expected_max_diff=3e-3)
- 
-     # Overwrite the default test_latents_inputs because pix2pix encodes the image differently
-     def test_latents_input(self):
-         components = self.get_dummy_components()
-         pipe = StableDiffusionInstructPix2PixPipeline(**components)
-         pipe.image_processor = VaeImageProcessor(do_resize=False, do_normalize=False)
-         pipe = pipe.to(torch_device)
-         pipe.set_progress_bar_config(disable=None)
- 
-         out = pipe(**self.get_dummy_inputs_by_type(torch_device, input_image_type="pt"))[0]
- 
-         vae = components["vae"]
-         inputs = self.get_dummy_inputs_by_type(torch_device, input_image_type="pt")
- 
-         for image_param in self.image_latents_params:
-             if image_param in inputs.keys():
-                 inputs[image_param] = vae.encode(inputs[image_param]).latent_dist.mode()
- 
-         out_latents_inputs = pipe(**inputs)[0]
- 
-         max_diff = np.abs(out - out_latents_inputs).max()
-         self.assertLess(max_diff, 1e-4, "passing latents as image input generate different result from passing image")
- 
- 
- @slow
- @require_torch_gpu
- class StableDiffusionInstructPix2PixPipelineSlowTests(unittest.TestCase):
-     def tearDown(self):
-         super().tearDown()
-         gc.collect()
-         torch.cuda.empty_cache()
- 
-     def get_inputs(self, seed=0):
-         generator = torch.manual_seed(seed)
-         image = load_image(
-             "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_pix2pix/example.jpg"
-         )
-         inputs = {
-             "prompt": "turn him into a cyborg",
-             "image": image,
-             "generator": generator,
-             "num_inference_steps": 3,
-             "guidance_scale": 7.5,
-             "image_guidance_scale": 1.0,
-             "output_type": "numpy",
-         }
-         return inputs
- 
-     def test_stable_diffusion_pix2pix_default(self):
-         pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
-             "timbrooks/instruct-pix2pix", safety_checker=None
-         )
-         pipe.to(torch_device)
-         pipe.set_progress_bar_config(disable=None)
-         pipe.enable_attention_slicing()
- 
-         inputs = self.get_inputs()
-         image = pipe(**inputs).images
-         image_slice = image[0, -3:, -3:, -1].flatten()
- 
-         assert image.shape == (1, 512, 512, 3)
-         expected_slice = np.array([0.5902, 0.6015, 0.6027, 0.5983, 0.6092, 0.6061, 0.5765, 0.5785, 0.5555])
- 
-         assert np.abs(expected_slice - image_slice).max() < 1e-3
- 
-     def test_stable_diffusion_pix2pix_k_lms(self):
-         pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
-             "timbrooks/instruct-pix2pix", safety_checker=None
-         )
-         pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
-         pipe.to(torch_device)
-         pipe.set_progress_bar_config(disable=None)
-         pipe.enable_attention_slicing()
- 
-         inputs = self.get_inputs()
-         image = pipe(**inputs).images
-         image_slice = image[0, -3:, -3:, -1].flatten()
- 
-         assert image.shape == (1, 512, 512, 3)
-         expected_slice = np.array([0.6578, 0.6817, 0.6972, 0.6761, 0.6856, 0.6916, 0.6428, 0.6516, 0.6301])
286
-
287
- assert np.abs(expected_slice - image_slice).max() < 1e-3
288
-
289
- def test_stable_diffusion_pix2pix_ddim(self):
290
- pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
291
- "timbrooks/instruct-pix2pix", safety_checker=None
292
- )
293
- pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
294
- pipe.to(torch_device)
295
- pipe.set_progress_bar_config(disable=None)
296
- pipe.enable_attention_slicing()
297
-
298
- inputs = self.get_inputs()
299
- image = pipe(**inputs).images
300
- image_slice = image[0, -3:, -3:, -1].flatten()
301
-
302
- assert image.shape == (1, 512, 512, 3)
303
- expected_slice = np.array([0.3828, 0.3834, 0.3818, 0.3792, 0.3865, 0.3752, 0.3792, 0.3847, 0.3753])
304
-
305
- assert np.abs(expected_slice - image_slice).max() < 1e-3
306
-
307
- def test_stable_diffusion_pix2pix_intermediate_state(self):
308
- number_of_steps = 0
309
-
310
- def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None:
311
- callback_fn.has_been_called = True
312
- nonlocal number_of_steps
313
- number_of_steps += 1
314
- if step == 1:
315
- latents = latents.detach().cpu().numpy()
316
- assert latents.shape == (1, 4, 64, 64)
317
- latents_slice = latents[0, -3:, -3:, -1]
318
- expected_slice = np.array([-0.2463, -0.4644, -0.9756, 1.5176, 1.4414, 0.7866, 0.9897, 0.8521, 0.7983])
319
-
320
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
321
- elif step == 2:
322
- latents = latents.detach().cpu().numpy()
323
- assert latents.shape == (1, 4, 64, 64)
324
- latents_slice = latents[0, -3:, -3:, -1]
325
- expected_slice = np.array([-0.2644, -0.4626, -0.9653, 1.5176, 1.4551, 0.7686, 0.9805, 0.8452, 0.8115])
326
-
327
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
328
-
329
- callback_fn.has_been_called = False
330
-
331
- pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
332
- "timbrooks/instruct-pix2pix", safety_checker=None, torch_dtype=torch.float16
333
- )
334
- pipe = pipe.to(torch_device)
335
- pipe.set_progress_bar_config(disable=None)
336
- pipe.enable_attention_slicing()
337
-
338
- inputs = self.get_inputs()
339
- pipe(**inputs, callback=callback_fn, callback_steps=1)
340
- assert callback_fn.has_been_called
341
- assert number_of_steps == 3
342
-
343
- def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self):
344
- torch.cuda.empty_cache()
345
- torch.cuda.reset_max_memory_allocated()
346
- torch.cuda.reset_peak_memory_stats()
347
-
348
- pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
349
- "timbrooks/instruct-pix2pix", safety_checker=None, torch_dtype=torch.float16
350
- )
351
- pipe = pipe.to(torch_device)
352
- pipe.set_progress_bar_config(disable=None)
353
- pipe.enable_attention_slicing(1)
354
- pipe.enable_sequential_cpu_offload()
355
-
356
- inputs = self.get_inputs()
357
- _ = pipe(**inputs)
358
-
359
- mem_bytes = torch.cuda.max_memory_allocated()
360
- # make sure that less than 2.2 GB is allocated
361
- assert mem_bytes < 2.2 * 10**9
362
-
363
- def test_stable_diffusion_pix2pix_pipeline_multiple_of_8(self):
364
- inputs = self.get_inputs()
365
- # resize to resolution that is divisible by 8 but not 16 or 32
366
- inputs["image"] = inputs["image"].resize((504, 504))
367
-
368
- model_id = "timbrooks/instruct-pix2pix"
369
- pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
370
- model_id,
371
- safety_checker=None,
372
- )
373
- pipe.to(torch_device)
374
- pipe.set_progress_bar_config(disable=None)
375
- pipe.enable_attention_slicing()
376
-
377
- output = pipe(**inputs)
378
- image = output.images[0]
379
-
380
- image_slice = image[255:258, 383:386, -1]
381
-
382
- assert image.shape == (504, 504, 3)
383
- expected_slice = np.array([0.2726, 0.2529, 0.2664, 0.2655, 0.2641, 0.2642, 0.2591, 0.2649, 0.2590])
384
-
385
- assert np.abs(image_slice.flatten() - expected_slice).max() < 5e-3
spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/shared_heads/res_layer.py DELETED
@@ -1,77 +0,0 @@
-import torch.nn as nn
-from mmcv.cnn import constant_init, kaiming_init
-from mmcv.runner import auto_fp16, load_checkpoint
-
-from mmdet.models.backbones import ResNet
-from mmdet.models.builder import SHARED_HEADS
-from mmdet.models.utils import ResLayer as _ResLayer
-from mmdet.utils import get_root_logger
-
-
-@SHARED_HEADS.register_module()
-class ResLayer(nn.Module):
-
-    def __init__(self,
-                 depth,
-                 stage=3,
-                 stride=2,
-                 dilation=1,
-                 style='pytorch',
-                 norm_cfg=dict(type='BN', requires_grad=True),
-                 norm_eval=True,
-                 with_cp=False,
-                 dcn=None):
-        super(ResLayer, self).__init__()
-        self.norm_eval = norm_eval
-        self.norm_cfg = norm_cfg
-        self.stage = stage
-        self.fp16_enabled = False
-        block, stage_blocks = ResNet.arch_settings[depth]
-        stage_block = stage_blocks[stage]
-        planes = 64 * 2**stage
-        inplanes = 64 * 2**(stage - 1) * block.expansion
-
-        res_layer = _ResLayer(
-            block,
-            inplanes,
-            planes,
-            stage_block,
-            stride=stride,
-            dilation=dilation,
-            style=style,
-            with_cp=with_cp,
-            norm_cfg=self.norm_cfg,
-            dcn=dcn)
-        self.add_module(f'layer{stage + 1}', res_layer)
-
-    def init_weights(self, pretrained=None):
-        """Initialize the weights in the module.
-
-        Args:
-            pretrained (str, optional): Path to pre-trained weights.
-                Defaults to None.
-        """
-        if isinstance(pretrained, str):
-            logger = get_root_logger()
-            load_checkpoint(self, pretrained, strict=False, logger=logger)
-        elif pretrained is None:
-            for m in self.modules():
-                if isinstance(m, nn.Conv2d):
-                    kaiming_init(m)
-                elif isinstance(m, nn.BatchNorm2d):
-                    constant_init(m, 1)
-        else:
-            raise TypeError('pretrained must be a str or None')
-
-    @auto_fp16()
-    def forward(self, x):
-        res_layer = getattr(self, f'layer{self.stage + 1}')
-        out = res_layer(x)
-        return out
-
-    def train(self, mode=True):
-        super(ResLayer, self).train(mode)
-        if self.norm_eval:
-            for m in self.modules():
-                if isinstance(m, nn.BatchNorm2d):
-                    m.eval()
spaces/Andy1621/uniformer_image_detection/mmdet/utils/profiling.py DELETED
@@ -1,39 +0,0 @@
-import contextlib
-import sys
-import time
-
-import torch
-
-if sys.version_info >= (3, 7):
-
-    @contextlib.contextmanager
-    def profile_time(trace_name,
-                     name,
-                     enabled=True,
-                     stream=None,
-                     end_stream=None):
-        """Print time spent by CPU and GPU.
-
-        Useful as a temporary context manager to find sweet spots of code
-        suitable for async implementation.
-        """
-        if (not enabled) or not torch.cuda.is_available():
-            yield
-            return
-        stream = stream if stream else torch.cuda.current_stream()
-        end_stream = end_stream if end_stream else stream
-        start = torch.cuda.Event(enable_timing=True)
-        end = torch.cuda.Event(enable_timing=True)
-        stream.record_event(start)
-        try:
-            cpu_start = time.monotonic()
-            yield
-        finally:
-            cpu_end = time.monotonic()
-            end_stream.record_event(end)
-            end.synchronize()
-            cpu_time = (cpu_end - cpu_start) * 1000
-            gpu_time = start.elapsed_time(end)
-            msg = f'{trace_name} {name} cpu_time {cpu_time:.2f} ms '
-            msg += f'gpu_time {gpu_time:.2f} ms stream {stream}'
-            print(msg, end_stream)
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/utils/testing.py DELETED
@@ -1,140 +0,0 @@
-# Copyright (c) Open-MMLab.
-import sys
-from collections.abc import Iterable
-from runpy import run_path
-from shlex import split
-from typing import Any, Dict, List
-from unittest.mock import patch
-
-
-def check_python_script(cmd):
-    """Run the python cmd script with `__main__`. The difference between
-    `os.system` is that, this function exectues code in the current process, so
-    that it can be tracked by coverage tools. Currently it supports two forms:
-
-    - ./tests/data/scripts/hello.py zz
-    - python tests/data/scripts/hello.py zz
-    """
-    args = split(cmd)
-    if args[0] == 'python':
-        args = args[1:]
-    with patch.object(sys, 'argv', args):
-        run_path(args[0], run_name='__main__')
-
-
-def _any(judge_result):
-    """Since built-in ``any`` works only when the element of iterable is not
-    iterable, implement the function."""
-    if not isinstance(judge_result, Iterable):
-        return judge_result
-
-    try:
-        for element in judge_result:
-            if _any(element):
-                return True
-    except TypeError:
-        # Maybe encounter the case: torch.tensor(True) | torch.tensor(False)
-        if judge_result:
-            return True
-    return False
-
-
-def assert_dict_contains_subset(dict_obj: Dict[Any, Any],
-                                expected_subset: Dict[Any, Any]) -> bool:
-    """Check if the dict_obj contains the expected_subset.
-
-    Args:
-        dict_obj (Dict[Any, Any]): Dict object to be checked.
-        expected_subset (Dict[Any, Any]): Subset expected to be contained in
-            dict_obj.
-
-    Returns:
-        bool: Whether the dict_obj contains the expected_subset.
-    """
-
-    for key, value in expected_subset.items():
-        if key not in dict_obj.keys() or _any(dict_obj[key] != value):
-            return False
-    return True
-
-
-def assert_attrs_equal(obj: Any, expected_attrs: Dict[str, Any]) -> bool:
-    """Check if attribute of class object is correct.
-
-    Args:
-        obj (object): Class object to be checked.
-        expected_attrs (Dict[str, Any]): Dict of the expected attrs.
-
-    Returns:
-        bool: Whether the attribute of class object is correct.
-    """
-    for attr, value in expected_attrs.items():
-        if not hasattr(obj, attr) or _any(getattr(obj, attr) != value):
-            return False
-    return True
-
-
-def assert_dict_has_keys(obj: Dict[str, Any],
-                         expected_keys: List[str]) -> bool:
-    """Check if the obj has all the expected_keys.
-
-    Args:
-        obj (Dict[str, Any]): Object to be checked.
-        expected_keys (List[str]): Keys expected to contained in the keys of
-            the obj.
-
-    Returns:
-        bool: Whether the obj has the expected keys.
-    """
-    return set(expected_keys).issubset(set(obj.keys()))
-
-
-def assert_keys_equal(result_keys: List[str], target_keys: List[str]) -> bool:
-    """Check if target_keys is equal to result_keys.
-
-    Args:
-        result_keys (List[str]): Result keys to be checked.
-        target_keys (List[str]): Target keys to be checked.
-
-    Returns:
-        bool: Whether target_keys is equal to result_keys.
-    """
-    return set(result_keys) == set(target_keys)
-
-
-def assert_is_norm_layer(module) -> bool:
-    """Check if the module is a norm layer.
-
-    Args:
-        module (nn.Module): The module to be checked.
-
-    Returns:
-        bool: Whether the module is a norm layer.
-    """
-    from .parrots_wrapper import _BatchNorm, _InstanceNorm
-    from torch.nn import GroupNorm, LayerNorm
-    norm_layer_candidates = (_BatchNorm, _InstanceNorm, GroupNorm, LayerNorm)
-    return isinstance(module, norm_layer_candidates)
-
-
-def assert_params_all_zeros(module) -> bool:
-    """Check if the parameters of the module is all zeros.
-
-    Args:
-        module (nn.Module): The module to be checked.
-
-    Returns:
-        bool: Whether the parameters of the module is all zeros.
-    """
-    weight_data = module.weight.data
-    is_weight_zero = weight_data.allclose(
-        weight_data.new_zeros(weight_data.size()))
-
-    if hasattr(module, 'bias') and module.bias is not None:
-        bias_data = module.bias.data
-        is_bias_zero = bias_data.allclose(
-            bias_data.new_zeros(bias_data.size()))
-    else:
-        is_bias_zero = True
-
-    return is_weight_zero and is_bias_zero
spaces/Anthony7906/MengHuiMXD_GPT/modules/shared.py DELETED
@@ -1,55 +0,0 @@
-from modules.presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST
-import os
-import queue
-
-class State:
-    interrupted = False
-    multi_api_key = False
-    completion_url = COMPLETION_URL
-    balance_api_url = BALANCE_API_URL
-    usage_api_url = USAGE_API_URL
-
-    def interrupt(self):
-        self.interrupted = True
-
-    def recover(self):
-        self.interrupted = False
-
-    def set_api_host(self, api_host):
-        self.completion_url = f"https://{api_host}/v1/chat/completions"
-        self.balance_api_url = f"https://{api_host}/dashboard/billing/credit_grants"
-        self.usage_api_url = f"https://{api_host}/dashboard/billing/usage"
-        os.environ["OPENAI_API_BASE"] = f"https://{api_host}/v1"
-
-    def reset_api_host(self):
-        self.completion_url = COMPLETION_URL
-        self.balance_api_url = BALANCE_API_URL
-        self.usage_api_url = USAGE_API_URL
-        os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}/v1"
-        return API_HOST
-
-    def reset_all(self):
-        self.interrupted = False
-        self.completion_url = COMPLETION_URL
-
-    def set_api_key_queue(self, api_key_list):
-        self.multi_api_key = True
-        self.api_key_queue = queue.Queue()
-        for api_key in api_key_list:
-            self.api_key_queue.put(api_key)
-
-    def switching_api_key(self, func):
-        if not hasattr(self, "api_key_queue"):
-            return func
-
-        def wrapped(*args, **kwargs):
-            api_key = self.api_key_queue.get()
-            args[0].api_key = api_key
-            ret = func(*args, **kwargs)
-            self.api_key_queue.put(api_key)
-            return ret
-
-        return wrapped
-
-
-state = State()
spaces/Apk/anything-v3.0/utils.py DELETED
@@ -1,6 +0,0 @@
-def is_google_colab():
-    try:
-        import google.colab
-        return True
-    except:
-        return False
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/req/req_set.py DELETED
@@ -1,82 +0,0 @@
-import logging
-from collections import OrderedDict
-from typing import Dict, List
-
-from pip._vendor.packaging.utils import canonicalize_name
-
-from pip._internal.req.req_install import InstallRequirement
-
-logger = logging.getLogger(__name__)
-
-
-class RequirementSet:
-    def __init__(self, check_supported_wheels: bool = True) -> None:
-        """Create a RequirementSet."""
-
-        self.requirements: Dict[str, InstallRequirement] = OrderedDict()
-        self.check_supported_wheels = check_supported_wheels
-
-        self.unnamed_requirements: List[InstallRequirement] = []
-
-    def __str__(self) -> str:
-        requirements = sorted(
-            (req for req in self.requirements.values() if not req.comes_from),
-            key=lambda req: canonicalize_name(req.name or ""),
-        )
-        return " ".join(str(req.req) for req in requirements)
-
-    def __repr__(self) -> str:
-        requirements = sorted(
-            self.requirements.values(),
-            key=lambda req: canonicalize_name(req.name or ""),
-        )
-
-        format_string = "<{classname} object; {count} requirement(s): {reqs}>"
-        return format_string.format(
-            classname=self.__class__.__name__,
-            count=len(requirements),
-            reqs=", ".join(str(req.req) for req in requirements),
-        )
-
-    def add_unnamed_requirement(self, install_req: InstallRequirement) -> None:
-        assert not install_req.name
-        self.unnamed_requirements.append(install_req)
-
-    def add_named_requirement(self, install_req: InstallRequirement) -> None:
-        assert install_req.name
-
-        project_name = canonicalize_name(install_req.name)
-        self.requirements[project_name] = install_req
-
-    def has_requirement(self, name: str) -> bool:
-        project_name = canonicalize_name(name)
-
-        return (
-            project_name in self.requirements
-            and not self.requirements[project_name].constraint
-        )
-
-    def get_requirement(self, name: str) -> InstallRequirement:
-        project_name = canonicalize_name(name)
-
-        if project_name in self.requirements:
-            return self.requirements[project_name]
-
-        raise KeyError(f"No project with the name {name!r}")
-
-    @property
-    def all_requirements(self) -> List[InstallRequirement]:
-        return self.unnamed_requirements + list(self.requirements.values())
-
-    @property
-    def requirements_to_install(self) -> List[InstallRequirement]:
-        """Return the list of requirements that need to be installed.
-
-        TODO remove this property together with the legacy resolver, since the new
-        resolver only returns requirements that need to be installed.
-        """
-        return [
-            install_req
-            for install_req in self.all_requirements
-            if not install_req.constraint and not install_req.satisfied_by
-        ]
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/markers.py DELETED
@@ -1,152 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Copyright (C) 2012-2017 Vinay Sajip.
-# Licensed to the Python Software Foundation under a contributor agreement.
-# See LICENSE.txt and CONTRIBUTORS.txt.
-#
-"""
-Parser for the environment markers micro-language defined in PEP 508.
-"""
-
-# Note: In PEP 345, the micro-language was Python compatible, so the ast
-# module could be used to parse it. However, PEP 508 introduced operators such
-# as ~= and === which aren't in Python, necessitating a different approach.
-
-import os
-import re
-import sys
-import platform
-
-from .compat import string_types
-from .util import in_venv, parse_marker
-from .version import NormalizedVersion as NV
-
-__all__ = ['interpret']
-
-_VERSION_PATTERN = re.compile(r'((\d+(\.\d+)*\w*)|\'(\d+(\.\d+)*\w*)\'|\"(\d+(\.\d+)*\w*)\")')
-
-def _is_literal(o):
-    if not isinstance(o, string_types) or not o:
-        return False
-    return o[0] in '\'"'
-
-def _get_versions(s):
-    result = []
-    for m in _VERSION_PATTERN.finditer(s):
-        result.append(NV(m.groups()[0]))
-    return set(result)
-
-class Evaluator(object):
-    """
-    This class is used to evaluate marker expessions.
-    """
-
-    operations = {
-        '==': lambda x, y: x == y,
-        '===': lambda x, y: x == y,
-        '~=': lambda x, y: x == y or x > y,
-        '!=': lambda x, y: x != y,
-        '<': lambda x, y: x < y,
-        '<=': lambda x, y: x == y or x < y,
-        '>': lambda x, y: x > y,
-        '>=': lambda x, y: x == y or x > y,
-        'and': lambda x, y: x and y,
-        'or': lambda x, y: x or y,
-        'in': lambda x, y: x in y,
-        'not in': lambda x, y: x not in y,
-    }
-
-    def evaluate(self, expr, context):
-        """
-        Evaluate a marker expression returned by the :func:`parse_requirement`
-        function in the specified context.
-        """
-        if isinstance(expr, string_types):
-            if expr[0] in '\'"':
-                result = expr[1:-1]
-            else:
-                if expr not in context:
-                    raise SyntaxError('unknown variable: %s' % expr)
-                result = context[expr]
-        else:
-            assert isinstance(expr, dict)
-            op = expr['op']
-            if op not in self.operations:
-                raise NotImplementedError('op not implemented: %s' % op)
-            elhs = expr['lhs']
-            erhs = expr['rhs']
-            if _is_literal(expr['lhs']) and _is_literal(expr['rhs']):
-                raise SyntaxError('invalid comparison: %s %s %s' % (elhs, op, erhs))
-
-            lhs = self.evaluate(elhs, context)
-            rhs = self.evaluate(erhs, context)
-            if ((elhs == 'python_version' or erhs == 'python_version') and
-                    op in ('<', '<=', '>', '>=', '===', '==', '!=', '~=')):
-                lhs = NV(lhs)
-                rhs = NV(rhs)
-            elif elhs == 'python_version' and op in ('in', 'not in'):
-                lhs = NV(lhs)
-                rhs = _get_versions(rhs)
-            result = self.operations[op](lhs, rhs)
-        return result
-
-_DIGITS = re.compile(r'\d+\.\d+')
-
-def default_context():
-    def format_full_version(info):
-        version = '%s.%s.%s' % (info.major, info.minor, info.micro)
-        kind = info.releaselevel
-        if kind != 'final':
-            version += kind[0] + str(info.serial)
-        return version
-
-    if hasattr(sys, 'implementation'):
-        implementation_version = format_full_version(sys.implementation.version)
-        implementation_name = sys.implementation.name
-    else:
-        implementation_version = '0'
-        implementation_name = ''
-
-    ppv = platform.python_version()
-    m = _DIGITS.match(ppv)
-    pv = m.group(0)
-    result = {
-        'implementation_name': implementation_name,
-        'implementation_version': implementation_version,
-        'os_name': os.name,
-        'platform_machine': platform.machine(),
-        'platform_python_implementation': platform.python_implementation(),
-        'platform_release': platform.release(),
-        'platform_system': platform.system(),
-        'platform_version': platform.version(),
-        'platform_in_venv': str(in_venv()),
-        'python_full_version': ppv,
-        'python_version': pv,
-        'sys_platform': sys.platform,
-    }
-    return result
-
-DEFAULT_CONTEXT = default_context()
-del default_context
-
-evaluator = Evaluator()
-
-def interpret(marker, execution_context=None):
-    """
-    Interpret a marker and return a result depending on environment.
-
-    :param marker: The marker to interpret.
-    :type marker: str
-    :param execution_context: The context used for name lookup.
-    :type execution_context: mapping
-    """
-    try:
-        expr, rest = parse_marker(marker)
-    except Exception as e:
-        raise SyntaxError('Unable to interpret marker syntax: %s: %s' % (marker, e))
-    if rest and rest[0] != '#':
-        raise SyntaxError('unexpected trailing data in marker: %s: %s' % (marker, rest))
-    context = dict(DEFAULT_CONTEXT)
-    if execution_context:
-        context.update(execution_context)
-    return evaluator.evaluate(expr, context)
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/abc.py DELETED
@@ -1,137 +0,0 @@
-import abc
-from typing import BinaryIO, Iterable, Text
-
-from ._compat import runtime_checkable, Protocol
-
-
-class ResourceReader(metaclass=abc.ABCMeta):
-    """Abstract base class for loaders to provide resource reading support."""
-
-    @abc.abstractmethod
-    def open_resource(self, resource: Text) -> BinaryIO:
-        """Return an opened, file-like object for binary reading.
-
-        The 'resource' argument is expected to represent only a file name.
-        If the resource cannot be found, FileNotFoundError is raised.
-        """
-        # This deliberately raises FileNotFoundError instead of
-        # NotImplementedError so that if this method is accidentally called,
-        # it'll still do the right thing.
-        raise FileNotFoundError
-
-    @abc.abstractmethod
-    def resource_path(self, resource: Text) -> Text:
-        """Return the file system path to the specified resource.
-
-        The 'resource' argument is expected to represent only a file name.
-        If the resource does not exist on the file system, raise
-        FileNotFoundError.
-        """
-        # This deliberately raises FileNotFoundError instead of
-        # NotImplementedError so that if this method is accidentally called,
-        # it'll still do the right thing.
-        raise FileNotFoundError
-
-    @abc.abstractmethod
-    def is_resource(self, path: Text) -> bool:
-        """Return True if the named 'path' is a resource.
-
-        Files are resources, directories are not.
-        """
-        raise FileNotFoundError
-
-    @abc.abstractmethod
-    def contents(self) -> Iterable[str]:
-        """Return an iterable of entries in `package`."""
-        raise FileNotFoundError
-
-
-@runtime_checkable
-class Traversable(Protocol):
-    """
-    An object with a subset of pathlib.Path methods suitable for
-    traversing directories and opening files.
-    """
-
-    @abc.abstractmethod
-    def iterdir(self):
-        """
-        Yield Traversable objects in self
-        """
-
-    def read_bytes(self):
-        """
-        Read contents of self as bytes
-        """
-        with self.open('rb') as strm:
-            return strm.read()
-
-    def read_text(self, encoding=None):
-        """
-        Read contents of self as text
-        """
-        with self.open(encoding=encoding) as strm:
-            return strm.read()
-
-    @abc.abstractmethod
-    def is_dir(self) -> bool:
-        """
-        Return True if self is a directory
-        """
-
-    @abc.abstractmethod
-    def is_file(self) -> bool:
-        """
-        Return True if self is a file
-        """
-
-    @abc.abstractmethod
-    def joinpath(self, child):
-        """
-        Return Traversable child in self
-        """
-
-    def __truediv__(self, child):
-        """
-        Return Traversable child in self
-        """
-        return self.joinpath(child)
-
-    @abc.abstractmethod
-    def open(self, mode='r', *args, **kwargs):
-        """
-        mode may be 'r' or 'rb' to open as text or binary. Return a handle
-        suitable for reading (same as pathlib.Path.open).
-
-        When opening as text, accepts encoding parameters such as those
-        accepted by io.TextIOWrapper.
-        """
-
-    @abc.abstractproperty
-    def name(self) -> str:
-        """
-        The base name of this object without any parent references.
-        """
-
-
-class TraversableResources(ResourceReader):
-    """
-    The required interface for providing traversable
-    resources.
-    """
-
-    @abc.abstractmethod
-    def files(self):
-        """Return a Traversable object for the loaded package."""
-
-    def open_resource(self, resource):
-        return self.files().joinpath(resource).open('rb')
-
-    def resource_path(self, resource):
-        raise FileNotFoundError(resource)
-
-    def is_resource(self, path):
-        return self.files().joinpath(path).is_file()
-
-    def contents(self):
-        return (item.name for item in self.files().iterdir())
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/actions.py DELETED
@@ -1,207 +0,0 @@
-# actions.py
-
-from .exceptions import ParseException
-from .util import col
-
-
-class OnlyOnce:
-    """
-    Wrapper for parse actions, to ensure they are only called once.
-    """
-
-    def __init__(self, method_call):
-        from .core import _trim_arity
-
-        self.callable = _trim_arity(method_call)
-        self.called = False
-
-    def __call__(self, s, l, t):
-        if not self.called:
-            results = self.callable(s, l, t)
-            self.called = True
-            return results
-        raise ParseException(s, l, "OnlyOnce obj called multiple times w/out reset")
-
-    def reset(self):
-        """
-        Allow the associated parse action to be called once more.
-        """
-
-        self.called = False
-
-
-def match_only_at_col(n):
-    """
-    Helper method for defining parse actions that require matching at
-    a specific column in the input text.
-    """
-
-    def verify_col(strg, locn, toks):
-        if col(locn, strg) != n:
-            raise ParseException(strg, locn, "matched token not at column {}".format(n))
-
-    return verify_col
-
-
-def replace_with(repl_str):
-    """
-    Helper method for common parse actions that simply return
-    a literal value. Especially useful when used with
-    :class:`transform_string<ParserElement.transform_string>` ().
-
-    Example::
-
-        num = Word(nums).set_parse_action(lambda toks: int(toks[0]))
-        na = one_of("N/A NA").set_parse_action(replace_with(math.nan))
-        term = na | num
-
-        term[1, ...].parse_string("324 234 N/A 234")  # -> [324, 234, nan, 234]
-    """
-    return lambda s, l, t: [repl_str]
-
-
-def remove_quotes(s, l, t):
-    """
-    Helper parse action for removing quotation marks from parsed
-    quoted strings.
-
-    Example::
-
-        # by default, quotation marks are included in parsed results
-        quoted_string.parse_string("'Now is the Winter of our Discontent'")  # -> ["'Now is the Winter of our Discontent'"]
-
-        # use remove_quotes to strip quotation marks from parsed results
-        quoted_string.set_parse_action(remove_quotes)
-        quoted_string.parse_string("'Now is the Winter of our Discontent'")  # -> ["Now is the Winter of our Discontent"]
-    """
-    return t[0][1:-1]
-
-
-def with_attribute(*args, **attr_dict):
-    """
-    Helper to create a validating parse action to be used with start
-    tags created with :class:`make_xml_tags` or
-    :class:`make_html_tags`. Use ``with_attribute`` to qualify
-    a starting tag with a required attribute value, to avoid false
-    matches on common tags such as ``<TD>`` or ``<DIV>``.
-
-    Call ``with_attribute`` with a series of attribute names and
-    values. Specify the list of filter attributes names and values as:
-
-    - keyword arguments, as in ``(align="right")``, or
-    - as an explicit dict with ``**`` operator, when an attribute
-      name is also a Python reserved word, as in ``**{"class":"Customer", "align":"right"}``
-    - a list of name-value tuples, as in ``(("ns1:class", "Customer"), ("ns2:align", "right"))``
-
-    For attribute names with a namespace prefix, you must use the second
-    form. Attribute names are matched insensitive to upper/lower case.
-
-    If just testing for ``class`` (with or without a namespace), use
-    :class:`with_class`.
-
-    To verify that the attribute exists, but without specifying a value,
-    pass ``with_attribute.ANY_VALUE`` as the value.
-
-    Example::
-
-        html = '''
-            <div>
-            Some text
-            <div type="grid">1 4 0 1 0</div>
-            <div type="graph">1,3 2,3 1,1</div>
-            <div>this has no type</div>
-            </div>
-
-        '''
-        div,div_end = make_html_tags("div")
118
- # only match div tag having a type attribute with value "grid"
119
- div_grid = div().set_parse_action(with_attribute(type="grid"))
120
- grid_expr = div_grid + SkipTo(div | div_end)("body")
121
- for grid_header in grid_expr.search_string(html):
122
- print(grid_header.body)
123
-
124
- # construct a match with any div tag having a type attribute, regardless of the value
125
- div_any_type = div().set_parse_action(with_attribute(type=with_attribute.ANY_VALUE))
126
- div_expr = div_any_type + SkipTo(div | div_end)("body")
127
- for div_header in div_expr.search_string(html):
128
- print(div_header.body)
129
-
130
- prints::
131
-
132
- 1 4 0 1 0
133
-
134
- 1 4 0 1 0
135
- 1,3 2,3 1,1
136
- """
137
- if args:
138
- attrs = args[:]
139
- else:
140
- attrs = attr_dict.items()
141
- attrs = [(k, v) for k, v in attrs]
142
-
143
- def pa(s, l, tokens):
144
- for attrName, attrValue in attrs:
145
- if attrName not in tokens:
146
- raise ParseException(s, l, "no matching attribute " + attrName)
147
- if attrValue != with_attribute.ANY_VALUE and tokens[attrName] != attrValue:
148
- raise ParseException(
149
- s,
150
- l,
151
- "attribute {!r} has value {!r}, must be {!r}".format(
152
- attrName, tokens[attrName], attrValue
153
- ),
154
- )
155
-
156
- return pa
157
-
158
-
159
- with_attribute.ANY_VALUE = object()
160
-
161
-
162
- def with_class(classname, namespace=""):
163
- """
164
- Simplified version of :class:`with_attribute` when
165
- matching on a div class - made difficult because ``class`` is
166
- a reserved word in Python.
167
-
168
- Example::
169
-
170
- html = '''
171
- <div>
172
- Some text
173
- <div class="grid">1 4 0 1 0</div>
174
- <div class="graph">1,3 2,3 1,1</div>
175
- <div>this &lt;div&gt; has no class</div>
176
- </div>
177
-
178
- '''
179
- div,div_end = make_html_tags("div")
180
- div_grid = div().set_parse_action(with_class("grid"))
181
-
182
- grid_expr = div_grid + SkipTo(div | div_end)("body")
183
- for grid_header in grid_expr.search_string(html):
184
- print(grid_header.body)
185
-
186
- div_any_type = div().set_parse_action(with_class(withAttribute.ANY_VALUE))
187
- div_expr = div_any_type + SkipTo(div | div_end)("body")
188
- for div_header in div_expr.search_string(html):
189
- print(div_header.body)
190
-
191
- prints::
192
-
193
- 1 4 0 1 0
194
-
195
- 1 4 0 1 0
196
- 1,3 2,3 1,1
197
- """
198
- classattr = "{}:class".format(namespace) if namespace else "class"
199
- return with_attribute(**{classattr: classname})
200
-
201
-
202
- # pre-PEP8 compatibility symbols
203
- replaceWith = replace_with
204
- removeQuotes = remove_quotes
205
- withAttribute = with_attribute
206
- withClass = with_class
207
- matchOnlyAtCol = match_only_at_col
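The deleted module above centers on pyparsing's parse-action contract: every action receives `(s, l, t)` — the input string, the match location, and the matched tokens — and may return replacement tokens. A minimal, dependency-free sketch of that contract (these are illustrative reimplementations of two helpers from the file, not the vendored code itself):

```python
# Parse actions receive (s, l, t): input string, match location, tokens.
# Illustrative reimplementations of two helpers from the deleted module.

def replace_with(repl):
    """Return a parse action that replaces whatever matched with `repl`."""
    return lambda s, l, t: [repl]

def remove_quotes(s, l, t):
    """Strip the first and last characters (the quotes) of the first token."""
    return t[0][1:-1]

tokens = ["'Now is the Winter of our Discontent'"]
print(remove_quotes("", 0, tokens))        # Now is the Winter of our Discontent
print(replace_with("N/A")("", 0, tokens))  # ['N/A']
```

In real pyparsing these are attached with `expr.set_parse_action(...)`, which is what the docstring examples in the module demonstrate.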
spaces/Awesimo/jojogan/e4e/models/encoders/psp_encoders.py DELETED
@@ -1,200 +0,0 @@
- from enum import Enum
- import math
- import numpy as np
- import torch
- from torch import nn
- from torch.nn import Conv2d, BatchNorm2d, PReLU, Sequential, Module
-
- from e4e.models.encoders.helpers import get_blocks, bottleneck_IR, bottleneck_IR_SE, _upsample_add
- from e4e.models.stylegan2.model import EqualLinear
-
-
- class ProgressiveStage(Enum):
-     WTraining = 0
-     Delta1Training = 1
-     Delta2Training = 2
-     Delta3Training = 3
-     Delta4Training = 4
-     Delta5Training = 5
-     Delta6Training = 6
-     Delta7Training = 7
-     Delta8Training = 8
-     Delta9Training = 9
-     Delta10Training = 10
-     Delta11Training = 11
-     Delta12Training = 12
-     Delta13Training = 13
-     Delta14Training = 14
-     Delta15Training = 15
-     Delta16Training = 16
-     Delta17Training = 17
-     Inference = 18
-
-
- class GradualStyleBlock(Module):
-     def __init__(self, in_c, out_c, spatial):
-         super(GradualStyleBlock, self).__init__()
-         self.out_c = out_c
-         self.spatial = spatial
-         num_pools = int(np.log2(spatial))
-         modules = []
-         modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1),
-                     nn.LeakyReLU()]
-         for i in range(num_pools - 1):
-             modules += [
-                 Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1),
-                 nn.LeakyReLU()
-             ]
-         self.convs = nn.Sequential(*modules)
-         self.linear = EqualLinear(out_c, out_c, lr_mul=1)
-
-     def forward(self, x):
-         x = self.convs(x)
-         x = x.view(-1, self.out_c)
-         x = self.linear(x)
-         return x
-
-
- class GradualStyleEncoder(Module):
-     def __init__(self, num_layers, mode='ir', opts=None):
-         super(GradualStyleEncoder, self).__init__()
-         assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
-         assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
-         blocks = get_blocks(num_layers)
-         if mode == 'ir':
-             unit_module = bottleneck_IR
-         elif mode == 'ir_se':
-             unit_module = bottleneck_IR_SE
-         self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
-                                       BatchNorm2d(64),
-                                       PReLU(64))
-         modules = []
-         for block in blocks:
-             for bottleneck in block:
-                 modules.append(unit_module(bottleneck.in_channel,
-                                            bottleneck.depth,
-                                            bottleneck.stride))
-         self.body = Sequential(*modules)
-
-         self.styles = nn.ModuleList()
-         log_size = int(math.log(opts.stylegan_size, 2))
-         self.style_count = 2 * log_size - 2
-         self.coarse_ind = 3
-         self.middle_ind = 7
-         for i in range(self.style_count):
-             if i < self.coarse_ind:
-                 style = GradualStyleBlock(512, 512, 16)
-             elif i < self.middle_ind:
-                 style = GradualStyleBlock(512, 512, 32)
-             else:
-                 style = GradualStyleBlock(512, 512, 64)
-             self.styles.append(style)
-         self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0)
-         self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0)
-
-     def forward(self, x):
-         x = self.input_layer(x)
-
-         latents = []
-         modulelist = list(self.body._modules.values())
-         for i, l in enumerate(modulelist):
-             x = l(x)
-             if i == 6:
-                 c1 = x
-             elif i == 20:
-                 c2 = x
-             elif i == 23:
-                 c3 = x
-
-         for j in range(self.coarse_ind):
-             latents.append(self.styles[j](c3))
-
-         p2 = _upsample_add(c3, self.latlayer1(c2))
-         for j in range(self.coarse_ind, self.middle_ind):
-             latents.append(self.styles[j](p2))
-
-         p1 = _upsample_add(p2, self.latlayer2(c1))
-         for j in range(self.middle_ind, self.style_count):
-             latents.append(self.styles[j](p1))
-
-         out = torch.stack(latents, dim=1)
-         return out
-
-
- class Encoder4Editing(Module):
-     def __init__(self, num_layers, mode='ir', opts=None):
-         super(Encoder4Editing, self).__init__()
-         assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152'
-         assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se'
-         blocks = get_blocks(num_layers)
-         if mode == 'ir':
-             unit_module = bottleneck_IR
-         elif mode == 'ir_se':
-             unit_module = bottleneck_IR_SE
-         self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
-                                       BatchNorm2d(64),
-                                       PReLU(64))
-         modules = []
-         for block in blocks:
-             for bottleneck in block:
-                 modules.append(unit_module(bottleneck.in_channel,
-                                            bottleneck.depth,
-                                            bottleneck.stride))
-         self.body = Sequential(*modules)
-
-         self.styles = nn.ModuleList()
-         log_size = int(math.log(opts.stylegan_size, 2))
-         self.style_count = 2 * log_size - 2
-         self.coarse_ind = 3
-         self.middle_ind = 7
-
-         for i in range(self.style_count):
-             if i < self.coarse_ind:
-                 style = GradualStyleBlock(512, 512, 16)
-             elif i < self.middle_ind:
-                 style = GradualStyleBlock(512, 512, 32)
-             else:
-                 style = GradualStyleBlock(512, 512, 64)
-             self.styles.append(style)
-
-         self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0)
-         self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0)
-
-         self.progressive_stage = ProgressiveStage.Inference
-
-     def get_deltas_starting_dimensions(self):
-         ''' Get a list of the initial dimension of every delta from which it is applied '''
-         return list(range(self.style_count))  # Each dimension has a delta applied to it
-
-     def set_progressive_stage(self, new_stage: ProgressiveStage):
-         self.progressive_stage = new_stage
-         print('Changed progressive stage to: ', new_stage)
-
-     def forward(self, x):
-         x = self.input_layer(x)
-
-         modulelist = list(self.body._modules.values())
-         for i, l in enumerate(modulelist):
-             x = l(x)
-             if i == 6:
-                 c1 = x
-             elif i == 20:
-                 c2 = x
-             elif i == 23:
-                 c3 = x
-
-         # Infer main W and duplicate it
-         w0 = self.styles[0](c3)
-         w = w0.repeat(self.style_count, 1, 1).permute(1, 0, 2)
-         stage = self.progressive_stage.value
-         features = c3
-         for i in range(1, min(stage + 1, self.style_count)):  # Infer additional deltas
-             if i == self.coarse_ind:
-                 p2 = _upsample_add(c3, self.latlayer1(c2))  # FPN's middle features
-                 features = p2
-             elif i == self.middle_ind:
-                 p1 = _upsample_add(p2, self.latlayer2(c1))  # FPN's fine features
-                 features = p1
-             delta_i = self.styles[i](features)
-             w[:, i] += delta_i
-         return w
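The progressive-training loop in `Encoder4Editing.forward` above infers the base latent `w0` once, duplicates it across all style indices, and then adds one delta per index up to the current stage. The gating arithmetic can be isolated in plain Python (a sketch; the helper name `num_deltas` and the default `style_count=18` matching `ProgressiveStage.Inference` are illustrative, not part of the original code):

```python
def num_deltas(stage_value, style_count=18):
    """How many per-style deltas forward() adds on top of the duplicated w0,
    mirroring `for i in range(1, min(stage + 1, style_count))` above."""
    return max(0, min(stage_value + 1, style_count) - 1)

print(num_deltas(0))   # WTraining: base w0 only, no deltas
print(num_deltas(4))   # Delta4Training: deltas for indices 1..4
print(num_deltas(18))  # Inference: all style_count - 1 = 17 deltas
```

This is why setting an earlier `ProgressiveStage` freezes the later style indices at the duplicated base code during training.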
spaces/BAAI/AltDiffusion/js/index.js DELETED
@@ -1,186 +0,0 @@
- window.SD = (() => {
-   /*
-    * Painterro is made a field of the SD global object
-    * To provide convinience when using w() method in css_and_js.py
-    */
-   class PainterroClass {
-     static isOpen = false;
-     static async init ({ x, toId }) {
-       console.log(x)
-
-       const originalImage = x[2] === 'Mask' ? x[1]?.image : x[0];
-
-       if (window.Painterro === undefined) {
-         try {
-           await this.load();
-         } catch (e) {
-           SDClass.error(e);
-
-           return this.fallback(originalImage);
-         }
-       }
-
-       if (this.isOpen) {
-         return this.fallback(originalImage);
-       }
-       this.isOpen = true;
-
-       let resolveResult;
-       const paintClient = Painterro({
-         hiddenTools: ['arrow'],
-         onHide: () => {
-           resolveResult?.(null);
-         },
-         saveHandler: (image, done) => {
-           const data = image.asDataURL();
-
-           // ensures stable performance even
-           // when the editor is in interactive mode
-           SD.clearImageInput(SD.el.get(`#${toId}`));
-
-           resolveResult(data);
-
-           done(true);
-           paintClient.hide();
-         },
-       });
-
-       const result = await new Promise((resolve) => {
-         resolveResult = resolve;
-         paintClient.show(originalImage);
-       });
-       this.isOpen = false;
-
-       return result ? this.success(result) : this.fallback(originalImage);
-     }
-     static success (result) { return [result, { image: result, mask: result }] };
-     static fallback (image) { return [image, { image: image, mask: image }] };
-     static load () {
-       return new Promise((resolve, reject) => {
-         const scriptId = '__painterro-script';
-         if (document.getElementById(scriptId)) {
-           reject(new Error('Tried to load painterro script, but script tag already exists.'));
-           return;
-         }
-
-         const styleId = '__painterro-css-override';
-         if (!document.getElementById(styleId)) {
-           /* Ensure Painterro window is always on top */
-           const style = document.createElement('style');
-           style.id = styleId;
-           style.setAttribute('type', 'text/css');
-           style.appendChild(document.createTextNode(`
-             .ptro-holder-wrapper {
-               z-index: 100;
-             }
-           `));
-           document.head.appendChild(style);
-         }
-
-         const script = document.createElement('script');
-         script.id = scriptId;
-         script.src = 'https://unpkg.com/[email protected]/build/painterro.min.js';
-         script.onload = () => resolve(true);
-         script.onerror = (e) => {
-           // remove self on error to enable reattempting load
-           document.head.removeChild(script);
-           reject(e);
-         };
-         document.head.appendChild(script);
-       });
-     }
-   }
-
-   /*
-    * Turns out caching elements doesn't actually work in gradio
-    * As elements in tabs might get recreated
-    */
-   class ElementCache {
-     #el;
-     constructor () {
-       this.root = document.querySelector('gradio-app').shadowRoot;
-     }
-     get (selector) {
-       return this.root.querySelector(selector);
-     }
-   }
-
-   /*
-    * The main helper class to incapsulate functions
-    * that change gradio ui functionality
-    */
-   class SDClass {
-     el = new ElementCache();
-     Painterro = PainterroClass;
-     moveImageFromGallery ({ x, fromId, toId }) {
-       x = x[0];
-       if (!Array.isArray(x) || x.length === 0) return;
-
-       this.clearImageInput(this.el.get(`#${toId}`));
-
-       const i = this.#getGallerySelectedIndex(this.el.get(`#${fromId}`));
-
-       return [x[i].replace('data:;','data:image/png;')];
-     }
-     async copyImageFromGalleryToClipboard ({ x, fromId }) {
-       x = x[0];
-       if (!Array.isArray(x) || x.length === 0) return;
-
-       const i = this.#getGallerySelectedIndex(this.el.get(`#${fromId}`));
-
-       const data = x[i];
-       const blob = await (await fetch(data.replace('data:;','data:image/png;'))).blob();
-       const item = new ClipboardItem({'image/png': blob});
-
-       await this.copyToClipboard([item]);
-     }
-     clickFirstVisibleButton({ rowId }) {
-       const generateButtons = this.el.get(`#${rowId}`).querySelectorAll('.gr-button-primary');
-
-       if (!generateButtons) return;
-
-       for (let i = 0, arr = [...generateButtons]; i < arr.length; i++) {
-         const cs = window.getComputedStyle(arr[i]);
-
-         if (cs.display !== 'none' && cs.visibility !== 'hidden') {
-           console.log(arr[i]);
-
-           arr[i].click();
-           break;
-         }
-       }
-     }
-     async gradioInputToClipboard ({ x }) { return this.copyToClipboard(x[0]); }
-     async copyToClipboard (value) {
-       if (!value || typeof value === 'boolean') return;
-       try {
-         if (Array.isArray(value) &&
-             value.length &&
-             value[0] instanceof ClipboardItem) {
-           await navigator.clipboard.write(value);
-         } else {
-           await navigator.clipboard.writeText(value);
-         }
-       } catch (e) {
-         SDClass.error(e);
-       }
-     }
-     static error (e) {
-       console.error(e);
-       if (typeof e === 'string') {
-         alert(e);
-       } else if(typeof e === 'object' && Object.hasOwn(e, 'message')) {
-         alert(e.message);
-       }
-     }
-     clearImageInput (imageEditor) {
-       imageEditor?.querySelector('.modify-upload button:last-child')?.click();
-     }
-     #getGallerySelectedIndex (gallery) {
-       const selected = gallery.querySelector(`.\\!ring-2`);
-       return selected ? [...selected.parentNode.children].indexOf(selected) : 0;
-     }
-   }
-
-   return new SDClass();
- })();
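Two small behaviors in the deleted `SDClass` above are easy to miss: gallery items are data URLs that may arrive without a MIME type (hence the `'data:;'` patch in `moveImageFromGallery`), and `#getGallerySelectedIndex` falls back to index 0 when no item is highlighted. A Python sketch of both (helper names here are illustrative, not part of the original script):

```python
def fix_data_url(url):
    """Patch a MIME-less data URL the way moveImageFromGallery does."""
    return url.replace("data:;", "data:image/png;")

def gallery_selected_index(children, selected=None):
    """Index of the selected child among its siblings, defaulting to 0."""
    return children.index(selected) if selected in children else 0

print(fix_data_url("data:;base64,AAAA"))             # data:image/png;base64,AAAA
print(gallery_selected_index(["a", "b", "c"], "b"))  # 1
print(gallery_selected_index(["a", "b", "c"]))       # 0
```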
spaces/Benson/text-generation/Examples/Cmo Descargar Y Jugar Entre Nosotros En El PC.md DELETED
@@ -1,101 +0,0 @@
- <br />
- <h1>Cómo descargar Brawl Stars para iPhone</h1>
- <p>Si estás buscando un juego de ritmo rápido, lleno de acción y lleno de diversión para jugar en tu iPhone, definitivamente deberías ver Brawl Stars. Brawl Stars es un juego multijugador de arena de batalla en línea (MOBA) desarrollado por Supercell, los creadores de Clash of Clans y Clash Royale. En este juego, puedes elegir entre docenas de personajes únicos llamados Brawlers, cada uno con sus propias habilidades, armas y personalidades. Puedes hacer equipo con tus amigos o jugar solo en varios modos de juego, como Gem Grab, Showdown, Brawl Ball, Bounty, Heist y más. También puedes desbloquear nuevos skins, gadgets, poderes estelares y pines para personalizar tus Brawlers y mostrar tu estilo. </p>
- <h2>cómo descargar y jugar entre nosotros en el PC</h2><br /><p><b><b>Download File</b> &#8230; <a href="https://bltlly.com/2v6ILh">https://bltlly.com/2v6ILh</a></b></p><br /><br />
- <p>Brawl Stars es uno de los juegos más populares para dispositivos móviles en este momento, con más de 100 millones de descargas solo en Google Play. Pero ¿qué pasa si quieres jugar en tu iPhone? No te preocupes, tenemos todo cubierto. En este artículo, te mostraremos cómo descargar Brawl Stars para iPhone en solo unos sencillos pasos. También te daremos algunos consejos y trucos para jugar Brawl Stars en el iPhone y responder a algunas preguntas frecuentes sobre el juego. Así que sin más preámbulos, ¡empecemos! </p>
- <h2>¿Qué es Brawl Stars? </h2>
- <p>Brawl Stars es un juego MOBA 3v3 que combina elementos de tiro, lucha, estrategia y trabajo en equipo. El juego cuenta con varios modos que requieren diferentes objetivos y habilidades. Por ejemplo, en el modo Gem Grab, tienes que recoger y guardar 10 gemas para ganar; en el modo Showdown, tienes que sobrevivir el mayor tiempo posible en un battle royale; en el modo Brawl Ball, tienes que anotar dos goles antes que el otro equipo; y así sucesivamente. </p>
-
- <p>Brawl Stars es un juego que es fácil de aprender pero difícil de dominar. Tienes que usar tus habilidades, estrategia y trabajo en equipo para ganar partidos y subir de rango. También puede unirse o crear un club para chatear con otros jugadores, compartir consejos y jugar juntos. Brawl Stars es un juego que se actualiza constantemente con nuevos contenidos, como nuevos luchadores, skins, mapas, eventos y características. También puedes participar en desafíos especiales y torneos para ganar recompensas y fama. </p>
- <h2>¿Por qué jugar Brawl estrellas en el iPhone? </h2>
- <p>Brawl Stars es un juego diseñado para dispositivos móviles, y jugarlo en iPhone tiene muchas ventajas. Estas son algunas de las razones por las que deberías jugar Brawl Stars en iPhone:</p>
- <p></p>
- <ul>
- <li><strong>Compatibilidad</strong>: Brawl Stars es compatible con la mayoría de modelos de iPhone, desde iPhone 6S y versiones posteriores. No necesitas un dispositivo de alta gama para disfrutar del juego, ya que funciona sin problemas y de manera eficiente en la mayoría de los iPhones. </li>
- <li><strong>Rendimiento</strong>: Brawl Stars tiene una alta velocidad de fotogramas y baja latencia en el iPhone, lo que significa que puedes jugar el juego sin retrasos ni tartamudeos. También puede ajustar la calidad gráfica y el modo de ahorro de batería para optimizar el rendimiento de acuerdo con su preferencia. </li>
- <li><strong>Gráficos</strong>: Brawl Stars tiene gráficos coloridos y vibrantes que se ven muy bien en la pantalla de retina del iPhone. El juego tiene un estilo caricaturesco y encantador que atrae a jugadores de todas las edades. También puede apreciar los detalles y animaciones de los luchadores, pieles, mapas y efectos en la pantalla del iPhone. </li>
- <li><strong>Controles</strong>: Brawl Stars tiene controles simples e intuitivos que son fáciles de usar en la pantalla táctil del iPhone. Puede mover su brawler con el joystick izquierdo y apuntar y disparar con el joystick derecho. También puedes tocar para usar tu súper habilidad, deslizar el dedo para usar tu gadget y pellizcar para acercar o alejar. También puede personalizar los controles para adaptarse a su estilo de juego y comodidad. </li>
-
- </ul>
- <h2>¿Cómo obtener estrellas de pelea en el iPhone? </h2>
- <p>Ahora que sabes por qué deberías jugar Brawl Stars en el iPhone, vamos a ver cómo puedes conseguirlo en tu dispositivo. El proceso es muy simple y sencillo, y solo toma unos minutos. Estos son los pasos que debes seguir:</p>
- <h3>Paso 1: Abra la aplicación App Store</h3>
- <p>Lo primero que tienes que hacer es abrir la aplicación App Store en tu iPhone. Puedes encontrarla en tu pantalla de inicio o en tu biblioteca de aplicaciones. La aplicación App Store tiene un icono azul con una letra blanca A dentro. </p>
- <p><img src="https://i.imgur.com/0wYQlXf.png" alt="Icono de la App Store" width="100" height="100"></p>
- <h3>Paso 2: Buscar estrellas de pelea</h3>
- <p>Una vez que abra la aplicación App Store, debe buscar Brawl Stars en la pestaña de búsqueda. Puede encontrar la pestaña de búsqueda en la esquina inferior derecha de la pantalla. Tiene un icono de lupa. </p>
- <p><img src="https://i.imgur.com/4nqZm8R.png" alt="Icono de la pestaña de búsqueda" width="100" height="100"></p>
- <p>Toque en la pestaña de búsqueda y escriba "Brawl Stars" en la barra de búsqueda. Verá una lista de resultados que coinciden con su consulta. Busca el que dice "Brawl Stars" de Supercell y tiene un icono rojo con tres estrellas dentro. </p>
- <p><img src="https://i.imgur.com/9uGy0Wk.png" alt="Resultado de Brawl Stars" width="300" height="150"></p>
- <h3>Paso 3: Toque Obtener o el precio</h3>
- <p>Cuando encuentre Brawl Stars en los resultados, toque en él para abrir su página en el App Store. Verás información sobre el juego, como su descripción, capturas de pantalla, valoraciones, reseñas y más. </p>
- <p>Para descargar Brawl Stars, necesitas tocar el botón Obtener o el precio si no es gratis en tu región. El botón Obtener o el precio se encuentra en la esquina superior derecha de la pantalla, junto al icono y nombre del juego. </p>
- <p><img src="https://i.imgur.com/6WQ3w0F.png" alt="Obtener botón" width="300" height="150"></p>
- <h3>Paso 4: Confirmar la descarga</h3>
-
- <p><img src="https://i.imgur.com/9Zy8JlU.png" alt="Confirmar descarga" width="300" height="150"></p>
- <p>Introduzca su contraseña o utilice su huella digital o su cara para confirmar la descarga. Verá un mensaje de confirmación que dice "Descargar..." o "Comprar". </p>
- <h3>Paso 5: Espera a que termine la descarga</h3>
- <p>Ahora solo tienes que esperar a que termine la descarga. Puedes comprobar el progreso de la descarga mirando el círculo alrededor del icono del juego. El círculo se llenará a medida que avance la descarga. También puede ver el estado de descarga en la pestaña Actualizaciones de la aplicación App Store. </p>
- <p><img src="https://i.imgur.com/1ZqXa1G.png" alt="Download progress" width="300" height="150"></p>
- <p>Brawl Stars es de unos 300 MB de tamaño, por lo que puede tardar unos minutos en descargarse dependiendo de su velocidad y conexión a Internet. Asegúrate de tener suficiente espacio de almacenamiento en tu iPhone y una conexión Wi-Fi o datos móviles estable. </p>
- <h3>Paso 6: Estrellas de pelea abierta y disfrutar</h3>
- <p>Enhorabuena, ¡has descargado con éxito Brawl Stars para iPhone! Ahora puedes abrir el juego y empezar a jugar. Puedes encontrar Brawl Stars en tu pantalla de inicio o en tu biblioteca de aplicaciones. El icono del juego es rojo con tres estrellas dentro. </p>
- <p><img src="https://i.imgur.com/0wYQlXf.png" alt="Brawl Stars icon" width="100" height="></p>
- <p>Toque en el icono para abrir Brawl Stars. Verá una pantalla de carga con el logotipo del juego y algunos consejos. Espere a que el juego se cargue y luego siga las instrucciones en la pantalla para configurar su cuenta, elegir su nombre y completar el tutorial. También recibirá un luchador gratis como regalo de bienvenida. </p>
- <p><img src="https://i.imgur.com/2LJQv4r.png" alt="Brawl Stars loading screen" width="300" height="150"></p>
-
- <p><img src="https://i.imgur.com/7kqgYx9.png" alt="Brawl Stars main menú" width="300" height="150"></p>
- <p>¡Ahora estás listo para pelear! ¡Diviértete y disfruta de Brawl Stars en tu iPhone! </p>
- <h2>Consejos y trucos para jugar Brawl estrellas en el iPhone</h2>
- <p>Brawl Stars es un juego que requiere habilidad, estrategia y trabajo en equipo para ganar. Estos son algunos consejos y trucos que pueden ayudarte a mejorar tu experiencia de juego y convertirte en un mejor luchador:</p>
- <ul>
- <li><strong>Ajusta tus ajustes</strong>: Puedes personalizar tus ajustes para adaptarlos a tu estilo de juego y comodidad. Por ejemplo, puede cambiar el tamaño y la posición de los joysticks, activar o desactivar el objetivo automático, cambiar entre el modo vertical o horizontal y elegir entre tocar o deslizar para disparar. También puede activar o desactivar la vibración, los efectos de sonido, la música, el chat de voz y las notificaciones. </li>
- <li><strong>Únete a un club</strong>: Un club es un grupo de jugadores que pueden chatear, jugar juntos y compartir consejos. Unirte a un club puede ayudarte a hacer amigos, aprender de otros jugadores y divertirte más. Puede unirse a un club existente o crear su propio club con sus amigos. También puede participar en eventos del club y guerras para ganar recompensas y fama. </li>
- <li><strong>Usa gadgets y poderes estelares</strong>: Los gadgets y poderes estelares son habilidades especiales que pueden mejorar el rendimiento de tus luchadores. Los gadgets se activan deslizando sobre la pantalla y tienen un número limitado de usos por partido. Los poderes estelares son habilidades pasivas que siempre están activas una vez desbloqueadas. Puedes desbloquear gadgets y poderes estelares abriendo cajas o alcanzando ciertos hitos de trofeos. También puedes comprar gadgets y poderes estrella con monedas en la tienda. Los gadgets y los poderes de las estrellas pueden darte una ventaja en la batalla, pero tienes que usarlos sabiamente y estratégicamente. </li>
-
- <li><strong>Aprende de los pros</strong>: Si quieres mejorar tus habilidades y conocimientos, puedes ver vídeos y secuencias de reproductores profesionales y creadores de contenido. Puedes aprender de sus consejos, trucos, estrategias y errores. También puedes interactuar con ellos y hacer preguntas en el chat o comentarios. Puedes encontrar muchos videos y transmisiones de Brawl Stars en YouTube, Twitch, Reddit y otras plataformas. </li>
- </ul>
- <h2>Preguntas frecuentes sobre Brawl Stars en iPhone</h2>
- <p>Aquí están algunas de las preguntas y respuestas más comunes sobre Brawl Stars en iPhone:</p>
- <h4>¿Cómo actualizo Brawl Stars en el iPhone? </h4>
- <p>Para actualizar Brawl Stars en el iPhone, debe abrir la aplicación App Store e ir a la pestaña Actualizaciones. Verá una lista de aplicaciones que tienen actualizaciones disponibles. Busque Brawl Stars y toque en el botón Actualizar junto a él. También puede habilitar las actualizaciones automáticas en la configuración de la aplicación App Store. </p>
- <h4>¿Cómo puedo restaurar mis compras en Brawl Stars en iPhone? </h4>
- <p>Si has comprado gemas u otros artículos en Brawl Stars con dinero real y los has perdido debido a un cambio de dispositivo o un problema de juego, puedes restaurar tus compras siguiendo estos pasos:</p>
- <ol>
- <li>Abrir Brawl Stars e ir al icono de configuración en la esquina superior derecha de la pantalla. </li>
- <li>Toque en Ayuda y Soporte.</li>
- <li>Toque en Contáctenos.</li>
- <li>Escriba un mensaje explicando su situación y proporcione su etiqueta de jugador, número de recibo, fecha de compra y cantidad de compra. </li>
- <li> Enviar el mensaje y esperar una respuesta del equipo de soporte. </li>
- </ol>
- <h4>¿Cómo puedo contactar al equipo de soporte de Brawl Stars en el iPhone? </h4>
- <p>Si tienes algún problema, preguntas o comentarios sobre Brawl Stars en el iPhone, puedes ponerte en contacto con el equipo de soporte siguiendo estos pasos:</p>
- <ol>
- <li>Abrir Brawl Stars e ir al icono de configuración en la esquina superior derecha de la pantalla. </li>
- <li>Toque en Ayuda y Soporte.</li>
- <li>Toque en Contáctenos.</li>
-
- <li> Enviar el mensaje y esperar una respuesta del equipo de soporte. </li>
- </ol>
- <h4>¿Cómo puedo jugar con mis amigos en Brawl Stars en iPhone? </h4>
- <p>Si quieres jugar con tus amigos en Brawl Stars en iPhone, tienes dos opciones:</p>
- <ul>
- <li><strong>Crear o unirse a una habitación amigable</strong>: Una habitación amigable es una habitación privada donde puedes invitar a tus amigos o miembros del club a jugar juntos. Puedes crear o unirte a una sala amigable tocando el botón de juego amigable en la esquina inferior izquierda del menú principal. Puedes elegir el modo de juego y el mapa que quieras, y también puedes habilitar modificadores y bots. Puedes invitar a tus amigos o miembros del club tocando el botón de invitación en la esquina inferior derecha de la pantalla de la habitación. También puede compartir un código de habitación con sus amigos o miembros del club pulsando en el botón compartir en la esquina superior derecha de la pantalla de la habitación amigable. </li>
- <li><strong>Crear o unirse a un código de equipo</strong>: Un código de equipo es un código que se puede utilizar para unirse a un equipo con otros jugadores que quieren jugar juntos. Puede crear o unirse a un código de equipo pulsando en el botón de reproducción en la esquina inferior derecha del menú principal. Verás una lista de modos de juego que están disponibles para emparejar. Elige uno y pulsa sobre él. Verá una pantalla donde puede seleccionar su luchador y ver a sus compañeros de equipo. En la esquina superior izquierda de esta pantalla, verá un botón que dice "Crear/ Unirse al código de equipo". Toque en él para crear o unirse a un código de equipo. Puedes compartir tu código de equipo con tus amigos o miembros del club pulsando en el botón de compartir al lado. También puede introducir un código de equipo que alguien más ha compartido con usted pulsando en el botón entrar junto a él. </li>
- </ul>
- <h4>¿Cómo puedo canjear códigos en Brawl Stars en iPhone? </h4>
-
- <ol>
- <li>Abre Brawl Stars y ve al icono de la tienda en la esquina superior izquierda del menú principal. </li>
- <li>Desplácese hacia abajo hasta la parte inferior de la pantalla de la tienda y busque un botón que diga "Canjear código". Toque en él para abrir una ventana emergente. </li>
- <li> Introduzca el código que ha recibido en el cuadro de texto y toque en el botón confirmar. </li>
- <li> Verá un mensaje que dice "Código redimido" y las recompensas que ha recibido. Toque en el botón de reclamación para recoger sus recompensas. </li>
- </ol>
- <p>Tenga en cuenta que los códigos distinguen entre mayúsculas y minúsculas y tienen una fecha de vencimiento. Solo puede usar un código por cuenta. Si introduce un código inválido o caducado, verá un mensaje de error que dice "Código inválido" o "Código caducado". </p>
- <h2>Conclusión</h2>
- <p>Brawl Stars es un juego divertido y emocionante que puedes jugar en tu iPhone. Puedes descargarlo desde la App Store en unos sencillos pasos y disfrutar de sus características, modos, personajes y jugabilidad. También puedes mejorar tus habilidades, unirte a un club, participar en eventos y canjear códigos para obtener más recompensas y diversión. Brawl Stars es un juego que se actualiza constantemente con nuevos contenidos y mejoras, por lo que nunca te aburrirás de él. </p>
- <p>¿Qué estás esperando? Descargar Brawl Stars para iPhone hoy y unirse a los millones de jugadores que están luchando su camino a la gloria! </p>
- <h2></h2></p> 64aa2da5cf<br />
- <br />
- <br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distro/__main__.py DELETED
@@ -1,4 +0,0 @@
- from .distro import main
-
- if __name__ == "__main__":
-     main()
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/pangomarkup.py DELETED
@@ -1,83 +0,0 @@
- """
-     pygments.formatters.pangomarkup
-     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-     Formatter for Pango markup output.
-
-     :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
-     :license: BSD, see LICENSE for details.
- """
-
- from pip._vendor.pygments.formatter import Formatter
-
-
- __all__ = ['PangoMarkupFormatter']
-
-
- _escape_table = {
-     ord('&'): '&amp;',
-     ord('<'): '&lt;',
- }
-
-
- def escape_special_chars(text, table=_escape_table):
-     """Escape & and < for Pango Markup."""
-     return text.translate(table)
-
-
- class PangoMarkupFormatter(Formatter):
-     """
-     Format tokens as Pango Markup code. It can then be rendered to an SVG.
-
-     .. versionadded:: 2.9
-     """
-
-     name = 'Pango Markup'
-     aliases = ['pango', 'pangomarkup']
-     filenames = []
-
-     def __init__(self, **options):
-         Formatter.__init__(self, **options)
-
-         self.styles = {}
-
-         for token, style in self.style:
-             start = ''
-             end = ''
-             if style['color']:
-                 start += '<span fgcolor="#%s">' % style['color']
-                 end = '</span>' + end
-             if style['bold']:
-                 start += '<b>'
-                 end = '</b>' + end
-             if style['italic']:
-                 start += '<i>'
-                 end = '</i>' + end
-             if style['underline']:
-                 start += '<u>'
-                 end = '</u>' + end
-             self.styles[token] = (start, end)
-
-     def format_unencoded(self, tokensource, outfile):
-         lastval = ''
-         lasttype = None
-
-         outfile.write('<tt>')
-
-         for ttype, value in tokensource:
-             while ttype not in self.styles:
-                 ttype = ttype.parent
-             if ttype == lasttype:
-                 lastval += escape_special_chars(value)
-             else:
-                 if lastval:
-                     stylebegin, styleend = self.styles[lasttype]
-                     outfile.write(stylebegin + lastval + styleend)
-                 lastval = escape_special_chars(value)
-                 lasttype = ttype
-
-         if lastval:
-             stylebegin, styleend = self.styles[lasttype]
-             outfile.write(stylebegin + lastval + styleend)
-
-         outfile.write('</tt>')
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/style.py DELETED
@@ -1,796 +0,0 @@
- import sys
- from functools import lru_cache
- from marshal import dumps, loads
- from random import randint
- from typing import Any, Dict, Iterable, List, Optional, Type, Union, cast
-
- from . import errors
- from .color import Color, ColorParseError, ColorSystem, blend_rgb
- from .repr import Result, rich_repr
- from .terminal_theme import DEFAULT_TERMINAL_THEME, TerminalTheme
-
- # Style instances and style definitions are often interchangeable
- StyleType = Union[str, "Style"]
-
-
- class _Bit:
-     """A descriptor to get/set a style attribute bit."""
-
-     __slots__ = ["bit"]
-
-     def __init__(self, bit_no: int) -> None:
-         self.bit = 1 << bit_no
-
-     def __get__(self, obj: "Style", objtype: Type["Style"]) -> Optional[bool]:
-         if obj._set_attributes & self.bit:
-             return obj._attributes & self.bit != 0
-         return None
-
-
- @rich_repr
- class Style:
-     """A terminal style.
-
-     A terminal style consists of a color (`color`), a background color (`bgcolor`), and a number of attributes, such
-     as bold, italic etc. The attributes have 3 states: they can either be on
-     (``True``), off (``False``), or not set (``None``).
-
-     Args:
-         color (Union[Color, str], optional): Color of terminal text. Defaults to None.
-         bgcolor (Union[Color, str], optional): Color of terminal background. Defaults to None.
-         bold (bool, optional): Enable bold text. Defaults to None.
-         dim (bool, optional): Enable dim text. Defaults to None.
-         italic (bool, optional): Enable italic text. Defaults to None.
-         underline (bool, optional): Enable underlined text. Defaults to None.
-         blink (bool, optional): Enabled blinking text. Defaults to None.
-         blink2 (bool, optional): Enable fast blinking text. Defaults to None.
-         reverse (bool, optional): Enabled reverse text. Defaults to None.
-         conceal (bool, optional): Enable concealed text. Defaults to None.
-         strike (bool, optional): Enable strikethrough text. Defaults to None.
-         underline2 (bool, optional): Enable doubly underlined text. Defaults to None.
-         frame (bool, optional): Enable framed text. Defaults to None.
-         encircle (bool, optional): Enable encircled text. Defaults to None.
-         overline (bool, optional): Enable overlined text. Defaults to None.
-         link (str, link): Link URL. Defaults to None.
-
-     """
-
-     _color: Optional[Color]
-     _bgcolor: Optional[Color]
-     _attributes: int
-     _set_attributes: int
-     _hash: Optional[int]
-     _null: bool
-     _meta: Optional[bytes]
-
-     __slots__ = [
-         "_color",
-         "_bgcolor",
-         "_attributes",
-         "_set_attributes",
-         "_link",
-         "_link_id",
-         "_ansi",
-         "_style_definition",
-         "_hash",
-         "_null",
-         "_meta",
-     ]
-
-     # maps bits on to SGR parameter
-     _style_map = {
-         0: "1",
-         1: "2",
-         2: "3",
-         3: "4",
-         4: "5",
-         5: "6",
-         6: "7",
-         7: "8",
-         8: "9",
-         9: "21",
-         10: "51",
-         11: "52",
-         12: "53",
-     }
-
-     STYLE_ATTRIBUTES = {
-         "dim": "dim",
-         "d": "dim",
-         "bold": "bold",
-         "b": "bold",
-         "italic": "italic",
-         "i": "italic",
-         "underline": "underline",
-         "u": "underline",
-         "blink": "blink",
-         "blink2": "blink2",
-         "reverse": "reverse",
-         "r": "reverse",
-         "conceal": "conceal",
-         "c": "conceal",
-         "strike": "strike",
-         "s": "strike",
-         "underline2": "underline2",
-         "uu": "underline2",
-         "frame": "frame",
-         "encircle": "encircle",
-         "overline": "overline",
-         "o": "overline",
-     }
-
-     def __init__(
-         self,
-         *,
-         color: Optional[Union[Color, str]] = None,
-         bgcolor: Optional[Union[Color, str]] = None,
-         bold: Optional[bool] = None,
-         dim: Optional[bool] = None,
-         italic: Optional[bool] = None,
-         underline: Optional[bool] = None,
-         blink: Optional[bool] = None,
-         blink2: Optional[bool] = None,
-         reverse: Optional[bool] = None,
-         conceal: Optional[bool] = None,
-         strike: Optional[bool] = None,
-         underline2: Optional[bool] = None,
-         frame: Optional[bool] = None,
-         encircle: Optional[bool] = None,
-         overline: Optional[bool] = None,
-         link: Optional[str] = None,
-         meta: Optional[Dict[str, Any]] = None,
-     ):
-         self._ansi: Optional[str] = None
-         self._style_definition: Optional[str] = None
-
-         def _make_color(color: Union[Color, str]) -> Color:
-             return color if isinstance(color, Color) else Color.parse(color)
-
-         self._color = None if color is None else _make_color(color)
-         self._bgcolor = None if bgcolor is None else _make_color(bgcolor)
-         self._set_attributes = sum(
-             (
-                 bold is not None,
-                 dim is not None and 2,
-                 italic is not None and 4,
-                 underline is not None and 8,
-                 blink is not None and 16,
-                 blink2 is not None and 32,
-                 reverse is not None and 64,
-                 conceal is not None and 128,
-                 strike is not None and 256,
-                 underline2 is not None and 512,
-                 frame is not None and 1024,
-                 encircle is not None and 2048,
-                 overline is not None and 4096,
-             )
-         )
-         self._attributes = (
-             sum(
-                 (
-                     bold and 1 or 0,
-                     dim and 2 or 0,
-                     italic and 4 or 0,
-                     underline and 8 or 0,
-                     blink and 16 or 0,
-                     blink2 and 32 or 0,
-                     reverse and 64 or 0,
-                     conceal and 128 or 0,
-                     strike and 256 or 0,
-                     underline2 and 512 or 0,
-                     frame and 1024 or 0,
-                     encircle and 2048 or 0,
-                     overline and 4096 or 0,
-                 )
-             )
-             if self._set_attributes
-             else 0
-         )
-
-         self._link = link
-         self._meta = None if meta is None else dumps(meta)
-         self._link_id = (
-             f"{randint(0, 999999)}{hash(self._meta)}" if (link or meta) else ""
-         )
-         self._hash: Optional[int] = None
-         self._null = not (self._set_attributes or color or bgcolor or link or meta)
-
-     @classmethod
-     def null(cls) -> "Style":
-         """Create an 'null' style, equivalent to Style(), but more performant."""
-         return NULL_STYLE
-
-     @classmethod
-     def from_color(
-         cls, color: Optional[Color] = None, bgcolor: Optional[Color] = None
-     ) -> "Style":
-         """Create a new style with colors and no attributes.
-
-         Returns:
-             color (Optional[Color]): A (foreground) color, or None for no color. Defaults to None.
-             bgcolor (Optional[Color]): A (background) color, or None for no color. Defaults to None.
-         """
-         style: Style = cls.__new__(Style)
-         style._ansi = None
-         style._style_definition = None
-         style._color = color
-         style._bgcolor = bgcolor
-         style._set_attributes = 0
-         style._attributes = 0
-         style._link = None
-         style._link_id = ""
-         style._meta = None
-         style._null = not (color or bgcolor)
-         style._hash = None
-         return style
-
-     @classmethod
-     def from_meta(cls, meta: Optional[Dict[str, Any]]) -> "Style":
-         """Create a new style with meta data.
-
-         Returns:
-             meta (Optional[Dict[str, Any]]): A dictionary of meta data. Defaults to None.
-         """
-         style: Style = cls.__new__(Style)
-         style._ansi = None
-         style._style_definition = None
-         style._color = None
-         style._bgcolor = None
-         style._set_attributes = 0
-         style._attributes = 0
-         style._link = None
-         style._meta = dumps(meta)
-         style._link_id = f"{randint(0, 999999)}{hash(style._meta)}"
-         style._hash = None
-         style._null = not (meta)
-         return style
-
-     @classmethod
-     def on(cls, meta: Optional[Dict[str, Any]] = None, **handlers: Any) -> "Style":
-         """Create a blank style with meta information.
-
-         Example:
-             style = Style.on(click=self.on_click)
-
-         Args:
-             meta (Optional[Dict[str, Any]], optional): An optional dict of meta information.
-             **handlers (Any): Keyword arguments are translated in to handlers.
-
-         Returns:
-             Style: A Style with meta information attached.
-         """
-         meta = {} if meta is None else meta
-         meta.update({f"@{key}": value for key, value in handlers.items()})
-         return cls.from_meta(meta)
-
-     bold = _Bit(0)
-     dim = _Bit(1)
-     italic = _Bit(2)
-     underline = _Bit(3)
-     blink = _Bit(4)
-     blink2 = _Bit(5)
-     reverse = _Bit(6)
-     conceal = _Bit(7)
-     strike = _Bit(8)
-     underline2 = _Bit(9)
-     frame = _Bit(10)
-     encircle = _Bit(11)
-     overline = _Bit(12)
-
-     @property
-     def link_id(self) -> str:
-         """Get a link id, used in ansi code for links."""
-         return self._link_id
-
-     def __str__(self) -> str:
-         """Re-generate style definition from attributes."""
-         if self._style_definition is None:
-             attributes: List[str] = []
-             append = attributes.append
-             bits = self._set_attributes
-             if bits & 0b0000000001111:
-                 if bits & 1:
-                     append("bold" if self.bold else "not bold")
-                 if bits & (1 << 1):
-                     append("dim" if self.dim else "not dim")
-                 if bits & (1 << 2):
-                     append("italic" if self.italic else "not italic")
-                 if bits & (1 << 3):
-                     append("underline" if self.underline else "not underline")
-             if bits & 0b0000111110000:
-                 if bits & (1 << 4):
-                     append("blink" if self.blink else "not blink")
-                 if bits & (1 << 5):
-                     append("blink2" if self.blink2 else "not blink2")
-                 if bits & (1 << 6):
-                     append("reverse" if self.reverse else "not reverse")
-                 if bits & (1 << 7):
-                     append("conceal" if self.conceal else "not conceal")
-                 if bits & (1 << 8):
-                     append("strike" if self.strike else "not strike")
-             if bits & 0b1111000000000:
-                 if bits & (1 << 9):
-                     append("underline2" if self.underline2 else "not underline2")
-                 if bits & (1 << 10):
-                     append("frame" if self.frame else "not frame")
-                 if bits & (1 << 11):
-                     append("encircle" if self.encircle else "not encircle")
-                 if bits & (1 << 12):
-                     append("overline" if self.overline else "not overline")
-             if self._color is not None:
-                 append(self._color.name)
-             if self._bgcolor is not None:
-                 append("on")
-                 append(self._bgcolor.name)
-             if self._link:
-                 append("link")
-                 append(self._link)
-             self._style_definition = " ".join(attributes) or "none"
-         return self._style_definition
-
-     def __bool__(self) -> bool:
-         """A Style is false if it has no attributes, colors, or links."""
-         return not self._null
-
-     def _make_ansi_codes(self, color_system: ColorSystem) -> str:
-         """Generate ANSI codes for this style.
-
-         Args:
-             color_system (ColorSystem): Color system.
-
-         Returns:
-             str: String containing codes.
-         """
-
-         if self._ansi is None:
-             sgr: List[str] = []
-             append = sgr.append
-             _style_map = self._style_map
-             attributes = self._attributes & self._set_attributes
-             if attributes:
-                 if attributes & 1:
-                     append(_style_map[0])
-                 if attributes & 2:
-                     append(_style_map[1])
-                 if attributes & 4:
-                     append(_style_map[2])
-                 if attributes & 8:
-                     append(_style_map[3])
-                 if attributes & 0b0000111110000:
-                     for bit in range(4, 9):
-                         if attributes & (1 << bit):
-                             append(_style_map[bit])
-                 if attributes & 0b1111000000000:
-                     for bit in range(9, 13):
-                         if attributes & (1 << bit):
-                             append(_style_map[bit])
-             if self._color is not None:
-                 sgr.extend(self._color.downgrade(color_system).get_ansi_codes())
-             if self._bgcolor is not None:
-                 sgr.extend(
-                     self._bgcolor.downgrade(color_system).get_ansi_codes(
-                         foreground=False
-                     )
-                 )
-             self._ansi = ";".join(sgr)
-         return self._ansi
-
-     @classmethod
-     @lru_cache(maxsize=1024)
-     def normalize(cls, style: str) -> str:
-         """Normalize a style definition so that styles with the same effect have the same string
-         representation.
-
-         Args:
-             style (str): A style definition.
-
-         Returns:
-             str: Normal form of style definition.
-         """
-         try:
-             return str(cls.parse(style))
-         except errors.StyleSyntaxError:
-             return style.strip().lower()
-
-     @classmethod
-     def pick_first(cls, *values: Optional[StyleType]) -> StyleType:
-         """Pick first non-None style."""
-         for value in values:
-             if value is not None:
-                 return value
-         raise ValueError("expected at least one non-None style")
-
-     def __rich_repr__(self) -> Result:
-         yield "color", self.color, None
-         yield "bgcolor", self.bgcolor, None
-         yield "bold", self.bold, None,
-         yield "dim", self.dim, None,
-         yield "italic", self.italic, None
-         yield "underline", self.underline, None,
-         yield "blink", self.blink, None
-         yield "blink2", self.blink2, None
-         yield "reverse", self.reverse, None
-         yield "conceal", self.conceal, None
-         yield "strike", self.strike, None
-         yield "underline2", self.underline2, None
-         yield "frame", self.frame, None
-         yield "encircle", self.encircle, None
-         yield "link", self.link, None
-         if self._meta:
-             yield "meta", self.meta
-
-     def __eq__(self, other: Any) -> bool:
-         if not isinstance(other, Style):
-             return NotImplemented
-         return self.__hash__() == other.__hash__()
-
-     def __ne__(self, other: Any) -> bool:
-         if not isinstance(other, Style):
-             return NotImplemented
-         return self.__hash__() != other.__hash__()
-
-     def __hash__(self) -> int:
-         if self._hash is not None:
-             return self._hash
-         self._hash = hash(
-             (
-                 self._color,
-                 self._bgcolor,
-                 self._attributes,
-                 self._set_attributes,
-                 self._link,
-                 self._meta,
-             )
-         )
-         return self._hash
-
-     @property
-     def color(self) -> Optional[Color]:
-         """The foreground color or None if it is not set."""
-         return self._color
-
-     @property
-     def bgcolor(self) -> Optional[Color]:
-         """The background color or None if it is not set."""
-         return self._bgcolor
-
-     @property
-     def link(self) -> Optional[str]:
-         """Link text, if set."""
-         return self._link
-
-     @property
-     def transparent_background(self) -> bool:
-         """Check if the style specified a transparent background."""
-         return self.bgcolor is None or self.bgcolor.is_default
-
-     @property
-     def background_style(self) -> "Style":
-         """A Style with background only."""
-         return Style(bgcolor=self.bgcolor)
-
-     @property
-     def meta(self) -> Dict[str, Any]:
-         """Get meta information (can not be changed after construction)."""
-         return {} if self._meta is None else cast(Dict[str, Any], loads(self._meta))
-
-     @property
-     def without_color(self) -> "Style":
-         """Get a copy of the style with color removed."""
-         if self._null:
-             return NULL_STYLE
-         style: Style = self.__new__(Style)
-         style._ansi = None
-         style._style_definition = None
-         style._color = None
-         style._bgcolor = None
-         style._attributes = self._attributes
-         style._set_attributes = self._set_attributes
-         style._link = self._link
-         style._link_id = f"{randint(0, 999999)}" if self._link else ""
-         style._null = False
-         style._meta = None
-         style._hash = None
-         return style
-
-     @classmethod
-     @lru_cache(maxsize=4096)
-     def parse(cls, style_definition: str) -> "Style":
-         """Parse a style definition.
-
-         Args:
-             style_definition (str): A string containing a style.
-
-         Raises:
-             errors.StyleSyntaxError: If the style definition syntax is invalid.
-
-         Returns:
-             `Style`: A Style instance.
-         """
-         if style_definition.strip() == "none" or not style_definition:
-             return cls.null()
-
-         STYLE_ATTRIBUTES = cls.STYLE_ATTRIBUTES
-         color: Optional[str] = None
-         bgcolor: Optional[str] = None
-         attributes: Dict[str, Optional[Any]] = {}
-         link: Optional[str] = None
-
-         words = iter(style_definition.split())
-         for original_word in words:
-             word = original_word.lower()
-             if word == "on":
-                 word = next(words, "")
-                 if not word:
-                     raise errors.StyleSyntaxError("color expected after 'on'")
-                 try:
-                     Color.parse(word) is None
-                 except ColorParseError as error:
-                     raise errors.StyleSyntaxError(
-                         f"unable to parse {word!r} as background color; {error}"
-                     ) from None
-                 bgcolor = word
-
-             elif word == "not":
-                 word = next(words, "")
-                 attribute = STYLE_ATTRIBUTES.get(word)
-                 if attribute is None:
-                     raise errors.StyleSyntaxError(
-                         f"expected style attribute after 'not', found {word!r}"
-                     )
-                 attributes[attribute] = False
-
-             elif word == "link":
-                 word = next(words, "")
-                 if not word:
-                     raise errors.StyleSyntaxError("URL expected after 'link'")
-                 link = word
-
-             elif word in STYLE_ATTRIBUTES:
-                 attributes[STYLE_ATTRIBUTES[word]] = True
-
-             else:
-                 try:
-                     Color.parse(word)
-                 except ColorParseError as error:
-                     raise errors.StyleSyntaxError(
-                         f"unable to parse {word!r} as color; {error}"
-                     ) from None
-                 color = word
-         style = Style(color=color, bgcolor=bgcolor, link=link, **attributes)
-         return style
-
-     @lru_cache(maxsize=1024)
-     def get_html_style(self, theme: Optional[TerminalTheme] = None) -> str:
-         """Get a CSS style rule."""
-         theme = theme or DEFAULT_TERMINAL_THEME
-         css: List[str] = []
-         append = css.append
-
-         color = self.color
-         bgcolor = self.bgcolor
-         if self.reverse:
-             color, bgcolor = bgcolor, color
-         if self.dim:
-             foreground_color = (
-                 theme.foreground_color if color is None else color.get_truecolor(theme)
-             )
-             color = Color.from_triplet(
-                 blend_rgb(foreground_color, theme.background_color, 0.5)
-             )
-         if color is not None:
-             theme_color = color.get_truecolor(theme)
-             append(f"color: {theme_color.hex}")
-             append(f"text-decoration-color: {theme_color.hex}")
-         if bgcolor is not None:
-             theme_color = bgcolor.get_truecolor(theme, foreground=False)
-             append(f"background-color: {theme_color.hex}")
-         if self.bold:
-             append("font-weight: bold")
-         if self.italic:
-             append("font-style: italic")
-         if self.underline:
-             append("text-decoration: underline")
-         if self.strike:
-             append("text-decoration: line-through")
-         if self.overline:
-             append("text-decoration: overline")
-         return "; ".join(css)
-
-     @classmethod
-     def combine(cls, styles: Iterable["Style"]) -> "Style":
-         """Combine styles and get result.
-
-         Args:
-             styles (Iterable[Style]): Styles to combine.
-
-         Returns:
-             Style: A new style instance.
-         """
-         iter_styles = iter(styles)
-         return sum(iter_styles, next(iter_styles))
-
-     @classmethod
-     def chain(cls, *styles: "Style") -> "Style":
-         """Combine styles from positional argument in to a single style.
-
-         Args:
-             *styles (Iterable[Style]): Styles to combine.
-
-         Returns:
-             Style: A new style instance.
-         """
-         iter_styles = iter(styles)
-         return sum(iter_styles, next(iter_styles))
-
-     def copy(self) -> "Style":
-         """Get a copy of this style.
-
-         Returns:
-             Style: A new Style instance with identical attributes.
-         """
-         if self._null:
-             return NULL_STYLE
-         style: Style = self.__new__(Style)
-         style._ansi = self._ansi
-         style._style_definition = self._style_definition
-         style._color = self._color
-         style._bgcolor = self._bgcolor
-         style._attributes = self._attributes
-         style._set_attributes = self._set_attributes
-         style._link = self._link
-         style._link_id = f"{randint(0, 999999)}" if self._link else ""
-         style._hash = self._hash
-         style._null = False
-         style._meta = self._meta
-         return style
-
-     @lru_cache(maxsize=128)
-     def clear_meta_and_links(self) -> "Style":
-         """Get a copy of this style with link and meta information removed.
-
-         Returns:
-             Style: New style object.
-         """
-         if self._null:
-             return NULL_STYLE
-         style: Style = self.__new__(Style)
-         style._ansi = self._ansi
-         style._style_definition = self._style_definition
-         style._color = self._color
-         style._bgcolor = self._bgcolor
-         style._attributes = self._attributes
-         style._set_attributes = self._set_attributes
-         style._link = None
-         style._link_id = ""
-         style._hash = self._hash
-         style._null = False
-         style._meta = None
-         return style
-
-     def update_link(self, link: Optional[str] = None) -> "Style":
-         """Get a copy with a different value for link.
-
-         Args:
-             link (str, optional): New value for link. Defaults to None.
-
-         Returns:
-             Style: A new Style instance.
-         """
-         style: Style = self.__new__(Style)
-         style._ansi = self._ansi
-         style._style_definition = self._style_definition
-         style._color = self._color
-         style._bgcolor = self._bgcolor
-         style._attributes = self._attributes
-         style._set_attributes = self._set_attributes
-         style._link = link
-         style._link_id = f"{randint(0, 999999)}" if link else ""
-         style._hash = None
-         style._null = False
-         style._meta = self._meta
-         return style
-
-     def render(
-         self,
-         text: str = "",
-         *,
-         color_system: Optional[ColorSystem] = ColorSystem.TRUECOLOR,
-         legacy_windows: bool = False,
-     ) -> str:
-         """Render the ANSI codes for the style.
-
-         Args:
-             text (str, optional): A string to style. Defaults to "".
-             color_system (Optional[ColorSystem], optional): Color system to render to. Defaults to ColorSystem.TRUECOLOR.
-
-         Returns:
-             str: A string containing ANSI style codes.
-         """
-         if not text or color_system is None:
-             return text
-         attrs = self._ansi or self._make_ansi_codes(color_system)
-         rendered = f"\x1b[{attrs}m{text}\x1b[0m" if attrs else text
-         if self._link and not legacy_windows:
-             rendered = (
-                 f"\x1b]8;id={self._link_id};{self._link}\x1b\\{rendered}\x1b]8;;\x1b\\"
-             )
-         return rendered
-
-     def test(self, text: Optional[str] = None) -> None:
-         """Write text with style directly to terminal.
-
-         This method is for testing purposes only.
-
-         Args:
-             text (Optional[str], optional): Text to style or None for style name.
-
-         """
-         text = text or str(self)
-         sys.stdout.write(f"{self.render(text)}\n")
-
-     @lru_cache(maxsize=1024)
-     def _add(self, style: Optional["Style"]) -> "Style":
-         if style is None or style._null:
-             return self
-         if self._null:
-             return style
-         new_style: Style = self.__new__(Style)
-         new_style._ansi = None
-         new_style._style_definition = None
-         new_style._color = style._color or self._color
-         new_style._bgcolor = style._bgcolor or self._bgcolor
-         new_style._attributes = (self._attributes & ~style._set_attributes) | (
-             style._attributes & style._set_attributes
-         )
-         new_style._set_attributes = self._set_attributes | style._set_attributes
-         new_style._link = style._link or self._link
-         new_style._link_id = style._link_id or self._link_id
-         new_style._null = style._null
-         if self._meta and style._meta:
-             new_style._meta = dumps({**self.meta, **style.meta})
-         else:
-             new_style._meta = self._meta or style._meta
-         new_style._hash = None
-         return new_style
-
-     def __add__(self, style: Optional["Style"]) -> "Style":
-         combined_style = self._add(style)
-         return combined_style.copy() if combined_style.link else combined_style
-
-
- NULL_STYLE = Style()
-
-
- class StyleStack:
-     """A stack of styles."""
-
-     __slots__ = ["_stack"]
-
-     def __init__(self, default_style: "Style") -> None:
-         self._stack: List[Style] = [default_style]
-
-     def __repr__(self) -> str:
-         return f"<stylestack {self._stack!r}>"
-
-     @property
-     def current(self) -> Style:
-         """Get the Style at the top of the stack."""
-         return self._stack[-1]
-
-     def push(self, style: Style) -> None:
-         """Push a new style on to the stack.
-
-         Args:
-             style (Style): New style to combine with current style.
-         """
-         self._stack.append(self._stack[-1] + style)
-
-     def pop(self) -> Style:
-         """Pop last style and discard.
-
-         Returns:
-             Style: New current style (also available as stack.current)
-         """
-         self._stack.pop()
-         return self._stack[-1]
spaces/BraydenMoore/MARCI-NFL-Betting/Source/Test/xgboost_ML.py DELETED
@@ -1,59 +0,0 @@
- import xgboost as xgb
- import pandas as pd
- import pickle as pkl
- import numpy as np
- import os
-
- model = 'xgboost_ML_no_odds_71.4%'
-
- current_directory = os.path.dirname(os.path.abspath(__file__))
- parent_directory = os.path.dirname(current_directory)
- data_directory = os.path.join(parent_directory, 'Data')
- model_directory = os.path.join(parent_directory, 'Models')
- pickle_directory = os.path.join(parent_directory, 'Pickles')
-
- file_path = os.path.join(model_directory, f'{model}.json')
- xgb_ml = xgb.Booster()
- xgb_ml.load_model(file_path)
-
- file_path = os.path.join(pickle_directory, 'test_games_ML_no_odds.pkl')
- with open(file_path,'rb') as f:
-     test_games = pkl.load(f).tolist()
-
- file_path = os.path.join(data_directory, 'gbg_and_odds.csv')
- gbg_and_odds = pd.read_csv(file_path)
- test_data = gbg_and_odds.loc[gbg_and_odds['game_id'].isin(test_games)]
- test_data_matrix = xgb.DMatrix(test_data.drop(columns=['game_id','Over','Home-Team-Win','Season','home_team','away_team','game_date','Key','Home Score','Away Score','Home Odds Close','Away Odds Close','Home Winnings','Away Winnings','Away Odds','Home Odds']).astype(float).values)
-
- predicted_probas = xgb_ml.predict(test_data_matrix)
- predictions = np.argmax(predicted_probas, axis=1)
- test_data['predicted_proba'] = [i[1] for i in predicted_probas]
- test_data['prediction'] = (test_data['predicted_proba']>0.5).astype(int)
- test_data['correct'] = test_data['Home-Team-Win']==test_data['prediction']
-
- bets = test_data.loc[(test_data['predicted_proba']>0.6) | (test_data['predicted_proba']<0.4)]
- bets['winnings'] = [h if p==1 else a for h,a,p in bets[['Home Winnings','Away Winnings','prediction']].values]
-
- import matplotlib.pyplot as plt
- fig = plt.figure(facecolor='black')
- ax = fig.add_subplot(1, 1, 1, facecolor='black')
-
- # Plot data with line color as RGB(0, 128, 0)
- ax.plot(bets['winnings'].cumsum().values*100, linewidth=3, color=(0/255, 128/255, 0/255))
-
- # Set title and labels
- ax.set_title('MARCI 3.0 - MoneyLine w/ 60% Confidence Threshold', color='white')
- ax.set_xlabel('Games Bet On', color='white')
- ax.set_ylabel('Return (%)', color='white')
-
- # Change tick colors to white
- ax.tick_params(axis='x', colors='white')
- ax.tick_params(axis='y', colors='white')
-
- # Change axis edge colors
- ax.spines['bottom'].set_color('white')
- ax.spines['top'].set_color('white')
- ax.spines['left'].set_color('white')
- ax.spines['right'].set_color('white')
-
- plt.savefig(f'{model}_dark.png', facecolor='black')
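The deleted evaluation script above only places a bet when the model's home-win probability clears a confidence band around 0.5 (below 0.4 or above 0.6), then collects the payout of whichever side the probability implies. The following is a minimal, self-contained sketch of that selection-and-payout logic, not the original implementation; the column semantics (home/away payout arrays) are assumptions read off the diff.

```python
import numpy as np

def select_bets(probas, low=0.4, high=0.6):
    """Indices of games whose predicted home-win probability is confident
    enough to bet on, i.e. outside the (low, high) band around 0.5."""
    probas = np.asarray(probas, dtype=float)
    return np.where((probas > high) | (probas < low))[0]

def cumulative_winnings(probas, home_payouts, away_payouts):
    """For each selected game, back the home side if proba > 0.5, else the
    away side, and accumulate that side's payout."""
    idx = select_bets(probas)
    probas = np.asarray(probas, dtype=float)
    picks_home = probas[idx] > 0.5
    payouts = np.where(picks_home,
                       np.asarray(home_payouts, dtype=float)[idx],
                       np.asarray(away_payouts, dtype=float)[idx])
    return payouts.cumsum()
```

The cumulative sum is what the script's matplotlib figure plots (scaled by 100 to read as a percentage return).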
spaces/BrianL/CoE197-Fil-DialectTranslator/app.py DELETED
@@ -1,36 +0,0 @@
- import gradio as gr
- from transformers import pipeline
-
-
- def trnslt(TagalogText,Language):
-     txt_inp = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-tl-en")
-     if Language=="Cebuano":
-         ceb1 = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-en-ceb")
-         out_ceb = gr.Series(txt_inp,ceb1)
-         return out_ceb(TagalogText)
-     elif Language=="Ilocano":
-         ilo1 = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-en-ilo")
-         out_ilo = gr.Series(txt_inp,ilo1)
-         return out_ilo(TagalogText)
-     elif Language=="Hiligaynon":
-         hil1 = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-en-hil")
-         out_hil = gr.Series(txt_inp,hil1)
-         return out_hil(TagalogText)
-
- iface = gr.Interface(
-     fn=trnslt,
-     inputs=[gr.inputs.Textbox(label="Input Tagalog Text"),
-             gr.inputs.Radio(["Cebuano","Ilocano","Hiligaynon"],label="Translate to",optional=False)],
-     outputs='text',
-     examples=[["Magandang Umaga","Cebuano"],["Magandang gabi","Ilocano"],["Masarap ang Adobo","Hiligaynon"],
-               ["Kumusta Ka Na","Cebuano"],["Bumibili si Juan ng manok","Ilocano"],["Magandang umaga","Hiligaynon"]],
-     live=True,
-     theme="dark-seafoam",
-     title="Basic Filipino Dialect Translator",
-     description=" This application uses Helsinki-NLP models to translate Tagalog texts to 3 other dialects of the Filipino language",
-     css=".footer{display:none !important}",
- )
-
- iface.launch()
-
-
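The deleted Space above translates in two hops, pivoting through English: Tagalog text goes through opus-mt-tl-en, and the English output is chained into the English-to-dialect model chosen by the radio button. Below is a hedged sketch of that routing; `load_pipeline` is a hypothetical injected loader (not part of the original app) so the dispatch can be exercised without downloading the models.

```python
# Model names taken from the deleted app.py above.
TO_ENGLISH_MODEL = "Helsinki-NLP/opus-mt-tl-en"
DIALECT_MODELS = {
    "Cebuano": "Helsinki-NLP/opus-mt-en-ceb",
    "Ilocano": "Helsinki-NLP/opus-mt-en-ilo",
    "Hiligaynon": "Helsinki-NLP/opus-mt-en-hil",
}

def translate(tagalog_text, language, load_pipeline):
    """Two-hop translation: Tagalog -> English -> chosen dialect.

    load_pipeline(model_name) must return a callable mapping text -> text
    (e.g. a wrapper around a transformers translation pipeline).
    """
    if language not in DIALECT_MODELS:
        raise ValueError(f"Unsupported dialect: {language}")
    to_english = load_pipeline(TO_ENGLISH_MODEL)
    to_dialect = load_pipeline(DIALECT_MODELS[language])
    return to_dialect(to_english(tagalog_text))
```

Pivoting through English is what `gr.Series(txt_inp, ...)` accomplished in the original app: no direct Tagalog-to-dialect model is assumed to exist, only the tl-en and en-xx pairs.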
spaces/CVPR/LIVE/thrust/thrust/binary_search.h DELETED
@@ -1,1902 +0,0 @@
- /*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
- /*! \file binary_search.h
- * \brief Search for values in sorted ranges.
- */
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/detail/execution_policy.h>
- #include <thrust/pair.h>
-
- namespace thrust
- {
-
-
- /*! \addtogroup algorithms
- */
-
-
- /*! \addtogroup searching
- * \ingroup algorithms
- * \{
- */
-
-
- /*! \addtogroup binary_search Binary Search
- * \ingroup searching
- * \{
- */
-
-
- //////////////////////
- // Scalar Functions //
- //////////////////////
-
-
- /*! \p lower_bound is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the first position where value could be
- * inserted without violating the ordering. This version of
- * \p lower_bound uses <tt>operator<</tt> for comparison and returns
- * the furthermost iterator \c i in <tt>[first, last)</tt> such that,
- * for every iterator \c j in <tt>[first, i)</tt>, <tt>*j < value</tt>.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return The furthermost iterator \c i, such that <tt>*i < value</tt>.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam LessThanComparable is a model of <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for values in a ordered range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 0); // returns input.begin()
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 1); // returns input.begin() + 1
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 2); // returns input.begin() + 1
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 3); // returns input.begin() + 2
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 8); // returns input.begin() + 4
- * thrust::lower_bound(thrust::device, input.begin(), input.end(), 9); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template<typename DerivedPolicy, typename ForwardIterator, typename LessThanComparable>
- __host__ __device__
- ForwardIterator lower_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                             ForwardIterator first,
-                             ForwardIterator last,
-                             const LessThanComparable &value);
-
-
- /*! \p lower_bound is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the first position where value could be
- * inserted without violating the ordering. This version of
- * \p lower_bound uses <tt>operator<</tt> for comparison and returns
- * the furthermost iterator \c i in <tt>[first, last)</tt> such that,
- * for every iterator \c j in <tt>[first, i)</tt>, <tt>*j < value</tt>.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return The furthermost iterator \c i, such that <tt>*i < value</tt>.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam LessThanComparable is a model of <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for values in a ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::lower_bound(input.begin(), input.end(), 0); // returns input.begin()
- * thrust::lower_bound(input.begin(), input.end(), 1); // returns input.begin() + 1
- * thrust::lower_bound(input.begin(), input.end(), 2); // returns input.begin() + 1
- * thrust::lower_bound(input.begin(), input.end(), 3); // returns input.begin() + 2
- * thrust::lower_bound(input.begin(), input.end(), 8); // returns input.begin() + 4
- * thrust::lower_bound(input.begin(), input.end(), 9); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template <class ForwardIterator, class LessThanComparable>
- ForwardIterator lower_bound(ForwardIterator first,
-                             ForwardIterator last,
-                             const LessThanComparable& value);
-
-
- /*! \p lower_bound is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the first position where value could be
- * inserted without violating the ordering. This version of
- * \p lower_bound uses function object \c comp for comparison
- * and returns the furthermost iterator \c i in <tt>[first, last)</tt>
- * such that, for every iterator \c j in <tt>[first, i)</tt>,
- * <tt>comp(*j, value)</tt> is \c true.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return The furthermost iterator \c i, such that <tt>comp(*i, value)</tt> is \c true.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for values in a ordered range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::lower_bound(input.begin(), input.end(), 0, thrust::less<int>()); // returns input.begin()
- * thrust::lower_bound(input.begin(), input.end(), 1, thrust::less<int>()); // returns input.begin() + 1
- * thrust::lower_bound(input.begin(), input.end(), 2, thrust::less<int>()); // returns input.begin() + 1
- * thrust::lower_bound(input.begin(), input.end(), 3, thrust::less<int>()); // returns input.begin() + 2
- * thrust::lower_bound(input.begin(), input.end(), 8, thrust::less<int>()); // returns input.begin() + 4
- * thrust::lower_bound(input.begin(), input.end(), 9, thrust::less<int>()); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template<typename DerivedPolicy, typename ForwardIterator, typename T, typename StrictWeakOrdering>
- __host__ __device__
- ForwardIterator lower_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                             ForwardIterator first,
-                             ForwardIterator last,
-                             const T &value,
-                             StrictWeakOrdering comp);
-
-
- /*! \p lower_bound is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the first position where value could be
- * inserted without violating the ordering. This version of
- * \p lower_bound uses function object \c comp for comparison
- * and returns the furthermost iterator \c i in <tt>[first, last)</tt>
- * such that, for every iterator \c j in <tt>[first, i)</tt>,
- * <tt>comp(*j, value)</tt> is \c true.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return The furthermost iterator \c i, such that <tt>comp(*i, value)</tt> is \c true.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for values in a ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::lower_bound(input.begin(), input.end(), 0, thrust::less<int>()); // returns input.begin()
- * thrust::lower_bound(input.begin(), input.end(), 1, thrust::less<int>()); // returns input.begin() + 1
- * thrust::lower_bound(input.begin(), input.end(), 2, thrust::less<int>()); // returns input.begin() + 1
- * thrust::lower_bound(input.begin(), input.end(), 3, thrust::less<int>()); // returns input.begin() + 2
- * thrust::lower_bound(input.begin(), input.end(), 8, thrust::less<int>()); // returns input.begin() + 4
- * thrust::lower_bound(input.begin(), input.end(), 9, thrust::less<int>()); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template <class ForwardIterator, class T, class StrictWeakOrdering>
- ForwardIterator lower_bound(ForwardIterator first,
-                             ForwardIterator last,
-                             const T& value,
-                             StrictWeakOrdering comp);
-
-
- /*! \p upper_bound is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the last position where value could be
- * inserted without violating the ordering. This version of
- * \p upper_bound uses <tt>operator<</tt> for comparison and returns
- * the furthermost iterator \c i in <tt>[first, last)</tt> such that,
- * for every iterator \c j in <tt>[first, i)</tt>, <tt>value < *j</tt>
- * is \c false.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return The furthermost iterator \c i, such that <tt>value < *i</tt> is \c false.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam LessThanComparable is a model of <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for values in a ordered range using the \p thrust::device execution policy for parallelism:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 0); // returns input.begin() + 1
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 1); // returns input.begin() + 1
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 2); // returns input.begin() + 2
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 3); // returns input.begin() + 2
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 8); // returns input.end()
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 9); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template<typename DerivedPolicy, typename ForwardIterator, typename LessThanComparable>
- __host__ __device__
- ForwardIterator upper_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                             ForwardIterator first,
-                             ForwardIterator last,
-                             const LessThanComparable &value);
-
-
- /*! \p upper_bound is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the last position where value could be
- * inserted without violating the ordering. This version of
- * \p upper_bound uses <tt>operator<</tt> for comparison and returns
- * the furthermost iterator \c i in <tt>[first, last)</tt> such that,
- * for every iterator \c j in <tt>[first, i)</tt>, <tt>value < *j</tt>
- * is \c false.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return The furthermost iterator \c i, such that <tt>value < *i</tt> is \c false.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam LessThanComparable is a model of <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for values in a ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::upper_bound(input.begin(), input.end(), 0); // returns input.begin() + 1
- * thrust::upper_bound(input.begin(), input.end(), 1); // returns input.begin() + 1
- * thrust::upper_bound(input.begin(), input.end(), 2); // returns input.begin() + 2
- * thrust::upper_bound(input.begin(), input.end(), 3); // returns input.begin() + 2
- * thrust::upper_bound(input.begin(), input.end(), 8); // returns input.end()
- * thrust::upper_bound(input.begin(), input.end(), 9); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template <class ForwardIterator, class LessThanComparable>
- ForwardIterator upper_bound(ForwardIterator first,
-                             ForwardIterator last,
-                             const LessThanComparable& value);
-
-
- /*! \p upper_bound is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the last position where value could be
- * inserted without violating the ordering. This version of
- * \p upper_bound uses function object \c comp for comparison and returns
- * the furthermost iterator \c i in <tt>[first, last)</tt> such that,
- * for every iterator \c j in <tt>[first, i)</tt>, <tt>comp(value, *j)</tt>
- * is \c false.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return The furthermost iterator \c i, such that <tt>comp(value, *i)</tt> is \c false.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for values in a ordered range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 0, thrust::less<int>()); // returns input.begin() + 1
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 1, thrust::less<int>()); // returns input.begin() + 1
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 2, thrust::less<int>()); // returns input.begin() + 2
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 3, thrust::less<int>()); // returns input.begin() + 2
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 8, thrust::less<int>()); // returns input.end()
- * thrust::upper_bound(thrust::device, input.begin(), input.end(), 9, thrust::less<int>()); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template<typename DerivedPolicy, typename ForwardIterator, typename T, typename StrictWeakOrdering>
- __host__ __device__
- ForwardIterator upper_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                             ForwardIterator first,
-                             ForwardIterator last,
-                             const T &value,
-                             StrictWeakOrdering comp);
-
- /*! \p upper_bound is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the last position where value could be
- * inserted without violating the ordering. This version of
- * \p upper_bound uses function object \c comp for comparison and returns
- * the furthermost iterator \c i in <tt>[first, last)</tt> such that,
- * for every iterator \c j in <tt>[first, i)</tt>, <tt>comp(value, *j)</tt>
- * is \c false.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return The furthermost iterator \c i, such that <tt>comp(value, *i)</tt> is \c false.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for values in a ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::upper_bound(input.begin(), input.end(), 0, thrust::less<int>()); // returns input.begin() + 1
- * thrust::upper_bound(input.begin(), input.end(), 1, thrust::less<int>()); // returns input.begin() + 1
- * thrust::upper_bound(input.begin(), input.end(), 2, thrust::less<int>()); // returns input.begin() + 2
- * thrust::upper_bound(input.begin(), input.end(), 3, thrust::less<int>()); // returns input.begin() + 2
- * thrust::upper_bound(input.begin(), input.end(), 8, thrust::less<int>()); // returns input.end()
- * thrust::upper_bound(input.begin(), input.end(), 9, thrust::less<int>()); // returns input.end()
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template <class ForwardIterator, class T, class StrictWeakOrdering>
- ForwardIterator upper_bound(ForwardIterator first,
-                             ForwardIterator last,
-                             const T& value,
-                             StrictWeakOrdering comp);
-
-
502
- /*! \p binary_search is a version of binary search: it attempts to find
503
- * the element value in an ordered range <tt>[first, last)</tt>.
504
- * It returns \c true if an element that is equivalent to \c value
505
- * is present in <tt>[first, last)</tt> and \c false if no such element
506
- * exists. Specifically, this version returns \c true if and only if
507
- * there exists an iterator \c i in <tt>[first, last)</tt> such that
508
- * <tt>*i < value</tt> and <tt>value < *i</tt> are both \c false.
509
- *
510
- * The algorithm's execution is parallelized as determined by \p exec.
511
- *
512
- * \param exec The execution policy to use for parallelization.
513
- * \param first The beginning of the ordered sequence.
514
- * \param last The end of the ordered sequence.
515
- * \param value The value to be searched.
516
- * \return \c true if an equivalent element exists in <tt>[first, last)</tt>, otherwise \c false.
517
- *
518
- * \tparam DerivedPolicy The name of the derived execution policy.
519
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
520
- * \tparam LessThanComparable is a model of <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
521
- *
522
- * The following code snippet demonstrates how to use \p binary_search
523
- * to search for values in a ordered range using the \p thrust::device execution policy for parallelization:
524
- *
525
- * \code
526
- * #include <thrust/binary_search.h>
527
- * #include <thrust/device_vector.h>
528
- * #include <thrust/execution_policy.h>
529
- * ...
530
- * thrust::device_vector<int> input(5);
531
- *
532
- * input[0] = 0;
533
- * input[1] = 2;
534
- * input[2] = 5;
535
- * input[3] = 7;
536
- * input[4] = 8;
537
- *
538
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 0); // returns true
539
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 1); // returns false
540
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 2); // returns true
541
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 3); // returns false
542
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 8); // returns true
543
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 9); // returns false
544
- * \endcode
545
- *
546
- * \see http://www.sgi.com/tech/stl/binary_search.html
547
- * \see \p lower_bound
548
- * \see \p upper_bound
549
- * \see \p equal_range
550
- */
551
- template <typename DerivedPolicy, typename ForwardIterator, typename LessThanComparable>
552
- __host__ __device__
553
- bool binary_search(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
554
- ForwardIterator first,
555
- ForwardIterator last,
556
- const LessThanComparable& value);
557
-
558
-
559
- /*! \p binary_search is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>.
- * It returns \c true if an element that is equivalent to \c value
- * is present in <tt>[first, last)</tt> and \c false if no such element
- * exists. Specifically, this version returns \c true if and only if
- * there exists an iterator \c i in <tt>[first, last)</tt> such that
- * <tt>*i < value</tt> and <tt>value < *i</tt> are both \c false.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return \c true if an equivalent element exists in <tt>[first, last)</tt>, otherwise \c false.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam LessThanComparable is a model of <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- *
- * The following code snippet demonstrates how to use \p binary_search
- * to search for values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::binary_search(input.begin(), input.end(), 0); // returns true
- * thrust::binary_search(input.begin(), input.end(), 1); // returns false
- * thrust::binary_search(input.begin(), input.end(), 2); // returns true
- * thrust::binary_search(input.begin(), input.end(), 3); // returns false
- * thrust::binary_search(input.begin(), input.end(), 8); // returns true
- * thrust::binary_search(input.begin(), input.end(), 9); // returns false
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
- template <class ForwardIterator, class LessThanComparable>
- bool binary_search(ForwardIterator first,
- ForwardIterator last,
- const LessThanComparable& value);
-
-
- /*! \p binary_search is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>.
- * It returns \c true if an element that is equivalent to \c value
- * is present in <tt>[first, last)</tt> and \c false if no such element
- * exists. Specifically, this version returns \c true if and only if
- * there exists an iterator \c i in <tt>[first, last)</tt> such that
- * <tt>comp(*i, value)</tt> and <tt>comp(value, *i)</tt> are both \c false.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return \c true if an equivalent element exists in <tt>[first, last)</tt>, otherwise \c false.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * The following code snippet demonstrates how to use \p binary_search
- * to search for values in an ordered range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 0, thrust::less<int>()); // returns true
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 1, thrust::less<int>()); // returns false
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 2, thrust::less<int>()); // returns true
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 3, thrust::less<int>()); // returns false
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 8, thrust::less<int>()); // returns true
- * thrust::binary_search(thrust::device, input.begin(), input.end(), 9, thrust::less<int>()); // returns false
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
- template <typename DerivedPolicy, typename ForwardIterator, typename T, typename StrictWeakOrdering>
- __host__ __device__
- bool binary_search(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- const T& value,
- StrictWeakOrdering comp);
-
-
- /*! \p binary_search is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>.
- * It returns \c true if an element that is equivalent to \c value
- * is present in <tt>[first, last)</tt> and \c false if no such element
- * exists. Specifically, this version returns \c true if and only if
- * there exists an iterator \c i in <tt>[first, last)</tt> such that
- * <tt>comp(*i, value)</tt> and <tt>comp(value, *i)</tt> are both \c false.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return \c true if an equivalent element exists in <tt>[first, last)</tt>, otherwise \c false.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * The following code snippet demonstrates how to use \p binary_search
- * to search for values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::binary_search(input.begin(), input.end(), 0, thrust::less<int>()); // returns true
- * thrust::binary_search(input.begin(), input.end(), 1, thrust::less<int>()); // returns false
- * thrust::binary_search(input.begin(), input.end(), 2, thrust::less<int>()); // returns true
- * thrust::binary_search(input.begin(), input.end(), 3, thrust::less<int>()); // returns false
- * thrust::binary_search(input.begin(), input.end(), 8, thrust::less<int>()); // returns true
- * thrust::binary_search(input.begin(), input.end(), 9, thrust::less<int>()); // returns false
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
- template <class ForwardIterator, class T, class StrictWeakOrdering>
- bool binary_search(ForwardIterator first,
- ForwardIterator last,
- const T& value,
- StrictWeakOrdering comp);
-
-
- /*! \p equal_range is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>. The
- * value returned by \p equal_range is essentially a combination of
- * the values returned by \p lower_bound and \p upper_bound: it returns
- * a \p pair of iterators \c i and \c j such that \c i is the first
- * position where value could be inserted without violating the
- * ordering and \c j is the last position where value could be inserted
- * without violating the ordering. It follows that every element in the
- * range <tt>[i, j)</tt> is equivalent to value, and that
- * <tt>[i, j)</tt> is the largest subrange of <tt>[first, last)</tt> that
- * has this property.
- *
- * This version of \p equal_range returns a \p pair of iterators
- * <tt>[i, j)</tt>, where \c i is the furthermost iterator in
- * <tt>[first, last)</tt> such that, for every iterator \c k in
- * <tt>[first, i)</tt>, <tt>*k < value</tt>. \c j is the furthermost
- * iterator in <tt>[first, last)</tt> such that, for every iterator
- * \c k in <tt>[first, j)</tt>, <tt>value < *k</tt> is \c false.
- * For every iterator \c k in <tt>[i, j)</tt>, neither
- * <tt>value < *k</tt> nor <tt>*k < value</tt> is \c true.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return A \p pair of iterators <tt>[i, j)</tt> that define the range of equivalent elements.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam LessThanComparable is a model of <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- *
- * The following code snippet demonstrates how to use \p equal_range
- * to search for values in an ordered range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 0); // returns [input.begin(), input.begin() + 1)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 1); // returns [input.begin() + 1, input.begin() + 1)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 2); // returns [input.begin() + 1, input.begin() + 2)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 3); // returns [input.begin() + 2, input.begin() + 2)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 8); // returns [input.begin() + 4, input.end())
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 9); // returns [input.end(), input.end())
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/equal_range.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p binary_search
- */
- template <typename DerivedPolicy, typename ForwardIterator, typename LessThanComparable>
- __host__ __device__
- thrust::pair<ForwardIterator, ForwardIterator>
- equal_range(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- const LessThanComparable& value);
-
-
- /*! \p equal_range is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>. The
- * value returned by \p equal_range is essentially a combination of
- * the values returned by \p lower_bound and \p upper_bound: it returns
- * a \p pair of iterators \c i and \c j such that \c i is the first
- * position where value could be inserted without violating the
- * ordering and \c j is the last position where value could be inserted
- * without violating the ordering. It follows that every element in the
- * range <tt>[i, j)</tt> is equivalent to value, and that
- * <tt>[i, j)</tt> is the largest subrange of <tt>[first, last)</tt> that
- * has this property.
- *
- * This version of \p equal_range returns a \p pair of iterators
- * <tt>[i, j)</tt>, where \c i is the furthermost iterator in
- * <tt>[first, last)</tt> such that, for every iterator \c k in
- * <tt>[first, i)</tt>, <tt>*k < value</tt>. \c j is the furthermost
- * iterator in <tt>[first, last)</tt> such that, for every iterator
- * \c k in <tt>[first, j)</tt>, <tt>value < *k</tt> is \c false.
- * For every iterator \c k in <tt>[i, j)</tt>, neither
- * <tt>value < *k</tt> nor <tt>*k < value</tt> is \c true.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \return A \p pair of iterators <tt>[i, j)</tt> that define the range of equivalent elements.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam LessThanComparable is a model of <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- *
- * The following code snippet demonstrates how to use \p equal_range
- * to search for values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::equal_range(input.begin(), input.end(), 0); // returns [input.begin(), input.begin() + 1)
- * thrust::equal_range(input.begin(), input.end(), 1); // returns [input.begin() + 1, input.begin() + 1)
- * thrust::equal_range(input.begin(), input.end(), 2); // returns [input.begin() + 1, input.begin() + 2)
- * thrust::equal_range(input.begin(), input.end(), 3); // returns [input.begin() + 2, input.begin() + 2)
- * thrust::equal_range(input.begin(), input.end(), 8); // returns [input.begin() + 4, input.end())
- * thrust::equal_range(input.begin(), input.end(), 9); // returns [input.end(), input.end())
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/equal_range.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p binary_search
- */
- template <class ForwardIterator, class LessThanComparable>
- thrust::pair<ForwardIterator, ForwardIterator>
- equal_range(ForwardIterator first,
- ForwardIterator last,
- const LessThanComparable& value);
-
-
- /*! \p equal_range is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>. The
- * value returned by \p equal_range is essentially a combination of
- * the values returned by \p lower_bound and \p upper_bound: it returns
- * a \p pair of iterators \c i and \c j such that \c i is the first
- * position where value could be inserted without violating the
- * ordering and \c j is the last position where value could be inserted
- * without violating the ordering. It follows that every element in the
- * range <tt>[i, j)</tt> is equivalent to value, and that
- * <tt>[i, j)</tt> is the largest subrange of <tt>[first, last)</tt> that
- * has this property.
- *
- * This version of \p equal_range returns a \p pair of iterators
- * <tt>[i, j)</tt>. \c i is the furthermost iterator in
- * <tt>[first, last)</tt> such that, for every iterator \c k in
- * <tt>[first, i)</tt>, <tt>comp(*k, value)</tt> is \c true.
- * \c j is the furthermost iterator in <tt>[first, last)</tt> such
- * that, for every iterator \c k in <tt>[first, j)</tt>,
- * <tt>comp(value, *k)</tt> is \c false. For every iterator \c k
- * in <tt>[i, j)</tt>, neither <tt>comp(value, *k)</tt> nor
- * <tt>comp(*k, value)</tt> is \c true.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return A \p pair of iterators <tt>[i, j)</tt> that define the range of equivalent elements.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * The following code snippet demonstrates how to use \p equal_range
- * to search for values in an ordered range using the \p thrust::device execution policy for parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 0, thrust::less<int>()); // returns [input.begin(), input.begin() + 1)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 1, thrust::less<int>()); // returns [input.begin() + 1, input.begin() + 1)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 2, thrust::less<int>()); // returns [input.begin() + 1, input.begin() + 2)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 3, thrust::less<int>()); // returns [input.begin() + 2, input.begin() + 2)
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 8, thrust::less<int>()); // returns [input.begin() + 4, input.end())
- * thrust::equal_range(thrust::device, input.begin(), input.end(), 9, thrust::less<int>()); // returns [input.end(), input.end())
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/equal_range.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p binary_search
- */
- template <typename DerivedPolicy, typename ForwardIterator, typename T, typename StrictWeakOrdering>
- __host__ __device__
- thrust::pair<ForwardIterator, ForwardIterator>
- equal_range(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- const T& value,
- StrictWeakOrdering comp);
-
-
- /*! \p equal_range is a version of binary search: it attempts to find
- * the element value in an ordered range <tt>[first, last)</tt>. The
- * value returned by \p equal_range is essentially a combination of
- * the values returned by \p lower_bound and \p upper_bound: it returns
- * a \p pair of iterators \c i and \c j such that \c i is the first
- * position where value could be inserted without violating the
- * ordering and \c j is the last position where value could be inserted
- * without violating the ordering. It follows that every element in the
- * range <tt>[i, j)</tt> is equivalent to value, and that
- * <tt>[i, j)</tt> is the largest subrange of <tt>[first, last)</tt> that
- * has this property.
- *
- * This version of \p equal_range returns a \p pair of iterators
- * <tt>[i, j)</tt>. \c i is the furthermost iterator in
- * <tt>[first, last)</tt> such that, for every iterator \c k in
- * <tt>[first, i)</tt>, <tt>comp(*k, value)</tt> is \c true.
- * \c j is the furthermost iterator in <tt>[first, last)</tt> such
- * that, for every iterator \c k in <tt>[first, j)</tt>,
- * <tt>comp(value, *k)</tt> is \c false. For every iterator \c k
- * in <tt>[i, j)</tt>, neither <tt>comp(value, *k)</tt> nor
- * <tt>comp(*k, value)</tt> is \c true.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param value The value to be searched.
- * \param comp The comparison operator.
- * \return A \p pair of iterators <tt>[i, j)</tt> that define the range of equivalent elements.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam T is comparable to \p ForwardIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * The following code snippet demonstrates how to use \p equal_range
- * to search for values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::equal_range(input.begin(), input.end(), 0, thrust::less<int>()); // returns [input.begin(), input.begin() + 1)
- * thrust::equal_range(input.begin(), input.end(), 1, thrust::less<int>()); // returns [input.begin() + 1, input.begin() + 1)
- * thrust::equal_range(input.begin(), input.end(), 2, thrust::less<int>()); // returns [input.begin() + 1, input.begin() + 2)
- * thrust::equal_range(input.begin(), input.end(), 3, thrust::less<int>()); // returns [input.begin() + 2, input.begin() + 2)
- * thrust::equal_range(input.begin(), input.end(), 8, thrust::less<int>()); // returns [input.begin() + 4, input.end())
- * thrust::equal_range(input.begin(), input.end(), 9, thrust::less<int>()); // returns [input.end(), input.end())
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/equal_range.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p binary_search
- */
- template <class ForwardIterator, class T, class StrictWeakOrdering>
- thrust::pair<ForwardIterator, ForwardIterator>
- equal_range(ForwardIterator first,
- ForwardIterator last,
- const T& value,
- StrictWeakOrdering comp);
-
-
- /*! \addtogroup vectorized_binary_search Vectorized Searches
- * \ingroup binary_search
- * \{
- */
-
-
- //////////////////////
- // Vector Functions //
- //////////////////////
-
-
- /*! \p lower_bound is a vectorized version of binary search: for each
- * iterator \c v in <tt>[values_first, values_last)</tt> it attempts to
- * find the value <tt>*v</tt> in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the index of the first position where <tt>*v</tt>
- * could be inserted without violating the ordering.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>.
- * and \c InputIterator's \c value_type is <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>.
- * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- *
- * \pre The ranges <tt>[first,last)</tt> and <tt>[result, result + (last - first))</tt> shall not overlap.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for multiple values in an ordered range using the \p thrust::device execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::lower_bound(thrust::device,
- * input.begin(), input.end(),
- * values.begin(), values.end(),
- * output.begin());
- *
- * // output is now [0, 1, 1, 2, 4, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template <typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename OutputIterator>
- __host__ __device__
- OutputIterator lower_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator values_first,
- InputIterator values_last,
- OutputIterator result);
-
-
- /*! \p lower_bound is a vectorized version of binary search: for each
- * iterator \c v in <tt>[values_first, values_last)</tt> it attempts to
- * find the value <tt>*v</tt> in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the index of the first position where <tt>*v</tt>
- * could be inserted without violating the ordering.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>.
- * and \c InputIterator's \c value_type is <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>.
- * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- *
- * \pre The ranges <tt>[first,last)</tt> and <tt>[result, result + (last - first))</tt> shall not overlap.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for multiple values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::lower_bound(input.begin(), input.end(),
- * values.begin(), values.end(),
- * output.begin());
- *
- * // output is now [0, 1, 1, 2, 4, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template <class ForwardIterator, class InputIterator, class OutputIterator>
- OutputIterator lower_bound(ForwardIterator first,
- ForwardIterator last,
- InputIterator values_first,
- InputIterator values_last,
- OutputIterator result);
-
-
- /*! \p lower_bound is a vectorized version of binary search: for each
- * iterator \c v in <tt>[values_first, values_last)</tt> it attempts to
- * find the value <tt>*v</tt> in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the index of the first position where <tt>*v</tt>
- * could be inserted without violating the ordering. This version of
- * \p lower_bound uses function object \c comp for comparison.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- * \param comp The comparison operator.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>.
- * and \c InputIterator's \c value_type is comparable to \p ForwardIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>.
- * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * \pre The ranges <tt>[first,last)</tt> and <tt>[result, result + (last - first))</tt> shall not overlap.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for multiple values in an ordered range using the \p thrust::device execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::lower_bound(thrust::device,
- * input.begin(), input.end(),
- * values.begin(), values.end(),
- * output.begin(),
- * thrust::less<int>());
- *
- * // output is now [0, 1, 1, 2, 4, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template <typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename OutputIterator, typename StrictWeakOrdering>
- __host__ __device__
- OutputIterator lower_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator values_first,
- InputIterator values_last,
- OutputIterator result,
- StrictWeakOrdering comp);
-
-
- /*! \p lower_bound is a vectorized version of binary search: for each
- * iterator \c v in <tt>[values_first, values_last)</tt> it attempts to
- * find the value <tt>*v</tt> in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the index of the first position where the value
- * could be inserted without violating the ordering. This version of
- * \p lower_bound uses function object \c comp for comparison.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- * \param comp The comparison operator.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
- * and \c InputIterator's \c value_type is comparable to \p ForwardIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * \pre The ranges <tt>[first,last)</tt> and <tt>[result, result + (last - first))</tt> shall not overlap.
- *
- * The following code snippet demonstrates how to use \p lower_bound
- * to search for multiple values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::lower_bound(input.begin(), input.end(),
- * values.begin(), values.end(),
- * output.begin(),
- * thrust::less<int>());
- *
- * // output is now [0, 1, 1, 2, 4, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/lower_bound.html
- * \see \p upper_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template <class ForwardIterator, class InputIterator, class OutputIterator, class StrictWeakOrdering>
- OutputIterator lower_bound(ForwardIterator first,
- ForwardIterator last,
- InputIterator values_first,
- InputIterator values_last,
- OutputIterator result,
- StrictWeakOrdering comp);
-
-
- /*! \p upper_bound is a vectorized version of binary search: for each
- * iterator \c v in <tt>[values_first, values_last)</tt> it attempts to
- * find the value <tt>*v</tt> in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the index of the last position where the value
- * could be inserted without violating the ordering.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
- * and \c InputIterator's \c value_type is <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- *
- * \pre The ranges <tt>[first,last)</tt> and <tt>[result, result + (last - first))</tt> shall not overlap.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for multiple values in an ordered range using the \p thrust::device execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::upper_bound(thrust::device,
- * input.begin(), input.end(),
- * values.begin(), values.end(),
- * output.begin());
- *
- * // output is now [1, 1, 2, 2, 5, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template <typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename OutputIterator>
- __host__ __device__
- OutputIterator upper_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator values_first,
- InputIterator values_last,
- OutputIterator result);
-
-
- /*! \p upper_bound is a vectorized version of binary search: for each
- * iterator \c v in <tt>[values_first, values_last)</tt> it attempts to
- * find the value <tt>*v</tt> in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the index of the last position where the value
- * could be inserted without violating the ordering.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
- * and \c InputIterator's \c value_type is <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- *
- * \pre The ranges <tt>[first,last)</tt> and <tt>[result, result + (last - first))</tt> shall not overlap.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for multiple values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::upper_bound(input.begin(), input.end(),
- * values.begin(), values.end(),
- * output.begin());
- *
- * // output is now [1, 1, 2, 2, 5, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template <class ForwardIterator, class InputIterator, class OutputIterator>
- OutputIterator upper_bound(ForwardIterator first,
- ForwardIterator last,
- InputIterator values_first,
- InputIterator values_last,
- OutputIterator result);
-
-
- /*! \p upper_bound is a vectorized version of binary search: for each
- * iterator \c v in <tt>[values_first, values_last)</tt> it attempts to
- * find the value <tt>*v</tt> in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the index of the last position where the value
- * could be inserted without violating the ordering. This version of
- * \p upper_bound uses function object \c comp for comparison.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- * \param comp The comparison operator.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
- * and \c InputIterator's \c value_type is comparable to \p ForwardIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * \pre The ranges <tt>[first,last)</tt> and <tt>[result, result + (last - first))</tt> shall not overlap.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for multiple values in an ordered range using the \p thrust::device execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::upper_bound(thrust::device,
- * input.begin(), input.end(),
- * values.begin(), values.end(),
- * output.begin(),
- * thrust::less<int>());
- *
- * // output is now [1, 1, 2, 2, 5, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template <typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename OutputIterator, typename StrictWeakOrdering>
- __host__ __device__
- OutputIterator upper_bound(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator values_first,
- InputIterator values_last,
- OutputIterator result,
- StrictWeakOrdering comp);
-
-
- /*! \p upper_bound is a vectorized version of binary search: for each
- * iterator \c v in <tt>[values_first, values_last)</tt> it attempts to
- * find the value <tt>*v</tt> in an ordered range <tt>[first, last)</tt>.
- * Specifically, it returns the index of the last position where the value
- * could be inserted without violating the ordering. This version of
- * \p upper_bound uses function object \c comp for comparison.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- * \param comp The comparison operator.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
- * and \c InputIterator's \c value_type is comparable to \p ForwardIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- * and \c ForwardIterator's difference_type is convertible to \c OutputIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * \pre The ranges <tt>[first,last)</tt> and <tt>[result, result + (last - first))</tt> shall not overlap.
- *
- * The following code snippet demonstrates how to use \p upper_bound
- * to search for multiple values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<unsigned int> output(6);
- *
- * thrust::upper_bound(input.begin(), input.end(),
- * values.begin(), values.end(),
- * output.begin(),
- * thrust::less<int>());
- *
- * // output is now [1, 1, 2, 2, 5, 5]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/upper_bound.html
- * \see \p lower_bound
- * \see \p equal_range
- * \see \p binary_search
- */
- template <class ForwardIterator, class InputIterator, class OutputIterator, class StrictWeakOrdering>
- OutputIterator upper_bound(ForwardIterator first,
- ForwardIterator last,
- InputIterator values_first,
- InputIterator values_last,
- OutputIterator result,
- StrictWeakOrdering comp);
-
-
- /*! \p binary_search is a vectorized version of binary search: for each
- * iterator \c v in <tt>[values_first, values_last)</tt> it attempts to
- * find the value <tt>*v</tt> in an ordered range <tt>[first, last)</tt>.
- * It returns \c true if an element that is equivalent to <tt>*v</tt>
- * is present in <tt>[first, last)</tt> and \c false if no such element
- * exists.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
- * and \c InputIterator's \c value_type is <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- * and bool is convertible to \c OutputIterator's \c value_type.
- *
- * \pre The ranges <tt>[first,last)</tt> and <tt>[result, result + (last - first))</tt> shall not overlap.
- *
- * The following code snippet demonstrates how to use \p binary_search
- * to search for multiple values in an ordered range using the \p thrust::device execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<bool> output(6);
- *
- * thrust::binary_search(thrust::device,
- * input.begin(), input.end(),
- * values.begin(), values.end(),
- * output.begin());
- *
- * // output is now [true, false, true, false, true, false]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
- template <typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename OutputIterator>
- __host__ __device__
- OutputIterator binary_search(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator values_first,
- InputIterator values_last,
- OutputIterator result);
-
-
- /*! \p binary_search is a vectorized version of binary search: for each
- * iterator \c v in <tt>[values_first, values_last)</tt> it attempts to
- * find the value <tt>*v</tt> in an ordered range <tt>[first, last)</tt>.
- * It returns \c true if an element that is equivalent to <tt>*v</tt>
- * is present in <tt>[first, last)</tt> and \c false if no such element
- * exists.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
- * and \c InputIterator's \c value_type is <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- * and bool is convertible to \c OutputIterator's \c value_type.
- *
- * \pre The ranges <tt>[first,last)</tt> and <tt>[result, result + (last - first))</tt> shall not overlap.
- *
- * The following code snippet demonstrates how to use \p binary_search
- * to search for multiple values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<bool> output(6);
- *
- * thrust::binary_search(input.begin(), input.end(),
- * values.begin(), values.end(),
- * output.begin());
- *
- * // output is now [true, false, true, false, true, false]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
- template <class ForwardIterator, class InputIterator, class OutputIterator>
- OutputIterator binary_search(ForwardIterator first,
- ForwardIterator last,
- InputIterator values_first,
- InputIterator values_last,
- OutputIterator result);
-
-
- /*! \p binary_search is a vectorized version of binary search: for each
- * iterator \c v in <tt>[values_first, values_last)</tt> it attempts to
- * find the value <tt>*v</tt> in an ordered range <tt>[first, last)</tt>.
- * It returns \c true if an element that is equivalent to <tt>*v</tt>
- * is present in <tt>[first, last)</tt> and \c false if no such element
- * exists. This version of \p binary_search uses function object
- * \c comp for comparison.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- * \param comp The comparison operator.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
- * and \c InputIterator's \c value_type is <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- * and bool is convertible to \c OutputIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * \pre The ranges <tt>[first,last)</tt> and <tt>[result, result + (last - first))</tt> shall not overlap.
- *
- * The following code snippet demonstrates how to use \p binary_search
- * to search for multiple values in an ordered range using the \p thrust::device execution policy for
- * parallelization:
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<bool> output(6);
- *
- * thrust::binary_search(thrust::device,
- * input.begin(), input.end(),
- * values.begin(), values.end(),
- * output.begin(),
- * thrust::less<int>());
- *
- * // output is now [true, false, true, false, true, false]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
- template <typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename OutputIterator, typename StrictWeakOrdering>
- __host__ __device__
- OutputIterator binary_search(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator values_first,
- InputIterator values_last,
- OutputIterator result,
- StrictWeakOrdering comp);
-
-
- /*! \p binary_search is a vectorized version of binary search: for each
- * iterator \c v in <tt>[values_first, values_last)</tt> it attempts to
- * find the value <tt>*v</tt> in an ordered range <tt>[first, last)</tt>.
- * It returns \c true if an element that is equivalent to <tt>*v</tt>
- * is present in <tt>[first, last)</tt> and \c false if no such element
- * exists. This version of \p binary_search uses function object
- * \c comp for comparison.
- *
- * \param first The beginning of the ordered sequence.
- * \param last The end of the ordered sequence.
- * \param values_first The beginning of the search values sequence.
- * \param values_last The end of the search values sequence.
- * \param result The beginning of the output sequence.
- * \param comp The comparison operator.
- *
- * \tparam ForwardIterator is a model of <a href="http://www.sgi.com/tech/stl/ForwardIterator">Forward Iterator</a>.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
- * and \c InputIterator's \c value_type is <a href="http://www.sgi.com/tech/stl/LessThanComparable.html">LessThanComparable</a>.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- * and bool is convertible to \c OutputIterator's \c value_type.
- * \tparam StrictWeakOrdering is a model of <a href="http://www.sgi.com/tech/stl/StrictWeakOrdering.html">Strict Weak Ordering</a>.
- *
- * \pre The ranges <tt>[first,last)</tt> and <tt>[result, result + (last - first))</tt> shall not overlap.
- *
- * The following code snippet demonstrates how to use \p binary_search
- * to search for multiple values in an ordered range.
- *
- * \code
- * #include <thrust/binary_search.h>
- * #include <thrust/device_vector.h>
- * #include <thrust/functional.h>
- * ...
- * thrust::device_vector<int> input(5);
- *
- * input[0] = 0;
- * input[1] = 2;
- * input[2] = 5;
- * input[3] = 7;
- * input[4] = 8;
- *
- * thrust::device_vector<int> values(6);
- * values[0] = 0;
- * values[1] = 1;
- * values[2] = 2;
- * values[3] = 3;
- * values[4] = 8;
- * values[5] = 9;
- *
- * thrust::device_vector<bool> output(6);
- *
- * thrust::binary_search(input.begin(), input.end(),
- * values.begin(), values.end(),
- * output.begin(),
- * thrust::less<int>());
- *
- * // output is now [true, false, true, false, true, false]
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/binary_search.html
- * \see \p lower_bound
- * \see \p upper_bound
- * \see \p equal_range
- */
- template <class ForwardIterator, class InputIterator, class OutputIterator, class StrictWeakOrdering>
- OutputIterator binary_search(ForwardIterator first,
- ForwardIterator last,
- InputIterator values_first,
- InputIterator values_last,
- OutputIterator result,
- StrictWeakOrdering comp);
-
-
-
- /*! \} // end vectorized_binary_search
- */
-
-
- /*! \} // end binary_search
- */
-
-
- /*! \} // end searching
- */
-
-
- } // end namespace thrust
-
- #include <thrust/detail/binary_search.inl>
-
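The three vectorized searches deleted above share one contract: for each search value, `lower_bound` writes the first valid insertion index, `upper_bound` the last, and `binary_search` a membership flag. As a host-side illustration only (plain Python's standard `bisect` module, not Thrust), the documented example data reproduces the exact outputs shown in the snippets:

```python
from bisect import bisect_left, bisect_right

# Sorted haystack and search values from the doc examples above.
data = [0, 2, 5, 7, 8]
values = [0, 1, 2, 3, 8, 9]

# lower_bound: first insertion point that keeps `data` sorted.
lower = [bisect_left(data, v) for v in values]   # [0, 1, 1, 2, 4, 5]

# upper_bound: last (past-the-end) insertion point.
upper = [bisect_right(data, v) for v in values]  # [1, 1, 2, 2, 5, 5]

# binary_search: membership test against the sorted range.
found = [(i := bisect_left(data, v)) < len(data) and data[i] == v
         for v in values]                        # [True, False, True, False, True, False]

print(lower, upper, found)
```

The Thrust versions compute the same per-value results, but launch one parallel search per element of `values` instead of looping on the host.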
 
spaces/ClearLove443/Robby-chatbot/modules/utils.py DELETED
@@ -1,105 +0,0 @@
- import os
- import pandas as pd
- import streamlit as st
- import pdfplumber
-
- from modules.chatbot import Chatbot
- from modules.embedder import Embedder
-
- class Utilities:
-
-     @staticmethod
-     def load_api_key():
-         """
-         Loads the OpenAI API key from the .env file or
-         from the user's input and returns it
-         """
-         if not hasattr(st.session_state, "api_key"):
-             st.session_state.api_key = None
-         # you can define your API key in .env directly
-         if os.path.exists(".env") and os.environ.get("OPENAI_API_KEY") is not None:
-             user_api_key = os.environ["OPENAI_API_KEY"]
-             st.sidebar.success("API key loaded from .env", icon="🚀")
-         else:
-             if st.session_state.api_key is not None:
-                 user_api_key = st.session_state.api_key
-                 st.sidebar.success("API key loaded from previous input", icon="🚀")
-             else:
-                 user_api_key = st.sidebar.text_input(
-                     label="#### Your OpenAI API key 👇", placeholder="sk-...", type="password"
-                 )
-                 if user_api_key:
-                     st.session_state.api_key = user_api_key
-
-         return user_api_key
-
-     @staticmethod
-     def handle_upload(file_types):
-         """
-         Handles and displays the uploaded file
-         :param file_types: List of accepted file types, e.g., ["csv", "pdf", "txt"]
-         """
-         uploaded_file = st.sidebar.file_uploader("upload", type=file_types, label_visibility="collapsed")
-         if uploaded_file is not None:
-
-             def show_csv_file(uploaded_file):
-                 file_container = st.expander("Your CSV file:")
-                 uploaded_file.seek(0)
-                 shows = pd.read_csv(uploaded_file)
-                 file_container.write(shows)
-
-             def show_pdf_file(uploaded_file):
-                 file_container = st.expander("Your PDF file:")
-                 with pdfplumber.open(uploaded_file) as pdf:
-                     pdf_text = ""
-                     for page in pdf.pages:
-                         pdf_text += page.extract_text() + "\n\n"
-                 file_container.write(pdf_text)
-
-             def show_txt_file(uploaded_file):
-                 file_container = st.expander("Your TXT file:")
-                 uploaded_file.seek(0)
-                 content = uploaded_file.read().decode("utf-8")
-                 file_container.write(content)
-
-             def get_file_extension(uploaded_file):
-                 return os.path.splitext(uploaded_file)[1].lower()
-
-             file_extension = get_file_extension(uploaded_file.name)
-
-             # Show the contents of the file based on its extension
-             # if file_extension == ".csv":
-             #     show_csv_file(uploaded_file)
-             if file_extension == ".pdf":
-                 show_pdf_file(uploaded_file)
-             elif file_extension == ".txt":
-                 show_txt_file(uploaded_file)
-
-         else:
-             st.session_state["reset_chat"] = True
-
-         # print(uploaded_file)
-         return uploaded_file
-
-     @staticmethod
-     def setup_chatbot(uploaded_file, model, temperature):
-         """
-         Sets up the chatbot with the uploaded file, model, and temperature
-         """
-         embeds = Embedder()
-
-         with st.spinner("Processing..."):
-             uploaded_file.seek(0)
-             file = uploaded_file.read()
-             # Get the document embeddings for the uploaded file
-             vectors = embeds.getDocEmbeds(file, uploaded_file.name)
-
-             # Create a Chatbot instance with the specified model and temperature
-             chatbot = Chatbot(model, temperature, vectors)
-         st.session_state["ready"] = True
-
-         return chatbot

spaces/Cong723/gpt-academic-public/crazy_functions/总结word文档.py DELETED
@@ -1,127 +0,0 @@
- from toolbox import update_ui
- from toolbox import CatchException, report_execption, write_results_to_file
- from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
- fast_debug = False
-
-
- def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
-     import time, os
-     # pip install python-docx  — for the .docx format, cross-platform
-     # pip install pywin32      — for the .doc format, Windows only
-     for index, fp in enumerate(file_manifest):
-         if fp.split(".")[-1] == "docx":
-             from docx import Document
-             doc = Document(fp)
-             file_content = "\n".join([para.text for para in doc.paragraphs])
-         else:
-             import win32com.client
-             word = win32com.client.Dispatch("Word.Application")
-             word.visible = False
-             # open the file
-             print('fp', os.getcwd())
-             doc = word.Documents.Open(os.getcwd() + '/' + fp)
-             # file_content = doc.Content.Text
-             doc = word.ActiveDocument
-             file_content = doc.Range().Text
-             doc.Close()
-             word.Quit()
-
-         print(file_content)
-         # File names under private_upload often come out garbled after unzipping (rar and 7z are fine),
-         # so only the article content is analyzed and the file name is not passed in
-         from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
-         from request_llm.bridge_all import model_info
-         max_token = model_info[llm_kwargs['llm_model']]['max_token']
-         TOKEN_LIMIT_PER_FRAGMENT = max_token * 3 // 4
-         paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
-             txt=file_content,
-             get_token_fn=model_info[llm_kwargs['llm_model']]['token_cnt'],
-             limit=TOKEN_LIMIT_PER_FRAGMENT
-         )
-         this_paper_history = []
-         for i, paper_frag in enumerate(paper_fragments):
-             i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```'
-             i_say_show_user = f'请对下面的文章片段做概述: {os.path.abspath(fp)}的第{i+1}/{len(paper_fragments)}个片段。'
-             gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-                 inputs=i_say,
-                 inputs_show_user=i_say_show_user,
-                 llm_kwargs=llm_kwargs,
-                 chatbot=chatbot,
-                 history=[],
-                 sys_prompt="总结文章。"
-             )
-
-             chatbot[-1] = (i_say_show_user, gpt_say)
-             history.extend([i_say_show_user, gpt_say])
-             this_paper_history.extend([i_say_show_user, gpt_say])
-
-         # All fragments of this article have been summarized; if the article was split,
-         if len(paper_fragments) > 1:
-             i_say = f"根据以上的对话,总结文章{os.path.abspath(fp)}的主要内容。"
-             gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-                 inputs=i_say,
-                 inputs_show_user=i_say,
-                 llm_kwargs=llm_kwargs,
-                 chatbot=chatbot,
-                 history=this_paper_history,
-                 sys_prompt="总结文章。"
-             )
-
-             history.extend([i_say, gpt_say])
-             this_paper_history.extend([i_say, gpt_say])
-
-         res = write_results_to_file(history)
-         chatbot.append(("完成了吗?", res))
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-
-     res = write_results_to_file(history)
-     chatbot.append(("所有文件都总结完成了吗?", res))
-     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-
-
- @CatchException
- def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-     import glob, os
-
-     # Basic info: feature description and contributor
-     chatbot.append([
-         "函数插件功能?",
-         "批量总结Word文档。函数插件贡献者: JasonGuo1"])
-     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-
-     # Try to import the dependencies; if any are missing, suggest how to install them
-     try:
-         from docx import Document
-     except:
-         report_execption(chatbot, history,
-                          a=f"解析项目: {txt}",
-                          b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-
-     # Clear the history to avoid input overflow
-     history = []
-
-     # Check the input parameters; if none are given, exit immediately
-     if os.path.exists(txt):
-         project_folder = txt
-     else:
-         if txt == "": txt = '空空如也的输入栏'
-         report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-
-     # Collect the list of files to process
-     if txt.endswith('.docx') or txt.endswith('.doc'):
-         file_manifest = [txt]
-     else:
-         file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \
-                         [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)]
-
-     # If no files were found
-     if len(file_manifest) == 0:
-         report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-
-     # Start the actual task
-     yield from 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)

spaces/Curranj/FlowerDiffusion/app.py DELETED
@@ -1,72 +0,0 @@
- import io
- import os
- import warnings
-
- from PIL import Image
- from stability_sdk import client
- import stability_sdk.interfaces.gooseai.generation.generation_pb2 as generation
-
- import gradio as gr
-
- stability_api = client.StabilityInference(
-     key=os.environ["Secret"],
-     verbose=True,
- )
-
-
- def infer(prompt):
-     # the object returned is a python generator
-     answers = stability_api.generate(
-         prompt=f"Beautiful Portrait of a {prompt} made out of flowers 💐 🌺 🌸 , artstation winner by Victo Ngai, Kilian Eng, vibrant colors, winning-award masterpiece, aesthetic octane render, 8K HD",
-         height=640
-     )
-
-     # iterating over the generator produces the api response
-     for resp in answers:
-         for artifact in resp.artifacts:
-             if artifact.finish_reason == generation.FILTER:
-                 warnings.warn(
-                     "Your request activated the API's safety filters and could not be processed. "
-                     "Please modify the prompt and try again.")
-             if artifact.type == generation.ARTIFACT_IMAGE:
-                 img = Image.open(io.BytesIO(artifact.binary))
-                 return img
-
-
- block = gr.Blocks(css=".container { max-width: 600px; margin: auto; }")
-
- num_samples = 1
-
-
- with block as demo:
-     gr.Markdown("<h1><center>Flower Diffusion</center></h1>")
-     gr.Markdown(
-         "Get a pretty flowery image from any prompt - keep it simple!"
-     )
-     with gr.Group():
-         with gr.Box():
-             with gr.Row().style(mobile_collapse=False, equal_height=True):
-                 text = gr.Textbox(
-                     value="Kitty cat",
-                     label="Enter your prompt", show_label=False, max_lines=1
-                 ).style(
-                     border=(True, False, True, True),
-                     rounded=(True, False, False, True),
-                     container=False,
-                 )
-                 btn = gr.Button("Run").style(
-                     margin=False,
-                     rounded=(False, True, True, False),
-                 )
-
-     gallery = gr.Image()
-     text.submit(infer, inputs=[text], outputs=gallery)
-     btn.click(infer, inputs=[text], outputs=gallery)
-
-
- demo.launch(debug=True, enable_queue=True)

spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/BufrStubImagePlugin.py DELETED
@@ -1,73 +0,0 @@
- #
- # The Python Imaging Library
- # $Id$
- #
- # BUFR stub adapter
- #
- # Copyright (c) 1996-2003 by Fredrik Lundh
- #
- # See the README file for information on usage and redistribution.
- #
-
- from . import Image, ImageFile
-
- _handler = None
-
-
- def register_handler(handler):
-     """
-     Install application-specific BUFR image handler.
-
-     :param handler: Handler object.
-     """
-     global _handler
-     _handler = handler
-
-
- # --------------------------------------------------------------------
- # Image adapter
-
-
- def _accept(prefix):
-     return prefix[:4] == b"BUFR" or prefix[:4] == b"ZCZC"
-
-
- class BufrStubImageFile(ImageFile.StubImageFile):
-     format = "BUFR"
-     format_description = "BUFR"
-
-     def _open(self):
-         offset = self.fp.tell()
-
-         if not _accept(self.fp.read(4)):
-             msg = "Not a BUFR file"
-             raise SyntaxError(msg)
-
-         self.fp.seek(offset)
-
-         # make something up
-         self.mode = "F"
-         self._size = 1, 1
-
-         loader = self._load()
-         if loader:
-             loader.open(self)
-
-     def _load(self):
-         return _handler
-
-
- def _save(im, fp, filename):
-     if _handler is None or not hasattr(_handler, "save"):
-         msg = "BUFR save handler not installed"
-         raise OSError(msg)
-     _handler.save(im, fp, filename)
-
-
- # --------------------------------------------------------------------
- # Registry
-
- Image.register_open(BufrStubImageFile.format, BufrStubImageFile, _accept)
- Image.register_save(BufrStubImageFile.format, _save)
-
- Image.register_extension(BufrStubImageFile.format, ".bufr")

spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/subset/__main__.py DELETED
@@ -1,6 +0,0 @@
- import sys
- from fontTools.subset import main
-
-
- if __name__ == "__main__":
-     sys.exit(main())

spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Model3D-98fc2b2c.css DELETED
@@ -1 +0,0 @@
- .gallery.svelte-1ayixqk{padding:var(--size-1) var(--size-2)}

spaces/DaleChen/AutoGPT/autogpt/commands/web_playwright.py DELETED
@@ -1,80 +0,0 @@
- """Web scraping commands using Playwright"""
- from __future__ import annotations
-
- try:
-     from playwright.sync_api import sync_playwright
- except ImportError:
-     print(
-         "Playwright not installed. Please install it with 'pip install playwright' to use."
-     )
- from bs4 import BeautifulSoup
-
- from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
-
-
- def scrape_text(url: str) -> str:
-     """Scrape text from a webpage
-
-     Args:
-         url (str): The URL to scrape text from
-
-     Returns:
-         str: The scraped text
-     """
-     with sync_playwright() as p:
-         browser = p.chromium.launch()
-         page = browser.new_page()
-
-         try:
-             page.goto(url)
-             html_content = page.content()
-             soup = BeautifulSoup(html_content, "html.parser")
-
-             for script in soup(["script", "style"]):
-                 script.extract()
-
-             text = soup.get_text()
-             lines = (line.strip() for line in text.splitlines())
-             chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
-             text = "\n".join(chunk for chunk in chunks if chunk)
-
-         except Exception as e:
-             text = f"Error: {str(e)}"
-
-         finally:
-             browser.close()
-
-     return text
-
-
- def scrape_links(url: str) -> str | list[str]:
-     """Scrape links from a webpage
-
-     Args:
-         url (str): The URL to scrape links from
-
-     Returns:
-         Union[str, List[str]]: The scraped links
-     """
-     with sync_playwright() as p:
-         browser = p.chromium.launch()
-         page = browser.new_page()
-
-         try:
-             page.goto(url)
-             html_content = page.content()
-             soup = BeautifulSoup(html_content, "html.parser")
-
-             for script in soup(["script", "style"]):
-                 script.extract()
-
-             hyperlinks = extract_hyperlinks(soup, url)
-             formatted_links = format_hyperlinks(hyperlinks)
-
-         except Exception as e:
-             formatted_links = f"Error: {str(e)}"
-
-         finally:
-             browser.close()
-
-     return formatted_links

spaces/DamianMH/Mlove/Dockerfile DELETED
@@ -1,21 +0,0 @@
- FROM node:18-bullseye-slim
-
- RUN apt-get update && \
-     apt-get install -y git
-
- RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
- WORKDIR /app
-
- RUN npm install
-
- COPY Dockerfile greeting.md* .env* ./
-
- RUN npm run build
-
- EXPOSE 7860
-
- ENV NODE_ENV=production
-
- CMD [ "npm", "start" ]

spaces/Datasculptor/DescriptionGPT/detic/modeling/meta_arch/d2_deformable_detr.py DELETED
@@ -1,308 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- import torch
- import torch.nn.functional as F
- from torch import nn
- import math
-
- from detectron2.modeling import META_ARCH_REGISTRY, build_backbone
- from detectron2.structures import Boxes, Instances
- from ..utils import load_class_freq, get_fed_loss_inds
-
- from models.backbone import Joiner
- from models.deformable_detr import DeformableDETR, SetCriterion, MLP
- from models.deformable_detr import _get_clones
- from models.matcher import HungarianMatcher
- from models.position_encoding import PositionEmbeddingSine
- from models.deformable_transformer import DeformableTransformer
- from models.segmentation import sigmoid_focal_loss
- from util.box_ops import box_cxcywh_to_xyxy, box_xyxy_to_cxcywh
- from util.misc import NestedTensor, accuracy
-
-
- __all__ = ["DeformableDetr"]
-
- class CustomSetCriterion(SetCriterion):
-     def __init__(self, num_classes, matcher, weight_dict, losses, \
-                  focal_alpha=0.25, use_fed_loss=False):
-         super().__init__(num_classes, matcher, weight_dict, losses, focal_alpha)
-         self.use_fed_loss = use_fed_loss
-         if self.use_fed_loss:
-             self.register_buffer(
-                 'fed_loss_weight', load_class_freq(freq_weight=0.5))
-
-     def loss_labels(self, outputs, targets, indices, num_boxes, log=True):
-         """Classification loss (NLL)
-         targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes]
-         """
-         assert 'pred_logits' in outputs
-         src_logits = outputs['pred_logits']
-
-         idx = self._get_src_permutation_idx(indices)
-         target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)])
-         target_classes = torch.full(src_logits.shape[:2], self.num_classes,
-                                     dtype=torch.int64, device=src_logits.device)
-         target_classes[idx] = target_classes_o
-
-         target_classes_onehot = torch.zeros(
-             [src_logits.shape[0], src_logits.shape[1], src_logits.shape[2] + 1],
-             dtype=src_logits.dtype, layout=src_logits.layout,
-             device=src_logits.device)
-         target_classes_onehot.scatter_(2, target_classes.unsqueeze(-1), 1)
-
-         target_classes_onehot = target_classes_onehot[:, :, :-1]  # B x N x C
-         if self.use_fed_loss:
-             inds = get_fed_loss_inds(
-                 gt_classes=target_classes_o,
-                 num_sample_cats=50,
-                 weight=self.fed_loss_weight,
-                 C=target_classes_onehot.shape[2])
-             loss_ce = sigmoid_focal_loss(
-                 src_logits[:, :, inds],
-                 target_classes_onehot[:, :, inds],
-                 num_boxes,
-                 alpha=self.focal_alpha,
-                 gamma=2) * src_logits.shape[1]
-         else:
-             loss_ce = sigmoid_focal_loss(
-                 src_logits, target_classes_onehot, num_boxes,
-                 alpha=self.focal_alpha,
-                 gamma=2) * src_logits.shape[1]
-         losses = {'loss_ce': loss_ce}
-
-         if log:
-             # TODO this should probably be a separate loss, not hacked in this one here
-             losses['class_error'] = 100 - accuracy(src_logits[idx], target_classes_o)[0]
-         return losses
-
-
- class MaskedBackbone(nn.Module):
-     """ This is a thin wrapper around D2's backbone to provide padding masking"""
-
-     def __init__(self, cfg):
-         super().__init__()
-         self.backbone = build_backbone(cfg)
-         backbone_shape = self.backbone.output_shape()
-         self.feature_strides = [backbone_shape[f].stride for f in backbone_shape.keys()]
-         self.strides = [backbone_shape[f].stride for f in backbone_shape.keys()]
-         self.num_channels = [backbone_shape[x].channels for x in backbone_shape.keys()]
-
-     def forward(self, tensor_list: NestedTensor):
-         xs = self.backbone(tensor_list.tensors)
-         out = {}
-         for name, x in xs.items():
-             m = tensor_list.mask
-             assert m is not None
-             mask = F.interpolate(m[None].float(), size=x.shape[-2:]).to(torch.bool)[0]
-             out[name] = NestedTensor(x, mask)
-         return out
-
- @META_ARCH_REGISTRY.register()
- class DeformableDetr(nn.Module):
-     """
-     Implement Deformable Detr
-     """
-
-     def __init__(self, cfg):
-         super().__init__()
-         self.with_image_labels = cfg.WITH_IMAGE_LABELS
-         self.weak_weight = cfg.MODEL.DETR.WEAK_WEIGHT
-
-         self.device = torch.device(cfg.MODEL.DEVICE)
-         self.test_topk = cfg.TEST.DETECTIONS_PER_IMAGE
-         self.num_classes = cfg.MODEL.DETR.NUM_CLASSES
-         self.mask_on = cfg.MODEL.MASK_ON
-         hidden_dim = cfg.MODEL.DETR.HIDDEN_DIM
-         num_queries = cfg.MODEL.DETR.NUM_OBJECT_QUERIES
-
-         # Transformer parameters:
-         nheads = cfg.MODEL.DETR.NHEADS
-         dropout = cfg.MODEL.DETR.DROPOUT
-         dim_feedforward = cfg.MODEL.DETR.DIM_FEEDFORWARD
-         enc_layers = cfg.MODEL.DETR.ENC_LAYERS
-         dec_layers = cfg.MODEL.DETR.DEC_LAYERS
-         num_feature_levels = cfg.MODEL.DETR.NUM_FEATURE_LEVELS
-         two_stage = cfg.MODEL.DETR.TWO_STAGE
-         with_box_refine = cfg.MODEL.DETR.WITH_BOX_REFINE
-
-         # Loss parameters:
-         giou_weight = cfg.MODEL.DETR.GIOU_WEIGHT
-         l1_weight = cfg.MODEL.DETR.L1_WEIGHT
-         deep_supervision = cfg.MODEL.DETR.DEEP_SUPERVISION
-         cls_weight = cfg.MODEL.DETR.CLS_WEIGHT
-         focal_alpha = cfg.MODEL.DETR.FOCAL_ALPHA
-
-         N_steps = hidden_dim // 2
-         d2_backbone = MaskedBackbone(cfg)
-         backbone = Joiner(d2_backbone, PositionEmbeddingSine(N_steps, normalize=True))
-
-         transformer = DeformableTransformer(
-             d_model=hidden_dim,
-             nhead=nheads,
-             num_encoder_layers=enc_layers,
-             num_decoder_layers=dec_layers,
-             dim_feedforward=dim_feedforward,
-             dropout=dropout,
-             activation="relu",
-             return_intermediate_dec=True,
-             num_feature_levels=num_feature_levels,
-             dec_n_points=4,
-             enc_n_points=4,
-             two_stage=two_stage,
-             two_stage_num_proposals=num_queries)
-
-         self.detr = DeformableDETR(
-             backbone, transformer, num_classes=self.num_classes,
-             num_queries=num_queries,
-             num_feature_levels=num_feature_levels,
-             aux_loss=deep_supervision,
-             with_box_refine=with_box_refine,
-             two_stage=two_stage,
-         )
-
-         if self.mask_on:
-             assert 0, 'Mask is not supported yet :('
-
-         matcher = HungarianMatcher(
-             cost_class=cls_weight, cost_bbox=l1_weight, cost_giou=giou_weight)
-         weight_dict = {"loss_ce": cls_weight, "loss_bbox": l1_weight}
-         weight_dict["loss_giou"] = giou_weight
-         if deep_supervision:
-             aux_weight_dict = {}
-             for i in range(dec_layers - 1):
-                 aux_weight_dict.update({k + f"_{i}": v for k, v in weight_dict.items()})
-             weight_dict.update(aux_weight_dict)
-         print('weight_dict', weight_dict)
-         losses = ["labels", "boxes", "cardinality"]
-         if self.mask_on:
-             losses += ["masks"]
-         self.criterion = CustomSetCriterion(
-             self.num_classes, matcher=matcher, weight_dict=weight_dict,
-             focal_alpha=focal_alpha,
-             losses=losses,
-             use_fed_loss=cfg.MODEL.DETR.USE_FED_LOSS
-         )
-         pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to(self.device).view(3, 1, 1)
-         pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).to(self.device).view(3, 1, 1)
-         self.normalizer = lambda x: (x - pixel_mean) / pixel_std
-
-
-     def forward(self, batched_inputs):
-         """
-         Args:
-         Returns:
-             dict[str: Tensor]:
-                 mapping from a named loss to a tensor storing the loss. Used during training only.
-         """
-         images = self.preprocess_image(batched_inputs)
-         output = self.detr(images)
-         if self.training:
-             gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
-             targets = self.prepare_targets(gt_instances)
-             loss_dict = self.criterion(output, targets)
-             weight_dict = self.criterion.weight_dict
-             for k in loss_dict.keys():
-                 if k in weight_dict:
-                     loss_dict[k] *= weight_dict[k]
-             if self.with_image_labels:
-                 if batched_inputs[0]['ann_type'] in ['image', 'captiontag']:
-                     loss_dict['loss_image'] = self.weak_weight * self._weak_loss(
-                         output, batched_inputs)
-                 else:
-                     loss_dict['loss_image'] = images[0].new_zeros(
-                         [1], dtype=torch.float32)[0]
-             # import pdb; pdb.set_trace()
-             return loss_dict
-         else:
-             image_sizes = output["pred_boxes"].new_tensor(
-                 [(t["height"], t["width"]) for t in batched_inputs])
-             results = self.post_process(output, image_sizes)
-             return results
-
-
-     def prepare_targets(self, targets):
-         new_targets = []
-         for targets_per_image in targets:
-             h, w = targets_per_image.image_size
-             image_size_xyxy = torch.as_tensor([w, h, w, h], dtype=torch.float, device=self.device)
-             gt_classes = targets_per_image.gt_classes
-             gt_boxes = targets_per_image.gt_boxes.tensor / image_size_xyxy
-             gt_boxes = box_xyxy_to_cxcywh(gt_boxes)
-             new_targets.append({"labels": gt_classes, "boxes": gt_boxes})
-             if self.mask_on and hasattr(targets_per_image, 'gt_masks'):
-                 assert 0, 'Mask is not supported yet :('
-                 gt_masks = targets_per_image.gt_masks
-                 gt_masks = convert_coco_poly_to_mask(gt_masks.polygons, h, w)
-                 new_targets[-1].update({'masks': gt_masks})
-         return new_targets
-
-
-     def post_process(self, outputs, target_sizes):
-         """
-         """
-         out_logits, out_bbox = outputs['pred_logits'], outputs['pred_boxes']
-         assert len(out_logits) == len(target_sizes)
-         assert target_sizes.shape[1] == 2
-
-         prob = out_logits.sigmoid()
-         topk_values, topk_indexes = torch.topk(
-             prob.view(out_logits.shape[0], -1), self.test_topk, dim=1)
-         scores = topk_values
-         topk_boxes = topk_indexes // out_logits.shape[2]
-         labels = topk_indexes % out_logits.shape[2]
-         boxes = box_cxcywh_to_xyxy(out_bbox)
-         boxes = torch.gather(boxes, 1, topk_boxes.unsqueeze(-1).repeat(1, 1, 4))
-
-         # and from relative [0, 1] to absolute [0, height] coordinates
-         img_h, img_w = target_sizes.unbind(1)
-         scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1)
-         boxes = boxes * scale_fct[:, None, :]
-
-         results = []
-         for s, l, b, size in zip(scores, labels, boxes, target_sizes):
-             r = Instances((size[0], size[1]))
-             r.pred_boxes = Boxes(b)
-             r.scores = s
-             r.pred_classes = l
-             results.append({'instances': r})
-         return results
-
-
-     def preprocess_image(self, batched_inputs):
-         """
-         Normalize, pad and batch the input images.
-         """
-         images = [self.normalizer(x["image"].to(self.device)) for x in batched_inputs]
-         return images
-
-
-     def _weak_loss(self, outputs, batched_inputs):
-         loss = 0
-         for b, x in enumerate(batched_inputs):
-             labels = x['pos_category_ids']
-             pred_logits = [outputs['pred_logits'][b]]
-             pred_boxes = [outputs['pred_boxes'][b]]
-             for xx in outputs['aux_outputs']:
-                 pred_logits.append(xx['pred_logits'][b])
-                 pred_boxes.append(xx['pred_boxes'][b])
-             pred_logits = torch.stack(pred_logits, dim=0)  # L x N x C
-             pred_boxes = torch.stack(pred_boxes, dim=0)  # L x N x 4
-             for label in labels:
-                 loss += self._max_size_loss(
-                     pred_logits, pred_boxes, label) / len(labels)
-         loss = loss / len(batched_inputs)
-         return loss
-
-
-     def _max_size_loss(self, logits, boxes, label):
-         '''
-         Inputs:
-             logits: L x N x C
-             boxes: L x N x 4
-         '''
-         target = logits.new_zeros((logits.shape[0], logits.shape[2]))
-         target[:, label] = 1.
-         sizes = boxes[..., 2] * boxes[..., 3]  # L x N
-         ind = sizes.argmax(dim=1)  # L
-         loss = F.binary_cross_entropy_with_logits(
-             logits[range(len(ind)), ind], target, reduction='sum')
-         return loss

spaces/Datasculptor/StyleGAN-NADA/e4e/options/__init__.py DELETED
File without changes
spaces/Dinoking/Guccio-AI-Designer/decomposition.py DELETED
@@ -1,402 +0,0 @@
- # Copyright 2020 Erik Härkönen. All rights reserved.
- # This file is licensed to you under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License. You may obtain a copy
- # of the License at http://www.apache.org/licenses/LICENSE-2.0
-
- # Unless required by applicable law or agreed to in writing, software distributed under
- # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS
- # OF ANY KIND, either express or implied. See the License for the specific language
- # governing permissions and limitations under the License.
-
- # Patch for broken CTRL+C handler
- # https://github.com/ContinuumIO/anaconda-issues/issues/905
- import os
- os.environ['FOR_DISABLE_CONSOLE_CTRL_HANDLER'] = '1'
-
- import numpy as np
- import os
- from pathlib import Path
- import re
- import sys
- import datetime
- import argparse
- import torch
- import json
- from types import SimpleNamespace
- import scipy
- from scipy.cluster.vq import kmeans
- from tqdm import trange
- from netdissect.nethook import InstrumentedModel
- from config import Config
- from estimators import get_estimator
- from models import get_instrumented_model
-
- SEED_SAMPLING = 1
- SEED_RANDOM_DIRS = 2
- SEED_LINREG = 3
- SEED_VISUALIZATION = 5
-
- B = 20
- n_clusters = 500
-
- def get_random_dirs(components, dimensions):
-     gen = np.random.RandomState(seed=SEED_RANDOM_DIRS)
-     dirs = gen.normal(size=(components, dimensions))
-     dirs /= np.sqrt(np.sum(dirs**2, axis=1, keepdims=True))
-     return dirs.astype(np.float32)
-
- # Compute maximum batch size for given VRAM and network
- def get_max_batch_size(inst, device, layer_name=None):
-     inst.remove_edits()
-
-     # Reset statistics
-     torch.cuda.reset_max_memory_cached(device)
-     torch.cuda.reset_max_memory_allocated(device)
-     total_mem = torch.cuda.get_device_properties(device).total_memory
-
-     B_max = 20
-
-     # Measure actual usage
-     for i in range(2, B_max, 2):
-         z = inst.model.sample_latent(n_samples=i)
-         if layer_name:
-             inst.model.partial_forward(z, layer_name)
-         else:
-             inst.model.forward(z)
-
-         maxmem = torch.cuda.max_memory_allocated(device)
-         del z
-
-         if maxmem > 0.5*total_mem:
-             print('Batch size {:d}: memory usage {:.0f}MB'.format(i, maxmem / 1e6))
-             return i
-
-     return B_max
-
- # Solve for directions in latent space that match PCs in activation space
- def linreg_lstsq(comp_np, mean_np, stdev_np, inst, config):
-     print('Performing least squares regression', flush=True)
-
-     torch.manual_seed(SEED_LINREG)
-     np.random.seed(SEED_LINREG)
-
-     comp = torch.from_numpy(comp_np).float().to(inst.model.device)
-     mean = torch.from_numpy(mean_np).float().to(inst.model.device)
-     stdev = torch.from_numpy(stdev_np).float().to(inst.model.device)
-
-     n_samp = max(10_000, config.n) // B * B  # make divisible
-     n_comp = comp.shape[0]
-     latent_dims = inst.model.get_latent_dims()
-
-     # We're looking for M s.t. M*P*G'(Z) = Z => M*A = Z
-     # Z = batch of latent vectors (n_samples x latent_dims)
-     # G'(Z) = batch of activations at intermediate layer
-     # A = P*G'(Z) = projected activations (n_samples x pca_coords)
-     # M = linear mapping (pca_coords x latent_dims)
-
-     # Minimization min_M ||MA - Z||_l2 rewritten as min_M.T ||A.T*M.T - Z.T||_l2
-     # to match format expected by pytorch.lstsq
-
-     # TODO: regression on pixel-space outputs? (using nonlinear optimizer)
-     # min_M lpips(G_full(MA), G_full(Z))
-
-     # Tensors to fill with data
-     # Dimensions other way around, so these are actually the transposes
-     A = np.zeros((n_samp, n_comp), dtype=np.float32)
-     Z = np.zeros((n_samp, latent_dims), dtype=np.float32)
-
-     # Project tensor X onto PCs, return coordinates
-     def project(X, comp):
-         N = X.shape[0]
-         K = comp.shape[0]
-         coords = torch.bmm(comp.expand([N]+[-1]*comp.ndim), X.view(N, -1, 1))
-         return coords.reshape(N, K)
-
-     for i in trange(n_samp // B, desc='Collecting samples', ascii=True):
-         z = inst.model.sample_latent(B)
-         inst.model.partial_forward(z, config.layer)
-         act = inst.retained_features()[config.layer].reshape(B, -1)
-
-         # Project onto basis
-         act = act - mean
-         coords = project(act, comp)
-         coords_scaled = coords / stdev
-
-         A[i*B:(i+1)*B] = coords_scaled.detach().cpu().numpy()
-         Z[i*B:(i+1)*B] = z.detach().cpu().numpy().reshape(B, -1)
-
-     # Solve least squares fit
-
-     # gelsd = divide-and-conquer SVD; good default
-     # gelsy = complete orthogonal factorization; sometimes faster
-     # gelss = SVD; slow but less memory hungry
-     M_t = scipy.linalg.lstsq(A, Z, lapack_driver='gelsd')[0]  # torch.lstsq(Z, A)[0][:n_comp, :]
- SEED_SAMPLING = 1
35
- SEED_RANDOM_DIRS = 2
36
- SEED_LINREG = 3
37
- SEED_VISUALIZATION = 5
38
-
39
- B = 20
40
- n_clusters = 500
41
-
42
- def get_random_dirs(components, dimensions):
43
- gen = np.random.RandomState(seed=SEED_RANDOM_DIRS)
44
- dirs = gen.normal(size=(components, dimensions))
45
- dirs /= np.sqrt(np.sum(dirs**2, axis=1, keepdims=True))
46
- return dirs.astype(np.float32)
47
-
48
- # Compute maximum batch size for given VRAM and network
49
- def get_max_batch_size(inst, device, layer_name=None):
50
- inst.remove_edits()
51
-
52
- # Reset statistics
53
- torch.cuda.reset_max_memory_cached(device)
54
- torch.cuda.reset_max_memory_allocated(device)
55
- total_mem = torch.cuda.get_device_properties(device).total_memory
56
-
57
- B_max = 20
58
-
59
- # Measure actual usage
60
- for i in range(2, B_max, 2):
61
- z = inst.model.sample_latent(n_samples=i)
62
- if layer_name:
63
- inst.model.partial_forward(z, layer_name)
64
- else:
65
- inst.model.forward(z)
66
-
67
- maxmem = torch.cuda.max_memory_allocated(device)
68
- del z
69
-
70
- if maxmem > 0.5*total_mem:
71
- print('Batch size {:d}: memory usage {:.0f}MB'.format(i, maxmem / 1e6))
72
- return i
73
-
74
- return B_max
75
-
- # Solve for directions in latent space that match PCs in activation space
- def linreg_lstsq(comp_np, mean_np, stdev_np, inst, config):
-     print('Performing least squares regression', flush=True)
-
-     torch.manual_seed(SEED_LINREG)
-     np.random.seed(SEED_LINREG)
-
-     comp = torch.from_numpy(comp_np).float().to(inst.model.device)
-     mean = torch.from_numpy(mean_np).float().to(inst.model.device)
-     stdev = torch.from_numpy(stdev_np).float().to(inst.model.device)
-
-     n_samp = max(10_000, config.n) // B * B # make divisible
-     n_comp = comp.shape[0]
-     latent_dims = inst.model.get_latent_dims()
-
-     # We're looking for M s.t. M*P*G'(Z) = Z => M*A = Z
-     # Z = batch of latent vectors (n_samples x latent_dims)
-     # G'(Z) = batch of activations at intermediate layer
-     # A = P*G'(Z) = projected activations (n_samples x pca_coords)
-     # M = linear mapping (pca_coords x latent_dims)
-
-     # Minimization min_M ||MA - Z||_l2 rewritten as min_M.T ||A.T*M.T - Z.T||_l2
-     # to match format expected by pytorch.lstsq
-
-     # TODO: regression on pixel-space outputs? (using nonlinear optimizer)
-     # min_M lpips(G_full(MA), G_full(Z))
-
-     # Tensors to fill with data
-     # Dimensions other way around, so these are actually the transposes
-     A = np.zeros((n_samp, n_comp), dtype=np.float32)
-     Z = np.zeros((n_samp, latent_dims), dtype=np.float32)
-
-     # Project tensor X onto PCs, return coordinates
-     def project(X, comp):
-         N = X.shape[0]
-         K = comp.shape[0]
-         coords = torch.bmm(comp.expand([N]+[-1]*comp.ndim), X.view(N, -1, 1))
-         return coords.reshape(N, K)
-
-     for i in trange(n_samp // B, desc='Collecting samples', ascii=True):
-         z = inst.model.sample_latent(B)
-         inst.model.partial_forward(z, config.layer)
-         act = inst.retained_features()[config.layer].reshape(B, -1)
-
-         # Project onto basis
-         act = act - mean
-         coords = project(act, comp)
-         coords_scaled = coords / stdev
-
-         A[i*B:(i+1)*B] = coords_scaled.detach().cpu().numpy()
-         Z[i*B:(i+1)*B] = z.detach().cpu().numpy().reshape(B, -1)
-
-     # Solve least squares fit
-
-     # gelsd = divide-and-conquer SVD; good default
-     # gelsy = complete orthogonal factorization; sometimes faster
-     # gelss = SVD; slow but less memory hungry
-     M_t = scipy.linalg.lstsq(A, Z, lapack_driver='gelsd')[0] # torch.lstsq(Z, A)[0][:n_comp, :]
-
-     # Solution given by rows of M_t
-     Z_comp = M_t[:n_comp, :]
-     Z_mean = np.mean(Z, axis=0, keepdims=True)
-
-     return Z_comp, Z_mean
-
- def regression(comp, mean, stdev, inst, config):
-     # Sanity check: verify orthonormality
-     M = np.dot(comp, comp.T)
-     if not np.allclose(M, np.identity(M.shape[0])):
-         det = np.linalg.det(M)
-         print(f'WARNING: Computed basis is not orthonormal (determinant={det})')
-
-     return linreg_lstsq(comp, mean, stdev, inst, config)
-
- def compute(config, dump_name, instrumented_model):
-     global B
-
-     timestamp = lambda : datetime.datetime.now().strftime("%d.%m %H:%M")
-     print(f'[{timestamp()}] Computing', dump_name.name)
-
-     # Ensure reproducibility
-     torch.manual_seed(0) # also sets cuda seeds
-     np.random.seed(0)
-
-     # Speed up backend
-     torch.backends.cudnn.benchmark = True
-
-     has_gpu = torch.cuda.is_available()
-     device = torch.device('cuda' if has_gpu else 'cpu')
-     layer_key = config.layer
-
-     if instrumented_model is None:
-         inst = get_instrumented_model(config.model, config.output_class, layer_key, device)
-         model = inst.model
-     else:
-         print('Reusing InstrumentedModel instance')
-         inst = instrumented_model
-         model = inst.model
-         inst.remove_edits()
-         model.set_output_class(config.output_class)
-
-     # Regress back to w space
-     if config.use_w:
-         print('Using W latent space')
-         model.use_w()
-
-     inst.retain_layer(layer_key)
-     model.partial_forward(model.sample_latent(1), layer_key)
-     sample_shape = inst.retained_features()[layer_key].shape
-     sample_dims = np.prod(sample_shape)
-     print('Feature shape:', sample_shape)
-
-     input_shape = inst.model.get_latent_shape()
-     input_dims = inst.model.get_latent_dims()
-
-     config.components = min(config.components, sample_dims)
-     transformer = get_estimator(config.estimator, config.components, config.sparsity)
-
-     X = None
-     X_global_mean = None
-
-     # Figure out batch size if not provided
-     B = config.batch_size or get_max_batch_size(inst, device, layer_key)
-
-     # Divisible by B (ignored in output name)
-     N = config.n // B * B
-
-     # Compute maximum batch size based on RAM + pagefile budget
-     target_bytes = 20 * 1_000_000_000 # GB
-     feat_size_bytes = sample_dims * np.dtype('float64').itemsize
-     N_limit_RAM = np.floor_divide(target_bytes, feat_size_bytes)
-     if not transformer.batch_support and N > N_limit_RAM:
-         print('WARNING: estimator does not support batching, ' \
-             'given config will use {:.1f} GB memory.'.format(feat_size_bytes / 1_000_000_000 * N))
-
-     # 32-bit LAPACK gets very unhappy about huge matrices (in linalg.svd)
-     if config.estimator == 'ica':
-         lapack_max_N = np.floor_divide(np.iinfo(np.int32).max // 4, sample_dims) # 4x extra buffer
-         if N > lapack_max_N:
-             raise RuntimeError(f'Matrices too large for ICA, please use N <= {lapack_max_N}')
-
-     print('B={}, N={}, dims={}, N/dims={:.1f}'.format(B, N, sample_dims, N/sample_dims), flush=True)
-
-     # Must not depend on chosen batch size (reproducibility)
-     NB = max(B, max(2_000, 3*config.components)) # ipca: as large as possible!
-
-     samples = None
-     if not transformer.batch_support:
-         samples = np.zeros((N + NB, sample_dims), dtype=np.float32)
-
-     torch.manual_seed(config.seed or SEED_SAMPLING)
-     np.random.seed(config.seed or SEED_SAMPLING)
-
-     # Use exactly the same latents regardless of batch size
-     # Store in main memory, since N might be huge (1M+)
-     # Run in batches, since sample_latent() might perform Z -> W mapping
-     n_lat = ((N + NB - 1) // B + 1) * B
-     latents = np.zeros((n_lat, *input_shape[1:]), dtype=np.float32)
-     with torch.no_grad():
-         for i in trange(n_lat // B, desc='Sampling latents'):
-             latents[i*B:(i+1)*B] = model.sample_latent(n_samples=B).cpu().numpy()
-
-     # Decomposition on non-Gaussian latent space
-     samples_are_latents = layer_key in ['g_mapping', 'style'] and inst.model.latent_space_name() == 'W'
-
-     canceled = False
-     try:
-         X = np.ones((NB, sample_dims), dtype=np.float32)
-         action = 'Fitting' if transformer.batch_support else 'Collecting'
-         for gi in trange(0, N, NB, desc=f'{action} batches (NB={NB})', ascii=True):
-             for mb in range(0, NB, B):
-                 z = torch.from_numpy(latents[gi+mb:gi+mb+B]).to(device)
-
-                 if samples_are_latents:
-                     # Decomposition on latents directly (e.g. StyleGAN W)
-                     batch = z.reshape((B, -1))
-                 else:
-                     # Decomposition on intermediate layer
-                     with torch.no_grad():
-                         model.partial_forward(z, layer_key)
-
-                     # Permuted to place PCA dimensions last
-                     batch = inst.retained_features()[layer_key].reshape((B, -1))
-
-                 space_left = min(B, NB - mb)
-                 X[mb:mb+space_left] = batch.cpu().numpy()[:space_left]
-
-             if transformer.batch_support:
-                 if not transformer.fit_partial(X.reshape(-1, sample_dims)):
-                     break
-             else:
-                 samples[gi:gi+NB, :] = X.copy()
-     except KeyboardInterrupt:
-         if not transformer.batch_support:
-             sys.exit(1) # no progress yet
-
-         dump_name = dump_name.parent / dump_name.name.replace(f'n{N}', f'n{gi}')
-         print(f'Saving current state to "{dump_name.name}" before exiting')
-         canceled = True
-
-     if not transformer.batch_support:
-         X = samples # Use all samples
-         X_global_mean = X.mean(axis=0, keepdims=True, dtype=np.float32) # TODO: activations surely multi-modal...!
-         X -= X_global_mean
-
-         print(f'[{timestamp()}] Fitting whole batch')
-         t_start_fit = datetime.datetime.now()
-
-         transformer.fit(X)
-
-         print(f'[{timestamp()}] Done in {datetime.datetime.now() - t_start_fit}')
-         assert np.all(transformer.transformer.mean_ < 1e-3), 'Mean of normalized data should be zero'
-     else:
-         X_global_mean = transformer.transformer.mean_.reshape((1, sample_dims))
-         X = X.reshape(-1, sample_dims)
-         X -= X_global_mean
-
-     X_comp, X_stdev, X_var_ratio = transformer.get_components()
-
-     assert X_comp.shape[1] == sample_dims \
-         and X_comp.shape[0] == config.components \
-         and X_global_mean.shape[1] == sample_dims \
-         and X_stdev.shape[0] == config.components, 'Invalid shape'
-
-     # 'Activations' are really latents in a secondary latent space
-     if samples_are_latents:
-         Z_comp = X_comp
-         Z_global_mean = X_global_mean
-     else:
-         Z_comp, Z_global_mean = regression(X_comp, X_global_mean, X_stdev, inst, config)
-
-     # Normalize
-     Z_comp /= np.linalg.norm(Z_comp, axis=-1, keepdims=True)
-
-     # Random projections
-     # We expect these to explain much less of the variance
-     random_dirs = get_random_dirs(config.components, np.prod(sample_shape))
-     n_rand_samples = min(5000, X.shape[0])
-     X_view = X[:n_rand_samples, :].T
-     assert np.shares_memory(X_view, X), "Error: slice produced copy"
-     X_stdev_random = np.dot(random_dirs, X_view).std(axis=1)
-
-     # Inflate back to proper shapes (for easier broadcasting)
-     X_comp = X_comp.reshape(-1, *sample_shape)
-     X_global_mean = X_global_mean.reshape(sample_shape)
-     Z_comp = Z_comp.reshape(-1, *input_shape)
-     Z_global_mean = Z_global_mean.reshape(input_shape)
-
-     # Compute stdev in latent space if non-Gaussian
-     lat_stdev = np.ones_like(X_stdev)
-     if config.use_w:
-         samples = model.sample_latent(5000).reshape(5000, input_dims).detach().cpu().numpy()
-         coords = np.dot(Z_comp.reshape(-1, input_dims), samples.T)
-         lat_stdev = coords.std(axis=1)
-
-     os.makedirs(dump_name.parent, exist_ok=True)
-     np.savez_compressed(dump_name, **{
-         'act_comp': X_comp.astype(np.float32),
-         'act_mean': X_global_mean.astype(np.float32),
-         'act_stdev': X_stdev.astype(np.float32),
-         'lat_comp': Z_comp.astype(np.float32),
-         'lat_mean': Z_global_mean.astype(np.float32),
-         'lat_stdev': lat_stdev.astype(np.float32),
-         'var_ratio': X_var_ratio.astype(np.float32),
-         'random_stdevs': X_stdev_random.astype(np.float32),
-     })
-
-     if canceled:
-         sys.exit(1)
-
-     # Don't shutdown if passed as param
-     if instrumented_model is None:
-         inst.close()
-         del inst
-         del model
-
-     del X
-     del X_comp
-     del random_dirs
-     del batch
-     del samples
-     del latents
-     torch.cuda.empty_cache()
-
- # Return cached results or compute if needed
- # Pass existing InstrumentedModel instance to reuse it
- def get_or_compute(config, model=None, submit_config=None, force_recompute=False):
-     if submit_config is None:
-         wrkdir = str(Path(__file__).parent.resolve())
-         submit_config = SimpleNamespace(run_dir_root = wrkdir, run_dir = wrkdir)
-
-     # Called directly by run.py
-     return _compute(submit_config, config, model, force_recompute)
-
- def _compute(submit_config, config, model=None, force_recompute=False):
-     basedir = Path(submit_config.run_dir)
-     outdir = basedir / 'out'
-
-     if config.n is None:
-         raise RuntimeError('Must specify number of samples with -n=XXX')
-
-     if model and not isinstance(model, InstrumentedModel):
-         raise RuntimeError('Passed model has to be wrapped in "InstrumentedModel"')
-
-     if config.use_w and not 'StyleGAN' in config.model:
-         raise RuntimeError(f'Cannot change latent space of non-StyleGAN model {config.model}')
-
-     transformer = get_estimator(config.estimator, config.components, config.sparsity)
-     dump_name = "{}-{}_{}_{}_n{}{}{}.npz".format(
-         config.model.lower(),
-         config.output_class.replace(' ', '_'),
-         config.layer.lower(),
-         transformer.get_param_str(),
-         config.n,
-         '_w' if config.use_w else '',
-         f'_seed{config.seed}' if config.seed else ''
-     )
-
-     dump_path = basedir / 'cache' / 'components' / dump_name
-
-     if not dump_path.is_file() or force_recompute:
-         print('Not cached')
-         t_start = datetime.datetime.now()
-         compute(config, dump_path, model)
-         print('Total time:', datetime.datetime.now() - t_start)
-
-     return dump_path
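The latent-direction recovery in `linreg_lstsq` reduces to an ordinary least-squares solve: given projected activation coordinates `A` and the latents `Z` that produced them, find the linear map `M` minimizing ||A·M − Z||. A minimal NumPy/SciPy sketch of just that solve, on synthetic data (all names and shapes here are illustrative, not taken from the deleted module):

```python
import numpy as np
from scipy.linalg import lstsq

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1000 samples, 8 PCA coordinates, 16 latent dims
n_samp, n_comp, latent_dims = 1000, 8, 16
M_true = rng.normal(size=(n_comp, latent_dims))            # hidden linear map
A = rng.normal(size=(n_samp, n_comp))                      # projected activation coords
Z = A @ M_true + 0.01 * rng.normal(size=(n_samp, latent_dims))  # noisy latents

# Solve min_M ||A M - Z||_2, same driver choice as in linreg_lstsq
# (gelsd = divide-and-conquer SVD)
M_t = lstsq(A, Z, lapack_driver='gelsd')[0]
```

With enough samples and small noise, the rows of `M_t` recover `M_true` closely, which is why the original code can treat the first `n_comp` rows as latent-space directions.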
spaces/Disguised/anime_character_recognizer/app.py DELETED
@@ -1,20 +0,0 @@
- import gradio as gr
- from fastai.vision.all import *
- import gradio as gr
- import re
- from glob import glob
-
- learn = load_learner('model_ft15(extra).pkl')
-
- categories = learn.dls.vocab
-
- def classify_image(img):
-     pred,idx,probs = learn.predict(img)
-     return dict(zip(categories, map(float,probs)))
-
- image = gr.inputs.Image(shape=(192, 192))
- label = gr.outputs.Label()
-
-
- intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples='./examples')
- intf.launch(inline=False)
spaces/Dorado607/ChuanhuChatGPT/modules/config.py DELETED
@@ -1,269 +0,0 @@
- from collections import defaultdict
- from contextlib import contextmanager
- import os
- import logging
- import sys
- import commentjson as json
-
- from . import shared
- from . import presets
-
-
- __all__ = [
-     "my_api_key",
-     "sensitive_id",
-     "authflag",
-     "auth_list",
-     "dockerflag",
-     "retrieve_proxy",
-     "log_level",
-     "advance_docs",
-     "update_doc_config",
-     "usage_limit",
-     "multi_api_key",
-     "server_name",
-     "server_port",
-     "share",
-     "check_update",
-     "latex_delimiters_set",
-     "hide_history_when_not_logged_in",
-     "default_chuanhu_assistant_model",
-     "show_api_billing"
- ]
-
- # Use a single unified config file to avoid the confusion of too many files (lowest priority)
- # It also lays the groundwork for config support for future custom features
- if os.path.exists("config.json"):
-     with open("config.json", "r", encoding='utf-8') as f:
-         config = json.load(f)
- else:
-     config = {}
-
-
- def load_config_to_environ(key_list):
-     global config
-     for key in key_list:
-         if key in config:
-             os.environ[key.upper()] = os.environ.get(key.upper(), config[key])
-
-
- lang_config = config.get("language", "auto")
- language = os.environ.get("LANGUAGE", lang_config)
-
- hide_history_when_not_logged_in = config.get(
-     "hide_history_when_not_logged_in", False)
- check_update = config.get("check_update", True)
- show_api_billing = config.get("show_api_billing", False)
- show_api_billing = bool(os.environ.get("SHOW_API_BILLING", show_api_billing))
-
- if os.path.exists("api_key.txt"):
-     logging.info("Found api_key.txt, migrating it...")
-     with open("api_key.txt", "r", encoding="utf-8") as f:
-         config["openai_api_key"] = f.read().strip()
-     os.rename("api_key.txt", "api_key(deprecated).txt")
-     with open("config.json", "w", encoding='utf-8') as f:
-         json.dump(config, f, indent=4, ensure_ascii=False)
-
- if os.path.exists("auth.json"):
-     logging.info("Found auth.json, migrating it...")
-     auth_list = []
-     with open("auth.json", "r", encoding='utf-8') as f:
-         auth = json.load(f)
-         for _ in auth:
-             if auth[_]["username"] and auth[_]["password"]:
-                 auth_list.append((auth[_]["username"], auth[_]["password"]))
-             else:
-                 logging.error("Please check the usernames and passwords in auth.json!")
-                 sys.exit(1)
-         config["users"] = auth_list
-     os.rename("auth.json", "auth(deprecated).json")
-     with open("config.json", "w", encoding='utf-8') as f:
-         json.dump(config, f, indent=4, ensure_ascii=False)
-
- # Handle Docker if we are running in Docker
- dockerflag = config.get("dockerflag", False)
- if os.environ.get("dockerrun") == "yes":
-     dockerflag = True
-
- # Handle the API key and the list of allowed users
- my_api_key = config.get("openai_api_key", "")
- my_api_key = os.environ.get("OPENAI_API_KEY", my_api_key)
- os.environ["OPENAI_API_KEY"] = my_api_key
- os.environ["OPENAI_EMBEDDING_API_KEY"] = my_api_key
-
- if config.get("legacy_api_usage", False):
-     sensitive_id = config.get("sensitive_id", "")
-     sensitive_id = os.environ.get("SENSITIVE_ID", sensitive_id)
- else:
-     sensitive_id = my_api_key
-
- google_palm_api_key = config.get("google_palm_api_key", "")
- google_palm_api_key = os.environ.get(
-     "GOOGLE_PALM_API_KEY", google_palm_api_key)
- os.environ["GOOGLE_PALM_API_KEY"] = google_palm_api_key
-
- xmchat_api_key = config.get("xmchat_api_key", "")
- os.environ["XMCHAT_API_KEY"] = xmchat_api_key
-
- minimax_api_key = config.get("minimax_api_key", "")
- os.environ["MINIMAX_API_KEY"] = minimax_api_key
- minimax_group_id = config.get("minimax_group_id", "")
- os.environ["MINIMAX_GROUP_ID"] = minimax_group_id
-
- load_config_to_environ(["openai_api_type", "azure_openai_api_key", "azure_openai_api_base_url",
-                         "azure_openai_api_version", "azure_deployment_name", "azure_embedding_deployment_name", "azure_embedding_model_name"])
-
-
- usage_limit = os.environ.get("USAGE_LIMIT", config.get("usage_limit", 120))
-
- # Multi-account mechanism
- multi_api_key = config.get("multi_api_key", False)  # whether the multi-account mechanism is enabled
- if multi_api_key:
-     api_key_list = config.get("api_key_list", [])
-     if len(api_key_list) == 0:
-         logging.error("Multi-account mode is enabled, but api_key_list is empty; please check config.json")
-         sys.exit(1)
-     shared.state.set_api_key_queue(api_key_list)
-
- auth_list = config.get("users", [])  # actually the list of users
- authflag = len(auth_list) > 0  # whether auth is enabled, now determined by the length of auth_list
-
- # Handle a custom api_host: prefer the environment variable, and wire it up automatically if present
- api_host = os.environ.get(
-     "OPENAI_API_BASE", config.get("openai_api_base", None))
- if api_host is not None:
-     shared.state.set_api_host(api_host)
-     os.environ["OPENAI_API_BASE"] = f"{api_host}/v1"
-     logging.info(f"OpenAI API Base set to: {os.environ['OPENAI_API_BASE']}")
-
- default_chuanhu_assistant_model = config.get(
-     "default_chuanhu_assistant_model", "gpt-3.5-turbo")
- for x in ["GOOGLE_CSE_ID", "GOOGLE_API_KEY", "WOLFRAM_ALPHA_APPID", "SERPAPI_API_KEY"]:
-     if config.get(x, None) is not None:
-         os.environ[x] = config[x]
-
-
- @contextmanager
- def retrieve_openai_api(api_key=None):
-     old_api_key = os.environ.get("OPENAI_API_KEY", "")
-     if api_key is None:
-         os.environ["OPENAI_API_KEY"] = my_api_key
-         yield my_api_key
-     else:
-         os.environ["OPENAI_API_KEY"] = api_key
-         yield api_key
-     os.environ["OPENAI_API_KEY"] = old_api_key
-
-
- # Handle logging
- log_level = config.get("log_level", "INFO")
- logging.basicConfig(
-     level=log_level,
-     format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s",
- )
-
- # Handle proxies:
- http_proxy = os.environ.get("HTTP_PROXY", "")
- https_proxy = os.environ.get("HTTPS_PROXY", "")
- http_proxy = config.get("http_proxy", http_proxy)
- https_proxy = config.get("https_proxy", https_proxy)
-
- # Reset the system variables; leave them unset when not needed to avoid global proxy errors
- os.environ["HTTP_PROXY"] = ""
- os.environ["HTTPS_PROXY"] = ""
-
- local_embedding = config.get("local_embedding", False)  # whether to use local embeddings
-
-
- @contextmanager
- def retrieve_proxy(proxy=None):
-     """
-     1. If proxy is None, set the environment variables and return the most recently configured proxy.
-     2. If proxy is not None, update the current proxy configuration without updating the environment variables.
-     """
-     global http_proxy, https_proxy
-     if proxy is not None:
-         http_proxy = proxy
-         https_proxy = proxy
-         yield http_proxy, https_proxy
-     else:
-         old_var = os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"]
-         os.environ["HTTP_PROXY"] = http_proxy
-         os.environ["HTTPS_PROXY"] = https_proxy
-         yield http_proxy, https_proxy  # return new proxy
-
-         # return old proxy
-         os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] = old_var
-
-
- # Handle LaTeX options
- user_latex_option = config.get("latex_option", "default")
- if user_latex_option == "default":
-     latex_delimiters_set = [
-         {"left": "$$", "right": "$$", "display": True},
-         {"left": "$", "right": "$", "display": False},
-         {"left": "\\(", "right": "\\)", "display": False},
-         {"left": "\\[", "right": "\\]", "display": True},
-     ]
- elif user_latex_option == "strict":
-     latex_delimiters_set = [
-         {"left": "$$", "right": "$$", "display": True},
-         {"left": "\\(", "right": "\\)", "display": False},
-         {"left": "\\[", "right": "\\]", "display": True},
-     ]
- elif user_latex_option == "all":
-     latex_delimiters_set = [
-         {"left": "$$", "right": "$$", "display": True},
-         {"left": "$", "right": "$", "display": False},
-         {"left": "\\(", "right": "\\)", "display": False},
-         {"left": "\\[", "right": "\\]", "display": True},
-         {"left": "\\begin{equation}", "right": "\\end{equation}", "display": True},
-         {"left": "\\begin{align}", "right": "\\end{align}", "display": True},
-         {"left": "\\begin{alignat}", "right": "\\end{alignat}", "display": True},
-         {"left": "\\begin{gather}", "right": "\\end{gather}", "display": True},
-         {"left": "\\begin{CD}", "right": "\\end{CD}", "display": True},
-     ]
- elif user_latex_option == "disabled":
-     latex_delimiters_set = []
- else:
-     latex_delimiters_set = [
-         {"left": "$$", "right": "$$", "display": True},
-         {"left": "$", "right": "$", "display": False},
-         {"left": "\\(", "right": "\\)", "display": False},
-         {"left": "\\[", "right": "\\]", "display": True},
-     ]
-
- # Handle advanced docs
- advance_docs = defaultdict(lambda: defaultdict(dict))
- advance_docs.update(config.get("advance_docs", {}))
-
-
- def update_doc_config(two_column_pdf):
-     global advance_docs
-     advance_docs["pdf"]["two_column"] = two_column_pdf
-
-     logging.info(f"Updated document parameters: {advance_docs}")
-
-
- # Handle gradio.launch parameters
- server_name = config.get("server_name", None)
- server_port = config.get("server_port", None)
- if server_name is None:
-     if dockerflag:
-         server_name = "0.0.0.0"
-     else:
-         server_name = "127.0.0.1"
- if server_port is None:
-     if dockerflag:
-         server_port = 7860
-
- assert server_port is None or type(server_port) == int, "server_port must be an int"
-
- # Set the default model
- default_model = config.get("default_model", "")
- try:
-     presets.DEFAULT_MODEL = presets.MODELS.index(default_model)
- except ValueError:
-     pass
-
- share = config.get("share", False)
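The `retrieve_proxy` and `retrieve_openai_api` helpers above both follow the same pattern: temporarily export environment variables for the duration of a `with` block and restore the previous state on exit. A generic, self-contained sketch of that pattern (the helper name and variable here are illustrative, not from the deleted module):

```python
import os
from contextlib import contextmanager

@contextmanager
def temporary_env(**overrides):
    """Set environment variables for the duration of a with-block, then restore them."""
    old = {k: os.environ.get(k) for k in overrides}
    os.environ.update(overrides)
    try:
        yield
    finally:
        # Restore previous values; drop keys that did not exist before
        for k, v in old.items():
            if v is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = v

with temporary_env(HTTP_PROXY="http://127.0.0.1:8080"):
    proxied = os.environ["HTTP_PROXY"]  # visible only inside the block
```

Unlike the original helpers, this sketch restores state in a `finally` clause, so the environment is cleaned up even if the body raises.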
spaces/Drac77/hakurei-waifu-diffusion/app.py DELETED
@@ -1,3 +0,0 @@
- import gradio as gr
-
- gr.Interface.load("models/hakurei/waifu-diffusion").launch()