Commit 23de991
Parent(s): 78a3baa
Update parquet files (step 34 of 121)
This view is limited to 50 files because it contains too many changes. See raw diff.
- spaces/1gistliPinn/ChatGPT4/Examples/CivilCAD Free LINK Download Full Version.md +0 -6
- spaces/1gistliPinn/ChatGPT4/Examples/Crack Crysis 2 Pc 64 Bits.md +0 -25
- spaces/1gistliPinn/ChatGPT4/Examples/Download Feem Wifi Pro Cracked For Windowsk.md +0 -6
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Animal Connect How to Find and Match the Same Animals in Different Scenarios.md +0 -146
- spaces/1phancelerku/anime-remove-background/Download Simcity Buildit Hack APK and Enjoy the Game with No Limits.md +0 -105
- spaces/3i2irg/SF-model/README.md +0 -12
- spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r34.py +0 -26
- spaces/AHzizi/WaifuVoiceGen/models.py +0 -533
- spaces/AIConsultant/MusicGen/audiocraft/grids/compression/encodec_audiogen_16khz.py +0 -29
- spaces/AIGC-Audio/AudioGPT/NeuralSeq/vocoders/pwg.py +0 -137
- spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/linear_probe.py +0 -63
- spaces/ASJMO/freegpt/g4f/README.md +0 -5
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/click/Factory.js +0 -11
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pinch/Factory.js +0 -11
- spaces/AkashKhamkar/QnA-generator/README.md +0 -12
- spaces/AlekseyKorshuk/model-evaluation/utils.py +0 -34
- spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/run.sh +0 -2
- spaces/Amrrs/DragGan-Inversion/viz/__init__.py +0 -9
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/latent_diffusion/__init__.py +0 -11
- spaces/Andy1621/uniformer_image_detection/tools/misc/print_config.py +0 -54
- spaces/Anonymous-123/ImageNet-Editing/run.sh +0 -86
- spaces/Arnx/MusicGenXvAKN/audiocraft/quantization/base.py +0 -107
- spaces/AsakuraMizu/moe-tts/modules.py +0 -390
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distro/distro.py +0 -1399
- spaces/AtomdffAI/wechatgpt4atom/bot/openai/open_ai_bot.py +0 -166
- spaces/BAAI/dreambooth-altdiffusion/README.md +0 -14
- spaces/Bart92/RVC_HF/infer/modules/train/train.py +0 -723
- spaces/Benson/text-generation/Examples/Bowmasters Apk All Characters Unlocked 2022.md +0 -61
- spaces/CVH-vn1210/make_hair/minigpt4/processors/__init__.py +0 -33
- spaces/CVMX-jaca-tonos/YouTube-Video-Streaming-Spanish-ASR/README.md +0 -12
- spaces/CVPR/LIVE/thrust/thrust/scan.h +0 -1564
- spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/uninitialized_fill.h +0 -114
- spaces/CVPR/WALT/mmdet/models/backbones/regnet.py +0 -325
- spaces/CVPR/WALT/mmdet/models/dense_heads/fovea_head.py +0 -341
- spaces/CVPR/WALT/mmdet/utils/profiling.py +0 -39
- spaces/Caoyunkang/Segment-Any-Anomaly/SAM/scripts/export_onnx_model.py +0 -204
- spaces/CofAI/chat.b4/client/css/options.css +0 -10
- spaces/CofAI/chat/client/css/message.css +0 -65
- spaces/DEEMOSTECH/ChatAvatar/index.html +0 -1
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/configTools.py +0 -348
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/colors.py +0 -359
- spaces/Dabs/wordcloud/README.md +0 -11
- spaces/DaleChen/AutoGPT/autogpt/agent/agent_manager.py +0 -103
- spaces/Dana19/animal_classifier/README.md +0 -13
- spaces/DarwinAnim8or/Mistral-Chat/README.md +0 -12
- spaces/DemoLou/moe-tts/text/ngu_dialect.py +0 -30
- spaces/DragGan/DragGan-Inversion/stylegan_human/alignment.py +0 -233
- spaces/ECCV2022/bytetrack/tutorials/ctracker/mot_online/kalman_filter.py +0 -269
- spaces/ELam/text_generator/README.md +0 -12
- spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_datasets/academic_test.py +0 -57
spaces/1gistliPinn/ChatGPT4/Examples/CivilCAD Free LINK Download Full Version.md
DELETED
@@ -1,6 +0,0 @@
-<h2>CivilCAD Free Download Full Version</h2><br /><p><b><b>Download Zip</b> ››››› <a href="https://imgfil.com/2uxYkO">https://imgfil.com/2uxYkO</a></b></p><br /><br />
-
-Civil Cad Crack Download Free by Tamergen, released 26 November 2016 Civil Cad Crack Download Free >>> http://shorl.com/jyfifafypuvi ... 1fdad05405<br />
-<br />
-<br />
-<p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Crack Crysis 2 Pc 64 Bits.md
DELETED
@@ -1,25 +0,0 @@
-
-<h1>How to Run Crysis 2 on Windows 10 64-bit OS</h1>
-<p>Crysis 2 is a sci-fi first-person shooter game developed by Crytek and released in 2011. It is the sequel to the critically acclaimed Crysis, which was known for its stunning graphics and demanding system requirements. Crysis 2 is set in a post-apocalyptic New York City, where the player has to fight against alien invaders and human enemies using a nanosuit that grants enhanced abilities.</p>
-<p>Many PC gamers wonder if they can run Crysis 2 on Windows 10 64-bit OS, since the game was originally designed for Windows XP, Vista, and 7. The good news is that Crysis 2 is compatible with Windows 10 64-bit OS, as long as you have the recommended system requirements and install the latest patches and updates. Here are some tips on how to run Crysis 2 on Windows 10 64-bit OS smoothly and enjoyably.</p>
-<h2>crack crysis 2 pc 64 bits</h2><br /><p><b><b>Download Zip</b> → <a href="https://imgfil.com/2uxZ67">https://imgfil.com/2uxZ67</a></b></p><br /><br />
-<h2>Check Your System Requirements</h2>
-<p>Before you install and run Crysis 2 on Windows 10 64-bit OS, you should check if your PC meets the minimum or recommended system requirements for the game. Here are the official system requirements for Crysis 2:</p>
-<table>
-<tr><th>Minimum Requirements</th><th>Recommended Requirements</th></tr>
-<tr><td>CPU: Intel Core 2 Duo 2 GHz or AMD Athlon 64 X2 2 GHz</td><td>CPU: Intel Core i5-750 or AMD Phenom II X4 3 GHz</td></tr>
-<tr><td>RAM: 2 GB</td><td>RAM: 3 GB</td></tr>
-<tr><td>GPU: NVIDIA GeForce 8800 GT or ATI Radeon HD 3850 with 512 MB VRAM</td><td>GPU: NVIDIA GeForce GTX 260 or ATI Radeon HD 5850 with 1 GB VRAM</td></tr>
-<tr><td>OS: Windows XP, Vista, or 7 (32-bit)</td><td>OS: Windows XP, Vista, or 7 (64-bit)</td></tr>
-<tr><td>HDD: At least 9 GB of free space</td><td>HDD: At least 9 GB of free space</td></tr>
-<tr><td>DX: DirectX 9.0c</td><td>DX: DirectX 11</td></tr>
-<tr><td>Sound: DirectX compatible sound card</td><td>Sound: DirectX compatible sound card</td></tr>
-<tr><td>Internet: Broadband connection for online multiplayer</td><td>Internet: Broadband connection for online multiplayer</td></tr>
-</table>
-<p>If your PC meets the minimum requirements, you should be able to run Crysis 2 on Windows 10 64-bit OS at low settings and resolution. However, if you want to enjoy the game at higher settings and resolution, you should aim for the recommended requirements or higher. You can use tools like Can You Run It or System Requirements Lab to check your PC's compatibility with Crysis 2.</p>
-<h2>Install the Latest Patches and Updates</h2>
-<p>Another important step to run Crysis 2 on Windows 10 64-bit OS is to install the latest patches and updates for the game. These patches and updates fix various bugs, improve performance, and add new features to the game. The most important patch for Crysis 2 is Patch 1.9, which prepares the game for DirectX 11 features and high-resolution textures[^1^]. You can download Patch 1.9 from the official website of Crysis or from other sources like Steam or Origin.</p>
-<p>Patch 1.9 also includes two optional downloads: DirectX 11 Ultra Upgrade and High-Resolution Textures[^1^]. These downloads enhance the graphics quality of Crysis</p>
-<p></p> d5da3c52bf<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Download Feem Wifi Pro Cracked For Windowsk.md
DELETED
@@ -1,6 +0,0 @@
-<h2>download feem wifi pro cracked for windowsk</h2><br /><p><b><b>Download</b> ✒ <a href="https://imgfil.com/2uy0TG">https://imgfil.com/2uy0TG</a></b></p><br /><br />
-
-Ponyo Full Movie In English 1080p ->>> DOWNLOAD. Transforming into a little ... download feem wifi pro cracked for windowsk · Chudail Story ... 1fdad05405<br />
-<br />
-<br />
-<p></p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Animal Connect How to Find and Match the Same Animals in Different Scenarios.md
DELETED
@@ -1,146 +0,0 @@
-<br />
-<h1>How to Connect with Animals: A Guide for Beginners</h1>
-<p>Have you ever wondered what your pet is thinking or feeling? Have you ever wished you could communicate with animals in a deeper and more meaningful way? If so, you are not alone. Many people have a natural curiosity and affinity for animals, and want to learn how to connect with them on a spiritual, emotional, or mental level.</p>
-<p>Animal communication, also known as interspecies communication, is the ability to communicate with animals using non-verbal methods such as telepathy, intuition, or body language. It is not a supernatural or paranormal phenomenon, but rather a natural and innate skill that anyone can develop with practice and patience.</p>
-<h2>connect animal</h2><br /><p><b><b>Download</b> ::: <a href="https://urlin.us/2uSVI8">https://urlin.us/2uSVI8</a></b></p><br /><br />
-<p>In this article, we will explore what animal communication is and why it is important, how to prepare yourself for it, how to practice it in different situations, and how to improve your abilities. We will also answer some frequently asked questions about animal communication at the end.</p>
-<h2>What is animal communication and why is it important?</h2>
-<p>Animal communication is the exchange of information and feelings between humans and animals without using words or sounds. It can involve sending and receiving images, emotions, thoughts, sensations, impressions, or intentions through a mental or energetic connection.</p>
-<p>Animal communication is important for several reasons. First of all, it can help us understand animals better and appreciate their intelligence, personality, and emotions. It can also help us improve our relationship with them by resolving conflicts, addressing behavioral issues, or expressing our love and gratitude.</p>
-<p>connect animal game<br />
-connect animal onet kyodai<br />
-connect animal classic<br />
-connect animal puzzle<br />
-connect animal matching<br />
-connect animal link<br />
-connect animal deluxe<br />
-connect animal free<br />
-connect animal offline<br />
-connect animal online<br />
-connect animal app<br />
-connect animal apk<br />
-connect animal download<br />
-connect animal play store<br />
-connect animal y8<br />
-connect animal html5<br />
-connect animal mobile<br />
-connect animal pc<br />
-connect animal android<br />
-connect animal ios<br />
-connect animal ipad<br />
-connect animal iphone<br />
-connect animal spearmint games<br />
-connect animal around the world<br />
-connect animal travel<br />
-connect animal cute<br />
-connect animal fun<br />
-connect animal addictive<br />
-connect animal challenging<br />
-connect animal levels<br />
-connect animal timer<br />
-connect animal power ups<br />
-connect animal hints<br />
-connect animal shuffle<br />
-connect animal bomb<br />
-connect animal score<br />
-connect animal leaderboard<br />
-connect animal review<br />
-connect animal rating<br />
-connect animal feedback<br />
-connect animal tips<br />
-connect animal tricks<br />
-connect animal cheats<br />
-connect animal guide<br />
-connect animal walkthrough<br />
-connect animal gameplay<br />
-connect animal video<br />
-connect animal trailer<br />
-connect animals onet kyodai game y8.com[^2^]</p>
-<p>Secondly, animal communication can benefit both humans and animals in terms of health and well-being. It can help us detect and treat physical or emotional problems in animals before they become serious. It can also help us cope with stress, anxiety, grief, or loneliness by providing comfort and support from our animal friends.</p>
-<p>Thirdly, animal communication can foster a deeper connection with nature and all living beings. It can help us respect and protect animals and their habitats by raising our awareness of their needs and rights. It can also help us learn from their wisdom and insights by tapping into their unique perspectives and experiences.</p>
-<h2>How to prepare yourself for animal communication</h2>
-<h3>The skills and qualities you need to develop</h3>
-<p>To communicate with animals effectively, you need to develop some skills and qualities that will enhance your receptivity and accuracy. Some of these are:</p>
-<ul>
-<li><b>Empathy:</b> The ability to feel what another being is feeling and understand their point of view.</li>
-<li><b>Intuition:</b> The ability to access your inner knowing and trust your gut feelings.</li>
-<li><b>Focus:</b> The ability to concentrate on one thing at a time and block out distractions.</li>
-<li><b>Calmness:</b> The ability to relax your mind and body and release any tension or negativity.</li>
-<li><b>Openness:</b> The ability to be curious and willing to learn from animals without judgment or prejudice.</li>
-<li><b>Honesty:</b> The ability to be truthful and authentic with yourself and animals.</li>
-</ul>
-<p>These skills and qualities can be cultivated through various practices such as meditation, mindfulness, yoga, journaling, or self-care. You can also learn from other animal communicators by reading books, taking courses, or joining communities.</p>
-<h3>The tools and techniques you can use</h3>
-<p>There are many tools and techniques that can help you communicate with animals more easily and effectively. Some of these are:</p>
-<ul>
-<li><b>Photos:</b> You can use photos of animals to establish a connection with them and send or receive messages. You can also use photos of yourself to introduce yourself to animals and share your intentions.</li>
-<li><b>Objects:</b> You can use objects that belong to animals or have their scent or energy to connect with them. For example, you can use their toys, collars, blankets, or hair.</li>
-<li><b>Pendulums:</b> You can use pendulums to ask yes or no questions to animals and receive answers by observing the movement of the pendulum.</li>
-<li><b>Cards:</b> You can use cards such as tarot cards, oracle cards, or animal cards to receive guidance or insights from animals or the spirit world.</li>
-<li><b>Crystals:</b> You can use crystals to enhance your intuition, clarity, protection, or healing when communicating with animals. You can also use crystals to send or receive energy to or from animals.</li>
-</ul>
-<p>These tools and techniques are not necessary for animal communication, but they can be helpful for beginners or as a support for your intuition. You can experiment with different tools and techniques and find what works best for you and the animals you communicate with.</p>
-<h2>How to practice animal communication in different situations</h2>
-<h3>How to connect with your own pets or domestic animals</h3>
-<p>Connecting with your own pets or domestic animals is a great way to start practicing animal communication. They are usually familiar with you and willing to communicate with you. Here are some steps you can follow to connect with them:</p>
-<ol>
-<li><b>Set your intention:</b> Before you communicate with your pet, set your intention for the communication. For example, you may want to ask them how they are feeling, what they need, or what they like. You may also want to tell them something important, such as a change in your schedule, a visit to the vet, or a new family member. Be clear and positive about your intention and ask for their permission to communicate.</li>
-<li><b>Create a connection:</b> Next, create a connection with your pet by looking into their eyes, touching their body, or holding their photo or object. Breathe deeply and calmly and tune into their energy. Imagine that you are sending them love and gratitude from your heart. You can also say their name mentally or aloud and invite them to communicate with you.</li>
-<li><b>Send and receive messages:</b> Then, send and receive messages with your pet using your preferred method of communication. You can use images, emotions, thoughts, sensations, impressions, or intentions. You can also use words or sounds if you feel comfortable. Be open and attentive to what they are sending you and acknowledge their messages. You can also ask them questions or give them feedback. Remember to be respectful and compassionate in your communication.</li>
-<li><b>Close the communication:</b> Finally, close the communication by thanking your pet for their time and cooperation. You can also give them a hug, a treat, or a compliment. Then, disconnect from their energy by taking a deep breath and shaking off any excess energy. You can also write down or record your communication for future reference.</li>
-</ol>
-<h3>How to connect with wild animals or animals in nature</h3>
-<p>Connecting with wild animals or animals in nature is a more challenging but rewarding form of animal communication. They are usually less familiar with humans and may have different needs and preferences than domestic animals. Here are some steps you can follow to connect with them:</p>
-<ol>
-<li><b>Select an animal:</b> Before you communicate with a wild animal, select an animal that you feel drawn to or curious about. You can choose an animal that you see in person, in a photo, in a video, or in your imagination. You can also let the animal choose you by being open and receptive to their presence.</li>
-<li><b>Set your intention:</b> Next, set your intention for the communication. For example, you may want to learn more about their life, behavior or culture. You may also want to express your admiration, appreciation, or support for them. Be clear and positive about your intention and ask for their permission to communicate.</li>
-<li><b>Create a connection:</b> Then, create a connection with the animal by looking at them, sending them a mental image of yourself, or holding their photo or object. Breathe deeply and calmly and tune into their energy. Imagine that you are sending them love and respect from your heart. You can also say their name or species mentally or aloud and invite them to communicate with you.</li>
-<li><b>Send and receive messages:</b> Next, send and receive messages with the animal using your preferred method of communication. You can use images, emotions, thoughts, sensations, impressions, or intentions. You can also use words or sounds if you feel comfortable. Be open and attentive to what they are sending you and acknowledge their messages. You can also ask them questions or give them feedback. Remember to be respectful and compassionate in your communication.</li>
-<li><b>Close the communication:</b> Finally, close the communication by thanking the animal for their time and cooperation. You can also give them a blessing, a prayer, or a gift. Then, disconnect from their energy by taking a deep breath and shaking off any excess energy. You can also write down or record your communication for future reference.</li>
-</ol>
-<h3>How to connect with animals in distress or need</h3>
-<p>Connecting with animals in distress or need is a more sensitive and delicate form of animal communication. They are usually suffering from physical or emotional pain, trauma, fear, or loss. They may also be in danger, captivity, or abuse. Here are some steps you can follow to connect with them:</p>
-<ol>
-<li><b>Select an animal:</b> Before you communicate with an animal in distress or need, select an animal that you feel compassion for or want to help. You can choose an animal that you see in person, in a photo, in a video, or in your imagination. You can also let the animal choose you by being open and receptive to their call.</li>
-<li><b>Set your intention:</b> Next, set your intention for the communication. For example, you may want to offer them comfort, healing, guidance, or assistance. You may also want to listen to their story, understand their situation, or advocate for their rights. Be clear and positive about your intention and ask for their permission to communicate.</li>
-<li><b>Create a connection:</b> Then, create a connection with the animal by looking at them, sending them a mental image of yourself, or holding their photo or object. Breathe deeply and calmly and tune into their energy. Imagine that you are sending them love and compassion from your heart. You can also say their name or species mentally or aloud and invite them to communicate with you.</li>
-<li><b>Send and receive messages:</b> Next, send and receive messages with the animal using your preferred method of communication. You can use images, emotions, thoughts, sensations, impressions, or intentions. You can also use words or sounds if you feel comfortable. Be open and attentive to what they are sending you and acknowledge their messages. You can also ask them questions or give them feedback. Remember to be respectful and compassionate in your communication.</li>
-<li><b>Close the communication:</b> Finally, close the communication by thanking the animal for their time and cooperation. You can also give them a hug, a kiss, or a gesture of support. Then, disconnect from their energy by taking a deep breath and shaking off any excess energy. You can also write down or record your communication for future reference.</li>
-</ol>
-<h2>How to improve your animal communication abilities</h2>
-<h3>The tips and resources you can follow</h3>
-<p>To improve your animal communication abilities, you need to practice regularly and learn from your experiences. Here are some tips and resources you can follow to enhance your skills:</p>
-<ul>
-<li><b>Practice with different animals:</b> Try to communicate with different animals of different species, personalities, backgrounds and situations. This will help you expand your knowledge, awareness, and sensitivity to different animal perspectives and needs.</li>
-<li><b>Practice with feedback:</b> Try to communicate with animals that can give you feedback on your communication, such as your own pets, animal communicators, or animal shelters. This will help you verify your accuracy, improve your confidence, and correct your mistakes.</li>
-<li><b>Practice with fun:</b> Try to communicate with animals that can make you laugh, smile, or enjoy yourself, such as funny, playful, or cute animals. This will help you relax, have fun, and create a positive bond with animals.</li>
-<li><b>Read books and blogs:</b> There are many books and blogs that can teach you more about animal communication, such as <i>Animal Speak</i> by Ted Andrews, <i>The Language of Animals</i> by Carol Gurney, or <i>The Animal Communicator's Guide Through Life, Loss and Love</i> by Pea Horsley. You can also read stories and testimonials from other animal communicators and learn from their experiences.</li>
-<li><b>Watch videos and podcasts:</b> There are many videos and podcasts that can show you how animal communication works, such as <i>The Animal Communicator</i> by Anna Breytenbach, <i>The Pet Psychic</i> by Sonya Fitzpatrick, or <i>The Animal Intuitive Show</i> by Anne Angelo Webb. You can also watch interviews and demonstrations from other animal communicators and see their techniques.</li>
-<li><b>Take courses and workshops:</b> There are many courses and workshops that can help you learn and practice animal communication, such as <i>The Animal Communication Mastery Program</i> by Danielle MacKinnon, <i>The Animal Communication Online Course</i> by James French, or <i>The Animal Communication Workshop</i> by Penelope Smith. You can also join online or offline communities and groups of animal communicators and get support and guidance.</li>
-</ul>
-<h3>The common mistakes and pitfalls you can avoid</h3>
-<p>To improve your animal communication abilities, you also need to avoid some common mistakes and pitfalls that can hinder your progress or harm your relationship with animals. Some of these are:</p>
-<ul>
-<li><b>Expecting too much:</b> Don't expect to communicate with animals perfectly or instantly. Animal communication is a skill that takes time and practice to master. Be patient and gentle with yourself and the animals you communicate with.</li>
-<li><b>Doubting yourself:</b> Don't doubt your intuition or abilities. Animal communication is a natural and innate skill that everyone has. Trust your gut feelings and impressions and don't let your logical mind interfere.</li>
-<li><b>Imposing yourself:</b> Don't impose your communication on animals without their consent or interest. Animal communication is a mutual exchange that requires respect and cooperation. Always ask for permission before you communicate and respect their choice if they decline or end the communication.</li>
-<li><b>Projecting yourself:</b> Don't project your own thoughts, feelings, or beliefs onto animals. Animal communication is a way of understanding animals as they are, not as we want them to be. Be open and curious about their point of view and don't judge or criticize them.</li>
-<li><b>Misinterpreting them:</b> Don't misinterpret the messages or signals that animals send you. Animal communication is a complex and subtle process that involves different levels of meaning and context. Be careful not to jump to conclusions or make assumptions based on your own perspective or experience.</li>
-</ul>
-<h2>Conclusion and FAQs</h2>
-<p>In conclusion, animal communication is a wonderful way of connecting with animals on a deeper and more meaningful level. It can help us understand them better, improve our relationship with them, benefit our health and well-being, foster a deeper connection with nature, and learn from their wisdom and insights.</p>
-<p>To communicate with animals effectively, we need to prepare ourselves by developing some skills and qualities, using some tools and techniques, and practicing in different situations. We also need to improve our abilities by following some tips and resources, and avoiding some common mistakes and pitfalls.</p>
-<p>If you are interested in learning more about animal communication, here are some frequently asked questions and answers that may help you:</p>
-<h4>Q: Can anyone communicate with animals?</h4>
-<p>A: Yes, anyone can communicate with animals, as it is a natural and innate skill that we all have. However, some people may have more natural talent or affinity for it than others, and some people may need more training or practice to develop it.</p>
-<h4>Q: How can I tell if an animal is communicating with me?</h4>
-<p>A: You can tell if an animal is communicating with you by paying attention to your intuition and the signs that they are sending you. Some signs may include eye contact, body language, facial expressions, sounds, or behaviors. You may also receive messages from them in the form of images, emotions, thoughts, sensations, impressions, or intentions in your mind or heart.</p>
-<h4>Q: How can I verify the accuracy of my communication?</h4>
-<p>A: You can verify the accuracy of your communication by asking for feedback from the animal or from other sources. For example, you can ask the animal to confirm or clarify their message by sending you a sign or a signal. You can also ask other people who know the animal well or have access to their information to validate your communication.</p>
-<h4>Q: How can I protect myself from negative or harmful energies when communicating with animals?</h4>
-<p>A: You can protect yourself from negative or harmful energies when communicating with animals by setting boundaries, shielding yourself, and cleansing yourself. For example, you can set boundaries by asking for permission before you communicate and respecting the animal's choice if they decline or end the communication. You can shield yourself by imagining a protective bubble or a white light around you and the animal. You can cleanse yourself by taking a shower, using salt water, burning sage, or meditating after the communication.</p>
-<h4>Q: How can I communicate with animals who have passed away?</h4>
-<p>A: You can communicate with animals who have passed away by using the same methods and techniques as you would with living animals. However, you may need to adjust your frequency and vibration to match theirs, as they are in a different realm or dimension. You may also need to be more patient and respectful, as they may have different rules or preferences than living animals.</p>
-<p>I hope this article has helped you learn more about animal communication and how to connect with animals. If you have any questions or comments, please feel free to contact me. Thank you for reading and happy communicating!</p> 197e85843d<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/Download Simcity Buildit Hack APK and Enjoy the Game with No Limits.md
DELETED
@@ -1,105 +0,0 @@
-
-<h1>How to Download SimCity BuildIt Hack</h1>
-<p>SimCity BuildIt is a popular mobile game that allows you to create and manage your own city. You can build various types of buildings, such as residential zones, factories, shops, parks, landmarks, and more. You can also provide services to your citizens, such as power, water, sewage, waste management, fire, police, health, education, transportation, entertainment, etc. You can also participate in club wars, contests of mayors, event tracks, design challenges, and other activities.</p>
-<h2>how to download simcity buildit hack</h2><br /><p><b><b>Download File</b> ☆☆☆ <a href="https://jinyurl.com/2uNLfT">https://jinyurl.com/2uNLfT</a></b></p><br /><br />
-<p>The game is free to play, but it also has some in-game currencies that you can use to speed up your progress or unlock special features. These currencies are simoleons (the basic money), simcash (the premium money), golden keys (used to unlock specializations), platinum keys (used to unlock mayor's pass buildings), neosimoleons (used in omega zones), war simoleons (used in club wars), regional simoleons (used in regions), and design simoleons (used in design challenges).</p>
-<p>However, earning these currencies can be time-consuming and challenging. You may need to complete various tasks, participate in events, trade with other players, or spend real money to get them. This can make the game frustrating or boring for some players who want to enjoy the game without limitations. That's why some players may want to use a hack or mod apk for SimCity BuildIt.</p>
-<h2>What is SimCity BuildIt Hack?</h2>
-<p>A hack or mod apk is a modified version of the original game that gives you access to unlimited resources or other advantages. For example, a hack or mod apk for SimCity BuildIt may allow you to get unlimited money, golden keys, platinum keys, neosimoleons, war simoleons, regional simoleons, design simoleons, or other resources. It may also allow you to unlock all the buildings, services, specializations, regions, etc. It may also give you other features such as faster production speed, instant upgrade completion, unlimited storage capacity, etc.</p>
-<p>Using a hack or mod apk for SimCity BuildIt can make the game easier and more fun for you. You can build your dream city without worrying about running out of resources or waiting for long hours. You can also experiment with different designs and layouts without any restrictions. You can also dominate the club wars and contests of mayors with your powerful city.</p>
-<p>how to get simcity buildit hack tool<br />
-how to install simcity buildit hack apk<br />
-how to use simcity buildit hack and cheats tool<br />
-how to download simcity buildit hack for android<br />
-how to download simcity buildit hack for ios<br />
-how to download simcity buildit hack for pc<br />
-how to download simcity buildit hack no survey<br />
-how to download simcity buildit hack no human verification<br />
-how to download simcity buildit hack without root<br />
-how to download simcity buildit hack without jailbreak<br />
-how to download simcity buildit hack online<br />
-how to download simcity buildit hack offline<br />
-how to download simcity buildit hack 2023<br />
-how to download simcity buildit hack latest version<br />
-how to download simcity buildit hack mod apk<br />
-how to download simcity buildit hack unlimited money<br />
-how to download simcity buildit hack unlimited simcash<br />
-how to download simcity buildit hack unlimited keys<br />
-how to download simcity buildit hack free resources<br />
-how to download simcity buildit hack generator<br />
-how to download simcity buildit hack reddit<br />
-how to download simcity buildit hack youtube<br />
-how to download simcity buildit hack video tutorial<br />
-how to download simcity buildit hack step by step guide<br />
-how to download simcity buildit hack easy method<br />
-how to download simcity buildit hack working 100%<br />
-how to download simcity buildit hack safe and secure<br />
-how to download simcity buildit hack legal and legit<br />
-how to download simcity buildit hack from official website<br />
-how to download simcity buildit hack from trusted source<br />
-how to download simcity buildit hack from apkcombo.com[^3^]<br />
-how to download simcity buildit hack from reddit.com[^1^] [^2^]<br />
-how to download simcity buildit hack from newscientist.com<br />
-how to download simcity buildit hack from the-sun.com[^3^] <br />
-how to download simcity buildit hack from yahoo.com[^1^]<br />
-how to download simcity buildit hack with proof of success<br />
-how to download simcity buildit hack with positive reviews<br />
-how to download simcity buildit hack with customer support<br />
-how to download simcity buildit hack with updates and patches<br />
-how to download simcity buildit hack with bonus features and tips<br />
-how to download simcity buildit hack with no ads and malware<br />
-how to download simcity buildit hack with no errors and bugs<br />
-how to download simcity buildit hack with no password and activation code<br />
-how to download simcity buildit hack with no viruses and spyware<br />
-how to download simcity buildit hack with no risks and bans</p>
-<h3>How to Get Unlimited Money, Golden Keys, and Other Resources</h3>
-<p>If you want <p>If you want to get unlimited money, golden keys, and other resources in SimCity BuildIt, you will need to download and install a hack or mod apk for the game. Here are the steps you need to follow:</p>
-<ol>
-<li>Find a reliable source for downloading the hack or mod apk. You can search online for websites or forums that offer SimCity BuildIt hacks or mod apks. Make sure to read the reviews and feedback from other users to avoid downloading any viruses or malware.</li>
-<li>Download the hack or mod apk file to your device. You may need to enable the option to install apps from unknown sources in your device settings. You may also need to disable any antivirus or security software that may interfere with the installation.</li>
-<li>Install the hack or mod apk file on your device. Follow the instructions on the screen to complete the installation. You may need to grant some permissions to the app to access your device data.</li>
-<li>Launch the hack or mod apk app and enjoy the game. You should see a menu or a button that allows you to activate the hack or mod features. You can then start playing the game with unlimited resources and other advantages.</li>
-</ol>
-<h4>Tips and Tricks for Using SimCity BuildIt Hack</h4>
-<p>Using a hack or mod apk for SimCity BuildIt can be fun and exciting, but it can also be risky and problematic. Here are some tips and tricks for using the hack or mod apk effectively:</p>
-<ul>
-<li>Plan ahead. Before you start building your city, think about what kind of city you want to create. Do you want a modern metropolis, a green eco-city, a futuristic omega city, or a regional city? Choose your buildings, services, specializations, and regions accordingly.</li>
-<li>Keep your city clean. Even if you have unlimited resources, you still need to provide adequate services to your citizens. Make sure to have enough power, water, sewage, waste management, fire, police, health, education, transportation, entertainment, etc. for your population. Avoid placing polluting buildings near residential zones.</li>
-<li>Look out for deals. Sometimes, you may get offers from other players or NPCs to buy or sell resources or items. These deals can be beneficial if you want to get rid of excess resources or get some rare items.</li>
-</ul>
-<h4>Risks and Drawbacks of Using SimCity BuildIt Hack</h4>
-<p>Using a hack or mod apk for SimCity BuildIt can also have some potential risks and drawbacks. Here are some of them:</p>
-<ul>
-<li>Getting banned. The game developers may detect that you are using a hack or mod apk and ban your account from playing the game. This can result in losing your progress and achievements.</li>
-<li>Losing progress. The hack or mod apk may not be compatible with the latest version of the game or your device. This can cause the game to crash or freeze, and you may lose your progress or data.</li>
-<li>Encountering bugs. The hack or mod apk may not work properly or have some errors or glitches. This can affect the gameplay quality and experience.</li>
-</ul>
-<h3>How to Play SimCity BuildIt Without Hack</h3>
-<p>If you don't want to use a hack or mod apk for SimCity BuildIt, you can still play the game without them. You can enjoy the game's challenges and rewards by playing it legitimately and fairly. Here are some ways to play SimCity BuildIt without hack:</p>
-<h4>How to Earn Money, Golden Keys, and Other Resources Legally</h4>
-<p>You can earn money, golden keys, and other resources in SimCity BuildIt by completing various tasks, participating in events, and trading with other players. Here are some examples:</p>
-<ul>
-<li>Complete tasks. You can complete tasks given by your advisors, such as building certain types of buildings, providing certain services, collecting taxes, etc. These tasks will reward you with money, golden keys, simcash, etc.</li>
-<li>Participate in events. You can participate in events such as club wars, contests of mayors, event tracks, design challenges, etc. These events will reward you with money, golden keys, platinum keys, neosimoleons, war simoleons, regional simoleons, design simoleons, etc.</li>
-<li>Trade with other players. You can trade with other players through the global trade headquarters or through your club chat. You can sell your excess resources or items for money or buy resources or items that you need from other players.</li>
-</ul>
-<h4>How to Build the Ultimate City with SimCity BuildIt Tips and Cheats</h4>
-<p>You can build the ultimate city in SimCity BuildIt by following some proven tips and cheats that will help you optimize your city's performance <p>You can build the ultimate city in SimCity BuildIt by following some proven tips and cheats that will help you optimize your city's performance and appearance. Here are some examples:</p>
-<ul>
-<li>Expand your population. You can expand your population by building more residential zones and upgrading them. You can also increase your population by providing them with parks, landmarks, specializations, etc. A higher population will give you more taxes and happiness.</li>
-<li>Upgrade your buildings. You can upgrade your buildings by producing and collecting items from your factories and shops. You can also use simcash or golden keys to speed up the upgrade process. Upgrading your buildings will give you more population, money, experience, etc.</li>
-<li>Unlock specializations. You can unlock specializations by earning golden keys or platinum keys from events or tasks. Specializations are buildings that provide boosts to your population, happiness, or income. Some examples of specializations are education, entertainment, gambling, landmarks, etc.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>SimCity BuildIt is a fun and addictive game that lets you create and manage your own city. You can choose to play the game with or without a hack or mod apk. A hack or mod apk can give you unlimited resources and other advantages, but it can also have some risks and drawbacks. Playing the game without a hack or mod apk can be challenging and rewarding, but it can also be frustrating and boring. Ultimately, the choice is yours. You can decide what kind of city you want to build and how you want to play the game.</p>
-<h3>FAQs</h3>
-<p>Here are some frequently asked questions and answers about SimCity BuildIt hack:</p>
-<ol>
-<li>Q: Is SimCity BuildIt hack safe to use?<br>A: SimCity BuildIt hack may not be safe to use, as it may contain viruses or malware that can harm your device or data. It may also be detected by the game developers and result in a ban from playing the game.</li>
-<li>Q: How do I update SimCity BuildIt hack?<br>A: SimCity BuildIt hack may not be compatible with the latest version of the game or your device. You may need to find a new source for downloading the hack or mod apk, or wait for the hack or mod apk to be updated by its developers.</li>
-<li>Q: Can I play SimCity BuildIt hack online?<br>A: SimCity BuildIt hack may not work online, as it may require an internet connection to activate the hack or mod features. It may also be detected by the game servers and result in a ban from playing the game.</li>
-<li>Q: Can I play SimCity BuildIt hack with my friends?<br>A: SimCity BuildIt hack may not allow you to play with your friends, as it may interfere with the multiplayer features of the game. It may also be unfair to other players who are playing the game legitimately.</li>
-<li>Q: Can I transfer my progress from SimCity BuildIt hack to SimCity BuildIt original?<br>A: SimCity BuildIt hack may not allow you to transfer your progress to SimCity BuildIt original, as it may have different data formats or structures. It may also result in a loss of progress or data.</li>
-</ol></p> 401be4b1e0<br />
-<br />
-<br />
spaces/3i2irg/SF-model/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: SF Model
-emoji: 🐨
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r34.py
DELETED
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "cosface"
-config.network = "r34"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "/train_tmp/glint360k"
-config.num_classes = 360232
-config.num_image = 17091657
-config.num_epoch = 20
-config.warmup_epoch = -1
-config.decay_epoch = [8, 12, 15, 18]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
spaces/AHzizi/WaifuVoiceGen/models.py
DELETED
@@ -1,533 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
-  def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
-    super().__init__()
-    filter_channels = in_channels # it needs to be removed from future version.
-    self.in_channels = in_channels
-    self.filter_channels = filter_channels
-    self.kernel_size = kernel_size
-    self.p_dropout = p_dropout
-    self.n_flows = n_flows
-    self.gin_channels = gin_channels
-
-    self.log_flow = modules.Log()
-    self.flows = nn.ModuleList()
-    self.flows.append(modules.ElementwiseAffine(2))
-    for i in range(n_flows):
-      self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
-      self.flows.append(modules.Flip())
-
-    self.post_pre = nn.Conv1d(1, filter_channels, 1)
-    self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
-    self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
-    self.post_flows = nn.ModuleList()
-    self.post_flows.append(modules.ElementwiseAffine(2))
-    for i in range(4):
-      self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
-      self.post_flows.append(modules.Flip())
-
-    self.pre = nn.Conv1d(in_channels, filter_channels, 1)
-    self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
-    self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
-    if gin_channels != 0:
-      self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
-  def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
-    x = torch.detach(x)
-    x = self.pre(x)
-    if g is not None:
-      g = torch.detach(g)
-      x = x + self.cond(g)
-    x = self.convs(x, x_mask)
-    x = self.proj(x) * x_mask
-
-    if not reverse:
-      flows = self.flows
-      assert w is not None
-
-      logdet_tot_q = 0
-      h_w = self.post_pre(w)
-      h_w = self.post_convs(h_w, x_mask)
-      h_w = self.post_proj(h_w) * x_mask
-      e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
-      z_q = e_q
-      for flow in self.post_flows:
-        z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
-        logdet_tot_q += logdet_q
-      z_u, z1 = torch.split(z_q, [1, 1], 1)
-      u = torch.sigmoid(z_u) * x_mask
-      z0 = (w - u) * x_mask
-      logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
-      logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
-
-      logdet_tot = 0
-      z0, logdet = self.log_flow(z0, x_mask)
-      logdet_tot += logdet
-      z = torch.cat([z0, z1], 1)
-      for flow in flows:
-        z, logdet = flow(z, x_mask, g=x, reverse=reverse)
-        logdet_tot = logdet_tot + logdet
-      nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
-      return nll + logq # [b]
-    else:
-      flows = list(reversed(self.flows))
-      flows = flows[:-2] + [flows[-1]] # remove a useless vflow
-      z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
-      for flow in flows:
-        z = flow(z, x_mask, g=x, reverse=reverse)
-      z0, z1 = torch.split(z, [1, 1], 1)
-      logw = z0
-      return logw
-
-
-class DurationPredictor(nn.Module):
-  def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
-    super().__init__()
-
-    self.in_channels = in_channels
-    self.filter_channels = filter_channels
-    self.kernel_size = kernel_size
-    self.p_dropout = p_dropout
-    self.gin_channels = gin_channels
-
-    self.drop = nn.Dropout(p_dropout)
-    self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
-    self.norm_1 = modules.LayerNorm(filter_channels)
-    self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
-    self.norm_2 = modules.LayerNorm(filter_channels)
-    self.proj = nn.Conv1d(filter_channels, 1, 1)
-
-    if gin_channels != 0:
-      self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
-  def forward(self, x, x_mask, g=None):
-    x = torch.detach(x)
-    if g is not None:
-      g = torch.detach(g)
-      x = x + self.cond(g)
-    x = self.conv_1(x * x_mask)
-    x = torch.relu(x)
-    x = self.norm_1(x)
-    x = self.drop(x)
-    x = self.conv_2(x * x_mask)
-    x = torch.relu(x)
-    x = self.norm_2(x)
-    x = self.drop(x)
-    x = self.proj(x * x_mask)
-    return x * x_mask
-
-
-class TextEncoder(nn.Module):
-  def __init__(self,
-      n_vocab,
-      out_channels,
-      hidden_channels,
-      filter_channels,
-      n_heads,
-      n_layers,
-      kernel_size,
-      p_dropout):
-    super().__init__()
-    self.n_vocab = n_vocab
-    self.out_channels = out_channels
-    self.hidden_channels = hidden_channels
-    self.filter_channels = filter_channels
-    self.n_heads = n_heads
-    self.n_layers = n_layers
-    self.kernel_size = kernel_size
-    self.p_dropout = p_dropout
-
-    self.emb = nn.Embedding(n_vocab, hidden_channels)
-    nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
-
-    self.encoder = attentions.Encoder(
-      hidden_channels,
-      filter_channels,
-      n_heads,
-      n_layers,
-      kernel_size,
-      p_dropout)
-    self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-  def forward(self, x, x_lengths):
-    x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
-    x = torch.transpose(x, 1, -1) # [b, h, t]
-    x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
-    x = self.encoder(x * x_mask, x_mask)
-    stats = self.proj(x) * x_mask
-
-    m, logs = torch.split(stats, self.out_channels, dim=1)
-    return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
-  def __init__(self,
-      channels,
-      hidden_channels,
-      kernel_size,
-      dilation_rate,
-      n_layers,
-      n_flows=4,
-      gin_channels=0):
-    super().__init__()
-    self.channels = channels
-    self.hidden_channels = hidden_channels
-    self.kernel_size = kernel_size
-    self.dilation_rate = dilation_rate
-    self.n_layers = n_layers
-    self.n_flows = n_flows
-    self.gin_channels = gin_channels
-
-    self.flows = nn.ModuleList()
-    for i in range(n_flows):
-      self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
-      self.flows.append(modules.Flip())
-
-  def forward(self, x, x_mask, g=None, reverse=False):
-    if not reverse:
-      for flow in self.flows:
-        x, _ = flow(x, x_mask, g=g, reverse=reverse)
-    else:
-      for flow in reversed(self.flows):
-        x = flow(x, x_mask, g=g, reverse=reverse)
-    return x
-
-
-class PosteriorEncoder(nn.Module):
-  def __init__(self,
-      in_channels,
-      out_channels,
-      hidden_channels,
-      kernel_size,
-      dilation_rate,
-      n_layers,
-      gin_channels=0):
-    super().__init__()
-    self.in_channels = in_channels
-    self.out_channels = out_channels
-    self.hidden_channels = hidden_channels
-    self.kernel_size = kernel_size
-    self.dilation_rate = dilation_rate
-    self.n_layers = n_layers
-    self.gin_channels = gin_channels
-
-    self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
-    self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
-    self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-  def forward(self, x, x_lengths, g=None):
-    x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-    x = self.pre(x) * x_mask
-    x = self.enc(x, x_mask, g=g)
-    stats = self.proj(x) * x_mask
-    m, logs = torch.split(stats, self.out_channels, dim=1)
-    z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
-    return z, m, logs, x_mask
-
-
-class Generator(torch.nn.Module):
-  def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
-    super(Generator, self).__init__()
-    self.num_kernels = len(resblock_kernel_sizes)
-    self.num_upsamples = len(upsample_rates)
-    self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
-    resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
-    self.ups = nn.ModuleList()
-    for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
-      self.ups.append(weight_norm(
-          ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
-                          k, u, padding=(k-u)//2)))
-
-    self.resblocks = nn.ModuleList()
-    for i in range(len(self.ups)):
-      ch = upsample_initial_channel//(2**(i+1))
-      for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
-        self.resblocks.append(resblock(ch, k, d))
-
-    self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
-    self.ups.apply(init_weights)
-
-    if gin_channels != 0:
-      self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
-  def forward(self, x, g=None):
-    x = self.conv_pre(x)
-    if g is not None:
-      x = x + self.cond(g)
-
-    for i in range(self.num_upsamples):
-      x = F.leaky_relu(x, modules.LRELU_SLOPE)
-      x = self.ups[i](x)
-      xs = None
-      for j in range(self.num_kernels):
-        if xs is None:
-          xs = self.resblocks[i*self.num_kernels+j](x)
-        else:
-          xs += self.resblocks[i*self.num_kernels+j](x)
-      x = xs / self.num_kernels
-    x = F.leaky_relu(x)
-    x = self.conv_post(x)
-    x = torch.tanh(x)
-
-    return x
-
-  def remove_weight_norm(self):
-    print('Removing weight norm...')
-    for l in self.ups:
-      remove_weight_norm(l)
-    for l in self.resblocks:
-      l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
-  def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
-    super(DiscriminatorP, self).__init__()
-    self.period = period
-    self.use_spectral_norm = use_spectral_norm
-    norm_f = weight_norm if use_spectral_norm == False else spectral_norm
-    self.convs = nn.ModuleList([
-        norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
-        norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
-        norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
-        norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
-        norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
-    ])
-    self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
-  def forward(self, x):
-    fmap = []
-
-    # 1d to 2d
-    b, c, t = x.shape
-    if t % self.period != 0: # pad first
-      n_pad = self.period - (t % self.period)
-      x = F.pad(x, (0, n_pad), "reflect")
-      t = t + n_pad
-    x = x.view(b, c, t // self.period, self.period)
-
-    for l in self.convs:
-      x = l(x)
-      x = F.leaky_relu(x, modules.LRELU_SLOPE)
-      fmap.append(x)
-    x = self.conv_post(x)
-    fmap.append(x)
-
x = torch.flatten(x, 1, -1)
|
331 |
-
|
332 |
-
return x, fmap
|
333 |
-
|
334 |
-
|
335 |
-
class DiscriminatorS(torch.nn.Module):
|
336 |
-
def __init__(self, use_spectral_norm=False):
|
337 |
-
super(DiscriminatorS, self).__init__()
|
338 |
-
norm_f = weight_norm if use_spectral_norm == False else spectral_norm
|
339 |
-
self.convs = nn.ModuleList([
|
340 |
-
norm_f(Conv1d(1, 16, 15, 1, padding=7)),
|
341 |
-
norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
|
342 |
-
norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
|
343 |
-
norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
|
344 |
-
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
|
345 |
-
norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
|
346 |
-
])
|
347 |
-
self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
|
348 |
-
|
349 |
-
def forward(self, x):
|
350 |
-
fmap = []
|
351 |
-
|
352 |
-
for l in self.convs:
|
353 |
-
x = l(x)
|
354 |
-
x = F.leaky_relu(x, modules.LRELU_SLOPE)
|
355 |
-
fmap.append(x)
|
356 |
-
x = self.conv_post(x)
|
357 |
-
fmap.append(x)
|
358 |
-
x = torch.flatten(x, 1, -1)
|
359 |
-
|
360 |
-
return x, fmap
|
361 |
-
|
362 |
-
|
363 |
-
class MultiPeriodDiscriminator(torch.nn.Module):
|
364 |
-
def __init__(self, use_spectral_norm=False):
|
365 |
-
super(MultiPeriodDiscriminator, self).__init__()
|
366 |
-
periods = [2,3,5,7,11]
|
367 |
-
|
368 |
-
discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
|
369 |
-
discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
|
370 |
-
self.discriminators = nn.ModuleList(discs)
|
371 |
-
|
372 |
-
def forward(self, y, y_hat):
|
373 |
-
y_d_rs = []
|
374 |
-
y_d_gs = []
|
375 |
-
fmap_rs = []
|
376 |
-
fmap_gs = []
|
377 |
-
for i, d in enumerate(self.discriminators):
|
378 |
-
y_d_r, fmap_r = d(y)
|
379 |
-
y_d_g, fmap_g = d(y_hat)
|
380 |
-
y_d_rs.append(y_d_r)
|
381 |
-
y_d_gs.append(y_d_g)
|
382 |
-
fmap_rs.append(fmap_r)
|
383 |
-
fmap_gs.append(fmap_g)
|
384 |
-
|
385 |
-
return y_d_rs, y_d_gs, fmap_rs, fmap_gs
|
386 |
-
|
387 |
-
|
388 |
-
|
389 |
-
class SynthesizerTrn(nn.Module):
|
390 |
-
"""
|
391 |
-
Synthesizer for Training
|
392 |
-
"""
|
393 |
-
|
394 |
-
def __init__(self,
|
395 |
-
n_vocab,
|
396 |
-
spec_channels,
|
397 |
-
segment_size,
|
398 |
-
inter_channels,
|
399 |
-
hidden_channels,
|
400 |
-
filter_channels,
|
401 |
-
n_heads,
|
402 |
-
n_layers,
|
403 |
-
kernel_size,
|
404 |
-
p_dropout,
|
405 |
-
resblock,
|
406 |
-
resblock_kernel_sizes,
|
407 |
-
resblock_dilation_sizes,
|
408 |
-
upsample_rates,
|
409 |
-
upsample_initial_channel,
|
410 |
-
upsample_kernel_sizes,
|
411 |
-
n_speakers=0,
|
412 |
-
gin_channels=0,
|
413 |
-
use_sdp=True,
|
414 |
-
**kwargs):
|
415 |
-
|
416 |
-
super().__init__()
|
417 |
-
self.n_vocab = n_vocab
|
418 |
-
self.spec_channels = spec_channels
|
419 |
-
self.inter_channels = inter_channels
|
420 |
-
self.hidden_channels = hidden_channels
|
421 |
-
self.filter_channels = filter_channels
|
422 |
-
self.n_heads = n_heads
|
423 |
-
self.n_layers = n_layers
|
424 |
-
self.kernel_size = kernel_size
|
425 |
-
self.p_dropout = p_dropout
|
426 |
-
self.resblock = resblock
|
427 |
-
self.resblock_kernel_sizes = resblock_kernel_sizes
|
428 |
-
self.resblock_dilation_sizes = resblock_dilation_sizes
|
429 |
-
self.upsample_rates = upsample_rates
|
430 |
-
self.upsample_initial_channel = upsample_initial_channel
|
431 |
-
self.upsample_kernel_sizes = upsample_kernel_sizes
|
432 |
-
self.segment_size = segment_size
|
433 |
-
self.n_speakers = n_speakers
|
434 |
-
self.gin_channels = gin_channels
|
435 |
-
|
436 |
-
self.use_sdp = use_sdp
|
437 |
-
|
438 |
-
self.enc_p = TextEncoder(n_vocab,
|
439 |
-
inter_channels,
|
440 |
-
hidden_channels,
|
441 |
-
filter_channels,
|
442 |
-
n_heads,
|
443 |
-
n_layers,
|
444 |
-
kernel_size,
|
445 |
-
p_dropout)
|
446 |
-
self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
|
447 |
-
self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
|
448 |
-
self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
|
449 |
-
|
450 |
-
if use_sdp:
|
451 |
-
self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
|
452 |
-
else:
|
453 |
-
self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
|
454 |
-
|
455 |
-
if n_speakers > 1:
|
456 |
-
self.emb_g = nn.Embedding(n_speakers, gin_channels)
|
457 |
-
|
458 |
-
def forward(self, x, x_lengths, y, y_lengths, sid=None):
|
459 |
-
|
460 |
-
x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
|
461 |
-
if self.n_speakers > 0:
|
462 |
-
g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
|
463 |
-
else:
|
464 |
-
g = None
|
465 |
-
|
466 |
-
z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
|
467 |
-
z_p = self.flow(z, y_mask, g=g)
|
468 |
-
|
469 |
-
with torch.no_grad():
|
470 |
-
# negative cross-entropy
|
471 |
-
s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
|
472 |
-
neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
|
473 |
-
neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
|
474 |
-
neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
|
475 |
-
neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
|
476 |
-
neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
|
477 |
-
|
478 |
-
attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
|
479 |
-
attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
|
480 |
-
|
481 |
-
w = attn.sum(2)
|
482 |
-
if self.use_sdp:
|
483 |
-
l_length = self.dp(x, x_mask, w, g=g)
|
484 |
-
l_length = l_length / torch.sum(x_mask)
|
485 |
-
else:
|
486 |
-
logw_ = torch.log(w + 1e-6) * x_mask
|
487 |
-
logw = self.dp(x, x_mask, g=g)
|
488 |
-
l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging
|
489 |
-
|
490 |
-
# expand prior
|
491 |
-
m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
|
492 |
-
logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
|
493 |
-
|
494 |
-
z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
|
495 |
-
o = self.dec(z_slice, g=g)
|
496 |
-
return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
|
497 |
-
|
498 |
-
def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
|
499 |
-
x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
|
500 |
-
if self.n_speakers > 0:
|
501 |
-
g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
|
502 |
-
else:
|
503 |
-
g = None
|
504 |
-
|
505 |
-
if self.use_sdp:
|
506 |
-
logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
|
507 |
-
else:
|
508 |
-
logw = self.dp(x, x_mask, g=g)
|
509 |
-
w = torch.exp(logw) * x_mask * length_scale
|
510 |
-
w_ceil = torch.ceil(w)
|
511 |
-
y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
|
512 |
-
y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
|
513 |
-
attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
|
514 |
-
attn = commons.generate_path(w_ceil, attn_mask)
|
515 |
-
|
516 |
-
m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
|
517 |
-
logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
|
518 |
-
|
519 |
-
z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
|
520 |
-
z = self.flow(z_p, y_mask, g=g, reverse=True)
|
521 |
-
o = self.dec((z * y_mask)[:,:,:max_len], g=g)
|
522 |
-
return o, attn, y_mask, (z, z_p, m_p, logs_p)
|
523 |
-
|
524 |
-
def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
|
525 |
-
assert self.n_speakers > 0, "n_speakers have to be larger than 0."
|
526 |
-
g_src = self.emb_g(sid_src).unsqueeze(-1)
|
527 |
-
g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
|
528 |
-
z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
|
529 |
-
z_p = self.flow(z, y_mask, g=g_src)
|
530 |
-
z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
|
531 |
-
o_hat = self.dec(z_hat * y_mask, g=g_tgt)
|
532 |
-
return o_hat, y_mask, (z, z_p, z_hat)
|
533 |
-
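For orientation, a minimal inference sketch against the SynthesizerTrn removed above. This is not part of the commit; all hyperparameter values are illustrative placeholders and the import assumes the file was importable as models.py.

# Hypothetical inference sketch; values below are illustrative, not from this repo.
import torch
from models import SynthesizerTrn

net_g = SynthesizerTrn(
    n_vocab=100, spec_channels=513, segment_size=32,
    inter_channels=192, hidden_channels=192, filter_channels=768,
    n_heads=2, n_layers=6, kernel_size=3, p_dropout=0.1,
    resblock='1',
    resblock_kernel_sizes=[3, 7, 11],
    resblock_dilation_sizes=[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    upsample_rates=[8, 8, 2, 2],
    upsample_initial_channel=512,
    upsample_kernel_sizes=[16, 16, 4, 4],
).eval()

x = torch.randint(1, 100, (1, 20))         # dummy phoneme ids, [b, t]
x_lengths = torch.LongTensor([x.size(1)])
with torch.no_grad():
    audio, attn, y_mask, _ = net_g.infer(
        x, x_lengths, noise_scale=.667, length_scale=1.0, noise_scale_w=0.8)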
spaces/AIConsultant/MusicGen/audiocraft/grids/compression/encodec_audiogen_16khz.py
DELETED
@@ -1,29 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

"""
Grid search file, simply list all the exp you want in `explorer`.
Any new exp added there will be scheduled.
You can cancel an experiment by commenting its line.

This grid shows how to train the new AudioGen EnCodec model at 16 kHz.
"""

from ._explorers import CompressionExplorer
from ...environment import AudioCraftEnvironment


@CompressionExplorer
def explorer(launcher):
    partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global'])
    launcher.slurm_(gpus=8, partition=partitions)
    # use configuration for AudioGen's EnCodec model trained on monophonic audio sampled at 16 kHz
    # AudioGen's EnCodec is trained with a total stride of 320 leading to a frame rate of 50 hz
    launcher.bind_(solver='compression/encodec_audiogen_16khz')
    # replace this by the desired sound dataset
    launcher.bind_(dset='internal/sounds_16khz')
    # launch xp
    launcher()
spaces/AIGC-Audio/AudioGPT/NeuralSeq/vocoders/pwg.py
DELETED
@@ -1,137 +0,0 @@
import glob
import re
import librosa
import torch
import yaml
from sklearn.preprocessing import StandardScaler
from torch import nn
from modules.parallel_wavegan.models import ParallelWaveGANGenerator
from modules.parallel_wavegan.utils import read_hdf5
from utils.hparams import hparams
from utils.pitch_utils import f0_to_coarse
from vocoders.base_vocoder import BaseVocoder, register_vocoder
import numpy as np


def load_pwg_model(config_path, checkpoint_path, stats_path):
    # load config
    with open(config_path) as f:
        config = yaml.load(f, Loader=yaml.Loader)

    # setup
    if torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")
    model = ParallelWaveGANGenerator(**config["generator_params"])

    ckpt_dict = torch.load(checkpoint_path, map_location="cpu")
    if 'state_dict' not in ckpt_dict:  # official vocoder
        model.load_state_dict(torch.load(checkpoint_path, map_location="cpu")["model"]["generator"])
        scaler = StandardScaler()
        if config["format"] == "hdf5":
            scaler.mean_ = read_hdf5(stats_path, "mean")
            scaler.scale_ = read_hdf5(stats_path, "scale")
        elif config["format"] == "npy":
            scaler.mean_ = np.load(stats_path)[0]
            scaler.scale_ = np.load(stats_path)[1]
        else:
            raise ValueError("support only hdf5 or npy format.")
    else:  # custom PWG vocoder
        fake_task = nn.Module()
        fake_task.model_gen = model
        fake_task.load_state_dict(torch.load(checkpoint_path, map_location="cpu")["state_dict"], strict=False)
        scaler = None

    model.remove_weight_norm()
    model = model.eval().to(device)
    print(f"| Loaded model parameters from {checkpoint_path}.")
    print(f"| PWG device: {device}.")
    return model, scaler, config, device


@register_vocoder
class PWG(BaseVocoder):
    def __init__(self):
        if hparams['vocoder_ckpt'] == '':  # load LJSpeech PWG pretrained model
            base_dir = 'wavegan_pretrained'
            ckpts = glob.glob(f'{base_dir}/checkpoint-*steps.pkl')
            ckpt = sorted(ckpts, key=lambda x: int(re.findall(f'{base_dir}/checkpoint-(\d+)steps.pkl', x)[0]))[-1]
            config_path = f'{base_dir}/config.yaml'
            print('| load PWG: ', ckpt)
            self.model, self.scaler, self.config, self.device = load_pwg_model(
                config_path=config_path,
                checkpoint_path=ckpt,
                stats_path=f'{base_dir}/stats.h5',
            )
        else:
            base_dir = hparams['vocoder_ckpt']
            print(base_dir)
            config_path = f'{base_dir}/config.yaml'
            ckpt = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'),
                          key=lambda x: int(re.findall(f'{base_dir}/model_ckpt_steps_(\d+).ckpt', x)[0]))[-1]
            print('| load PWG: ', ckpt)
            self.scaler = None
            self.model, _, self.config, self.device = load_pwg_model(
                config_path=config_path,
                checkpoint_path=ckpt,
                stats_path=f'{base_dir}/stats.h5',
            )

    def spec2wav(self, mel, **kwargs):
        # start generation
        config = self.config
        device = self.device
        pad_size = (config["generator_params"]["aux_context_window"],
                    config["generator_params"]["aux_context_window"])
        c = mel
        if self.scaler is not None:
            c = self.scaler.transform(c)

        with torch.no_grad():
            z = torch.randn(1, 1, c.shape[0] * config["hop_size"]).to(device)
            c = np.pad(c, (pad_size, (0, 0)), "edge")
            c = torch.FloatTensor(c).unsqueeze(0).transpose(2, 1).to(device)
            p = kwargs.get('f0')
            if p is not None:
                p = f0_to_coarse(p)
                p = np.pad(p, (pad_size,), "edge")
                p = torch.LongTensor(p[None, :]).to(device)
            y = self.model(z, c, p).view(-1)
        wav_out = y.cpu().numpy()
        return wav_out

    @staticmethod
    def wav2spec(wav_fn, return_linear=False):
        from data_gen.tts.data_gen_utils import process_utterance
        res = process_utterance(
            wav_fn, fft_size=hparams['fft_size'],
            hop_size=hparams['hop_size'],
            win_length=hparams['win_size'],
            num_mels=hparams['audio_num_mel_bins'],
            fmin=hparams['fmin'],
            fmax=hparams['fmax'],
            sample_rate=hparams['audio_sample_rate'],
            loud_norm=hparams['loud_norm'],
            min_level_db=hparams['min_level_db'],
            return_linear=return_linear, vocoder='pwg', eps=float(hparams.get('wav2spec_eps', 1e-10)))
        if return_linear:
            return res[0], res[1].T, res[2].T  # [T, 80], [T, n_fft]
        else:
            return res[0], res[1].T

    @staticmethod
    def wav2mfcc(wav_fn):
        fft_size = hparams['fft_size']
        hop_size = hparams['hop_size']
        win_length = hparams['win_size']
        sample_rate = hparams['audio_sample_rate']
        wav, _ = librosa.core.load(wav_fn, sr=sample_rate)
        mfcc = librosa.feature.mfcc(y=wav, sr=sample_rate, n_mfcc=13,
                                    n_fft=fft_size, hop_length=hop_size,
                                    win_length=win_length, pad_mode="constant", power=1.0)
        mfcc_delta = librosa.feature.delta(mfcc, order=1)
        mfcc_delta_delta = librosa.feature.delta(mfcc, order=2)
        mfcc = np.concatenate([mfcc, mfcc_delta, mfcc_delta_delta]).T
        return mfcc
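As a reading aid, a sketch of how this vocoder class is typically driven. It is not part of the removed file; it assumes `hparams` has been populated by the repo's config loader (the config path is hypothetical) and that a checkpoint directory exists.

# Hypothetical usage sketch; depends on a configured `hparams` and checkpoints.
from utils.hparams import set_hparams
from vocoders.pwg import PWG

set_hparams('configs/my_config.yaml')   # hypothetical config path
vocoder = PWG()                         # loads the newest checkpoint it finds
wav, mel = PWG.wav2spec('sample.wav')   # waveform plus [T, n_mels] mel
wav_out = vocoder.spec2wav(mel)         # mel back to a waveform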
spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/linear_probe.py
DELETED
@@ -1,63 +0,0 @@
import numpy as np
import torch.nn.functional as F
from torch import nn
from .model import MLPLayers


class LinearProbe(nn.Module):
    def __init__(self, model, mlp, freeze, in_ch, out_ch, act=None):
        """
        Args:
            model: nn.Module
            mlp: bool, if True, then use the MLP layer as the linear probe module
            freeze: bool, if True, then freeze all the CLAP model's layers when training the linear probe
            in_ch: int, the output channel from CLAP model
            out_ch: int, the output channel from linear probe (class_num)
            act: torch.nn.functional, the activation function before the loss function
        """
        super().__init__()
        in_ch = 512
        self.clap_model = model
        self.clap_model.text_branch = None  # to save memory
        self.freeze = freeze
        if mlp:
            self.lp_layer = MLPLayers(units=[in_ch, in_ch * 2, out_ch])
        else:
            self.lp_layer = nn.Linear(in_ch, out_ch)

        if self.freeze:
            for param in self.clap_model.parameters():
                param.requires_grad = False

        if act == 'None':
            self.act = None
        elif act == 'relu':
            self.act = nn.ReLU()
        elif act == 'elu':
            self.act = nn.ELU()
        elif act == 'prelu':
            self.act = nn.PReLU(num_parameters=in_ch)
        elif act == 'softmax':
            self.act = nn.Softmax(dim=-1)
        elif act == 'sigmoid':
            self.act = nn.Sigmoid()

    def forward(self, x, mix_lambda=None, device=None):
        """
        Args:
            x: waveform, torch.tensor [batch, t_samples] / batch of mel_spec and longer list
            mix_lambda: torch.tensor [batch], the mixup lambda
        Returns:
            class_prob: torch.tensor [batch, class_num]

        """
        # batchnorm cancel gradient
        if self.freeze:
            self.clap_model.eval()

        x = self.clap_model.audio_projection(
            self.clap_model.audio_branch(x, mixup_lambda=mix_lambda, device=device)["embedding"])
        out = self.lp_layer(x)
        if self.act is not None:
            out = self.act(out)
        return out
spaces/ASJMO/freegpt/g4f/README.md
DELETED
@@ -1,5 +0,0 @@
## 🚀 API G4F

This API is built upon the [gpt4free](https://github.com/xtekky/gpt4free) project.
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/click/Factory.js
DELETED
@@ -1,11 +0,0 @@
import Click from './Click.js';
import ObjectFactory from '../ObjectFactory.js';
import SetValue from '../../../plugins/utils/object/SetValue.js';

ObjectFactory.register('click', function (gameObject, config) {
    return new Click(gameObject, config);
});

SetValue(window, 'RexPlugins.UI.Click', Click);

export default Click;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/pinch/Factory.js
DELETED
@@ -1,11 +0,0 @@
import Pinch from './Pinch.js';
import ObjectFactory from '../ObjectFactory.js';
import SetValue from '../../../plugins/utils/object/SetValue.js';

ObjectFactory.register('pinch', function (config) {
    return new Pinch(this.scene, config);
});

SetValue(window, 'RexPlugins.UI.Pinch', Pinch);

export default Pinch;
spaces/AkashKhamkar/QnA-generator/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: QnA Generator
emoji: 🌖
colorFrom: purple
colorTo: pink
sdk: streamlit
sdk_version: 1.10.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AlekseyKorshuk/model-evaluation/utils.py
DELETED
@@ -1,34 +0,0 @@
import itertools
import random


def get_matchmaking(client, models, is_anonymous=True):
    model_a, model_b = random.sample(models, k=2)
    return model_a, model_b
    # NOTE: the early return above makes the sheet-based matchmaking below unreachable.
    sheet = client.open("Chat Arena").sheet1
    records = sheet.get_all_records()
    records = [
        {
            col: record.get(col, None)
            for col in ['model_a', 'model_b']
        } for record in records if record["is_anonymous"] == is_anonymous
    ]

    combinations = list(itertools.combinations_with_replacement(models, 2))
    combinations = [frozenset(combination) for combination in combinations if len(set(combination)) > 1]

    records = [
        frozenset(record.values()) for record in records
    ]

    repetitions_count = {combination: 0 for combination in combinations}

    for record in records:
        repetitions_count[record] += 1

    sorted_repetitions = dict(sorted(repetitions_count.items(), key=lambda item: item[1]))
    less_common = list(sorted_repetitions.keys())[0]
    less_common = list(less_common)
    random.shuffle(less_common)
    model_a, model_b = less_common
    return model_a, model_b
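A quick hypothetical call for context: with the early return active, `client` is never touched, so any placeholder works.

# Hypothetical call; model names are placeholders, not from the repo.
model_a, model_b = get_matchmaking(client=None, models=["model-a", "model-b", "model-c"])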
spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/run.sh
DELETED
@@ -1,2 +0,0 @@
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r50
ps -ef | grep "train" | grep -v grep | awk '{print "kill -9 "$2}' | sh
spaces/Amrrs/DragGan-Inversion/viz/__init__.py
DELETED
@@ -1,9 +0,0 @@
# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

# empty
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/latent_diffusion/__init__.py
DELETED
@@ -1,11 +0,0 @@
from ...utils import OptionalDependencyNotAvailable, is_torch_available, is_transformers_available
from .pipeline_latent_diffusion_superresolution import LDMSuperResolutionPipeline


try:
    if not (is_transformers_available() and is_torch_available()):
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    from ...utils.dummy_torch_and_transformers_objects import ShapEPipeline
else:
    from .pipeline_latent_diffusion import LDMBertModel, LDMTextToImagePipeline
spaces/Andy1621/uniformer_image_detection/tools/misc/print_config.py
DELETED
@@ -1,54 +0,0 @@
import argparse
import warnings

from mmcv import Config, DictAction


def parse_args():
    parser = argparse.ArgumentParser(description='Print the whole config')
    parser.add_argument('config', help='config file path')
    parser.add_argument(
        '--options',
        nargs='+',
        action=DictAction,
        help='override some settings in the used config, the key-value pair '
        'in xxx=yyy format will be merged into config file (deprecate), '
        'change to --cfg-options instead.')
    parser.add_argument(
        '--cfg-options',
        nargs='+',
        action=DictAction,
        help='override some settings in the used config, the key-value pair '
        'in xxx=yyy format will be merged into config file. If the value to '
        'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
        'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
        'Note that the quotation marks are necessary and that no white space '
        'is allowed.')
    args = parser.parse_args()

    if args.options and args.cfg_options:
        raise ValueError(
            '--options and --cfg-options cannot be both '
            'specified, --options is deprecated in favor of --cfg-options')
    if args.options:
        warnings.warn('--options is deprecated in favor of --cfg-options')
        args.cfg_options = args.options

    return args


def main():
    args = parse_args()

    cfg = Config.fromfile(args.config)
    if args.cfg_options is not None:
        cfg.merge_from_dict(args.cfg_options)
    # import modules from string list.
    if cfg.get('custom_imports', None):
        from mmcv.utils import import_modules_from_strings
        import_modules_from_strings(**cfg['custom_imports'])
    print(f'Config:\n{cfg.pretty_text}')


if __name__ == '__main__':
    main()
spaces/Anonymous-123/ImageNet-Editing/run.sh
DELETED
@@ -1,86 +0,0 @@
#!/bin/sh
#****************************************************************#
# ScriptName: run.sh
# Author: Anonymous_123
# Create Date: 2022-09-12 11:55
# Modify Author: Anonymous_123
# Modify Date: 2022-09-25 12:02
# Function:
#***************************************************************#

# rm -rf results
# mkdir results
# rm -rf tmp
# mkdir tmp
ls /usr/local/cuda*

# Backgrounds
bg_scale=$1 #
bg_detemined=$2 # given the input background
hard=False
if [ "$1" != "" ]; then
    if [ $1 > 0 ]; then
        hard=True
    fi
fi

# Size
size=$3

# Direction
angle=$4

# Steps
tot_steps=100
step=$5
skip_step=`expr $tot_steps - $step`

# number of generated images
num_of_Images=$6

# Background removal
cd object_removal/TFill/
python test.py \
    --name imagenet \
    --img_file ../../tmp/img/ \
    --mask_file ../../tmp/mask/ \
    --results_dir ../../results \
    --model tc \
    --coarse_or_refine refine \
    --gpu_id 0 \
    --no_shuffle \
    --batch_size 1 \
    --preprocess scale_shortside \
    --mask_type 3 \
    --load_size 512 \
    --attn_G \
    --add_noise

cd ../../
mv results/imagenet/test_latest/img_ref_out/input_0.png results/object_removal.png
rm -rf results/imagenet/

# Resize
python resize_obj.py --img_path tmp/img/input.JPEG --mask_path tmp/mask/input.png --scale $size

if [ "$2" != "" ]; then
    bg_path=$bg_detemined
else
    bg_path="../results/object_removal.png"
fi

echo "Background path: " echo $bg_path
echo "Steps: " echo $step
echo "Object pixel rate: " echo $size
echo "Object angle: " echo $angle

# Generating
cd editing_diffusion
if [ $1 > 0 ]; then
    CUDA_VISIBLE_DEVICES=0 python main.py -p "test.JPEG" -i $bg_path -i2 "../results/img_rescaled.png" --mask "../results/mask_rescaled.png" --output_path "../tmp" --batch_size 1 --skip_timesteps $skip_step --invert_mask --clip_guidance_lambda 0 --classifier_scale 0. --y 0 --final_save_root "../results/" --rotate_obj --angle $angle --background_complex $bg_scale --hard --iterations_num $num_of_Images # --coarse_to_fine #--background_preservation_loss # --vid #--clip_guidance_lambda 0
else
    CUDA_VISIBLE_DEVICES=0 python main.py -p "test.JPEG" -i $bg_path -i2 "../results/img_rescaled.png" --mask "../results/mask_rescaled.png" --output_path "../tmp" --batch_size 1 --skip_timesteps $skip_step --invert_mask --clip_guidance_lambda 0 --classifier_scale 0. --y 0 --final_save_root "../results/" --rotate_obj --angle $angle --background_complex $bg_scale --iterations_num $num_of_Images # --coarse_to_fine #--background_preservation_loss # --vid #--clip_guidance_lambda 0
fi
spaces/Arnx/MusicGenXvAKN/audiocraft/quantization/base.py
DELETED
@@ -1,107 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

"""
Base class for all quantizers.
"""

from dataclasses import dataclass, field
import typing as tp

import torch
from torch import nn


@dataclass
class QuantizedResult:
    x: torch.Tensor
    codes: torch.Tensor
    bandwidth: torch.Tensor  # bandwidth in kb/s used, per batch item.
    penalty: tp.Optional[torch.Tensor] = None
    metrics: dict = field(default_factory=dict)


class BaseQuantizer(nn.Module):
    """Base class for quantizers.
    """

    def forward(self, x: torch.Tensor, frame_rate: int) -> QuantizedResult:
        """
        Given input tensor x, returns first the quantized (or approximately quantized)
        representation along with quantized codes, bandwidth, and any penalty term for the loss.
        Finally, this returns a dict of metrics to update logging etc.
        Frame rate must be passed so that the bandwidth is properly computed.
        """
        raise NotImplementedError()

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        """Encode a given input tensor with the specified sample rate at the given bandwidth.
        """
        raise NotImplementedError()

    def decode(self, codes: torch.Tensor) -> torch.Tensor:
        """Decode the given codes to the quantized representation.
        """
        raise NotImplementedError()

    @property
    def total_codebooks(self):
        """Total number of codebooks.
        """
        raise NotImplementedError()

    @property
    def num_codebooks(self):
        """Number of active codebooks.
        """
        raise NotImplementedError()

    def set_num_codebooks(self, n: int):
        """Set the number of active codebooks.
        """
        raise NotImplementedError()


class DummyQuantizer(BaseQuantizer):
    """Fake quantizer that actually does not perform any quantization.
    """
    def __init__(self):
        super().__init__()

    def forward(self, x: torch.Tensor, frame_rate: int):
        q = x.unsqueeze(1)
        return QuantizedResult(x, q, torch.tensor(q.numel() * 32 * frame_rate / 1000 / len(x)).to(x))

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        """Encode a given input tensor with the specified sample rate at the given bandwidth.
        In the case of the DummyQuantizer, the codes are actually identical
        to the input and resulting quantized representation as no quantization is done.
        """
        return x.unsqueeze(1)

    def decode(self, codes: torch.Tensor) -> torch.Tensor:
        """Decode the given codes to the quantized representation.
        In the case of the DummyQuantizer, the codes are actually identical
        to the input and resulting quantized representation as no quantization is done.
        """
        return codes.squeeze(1)

    @property
    def total_codebooks(self):
        """Total number of codebooks.
        """
        return 1

    @property
    def num_codebooks(self):
        """Total number of codebooks.
        """
        return self.total_codebooks

    def set_num_codebooks(self, n: int):
        """Set the number of active codebooks.
        """
        raise AttributeError("Cannot override the number of codebooks for the dummy quantizer")
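For reference, a small sketch of the DummyQuantizer contract described by the docstrings above. It is not part of the removed file, and the import path is an assumption mirroring the file's location in the package.

# Hypothetical sketch; import path assumed from the file's location.
import torch
from audiocraft.quantization.base import DummyQuantizer

q = DummyQuantizer()
x = torch.randn(2, 128, 50)                    # [batch, dim, frames]
res = q(x, frame_rate=50)                      # QuantizedResult; res.x is x unchanged
assert torch.equal(res.codes, x.unsqueeze(1))  # codes just gain a codebook dimension
assert torch.equal(q.decode(q.encode(x)), x)   # encode/decode round-trips exactly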
spaces/AsakuraMizu/moe-tts/modules.py
DELETED
@@ -1,390 +0,0 @@
import copy
import math
import numpy as np
import scipy
import torch
from torch import nn
from torch.nn import functional as F

from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
from torch.nn.utils import weight_norm, remove_weight_norm

import commons
from commons import init_weights, get_padding
from transforms import piecewise_rational_quadratic_transform


LRELU_SLOPE = 0.1


class LayerNorm(nn.Module):
    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.channels = channels
        self.eps = eps

        self.gamma = nn.Parameter(torch.ones(channels))
        self.beta = nn.Parameter(torch.zeros(channels))

    def forward(self, x):
        x = x.transpose(1, -1)
        x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
        return x.transpose(1, -1)


class ConvReluNorm(nn.Module):
    def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
        super().__init__()
        self.in_channels = in_channels
        self.hidden_channels = hidden_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.n_layers = n_layers
        self.p_dropout = p_dropout
        assert n_layers > 1, "Number of layers should be larger than 0."

        self.conv_layers = nn.ModuleList()
        self.norm_layers = nn.ModuleList()
        self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
        self.norm_layers.append(LayerNorm(hidden_channels))
        self.relu_drop = nn.Sequential(
            nn.ReLU(),
            nn.Dropout(p_dropout))
        for _ in range(n_layers-1):
            self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
            self.norm_layers.append(LayerNorm(hidden_channels))
        self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
        self.proj.weight.data.zero_()
        self.proj.bias.data.zero_()

    def forward(self, x, x_mask):
        x_org = x
        for i in range(self.n_layers):
            x = self.conv_layers[i](x * x_mask)
            x = self.norm_layers[i](x)
            x = self.relu_drop(x)
        x = x_org + self.proj(x)
        return x * x_mask


class DDSConv(nn.Module):
    """
    Dilated and Depth-Separable Convolution
    """
    def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        self.n_layers = n_layers
        self.p_dropout = p_dropout

        self.drop = nn.Dropout(p_dropout)
        self.convs_sep = nn.ModuleList()
        self.convs_1x1 = nn.ModuleList()
        self.norms_1 = nn.ModuleList()
        self.norms_2 = nn.ModuleList()
        for i in range(n_layers):
            dilation = kernel_size ** i
            padding = (kernel_size * dilation - dilation) // 2
            self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
                                            groups=channels, dilation=dilation, padding=padding
                                            ))
            self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
            self.norms_1.append(LayerNorm(channels))
            self.norms_2.append(LayerNorm(channels))

    def forward(self, x, x_mask, g=None):
        if g is not None:
            x = x + g
        for i in range(self.n_layers):
            y = self.convs_sep[i](x * x_mask)
            y = self.norms_1[i](y)
            y = F.gelu(y)
            y = self.convs_1x1[i](y)
            y = self.norms_2[i](y)
            y = F.gelu(y)
            y = self.drop(y)
            x = x + y
        return x * x_mask


class WN(torch.nn.Module):
    def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
        super(WN, self).__init__()
        assert(kernel_size % 2 == 1)
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size,
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.gin_channels = gin_channels
        self.p_dropout = p_dropout

        self.in_layers = torch.nn.ModuleList()
        self.res_skip_layers = torch.nn.ModuleList()
        self.drop = nn.Dropout(p_dropout)

        if gin_channels != 0:
            cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
            self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')

        for i in range(n_layers):
            dilation = dilation_rate ** i
            padding = int((kernel_size * dilation - dilation) / 2)
            in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
                                       dilation=dilation, padding=padding)
            in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
            self.in_layers.append(in_layer)

            # last one is not necessary
            if i < n_layers - 1:
                res_skip_channels = 2 * hidden_channels
            else:
                res_skip_channels = hidden_channels

            res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
            res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
            self.res_skip_layers.append(res_skip_layer)

    def forward(self, x, x_mask, g=None, **kwargs):
        output = torch.zeros_like(x)
        n_channels_tensor = torch.IntTensor([self.hidden_channels])

        if g is not None:
            g = self.cond_layer(g)

        for i in range(self.n_layers):
            x_in = self.in_layers[i](x)
            if g is not None:
                cond_offset = i * 2 * self.hidden_channels
                g_l = g[:, cond_offset:cond_offset+2*self.hidden_channels, :]
            else:
                g_l = torch.zeros_like(x_in)

            acts = commons.fused_add_tanh_sigmoid_multiply(
                x_in,
                g_l,
                n_channels_tensor)
            acts = self.drop(acts)

            res_skip_acts = self.res_skip_layers[i](acts)
            if i < self.n_layers - 1:
                res_acts = res_skip_acts[:, :self.hidden_channels, :]
                x = (x + res_acts) * x_mask
                output = output + res_skip_acts[:, self.hidden_channels:, :]
            else:
                output = output + res_skip_acts
        return output * x_mask

    def remove_weight_norm(self):
        if self.gin_channels != 0:
            torch.nn.utils.remove_weight_norm(self.cond_layer)
        for l in self.in_layers:
            torch.nn.utils.remove_weight_norm(l)
        for l in self.res_skip_layers:
            torch.nn.utils.remove_weight_norm(l)


class ResBlock1(torch.nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
        super(ResBlock1, self).__init__()
        self.convs1 = nn.ModuleList([
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
                               padding=get_padding(kernel_size, dilation[0]))),
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
                               padding=get_padding(kernel_size, dilation[1]))),
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
                               padding=get_padding(kernel_size, dilation[2])))
        ])
        self.convs1.apply(init_weights)

        self.convs2 = nn.ModuleList([
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
                               padding=get_padding(kernel_size, 1))),
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
                               padding=get_padding(kernel_size, 1))),
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
                               padding=get_padding(kernel_size, 1)))
        ])
        self.convs2.apply(init_weights)

    def forward(self, x, x_mask=None):
        for c1, c2 in zip(self.convs1, self.convs2):
            xt = F.leaky_relu(x, LRELU_SLOPE)
            if x_mask is not None:
                xt = xt * x_mask
            xt = c1(xt)
            xt = F.leaky_relu(xt, LRELU_SLOPE)
            if x_mask is not None:
                xt = xt * x_mask
            xt = c2(xt)
            x = xt + x
        if x_mask is not None:
            x = x * x_mask
        return x

    def remove_weight_norm(self):
        for l in self.convs1:
            remove_weight_norm(l)
        for l in self.convs2:
            remove_weight_norm(l)


class ResBlock2(torch.nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
        super(ResBlock2, self).__init__()
        self.convs = nn.ModuleList([
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
                               padding=get_padding(kernel_size, dilation[0]))),
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
                               padding=get_padding(kernel_size, dilation[1])))
        ])
        self.convs.apply(init_weights)

    def forward(self, x, x_mask=None):
        for c in self.convs:
            xt = F.leaky_relu(x, LRELU_SLOPE)
            if x_mask is not None:
                xt = xt * x_mask
            xt = c(xt)
            x = xt + x
        if x_mask is not None:
            x = x * x_mask
        return x

    def remove_weight_norm(self):
        for l in self.convs:
            remove_weight_norm(l)


class Log(nn.Module):
    def forward(self, x, x_mask, reverse=False, **kwargs):
        if not reverse:
            y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
            logdet = torch.sum(-y, [1, 2])
            return y, logdet
        else:
            x = torch.exp(x) * x_mask
            return x


class Flip(nn.Module):
    def forward(self, x, *args, reverse=False, **kwargs):
        x = torch.flip(x, [1])
        if not reverse:
            logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
            return x, logdet
        else:
            return x


class ElementwiseAffine(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.channels = channels
        self.m = nn.Parameter(torch.zeros(channels, 1))
        self.logs = nn.Parameter(torch.zeros(channels, 1))

    def forward(self, x, x_mask, reverse=False, **kwargs):
        if not reverse:
            y = self.m + torch.exp(self.logs) * x
            y = y * x_mask
            logdet = torch.sum(self.logs * x_mask, [1, 2])
            return y, logdet
        else:
            x = (x - self.m) * torch.exp(-self.logs) * x_mask
            return x


class ResidualCouplingLayer(nn.Module):
    def __init__(self,
                 channels,
                 hidden_channels,
                 kernel_size,
                 dilation_rate,
                 n_layers,
                 p_dropout=0,
                 gin_channels=0,
                 mean_only=False):
        assert channels % 2 == 0, "channels should be divisible by 2"
        super().__init__()
        self.channels = channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.half_channels = channels // 2
        self.mean_only = mean_only

        self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
        self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
        self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
        self.post.weight.data.zero_()
        self.post.bias.data.zero_()

    def forward(self, x, x_mask, g=None, reverse=False):
        x0, x1 = torch.split(x, [self.half_channels]*2, 1)
        h = self.pre(x0) * x_mask
        h = self.enc(h, x_mask, g=g)
        stats = self.post(h) * x_mask
        if not self.mean_only:
            m, logs = torch.split(stats, [self.half_channels]*2, 1)
        else:
            m = stats
            logs = torch.zeros_like(m)

        if not reverse:
            x1 = m + x1 * torch.exp(logs) * x_mask
            x = torch.cat([x0, x1], 1)
            logdet = torch.sum(logs, [1, 2])
            return x, logdet
        else:
            x1 = (x1 - m) * torch.exp(-logs) * x_mask
            x = torch.cat([x0, x1], 1)
            return x


class ConvFlow(nn.Module):
    def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
        super().__init__()
        self.in_channels = in_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.n_layers = n_layers
        self.num_bins = num_bins
        self.tail_bound = tail_bound
        self.half_channels = in_channels // 2

        self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
        self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
        self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
        self.proj.weight.data.zero_()
        self.proj.bias.data.zero_()

    def forward(self, x, x_mask, g=None, reverse=False):
        x0, x1 = torch.split(x, [self.half_channels]*2, 1)
        h = self.pre(x0)
        h = self.convs(h, x_mask, g=g)
        h = self.proj(h) * x_mask

        b, c, t = x0.shape
        h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2)  # [b, cx?, t] -> [b, c, t, ?]

        unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
        unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
        unnormalized_derivatives = h[..., 2 * self.num_bins:]

        x1, logabsdet = piecewise_rational_quadratic_transform(x1,
                                                               unnormalized_widths,
                                                               unnormalized_heights,
                                                               unnormalized_derivatives,
                                                               inverse=reverse,
                                                               tails='linear',
                                                               tail_bound=self.tail_bound
                                                               )

        x = torch.cat([x0, x1], 1) * x_mask
        logdet = torch.sum(logabsdet * x_mask, [1, 2])
        if not reverse:
            return x, logdet
        else:
            return x
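As a sanity check on the flow modules above, a minimal invertibility sketch with ElementwiseAffine. Illustrative only; it is not part of the removed file and assumes the file is importable as modules.py.

# Hypothetical sketch; assumes this file is importable as modules.py.
import torch
from modules import ElementwiseAffine

layer = ElementwiseAffine(channels=4)
x = torch.randn(1, 4, 10)
x_mask = torch.ones(1, 1, 10)
y, logdet = layer(x, x_mask)            # forward returns (y, log-determinant)
x_rec = layer(y, x_mask, reverse=True)  # reverse inverts the transform
assert torch.allclose(x, x_rec, atol=1e-6)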
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distro/distro.py
DELETED
@@ -1,1399 +0,0 @@
#!/usr/bin/env python
# Copyright 2015,2016,2017 Nir Cohen
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
The ``distro`` package (``distro`` stands for Linux Distribution) provides
information about the Linux distribution it runs on, such as a reliable
machine-readable distro ID, or version information.

It is the recommended replacement for Python's original
:py:func:`platform.linux_distribution` function, but it provides much more
functionality. An alternative implementation became necessary because Python
3.5 deprecated this function, and Python 3.8 removed it altogether. Its
predecessor function :py:func:`platform.dist` was already deprecated since
Python 2.6 and removed in Python 3.8. Still, there are many cases in which
access to OS distribution information is needed. See `Python issue 1322
<https://bugs.python.org/issue1322>`_ for more information.
"""

import argparse
import json
import logging
import os
import re
import shlex
import subprocess
import sys
import warnings
from typing import (
    Any,
    Callable,
    Dict,
    Iterable,
    Optional,
    Sequence,
    TextIO,
    Tuple,
    Type,
)

try:
    from typing import TypedDict
except ImportError:
    # Python 3.7
    TypedDict = dict

__version__ = "1.8.0"


class VersionDict(TypedDict):
    major: str
    minor: str
    build_number: str


class InfoDict(TypedDict):
    id: str
    version: str
    version_parts: VersionDict
    like: str
    codename: str


_UNIXCONFDIR = os.environ.get("UNIXCONFDIR", "/etc")
_UNIXUSRLIBDIR = os.environ.get("UNIXUSRLIBDIR", "/usr/lib")
_OS_RELEASE_BASENAME = "os-release"

#: Translation table for normalizing the "ID" attribute defined in os-release
#: files, for use by the :func:`distro.id` method.
#:
#: * Key: Value as defined in the os-release file, translated to lower case,
#:   with blanks translated to underscores.
#:
#: * Value: Normalized value.
NORMALIZED_OS_ID = {
    "ol": "oracle",  # Oracle Linux
    "opensuse-leap": "opensuse",  # Newer versions of OpenSuSE report as opensuse-leap
}

#: Translation table for normalizing the "Distributor ID" attribute returned by
#: the lsb_release command, for use by the :func:`distro.id` method.
#:
#: * Key: Value as returned by the lsb_release command, translated to lower
#:   case, with blanks translated to underscores.
#:
#: * Value: Normalized value.
NORMALIZED_LSB_ID = {
    "enterpriseenterpriseas": "oracle",  # Oracle Enterprise Linux 4
    "enterpriseenterpriseserver": "oracle",  # Oracle Linux 5
    "redhatenterpriseworkstation": "rhel",  # RHEL 6, 7 Workstation
    "redhatenterpriseserver": "rhel",  # RHEL 6, 7 Server
    "redhatenterprisecomputenode": "rhel",  # RHEL 6 ComputeNode
}

#: Translation table for normalizing the distro ID derived from the file name
#: of distro release files, for use by the :func:`distro.id` method.
#:
#: * Key: Value as derived from the file name of a distro release file,
#:   translated to lower case, with blanks translated to underscores.
#:
#: * Value: Normalized value.
NORMALIZED_DISTRO_ID = {
    "redhat": "rhel",  # RHEL 6.x, 7.x
}

# Pattern for content of distro release file (reversed)
_DISTRO_RELEASE_CONTENT_REVERSED_PATTERN = re.compile(
    r"(?:[^)]*\)(.*)\()? *(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)"
)

# Pattern for base file name of distro release file
_DISTRO_RELEASE_BASENAME_PATTERN = re.compile(r"(\w+)[-_](release|version)$")

# Base file names to be looked up for if _UNIXCONFDIR is not readable.
_DISTRO_RELEASE_BASENAMES = [
    "SuSE-release",
    "arch-release",
    "base-release",
    "centos-release",
    "fedora-release",
    "gentoo-release",
    "mageia-release",
    "mandrake-release",
    "mandriva-release",
    "mandrivalinux-release",
    "manjaro-release",
    "oracle-release",
    "redhat-release",
    "rocky-release",
    "sl-release",
    "slackware-version",
]

# Base file names to be ignored when searching for distro release file
_DISTRO_RELEASE_IGNORE_BASENAMES = (
    "debian_version",
    "lsb-release",
    "oem-release",
    _OS_RELEASE_BASENAME,
    "system-release",
    "plesk-release",
    "iredmail-release",
)


def linux_distribution(full_distribution_name: bool = True) -> Tuple[str, str, str]:
    """
    .. deprecated:: 1.6.0

        :func:`distro.linux_distribution()` is deprecated. It should only be
        used as a compatibility shim with Python's
        :py:func:`platform.linux_distribution()`. Please use :func:`distro.id`,
        :func:`distro.version` and :func:`distro.name` instead.

    Return information about the current OS distribution as a tuple
    ``(id_name, version, codename)`` with items as follows:

    * ``id_name``:  If *full_distribution_name* is false, the result of
      :func:`distro.id`. Otherwise, the result of :func:`distro.name`.

    * ``version``:  The result of :func:`distro.version`.

    * ``codename``:  The extra item (usually in parentheses) after the
      os-release version number, or the result of :func:`distro.codename`.

    The interface of this function is compatible with the original
    :py:func:`platform.linux_distribution` function, supporting a subset of
    its parameters.

    The data it returns may not exactly be the same, because it uses more data
    sources than the original function, and that may lead to different data if
    the OS distribution is not consistent across multiple data sources it
    provides (there are indeed such distributions ...).

    Another reason for differences is the fact that the :func:`distro.id`
    method normalizes the distro ID string to a reliable machine-readable value
    for a number of popular OS distributions.
    """
    warnings.warn(
        "distro.linux_distribution() is deprecated. It should only be used as a "
        "compatibility shim with Python's platform.linux_distribution(). Please use "
        "distro.id(), distro.version() and distro.name() instead.",
        DeprecationWarning,
        stacklevel=2,
    )
    return _distro.linux_distribution(full_distribution_name)


def id() -> str:
    """
    Return the distro ID of the current distribution, as a
    machine-readable string.

    For a number of OS distributions, the returned distro ID value is
    *reliable*, in the sense that it is documented and that it does not change
    across releases of the distribution.

    This package maintains the following reliable distro ID values:

    ==============  =========================================
    Distro ID       Distribution
    ==============  =========================================
    "ubuntu"        Ubuntu
    "debian"        Debian
    "rhel"          RedHat Enterprise Linux
    "centos"        CentOS
    "fedora"        Fedora
    "sles"          SUSE Linux Enterprise Server
    "opensuse"      openSUSE
    "amzn"          Amazon Linux
    "arch"          Arch Linux
    "buildroot"     Buildroot
    "cloudlinux"    CloudLinux OS
    "exherbo"       Exherbo Linux
    "gentoo"        GenToo Linux
    "ibm_powerkvm"  IBM PowerKVM
    "kvmibm"        KVM for IBM z Systems
    "linuxmint"     Linux Mint
    "mageia"        Mageia
    "mandriva"      Mandriva Linux
    "parallels"     Parallels
    "pidora"        Pidora
    "raspbian"      Raspbian
    "oracle"        Oracle Linux (and Oracle Enterprise Linux)
    "scientific"    Scientific Linux
    "slackware"     Slackware
    "xenserver"     XenServer
    "openbsd"       OpenBSD
    "netbsd"        NetBSD
    "freebsd"       FreeBSD
    "midnightbsd"   MidnightBSD
    "rocky"         Rocky Linux
    "aix"           AIX
    "guix"          Guix System
    ==============  =========================================

    If you have a need to get distros for reliable IDs added into this set,
    or if you find that the :func:`distro.id` function returns a different
    distro ID for one of the listed distros, please create an issue in the
    `distro issue tracker`_.

    **Lookup hierarchy and transformations:**

    First, the ID is obtained from the following sources, in the specified
    order. The first available and non-empty value is used:

    * the value of the "ID" attribute of the os-release file,

    * the value of the "Distributor ID" attribute returned by the lsb_release
      command,

    * the first part of the file name of the distro release file,

    The so determined ID value then passes the following transformations,
    before it is returned by this method:

    * it is translated to lower case,

    * blanks (which should not be there anyway) are translated to underscores,

    * a normalization of the ID is performed, based upon
      `normalization tables`_. The purpose of this normalization is to ensure
      that the ID is as reliable as possible, even across incompatible changes
      in the OS distributions. A common reason for an incompatible change is
      the addition of an os-release file, or the addition of the lsb_release
      command, with ID values that differ from what was previously determined
      from the distro release file name.
    """
    return _distro.id()


def name(pretty: bool = False) -> str:
    """
    Return the name of the current OS distribution, as a human-readable
    string.

    If *pretty* is false, the name is returned without version or codename.
    (e.g. "CentOS Linux")

    If *pretty* is true, the version and codename are appended.
    (e.g. "CentOS Linux 7.1.1503 (Core)")

    **Lookup hierarchy:**

    The name is obtained from the following sources, in the specified order.
    The first available and non-empty value is used:

    * If *pretty* is false:

      - the value of the "NAME" attribute of the os-release file,

      - the value of the "Distributor ID" attribute returned by the lsb_release
        command,

      - the value of the "<name>" field of the distro release file.

    * If *pretty* is true:

      - the value of the "PRETTY_NAME" attribute of the os-release file,

      - the value of the "Description" attribute returned by the lsb_release
        command,

      - the value of the "<name>" field of the distro release file, appended
        with the value of the pretty version ("<version_id>" and "<codename>"
        fields) of the distro release file, if available.
    """
    return _distro.name(pretty)


def version(pretty: bool = False, best: bool = False) -> str:
    """
    Return the version of the current OS distribution, as a human-readable
    string.

    If *pretty* is false, the version is returned without codename (e.g.
    "7.0").

    If *pretty* is true, the codename in parenthesis is appended, if the
    codename is non-empty (e.g. "7.0 (Maipo)").

    Some distributions provide version numbers with different precisions in
    the different sources of distribution information. Examining the different
    sources in a fixed priority order does not always yield the most precise
    version (e.g. for Debian 8.2, or CentOS 7.1).

    Some other distributions may not provide this kind of information. In these
    cases, an empty string would be returned. This behavior can be observed
    with rolling releases distributions (e.g. Arch Linux).

    The *best* parameter can be used to control the approach for the returned
    version:

    If *best* is false, the first non-empty version number in priority order of
    the examined sources is returned.

    If *best* is true, the most precise version number out of all examined
    sources is returned.

    **Lookup hierarchy:**

    In all cases, the version number is obtained from the following sources.
    If *best* is false, this order represents the priority order:

    * the value of the "VERSION_ID" attribute of the os-release file,
    * the value of the "Release" attribute returned by the lsb_release
      command,
    * the version number parsed from the "<version_id>" field of the first line
      of the distro release file,
    * the version number parsed from the "PRETTY_NAME" attribute of the
      os-release file, if it follows the format of the distro release files.
    * the version number parsed from the "Description" attribute returned by
      the lsb_release command, if it follows the format of the distro release
      files.
    """
    return _distro.version(pretty, best)


def version_parts(best: bool = False) -> Tuple[str, str, str]:
    """
    Return the version of the current OS distribution as a tuple
    ``(major, minor, build_number)`` with items as follows:

    * ``major``:  The result of :func:`distro.major_version`.

    * ``minor``:  The result of :func:`distro.minor_version`.

    * ``build_number``:  The result of :func:`distro.build_number`.

    For a description of the *best* parameter, see the :func:`distro.version`
    method.
    """
    return _distro.version_parts(best)


def major_version(best: bool = False) -> str:
    """
    Return the major version of the current OS distribution, as a string,
    if provided.
    Otherwise, the empty string is returned. The major version is the first
    part of the dot-separated version string.

    For a description of the *best* parameter, see the :func:`distro.version`
    method.
    """
    return _distro.major_version(best)


def minor_version(best: bool = False) -> str:
    """
    Return the minor version of the current OS distribution, as a string,
    if provided.
    Otherwise, the empty string is returned. The minor version is the second
    part of the dot-separated version string.

    For a description of the *best* parameter, see the :func:`distro.version`
    method.
    """
    return _distro.minor_version(best)


def build_number(best: bool = False) -> str:
    """
    Return the build number of the current OS distribution, as a string,
    if provided.
    Otherwise, the empty string is returned. The build number is the third part
    of the dot-separated version string.

    For a description of the *best* parameter, see the :func:`distro.version`
    method.
    """
    return _distro.build_number(best)


def like() -> str:
    """
    Return a space-separated list of distro IDs of distributions that are
    closely related to the current OS distribution in regards to packaging
    and programming interfaces, for example distributions the current
    distribution is a derivative from.

    **Lookup hierarchy:**

    This information item is only provided by the os-release file.
    For details, see the description of the "ID_LIKE" attribute in the
    `os-release man page
    <http://www.freedesktop.org/software/systemd/man/os-release.html>`_.
    """
    return _distro.like()


def codename() -> str:
    """
    Return the codename for the release of the current OS distribution,
    as a string.

    If the distribution does not have a codename, an empty string is returned.

    Note that the returned codename is not always really a codename. For
    example, openSUSE returns "x86_64". This function does not handle such
    cases in any special way and just returns the string it finds, if any.

    **Lookup hierarchy:**

    * the codename within the "VERSION" attribute of the os-release file, if
      provided,

    * the value of the "Codename" attribute returned by the lsb_release
      command,

    * the value of the "<codename>" field of the distro release file.
    """
    return _distro.codename()


def info(pretty: bool = False, best: bool = False) -> InfoDict:
    """
    Return certain machine-readable information items about the current OS
    distribution in a dictionary, as shown in the following example:

    .. sourcecode:: python

        {
            'id': 'rhel',
            'version': '7.0',
            'version_parts': {
                'major': '7',
                'minor': '0',
                'build_number': ''
            },
            'like': 'fedora',
            'codename': 'Maipo'
        }

    The dictionary structure and keys are always the same, regardless of which
    information items are available in the underlying data sources. The values
    for the various keys are as follows:

    * ``id``:  The result of :func:`distro.id`.

    * ``version``:  The result of :func:`distro.version`.

    * ``version_parts -> major``:  The result of :func:`distro.major_version`.

    * ``version_parts -> minor``:  The result of :func:`distro.minor_version`.

    * ``version_parts -> build_number``:  The result of
      :func:`distro.build_number`.

    * ``like``:  The result of :func:`distro.like`.

    * ``codename``:  The result of :func:`distro.codename`.

    For a description of the *pretty* and *best* parameters, see the
    :func:`distro.version` method.
    """
    return _distro.info(pretty, best)


def os_release_info() -> Dict[str, str]:
    """
    Return a dictionary containing key-value pairs for the information items
    from the os-release file data source of the current OS distribution.

    See `os-release file`_ for details about these information items.
    """
    return _distro.os_release_info()


def lsb_release_info() -> Dict[str, str]:
    """
    Return a dictionary containing key-value pairs for the information items
    from the lsb_release command data source of the current OS distribution.

    See `lsb_release command output`_ for details about these information
    items.
    """
    return _distro.lsb_release_info()


def distro_release_info() -> Dict[str, str]:
    """
    Return a dictionary containing key-value pairs for the information items
    from the distro release file data source of the current OS distribution.

    See `distro release file`_ for details about these information items.
    """
    return _distro.distro_release_info()


def uname_info() -> Dict[str, str]:
    """
    Return a dictionary containing key-value pairs for the information items
    from the distro release file data source of the current OS distribution.
    """
    return _distro.uname_info()


def os_release_attr(attribute: str) -> str:
    """
    Return a single named information item from the os-release file data source
    of the current OS distribution.

    Parameters:

    * ``attribute`` (string): Key of the information item.

    Returns:

    * (string): Value of the information item, if the item exists.
      The empty string, if the item does not exist.

    See `os-release file`_ for details about these information items.
    """
    return _distro.os_release_attr(attribute)


def lsb_release_attr(attribute: str) -> str:
    """
    Return a single named information item from the lsb_release command output
    data source of the current OS distribution.

    Parameters:

    * ``attribute`` (string): Key of the information item.

    Returns:

    * (string): Value of the information item, if the item exists.
      The empty string, if the item does not exist.

    See `lsb_release command output`_ for details about these information
    items.
    """
    return _distro.lsb_release_attr(attribute)


def distro_release_attr(attribute: str) -> str:
    """
    Return a single named information item from the distro release file
    data source of the current OS distribution.

    Parameters:

    * ``attribute`` (string): Key of the information item.

    Returns:

    * (string): Value of the information item, if the item exists.
      The empty string, if the item does not exist.

    See `distro release file`_ for details about these information items.
    """
    return _distro.distro_release_attr(attribute)


def uname_attr(attribute: str) -> str:
    """
    Return a single named information item from the distro release file
    data source of the current OS distribution.

    Parameters:

    * ``attribute`` (string): Key of the information item.

    Returns:

    * (string): Value of the information item, if the item exists.
      The empty string, if the item does not exist.
    """
    return _distro.uname_attr(attribute)


try:
    from functools import cached_property
except ImportError:
    # Python < 3.8
    class cached_property:  # type: ignore
        """A version of @property which caches the value.  On access, it calls the
        underlying function and sets the value in `__dict__` so future accesses
        will not re-call the property.
        """

        def __init__(self, f: Callable[[Any], Any]) -> None:
            self._fname = f.__name__
            self._f = f

        def __get__(self, obj: Any, owner: Type[Any]) -> Any:
            assert obj is not None, f"call {self._fname} on an instance"
            ret = obj.__dict__[self._fname] = self._f(obj)
            return ret


class LinuxDistribution:
    """
    Provides information about a OS distribution.

    This package creates a private module-global instance of this class with
    default initialization arguments, that is used by the
    `consolidated accessor functions`_ and `single source accessor functions`_.
    By using default initialization arguments, that module-global instance
    returns data about the current OS distribution (i.e. the distro this
    package runs on).

    Normally, it is not necessary to create additional instances of this class.
    However, in situations where control is needed over the exact data sources
    that are used, instances of this class can be created with a specific
    distro release file, or a specific os-release file, or without invoking the
    lsb_release command.
    """

    def __init__(
        self,
        include_lsb: Optional[bool] = None,
        os_release_file: str = "",
        distro_release_file: str = "",
        include_uname: Optional[bool] = None,
        root_dir: Optional[str] = None,
        include_oslevel: Optional[bool] = None,
    ) -> None:
        """
        The initialization method of this class gathers information from the
        available data sources, and stores that in private instance attributes.
        Subsequent access to the information items uses these private instance
        attributes, so that the data sources are read only once.

        Parameters:

        * ``include_lsb`` (bool): Controls whether the
          `lsb_release command output`_ is included as a data source.

          If the lsb_release command is not available in the program execution
          path, the data source for the lsb_release command will be empty.

        * ``os_release_file`` (string): The path name of the
          `os-release file`_ that is to be used as a data source.

          An empty string (the default) will cause the default path name to
          be used (see `os-release file`_ for details).

          If the specified or defaulted os-release file does not exist, the
          data source for the os-release file will be empty.

        * ``distro_release_file`` (string): The path name of the
          `distro release file`_ that is to be used as a data source.

          An empty string (the default) will cause a default search algorithm
          to be used (see `distro release file`_ for details).

          If the specified distro release file does not exist, or if no default
          distro release file can be found, the data source for the distro
          release file will be empty.

        * ``include_uname`` (bool): Controls whether uname command output is
          included as a data source. If the uname command is not available in
          the program execution path the data source for the uname command will
          be empty.

        * ``root_dir`` (string): The absolute path to the root directory to use
          to find distro-related information files. Note that ``include_*``
          parameters must not be enabled in combination with ``root_dir``.

        * ``include_oslevel`` (bool): Controls whether (AIX) oslevel command
          output is included as a data source. If the oslevel command is not
          available in the program execution path the data source will be
          empty.

        Public instance attributes:

        * ``os_release_file`` (string): The path name of the
          `os-release file`_ that is actually used as a data source. The
          empty string if no distro release file is used as a data source.

        * ``distro_release_file`` (string): The path name of the
          `distro release file`_ that is actually used as a data source. The
          empty string if no distro release file is used as a data source.

        * ``include_lsb`` (bool): The result of the ``include_lsb`` parameter.
          This controls whether the lsb information will be loaded.

        * ``include_uname`` (bool): The result of the ``include_uname``
          parameter. This controls whether the uname information will
          be loaded.

        * ``include_oslevel`` (bool): The result of the ``include_oslevel``
          parameter. This controls whether (AIX) oslevel information will be
          loaded.

        * ``root_dir`` (string): The result of the ``root_dir`` parameter.
          The absolute path to the root directory to use to find distro-related
          information files.

        Raises:

        * :py:exc:`ValueError`: Initialization parameters combination is not
          supported.

        * :py:exc:`OSError`: Some I/O issue with an os-release file or distro
          release file.

        * :py:exc:`UnicodeError`: A data source has unexpected characters or
          uses an unexpected encoding.
        """
        self.root_dir = root_dir
        self.etc_dir = os.path.join(root_dir, "etc") if root_dir else _UNIXCONFDIR
        self.usr_lib_dir = (
            os.path.join(root_dir, "usr/lib") if root_dir else _UNIXUSRLIBDIR
        )

        if os_release_file:
            self.os_release_file = os_release_file
        else:
            etc_dir_os_release_file = os.path.join(self.etc_dir, _OS_RELEASE_BASENAME)
            usr_lib_os_release_file = os.path.join(
                self.usr_lib_dir, _OS_RELEASE_BASENAME
            )

            # NOTE: The idea is to respect order **and** have it set
            #       at all times for API backwards compatibility.
            if os.path.isfile(etc_dir_os_release_file) or not os.path.isfile(
                usr_lib_os_release_file
            ):
                self.os_release_file = etc_dir_os_release_file
            else:
                self.os_release_file = usr_lib_os_release_file

        self.distro_release_file = distro_release_file or ""  # updated later

        is_root_dir_defined = root_dir is not None
        if is_root_dir_defined and (include_lsb or include_uname or include_oslevel):
            raise ValueError(
                "Including subprocess data sources from specific root_dir is disallowed"
                " to prevent false information"
            )
        self.include_lsb = (
            include_lsb if include_lsb is not None else not is_root_dir_defined
        )
        self.include_uname = (
            include_uname if include_uname is not None else not is_root_dir_defined
        )
        self.include_oslevel = (
            include_oslevel if include_oslevel is not None else not is_root_dir_defined
        )

    def __repr__(self) -> str:
        """Return repr of all info"""
        return (
            "LinuxDistribution("
            "os_release_file={self.os_release_file!r}, "
            "distro_release_file={self.distro_release_file!r}, "
            "include_lsb={self.include_lsb!r}, "
            "include_uname={self.include_uname!r}, "
            "include_oslevel={self.include_oslevel!r}, "
            "root_dir={self.root_dir!r}, "
            "_os_release_info={self._os_release_info!r}, "
            "_lsb_release_info={self._lsb_release_info!r}, "
            "_distro_release_info={self._distro_release_info!r}, "
            "_uname_info={self._uname_info!r}, "
            "_oslevel_info={self._oslevel_info!r})".format(self=self)
        )

    def linux_distribution(
        self, full_distribution_name: bool = True
    ) -> Tuple[str, str, str]:
        """
        Return information about the OS distribution that is compatible
        with Python's :func:`platform.linux_distribution`, supporting a subset
        of its parameters.

        For details, see :func:`distro.linux_distribution`.
        """
        return (
            self.name() if full_distribution_name else self.id(),
            self.version(),
            self._os_release_info.get("release_codename") or self.codename(),
        )

    def id(self) -> str:
        """Return the distro ID of the OS distribution, as a string.

        For details, see :func:`distro.id`.
        """

        def normalize(distro_id: str, table: Dict[str, str]) -> str:
            distro_id = distro_id.lower().replace(" ", "_")
            return table.get(distro_id, distro_id)

        distro_id = self.os_release_attr("id")
        if distro_id:
            return normalize(distro_id, NORMALIZED_OS_ID)

        distro_id = self.lsb_release_attr("distributor_id")
        if distro_id:
            return normalize(distro_id, NORMALIZED_LSB_ID)

        distro_id = self.distro_release_attr("id")
        if distro_id:
            return normalize(distro_id, NORMALIZED_DISTRO_ID)

        distro_id = self.uname_attr("id")
        if distro_id:
            return normalize(distro_id, NORMALIZED_DISTRO_ID)

        return ""

    def name(self, pretty: bool = False) -> str:
        """
        Return the name of the OS distribution, as a string.

        For details, see :func:`distro.name`.
        """
        name = (
            self.os_release_attr("name")
            or self.lsb_release_attr("distributor_id")
            or self.distro_release_attr("name")
            or self.uname_attr("name")
        )
        if pretty:
            name = self.os_release_attr("pretty_name") or self.lsb_release_attr(
                "description"
            )
            if not name:
                name = self.distro_release_attr("name") or self.uname_attr("name")
                version = self.version(pretty=True)
                if version:
                    name = f"{name} {version}"
        return name or ""

    def version(self, pretty: bool = False, best: bool = False) -> str:
        """
        Return the version of the OS distribution, as a string.

        For details, see :func:`distro.version`.
        """
        versions = [
            self.os_release_attr("version_id"),
            self.lsb_release_attr("release"),
            self.distro_release_attr("version_id"),
            self._parse_distro_release_content(self.os_release_attr("pretty_name")).get(
                "version_id", ""
            ),
            self._parse_distro_release_content(
                self.lsb_release_attr("description")
            ).get("version_id", ""),
            self.uname_attr("release"),
        ]
        if self.uname_attr("id").startswith("aix"):
            # On AIX platforms, prefer oslevel command output.
            versions.insert(0, self.oslevel_info())
        elif self.id() == "debian" or "debian" in self.like().split():
            # On Debian-like, add debian_version file content to candidates list.
            versions.append(self._debian_version)
        version = ""
        if best:
            # This algorithm uses the last version in priority order that has
            # the best precision. If the versions are not in conflict, that
            # does not matter; otherwise, using the last one instead of the
            # first one might be considered a surprise.
            for v in versions:
                if v.count(".") > version.count(".") or version == "":
                    version = v
        else:
            for v in versions:
                if v != "":
                    version = v
                    break
        if pretty and version and self.codename():
            version = f"{version} ({self.codename()})"
        return version

    def version_parts(self, best: bool = False) -> Tuple[str, str, str]:
        """
        Return the version of the OS distribution, as a tuple of version
        numbers.

        For details, see :func:`distro.version_parts`.
        """
        version_str = self.version(best=best)
        if version_str:
            version_regex = re.compile(r"(\d+)\.?(\d+)?\.?(\d+)?")
            matches = version_regex.match(version_str)
            if matches:
                major, minor, build_number = matches.groups()
                return major, minor or "", build_number or ""
        return "", "", ""

    def major_version(self, best: bool = False) -> str:
        """
        Return the major version number of the current distribution.

        For details, see :func:`distro.major_version`.
        """
        return self.version_parts(best)[0]

    def minor_version(self, best: bool = False) -> str:
        """
        Return the minor version number of the current distribution.

        For details, see :func:`distro.minor_version`.
        """
        return self.version_parts(best)[1]

    def build_number(self, best: bool = False) -> str:
        """
        Return the build number of the current distribution.

        For details, see :func:`distro.build_number`.
        """
        return self.version_parts(best)[2]

    def like(self) -> str:
        """
        Return the IDs of distributions that are like the OS distribution.

        For details, see :func:`distro.like`.
        """
        return self.os_release_attr("id_like") or ""

    def codename(self) -> str:
        """
        Return the codename of the OS distribution.

        For details, see :func:`distro.codename`.
        """
        try:
            # Handle os_release specially since distros might purposefully set
            # this to empty string to have no codename
            return self._os_release_info["codename"]
        except KeyError:
            return (
                self.lsb_release_attr("codename")
                or self.distro_release_attr("codename")
                or ""
            )

    def info(self, pretty: bool = False, best: bool = False) -> InfoDict:
        """
        Return certain machine-readable information about the OS
        distribution.

        For details, see :func:`distro.info`.
        """
        return dict(
            id=self.id(),
            version=self.version(pretty, best),
            version_parts=dict(
                major=self.major_version(best),
                minor=self.minor_version(best),
                build_number=self.build_number(best),
            ),
            like=self.like(),
            codename=self.codename(),
        )

    def os_release_info(self) -> Dict[str, str]:
        """
        Return a dictionary containing key-value pairs for the information
        items from the os-release file data source of the OS distribution.

        For details, see :func:`distro.os_release_info`.
        """
        return self._os_release_info

    def lsb_release_info(self) -> Dict[str, str]:
        """
        Return a dictionary containing key-value pairs for the information
        items from the lsb_release command data source of the OS
        distribution.

        For details, see :func:`distro.lsb_release_info`.
        """
        return self._lsb_release_info

    def distro_release_info(self) -> Dict[str, str]:
        """
        Return a dictionary containing key-value pairs for the information
        items from the distro release file data source of the OS
        distribution.

        For details, see :func:`distro.distro_release_info`.
        """
        return self._distro_release_info

    def uname_info(self) -> Dict[str, str]:
        """
        Return a dictionary containing key-value pairs for the information
        items from the uname command data source of the OS distribution.

        For details, see :func:`distro.uname_info`.
        """
        return self._uname_info

    def oslevel_info(self) -> str:
        """
        Return AIX' oslevel command output.
        """
        return self._oslevel_info

    def os_release_attr(self, attribute: str) -> str:
        """
        Return a single named information item from the os-release file data
        source of the OS distribution.

        For details, see :func:`distro.os_release_attr`.
        """
        return self._os_release_info.get(attribute, "")

    def lsb_release_attr(self, attribute: str) -> str:
        """
        Return a single named information item from the lsb_release command
        output data source of the OS distribution.

        For details, see :func:`distro.lsb_release_attr`.
        """
        return self._lsb_release_info.get(attribute, "")

    def distro_release_attr(self, attribute: str) -> str:
        """
        Return a single named information item from the distro release file
        data source of the OS distribution.

        For details, see :func:`distro.distro_release_attr`.
        """
        return self._distro_release_info.get(attribute, "")

    def uname_attr(self, attribute: str) -> str:
        """
        Return a single named information item from the uname command
        output data source of the OS distribution.

        For details, see :func:`distro.uname_attr`.
        """
        return self._uname_info.get(attribute, "")

    @cached_property
    def _os_release_info(self) -> Dict[str, str]:
        """
        Get the information items from the specified os-release file.

        Returns:
            A dictionary containing all information items.
        """
        if os.path.isfile(self.os_release_file):
            with open(self.os_release_file, encoding="utf-8") as release_file:
                return self._parse_os_release_content(release_file)
        return {}

    @staticmethod
    def _parse_os_release_content(lines: TextIO) -> Dict[str, str]:
        """
        Parse the lines of an os-release file.

        Parameters:

        * lines: Iterable through the lines in the os-release file.
                 Each line must be a unicode string or a UTF-8 encoded byte
                 string.

        Returns:
            A dictionary containing all information items.
        """
        props = {}
        lexer = shlex.shlex(lines, posix=True)
        lexer.whitespace_split = True

        tokens = list(lexer)
        for token in tokens:
            # At this point, all shell-like parsing has been done (i.e.
            # comments processed, quotes and backslash escape sequences
            # processed, multi-line values assembled, trailing newlines
            # stripped, etc.), so the tokens are now either:
            # * variable assignments: var=value
            # * commands or their arguments (not allowed in os-release)
            # Ignore any tokens that are not variable assignments
            if "=" in token:
                k, v = token.split("=", 1)
                props[k.lower()] = v

        if "version" in props:
            # extract release codename (if any) from version attribute
            match = re.search(r"\((\D+)\)|,\s*(\D+)", props["version"])
            if match:
                release_codename = match.group(1) or match.group(2)
                props["codename"] = props["release_codename"] = release_codename

        if "version_codename" in props:
            # os-release added a version_codename field.  Use that in
            # preference to anything else Note that some distros purposefully
            # do not have code names.  They should be setting
            # version_codename=""
            props["codename"] = props["version_codename"]
        elif "ubuntu_codename" in props:
            # Same as above but a non-standard field name used on older Ubuntus
            props["codename"] = props["ubuntu_codename"]

        return props

    @cached_property
    def _lsb_release_info(self) -> Dict[str, str]:
        """
        Get the information items from the lsb_release command output.

        Returns:
            A dictionary containing all information items.
        """
        if not self.include_lsb:
            return {}
        try:
            cmd = ("lsb_release", "-a")
            stdout = subprocess.check_output(cmd, stderr=subprocess.DEVNULL)
        # Command not found or lsb_release returned error
        except (OSError, subprocess.CalledProcessError):
            return {}
        content = self._to_str(stdout).splitlines()
        return self._parse_lsb_release_content(content)

    @staticmethod
    def _parse_lsb_release_content(lines: Iterable[str]) -> Dict[str, str]:
        """
        Parse the output of the lsb_release command.

        Parameters:

        * lines: Iterable through the lines of the lsb_release output.
                 Each line must be a unicode string or a UTF-8 encoded byte
                 string.

        Returns:
            A dictionary containing all information items.
        """
        props = {}
        for line in lines:
            kv = line.strip("\n").split(":", 1)
            if len(kv) != 2:
                # Ignore lines without colon.
                continue
            k, v = kv
            props.update({k.replace(" ", "_").lower(): v.strip()})
        return props

    @cached_property
    def _uname_info(self) -> Dict[str, str]:
        if not self.include_uname:
            return {}
        try:
            cmd = ("uname", "-rs")
            stdout = subprocess.check_output(cmd, stderr=subprocess.DEVNULL)
        except OSError:
            return {}
        content = self._to_str(stdout).splitlines()
        return self._parse_uname_content(content)

    @cached_property
    def _oslevel_info(self) -> str:
        if not self.include_oslevel:
            return ""
        try:
            stdout = subprocess.check_output("oslevel", stderr=subprocess.DEVNULL)
        except (OSError, subprocess.CalledProcessError):
            return ""
        return self._to_str(stdout).strip()

    @cached_property
    def _debian_version(self) -> str:
        try:
            with open(
                os.path.join(self.etc_dir, "debian_version"), encoding="ascii"
            ) as fp:
                return fp.readline().rstrip()
        except FileNotFoundError:
            return ""

    @staticmethod
    def _parse_uname_content(lines: Sequence[str]) -> Dict[str, str]:
        if not lines:
            return {}
        props = {}
        match = re.search(r"^([^\s]+)\s+([\d\.]+)", lines[0].strip())
        if match:
            name, version = match.groups()

            # This is to prevent the Linux kernel version from
            # appearing as the 'best' version on otherwise
            # identifiable distributions.
            if name == "Linux":
                return {}
            props["id"] = name.lower()
            props["name"] = name
            props["release"] = version
        return props

    @staticmethod
    def _to_str(bytestring: bytes) -> str:
        encoding = sys.getfilesystemencoding()
        return bytestring.decode(encoding)

    @cached_property
    def _distro_release_info(self) -> Dict[str, str]:
        """
        Get the information items from the specified distro release file.

        Returns:
            A dictionary containing all information items.
        """
        if self.distro_release_file:
            # If it was specified, we use it and parse what we can, even if
            # its file name or content does not match the expected pattern.
            distro_info = self._parse_distro_release_file(self.distro_release_file)
            basename = os.path.basename(self.distro_release_file)
            # The file name pattern for user-specified distro release files
            # is somewhat more tolerant (compared to when searching for the
            # file), because we want to use what was specified as best as
            # possible.
            match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename)
        else:
            try:
                basenames = [
                    basename
                    for basename in os.listdir(self.etc_dir)
                    if basename not in _DISTRO_RELEASE_IGNORE_BASENAMES
                    and os.path.isfile(os.path.join(self.etc_dir, basename))
                ]
                # We sort for repeatability in cases where there are multiple
                # distro specific files; e.g. CentOS, Oracle, Enterprise all
                # containing `redhat-release` on top of their own.
                basenames.sort()
            except OSError:
                # This may occur when /etc is not readable but we can't be
                # sure about the *-release files. Check common entries of
                # /etc for information. If they turn out to not be there the
                # error is handled in `_parse_distro_release_file()`.
                basenames = _DISTRO_RELEASE_BASENAMES
            for basename in basenames:
                match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename)
                if match is None:
                    continue
                filepath = os.path.join(self.etc_dir, basename)
                distro_info = self._parse_distro_release_file(filepath)
                # The name is always present if the pattern matches.
                if "name" not in distro_info:
                    continue
                self.distro_release_file = filepath
                break
            else:  # the loop didn't "break": no candidate.
                return {}

        if match is not None:
            distro_info["id"] = match.group(1)

        # CloudLinux < 7: manually enrich info with proper id.
        if "cloudlinux" in distro_info.get("name", "").lower():
            distro_info["id"] = "cloudlinux"

        return distro_info

    def _parse_distro_release_file(self, filepath: str) -> Dict[str, str]:
        """
        Parse a distro release file.

        Parameters:

        * filepath: Path name of the distro release file.

        Returns:
            A dictionary containing all information items.
        """
        try:
            with open(filepath, encoding="utf-8") as fp:
                # Only parse the first line. For instance, on SLES there
                # are multiple lines. We don't want them...
                return self._parse_distro_release_content(fp.readline())
        except OSError:
            # Ignore not being able to read a specific, seemingly version
            # related file.
            # See https://github.com/python-distro/distro/issues/162
            return {}

    @staticmethod
    def _parse_distro_release_content(line: str) -> Dict[str, str]:
        """
        Parse a line from a distro release file.

        Parameters:
        * line: Line from the distro release file. Must be a unicode string
                or a UTF-8 encoded byte string.

        Returns:
            A dictionary containing all information items.
        """
        matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match(line.strip()[::-1])
        distro_info = {}
        if matches:
            # regexp ensures non-None
            distro_info["name"] = matches.group(3)[::-1]
            if matches.group(2):
                distro_info["version_id"] = matches.group(2)[::-1]
            if matches.group(1):
                distro_info["codename"] = matches.group(1)[::-1]
        elif line:
            distro_info["name"] = line.strip()
        return distro_info


_distro = LinuxDistribution()


def main() -> None:
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(logging.StreamHandler(sys.stdout))

    parser = argparse.ArgumentParser(description="OS distro info tool")
    parser.add_argument(
        "--json", "-j", help="Output in machine readable format", action="store_true"
    )

    parser.add_argument(
        "--root-dir",
        "-r",
        type=str,
        dest="root_dir",
        help="Path to the root filesystem directory (defaults to /)",
    )

    args = parser.parse_args()

    if args.root_dir:
        dist = LinuxDistribution(
            include_lsb=False,
            include_uname=False,
            include_oslevel=False,
            root_dir=args.root_dir,
        )
    else:
        dist = _distro

    if args.json:
        logger.info(json.dumps(dist.info(), indent=4, sort_keys=True))
    else:
        logger.info("Name: %s", dist.name(pretty=True))
        distribution_version = dist.version(pretty=True)
        logger.info("Version: %s", distribution_version)
        distribution_codename = dist.codename()
        logger.info("Codename: %s", distribution_codename)


if __name__ == "__main__":
    main()
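For reference, a short usage sketch of the module deleted above. Its consolidated accessor functions (distro.id(), distro.version(), distro.info()) delegate to the module-global LinuxDistribution instance, so typical use is just:

import distro  # when installed as a package, or the pip-vendored copy

print(distro.id())                # e.g. "ubuntu" (normalized, machine-readable; value depends on the host)
print(distro.version(best=True))  # most precise version found across the data sources
print(distro.info(pretty=True))   # consolidated dict: id, version, version_parts, like, codename

Per its argparse setup, the file is also runnable as a script: `python distro.py --json` prints the same info dictionary as JSON, and `--root-dir` inspects a mounted root filesystem with the subprocess-based data sources (lsb_release, uname, oslevel) disabled.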
spaces/AtomdffAI/wechatgpt4atom/bot/openai/open_ai_bot.py
DELETED
@@ -1,166 +0,0 @@
-# encoding:utf-8
-
-from bot.bot import Bot
-from config import conf
-from common.log import logger
-import openai
-import time
-
-user_session = dict()
-
-# OpenAI conversation model API (working)
-class OpenAIBot(Bot):
-    def __init__(self):
-        openai.api_key = conf().get('open_ai_api_key')
-
-
-    def reply(self, query, context=None):
-        # acquire reply content
-        if not context or not context.get('type') or context.get('type') == 'TEXT':
-            logger.info("[OPEN_AI] query={}".format(query))
-            from_user_id = context['from_user_id']
-            if query == '#清除记忆':  # user command: "clear my memory"
-                Session.clear_session(from_user_id)
-                return '记忆已清除'  # "memory cleared"
-            elif query == '#清除所有':  # user command: "clear everyone's memory"
-                Session.clear_all_session()
-                return '所有人记忆已清除'  # "everyone's memory cleared"
-
-            new_query = Session.build_session_query(query, from_user_id)
-            logger.debug("[OPEN_AI] session query={}".format(new_query))
-
-            reply_content = self.reply_text(new_query, from_user_id, 0)
-            logger.debug("[OPEN_AI] new_query={}, user={}, reply_cont={}".format(new_query, from_user_id, reply_content))
-            if reply_content and query:
-                Session.save_session(query, reply_content, from_user_id)
-            return reply_content
-
-        elif context.get('type', None) == 'IMAGE_CREATE':
-            return self.create_img(query, 0)
-
-    def reply_text(self, query, user_id, retry_count=0):
-        try:
-            response = openai.Completion.create(
-                model="text-davinci-003",  # name of the completion model
-                prompt=query,
-                temperature=0.5,  # in [0, 1]; higher values make replies less deterministic
-                max_tokens=1500,  # maximum number of tokens in the reply
-                top_p=1,
-                frequency_penalty=0.5,  # in [-2, 2]; higher values encourage novel content
-                presence_penalty=0.5,  # in [-2, 2]; higher values encourage novel content
-                stop=["\n\n\n"]
-            )
-            res_content = response.choices[0]['text'].strip().replace('<|endoftext|>', '')
-            logger.info("[OPEN_AI] reply={}".format(res_content))
-            return res_content
-        except openai.error.RateLimitError as e:
-            # rate limit exception
-            logger.warn(e)
-            if retry_count < 1:
-                time.sleep(5)
-                logger.warn("[OPEN_AI] RateLimit exceed, retry #{}".format(retry_count+1))
-                return self.reply_text(query, user_id, retry_count+1)
-            else:
-                return "提问太快啦,请休息一下再问我吧"  # "asking too fast, please take a break"
-        except Exception as e:
-            # unknown exception
-            logger.exception(e)
-            Session.clear_session(user_id)
-            return "请再问我一次吧"  # "please ask me again"
-
-
-    def create_img(self, query, retry_count=0):
-        try:
-            logger.info("[OPEN_AI] image_query={}".format(query))
-            response = openai.Image.create(
-                prompt=query,  # image description
-                n=1,  # number of images generated per request
-                size="1024x1024"  # image size: 256x256, 512x512 or 1024x1024
-            )
-            image_url = response['data'][0]['url']
-            logger.info("[OPEN_AI] image_url={}".format(image_url))
-            return image_url
-        except openai.error.RateLimitError as e:
-            logger.warn(e)
-            if retry_count < 1:
-                time.sleep(5)
-                logger.warn("[OPEN_AI] ImgCreate RateLimit exceed, retry #{}".format(retry_count+1))
-                return self.create_img(query, retry_count+1)  # retry image creation
-            else:
-                return "提问太快啦,请休息一下再问我吧"  # "asking too fast, please take a break"
-        except Exception as e:
-            logger.exception(e)
-            return None
-
-
-class Session(object):
-    @staticmethod
-    def build_session_query(query, user_id):
-        '''
-        build query with conversation history
-        e.g.  Q: xxx
-              A: xxx
-              Q: xxx
-        :param query: query content
-        :param user_id: from user id
-        :return: query content with conversation history
-        '''
-        prompt = conf().get("character_desc", "")
-        if prompt:
-            prompt += "<|endoftext|>\n\n\n"
-        session = user_session.get(user_id, None)
-        if session:
-            for conversation in session:
-                prompt += "Q: " + conversation["question"] + "\n\n\nA: " + conversation["answer"] + "<|endoftext|>\n"
-            prompt += "Q: " + query + "\nA: "
-            return prompt
-        else:
-            return prompt + "Q: " + query + "\nA: "
-
-    @staticmethod
-    def save_session(query, answer, user_id):
-        max_tokens = conf().get("conversation_max_tokens")
-        if not max_tokens:
-            # default to 1000
-            max_tokens = 1000
-        conversation = dict()
-        conversation["question"] = query
-        conversation["answer"] = answer
-        session = user_session.get(user_id)
-        logger.debug(conversation)
-        logger.debug(session)
-        if session:
-            # append conversation
-            session.append(conversation)
-        else:
-            # create session
-            queue = list()
-            queue.append(conversation)
-            user_session[user_id] = queue
-
-        # discard conversations that exceed the limit
-        Session.discard_exceed_conversation(user_session[user_id], max_tokens)
-
-
-    @staticmethod
-    def discard_exceed_conversation(session, max_tokens):
-        count = 0
-        count_list = list()
-        for i in range(len(session)-1, -1, -1):
-            # count tokens of conversation list
-            history_conv = session[i]
-            count += len(history_conv["question"]) + len(history_conv["answer"])
-            count_list.append(count)
-
-        for c in count_list:
-            if c > max_tokens:
-                # pop first conversation
-                session.pop(0)
-
-    @staticmethod
-    def clear_session(user_id):
-        user_session[user_id] = []
-
-    @staticmethod
-    def clear_all_session():
-        user_session.clear()
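
The Session class above keeps per-user history in the module-level user_session dict and flattens it into a plain-text prompt for the completions endpoint. A minimal sketch of the layout build_session_query produces; the character_desc and history values here are invented purely for illustration:

# Illustration only: mirrors the string format of Session.build_session_query.
character_desc = "You are a helpful assistant."      # assumed config value
history = [{"question": "Hi", "answer": "Hello!"}]   # assumed prior turns

prompt = character_desc + "<|endoftext|>\n\n\n"
for conversation in history:
    prompt += "Q: " + conversation["question"] + "\n\n\nA: " + conversation["answer"] + "<|endoftext|>\n"
prompt += "Q: " + "What's new?" + "\nA: "
# The model is then expected to continue the text after the final "A: ".
print(prompt)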
spaces/BAAI/dreambooth-altdiffusion/README.md
DELETED
@@ -1,14 +0,0 @@
----
-title: Dreambooth-Altdiffusion
-emoji: ☁️
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.11
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: multimodalart/dreambooth-training
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Bart92/RVC_HF/infer/modules/train/train.py
DELETED
@@ -1,723 +0,0 @@
-import os
-import sys
-import logging
-
-logger = logging.getLogger(__name__)
-
-now_dir = os.getcwd()
-sys.path.append(os.path.join(now_dir))
-
-import datetime
-
-from infer.lib.train import utils
-
-hps = utils.get_hparams()
-os.environ["CUDA_VISIBLE_DEVICES"] = hps.gpus.replace("-", ",")
-n_gpus = len(hps.gpus.split("-"))
-from random import randint, shuffle
-
-import torch
-try:
-    import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
-    if torch.xpu.is_available():
-        from infer.modules.ipex import ipex_init
-        from infer.modules.ipex.gradscaler import gradscaler_init
-        from torch.xpu.amp import autocast
-        GradScaler = gradscaler_init()
-        ipex_init()
-    else:
-        from torch.cuda.amp import GradScaler, autocast
-except Exception:
-    from torch.cuda.amp import GradScaler, autocast
-
-torch.backends.cudnn.deterministic = False
-torch.backends.cudnn.benchmark = False
-from time import sleep
-from time import time as ttime
-
-import torch.distributed as dist
-import torch.multiprocessing as mp
-
-from torch.nn import functional as F
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-
-from infer.lib.infer_pack import commons
-from infer.lib.train.data_utils import (
-    DistributedBucketSampler,
-    TextAudioCollate,
-    TextAudioCollateMultiNSFsid,
-    TextAudioLoader,
-    TextAudioLoaderMultiNSFsid,
-)
-
-if hps.version == "v1":
-    from infer.lib.infer_pack.models import MultiPeriodDiscriminator
-    from infer.lib.infer_pack.models import SynthesizerTrnMs256NSFsid as RVC_Model_f0
-    from infer.lib.infer_pack.models import (
-        SynthesizerTrnMs256NSFsid_nono as RVC_Model_nof0,
-    )
-else:
-    from infer.lib.infer_pack.models import (
-        SynthesizerTrnMs768NSFsid as RVC_Model_f0,
-        SynthesizerTrnMs768NSFsid_nono as RVC_Model_nof0,
-        MultiPeriodDiscriminatorV2 as MultiPeriodDiscriminator,
-    )
-
-from infer.lib.train.losses import (
-    discriminator_loss,
-    feature_loss,
-    generator_loss,
-    kl_loss,
-)
-from infer.lib.train.mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-from infer.lib.train.process_ckpt import savee
-
-global_step = 0
-import csv
-
-class EpochRecorder:
-    def __init__(self):
-        self.last_time = ttime()
-
-    def record(self):
-        now_time = ttime()
-        elapsed_time = now_time - self.last_time
-        self.last_time = now_time
-        elapsed_time_str = str(datetime.timedelta(seconds=elapsed_time))
-        current_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
-        return f"[{current_time}] | ({elapsed_time_str})"
-
-def reset_stop_flag():
-    with open("csvdb/stop.csv", "w+", newline="") as STOPCSVwrite:
-        csv_writer = csv.writer(STOPCSVwrite, delimiter=",")
-        csv_writer.writerow(["False"])
-
-def create_model(hps, model_f0, model_nof0):
-    filter_length_adjusted = hps.data.filter_length // 2 + 1
-    segment_size_adjusted = hps.train.segment_size // hps.data.hop_length
-    is_half = hps.train.fp16_run
-    sr = hps.sample_rate
-
-    model = model_f0 if hps.if_f0 == 1 else model_nof0
-
-    return model(
-        filter_length_adjusted,
-        segment_size_adjusted,
-        **hps.model,
-        is_half=is_half,
-        sr=sr
-    )
-
-def move_model_to_cuda_if_available(model, rank):
-    if torch.cuda.is_available():
-        return model.cuda(rank)
-    else:
-        return model
-
-def create_optimizer(model, hps):
-    return torch.optim.AdamW(
-        model.parameters(),
-        hps.train.learning_rate,
-        betas=hps.train.betas,
-        eps=hps.train.eps,
-    )
-
-def create_ddp_model(model, rank):
-    if torch.cuda.is_available():
-        return DDP(model, device_ids=[rank])
-    else:
-        return DDP(model)
-
-def create_dataset(hps, if_f0=True):
-    return TextAudioLoaderMultiNSFsid(hps.data.training_files, hps.data) if if_f0 else TextAudioLoader(hps.data.training_files, hps.data)
-
-def create_sampler(dataset, batch_size, n_gpus, rank):
-    return DistributedBucketSampler(
-        dataset,
-        batch_size * n_gpus,
-        # [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1200,1400],  # 16s
-        [100, 200, 300, 400, 500, 600, 700, 800, 900],  # 16s
-        num_replicas=n_gpus,
-        rank=rank,
-        shuffle=True,
-    )
-
-def set_collate_fn(if_f0=True):
-    return TextAudioCollateMultiNSFsid() if if_f0 else TextAudioCollate()
-
-
-def main():
-    n_gpus = torch.cuda.device_count()
-
-    if torch.cuda.is_available() == False and torch.backends.mps.is_available() == True:
-        n_gpus = 1
-    if n_gpus < 1:
-        # patch to unblock people without gpus. there is probably a better way.
-        logger.warn("NO GPU DETECTED: falling back to CPU - this may take a while")
-        n_gpus = 1
-    os.environ["MASTER_ADDR"] = "localhost"
-    os.environ["MASTER_PORT"] = str(randint(20000, 55555))
-    children = []
-    for i in range(n_gpus):
-        subproc = mp.Process(
-            target=run,
-            args=(
-                i,
-                n_gpus,
-                hps,
-            ),
-        )
-        children.append(subproc)
-        subproc.start()
-
-    for i in range(n_gpus):
-        children[i].join()
-
-
-def run(rank, n_gpus, hps):
-    global global_step
-    if rank == 0:
-        logger = utils.get_logger(hps.model_dir)
-        logger.info(hps)
-        # utils.check_git_hash(hps.model_dir)
-        writer = SummaryWriter(log_dir=hps.model_dir)
-        writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-
-    dist.init_process_group(
-        backend="gloo", init_method="env://", world_size=n_gpus, rank=rank
-    )
-    torch.manual_seed(hps.train.seed)
-    if torch.cuda.is_available():
-        torch.cuda.set_device(rank)
-
-    if hps.if_f0 == 1:
-        train_dataset = TextAudioLoaderMultiNSFsid(hps.data.training_files, hps.data)
-    else:
-        train_dataset = TextAudioLoader(hps.data.training_files, hps.data)
-    train_sampler = DistributedBucketSampler(
-        train_dataset,
-        hps.train.batch_size * n_gpus,
-        # [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1200,1400],  # 16s
-        [100, 200, 300, 400, 500, 600, 700, 800, 900],  # 16s
-        num_replicas=n_gpus,
-        rank=rank,
-        shuffle=True,
-    )
-    # It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
-    # num_workers=8 -> num_workers=4
-    if hps.if_f0 == 1:
-        collate_fn = TextAudioCollateMultiNSFsid()
-    else:
-        collate_fn = TextAudioCollate()
-    train_loader = DataLoader(
-        train_dataset,
-        num_workers=4,
-        shuffle=False,
-        pin_memory=True,
-        collate_fn=collate_fn,
-        batch_sampler=train_sampler,
-        persistent_workers=True,
-        prefetch_factor=8,
-    )
-    if hps.if_f0 == 1:
-        net_g = RVC_Model_f0(
-            hps.data.filter_length // 2 + 1,
-            hps.train.segment_size // hps.data.hop_length,
-            **hps.model,
-            is_half=hps.train.fp16_run,
-            sr=hps.sample_rate,
-        )
-    else:
-        net_g = RVC_Model_nof0(
-            hps.data.filter_length // 2 + 1,
-            hps.train.segment_size // hps.data.hop_length,
-            **hps.model,
-            is_half=hps.train.fp16_run,
-        )
-    if torch.cuda.is_available():
-        net_g = net_g.cuda(rank)
-    net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm)
-    if torch.cuda.is_available():
-        net_d = net_d.cuda(rank)
-    optim_g = torch.optim.AdamW(
-        net_g.parameters(),
-        hps.train.learning_rate,
-        betas=hps.train.betas,
-        eps=hps.train.eps,
-    )
-    optim_d = torch.optim.AdamW(
-        net_d.parameters(),
-        hps.train.learning_rate,
-        betas=hps.train.betas,
-        eps=hps.train.eps,
-    )
-    # net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True)
-    # net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True)
-    if hasattr(torch, "xpu") and torch.xpu.is_available():
-        pass
-    elif torch.cuda.is_available():
-        net_g = DDP(net_g, device_ids=[rank])
-        net_d = DDP(net_d, device_ids=[rank])
-    else:
-        net_g = DDP(net_g)
-        net_d = DDP(net_d)
-
-    try:  # resume automatically if a checkpoint can be loaded
-        _, _, _, epoch_str = utils.load_checkpoint(
-            utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d
-        )  # loading D usually works
-        if rank == 0:
-            logger.info("loaded D")
-        # _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g,load_opt=0)
-        _, _, _, epoch_str = utils.load_checkpoint(
-            utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g
-        )
-        global_step = (epoch_str - 1) * len(train_loader)
-        # epoch_str = 1
-        # global_step = 0
-    except:  # nothing to resume on the first run, so load the pretrained weights
-        # traceback.print_exc()
-        epoch_str = 1
-        global_step = 0
-        if hps.pretrainG != "":
-            if rank == 0:
-                logger.info("loaded pretrained %s" % (hps.pretrainG))
-            if hasattr(net_g, "module"):
-                logger.info(
-                    net_g.module.load_state_dict(
-                        torch.load(hps.pretrainG, map_location="cpu")["model"]
-                    )
-                )  ## experiment: do not load the optimizer state
-            else:
-                logger.info(
-                    net_g.load_state_dict(
-                        torch.load(hps.pretrainG, map_location="cpu")["model"]
-                    )
-                )  ## experiment: do not load the optimizer state
-        if hps.pretrainD != "":
-            if rank == 0:
-                logger.info("loaded pretrained %s" % (hps.pretrainD))
-            if hasattr(net_d, "module"):
-                logger.info(
-                    net_d.module.load_state_dict(
-                        torch.load(hps.pretrainD, map_location="cpu")["model"]
-                    )
-                )
-            else:
-                logger.info(
-                    net_d.load_state_dict(
-                        torch.load(hps.pretrainD, map_location="cpu")["model"]
-                    )
-                )
-
-    scheduler_g = torch.optim.lr_scheduler.ExponentialLR(
-        optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2
-    )
-    scheduler_d = torch.optim.lr_scheduler.ExponentialLR(
-        optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2
-    )
-
-    scaler = GradScaler(enabled=hps.train.fp16_run)
-
-    cache = []
-    for epoch in range(epoch_str, hps.train.epochs + 1):
-        if rank == 0:
-            train_and_evaluate(
-                rank,
-                epoch,
-                hps,
-                [net_g, net_d],
-                [optim_g, optim_d],
-                [scheduler_g, scheduler_d],
-                scaler,
-                [train_loader, None],
-                logger,
-                [writer, writer_eval],
-                cache,
-            )
-        else:
-            train_and_evaluate(
-                rank,
-                epoch,
-                hps,
-                [net_g, net_d],
-                [optim_g, optim_d],
-                [scheduler_g, scheduler_d],
-                scaler,
-                [train_loader, None],
-                None,
-                None,
-                cache,
-            )
-        scheduler_g.step()
-        scheduler_d.step()
-
-
-def train_and_evaluate(
-    rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers, cache
-):
-    net_g, net_d = nets
-    optim_g, optim_d = optims
-    train_loader, eval_loader = loaders
-    if writers is not None:
-        writer, writer_eval = writers
-
-    train_loader.batch_sampler.set_epoch(epoch)
-    global global_step
-
-    net_g.train()
-    net_d.train()
-
-    # Prepare data iterator
-    if hps.if_cache_data_in_gpu == True:
-        # Use Cache
-        data_iterator = cache
-        if cache == []:
-            # Make new cache
-            for batch_idx, info in enumerate(train_loader):
-                # Unpack
-                if hps.if_f0 == 1:
-                    (
-                        phone,
-                        phone_lengths,
-                        pitch,
-                        pitchf,
-                        spec,
-                        spec_lengths,
-                        wave,
-                        wave_lengths,
-                        sid,
-                    ) = info
-                else:
-                    (
-                        phone,
-                        phone_lengths,
-                        spec,
-                        spec_lengths,
-                        wave,
-                        wave_lengths,
-                        sid,
-                    ) = info
-                # Load on CUDA
-                if torch.cuda.is_available():
-                    phone = phone.cuda(rank, non_blocking=True)
-                    phone_lengths = phone_lengths.cuda(rank, non_blocking=True)
-                    if hps.if_f0 == 1:
-                        pitch = pitch.cuda(rank, non_blocking=True)
-                        pitchf = pitchf.cuda(rank, non_blocking=True)
-                    sid = sid.cuda(rank, non_blocking=True)
-                    spec = spec.cuda(rank, non_blocking=True)
-                    spec_lengths = spec_lengths.cuda(rank, non_blocking=True)
-                    wave = wave.cuda(rank, non_blocking=True)
-                    wave_lengths = wave_lengths.cuda(rank, non_blocking=True)
-                # Cache on list
-                if hps.if_f0 == 1:
-                    cache.append(
-                        (
-                            batch_idx,
-                            (
-                                phone,
-                                phone_lengths,
-                                pitch,
-                                pitchf,
-                                spec,
-                                spec_lengths,
-                                wave,
-                                wave_lengths,
-                                sid,
-                            ),
-                        )
-                    )
-                else:
-                    cache.append(
-                        (
-                            batch_idx,
-                            (
-                                phone,
-                                phone_lengths,
-                                spec,
-                                spec_lengths,
-                                wave,
-                                wave_lengths,
-                                sid,
-                            ),
-                        )
-                    )
-        else:
-            # Load shuffled cache
-            shuffle(cache)
-    else:
-        # Loader
-        data_iterator = enumerate(train_loader)
-
-    # Run steps
-    epoch_recorder = EpochRecorder()
-    for batch_idx, info in data_iterator:
-        # Data
-        ## Unpack
-        if hps.if_f0 == 1:
-            (
-                phone,
-                phone_lengths,
-                pitch,
-                pitchf,
-                spec,
-                spec_lengths,
-                wave,
-                wave_lengths,
-                sid,
-            ) = info
-        else:
-            phone, phone_lengths, spec, spec_lengths, wave, wave_lengths, sid = info
-        ## Load on CUDA
-        if (hps.if_cache_data_in_gpu == False) and torch.cuda.is_available():
-            phone = phone.cuda(rank, non_blocking=True)
-            phone_lengths = phone_lengths.cuda(rank, non_blocking=True)
-            if hps.if_f0 == 1:
-                pitch = pitch.cuda(rank, non_blocking=True)
-                pitchf = pitchf.cuda(rank, non_blocking=True)
-            sid = sid.cuda(rank, non_blocking=True)
-            spec = spec.cuda(rank, non_blocking=True)
-            spec_lengths = spec_lengths.cuda(rank, non_blocking=True)
-            wave = wave.cuda(rank, non_blocking=True)
-            # wave_lengths = wave_lengths.cuda(rank, non_blocking=True)
-
-        # Calculate
-        with autocast(enabled=hps.train.fp16_run):
-            if hps.if_f0 == 1:
-                (
-                    y_hat,
-                    ids_slice,
-                    x_mask,
-                    z_mask,
-                    (z, z_p, m_p, logs_p, m_q, logs_q),
-                ) = net_g(phone, phone_lengths, pitch, pitchf, spec, spec_lengths, sid)
-            else:
-                (
-                    y_hat,
-                    ids_slice,
-                    x_mask,
-                    z_mask,
-                    (z, z_p, m_p, logs_p, m_q, logs_q),
-                ) = net_g(phone, phone_lengths, spec, spec_lengths, sid)
-            mel = spec_to_mel_torch(
-                spec,
-                hps.data.filter_length,
-                hps.data.n_mel_channels,
-                hps.data.sampling_rate,
-                hps.data.mel_fmin,
-                hps.data.mel_fmax,
-            )
-            y_mel = commons.slice_segments(
-                mel, ids_slice, hps.train.segment_size // hps.data.hop_length
-            )
-            with autocast(enabled=False):
-                y_hat_mel = mel_spectrogram_torch(
-                    y_hat.float().squeeze(1),
-                    hps.data.filter_length,
-                    hps.data.n_mel_channels,
-                    hps.data.sampling_rate,
-                    hps.data.hop_length,
-                    hps.data.win_length,
-                    hps.data.mel_fmin,
-                    hps.data.mel_fmax,
-                )
-            if hps.train.fp16_run == True:
-                y_hat_mel = y_hat_mel.half()
-            wave = commons.slice_segments(
-                wave, ids_slice * hps.data.hop_length, hps.train.segment_size
-            )  # slice
-
-            # Discriminator
-            y_d_hat_r, y_d_hat_g, _, _ = net_d(wave, y_hat.detach())
-            with autocast(enabled=False):
-                loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(
-                    y_d_hat_r, y_d_hat_g
-                )
-        optim_d.zero_grad()
-        scaler.scale(loss_disc).backward()
-        scaler.unscale_(optim_d)
-        grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
-        scaler.step(optim_d)
-
-        with autocast(enabled=hps.train.fp16_run):
-            # Generator
-            y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(wave, y_hat)
-            with autocast(enabled=False):
-                loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
-                loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
-                loss_fm = feature_loss(fmap_r, fmap_g)
-                loss_gen, losses_gen = generator_loss(y_d_hat_g)
-                loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl
-        optim_g.zero_grad()
-        scaler.scale(loss_gen_all).backward()
-        scaler.unscale_(optim_g)
-        grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
-        scaler.step(optim_g)
-        scaler.update()
-
-        if rank == 0:
-            if global_step % hps.train.log_interval == 0:
-                lr = optim_g.param_groups[0]["lr"]
-                logger.info(
-                    "Train Epoch: {} [{:.0f}%]".format(
-                        epoch, 100.0 * batch_idx / len(train_loader)
-                    )
-                )
-                # Amor For Tensorboard display
-                if loss_mel > 75:
-                    loss_mel = 75
-                if loss_kl > 9:
-                    loss_kl = 9
-
-                logger.info([global_step, lr])
-                logger.info(
-                    f"loss_disc={loss_disc:.3f}, loss_gen={loss_gen:.3f}, loss_fm={loss_fm:.3f},loss_mel={loss_mel:.3f}, loss_kl={loss_kl:.3f}"
-                )
-                scalar_dict = {
-                    "loss/g/total": loss_gen_all,
-                    "loss/d/total": loss_disc,
-                    "learning_rate": lr,
-                    "grad_norm_d": grad_norm_d,
-                    "grad_norm_g": grad_norm_g,
-                }
-                scalar_dict.update(
-                    {
-                        "loss/g/fm": loss_fm,
-                        "loss/g/mel": loss_mel,
-                        "loss/g/kl": loss_kl,
-                    }
-                )
-
-                scalar_dict.update(
-                    {"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}
-                )
-                scalar_dict.update(
-                    {"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}
-                )
-                scalar_dict.update(
-                    {"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}
-                )
-                image_dict = {
-                    "slice/mel_org": utils.plot_spectrogram_to_numpy(
-                        y_mel[0].data.cpu().numpy()
-                    ),
-                    "slice/mel_gen": utils.plot_spectrogram_to_numpy(
-                        y_hat_mel[0].data.cpu().numpy()
-                    ),
-                    "all/mel": utils.plot_spectrogram_to_numpy(
-                        mel[0].data.cpu().numpy()
-                    ),
-                }
-                utils.summarize(
-                    writer=writer,
-                    global_step=global_step,
-                    images=image_dict,
-                    scalars=scalar_dict,
-                )
-        global_step += 1
-    # /Run steps
-
-    if epoch % hps.save_every_epoch == 0 and rank == 0:
-        if hps.if_latest == 0:
-            utils.save_checkpoint(
-                net_g,
-                optim_g,
-                hps.train.learning_rate,
-                epoch,
-                os.path.join(hps.model_dir, "G_{}.pth".format(global_step)),
-            )
-            utils.save_checkpoint(
-                net_d,
-                optim_d,
-                hps.train.learning_rate,
-                epoch,
-                os.path.join(hps.model_dir, "D_{}.pth".format(global_step)),
-            )
-        else:
-            utils.save_checkpoint(
-                net_g,
-                optim_g,
-                hps.train.learning_rate,
-                epoch,
-                os.path.join(hps.model_dir, "G_{}.pth".format(2333333)),
-            )
-            utils.save_checkpoint(
-                net_d,
-                optim_d,
-                hps.train.learning_rate,
-                epoch,
-                os.path.join(hps.model_dir, "D_{}.pth".format(2333333)),
-            )
-    if rank == 0 and hps.save_every_weights == "1":
-        if hasattr(net_g, "module"):
-            ckpt = net_g.module.state_dict()
-        else:
-            ckpt = net_g.state_dict()
-        logger.info(
-            "saving ckpt %s_e%s:%s"
-            % (
-                hps.name,
-                epoch,
-                savee(
-                    ckpt,
-                    hps.sample_rate,
-                    hps.if_f0,
-                    hps.name + "_e%s_s%s" % (epoch, global_step),
-                    epoch,
-                    hps.version,
-                    hps,
-                ),
-            )
-        )
-
-    stopbtn = False
-    try:
-        with open("csvdb/stop.csv", 'r') as csv_file:
-            stopbtn_str = next(csv.reader(csv_file), [None])[0]
-            if stopbtn_str is not None: stopbtn = stopbtn_str.lower() == 'true'
-    except (ValueError, TypeError, FileNotFoundError, IndexError) as e:
-        print(f"Handling exception: {e}")
-        stopbtn = False
-
-    if stopbtn:
-        logger.info("Stop Button was pressed. The program is closed.")
-        ckpt = net_g.module.state_dict() if hasattr(net_g, "module") else net_g.state_dict()
-        logger.info(
-            "saving final ckpt:%s"
-            % (
-                savee(
-                    ckpt, hps.sample_rate, hps.if_f0, hps.name, epoch, hps.version, hps
-                )
-            )
-        )
-        sleep(1)
-        reset_stop_flag()
-        os._exit(2333333)
-
-    if rank == 0:
-        logger.info("====> Epoch: {} {}".format(epoch, epoch_recorder.record()))
-    if epoch >= hps.total_epoch and rank == 0:
-        logger.info("Training is done. The program is closed.")
-
-        if hasattr(net_g, "module"):
-            ckpt = net_g.module.state_dict()
-        else:
-            ckpt = net_g.state_dict()
-        logger.info(
-            "saving final ckpt:%s"
-            % (
-                savee(
-                    ckpt, hps.sample_rate, hps.if_f0, hps.name, epoch, hps.version, hps
-                )
-            )
-        )
-        sleep(1)
-        os._exit(2333333)
-
-
-if __name__ == "__main__":
-    torch.multiprocessing.set_start_method("spawn")
-    main()
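
One detail worth calling out: the trainer above has no direct channel to the web UI, so it polls csvdb/stop.csv once per epoch as a crude cross-process stop flag, and reset_stop_flag() rewrites the file to "False" once a stop has been honored. A minimal sketch of that flag protocol in isolation; request_stop is a hypothetical name for the UI-side writer, which is not part of this file:

import csv
import os

FLAG = "csvdb/stop.csv"  # same path the trainer polls once per epoch

def request_stop():
    # Hypothetical UI-side writer: overwrite the one-row CSV with "True".
    os.makedirs(os.path.dirname(FLAG), exist_ok=True)
    with open(FLAG, "w+", newline="") as f:
        csv.writer(f, delimiter=",").writerow(["True"])

def stop_requested() -> bool:
    # Trainer-side reader: first cell of the first row; False on any problem.
    try:
        with open(FLAG, "r") as f:
            first = next(csv.reader(f), [None])[0]
            return first is not None and first.lower() == "true"
    except (ValueError, TypeError, FileNotFoundError, IndexError):
        return False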
spaces/Benson/text-generation/Examples/Bowmasters Apk All Characters Unlocked 2022.md
DELETED
@@ -1,61 +0,0 @@
-
-<h1>Bowmasters APK all characters unlocked 2022</h1>
-<p>Do you like archery games? Want to try a fun, addictive and action-packed game? Then, you will love <strong>Bowmasters</strong>, an archery game in which you can choose from more than 60 different characters and compete against other players or artificial intelligence. But what if you want to play with all the characters from the beginning? Or do you want unlimited coins to buy upgrades and customize your experience? In that case, you need to download <strong>Bowmasters MOD APK</strong>, a modified version of the game that offers you all the unlocked characters and other benefits. In this article, we tell you everything you need to know about Bowmasters and how to download and install its mod apk on your Android device.</p>
-<h2>What is Bowmasters? </h2>
-<p>Bowmasters is an archery game developed by Playgendary, a company known for creating casual and fun games for mobile devices. Bowmasters launched in 2016 and has since accumulated over 100 million downloads on the Google Play Store, where it has a rating of 4.5 stars. The game is also available for iOS and has a web version. </p>
-<h2>bowmasters apk all characters unlocked 2022</h2><br /><p><b><b>Download File</b> >>>>> <a href="https://bltlly.com/2v6M24">https://bltlly.com/2v6M24</a></b></p><br /><br />
-<h3>A fun and addictive archery game</h3>
-<p>The objective of the game is simple: you must aim and shoot your bow or weapon towards your opponent, trying to hit him in the head or body to reduce his life bar. The game has realistic physics and colorful cartoon graphics that make each shot a fun and bloody experience. In addition, the game has some sound effects and voices that give more humor and personality to the game. </p>
-<h3>More than 60 unique characters to choose from</h3>
-
-<h3>Varied and challenging game modes</h3>
-<p>Bowmasters also offers several game modes so you never get bored. You can play against artificial intelligence in duel mode, where you can face different opponents and unlock new characters and weapons. You can also play against other players online in multiplayer mode, where you can prove your skill and earn rewards. You can also try the tournament mode, where you must pass several rounds and reach the final. Or if you prefer something more relaxed, you can play target shooting mode, where you must hit different targets with your bow or gun. And if you want something more fun, you can play rubber duck mode, where you must shoot some rubber ducks floating in the water. </p>
-<h2>Why download Bowmasters MOD APK? </h2>
-<p>Bowmasters is a very fun and addictive game, but it also has some drawbacks. For example, to unlock all the characters and weapons, you must play for a long time or spend real money on in-app purchases. In addition, the game has many ads that can interrupt your fun and consume your mobile data. So if you want to enjoy Bowmasters to the fullest, we recommend that you download Bowmasters MOD APK, a modified version of the game that offers several benefits. </p>
-<h3>Characters unlocked from the beginning</h3>
-<p>One of the most important benefits of Bowmasters MOD APK is that it allows you to play with all the characters from the beginning, without having to unlock them one by one. Thus, you can choose the character that you like best or that best suits your style of play. In addition, you can try all the weapons and special abilities that each character has. This will give you an advantage over your opponents and make the game more varied and fun. </p>
-<h3>Unlimited currencies to buy upgrades</h3>
-
-<p>Finally, Bowmasters MOD APK frees you from the annoying ads and in-app purchases that the original game has. Thus, you can play without interruptions or distractions, and without spending real money on the game. Plus, you can save your mobile data and battery by not having to watch or download ads. This will make your gaming experience more fluid and enjoyable. </p>
-<p></p>
-<h2>How to download and install Bowmasters MOD APK? </h2>
-<p>Now that you know what Bowmasters is and why you should download its mod apk, we explain how to download it and install it on your Android device. It is very easy and will only take a few minutes. Just follow these steps:</p>
-<h3>Step 1: Download the APK file from a trusted website</h3>
-<p>The first thing to do is to download the Bowmasters MOD APK file from a reliable website. There are many websites that offer these types of files, but not all of them are secure or updated. Therefore, we recommend that you use a website like [APKPure] or [APKMirror], where you can find the latest version of Bowmasters MOD APK with all the unlocked characters and other benefits. </p>
-<h3>Step 2: Enable unknown sources on your device</h3>
-<p>The second thing to do is to enable the option of unknown sources on your Android device. This option allows you to install applications that do not come from the Google Play Store, such as Bowmasters MOD APK. To enable it, you just need to go to your device’s settings, then to security or privacy, and then enable the option of unknown sources or allow installation from unknown sources. </p>
-<h3>Step 3: Install the APK file and open the game</h3>
-
-<h2>Conclusion</h2>
-<p>Bowmasters is a very fun and addictive archery game, offering you more than 60 different characters, each with their own bow or weapon, their own special skill and their own personality. In addition, it has several game modes so you never get bored, such as duel mode, multiplayer mode, tournament mode, target shooting mode and rubber duck mode. However, if you want to play with all the characters from the beginning, have unlimited coins to buy upgrades and customize your experience, and get rid of annoying ads and in-app purchases, we recommend that you download Bowmasters MOD APK, a modified version of the game that gives you all these benefits. Just follow the steps we have explained in this article and you can enjoy Bowmasters with all the unlocked characters on your Android device.</p>
-<h4>FAQ</h4>
-<p>Here are some of the most frequently asked questions about Bowmasters and its apk mod:</p>
-<table>
-<tr>
-<th>Question</th>
-<th>Answer</th>
-</tr>
-<tr>
-<td>Is it safe to download Bowmasters MOD APK? </td>
-<td>Yes, as long as you download it from a reliable website like APKPure or APKMirror, where you can find the latest version of Bowmasters MOD APK with all the unlocked characters and other benefits. These websites check the files they offer and update them constantly. </td>
-</tr>
-<tr>
-<td>Do I need to root my device to install Bowmasters MOD APK? </td>
-<td>No, you don’t need to root your device to install Bowmasters MOD APK. You just need to enable the option of unknown sources on your Android device, as we have explained in this article. </td>
-</tr>
-<tr>
-<td>Can I play online with Bowmasters MOD APK? </td>
-
-</tr>
-<tr>
-<td>Can I upgrade Bowmasters MOD APK? </td>
-<td>Yes, you can upgrade Bowmasters MOD APK when a new version is available. However, you should keep in mind that when updating the game you may lose some of the benefits offered by the apk mod, such as unlocked characters or unlimited coins. Therefore, we recommend that you wait for a new version of the apk mod before updating the game. </td>
-</tr>
-<tr>
-<td>What other games similar to Bowmasters can I try? </td>
-<td>If you like Bowmasters, you might also like other similar archery or casual action games, such as Archero, Kick the Buddy, Mr Bullet, Angry Birds 2 or Fruit Ninja.</td>
-</tr>
-</table></p> 64aa2da5cf<br />
-<br />
-<br />
spaces/CVH-vn1210/make_hair/minigpt4/processors/__init__.py
DELETED
@@ -1,33 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-from minigpt4.processors.base_processor import BaseProcessor
-from minigpt4.processors.blip_processors import (
-    Blip2ImageTrainProcessor,
-    Blip2ImageEvalProcessor,
-    BlipCaptionProcessor,
-)
-
-from minigpt4.common.registry import registry
-
-__all__ = [
-    "BaseProcessor",
-    "Blip2ImageTrainProcessor",
-    "Blip2ImageEvalProcessor",
-    "BlipCaptionProcessor",
-]
-
-
-def load_processor(name, cfg=None):
-    """
-    Example
-
-    >>> processor = load_processor("alpro_video_train", cfg=None)
-    """
-    processor = registry.get_processor_class(name).from_config(cfg)
-
-    return processor
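
load_processor above resolves a string name to a processor class through minigpt4's registry and instantiates it via from_config. A minimal sketch of the name-to-class registry pattern this relies on; the Registry class and register_processor decorator below are illustrative stand-ins, not the repo's actual implementation:

# Sketch of a name-to-class registry; only get_processor_class and
# from_config are taken from the file above, the rest is assumed.
class Registry:
    def __init__(self):
        self._processors = {}

    def register_processor(self, name):
        # Decorator that records a class under a string key.
        def wrap(cls):
            self._processors[name] = cls
            return cls
        return wrap

    def get_processor_class(self, name):
        return self._processors[name]

registry = Registry()

@registry.register_processor("identity")
class IdentityProcessor:
    @classmethod
    def from_config(cls, cfg=None):
        return cls()

# Same lookup shape as load_processor in the deleted file:
processor = registry.get_processor_class("identity").from_config(None)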
spaces/CVMX-jaca-tonos/YouTube-Video-Streaming-Spanish-ASR/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: YouTube Video Spanish ASR
-emoji: ⚡
-colorFrom: blue
-colorTo: gray
-sdk: streamlit
-sdk_version: 1.2.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
spaces/CVPR/LIVE/thrust/thrust/scan.h
DELETED
@@ -1,1564 +0,0 @@
|
|
1 |
-
/*
|
2 |
-
* Copyright 2008-2013 NVIDIA Corporation
|
3 |
-
*
|
4 |
-
* Licensed under the Apache License, Version 2.0 (the "License");
|
5 |
-
* you may not use this file except in compliance with the License.
|
6 |
-
* You may obtain a copy of the License at
|
7 |
-
*
|
8 |
-
* http://www.apache.org/licenses/LICENSE-2.0
|
9 |
-
*
|
10 |
-
* Unless required by applicable law or agreed to in writing, software
|
11 |
-
* distributed under the License is distributed on an "AS IS" BASIS,
|
12 |
-
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
13 |
-
* See the License for the specific language governing permissions and
|
14 |
-
* limitations under the License.
|
15 |
-
*/
|
16 |
-
|
17 |
-
|
18 |
-
/*! \file scan.h
|
19 |
-
* \brief Functions for computing prefix sums
|
20 |
-
*/
|
21 |
-
|
22 |
-
#pragma once
|
23 |
-
|
24 |
-
#include <thrust/detail/config.h>
|
25 |
-
#include <thrust/detail/execution_policy.h>
|
26 |
-
|
27 |
-
namespace thrust
|
28 |
-
{
|
29 |
-
|
30 |
-
|
31 |
-
/*! \addtogroup algorithms
|
32 |
-
*/
|
33 |
-
|
34 |
-
|
35 |
-
/*! \addtogroup prefixsums Prefix Sums
|
36 |
-
* \ingroup algorithms
|
37 |
-
* \{
|
38 |
-
*/
|
39 |
-
|
40 |
-
|
41 |
-
/*! \p inclusive_scan computes an inclusive prefix sum operation. The
|
42 |
-
* term 'inclusive' means that each result includes the corresponding
|
43 |
-
* input operand in the partial sum. More precisely, <tt>*first</tt> is
|
44 |
-
* assigned to <tt>*result</tt> and the sum of <tt>*first</tt> and
|
45 |
-
* <tt>*(first + 1)</tt> is assigned to <tt>*(result + 1)</tt>, and so on.
|
46 |
-
* This version of \p inclusive_scan assumes plus as the associative operator.
|
47 |
-
* When the input and output sequences are the same, the scan is performed
|
48 |
-
* in-place.
|
49 |
-
|
50 |
-
* \p inclusive_scan is similar to \c std::partial_sum in the STL. The primary
|
51 |
-
* difference between the two functions is that \c std::partial_sum guarantees
|
52 |
-
* a serial summation order, while \p inclusive_scan requires associativity of
|
53 |
-
* the binary operation to parallelize the prefix sum.
|
54 |
-
*
|
55 |
-
* The algorithm's execution is parallelized as determined by \p exec.
|
56 |
-
*
|
57 |
-
* \param exec The execution policy to use for parallelization.
|
58 |
-
* \param first The beginning of the input sequence.
|
59 |
-
* \param last The end of the input sequence.
|
60 |
-
* \param result The beginning of the output sequence.
|
61 |
-
* \return The end of the output sequence.
|
62 |
-
*
|
63 |
-
* \tparam DerivedPolicy The name of the derived execution policy.
|
64 |
-
* \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
|
65 |
-
* and \c InputIterator's \c value_type is convertible to
|
66 |
-
* \c OutputIterator's \c value_type.
|
67 |
-
* \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
|
68 |
-
* and if \c x and \c y are objects of \c OutputIterator's
|
69 |
-
* \c value_type, then <tt>x + y</tt> is defined. If \c T is
|
70 |
-
* \c OutputIterator's \c value_type, then <tt>T(0)</tt> is
|
71 |
-
* defined.
|
72 |
-
*
|
73 |
-
* \pre \p first may equal \p result but the range <tt>[first, last)</tt> and the range <tt>[result, result + (last - first))</tt> shall not overlap otherwise.
|
74 |
-
*
|
75 |
-
* The following code snippet demonstrates how to use \p inclusive_scan to compute an in-place
|
76 |
-
* prefix sum using the \p thrust::host execution policy for parallelization:
|
77 |
-
*
|
78 |
-
* \code
|
79 |
-
* #include <thrust/scan.h>
|
80 |
-
* #include <thrust/execution_policy.h>
|
81 |
-
* ...
|
82 |
-
*
|
83 |
-
* int data[6] = {1, 0, 2, 2, 1, 3};
|
84 |
-
*
|
85 |
-
* thrust::inclusive_scan(thrust::host, data, data + 6, data); // in-place scan
|
86 |
-
*
|
87 |
-
* // data is now {1, 1, 3, 5, 6, 9}
|
88 |
-
* \endcode
|
89 |
-
*
|
90 |
-
* \see http://www.sgi.com/tech/stl/partial_sum.html
|
91 |
-
*
|
92 |
-
*/
|
93 |
-
template<typename DerivedPolicy,
|
94 |
-
typename InputIterator,
|
95 |
-
typename OutputIterator>
|
96 |
-
__host__ __device__
|
97 |
-
OutputIterator inclusive_scan(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
|
98 |
-
InputIterator first,
|
99 |
-
InputIterator last,
|
100 |
-
OutputIterator result);
|
101 |
-
|
102 |
-
|
103 |
-
/*! \p inclusive_scan computes an inclusive prefix sum operation. The
|
104 |
-
* term 'inclusive' means that each result includes the corresponding
|
105 |
-
* input operand in the partial sum. More precisely, <tt>*first</tt> is
|
106 |
-
* assigned to <tt>*result</tt> and the sum of <tt>*first</tt> and
|
107 |
-
* <tt>*(first + 1)</tt> is assigned to <tt>*(result + 1)</tt>, and so on.
|
108 |
-
* This version of \p inclusive_scan assumes plus as the associative operator.
|
109 |
-
* When the input and output sequences are the same, the scan is performed
|
110 |
-
* in-place.
|
111 |
-
|
112 |
-
* \p inclusive_scan is similar to \c std::partial_sum in the STL. The primary
|
113 |
-
* difference between the two functions is that \c std::partial_sum guarantees
|
114 |
-
* a serial summation order, while \p inclusive_scan requires associativity of
|
115 |
-
* the binary operation to parallelize the prefix sum.
|
116 |
-
*
|
117 |
-
* \param first The beginning of the input sequence.
|
118 |
-
* \param last The end of the input sequence.
|
119 |
-
* \param result The beginning of the output sequence.
|
120 |
-
* \return The end of the output sequence.
|
121 |
-
*
|
122 |
-
* \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
|
123 |
-
* and \c InputIterator's \c value_type is convertible to
|
124 |
-
* \c OutputIterator's \c value_type.
|
125 |
-
* \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
|
126 |
-
* and if \c x and \c y are objects of \c OutputIterator's
|
127 |
-
* \c value_type, then <tt>x + y</tt> is defined. If \c T is
|
128 |
-
* \c OutputIterator's \c value_type, then <tt>T(0)</tt> is
|
129 |
-
* defined.
|
130 |
-
*
|
131 |
-
* \pre \p first may equal \p result but the range <tt>[first, last)</tt> and the range <tt>[result, result + (last - first))</tt> shall not overlap otherwise.
|
132 |
-
*
|
133 |
-
* The following code snippet demonstrates how to use \p inclusive_scan
|
134 |
-
*
|
135 |
-
* \code
|
136 |
-
* #include <thrust/scan.h>
|
137 |
-
*
|
138 |
-
* int data[6] = {1, 0, 2, 2, 1, 3};
|
139 |
-
*
|
140 |
-
* thrust::inclusive_scan(data, data + 6, data); // in-place scan
|
141 |
-
*
|
142 |
-
* // data is now {1, 1, 3, 5, 6, 9}
|
143 |
-
* \endcode
|
144 |
-
*
|
145 |
-
* \see http://www.sgi.com/tech/stl/partial_sum.html
|
146 |
-
*
|
147 |
-
*/
|
148 |
-
template<typename InputIterator,
|
149 |
-
typename OutputIterator>
|
150 |
-
OutputIterator inclusive_scan(InputIterator first,
|
151 |
-
InputIterator last,
|
152 |
-
OutputIterator result);
|
153 |
-
|
154 |
-
- /*! \p inclusive_scan computes an inclusive prefix sum operation. The
- * term 'inclusive' means that each result includes the corresponding
- * input operand in the partial sum. When the input and output sequences
- * are the same, the scan is performed in-place.
- *
- * \p inclusive_scan is similar to \c std::partial_sum in the STL. The primary
- * difference between the two functions is that \c std::partial_sum guarantees
- * a serial summation order, while \p inclusive_scan requires associativity of
- * the binary operation to parallelize the prefix sum.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the input sequence.
- * \param last The end of the input sequence.
- * \param result The beginning of the output sequence.
- * \param binary_op The associative operator used to 'sum' values.
- * \return The end of the output sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator's \c value_type is convertible to
- *           \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>
- *           and \c OutputIterator's \c value_type is convertible to
- *           both \c AssociativeOperator's \c first_argument_type and
- *           \c second_argument_type.
- * \tparam AssociativeOperator is a model of <a href="http://www.sgi.com/tech/stl/BinaryFunction.html">Binary Function</a>
- *           and \c AssociativeOperator's \c result_type is
- *           convertible to \c OutputIterator's \c value_type.
- *
- * \pre \p first may equal \p result but the range <tt>[first, last)</tt> and the range <tt>[result, result + (last - first))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p inclusive_scan to compute an in-place
- * prefix sum using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- *
- * int data[10] = {-5, 0, 2, -3, 2, 4, 0, -1, 2, 8};
- *
- * thrust::maximum<int> binary_op;
- *
- * thrust::inclusive_scan(thrust::host, data, data + 10, data, binary_op); // in-place scan
- *
- * // data is now {-5, 0, 2, 2, 2, 4, 4, 4, 4, 8}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/partial_sum.html
- */
- template<typename DerivedPolicy,
-          typename InputIterator,
-          typename OutputIterator,
-          typename AssociativeOperator>
- __host__ __device__
- OutputIterator inclusive_scan(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                               InputIterator first,
-                               InputIterator last,
-                               OutputIterator result,
-                               AssociativeOperator binary_op);
-
-
- /*! \p inclusive_scan computes an inclusive prefix sum operation. The
- * term 'inclusive' means that each result includes the corresponding
- * input operand in the partial sum. When the input and output sequences
- * are the same, the scan is performed in-place.
- *
- * \p inclusive_scan is similar to \c std::partial_sum in the STL. The primary
- * difference between the two functions is that \c std::partial_sum guarantees
- * a serial summation order, while \p inclusive_scan requires associativity of
- * the binary operation to parallelize the prefix sum.
- *
- * \param first The beginning of the input sequence.
- * \param last The end of the input sequence.
- * \param result The beginning of the output sequence.
- * \param binary_op The associative operator used to 'sum' values.
- * \return The end of the output sequence.
- *
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator's \c value_type is convertible to
- *           \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>
- *           and \c OutputIterator's \c value_type is convertible to
- *           both \c AssociativeOperator's \c first_argument_type and
- *           \c second_argument_type.
- * \tparam AssociativeOperator is a model of <a href="http://www.sgi.com/tech/stl/BinaryFunction.html">Binary Function</a>
- *           and \c AssociativeOperator's \c result_type is
- *           convertible to \c OutputIterator's \c value_type.
- *
- * \pre \p first may equal \p result but the range <tt>[first, last)</tt> and the range <tt>[result, result + (last - first))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p inclusive_scan
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/functional.h>
- *
- * int data[10] = {-5, 0, 2, -3, 2, 4, 0, -1, 2, 8};
- *
- * thrust::maximum<int> binary_op;
- *
- * thrust::inclusive_scan(data, data + 10, data, binary_op); // in-place scan
- *
- * // data is now {-5, 0, 2, 2, 2, 4, 4, 4, 4, 8}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/partial_sum.html
- */
- template<typename InputIterator,
-          typename OutputIterator,
-          typename AssociativeOperator>
- OutputIterator inclusive_scan(InputIterator first,
-                               InputIterator last,
-                               OutputIterator result,
-                               AssociativeOperator binary_op);
-
-
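Any associative operation can stand in for \c binary_op, not just the functors shipped in <thrust/functional.h>. A small sketch with a user-defined functor, assuming compilation with nvcc; the \c gcd_op name and the data are illustrative, not part of this header:

#include <thrust/scan.h>

// gcd is associative, so it is a valid scan operator
struct gcd_op
{
  __host__ __device__
  int operator()(int a, int b) const
  {
    while (b != 0) { int t = a % b; a = b; b = t; }
    return a;
  }
};

int main()
{
  int data[6] = {12, 18, 30, 8, 20, 14};

  thrust::inclusive_scan(data, data + 6, data, gcd_op());

  // data is now {12, 6, 6, 2, 2, 2} -- the running gcd of each prefix
  return 0;
}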
- /*! \p exclusive_scan computes an exclusive prefix sum operation. The
- * term 'exclusive' means that each result does not include the
- * corresponding input operand in the partial sum. More precisely,
- * <tt>0</tt> is assigned to <tt>*result</tt> and the sum of
- * <tt>0</tt> and <tt>*first</tt> is assigned to <tt>*(result + 1)</tt>,
- * and so on. This version of \p exclusive_scan assumes plus as the
- * associative operator and \c 0 as the initial value. When the input and
- * output sequences are the same, the scan is performed in-place.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the input sequence.
- * \param last The end of the input sequence.
- * \param result The beginning of the output sequence.
- * \return The end of the output sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator's \c value_type is convertible to
- *           \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- *           and if \c x and \c y are objects of \c OutputIterator's
- *           \c value_type, then <tt>x + y</tt> is defined. If \c T is
- *           \c OutputIterator's \c value_type, then <tt>T(0)</tt> is
- *           defined.
- *
- * \pre \p first may equal \p result but the range <tt>[first, last)</tt> and the range <tt>[result, result + (last - first))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p exclusive_scan to compute an in-place
- * prefix sum using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/execution_policy.h>
- * ...
- *
- * int data[6] = {1, 0, 2, 2, 1, 3};
- *
- * thrust::exclusive_scan(thrust::host, data, data + 6, data); // in-place scan
- *
- * // data is now {0, 1, 1, 3, 5, 6}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/partial_sum.html
- */
- template<typename DerivedPolicy,
-          typename InputIterator,
-          typename OutputIterator>
- __host__ __device__
- OutputIterator exclusive_scan(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                               InputIterator first,
-                               InputIterator last,
-                               OutputIterator result);
-
-
- /*! \p exclusive_scan computes an exclusive prefix sum operation. The
- * term 'exclusive' means that each result does not include the
- * corresponding input operand in the partial sum. More precisely,
- * <tt>0</tt> is assigned to <tt>*result</tt> and the sum of
- * <tt>0</tt> and <tt>*first</tt> is assigned to <tt>*(result + 1)</tt>,
- * and so on. This version of \p exclusive_scan assumes plus as the
- * associative operator and \c 0 as the initial value. When the input and
- * output sequences are the same, the scan is performed in-place.
- *
- * \param first The beginning of the input sequence.
- * \param last The end of the input sequence.
- * \param result The beginning of the output sequence.
- * \return The end of the output sequence.
- *
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator's \c value_type is convertible to
- *           \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- *           and if \c x and \c y are objects of \c OutputIterator's
- *           \c value_type, then <tt>x + y</tt> is defined. If \c T is
- *           \c OutputIterator's \c value_type, then <tt>T(0)</tt> is
- *           defined.
- *
- * \pre \p first may equal \p result but the range <tt>[first, last)</tt> and the range <tt>[result, result + (last - first))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p exclusive_scan
- *
- * \code
- * #include <thrust/scan.h>
- *
- * int data[6] = {1, 0, 2, 2, 1, 3};
- *
- * thrust::exclusive_scan(data, data + 6, data); // in-place scan
- *
- * // data is now {0, 1, 1, 3, 5, 6}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/partial_sum.html
- */
- template<typename InputIterator,
-          typename OutputIterator>
- OutputIterator exclusive_scan(InputIterator first,
-                               InputIterator last,
-                               OutputIterator result);
-
-
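For orientation: an exclusive scan is simply the inclusive scan shifted right by one position, with the initial value placed in front. A minimal host sketch making that concrete (array names and values are illustrative):

#include <thrust/scan.h>

int main()
{
    int in[6] = {1, 0, 2, 2, 1, 3};
    int inc[6];
    int exc[6];

    thrust::inclusive_scan(in, in + 6, inc); // inc is {1, 1, 3, 5, 6, 9}
    thrust::exclusive_scan(in, in + 6, exc); // exc is {0, 1, 1, 3, 5, 6}

    // exc[0] == 0 and exc[i] == inc[i-1] for all i > 0
    return 0;
}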
- /*! \p exclusive_scan computes an exclusive prefix sum operation. The
- * term 'exclusive' means that each result does not include the
- * corresponding input operand in the partial sum. More precisely,
- * \p init is assigned to <tt>*result</tt> and the sum of \p init and
- * <tt>*first</tt> is assigned to <tt>*(result + 1)</tt>, and so on.
- * This version of \p exclusive_scan assumes plus as the associative
- * operator but requires an initial value \p init. When the input and
- * output sequences are the same, the scan is performed in-place.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the input sequence.
- * \param last The end of the input sequence.
- * \param result The beginning of the output sequence.
- * \param init The initial value.
- * \return The end of the output sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator's \c value_type is convertible to
- *           \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- *           and if \c x and \c y are objects of \c OutputIterator's
- *           \c value_type, then <tt>x + y</tt> is defined.
- * \tparam T is convertible to \c OutputIterator's \c value_type.
- *
- * \pre \p first may equal \p result but the range <tt>[first, last)</tt> and the range <tt>[result, result + (last - first))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p exclusive_scan to compute an in-place
- * prefix sum using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/execution_policy.h>
- *
- * int data[6] = {1, 0, 2, 2, 1, 3};
- *
- * thrust::exclusive_scan(thrust::host, data, data + 6, data, 4); // in-place scan
- *
- * // data is now {4, 5, 5, 7, 9, 10}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/partial_sum.html
- */
- template<typename DerivedPolicy,
-          typename InputIterator,
-          typename OutputIterator,
-          typename T>
- __host__ __device__
- OutputIterator exclusive_scan(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                               InputIterator first,
-                               InputIterator last,
-                               OutputIterator result,
-                               T init);
-
-
- /*! \p exclusive_scan computes an exclusive prefix sum operation. The
- * term 'exclusive' means that each result does not include the
- * corresponding input operand in the partial sum. More precisely,
- * \p init is assigned to <tt>*result</tt> and the sum of \p init and
- * <tt>*first</tt> is assigned to <tt>*(result + 1)</tt>, and so on.
- * This version of \p exclusive_scan assumes plus as the associative
- * operator but requires an initial value \p init. When the input and
- * output sequences are the same, the scan is performed in-place.
- *
- * \param first The beginning of the input sequence.
- * \param last The end of the input sequence.
- * \param result The beginning of the output sequence.
- * \param init The initial value.
- * \return The end of the output sequence.
- *
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator's \c value_type is convertible to
- *           \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- *           and if \c x and \c y are objects of \c OutputIterator's
- *           \c value_type, then <tt>x + y</tt> is defined.
- * \tparam T is convertible to \c OutputIterator's \c value_type.
- *
- * \pre \p first may equal \p result but the range <tt>[first, last)</tt> and the range <tt>[result, result + (last - first))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p exclusive_scan
- *
- * \code
- * #include <thrust/scan.h>
- *
- * int data[6] = {1, 0, 2, 2, 1, 3};
- *
- * thrust::exclusive_scan(data, data + 6, data, 4); // in-place scan
- *
- * // data is now {4, 5, 5, 7, 9, 10}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/partial_sum.html
- */
- template<typename InputIterator,
-          typename OutputIterator,
-          typename T>
- OutputIterator exclusive_scan(InputIterator first,
-                               InputIterator last,
-                               OutputIterator result,
-                               T init);
-
-
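A typical use of the \p init overload is converting per-bucket counts into starting offsets, the usual prelude to a scatter or compaction pass. A minimal sketch, with names and values that are illustrative only:

#include <thrust/scan.h>

int main()
{
    // number of items that will land in each of 5 buckets
    int counts[5] = {3, 1, 4, 0, 2};
    int offsets[5];

    // offsets[i] = sum of counts[0..i), so each bucket knows where it starts
    thrust::exclusive_scan(counts, counts + 5, offsets);
    // offsets is now {0, 3, 4, 8, 8}

    // a nonzero init shifts the whole layout, e.g. to reserve a header region
    thrust::exclusive_scan(counts, counts + 5, offsets, 16);
    // offsets is now {16, 19, 20, 24, 24}
    return 0;
}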
- /*! \p exclusive_scan computes an exclusive prefix sum operation. The
- * term 'exclusive' means that each result does not include the
- * corresponding input operand in the partial sum. More precisely,
- * \p init is assigned to <tt>\*result</tt> and the value
- * <tt>binary_op(init, \*first)</tt> is assigned to <tt>\*(result + 1)</tt>,
- * and so on. This version of the function requires both an associative
- * operator and an initial value \p init. When the input and output
- * sequences are the same, the scan is performed in-place.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first The beginning of the input sequence.
- * \param last The end of the input sequence.
- * \param result The beginning of the output sequence.
- * \param init The initial value.
- * \param binary_op The associative operator used to 'sum' values.
- * \return The end of the output sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator's \c value_type is convertible to
- *           \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>
- *           and \c OutputIterator's \c value_type is convertible to
- *           both \c AssociativeOperator's \c first_argument_type and
- *           \c second_argument_type.
- * \tparam T is convertible to \c OutputIterator's \c value_type.
- * \tparam AssociativeOperator is a model of <a href="http://www.sgi.com/tech/stl/BinaryFunction.html">Binary Function</a>
- *           and \c AssociativeOperator's \c result_type is
- *           convertible to \c OutputIterator's \c value_type.
- *
- * \pre \p first may equal \p result but the range <tt>[first, last)</tt> and the range <tt>[result, result + (last - first))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p exclusive_scan to compute an in-place
- * prefix sum using the \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- *
- * int data[10] = {-5, 0, 2, -3, 2, 4, 0, -1, 2, 8};
- *
- * thrust::maximum<int> binary_op;
- *
- * thrust::exclusive_scan(thrust::host, data, data + 10, data, 1, binary_op); // in-place scan
- *
- * // data is now {1, 1, 1, 2, 2, 2, 4, 4, 4, 4}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/partial_sum.html
- */
- template<typename DerivedPolicy,
-          typename InputIterator,
-          typename OutputIterator,
-          typename T,
-          typename AssociativeOperator>
- __host__ __device__
- OutputIterator exclusive_scan(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                               InputIterator first,
-                               InputIterator last,
-                               OutputIterator result,
-                               T init,
-                               AssociativeOperator binary_op);
-
-
- /*! \p exclusive_scan computes an exclusive prefix sum operation. The
- * term 'exclusive' means that each result does not include the
- * corresponding input operand in the partial sum. More precisely,
- * \p init is assigned to <tt>\*result</tt> and the value
- * <tt>binary_op(init, \*first)</tt> is assigned to <tt>\*(result + 1)</tt>,
- * and so on. This version of the function requires both an associative
- * operator and an initial value \p init. When the input and output
- * sequences are the same, the scan is performed in-place.
- *
- * \param first The beginning of the input sequence.
- * \param last The end of the input sequence.
- * \param result The beginning of the output sequence.
- * \param init The initial value.
- * \param binary_op The associative operator used to 'sum' values.
- * \return The end of the output sequence.
- *
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator's \c value_type is convertible to
- *           \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>
- *           and \c OutputIterator's \c value_type is convertible to
- *           both \c AssociativeOperator's \c first_argument_type and
- *           \c second_argument_type.
- * \tparam T is convertible to \c OutputIterator's \c value_type.
- * \tparam AssociativeOperator is a model of <a href="http://www.sgi.com/tech/stl/BinaryFunction.html">Binary Function</a>
- *           and \c AssociativeOperator's \c result_type is
- *           convertible to \c OutputIterator's \c value_type.
- *
- * \pre \p first may equal \p result but the range <tt>[first, last)</tt> and the range <tt>[result, result + (last - first))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p exclusive_scan
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/functional.h>
- *
- * int data[10] = {-5, 0, 2, -3, 2, 4, 0, -1, 2, 8};
- *
- * thrust::maximum<int> binary_op;
- *
- * thrust::exclusive_scan(data, data + 10, data, 1, binary_op); // in-place scan
- *
- * // data is now {1, 1, 1, 2, 2, 2, 4, 4, 4, 4}
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/partial_sum.html
- */
- template<typename InputIterator,
-          typename OutputIterator,
-          typename T,
-          typename AssociativeOperator>
- OutputIterator exclusive_scan(InputIterator first,
-                               InputIterator last,
-                               OutputIterator result,
-                               T init,
-                               AssociativeOperator binary_op);
-
-
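With a non-plus operator, \p init seeds \p binary_op rather than an ordinary addition. A short sketch of an exclusive running product using \c thrust::multiplies from <thrust/functional.h>; the values are illustrative:

#include <thrust/scan.h>
#include <thrust/functional.h>

int main()
{
    int data[5] = {2, 3, 4, 1, 5};

    // exclusive 'product scan': result[0] = init, result[i] = init * data[0] * ... * data[i-1]
    thrust::exclusive_scan(data, data + 5, data, 1, thrust::multiplies<int>());
    // data is now {1, 2, 6, 24, 24}
    return 0;
}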
- /*! \addtogroup segmentedprefixsums Segmented Prefix Sums
- *  \ingroup prefixsums
- *  \{
- */
-
-
- /*! \p inclusive_scan_by_key computes an inclusive key-value or 'segmented' prefix
- * sum operation. The term 'inclusive' means that each result includes
- * the corresponding input operand in the partial sum. The term 'segmented'
- * means that the partial sums are broken into distinct segments. In other
- * words, within each segment a separate inclusive scan operation is computed.
- * Refer to the code sample below for example usage.
- *
- * This version of \p inclusive_scan_by_key assumes \c equal_to as the binary
- * predicate used to compare adjacent keys. Specifically, consecutive iterators
- * <tt>i</tt> and <tt>i+1</tt> in the range <tt>[first1, last1)</tt>
- * belong to the same segment if <tt>*i == *(i+1)</tt>, and belong to
- * different segments otherwise.
- *
- * This version of \p inclusive_scan_by_key assumes \c plus as the associative
- * operator used to perform the prefix sum. When the input and output sequences
- * are the same, the scan is performed in-place.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the key sequence.
- * \param last1 The end of the key sequence.
- * \param first2 The beginning of the input value sequence.
- * \param result The beginning of the output value sequence.
- * \return The end of the output sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- * \tparam InputIterator2 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator2's \c value_type is convertible to \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- *           and if \c x and \c y are objects of \c OutputIterator's \c value_type, then
- *           <tt>x + y</tt> is defined.
- *
- * \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- * \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1))</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p inclusive_scan_by_key using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/execution_policy.h>
- * ...
- *
- * int data[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
- * int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
- *
- * thrust::inclusive_scan_by_key(thrust::host, keys, keys + 10, data, data); // in-place scan
- *
- * // data is now {1, 2, 3, 1, 2, 1, 1, 2, 3, 4};
- * \endcode
- *
- * \see inclusive_scan
- * \see exclusive_scan_by_key
- *
- */
- template<typename DerivedPolicy,
-          typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator>
- __host__ __device__
- OutputIterator inclusive_scan_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                      InputIterator1 first1,
-                                      InputIterator1 last1,
-                                      InputIterator2 first2,
-                                      OutputIterator result);
-
-
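The all-ones example above doubles as a within-segment ranking; with real values, the very same call produces per-key running totals. A small host sketch, with illustrative names and values:

#include <thrust/scan.h>

int main()
{
    int keys[8] = {0, 0, 1, 1, 1, 2, 2, 2};
    int vals[8] = {3, 1, 4, 1, 5, 9, 2, 6};

    // the running sum restarts whenever the key changes
    thrust::inclusive_scan_by_key(keys, keys + 8, vals, vals);
    // vals is now {3, 4, 4, 5, 10, 9, 11, 17}
    return 0;
}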
- /*! \p inclusive_scan_by_key computes an inclusive key-value or 'segmented' prefix
- * sum operation. The term 'inclusive' means that each result includes
- * the corresponding input operand in the partial sum. The term 'segmented'
- * means that the partial sums are broken into distinct segments. In other
- * words, within each segment a separate inclusive scan operation is computed.
- * Refer to the code sample below for example usage.
- *
- * This version of \p inclusive_scan_by_key assumes \c equal_to as the binary
- * predicate used to compare adjacent keys. Specifically, consecutive iterators
- * <tt>i</tt> and <tt>i+1</tt> in the range <tt>[first1, last1)</tt>
- * belong to the same segment if <tt>*i == *(i+1)</tt>, and belong to
- * different segments otherwise.
- *
- * This version of \p inclusive_scan_by_key assumes \c plus as the associative
- * operator used to perform the prefix sum. When the input and output sequences
- * are the same, the scan is performed in-place.
- *
- * \param first1 The beginning of the key sequence.
- * \param last1 The end of the key sequence.
- * \param first2 The beginning of the input value sequence.
- * \param result The beginning of the output value sequence.
- * \return The end of the output sequence.
- *
- * \tparam InputIterator1 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- * \tparam InputIterator2 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator2's \c value_type is convertible to \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- *           and if \c x and \c y are objects of \c OutputIterator's \c value_type, then
- *           <tt>x + y</tt> is defined.
- *
- * \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- * \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1))</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p inclusive_scan_by_key
- *
- * \code
- * #include <thrust/scan.h>
- *
- * int data[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
- * int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
- *
- * thrust::inclusive_scan_by_key(keys, keys + 10, data, data); // in-place scan
- *
- * // data is now {1, 2, 3, 1, 2, 1, 1, 2, 3, 4};
- * \endcode
- *
- * \see inclusive_scan
- * \see exclusive_scan_by_key
- *
- */
- template<typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator>
- OutputIterator inclusive_scan_by_key(InputIterator1 first1,
-                                      InputIterator1 last1,
-                                      InputIterator2 first2,
-                                      OutputIterator result);
-
-
- /*! \p inclusive_scan_by_key computes an inclusive key-value or 'segmented' prefix
- * sum operation. The term 'inclusive' means that each result includes
- * the corresponding input operand in the partial sum. The term 'segmented'
- * means that the partial sums are broken into distinct segments. In other
- * words, within each segment a separate inclusive scan operation is computed.
- * Refer to the code sample below for example usage.
- *
- * This version of \p inclusive_scan_by_key uses the binary predicate
- * \c binary_pred to compare adjacent keys. Specifically, consecutive iterators
- * <tt>i</tt> and <tt>i+1</tt> in the range <tt>[first1, last1)</tt>
- * belong to the same segment if <tt>binary_pred(*i, *(i+1))</tt> is true, and belong to
- * different segments otherwise.
- *
- * This version of \p inclusive_scan_by_key assumes \c plus as the associative
- * operator used to perform the prefix sum. When the input and output sequences
- * are the same, the scan is performed in-place.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the key sequence.
- * \param last1 The end of the key sequence.
- * \param first2 The beginning of the input value sequence.
- * \param result The beginning of the output value sequence.
- * \param binary_pred The binary predicate used to determine equality of keys.
- * \return The end of the output sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- * \tparam InputIterator2 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator2's \c value_type is convertible to \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- *           and if \c x and \c y are objects of \c OutputIterator's \c value_type, then
- *           <tt>x + y</tt> is defined.
- * \tparam BinaryPredicate is a model of <a href="http://www.sgi.com/tech/stl/BinaryPredicate.html">Binary Predicate</a>.
- *
- * \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- * \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1))</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p inclusive_scan_by_key using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- *
- * int data[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
- * int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
- *
- * thrust::equal_to<int> binary_pred;
- *
- * thrust::inclusive_scan_by_key(thrust::host, keys, keys + 10, data, data, binary_pred); // in-place scan
- *
- * // data is now {1, 2, 3, 1, 2, 1, 1, 2, 3, 4};
- * \endcode
- *
- * \see inclusive_scan
- * \see exclusive_scan_by_key
- *
- */
- template<typename DerivedPolicy,
-          typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator,
-          typename BinaryPredicate>
- __host__ __device__
- OutputIterator inclusive_scan_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                      InputIterator1 first1,
-                                      InputIterator1 last1,
-                                      InputIterator2 first2,
-                                      OutputIterator result,
-                                      BinaryPredicate binary_pred);
-
-
- /*! \p inclusive_scan_by_key computes an inclusive key-value or 'segmented' prefix
- * sum operation. The term 'inclusive' means that each result includes
- * the corresponding input operand in the partial sum. The term 'segmented'
- * means that the partial sums are broken into distinct segments. In other
- * words, within each segment a separate inclusive scan operation is computed.
- * Refer to the code sample below for example usage.
- *
- * This version of \p inclusive_scan_by_key uses the binary predicate
- * \c binary_pred to compare adjacent keys. Specifically, consecutive iterators
- * <tt>i</tt> and <tt>i+1</tt> in the range <tt>[first1, last1)</tt>
- * belong to the same segment if <tt>binary_pred(*i, *(i+1))</tt> is true, and belong to
- * different segments otherwise.
- *
- * This version of \p inclusive_scan_by_key assumes \c plus as the associative
- * operator used to perform the prefix sum. When the input and output sequences
- * are the same, the scan is performed in-place.
- *
- * \param first1 The beginning of the key sequence.
- * \param last1 The end of the key sequence.
- * \param first2 The beginning of the input value sequence.
- * \param result The beginning of the output value sequence.
- * \param binary_pred The binary predicate used to determine equality of keys.
- * \return The end of the output sequence.
- *
- * \tparam InputIterator1 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- * \tparam InputIterator2 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator2's \c value_type is convertible to \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- *           and if \c x and \c y are objects of \c OutputIterator's \c value_type, then
- *           <tt>x + y</tt> is defined.
- * \tparam BinaryPredicate is a model of <a href="http://www.sgi.com/tech/stl/BinaryPredicate.html">Binary Predicate</a>.
- *
- * \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- * \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1))</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p inclusive_scan_by_key
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/functional.h>
- *
- * int data[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
- * int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
- *
- * thrust::equal_to<int> binary_pred;
- *
- * thrust::inclusive_scan_by_key(keys, keys + 10, data, data, binary_pred); // in-place scan
- *
- * // data is now {1, 2, 3, 1, 2, 1, 1, 2, 3, 4};
- * \endcode
- *
- * \see inclusive_scan
- * \see exclusive_scan_by_key
- *
- */
- template<typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator,
-          typename BinaryPredicate>
- OutputIterator inclusive_scan_by_key(InputIterator1 first1,
-                                      InputIterator1 last1,
-                                      InputIterator2 first2,
-                                      OutputIterator result,
-                                      BinaryPredicate binary_pred);
-
-
- /*! \p inclusive_scan_by_key computes an inclusive key-value or 'segmented' prefix
- * sum operation. The term 'inclusive' means that each result includes
- * the corresponding input operand in the partial sum. The term 'segmented'
- * means that the partial sums are broken into distinct segments. In other
- * words, within each segment a separate inclusive scan operation is computed.
- * Refer to the code sample below for example usage.
- *
- * This version of \p inclusive_scan_by_key uses the binary predicate
- * \c binary_pred to compare adjacent keys. Specifically, consecutive iterators
- * <tt>i</tt> and <tt>i+1</tt> in the range <tt>[first1, last1)</tt>
- * belong to the same segment if <tt>binary_pred(*i, *(i+1))</tt> is true, and belong to
- * different segments otherwise.
- *
- * This version of \p inclusive_scan_by_key uses the associative operator
- * \c binary_op to perform the prefix sum. When the input and output sequences
- * are the same, the scan is performed in-place.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the key sequence.
- * \param last1 The end of the key sequence.
- * \param first2 The beginning of the input value sequence.
- * \param result The beginning of the output value sequence.
- * \param binary_pred The binary predicate used to determine equality of keys.
- * \param binary_op The associative operator used to 'sum' values.
- * \return The end of the output sequence.
- *
- * \tparam DerivedPolicy The name of the derived execution policy.
- * \tparam InputIterator1 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- * \tparam InputIterator2 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator2's \c value_type is convertible to \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- *           and if \c x and \c y are objects of \c OutputIterator's \c value_type, then
- *           <tt>binary_op(x,y)</tt> is defined.
- * \tparam BinaryPredicate is a model of <a href="http://www.sgi.com/tech/stl/BinaryPredicate.html">Binary Predicate</a>.
- * \tparam AssociativeOperator is a model of <a href="http://www.sgi.com/tech/stl/BinaryFunction.html">Binary Function</a>
- *           and \c AssociativeOperator's \c result_type is
- *           convertible to \c OutputIterator's \c value_type.
- *
- * \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- * \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1))</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p inclusive_scan_by_key using the \p thrust::host
- * execution policy for parallelization:
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- *
- * int data[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
- * int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
- *
- * thrust::equal_to<int> binary_pred;
- * thrust::plus<int> binary_op;
- *
- * thrust::inclusive_scan_by_key(thrust::host, keys, keys + 10, data, data, binary_pred, binary_op); // in-place scan
- *
- * // data is now {1, 2, 3, 1, 2, 1, 1, 2, 3, 4};
- * \endcode
- *
- * \see inclusive_scan
- * \see exclusive_scan_by_key
- *
- */
- template<typename DerivedPolicy,
-          typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator,
-          typename BinaryPredicate,
-          typename AssociativeOperator>
- __host__ __device__
- OutputIterator inclusive_scan_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                      InputIterator1 first1,
-                                      InputIterator1 last1,
-                                      InputIterator2 first2,
-                                      OutputIterator result,
-                                      BinaryPredicate binary_pred,
-                                      AssociativeOperator binary_op);
-
-
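Swapping in a different operator gives per-segment reductions with history. A sketch of a running maximum restarted at each key change, using the same headers as the example above; the names and values are illustrative:

#include <thrust/scan.h>
#include <thrust/functional.h>

int main()
{
    int keys[8] = {0, 0, 0, 1, 1, 2, 2, 2};
    int vals[8] = {3, 7, 5, 2, 9, 4, 1, 6};

    thrust::equal_to<int> binary_pred;
    thrust::maximum<int>  binary_op;

    // running max within each segment
    thrust::inclusive_scan_by_key(keys, keys + 8, vals, vals, binary_pred, binary_op);
    // vals is now {3, 7, 7, 2, 9, 4, 4, 6}
    return 0;
}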
- /*! \p inclusive_scan_by_key computes an inclusive key-value or 'segmented' prefix
- * sum operation. The term 'inclusive' means that each result includes
- * the corresponding input operand in the partial sum. The term 'segmented'
- * means that the partial sums are broken into distinct segments. In other
- * words, within each segment a separate inclusive scan operation is computed.
- * Refer to the code sample below for example usage.
- *
- * This version of \p inclusive_scan_by_key uses the binary predicate
- * \c binary_pred to compare adjacent keys. Specifically, consecutive iterators
- * <tt>i</tt> and <tt>i+1</tt> in the range <tt>[first1, last1)</tt>
- * belong to the same segment if <tt>binary_pred(*i, *(i+1))</tt> is true, and belong to
- * different segments otherwise.
- *
- * This version of \p inclusive_scan_by_key uses the associative operator
- * \c binary_op to perform the prefix sum. When the input and output sequences
- * are the same, the scan is performed in-place.
- *
- * \param first1 The beginning of the key sequence.
- * \param last1 The end of the key sequence.
- * \param first2 The beginning of the input value sequence.
- * \param result The beginning of the output value sequence.
- * \param binary_pred The binary predicate used to determine equality of keys.
- * \param binary_op The associative operator used to 'sum' values.
- * \return The end of the output sequence.
- *
- * \tparam InputIterator1 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- * \tparam InputIterator2 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
- *           and \c InputIterator2's \c value_type is convertible to \c OutputIterator's \c value_type.
- * \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
- *           and if \c x and \c y are objects of \c OutputIterator's \c value_type, then
- *           <tt>binary_op(x,y)</tt> is defined.
- * \tparam BinaryPredicate is a model of <a href="http://www.sgi.com/tech/stl/BinaryPredicate.html">Binary Predicate</a>.
- * \tparam AssociativeOperator is a model of <a href="http://www.sgi.com/tech/stl/BinaryFunction.html">Binary Function</a>
- *           and \c AssociativeOperator's \c result_type is
- *           convertible to \c OutputIterator's \c value_type.
- *
- * \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- * \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1))</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p inclusive_scan_by_key
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/functional.h>
- *
- * int data[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
- * int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
- *
- * thrust::equal_to<int> binary_pred;
- * thrust::plus<int> binary_op;
- *
- * thrust::inclusive_scan_by_key(keys, keys + 10, data, data, binary_pred, binary_op); // in-place scan
- *
- * // data is now {1, 2, 3, 1, 2, 1, 1, 2, 3, 4};
- * \endcode
- *
- * \see inclusive_scan
- * \see exclusive_scan_by_key
- *
- */
- template<typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator,
-          typename BinaryPredicate,
-          typename AssociativeOperator>
- OutputIterator inclusive_scan_by_key(InputIterator1 first1,
-                                      InputIterator1 last1,
-                                      InputIterator2 first2,
-                                      OutputIterator result,
-                                      BinaryPredicate binary_pred,
-                                      AssociativeOperator binary_op);
-
-
- /*! \p exclusive_scan_by_key computes an exclusive key-value or 'segmented' prefix
- * sum operation.
- *
- * This version of \p exclusive_scan_by_key uses the value \c 0 to
- * initialize the exclusive scan operation.
- *
- * This version of \p exclusive_scan_by_key assumes \c plus as the associative
- * operator used to perform the prefix sum. When the input and output sequences
- * are the same, the scan is performed in-place.
- *
- * This version of \p exclusive_scan_by_key assumes \c equal_to as the binary
- * predicate used to compare adjacent keys. Specifically, consecutive iterators
- * <tt>i</tt> and <tt>i+1</tt> in the range <tt>[first1, last1)</tt>
- * belong to the same segment if <tt>*i == *(i+1)</tt>, and belong to
- * different segments otherwise.
- *
- * Refer to the most general form of \p exclusive_scan_by_key for additional details.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the key sequence.
- * \param last1 The end of the key sequence.
- * \param first2 The beginning of the input value sequence.
- * \param result The beginning of the output value sequence.
- *
- * \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- * \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1))</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p exclusive_scan_by_key using the
- * \p thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/execution_policy.h>
- * ...
- *
- * int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
- * int vals[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
- *
- * thrust::exclusive_scan_by_key(thrust::host, keys, keys + 10, vals, vals); // in-place scan
- *
- * // vals is now {0, 1, 2, 0, 1, 0, 0, 1, 2, 3};
- * \endcode
- *
- * \see exclusive_scan
- *
- */
- template<typename DerivedPolicy,
-          typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator>
- __host__ __device__
- OutputIterator exclusive_scan_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                      InputIterator1 first1,
-                                      InputIterator1 last1,
-                                      InputIterator2 first2,
-                                      OutputIterator result);
-
-
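A common use of this form is computing each element's offset within its own segment, e.g. before compacting grouped data. A minimal host sketch; the names and values are illustrative, and note that segments are defined by adjacency, so equal keys only form one segment when contiguous:

#include <thrust/scan.h>

int main()
{
    int keys[9]  = {7, 7, 7, 2, 2, 9, 9, 9, 9};
    int sizes[9] = {1, 1, 1, 1, 1, 1, 1, 1, 1};

    // each element receives the count of earlier elements in its own segment
    thrust::exclusive_scan_by_key(keys, keys + 9, sizes, sizes);
    // sizes is now {0, 1, 2, 0, 1, 0, 1, 2, 3}
    return 0;
}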
- /*! \p exclusive_scan_by_key computes an exclusive key-value or 'segmented' prefix
- * sum operation.
- *
- * This version of \p exclusive_scan_by_key uses the value \c 0 to
- * initialize the exclusive scan operation.
- *
- * This version of \p exclusive_scan_by_key assumes \c plus as the associative
- * operator used to perform the prefix sum. When the input and output sequences
- * are the same, the scan is performed in-place.
- *
- * This version of \p exclusive_scan_by_key assumes \c equal_to as the binary
- * predicate used to compare adjacent keys. Specifically, consecutive iterators
- * <tt>i</tt> and <tt>i+1</tt> in the range <tt>[first1, last1)</tt>
- * belong to the same segment if <tt>*i == *(i+1)</tt>, and belong to
- * different segments otherwise.
- *
- * Refer to the most general form of \p exclusive_scan_by_key for additional details.
- *
- * \param first1 The beginning of the key sequence.
- * \param last1 The end of the key sequence.
- * \param first2 The beginning of the input value sequence.
- * \param result The beginning of the output value sequence.
- *
- * \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- * \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1))</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p exclusive_scan_by_key.
- *
- * \code
- * #include <thrust/scan.h>
- *
- * int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
- * int vals[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
- *
- * thrust::exclusive_scan_by_key(keys, keys + 10, vals, vals); // in-place scan
- *
- * // vals is now {0, 1, 2, 0, 1, 0, 0, 1, 2, 3};
- * \endcode
- *
- * \see exclusive_scan
- *
- */
- template<typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator>
- OutputIterator exclusive_scan_by_key(InputIterator1 first1,
-                                      InputIterator1 last1,
-                                      InputIterator2 first2,
-                                      OutputIterator result);
-
-
- /*! \p exclusive_scan_by_key computes an exclusive key-value or 'segmented' prefix
- * sum operation. The term 'exclusive' means that each result does not include
- * the corresponding input operand in the partial sum. The term 'segmented'
- * means that the partial sums are broken into distinct segments. In other
- * words, within each segment a separate exclusive scan operation is computed.
- * Refer to the code sample below for example usage.
- *
- * This version of \p exclusive_scan_by_key uses the value \c init to
- * initialize the exclusive scan operation.
- *
- * The algorithm's execution is parallelized as determined by \p exec.
- *
- * \param exec The execution policy to use for parallelization.
- * \param first1 The beginning of the key sequence.
- * \param last1 The end of the key sequence.
- * \param first2 The beginning of the input value sequence.
- * \param result The beginning of the output value sequence.
- * \param init The initial value of the exclusive sum.
- * \return The end of the output sequence.
- *
- * \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- * \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1))</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p exclusive_scan_by_key using the \p
- * thrust::host execution policy for parallelization:
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/functional.h>
- * #include <thrust/execution_policy.h>
- * ...
- *
- * int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
- * int vals[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
- *
- * int init = 5;
- *
- * thrust::exclusive_scan_by_key(thrust::host, keys, keys + 10, vals, vals, init); // in-place scan
- *
- * // vals is now {5, 6, 7, 5, 6, 5, 5, 6, 7, 8};
- * \endcode
- *
- * \see exclusive_scan
- * \see inclusive_scan_by_key
- *
- */
- template<typename DerivedPolicy,
-          typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator,
-          typename T>
- __host__ __device__
- OutputIterator exclusive_scan_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
-                                      InputIterator1 first1,
-                                      InputIterator1 last1,
-                                      InputIterator2 first2,
-                                      OutputIterator result,
-                                      T init);
-
-
- /*! \p exclusive_scan_by_key computes an exclusive key-value or 'segmented' prefix
- * sum operation. The term 'exclusive' means that each result does not include
- * the corresponding input operand in the partial sum. The term 'segmented'
- * means that the partial sums are broken into distinct segments. In other
- * words, within each segment a separate exclusive scan operation is computed.
- * Refer to the code sample below for example usage.
- *
- * This version of \p exclusive_scan_by_key uses the value \c init to
- * initialize the exclusive scan operation.
- *
- * \param first1 The beginning of the key sequence.
- * \param last1 The end of the key sequence.
- * \param first2 The beginning of the input value sequence.
- * \param result The beginning of the output value sequence.
- * \param init The initial value of the exclusive sum.
- * \return The end of the output sequence.
- *
- * \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- * \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1))</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
- *
- * The following code snippet demonstrates how to use \p exclusive_scan_by_key
- *
- * \code
- * #include <thrust/scan.h>
- * #include <thrust/functional.h>
- *
- * int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
- * int vals[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
- *
- * int init = 5;
- *
- * thrust::exclusive_scan_by_key(keys, keys + 10, vals, vals, init); // in-place scan
- *
- * // vals is now {5, 6, 7, 5, 6, 5, 5, 6, 7, 8};
- * \endcode
- *
- * \see exclusive_scan
- * \see inclusive_scan_by_key
- *
- */
- template<typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator,
-          typename T>
- OutputIterator exclusive_scan_by_key(InputIterator1 first1,
-                                      InputIterator1 last1,
-                                      InputIterator2 first2,
-                                      OutputIterator result,
-                                      T init);
-
-
|
1252 |
-
/*! \p exclusive_scan_by_key computes an exclusive key-value or 'segmented' prefix
|
1253 |
-
* sum operation. The term 'exclusive' means that each result does not include
|
1254 |
-
* the corresponding input operand in the partial sum. The term 'segmented'
|
1255 |
-
* means that the partial sums are broken into distinct segments. In other
|
1256 |
-
* words, within each segment a separate exclusive scan operation is computed.
|
1257 |
-
* Refer to the code sample below for example usage.
|
1258 |
-
*
|
1259 |
-
* This version of \p exclusive_scan_by_key uses the value \c init to
|
1260 |
-
* initialize the exclusive scan operation.
|
1261 |
-
*
|
1262 |
-
* This version of \p exclusive_scan_by_key uses the binary predicate \c binary_pred
|
1263 |
-
* to compare adjacent keys. Specifically, consecutive iterators <tt>i</tt> and
|
1264 |
-
* <tt>i+1</tt> in the range <tt>[first1, last1)</tt> belong to the same segment if
|
1265 |
-
* <tt>binary_pred(*i, *(i+1))</tt> is true, and belong to different segments otherwise.
|
1266 |
-
*
|
1267 |
-
* The algorithm's execution is parallelized as determined by \p exec.
|
1268 |
-
*
|
1269 |
-
* \param exec The execution policy to use for parallelization.
|
1270 |
-
* \param first1 The beginning of the key sequence.
|
1271 |
-
* \param last1 The end of the key sequence.
|
1272 |
-
* \param first2 The beginning of the input value sequence.
|
1273 |
-
* \param result The beginning of the output value sequence.
|
1274 |
-
* \param init The initial of the exclusive sum value.
|
1275 |
-
* \param binary_pred The binary predicate used to determine equality of keys.
|
1276 |
-
* \return The end of the output sequence.
|
1277 |
-
*
|
1278 |
-
* \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
|
1279 |
-
* \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1)</tt> and range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
|
1280 |
-
*
|
1281 |
-
* The following code snippet demonstrates how to use \p exclusive_scan_by_key using the
|
1282 |
-
* \p thrust::host execution policy for parallelization:
|
1283 |
-
*
|
1284 |
-
* \code
|
1285 |
-
* #include <thrust/scan.h>
|
1286 |
-
* #include <thrust/functional.h>
|
1287 |
-
* #include <thrust/execution_policy.h>
|
1288 |
-
* ...
|
1289 |
-
*
|
1290 |
-
* int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
|
1291 |
-
* int vals[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
|
1292 |
-
*
|
1293 |
-
* int init = 5;
|
1294 |
-
*
|
1295 |
-
* thrust::equal_to<int> binary_pred;
|
1296 |
-
*
|
1297 |
-
* thrust::exclusive_scan_by_key(thrust::host, key, key + 10, vals, vals, init, binary_pred); // in-place scan
|
1298 |
-
*
|
1299 |
-
* // vals is now {5, 6, 7, 5, 6, 5, 5, 6, 7, 8};
|
1300 |
-
* \endcode
|
1301 |
-
*
|
1302 |
-
* \see exclusive_scan
|
1303 |
-
* \see inclusive_scan_by_key
|
1304 |
-
*
|
1305 |
-
*/
|
1306 |
-
template<typename DerivedPolicy,
|
1307 |
-
typename InputIterator1,
|
1308 |
-
typename InputIterator2,
|
1309 |
-
typename OutputIterator,
|
1310 |
-
typename T,
|
1311 |
-
typename BinaryPredicate>
|
1312 |
-
__host__ __device__
|
1313 |
-
OutputIterator exclusive_scan_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
|
1314 |
-
InputIterator1 first1,
|
1315 |
-
InputIterator1 last1,
|
1316 |
-
InputIterator2 first2,
|
1317 |
-
OutputIterator result,
|
1318 |
-
T init,
|
1319 |
-
BinaryPredicate binary_pred);
|
1320 |
-
|
1321 |
-
|
1322 |
-
/*! \p exclusive_scan_by_key computes an exclusive key-value or 'segmented' prefix
|
1323 |
-
* sum operation. The term 'exclusive' means that each result does not include
|
1324 |
-
* the corresponding input operand in the partial sum. The term 'segmented'
|
1325 |
-
* means that the partial sums are broken into distinct segments. In other
|
1326 |
-
* words, within each segment a separate exclusive scan operation is computed.
|
1327 |
-
* Refer to the code sample below for example usage.
|
1328 |
-
*
|
1329 |
-
* This version of \p exclusive_scan_by_key uses the value \c init to
|
1330 |
-
* initialize the exclusive scan operation.
|
1331 |
-
*
|
1332 |
-
* This version of \p exclusive_scan_by_key uses the binary predicate \c binary_pred
|
1333 |
-
* to compare adjacent keys. Specifically, consecutive iterators <tt>i</tt> and
|
1334 |
-
* <tt>i+1</tt> in the range <tt>[first1, last1)</tt> belong to the same segment if
|
1335 |
-
* <tt>binary_pred(*i, *(i+1))</tt> is true, and belong to different segments otherwise.
|
1336 |
-
*
|
1337 |
-
* \param first1 The beginning of the key sequence.
|
1338 |
-
* \param last1 The end of the key sequence.
|
1339 |
-
* \param first2 The beginning of the input value sequence.
|
1340 |
-
* \param result The beginning of the output value sequence.
|
1341 |
-
* \param init The initial of the exclusive sum value.
|
1342 |
-
* \param binary_pred The binary predicate used to determine equality of keys.
|
1343 |
-
* \return The end of the output sequence.
|
1344 |
-
*
|
1345 |
-
* \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
|
1346 |
-
* \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1)</tt> and range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
|
1347 |
-
*
|
1348 |
-
* The following code snippet demonstrates how to use \p exclusive_scan_by_key
|
1349 |
-
*
|
1350 |
-
* \code
|
1351 |
-
* #include <thrust/scan.h>
|
1352 |
-
* #include <thrust/functional.h>
|
1353 |
-
*
|
1354 |
-
* int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
|
1355 |
-
* int vals[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
|
1356 |
-
*
|
1357 |
-
* int init = 5;
|
1358 |
-
*
|
1359 |
-
* thrust::equal_to<int> binary_pred;
|
1360 |
-
*
|
1361 |
-
* thrust::exclusive_scan_by_key(key, key + 10, vals, vals, init, binary_pred); // in-place scan
|
1362 |
-
*
|
1363 |
-
* // vals is now {5, 6, 7, 5, 6, 5, 5, 6, 7, 8};
|
1364 |
-
* \endcode
|
1365 |
-
*
|
1366 |
-
* \see exclusive_scan
|
1367 |
-
* \see inclusive_scan_by_key
|
1368 |
-
*
|
1369 |
-
*/
|
1370 |
-
template<typename InputIterator1,
|
1371 |
-
typename InputIterator2,
|
1372 |
-
typename OutputIterator,
|
1373 |
-
typename T,
|
1374 |
-
typename BinaryPredicate>
|
1375 |
-
OutputIterator exclusive_scan_by_key(InputIterator1 first1,
|
1376 |
-
InputIterator1 last1,
|
1377 |
-
InputIterator2 first2,
|
1378 |
-
OutputIterator result,
|
1379 |
-
T init,
|
1380 |
-
BinaryPredicate binary_pred);
|
1381 |
-
|
1382 |
-
|
1383 |
-
/*! \p exclusive_scan_by_key computes an exclusive key-value or 'segmented' prefix
|
1384 |
-
* sum operation. The term 'exclusive' means that each result does not include
|
1385 |
-
* the corresponding input operand in the partial sum. The term 'segmented'
|
1386 |
-
* means that the partial sums are broken into distinct segments. In other
|
1387 |
-
* words, within each segment a separate exclusive scan operation is computed.
|
1388 |
-
* Refer to the code sample below for example usage.
|
1389 |
-
*
|
1390 |
-
* This version of \p exclusive_scan_by_key uses the value \c init to
|
1391 |
-
* initialize the exclusive scan operation.
|
1392 |
-
*
|
1393 |
-
* This version of \p exclusive_scan_by_key uses the binary predicate \c binary_pred
|
1394 |
-
* to compare adjacent keys. Specifically, consecutive iterators <tt>i</tt> and
|
1395 |
-
* <tt>i+1</tt> in the range <tt>[first1, last1)</tt> belong to the same segment if
|
1396 |
-
* <tt>binary_pred(*i, *(i+1))</tt> is true, and belong to different segments otherwise.
|
1397 |
-
*
|
1398 |
-
* This version of \p exclusive_scan_by_key uses the associative operator
|
1399 |
-
* \c binary_op to perform the prefix sum. When the input and output sequences
|
1400 |
-
* are the same, the scan is performed in-place.
|
1401 |
-
*
|
1402 |
-
* The algorithm's execution is parallelized as determined by \p exec.
|
1403 |
-
*
|
1404 |
-
* \param exec The execution policy to use for parallelization.
|
1405 |
-
* \param first1 The beginning of the key sequence.
|
1406 |
-
* \param last1 The end of the key sequence.
|
1407 |
-
* \param first2 The beginning of the input value sequence.
|
1408 |
-
* \param result The beginning of the output value sequence.
|
1409 |
-
* \param init The initial of the exclusive sum value.
|
1410 |
-
* \param binary_pred The binary predicate used to determine equality of keys.
|
1411 |
-
* \param binary_op The associatve operator used to 'sum' values.
|
1412 |
-
* \return The end of the output sequence.
|
1413 |
-
*
|
1414 |
-
* \tparam DerivedPolicy The name of the derived execution policy.
|
1415 |
-
* \tparam InputIterator1 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
|
1416 |
-
* \tparam InputIterator2 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
|
1417 |
-
* and \c InputIterator2's \c value_type is convertible to \c OutputIterator's \c value_type.
|
1418 |
-
* \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
|
1419 |
-
* and if \c x and \c y are objects of \c OutputIterator's \c value_type, then
|
1420 |
-
* <tt>binary_op(x,y)</tt> is defined.
|
1421 |
-
* \tparam T is convertible to \c OutputIterator's \c value_type.
|
1422 |
-
* \tparam BinaryPredicate is a model of <a href="http://www.sgi.com/tech/stl/BinaryPredicate.html">Binary Predicate</a>.
|
1423 |
-
* \tparam AssociativeOperator is a model of <a href="http://www.sgi.com/tech/stl/BinaryFunction.html">Binary Function</a>
|
1424 |
-
* and \c AssociativeOperator's \c result_type is convertible to \c OutputIterator's \c value_type.
|
1425 |
-
*
|
1426 |
-
* \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
|
1427 |
-
* \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1)</tt> and range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
|
1428 |
-
*
|
1429 |
-
* The following code snippet demonstrates how to use \p exclusive_scan_by_key using the
|
1430 |
-
* \p thrust::host execution policy for parallelization:
|
1431 |
-
*
|
1432 |
-
* \code
|
1433 |
-
* #include <thrust/scan.h>
|
1434 |
-
* #include <thrust/functional.h>
|
1435 |
-
* #include <thrust/execution_policy.h>
|
1436 |
-
* ...
|
1437 |
-
*
|
1438 |
-
* int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
|
1439 |
-
* int vals[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
|
1440 |
-
*
|
1441 |
-
* int init = 5;
|
1442 |
-
*
|
1443 |
-
* thrust::equal_to<int> binary_pred;
|
1444 |
-
* thrust::plus<int> binary_op;
|
1445 |
-
*
|
1446 |
-
* thrust::exclusive_scan_by_key(thrust::host, key, key + 10, vals, vals, init, binary_pred, binary_op); // in-place scan
|
1447 |
-
*
|
1448 |
-
* // vals is now {5, 6, 7, 5, 6, 5, 5, 6, 7, 8};
|
1449 |
-
* \endcode
|
1450 |
-
*
|
1451 |
-
* \see exclusive_scan
|
1452 |
-
* \see inclusive_scan_by_key
|
1453 |
-
*
|
1454 |
-
*/
|
1455 |
-
template<typename DerivedPolicy,
|
1456 |
-
typename InputIterator1,
|
1457 |
-
typename InputIterator2,
|
1458 |
-
typename OutputIterator,
|
1459 |
-
typename T,
|
1460 |
-
typename BinaryPredicate,
|
1461 |
-
typename AssociativeOperator>
|
1462 |
-
__host__ __device__
|
1463 |
-
OutputIterator exclusive_scan_by_key(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
|
1464 |
-
InputIterator1 first1,
|
1465 |
-
InputIterator1 last1,
|
1466 |
-
InputIterator2 first2,
|
1467 |
-
OutputIterator result,
|
1468 |
-
T init,
|
1469 |
-
BinaryPredicate binary_pred,
|
1470 |
-
AssociativeOperator binary_op);
|
1471 |
-
|
1472 |
-
|
1473 |
-
/*! \p exclusive_scan_by_key computes an exclusive key-value or 'segmented' prefix
|
1474 |
-
* sum operation. The term 'exclusive' means that each result does not include
|
1475 |
-
* the corresponding input operand in the partial sum. The term 'segmented'
|
1476 |
-
* means that the partial sums are broken into distinct segments. In other
|
1477 |
-
* words, within each segment a separate exclusive scan operation is computed.
|
1478 |
-
* Refer to the code sample below for example usage.
|
1479 |
-
*
|
1480 |
-
* This version of \p exclusive_scan_by_key uses the value \c init to
|
1481 |
-
* initialize the exclusive scan operation.
|
1482 |
-
*
|
1483 |
-
* This version of \p exclusive_scan_by_key uses the binary predicate \c binary_pred
|
1484 |
-
* to compare adjacent keys. Specifically, consecutive iterators <tt>i</tt> and
|
1485 |
-
* <tt>i+1</tt> in the range <tt>[first1, last1)</tt> belong to the same segment if
|
1486 |
-
* <tt>binary_pred(*i, *(i+1))</tt> is true, and belong to different segments otherwise.
|
1487 |
-
*
|
1488 |
-
* This version of \p exclusive_scan_by_key uses the associative operator
|
1489 |
-
* \c binary_op to perform the prefix sum. When the input and output sequences
|
1490 |
-
* are the same, the scan is performed in-place.
|
1491 |
-
*
|
1492 |
-
* \param first1 The beginning of the key sequence.
|
1493 |
-
* \param last1 The end of the key sequence.
|
1494 |
-
* \param first2 The beginning of the input value sequence.
|
1495 |
-
* \param result The beginning of the output value sequence.
|
1496 |
-
* \param init The initial of the exclusive sum value.
|
1497 |
-
* \param binary_pred The binary predicate used to determine equality of keys.
|
1498 |
-
* \param binary_op The associatve operator used to 'sum' values.
|
1499 |
-
* \return The end of the output sequence.
|
1500 |
-
*
|
1501 |
-
* \tparam InputIterator1 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
|
1502 |
-
* \tparam InputIterator2 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>
|
1503 |
-
* and \c InputIterator2's \c value_type is convertible to \c OutputIterator's \c value_type.
|
1504 |
-
* \tparam OutputIterator is a model of <a href="http://www.sgi.com/tech/stl/OutputIterator.html">Output Iterator</a>,
|
1505 |
-
* and if \c x and \c y are objects of \c OutputIterator's \c value_type, then
|
1506 |
-
* <tt>binary_op(x,y)</tt> is defined.
|
1507 |
-
* \tparam T is convertible to \c OutputIterator's \c value_type.
|
1508 |
-
* \tparam BinaryPredicate is a model of <a href="http://www.sgi.com/tech/stl/BinaryPredicate.html">Binary Predicate</a>.
|
1509 |
-
* \tparam AssociativeOperator is a model of <a href="http://www.sgi.com/tech/stl/BinaryFunction.html">Binary Function</a>
|
1510 |
-
* and \c AssociativeOperator's \c result_type is convertible to \c OutputIterator's \c value_type.
|
1511 |
-
*
|
1512 |
-
* \pre \p first1 may equal \p result but the range <tt>[first1, last1)</tt> and the range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
|
1513 |
-
* \pre \p first2 may equal \p result but the range <tt>[first2, first2 + (last1 - first1)</tt> and range <tt>[result, result + (last1 - first1))</tt> shall not overlap otherwise.
|
1514 |
-
*
|
1515 |
-
* The following code snippet demonstrates how to use \p exclusive_scan_by_key
|
1516 |
-
*
|
1517 |
-
* \code
|
1518 |
-
* #include <thrust/scan.h>
|
1519 |
-
* #include <thrust/functional.h>
|
1520 |
-
*
|
1521 |
-
* int keys[10] = {0, 0, 0, 1, 1, 2, 3, 3, 3, 3};
|
1522 |
-
* int vals[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
|
1523 |
-
*
|
1524 |
-
* int init = 5;
|
1525 |
-
*
|
1526 |
-
* thrust::equal_to<int> binary_pred;
|
1527 |
-
* thrust::plus<int> binary_op;
|
1528 |
-
*
|
1529 |
-
* thrust::exclusive_scan_by_key(key, key + 10, vals, vals, init, binary_pred, binary_op); // in-place scan
|
1530 |
-
*
|
1531 |
-
* // vals is now {5, 6, 7, 5, 6, 5, 5, 6, 7, 8};
|
1532 |
-
* \endcode
|
1533 |
-
*
|
1534 |
-
* \see exclusive_scan
|
1535 |
-
* \see inclusive_scan_by_key
|
1536 |
-
*
|
1537 |
-
*/
|
1538 |
-
template<typename InputIterator1,
|
1539 |
-
typename InputIterator2,
|
1540 |
-
typename OutputIterator,
|
1541 |
-
typename T,
|
1542 |
-
typename BinaryPredicate,
|
1543 |
-
typename AssociativeOperator>
|
1544 |
-
OutputIterator exclusive_scan_by_key(InputIterator1 first1,
|
1545 |
-
InputIterator1 last1,
|
1546 |
-
InputIterator2 first2,
|
1547 |
-
OutputIterator result,
|
1548 |
-
T init,
|
1549 |
-
BinaryPredicate binary_pred,
|
1550 |
-
AssociativeOperator binary_op);
|
1551 |
-
|
1552 |
-
|
1553 |
-
/*! \} // end segmentedprefixsums
|
1554 |
-
*/
|
1555 |
-
|
1556 |
-
|
1557 |
-
/*! \} // end prefix sums
|
1558 |
-
*/
|
1559 |
-
|
1560 |
-
|
1561 |
-
} // end namespace thrust
|
1562 |
-
|
1563 |
-
#include <thrust/detail/scan.inl>
|
1564 |
-
|
|
|
|
|
|
|
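
Note: the overloads above all share the same segmented semantics and differ only in execution policy, key-equality predicate, and reduction operator. As a cross-check of the `// vals is now {5, 6, 7, 5, 6, 5, 5, 6, 7, 8}` comments, here is a minimal pure-Python reference of the semantics (an illustrative sketch, not Thrust code; `pred` and `op` default to the equality comparison and addition used by the simpler overloads):

    def exclusive_scan_by_key(keys, vals, init,
                              pred=lambda a, b: a == b,
                              op=lambda a, b: a + b):
        # A new segment starts wherever adjacent keys compare unequal;
        # each segment restarts the running total at `init`, and output i
        # excludes input i (that is what makes the scan 'exclusive').
        out = []
        for i, v in enumerate(vals):
            if i == 0 or not pred(keys[i - 1], keys[i]):
                running = init
            out.append(running)
            running = op(running, v)
        return out

    keys = [0, 0, 0, 1, 1, 2, 3, 3, 3, 3]
    vals = [1] * 10
    print(exclusive_scan_by_key(keys, vals, 5))
    # -> [5, 6, 7, 5, 6, 5, 5, 6, 7, 8], matching the Doxygen examples
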
spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/uninitialized_fill.h
DELETED
@@ -1,114 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of the NVIDIA CORPORATION nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-#include <iterator>
-#include <thrust/distance.h>
-#include <thrust/system/cuda/detail/execution_policy.h>
-#include <thrust/system/cuda/detail/util.h>
-#include <thrust/system/cuda/detail/parallel_for.h>
-
-namespace thrust
-{
-
-namespace cuda_cub {
-
-namespace __uninitialized_fill {
-
-  template <class Iterator, class T>
-  struct functor
-  {
-    Iterator items;
-    T        value;
-
-    typedef typename iterator_traits<Iterator>::value_type value_type;
-
-    THRUST_FUNCTION
-    functor(Iterator items_, T const& value_)
-        : items(items_), value(value_) {}
-
-    template<class Size>
-    void THRUST_DEVICE_FUNCTION operator()(Size idx)
-    {
-      value_type& out = raw_reference_cast(items[idx]);
-
-#if defined(__CUDA__) && defined(__clang__)
-      // XXX unsafe. cuda-clang is seemingly unable to call ::new in device code
-      out = value;
-#else
-      ::new (static_cast<void *>(&out)) value_type(value);
-#endif
-    }
-  };    // struct functor
-
-}    // namespace __uninitialized_fill
-
-template <class Derived,
-          class Iterator,
-          class Size,
-          class T>
-Iterator __host__ __device__
-uninitialized_fill_n(execution_policy<Derived>& policy,
-                     Iterator                   first,
-                     Size                       count,
-                     T const&                   x)
-{
-  typedef __uninitialized_fill::functor<Iterator, T> functor_t;
-
-  cuda_cub::parallel_for(policy,
-                         functor_t(first, x),
-                         count);
-
-  cuda_cub::throw_on_error(
-    cuda_cub::synchronize(policy)
-  , "uninitialized_fill_n: failed to synchronize"
-  );
-
-  return first + count;
-}
-
-template <class Derived,
-          class Iterator,
-          class T>
-void __host__ __device__
-uninitialized_fill(execution_policy<Derived>& policy,
-                   Iterator                   first,
-                   Iterator                   last,
-                   T const&                   x)
-{
-  cuda_cub::uninitialized_fill_n(policy,
-                                 first,
-                                 thrust::distance(first, last),
-                                 x);
-}
-
-}    // namespace cuda_cub
-
-}    // end namespace thrust
-#endif
spaces/CVPR/WALT/mmdet/models/backbones/regnet.py
DELETED
@@ -1,325 +0,0 @@
-import numpy as np
-import torch.nn as nn
-from mmcv.cnn import build_conv_layer, build_norm_layer
-
-from ..builder import BACKBONES
-from .resnet import ResNet
-from .resnext import Bottleneck
-
-
-@BACKBONES.register_module()
-class RegNet(ResNet):
-    """RegNet backbone.
-
-    More details can be found in `paper <https://arxiv.org/abs/2003.13678>`_ .
-
-    Args:
-        arch (dict): The parameter of RegNets.
-
-            - w0 (int): initial width
-            - wa (float): slope of width
-            - wm (float): quantization parameter to quantize the width
-            - depth (int): depth of the backbone
-            - group_w (int): width of group
-            - bot_mul (float): bottleneck ratio, i.e. expansion of bottleneck.
-        strides (Sequence[int]): Strides of the first block of each stage.
-        base_channels (int): Base channels after stem layer.
-        in_channels (int): Number of input image channels. Default: 3.
-        dilations (Sequence[int]): Dilation of each stage.
-        out_indices (Sequence[int]): Output from which stages.
-        style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two
-            layer is the 3x3 conv layer, otherwise the stride-two layer is
-            the first 1x1 conv layer.
-        frozen_stages (int): Stages to be frozen (all param fixed). -1 means
-            not freezing any parameters.
-        norm_cfg (dict): dictionary to construct and config norm layer.
-        norm_eval (bool): Whether to set norm layers to eval mode, namely,
-            freeze running stats (mean and var). Note: Effect on Batch Norm
-            and its variants only.
-        with_cp (bool): Use checkpoint or not. Using checkpoint will save some
-            memory while slowing down the training speed.
-        zero_init_residual (bool): whether to use zero init for last norm layer
-            in resblocks to let them behave as identity.
-
-    Example:
-        >>> from mmdet.models import RegNet
-        >>> import torch
-        >>> self = RegNet(
-        ...     arch=dict(
-        ...         w0=88,
-        ...         wa=26.31,
-        ...         wm=2.25,
-        ...         group_w=48,
-        ...         depth=25,
-        ...         bot_mul=1.0))
-        >>> self.eval()
-        >>> inputs = torch.rand(1, 3, 32, 32)
-        >>> level_outputs = self.forward(inputs)
-        >>> for level_out in level_outputs:
-        ...     print(tuple(level_out.shape))
-        (1, 96, 8, 8)
-        (1, 192, 4, 4)
-        (1, 432, 2, 2)
-        (1, 1008, 1, 1)
-    """
-    arch_settings = {
-        'regnetx_400mf':
-        dict(w0=24, wa=24.48, wm=2.54, group_w=16, depth=22, bot_mul=1.0),
-        'regnetx_800mf':
-        dict(w0=56, wa=35.73, wm=2.28, group_w=16, depth=16, bot_mul=1.0),
-        'regnetx_1.6gf':
-        dict(w0=80, wa=34.01, wm=2.25, group_w=24, depth=18, bot_mul=1.0),
-        'regnetx_3.2gf':
-        dict(w0=88, wa=26.31, wm=2.25, group_w=48, depth=25, bot_mul=1.0),
-        'regnetx_4.0gf':
-        dict(w0=96, wa=38.65, wm=2.43, group_w=40, depth=23, bot_mul=1.0),
-        'regnetx_6.4gf':
-        dict(w0=184, wa=60.83, wm=2.07, group_w=56, depth=17, bot_mul=1.0),
-        'regnetx_8.0gf':
-        dict(w0=80, wa=49.56, wm=2.88, group_w=120, depth=23, bot_mul=1.0),
-        'regnetx_12gf':
-        dict(w0=168, wa=73.36, wm=2.37, group_w=112, depth=19, bot_mul=1.0),
-    }
-
-    def __init__(self,
-                 arch,
-                 in_channels=3,
-                 stem_channels=32,
-                 base_channels=32,
-                 strides=(2, 2, 2, 2),
-                 dilations=(1, 1, 1, 1),
-                 out_indices=(0, 1, 2, 3),
-                 style='pytorch',
-                 deep_stem=False,
-                 avg_down=False,
-                 frozen_stages=-1,
-                 conv_cfg=None,
-                 norm_cfg=dict(type='BN', requires_grad=True),
-                 norm_eval=True,
-                 dcn=None,
-                 stage_with_dcn=(False, False, False, False),
-                 plugins=None,
-                 with_cp=False,
-                 zero_init_residual=True):
-        super(ResNet, self).__init__()
-
-        # Generate RegNet parameters first
-        if isinstance(arch, str):
-            assert arch in self.arch_settings, \
-                f'"arch": "{arch}" is not one of the' \
-                ' arch_settings'
-            arch = self.arch_settings[arch]
-        elif not isinstance(arch, dict):
-            raise ValueError('Expect "arch" to be either a string '
-                             f'or a dict, got {type(arch)}')
-
-        widths, num_stages = self.generate_regnet(
-            arch['w0'],
-            arch['wa'],
-            arch['wm'],
-            arch['depth'],
-        )
-        # Convert to per stage format
-        stage_widths, stage_blocks = self.get_stages_from_blocks(widths)
-        # Generate group widths and bot muls
-        group_widths = [arch['group_w'] for _ in range(num_stages)]
-        self.bottleneck_ratio = [arch['bot_mul'] for _ in range(num_stages)]
-        # Adjust the compatibility of stage_widths and group_widths
-        stage_widths, group_widths = self.adjust_width_group(
-            stage_widths, self.bottleneck_ratio, group_widths)
-
-        # Group params by stage
-        self.stage_widths = stage_widths
-        self.group_widths = group_widths
-        self.depth = sum(stage_blocks)
-        self.stem_channels = stem_channels
-        self.base_channels = base_channels
-        self.num_stages = num_stages
-        assert num_stages >= 1 and num_stages <= 4
-        self.strides = strides
-        self.dilations = dilations
-        assert len(strides) == len(dilations) == num_stages
-        self.out_indices = out_indices
-        assert max(out_indices) < num_stages
-        self.style = style
-        self.deep_stem = deep_stem
-        self.avg_down = avg_down
-        self.frozen_stages = frozen_stages
-        self.conv_cfg = conv_cfg
-        self.norm_cfg = norm_cfg
-        self.with_cp = with_cp
-        self.norm_eval = norm_eval
-        self.dcn = dcn
-        self.stage_with_dcn = stage_with_dcn
-        if dcn is not None:
-            assert len(stage_with_dcn) == num_stages
-        self.plugins = plugins
-        self.zero_init_residual = zero_init_residual
-        self.block = Bottleneck
-        expansion_bak = self.block.expansion
-        self.block.expansion = 1
-        self.stage_blocks = stage_blocks[:num_stages]
-
-        self._make_stem_layer(in_channels, stem_channels)
-
-        self.inplanes = stem_channels
-        self.res_layers = []
-        for i, num_blocks in enumerate(self.stage_blocks):
-            stride = self.strides[i]
-            dilation = self.dilations[i]
-            group_width = self.group_widths[i]
-            width = int(round(self.stage_widths[i] * self.bottleneck_ratio[i]))
-            stage_groups = width // group_width
-
-            dcn = self.dcn if self.stage_with_dcn[i] else None
-            if self.plugins is not None:
-                stage_plugins = self.make_stage_plugins(self.plugins, i)
-            else:
-                stage_plugins = None
-
-            res_layer = self.make_res_layer(
-                block=self.block,
-                inplanes=self.inplanes,
-                planes=self.stage_widths[i],
-                num_blocks=num_blocks,
-                stride=stride,
-                dilation=dilation,
-                style=self.style,
-                avg_down=self.avg_down,
-                with_cp=self.with_cp,
-                conv_cfg=self.conv_cfg,
-                norm_cfg=self.norm_cfg,
-                dcn=dcn,
-                plugins=stage_plugins,
-                groups=stage_groups,
-                base_width=group_width,
-                base_channels=self.stage_widths[i])
-            self.inplanes = self.stage_widths[i]
-            layer_name = f'layer{i + 1}'
-            self.add_module(layer_name, res_layer)
-            self.res_layers.append(layer_name)
-
-        self._freeze_stages()
-
-        self.feat_dim = stage_widths[-1]
-        self.block.expansion = expansion_bak
-
-    def _make_stem_layer(self, in_channels, base_channels):
-        self.conv1 = build_conv_layer(
-            self.conv_cfg,
-            in_channels,
-            base_channels,
-            kernel_size=3,
-            stride=2,
-            padding=1,
-            bias=False)
-        self.norm1_name, norm1 = build_norm_layer(
-            self.norm_cfg, base_channels, postfix=1)
-        self.add_module(self.norm1_name, norm1)
-        self.relu = nn.ReLU(inplace=True)
-
-    def generate_regnet(self,
-                        initial_width,
-                        width_slope,
-                        width_parameter,
-                        depth,
-                        divisor=8):
-        """Generates per block width from RegNet parameters.
-
-        Args:
-            initial_width ([int]): Initial width of the backbone
-            width_slope ([float]): Slope of the quantized linear function
-            width_parameter ([int]): Parameter used to quantize the width.
-            depth ([int]): Depth of the backbone.
-            divisor (int, optional): The divisor of channels. Defaults to 8.
-
-        Returns:
-            list, int: return a list of widths of each stage and the number \
-                of stages
-        """
-        assert width_slope >= 0
-        assert initial_width > 0
-        assert width_parameter > 1
-        assert initial_width % divisor == 0
-        widths_cont = np.arange(depth) * width_slope + initial_width
-        ks = np.round(
-            np.log(widths_cont / initial_width) / np.log(width_parameter))
-        widths = initial_width * np.power(width_parameter, ks)
-        widths = np.round(np.divide(widths, divisor)) * divisor
-        num_stages = len(np.unique(widths))
-        widths, widths_cont = widths.astype(int).tolist(), widths_cont.tolist()
-        return widths, num_stages
-
-    @staticmethod
-    def quantize_float(number, divisor):
-        """Converts a float to the closest non-zero int divisible by divisor.
-
-        Args:
-            number (int): Original number to be quantized.
-            divisor (int): Divisor used to quantize the number.
-
-        Returns:
-            int: quantized number that is divisible by divisor.
-        """
-        return int(round(number / divisor) * divisor)
-
-    def adjust_width_group(self, widths, bottleneck_ratio, groups):
-        """Adjusts the compatibility of widths and groups.
-
-        Args:
-            widths (list[int]): Width of each stage.
-            bottleneck_ratio (float): Bottleneck ratio.
-            groups (int): number of groups in each stage
-
-        Returns:
-            tuple(list): The adjusted widths and groups of each stage.
-        """
-        bottleneck_width = [
-            int(w * b) for w, b in zip(widths, bottleneck_ratio)
-        ]
-        groups = [min(g, w_bot) for g, w_bot in zip(groups, bottleneck_width)]
-        bottleneck_width = [
-            self.quantize_float(w_bot, g)
-            for w_bot, g in zip(bottleneck_width, groups)
-        ]
-        widths = [
-            int(w_bot / b)
-            for w_bot, b in zip(bottleneck_width, bottleneck_ratio)
-        ]
-        return widths, groups
-
-    def get_stages_from_blocks(self, widths):
-        """Gets widths/stage_blocks of network at each stage.
-
-        Args:
-            widths (list[int]): Width in each stage.
-
-        Returns:
-            tuple(list): width and depth of each stage
-        """
-        width_diff = [
-            width != width_prev
-            for width, width_prev in zip(widths + [0], [0] + widths)
-        ]
-        stage_widths = [
-            width for width, diff in zip(widths, width_diff[:-1]) if diff
-        ]
-        stage_blocks = np.diff([
-            depth for depth, diff in zip(range(len(width_diff)), width_diff)
-            if diff
-        ]).tolist()
-        return stage_widths, stage_blocks
-
-    def forward(self, x):
-        """Forward function."""
-        x = self.conv1(x)
-        x = self.norm1(x)
-        x = self.relu(x)
-
-        outs = []
-        for i, layer_name in enumerate(self.res_layers):
-            res_layer = getattr(self, layer_name)
-            x = res_layer(x)
-            if i in self.out_indices:
-                outs.append(x)
-        return tuple(outs)
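
Note: the `generate_regnet` docstring above describes a quantized linear width schedule: per-block widths u_j = w0 + wa * j are snapped to the nearest w0 * wm**k, then rounded to a multiple of `divisor`, and the number of distinct widths becomes the number of stages. A standalone sketch of that computation (illustrative only; the parameters are the `regnetx_3.2gf` entry from `arch_settings`):

    import numpy as np

    def regnet_widths(w0, wa, wm, depth, divisor=8):
        widths_cont = np.arange(depth) * wa + w0               # u_j = w0 + wa * j
        ks = np.round(np.log(widths_cont / w0) / np.log(wm))  # nearest exponent k
        widths = w0 * np.power(wm, ks)                        # snap to w0 * wm**k
        widths = (np.round(widths / divisor) * divisor).astype(int)
        return widths.tolist(), len(np.unique(widths))

    widths, num_stages = regnet_widths(w0=88, wa=26.31, wm=2.25, depth=25)
    # num_stages == 4 for these parameters, consistent with the four level
    # outputs in the class docstring; get_stages_from_blocks then turns the
    # run-lengths of equal widths into per-stage block counts.
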
spaces/CVPR/WALT/mmdet/models/dense_heads/fovea_head.py
DELETED
@@ -1,341 +0,0 @@
-import torch
-import torch.nn as nn
-from mmcv.cnn import ConvModule, normal_init
-from mmcv.ops import DeformConv2d
-
-from mmdet.core import multi_apply, multiclass_nms
-from ..builder import HEADS
-from .anchor_free_head import AnchorFreeHead
-
-INF = 1e8
-
-
-class FeatureAlign(nn.Module):
-
-    def __init__(self,
-                 in_channels,
-                 out_channels,
-                 kernel_size=3,
-                 deform_groups=4):
-        super(FeatureAlign, self).__init__()
-        offset_channels = kernel_size * kernel_size * 2
-        self.conv_offset = nn.Conv2d(
-            4, deform_groups * offset_channels, 1, bias=False)
-        self.conv_adaption = DeformConv2d(
-            in_channels,
-            out_channels,
-            kernel_size=kernel_size,
-            padding=(kernel_size - 1) // 2,
-            deform_groups=deform_groups)
-        self.relu = nn.ReLU(inplace=True)
-
-    def init_weights(self):
-        normal_init(self.conv_offset, std=0.1)
-        normal_init(self.conv_adaption, std=0.01)
-
-    def forward(self, x, shape):
-        offset = self.conv_offset(shape)
-        x = self.relu(self.conv_adaption(x, offset))
-        return x
-
-
-@HEADS.register_module()
-class FoveaHead(AnchorFreeHead):
-    """FoveaBox: Beyond Anchor-based Object Detector
-    https://arxiv.org/abs/1904.03797
-    """
-
-    def __init__(self,
-                 num_classes,
-                 in_channels,
-                 base_edge_list=(16, 32, 64, 128, 256),
-                 scale_ranges=((8, 32), (16, 64), (32, 128), (64, 256), (128,
-                                                                         512)),
-                 sigma=0.4,
-                 with_deform=False,
-                 deform_groups=4,
-                 **kwargs):
-        self.base_edge_list = base_edge_list
-        self.scale_ranges = scale_ranges
-        self.sigma = sigma
-        self.with_deform = with_deform
-        self.deform_groups = deform_groups
-        super().__init__(num_classes, in_channels, **kwargs)
-
-    def _init_layers(self):
-        # box branch
-        super()._init_reg_convs()
-        self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1)
-
-        # cls branch
-        if not self.with_deform:
-            super()._init_cls_convs()
-            self.conv_cls = nn.Conv2d(
-                self.feat_channels, self.cls_out_channels, 3, padding=1)
-        else:
-            self.cls_convs = nn.ModuleList()
-            self.cls_convs.append(
-                ConvModule(
-                    self.feat_channels, (self.feat_channels * 4),
-                    3,
-                    stride=1,
-                    padding=1,
-                    conv_cfg=self.conv_cfg,
-                    norm_cfg=self.norm_cfg,
-                    bias=self.norm_cfg is None))
-            self.cls_convs.append(
-                ConvModule((self.feat_channels * 4), (self.feat_channels * 4),
-                           1,
-                           stride=1,
-                           padding=0,
-                           conv_cfg=self.conv_cfg,
-                           norm_cfg=self.norm_cfg,
-                           bias=self.norm_cfg is None))
-            self.feature_adaption = FeatureAlign(
-                self.feat_channels,
-                self.feat_channels,
-                kernel_size=3,
-                deform_groups=self.deform_groups)
-            self.conv_cls = nn.Conv2d(
-                int(self.feat_channels * 4),
-                self.cls_out_channels,
-                3,
-                padding=1)
-
-    def init_weights(self):
-        super().init_weights()
-        if self.with_deform:
-            self.feature_adaption.init_weights()
-
-    def forward_single(self, x):
-        cls_feat = x
-        reg_feat = x
-        for reg_layer in self.reg_convs:
-            reg_feat = reg_layer(reg_feat)
-        bbox_pred = self.conv_reg(reg_feat)
-        if self.with_deform:
-            cls_feat = self.feature_adaption(cls_feat, bbox_pred.exp())
-        for cls_layer in self.cls_convs:
-            cls_feat = cls_layer(cls_feat)
-        cls_score = self.conv_cls(cls_feat)
-        return cls_score, bbox_pred
-
-    def _get_points_single(self, *args, **kwargs):
-        y, x = super()._get_points_single(*args, **kwargs)
-        return y + 0.5, x + 0.5
-
-    def loss(self,
-             cls_scores,
-             bbox_preds,
-             gt_bbox_list,
-             gt_label_list,
-             img_metas,
-             gt_bboxes_ignore=None):
-        assert len(cls_scores) == len(bbox_preds)
-
-        featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
-        points = self.get_points(featmap_sizes, bbox_preds[0].dtype,
-                                 bbox_preds[0].device)
-        num_imgs = cls_scores[0].size(0)
-        flatten_cls_scores = [
-            cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels)
-            for cls_score in cls_scores
-        ]
-        flatten_bbox_preds = [
-            bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4)
-            for bbox_pred in bbox_preds
-        ]
-        flatten_cls_scores = torch.cat(flatten_cls_scores)
-        flatten_bbox_preds = torch.cat(flatten_bbox_preds)
-        flatten_labels, flatten_bbox_targets = self.get_targets(
-            gt_bbox_list, gt_label_list, featmap_sizes, points)
-
-        # FG cat_id: [0, num_classes -1], BG cat_id: num_classes
-        pos_inds = ((flatten_labels >= 0)
-                    & (flatten_labels < self.num_classes)).nonzero().view(-1)
-        num_pos = len(pos_inds)
-
-        loss_cls = self.loss_cls(
-            flatten_cls_scores, flatten_labels, avg_factor=num_pos + num_imgs)
-        if num_pos > 0:
-            pos_bbox_preds = flatten_bbox_preds[pos_inds]
-            pos_bbox_targets = flatten_bbox_targets[pos_inds]
-            pos_weights = pos_bbox_targets.new_zeros(
-                pos_bbox_targets.size()) + 1.0
-            loss_bbox = self.loss_bbox(
-                pos_bbox_preds,
-                pos_bbox_targets,
-                pos_weights,
-                avg_factor=num_pos)
-        else:
-            loss_bbox = torch.tensor(
-                0,
-                dtype=flatten_bbox_preds.dtype,
-                device=flatten_bbox_preds.device)
-        return dict(loss_cls=loss_cls, loss_bbox=loss_bbox)
-
-    def get_targets(self, gt_bbox_list, gt_label_list, featmap_sizes, points):
-        label_list, bbox_target_list = multi_apply(
-            self._get_target_single,
-            gt_bbox_list,
-            gt_label_list,
-            featmap_size_list=featmap_sizes,
-            point_list=points)
-        flatten_labels = [
-            torch.cat([
-                labels_level_img.flatten() for labels_level_img in labels_level
-            ]) for labels_level in zip(*label_list)
-        ]
-        flatten_bbox_targets = [
-            torch.cat([
-                bbox_targets_level_img.reshape(-1, 4)
-                for bbox_targets_level_img in bbox_targets_level
-            ]) for bbox_targets_level in zip(*bbox_target_list)
-        ]
-        flatten_labels = torch.cat(flatten_labels)
-        flatten_bbox_targets = torch.cat(flatten_bbox_targets)
-        return flatten_labels, flatten_bbox_targets
-
-    def _get_target_single(self,
-                           gt_bboxes_raw,
-                           gt_labels_raw,
-                           featmap_size_list=None,
-                           point_list=None):
-
-        gt_areas = torch.sqrt((gt_bboxes_raw[:, 2] - gt_bboxes_raw[:, 0]) *
-                              (gt_bboxes_raw[:, 3] - gt_bboxes_raw[:, 1]))
-        label_list = []
-        bbox_target_list = []
-        # for each pyramid, find the cls and box target
-        for base_len, (lower_bound, upper_bound), stride, featmap_size, \
-            (y, x) in zip(self.base_edge_list, self.scale_ranges,
-                          self.strides, featmap_size_list, point_list):
-            # FG cat_id: [0, num_classes -1], BG cat_id: num_classes
-            labels = gt_labels_raw.new_zeros(featmap_size) + self.num_classes
-            bbox_targets = gt_bboxes_raw.new(featmap_size[0], featmap_size[1],
-                                             4) + 1
-            # scale assignment
-            hit_indices = ((gt_areas >= lower_bound) &
-                           (gt_areas <= upper_bound)).nonzero().flatten()
-            if len(hit_indices) == 0:
-                label_list.append(labels)
-                bbox_target_list.append(torch.log(bbox_targets))
-                continue
-            _, hit_index_order = torch.sort(-gt_areas[hit_indices])
-            hit_indices = hit_indices[hit_index_order]
-            gt_bboxes = gt_bboxes_raw[hit_indices, :] / stride
-            gt_labels = gt_labels_raw[hit_indices]
-            half_w = 0.5 * (gt_bboxes[:, 2] - gt_bboxes[:, 0])
-            half_h = 0.5 * (gt_bboxes[:, 3] - gt_bboxes[:, 1])
-            # valid fovea area: left, right, top, down
-            pos_left = torch.ceil(
-                gt_bboxes[:, 0] + (1 - self.sigma) * half_w - 0.5).long().\
-                clamp(0, featmap_size[1] - 1)
-            pos_right = torch.floor(
-                gt_bboxes[:, 0] + (1 + self.sigma) * half_w - 0.5).long().\
-                clamp(0, featmap_size[1] - 1)
-            pos_top = torch.ceil(
-                gt_bboxes[:, 1] + (1 - self.sigma) * half_h - 0.5).long().\
-                clamp(0, featmap_size[0] - 1)
-            pos_down = torch.floor(
-                gt_bboxes[:, 1] + (1 + self.sigma) * half_h - 0.5).long().\
-                clamp(0, featmap_size[0] - 1)
-            for px1, py1, px2, py2, label, (gt_x1, gt_y1, gt_x2, gt_y2) in \
-                    zip(pos_left, pos_top, pos_right, pos_down, gt_labels,
-                        gt_bboxes_raw[hit_indices, :]):
-                labels[py1:py2 + 1, px1:px2 + 1] = label
-                bbox_targets[py1:py2 + 1, px1:px2 + 1, 0] = \
-                    (stride * x[py1:py2 + 1, px1:px2 + 1] - gt_x1) / base_len
-                bbox_targets[py1:py2 + 1, px1:px2 + 1, 1] = \
-                    (stride * y[py1:py2 + 1, px1:px2 + 1] - gt_y1) / base_len
-                bbox_targets[py1:py2 + 1, px1:px2 + 1, 2] = \
-                    (gt_x2 - stride * x[py1:py2 + 1, px1:px2 + 1]) / base_len
-                bbox_targets[py1:py2 + 1, px1:px2 + 1, 3] = \
-                    (gt_y2 - stride * y[py1:py2 + 1, px1:px2 + 1]) / base_len
-            bbox_targets = bbox_targets.clamp(min=1. / 16, max=16.)
-            label_list.append(labels)
-            bbox_target_list.append(torch.log(bbox_targets))
-        return label_list, bbox_target_list
-
-    def get_bboxes(self,
-                   cls_scores,
-                   bbox_preds,
-                   img_metas,
-                   cfg=None,
-                   rescale=None):
-        assert len(cls_scores) == len(bbox_preds)
-        num_levels = len(cls_scores)
-        featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
-        points = self.get_points(
-            featmap_sizes,
-            bbox_preds[0].dtype,
-            bbox_preds[0].device,
-            flatten=True)
-        result_list = []
-        for img_id in range(len(img_metas)):
-            cls_score_list = [
-                cls_scores[i][img_id].detach() for i in range(num_levels)
-            ]
-            bbox_pred_list = [
-                bbox_preds[i][img_id].detach() for i in range(num_levels)
-            ]
-            img_shape = img_metas[img_id]['img_shape']
-            scale_factor = img_metas[img_id]['scale_factor']
-            det_bboxes = self._get_bboxes_single(cls_score_list,
-                                                 bbox_pred_list, featmap_sizes,
-                                                 points, img_shape,
-                                                 scale_factor, cfg, rescale)
-            result_list.append(det_bboxes)
-        return result_list
-
-    def _get_bboxes_single(self,
-                           cls_scores,
-                           bbox_preds,
-                           featmap_sizes,
-                           point_list,
-                           img_shape,
-                           scale_factor,
-                           cfg,
-                           rescale=False):
-        cfg = self.test_cfg if cfg is None else cfg
-        assert len(cls_scores) == len(bbox_preds) == len(point_list)
-        det_bboxes = []
-        det_scores = []
-        for cls_score, bbox_pred, featmap_size, stride, base_len, (y, x) \
-                in zip(cls_scores, bbox_preds, featmap_sizes, self.strides,
-                       self.base_edge_list, point_list):
-            assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
-            scores = cls_score.permute(1, 2, 0).reshape(
-                -1, self.cls_out_channels).sigmoid()
-            bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4).exp()
-            nms_pre = cfg.get('nms_pre', -1)
-            if (nms_pre > 0) and (scores.shape[0] > nms_pre):
-                max_scores, _ = scores.max(dim=1)
-                _, topk_inds = max_scores.topk(nms_pre)
-                bbox_pred = bbox_pred[topk_inds, :]
-                scores = scores[topk_inds, :]
-                y = y[topk_inds]
-                x = x[topk_inds]
-            x1 = (stride * x - base_len * bbox_pred[:, 0]).\
-                clamp(min=0, max=img_shape[1] - 1)
-            y1 = (stride * y - base_len * bbox_pred[:, 1]).\
-                clamp(min=0, max=img_shape[0] - 1)
-            x2 = (stride * x + base_len * bbox_pred[:, 2]).\
-                clamp(min=0, max=img_shape[1] - 1)
-            y2 = (stride * y + base_len * bbox_pred[:, 3]).\
-                clamp(min=0, max=img_shape[0] - 1)
-            bboxes = torch.stack([x1, y1, x2, y2], -1)
-            det_bboxes.append(bboxes)
-            det_scores.append(scores)
-        det_bboxes = torch.cat(det_bboxes)
-        if rescale:
-            det_bboxes /= det_bboxes.new_tensor(scale_factor)
-        det_scores = torch.cat(det_scores)
-        padding = det_scores.new_zeros(det_scores.shape[0], 1)
-        # remind that we set FG labels to [0, num_class-1] since mmdet v2.0
-        # BG cat_id: num_class
-        det_scores = torch.cat([det_scores, padding], dim=1)
-        det_bboxes, det_labels = multiclass_nms(det_bboxes, det_scores,
-                                                cfg.score_thr, cfg.nms,
-                                                cfg.max_per_img)
-        return det_bboxes, det_labels
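
Note: two pieces of `_get_target_single` above are easy to miss. First, a ground-truth box is routed to every pyramid level whose `scale_ranges` entry brackets sqrt(w * h), and the overlapping ranges deliberately let one box supervise two adjacent levels. A small sketch of just that routing step (a hypothetical helper for illustration, not part of mmdet):

    import math

    scale_ranges = ((8, 32), (16, 64), (32, 128), (64, 256), (128, 512))

    def assigned_levels(box):
        # box = (x1, y1, x2, y2); the scale measure is sqrt(area),
        # exactly as gt_areas is computed in _get_target_single.
        scale = math.sqrt((box[2] - box[0]) * (box[3] - box[1]))
        return [lvl for lvl, (lo, hi) in enumerate(scale_ranges)
                if lo <= scale <= hi]

    print(assigned_levels((0, 0, 48, 48)))  # scale 48 -> [1, 2]

Second, within each selected level only a central region of width and height `sigma` times the box's (40% by default) is marked positive; that shrunken "fovea" is what gives the head its name.
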
spaces/CVPR/WALT/mmdet/utils/profiling.py
DELETED
@@ -1,39 +0,0 @@
-import contextlib
-import sys
-import time
-
-import torch
-
-if sys.version_info >= (3, 7):
-
-    @contextlib.contextmanager
-    def profile_time(trace_name,
-                     name,
-                     enabled=True,
-                     stream=None,
-                     end_stream=None):
-        """Print time spent by CPU and GPU.
-
-        Useful as a temporary context manager to find sweet spots of code
-        suitable for async implementation.
-        """
-        if (not enabled) or not torch.cuda.is_available():
-            yield
-            return
-        stream = stream if stream else torch.cuda.current_stream()
-        end_stream = end_stream if end_stream else stream
-        start = torch.cuda.Event(enable_timing=True)
-        end = torch.cuda.Event(enable_timing=True)
-        stream.record_event(start)
-        try:
-            cpu_start = time.monotonic()
-            yield
-        finally:
-            cpu_end = time.monotonic()
-            end_stream.record_event(end)
-            end.synchronize()
-            cpu_time = (cpu_end - cpu_start) * 1000
-            gpu_time = start.elapsed_time(end)
-            msg = f'{trace_name} {name} cpu_time {cpu_time:.2f} ms '
-            msg += f'gpu_time {gpu_time:.2f} ms stream {stream}'
-            print(msg, end_stream)
|
|
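A minimal usage sketch for the context manager deleted above, assuming a CUDA device and the module path as it existed before deletion; the trace labels are arbitrary:

    import torch
    from mmdet.utils.profiling import profile_time

    x = torch.randn(1024, 1024, device='cuda')
    # Prints CPU and GPU time for the block on exit (no-op without CUDA).
    with profile_time('demo_trace', 'matmul'):
        y = x @ x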
spaces/Caoyunkang/Segment-Any-Anomaly/SAM/scripts/export_onnx_model.py
DELETED
@@ -1,204 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from SAM import build_sam, build_sam_vit_b, build_sam_vit_l
-from SAM.utils.onnx import SamOnnxModel
-
-import argparse
-import warnings
-
-try:
-    import onnxruntime  # type: ignore
-
-    onnxruntime_exists = True
-except ImportError:
-    onnxruntime_exists = False
-
-parser = argparse.ArgumentParser(
-    description="Export the SAM prompt encoder and mask decoder to an ONNX model."
-)
-
-parser.add_argument(
-    "--checkpoint", type=str, required=True, help="The path to the SAM model checkpoint."
-)
-
-parser.add_argument(
-    "--output", type=str, required=True, help="The filename to save the ONNX model to."
-)
-
-parser.add_argument(
-    "--model-type",
-    type=str,
-    default="default",
-    help="In ['default', 'vit_b', 'vit_l']. Which type of SAM model to export.",
-)
-
-parser.add_argument(
-    "--return-single-mask",
-    action="store_true",
-    help=(
-        "If true, the exported ONNX model will only return the best mask, "
-        "instead of returning multiple masks. For high resolution images "
-        "this can improve runtime when upscaling masks is expensive."
-    ),
-)
-
-parser.add_argument(
-    "--opset",
-    type=int,
-    default=17,
-    help="The ONNX opset version to use. Must be >=11",
-)
-
-parser.add_argument(
-    "--quantize-out",
-    type=str,
-    default=None,
-    help=(
-        "If set, will quantize the model and save it with this name. "
-        "Quantization is performed with quantize_dynamic from onnxruntime.quantization.quantize."
-    ),
-)
-
-parser.add_argument(
-    "--gelu-approximate",
-    action="store_true",
-    help=(
-        "Replace GELU operations with approximations using tanh. Useful "
-        "for some runtimes that have slow or unimplemented erf ops, used in GELU."
-    ),
-)
-
-parser.add_argument(
-    "--use-stability-score",
-    action="store_true",
-    help=(
-        "Replaces the model's predicted mask quality score with the stability "
-        "score calculated on the low resolution masks using an offset of 1.0. "
-    ),
-)
-
-parser.add_argument(
-    "--return-extra-metrics",
-    action="store_true",
-    help=(
-        "The model will return five results: (masks, scores, stability_scores, "
-        "areas, low_res_logits) instead of the usual three. This can be "
-        "significantly slower for high resolution outputs."
-    ),
-)
-
-
-def run_export(
-    model_type: str,
-    checkpoint: str,
-    output: str,
-    opset: int,
-    return_single_mask: bool,
-    gelu_approximate: bool = False,
-    use_stability_score: bool = False,
-    return_extra_metrics=False,
-):
-    print("Loading model...")
-    if model_type == "vit_b":
-        sam = build_sam_vit_b(checkpoint)
-    elif model_type == "vit_l":
-        sam = build_sam_vit_l(checkpoint)
-    else:
-        sam = build_sam(checkpoint)
-
-    onnx_model = SamOnnxModel(
-        model=sam,
-        return_single_mask=return_single_mask,
-        use_stability_score=use_stability_score,
-        return_extra_metrics=return_extra_metrics,
-    )
-
-    if gelu_approximate:
-        for n, m in onnx_model.named_modules():
-            if isinstance(m, torch.nn.GELU):
-                m.approximate = "tanh"
-
-    dynamic_axes = {
-        "point_coords": {1: "num_points"},
-        "point_labels": {1: "num_points"},
-    }
-
-    embed_dim = sam.prompt_encoder.embed_dim
-    embed_size = sam.prompt_encoder.image_embedding_size
-    mask_input_size = [4 * x for x in embed_size]
-    dummy_inputs = {
-        "image_embeddings": torch.randn(1, embed_dim, *embed_size, dtype=torch.float),
-        "point_coords": torch.randint(low=0, high=1024, size=(1, 5, 2), dtype=torch.float),
-        "point_labels": torch.randint(low=0, high=4, size=(1, 5), dtype=torch.float),
-        "mask_input": torch.randn(1, 1, *mask_input_size, dtype=torch.float),
-        "has_mask_input": torch.tensor([1], dtype=torch.float),
-        "orig_im_size": torch.tensor([1500, 2250], dtype=torch.float),
-    }
-
-    _ = onnx_model(**dummy_inputs)
-
-    output_names = ["masks", "iou_predictions", "low_res_masks"]
-
-    with warnings.catch_warnings():
-        warnings.filterwarnings("ignore", category=torch.jit.TracerWarning)
-        warnings.filterwarnings("ignore", category=UserWarning)
-        with open(output, "wb") as f:
-            print(f"Exporing onnx model to {output}...")
-            torch.onnx.export(
-                onnx_model,
-                tuple(dummy_inputs.values()),
-                f,
-                export_params=True,
-                verbose=False,
-                opset_version=opset,
-                do_constant_folding=True,
-                input_names=list(dummy_inputs.keys()),
-                output_names=output_names,
-                dynamic_axes=dynamic_axes,
-            )
-
-    if onnxruntime_exists:
-        ort_inputs = {k: to_numpy(v) for k, v in dummy_inputs.items()}
-        ort_session = onnxruntime.InferenceSession(output)
-        _ = ort_session.run(None, ort_inputs)
-        print("Model has successfully been run with ONNXRuntime.")
-
-
-def to_numpy(tensor):
-    return tensor.cpu().numpy()
-
-
-if __name__ == "__main__":
-    args = parser.parse_args()
-    run_export(
-        model_type=args.model_type,
-        checkpoint=args.checkpoint,
-        output=args.output,
-        opset=args.opset,
-        return_single_mask=args.return_single_mask,
-        gelu_approximate=args.gelu_approximate,
-        use_stability_score=args.use_stability_score,
-        return_extra_metrics=args.return_extra_metrics,
-    )
-
-    if args.quantize_out is not None:
-        assert onnxruntime_exists, "onnxruntime is required to quantize the model."
-        from onnxruntime.quantization import QuantType  # type: ignore
-        from onnxruntime.quantization.quantize import quantize_dynamic  # type: ignore
-
-        print(f"Quantizing model and writing to {args.quantize_out}...")
-        quantize_dynamic(
-            model_input=args.output,
-            model_output=args.quantize_out,
-            optimize_model=True,
-            per_channel=False,
-            reduce_range=False,
-            weight_type=QuantType.QUInt8,
-        )
-        print("Done!")
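Besides the argparse entry point, the deleted script's run_export() can be driven directly from Python. A sketch under the assumption that the script is importable as a module; the checkpoint and output paths are hypothetical:

    from SAM.scripts.export_onnx_model import run_export

    run_export(
        model_type="vit_b",              # selects build_sam_vit_b above
        checkpoint="sam_vit_b.pth",      # hypothetical checkpoint path
        output="sam_decoder.onnx",       # hypothetical output path
        opset=17,
        return_single_mask=True,         # export only the best mask
    )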
spaces/CofAI/chat.b4/client/css/options.css
DELETED
@@ -1,10 +0,0 @@
-.options-container {
-    display: flex;
-    flex-wrap: wrap;
-}
-
-@media screen and (max-width: 990px) {
-    .options-container {
-        justify-content: space-between;
-    }
-}
spaces/CofAI/chat/client/css/message.css
DELETED
@@ -1,65 +0,0 @@
-.message {
-    width: 100%;
-    overflow-wrap: break-word;
-    display: flex;
-    gap: var(--section-gap);
-    padding: var(--section-gap);
-    padding-bottom: 0;
-}
-
-.message:last-child {
-    animation: 0.6s show_message;
-}
-
-@keyframes show_message {
-    from {
-        transform: translateY(10px);
-        opacity: 0;
-    }
-}
-
-.message .avatar-container img {
-    max-width: 48px;
-    max-height: 48px;
-    box-shadow: 0.4px 0.5px 0.7px -2px rgba(0, 0, 0, 0.08), 1.1px 1.3px 2px -2px rgba(0, 0, 0, 0.041),
-        2.7px 3px 4.8px -2px rgba(0, 0, 0, 0.029), 9px 10px 16px -2px rgba(0, 0, 0, 0.022);
-}
-
-.message .content {
-    display: flex;
-    flex-direction: column;
-    width: 90%;
-    gap: 18px;
-}
-
-.message .content p,
-.message .content li,
-.message .content code {
-    font-size: 1rem;
-    line-height: 1.3;
-}
-
-@media screen and (max-height: 720px) {
-    .message {
-        padding: 12px;
-        gap: 0;
-    }
-
-    .message .content {
-        margin-left: 8px;
-        width: 80%;
-    }
-
-    .message .avatar-container img {
-        max-width: 32px;
-        max-height: 32px;
-    }
-
-    .message .content,
-    .message .content p,
-    .message .content li,
-    .message .content code {
-        font-size: 0.875rem;
-        line-height: 1.3;
-    }
-}
spaces/DEEMOSTECH/ChatAvatar/index.html
DELETED
@@ -1 +0,0 @@
-<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="/favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="DreamFace"/><link rel="apple-touch-icon" href="/logo192.png"/><link rel="manifest" href="/manifest.json"/><title>DreamFace</title><script defer="defer" src="/static/js/main.c187623b.js"></script><link href="/static/css/main.b0d5db9d.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/configTools.py
DELETED
@@ -1,348 +0,0 @@
-"""
-Code of the config system; not related to fontTools or fonts in particular.
-
-The options that are specific to fontTools are in :mod:`fontTools.config`.
-
-To create your own config system, you need to create an instance of
-:class:`Options`, and a subclass of :class:`AbstractConfig` with its
-``options`` class variable set to your instance of Options.
-
-"""
-from __future__ import annotations
-
-import logging
-from dataclasses import dataclass
-from typing import (
-    Any,
-    Callable,
-    ClassVar,
-    Dict,
-    Iterable,
-    Mapping,
-    MutableMapping,
-    Optional,
-    Set,
-    Union,
-)
-
-
-log = logging.getLogger(__name__)
-
-__all__ = [
-    "AbstractConfig",
-    "ConfigAlreadyRegisteredError",
-    "ConfigError",
-    "ConfigUnknownOptionError",
-    "ConfigValueParsingError",
-    "ConfigValueValidationError",
-    "Option",
-    "Options",
-]
-
-
-class ConfigError(Exception):
-    """Base exception for the config module."""
-
-
-class ConfigAlreadyRegisteredError(ConfigError):
-    """Raised when a module tries to register a configuration option that
-    already exists.
-
-    Should not be raised too much really, only when developing new fontTools
-    modules.
-    """
-
-    def __init__(self, name):
-        super().__init__(f"Config option {name} is already registered.")
-
-
-class ConfigValueParsingError(ConfigError):
-    """Raised when a configuration value cannot be parsed."""
-
-    def __init__(self, name, value):
-        super().__init__(
-            f"Config option {name}: value cannot be parsed (given {repr(value)})"
-        )
-
-
-class ConfigValueValidationError(ConfigError):
-    """Raised when a configuration value cannot be validated."""
-
-    def __init__(self, name, value):
-        super().__init__(
-            f"Config option {name}: value is invalid (given {repr(value)})"
-        )
-
-
-class ConfigUnknownOptionError(ConfigError):
-    """Raised when a configuration option is unknown."""
-
-    def __init__(self, option_or_name):
-        name = (
-            f"'{option_or_name.name}' (id={id(option_or_name)})>"
-            if isinstance(option_or_name, Option)
-            else f"'{option_or_name}'"
-        )
-        super().__init__(f"Config option {name} is unknown")
-
-
-# eq=False because Options are unique, not fungible objects
-@dataclass(frozen=True, eq=False)
-class Option:
-    name: str
-    """Unique name identifying the option (e.g. package.module:MY_OPTION)."""
-    help: str
-    """Help text for this option."""
-    default: Any
-    """Default value for this option."""
-    parse: Callable[[str], Any]
-    """Turn input (e.g. string) into proper type. Only when reading from file."""
-    validate: Optional[Callable[[Any], bool]] = None
-    """Return true if the given value is an acceptable value."""
-
-    @staticmethod
-    def parse_optional_bool(v: str) -> Optional[bool]:
-        s = str(v).lower()
-        if s in {"0", "no", "false"}:
-            return False
-        if s in {"1", "yes", "true"}:
-            return True
-        if s in {"auto", "none"}:
-            return None
-        raise ValueError("invalid optional bool: {v!r}")
-
-    @staticmethod
-    def validate_optional_bool(v: Any) -> bool:
-        return v is None or isinstance(v, bool)
-
-
-class Options(Mapping):
-    """Registry of available options for a given config system.
-
-    Define new options using the :meth:`register()` method.
-
-    Access existing options using the Mapping interface.
-    """
-
-    __options: Dict[str, Option]
-
-    def __init__(self, other: "Options" = None) -> None:
-        self.__options = {}
-        if other is not None:
-            for option in other.values():
-                self.register_option(option)
-
-    def register(
-        self,
-        name: str,
-        help: str,
-        default: Any,
-        parse: Callable[[str], Any],
-        validate: Optional[Callable[[Any], bool]] = None,
-    ) -> Option:
-        """Create and register a new option."""
-        return self.register_option(Option(name, help, default, parse, validate))
-
-    def register_option(self, option: Option) -> Option:
-        """Register a new option."""
-        name = option.name
-        if name in self.__options:
-            raise ConfigAlreadyRegisteredError(name)
-        self.__options[name] = option
-        return option
-
-    def is_registered(self, option: Option) -> bool:
-        """Return True if the same option object is already registered."""
-        return self.__options.get(option.name) is option
-
-    def __getitem__(self, key: str) -> Option:
-        return self.__options.__getitem__(key)
-
-    def __iter__(self) -> Iterator[str]:
-        return self.__options.__iter__()
-
-    def __len__(self) -> int:
-        return self.__options.__len__()
-
-    def __repr__(self) -> str:
-        return (
-            f"{self.__class__.__name__}({{\n"
-            + "".join(
-                f"    {k!r}: Option(default={v.default!r}, ...),\n"
-                for k, v in self.__options.items()
-            )
-            + "})"
-        )
-
-
-_USE_GLOBAL_DEFAULT = object()
-
-
-class AbstractConfig(MutableMapping):
-    """
-    Create a set of config values, optionally pre-filled with values from
-    the given dictionary or pre-existing config object.
-
-    The class implements the MutableMapping protocol keyed by option name (`str`).
-    For convenience its methods accept either Option or str as the key parameter.
-
-    .. seealso:: :meth:`set()`
-
-    This config class is abstract because it needs its ``options`` class
-    var to be set to an instance of :class:`Options` before it can be
-    instanciated and used.
-
-    .. code:: python
-
-        class MyConfig(AbstractConfig):
-            options = Options()
-
-        MyConfig.register_option( "test:option_name", "This is an option", 0, int, lambda v: isinstance(v, int))
-
-        cfg = MyConfig({"test:option_name": 10})
-
-    """
-
-    options: ClassVar[Options]
-
-    @classmethod
-    def register_option(
-        cls,
-        name: str,
-        help: str,
-        default: Any,
-        parse: Callable[[str], Any],
-        validate: Optional[Callable[[Any], bool]] = None,
-    ) -> Option:
-        """Register an available option in this config system."""
-        return cls.options.register(
-            name, help=help, default=default, parse=parse, validate=validate
-        )
-
-    _values: Dict[str, Any]
-
-    def __init__(
-        self,
-        values: Union[AbstractConfig, Dict[Union[Option, str], Any]] = {},
-        parse_values: bool = False,
-        skip_unknown: bool = False,
-    ):
-        self._values = {}
-        values_dict = values._values if isinstance(values, AbstractConfig) else values
-        for name, value in values_dict.items():
-            self.set(name, value, parse_values, skip_unknown)
-
-    def _resolve_option(self, option_or_name: Union[Option, str]) -> Option:
-        if isinstance(option_or_name, Option):
-            option = option_or_name
-            if not self.options.is_registered(option):
-                raise ConfigUnknownOptionError(option)
-            return option
-        elif isinstance(option_or_name, str):
-            name = option_or_name
-            try:
-                return self.options[name]
-            except KeyError:
-                raise ConfigUnknownOptionError(name)
-        else:
-            raise TypeError(
-                "expected Option or str, found "
-                f"{type(option_or_name).__name__}: {option_or_name!r}"
-            )
-
-    def set(
-        self,
-        option_or_name: Union[Option, str],
-        value: Any,
-        parse_values: bool = False,
-        skip_unknown: bool = False,
-    ):
-        """Set the value of an option.
-
-        Args:
-            * `option_or_name`: an `Option` object or its name (`str`).
-            * `value`: the value to be assigned to given option.
-            * `parse_values`: parse the configuration value from a string into
-                its proper type, as per its `Option` object. The default
-                behavior is to raise `ConfigValueValidationError` when the value
-                is not of the right type. Useful when reading options from a
-                file type that doesn't support as many types as Python.
-            * `skip_unknown`: skip unknown configuration options. The default
-                behaviour is to raise `ConfigUnknownOptionError`. Useful when
-                reading options from a configuration file that has extra entries
-                (e.g. for a later version of fontTools)
-        """
-        try:
-            option = self._resolve_option(option_or_name)
-        except ConfigUnknownOptionError as e:
-            if skip_unknown:
-                log.debug(str(e))
-                return
-            raise
-
-        # Can be useful if the values come from a source that doesn't have
-        # strict typing (.ini file? Terminal input?)
-        if parse_values:
-            try:
-                value = option.parse(value)
-            except Exception as e:
-                raise ConfigValueParsingError(option.name, value) from e
-
-        if option.validate is not None and not option.validate(value):
-            raise ConfigValueValidationError(option.name, value)
-
-        self._values[option.name] = value
-
-    def get(
-        self, option_or_name: Union[Option, str], default: Any = _USE_GLOBAL_DEFAULT
-    ) -> Any:
-        """
-        Get the value of an option. The value which is returned is the first
-        provided among:
-
-        1. a user-provided value in the options's ``self._values`` dict
-        2. a caller-provided default value to this method call
-        3. the global default for the option provided in ``fontTools.config``
-
-        This is to provide the ability to migrate progressively from config
-        options passed as arguments to fontTools APIs to config options read
-        from the current TTFont, e.g.
-
-        .. code:: python
-
-            def fontToolsAPI(font, some_option):
-                value = font.cfg.get("someLib.module:SOME_OPTION", some_option)
-                # use value
-
-        That way, the function will work the same for users of the API that
-        still pass the option to the function call, but will favour the new
-        config mechanism if the given font specifies a value for that option.
-        """
-        option = self._resolve_option(option_or_name)
-        if option.name in self._values:
-            return self._values[option.name]
-        if default is not _USE_GLOBAL_DEFAULT:
-            return default
-        return option.default
-
-    def copy(self):
-        return self.__class__(self._values)
-
-    def __getitem__(self, option_or_name: Union[Option, str]) -> Any:
-        return self.get(option_or_name)
-
-    def __setitem__(self, option_or_name: Union[Option, str], value: Any) -> None:
-        return self.set(option_or_name, value)
-
-    def __delitem__(self, option_or_name: Union[Option, str]) -> None:
-        option = self._resolve_option(option_or_name)
-        del self._values[option.name]
-
-    def __iter__(self) -> Iterable[str]:
-        return self._values.__iter__()
-
-    def __len__(self) -> int:
-        return len(self._values)
-
-    def __repr__(self) -> str:
-        return f"{self.__class__.__name__}({repr(self._values)})"
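A usage sketch following the module's own docstring example; the option name and values are hypothetical:

    from fontTools.misc.configTools import AbstractConfig, Options

    class MyConfig(AbstractConfig):
        options = Options()

    MyConfig.register_option(
        "test:option_name", "This is an option", 0, int,
        lambda v: isinstance(v, int),
    )

    cfg = MyConfig({"test:option_name": 10})
    print(cfg.get("test:option_name"))         # 10: the user-provided value
    print(MyConfig().get("test:option_name"))  # 0: falls back to the registered default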
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/colors.py
DELETED
@@ -1,359 +0,0 @@
-from __future__ import annotations
-
-
-class Color:
-    all = []
-
-    def __init__(
-        self,
-        c50: str,
-        c100: str,
-        c200: str,
-        c300: str,
-        c400: str,
-        c500: str,
-        c600: str,
-        c700: str,
-        c800: str,
-        c900: str,
-        c950: str,
-        name: str | None = None,
-    ):
-        self.c50 = c50
-        self.c100 = c100
-        self.c200 = c200
-        self.c300 = c300
-        self.c400 = c400
-        self.c500 = c500
-        self.c600 = c600
-        self.c700 = c700
-        self.c800 = c800
-        self.c900 = c900
-        self.c950 = c950
-        self.name = name
-        Color.all.append(self)
-
-    def expand(self) -> list[str]:
-        return [
-            self.c50,
-            self.c100,
-            self.c200,
-            self.c300,
-            self.c400,
-            self.c500,
-            self.c600,
-            self.c700,
-            self.c800,
-            self.c900,
-            self.c950,
-        ]
-
-
-slate = Color(
-    name="slate",
-    c50="#f8fafc",
-    c100="#f1f5f9",
-    c200="#e2e8f0",
-    c300="#cbd5e1",
-    c400="#94a3b8",
-    c500="#64748b",
-    c600="#475569",
-    c700="#334155",
-    c800="#1e293b",
-    c900="#0f172a",
-    c950="#0a0f1e",
-)
-gray = Color(
-    name="gray",
-    c50="#f9fafb",
-    c100="#f3f4f6",
-    c200="#e5e7eb",
-    c300="#d1d5db",
-    c400="#9ca3af",
-    c500="#6b7280",
-    c600="#4b5563",
-    c700="#374151",
-    c800="#1f2937",
-    c900="#111827",
-    c950="#0b0f19",
-)
-zinc = Color(
-    name="zinc",
-    c50="#fafafa",
-    c100="#f4f4f5",
-    c200="#e4e4e7",
-    c300="#d4d4d8",
-    c400="#a1a1aa",
-    c500="#71717a",
-    c600="#52525b",
-    c700="#3f3f46",
-    c800="#27272a",
-    c900="#18181b",
-    c950="#0f0f11",
-)
-neutral = Color(
-    name="neutral",
-    c50="#fafafa",
-    c100="#f5f5f5",
-    c200="#e5e5e5",
-    c300="#d4d4d4",
-    c400="#a3a3a3",
-    c500="#737373",
-    c600="#525252",
-    c700="#404040",
-    c800="#262626",
-    c900="#171717",
-    c950="#0f0f0f",
-)
-stone = Color(
-    name="stone",
-    c50="#fafaf9",
-    c100="#f5f5f4",
-    c200="#e7e5e4",
-    c300="#d6d3d1",
-    c400="#a8a29e",
-    c500="#78716c",
-    c600="#57534e",
-    c700="#44403c",
-    c800="#292524",
-    c900="#1c1917",
-    c950="#0f0e0d",
-)
-red = Color(
-    name="red",
-    c50="#fef2f2",
-    c100="#fee2e2",
-    c200="#fecaca",
-    c300="#fca5a5",
-    c400="#f87171",
-    c500="#ef4444",
-    c600="#dc2626",
-    c700="#b91c1c",
-    c800="#991b1b",
-    c900="#7f1d1d",
-    c950="#6c1e1e",
-)
-orange = Color(
-    name="orange",
-    c50="#fff7ed",
-    c100="#ffedd5",
-    c200="#fed7aa",
-    c300="#fdba74",
-    c400="#fb923c",
-    c500="#f97316",
-    c600="#ea580c",
-    c700="#c2410c",
-    c800="#9a3412",
-    c900="#7c2d12",
-    c950="#6c2e12",
-)
-amber = Color(
-    name="amber",
-    c50="#fffbeb",
-    c100="#fef3c7",
-    c200="#fde68a",
-    c300="#fcd34d",
-    c400="#fbbf24",
-    c500="#f59e0b",
-    c600="#d97706",
-    c700="#b45309",
-    c800="#92400e",
-    c900="#78350f",
-    c950="#6c370f",
-)
-yellow = Color(
-    name="yellow",
-    c50="#fefce8",
-    c100="#fef9c3",
-    c200="#fef08a",
-    c300="#fde047",
-    c400="#facc15",
-    c500="#eab308",
-    c600="#ca8a04",
-    c700="#a16207",
-    c800="#854d0e",
-    c900="#713f12",
-    c950="#653b12",
-)
-lime = Color(
-    name="lime",
-    c50="#f7fee7",
-    c100="#ecfccb",
-    c200="#d9f99d",
-    c300="#bef264",
-    c400="#a3e635",
-    c500="#84cc16",
-    c600="#65a30d",
-    c700="#4d7c0f",
-    c800="#3f6212",
-    c900="#365314",
-    c950="#2f4e14",
-)
-green = Color(
-    name="green",
-    c50="#f0fdf4",
-    c100="#dcfce7",
-    c200="#bbf7d0",
-    c300="#86efac",
-    c400="#4ade80",
-    c500="#22c55e",
-    c600="#16a34a",
-    c700="#15803d",
-    c800="#166534",
-    c900="#14532d",
-    c950="#134e28",
-)
-emerald = Color(
-    name="emerald",
-    c50="#ecfdf5",
-    c100="#d1fae5",
-    c200="#a7f3d0",
-    c300="#6ee7b7",
-    c400="#34d399",
-    c500="#10b981",
-    c600="#059669",
-    c700="#047857",
-    c800="#065f46",
-    c900="#064e3b",
-    c950="#054436",
-)
-teal = Color(
-    name="teal",
-    c50="#f0fdfa",
-    c100="#ccfbf1",
-    c200="#99f6e4",
-    c300="#5eead4",
-    c400="#2dd4bf",
-    c500="#14b8a6",
-    c600="#0d9488",
-    c700="#0f766e",
-    c800="#115e59",
-    c900="#134e4a",
-    c950="#12443e",
-)
-cyan = Color(
-    name="cyan",
-    c50="#ecfeff",
-    c100="#cffafe",
-    c200="#a5f3fc",
-    c300="#67e8f9",
-    c400="#22d3ee",
-    c500="#06b6d4",
-    c600="#0891b2",
-    c700="#0e7490",
-    c800="#155e75",
-    c900="#164e63",
-    c950="#14455c",
-)
-sky = Color(
-    name="sky",
-    c50="#f0f9ff",
-    c100="#e0f2fe",
-    c200="#bae6fd",
-    c300="#7dd3fc",
-    c400="#38bdf8",
-    c500="#0ea5e9",
-    c600="#0284c7",
-    c700="#0369a1",
-    c800="#075985",
-    c900="#0c4a6e",
-    c950="#0b4165",
-)
-blue = Color(
-    name="blue",
-    c50="#eff6ff",
-    c100="#dbeafe",
-    c200="#bfdbfe",
-    c300="#93c5fd",
-    c400="#60a5fa",
-    c500="#3b82f6",
-    c600="#2563eb",
-    c700="#1d4ed8",
-    c800="#1e40af",
-    c900="#1e3a8a",
-    c950="#1d3660",
-)
-indigo = Color(
-    name="indigo",
-    c50="#eef2ff",
-    c100="#e0e7ff",
-    c200="#c7d2fe",
-    c300="#a5b4fc",
-    c400="#818cf8",
-    c500="#6366f1",
-    c600="#4f46e5",
-    c700="#4338ca",
-    c800="#3730a3",
-    c900="#312e81",
-    c950="#2b2c5e",
-)
-violet = Color(
-    name="violet",
-    c50="#f5f3ff",
-    c100="#ede9fe",
-    c200="#ddd6fe",
-    c300="#c4b5fd",
-    c400="#a78bfa",
-    c500="#8b5cf6",
-    c600="#7c3aed",
-    c700="#6d28d9",
-    c800="#5b21b6",
-    c900="#4c1d95",
-    c950="#431d7f",
-)
-purple = Color(
-    name="purple",
-    c50="#faf5ff",
-    c100="#f3e8ff",
-    c200="#e9d5ff",
-    c300="#d8b4fe",
-    c400="#c084fc",
-    c500="#a855f7",
-    c600="#9333ea",
-    c700="#7e22ce",
-    c800="#6b21a8",
-    c900="#581c87",
-    c950="#4c1a73",
-)
-fuchsia = Color(
-    name="fuchsia",
-    c50="#fdf4ff",
-    c100="#fae8ff",
-    c200="#f5d0fe",
-    c300="#f0abfc",
-    c400="#e879f9",
-    c500="#d946ef",
-    c600="#c026d3",
-    c700="#a21caf",
-    c800="#86198f",
-    c900="#701a75",
-    c950="#5e1a66",
-)
-pink = Color(
-    name="pink",
-    c50="#fdf2f8",
-    c100="#fce7f3",
-    c200="#fbcfe8",
-    c300="#f9a8d4",
-    c400="#f472b6",
-    c500="#ec4899",
-    c600="#db2777",
-    c700="#be185d",
-    c800="#9d174d",
-    c900="#831843",
-    c950="#6e1a3d",
-)
-rose = Color(
-    name="rose",
-    c50="#fff1f2",
-    c100="#ffe4e6",
-    c200="#fecdd3",
-    c300="#fda4af",
-    c400="#fb7185",
-    c500="#f43f5e",
-    c600="#e11d48",
-    c700="#be123c",
-    c800="#9f1239",
-    c900="#881337",
-    c950="#771d3a",
-)
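A usage sketch for the palette objects defined above, assuming the module path of the vendored gradio package:

    from gradio.themes.utils import colors

    print(colors.blue.c500)      # '#3b82f6', the mid shade of the blue palette
    print(colors.blue.expand())  # all eleven shades from c50 to c950
    print([c.name for c in colors.Color.all])  # every registered palette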
spaces/Dabs/wordcloud/README.md
DELETED
@@ -1,11 +0,0 @@
----
-title: Wordcloud
-emoji: 📈
-colorFrom: purple
-colorTo: blue
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
spaces/DaleChen/AutoGPT/autogpt/agent/agent_manager.py
DELETED
@@ -1,103 +0,0 @@
-"""Agent manager for managing GPT agents"""
-from __future__ import annotations
-
-from typing import Union
-
-from autogpt.config.config import Singleton
-from autogpt.llm_utils import create_chat_completion
-
-
-class AgentManager(metaclass=Singleton):
-    """Agent manager for managing GPT agents"""
-
-    def __init__(self):
-        self.next_key = 0
-        self.agents = {}  # key, (task, full_message_history, model)
-
-    # Create new GPT agent
-    # TODO: Centralise use of create_chat_completion() to globally enforce token limit
-
-    def create_agent(self, task: str, prompt: str, model: str) -> tuple[int, str]:
-        """Create a new agent and return its key
-
-        Args:
-            task: The task to perform
-            prompt: The prompt to use
-            model: The model to use
-
-        Returns:
-            The key of the new agent
-        """
-        messages = [
-            {"role": "user", "content": prompt},
-        ]
-
-        # Start GPT instance
-        agent_reply = create_chat_completion(
-            model=model,
-            messages=messages,
-        )
-
-        # Update full message history
-        messages.append({"role": "assistant", "content": agent_reply})
-
-        key = self.next_key
-        # This is done instead of len(agents) to make keys unique even if agents
-        # are deleted
-        self.next_key += 1
-
-        self.agents[key] = (task, messages, model)
-
-        return key, agent_reply
-
-    def message_agent(self, key: str | int, message: str) -> str:
-        """Send a message to an agent and return its response
-
-        Args:
-            key: The key of the agent to message
-            message: The message to send to the agent
-
-        Returns:
-            The agent's response
-        """
-        task, messages, model = self.agents[int(key)]
-
-        # Add user message to message history before sending to agent
-        messages.append({"role": "user", "content": message})
-
-        # Start GPT instance
-        agent_reply = create_chat_completion(
-            model=model,
-            messages=messages,
-        )
-
-        # Update full message history
-        messages.append({"role": "assistant", "content": agent_reply})
-
-        return agent_reply
-
-    def list_agents(self) -> list[tuple[str | int, str]]:
-        """Return a list of all agents
-
-        Returns:
-            A list of tuples of the form (key, task)
-        """
-
-        # Return a list of agent keys and their tasks
-        return [(key, task) for key, (task, _, _) in self.agents.items()]
-
-    def delete_agent(self, key: Union[str, int]) -> bool:
-        """Delete an agent from the agent manager
-
-        Args:
-            key: The key of the agent to delete
-
-        Returns:
-            True if successful, False otherwise
-        """
-
-        try:
-            del self.agents[int(key)]
-            return True
-        except KeyError:
-            return False
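A usage sketch for the deleted manager, assuming a configured OpenAI backend; the task label and model name are hypothetical:

    from autogpt.agent.agent_manager import AgentManager

    manager = AgentManager()  # Singleton metaclass: always the same instance
    key, first_reply = manager.create_agent(
        task="summarize",
        prompt="Summarize the following text.",
        model="gpt-3.5-turbo",
    )
    reply = manager.message_agent(key, "Text: ...")
    print(manager.list_agents())  # e.g. [(0, "summarize")]
    manager.delete_agent(key)     # returns True on success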
spaces/Dana19/animal_classifier/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: Animal Classifier
-emoji: 📚
-colorFrom: gray
-colorTo: blue
-sdk: gradio
-sdk_version: 3.4.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/DarwinAnim8or/Mistral-Chat/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: Mistral Chat (fast)
-emoji: 😻
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.45.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/DemoLou/moe-tts/text/ngu_dialect.py
DELETED
@@ -1,30 +0,0 @@
-import re
-import opencc
-
-
-dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou',
-            'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing',
-            'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang',
-            'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan',
-            'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen',
-            'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'}
-
-converters = {}
-
-for dialect in dialects.values():
-    try:
-        converters[dialect] = opencc.OpenCC("chinese_dialect_lexicons/"+dialect)
-    except:
-        pass
-
-
-def ngu_dialect_to_ipa(text, dialect):
-    dialect = dialects[dialect]
-    text = converters[dialect].convert(text).replace('-','').replace('$',' ')
-    text = re.sub(r'[、;:]', ',', text)
-    text = re.sub(r'\s*,\s*', ', ', text)
-    text = re.sub(r'\s*。\s*', '. ', text)
-    text = re.sub(r'\s*?\s*', '? ', text)
-    text = re.sub(r'\s*!\s*', '! ', text)
-    text = re.sub(r'\s*$', '', text)
-    return text
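A usage sketch; this assumes the OpenCC lexicon files under chinese_dialect_lexicons/ are present and the input string is a placeholder:

    from text.ngu_dialect import ngu_dialect_to_ipa

    # 'SZ' selects the Suzhou lexicon from the dialects table above.
    print(ngu_dialect_to_ipa('谢谢', 'SZ'))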
spaces/DragGan/DragGan-Inversion/stylegan_human/alignment.py
DELETED
@@ -1,233 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-
-import os
-import argparse
-import numpy as np
-import torch
-from torch.utils.data import DataLoader
-from torchvision.transforms import transforms
-from utils.ImagesDataset import ImagesDataset
-
-import cv2
-import time
-import copy
-import imutils
-
-# for openpose body keypoint detector : # (src:https://github.com/Hzzone/pytorch-openpose)
-from openpose.src import util
-from openpose.src.body import Body
-
-# for paddlepaddle human segmentation : #(src: https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.5/contrib/PP-HumanSeg/)
-from PP_HumanSeg.deploy.infer import Predictor as PP_HumenSeg_Predictor
-
-import math
-
-
-def angle_between_points(p0, p1, p2):
-    if p0[1] == -1 or p1[1] == -1 or p2[1] == -1:
-        return -1
-    a = (p1[0]-p0[0])**2 + (p1[1]-p0[1])**2
-    b = (p1[0]-p2[0])**2 + (p1[1]-p2[1])**2
-    c = (p2[0]-p0[0])**2 + (p2[1]-p0[1])**2
-    if a * b == 0:
-        return -1
-    return math.acos((a+b-c) / math.sqrt(4*a*b)) * 180 / math.pi
-
-
-def crop_img_with_padding(img, keypoints, rect):
-    person_xmin, person_xmax, ymin, ymax = rect
-    img_h, img_w, _ = img.shape  # find body center using keypoints
-    middle_shoulder_x = keypoints[1][0]
-    middle_hip_x = (keypoints[8][0] + keypoints[11][0]) // 2
-    mid_x = (middle_hip_x + middle_shoulder_x) // 2
-    mid_y = (ymin + ymax) // 2
-    # find which side (l or r) is further than center x, use the further side
-    if abs(mid_x-person_xmin) > abs(person_xmax-mid_x):  # left further
-        xmin = person_xmin
-        xmax = mid_x + (mid_x-person_xmin)
-    else:
-        # may be negtive
-        # in this case, the script won't output any image, leave the case like this
-        # since we don't want to pad human body
-        xmin = mid_x - (person_xmax-mid_x)
-        xmax = person_xmax
-
-    w = xmax - xmin
-    h = ymax - ymin
-    # pad rectangle to w:h = 1:2 ## calculate desired border length
-    if h / w >= 2:  # pad horizontally
-        target_w = h // 2
-        xmin_prime = int(mid_x - target_w / 2)
-        xmax_prime = int(mid_x + target_w / 2)
-        if xmin_prime < 0:
-            pad_left = abs(xmin_prime)  # - xmin
-            xmin = 0
-        else:
-            pad_left = 0
-            xmin = xmin_prime
-        if xmax_prime > img_w:
-            pad_right = xmax_prime - img_w
-            xmax = img_w
-        else:
-            pad_right = 0
-            xmax = xmax_prime
-
-        cropped_img = img[int(ymin):int(ymax), int(xmin):int(xmax)]
-        im_pad = cv2.copyMakeBorder(cropped_img, 0, 0, int(
-            pad_left), int(pad_right), cv2.BORDER_REPLICATE)
-    else:  # pad vertically
-        target_h = w * 2
-        ymin_prime = mid_y - (target_h / 2)
-        ymax_prime = mid_y + (target_h / 2)
-        if ymin_prime < 0:
-            pad_up = abs(ymin_prime)  # - ymin
-            ymin = 0
-        else:
-            pad_up = 0
-            ymin = ymin_prime
-        if ymax_prime > img_h:
-            pad_down = ymax_prime - img_h
-            ymax = img_h
-        else:
-            pad_down = 0
-            ymax = ymax_prime
-        print(ymin, ymax, xmin, xmax, img.shape)
-
-        cropped_img = img[int(ymin):int(ymax), int(xmin):int(xmax)]
-        im_pad = cv2.copyMakeBorder(cropped_img, int(pad_up), int(pad_down), 0,
-                                    0, cv2.BORDER_REPLICATE)
-    result = cv2.resize(im_pad, (512, 1024), interpolation=cv2.INTER_AREA)
-    return result
-
-
-def run(args):
-    os.makedirs(args.output_folder, exist_ok=True)
-    dataset = ImagesDataset(
-        args.image_folder, transforms.Compose([transforms.ToTensor()]))
-    dataloader = DataLoader(dataset, batch_size=1, shuffle=False)
-
-    body_estimation = Body('openpose/model/body_pose_model.pth')
-
-    total = len(dataloader)
-    print('Num of dataloader : ', total)
-    os.makedirs(f'{args.output_folder}', exist_ok=True)
-    # os.makedirs(f'{args.output_folder}/middle_result', exist_ok=True)
-
-    # initialzide HumenSeg
-    human_seg_args = {}
-    human_seg_args['cfg'] = 'PP_HumanSeg/export_model/deeplabv3p_resnet50_os8_humanseg_512x512_100k_with_softmax/deploy.yaml'
-    human_seg_args['input_shape'] = [1024, 512]
-    human_seg_args['save_dir'] = args.output_folder
-    human_seg_args['soft_predict'] = False
-    human_seg_args['use_gpu'] = True
-    human_seg_args['test_speed'] = False
-    human_seg_args['use_optic_flow'] = False
-    human_seg_args['add_argmax'] = True
-    human_seg_args = argparse.Namespace(**human_seg_args)
-    human_seg = PP_HumenSeg_Predictor(human_seg_args)
-
-    from tqdm import tqdm
-    for fname, image in tqdm(dataloader):
-        # try:
-        # tensor to numpy image
-        fname = fname[0]
-        print(f'Processing \'{fname}\'.')
-
-        image = (image.permute(0, 2, 3, 1) * 255).clamp(0, 255)
-        image = image.squeeze(0).numpy()  # --> tensor to numpy, (H,W,C)
-        # avoid super high res img
-        if image.shape[0] >= 2000:  # height ### for shein image
-            ratio = image.shape[0]/1200  # height
-            dim = (int(image.shape[1]/ratio), 1200)  # (width, height)
-            image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
-        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
-
-        # create segmentation
-        # mybg = cv2.imread('mybg.png')
-        comb, segmentation, bg, ori_img = human_seg.run(image, None)  # mybg)
-        # cv2.imwrite('comb.png',comb)  # [0,255]
-        # cv2.imwrite('alpha.png',segmentation*255)  # segmentation [0,1] --> [0.255]
-        # cv2.imwrite('bg.png',bg)  #[0,255]
-        # cv2.imwrite('ori_img.png',ori_img)  # [0,255]
-
-        masks_np = (segmentation * 255)  # .byte().cpu().numpy() #1024,512,1
-        mask0_np = masks_np[:, :, 0].astype(np.uint8)  # [0, :, :]
-        contours = cv2.findContours(
-            mask0_np, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
-        cnts = imutils.grab_contours(contours)
-        c = max(cnts, key=cv2.contourArea)
-        extTop = tuple(c[c[:, :, 1].argmin()][0])
-        extBot = tuple(c[c[:, :, 1].argmax()][0])
-        extBot = list(extBot)
-        extTop = list(extTop)
-        pad_range = int((extBot[1]-extTop[1])*0.05)
-        # seg mask already reaches to the edge
-        if (int(extTop[1]) <= 5 and int(extTop[1]) > 0) and (comb.shape[0] > int(extBot[1]) and int(extBot[1]) >= comb.shape[0]-5):
-            # pad with pure white, top 100 px, bottom 100 px
-            comb = cv2.copyMakeBorder(
-                comb, pad_range+5, pad_range+5, 0, 0, cv2.BORDER_CONSTANT, value=[255, 255, 255])
-        elif int(extTop[1]) <= 0 or int(extBot[1]) >= comb.shape[0]:
-            print('PAD: body out of boundary', fname)  # should not happened
-            return {}
-        else:
-            # 105 instead of 100: give some extra space
-            comb = cv2.copyMakeBorder(
-                comb, pad_range+5, pad_range+5, 0, 0, cv2.BORDER_REPLICATE)
-            extBot[1] = extBot[1] + pad_range+5
-            extTop[1] = extTop[1] + pad_range+5
-
-        extLeft = tuple(c[c[:, :, 0].argmin()][0])
-        extRight = tuple(c[c[:, :, 0].argmax()][0])
-        extLeft = list(extLeft)
-        extRight = list(extRight)
-        person_ymin = int(extTop[1])-pad_range  # 100
-        person_ymax = int(extBot[1])+pad_range  # 100 #height
-        if person_ymin < 0 or person_ymax > comb.shape[0]:  # out of range
-            return {}
-        person_xmin = int(extLeft[0])
-        person_xmax = int(extRight[0])
-        rect = [person_xmin, person_xmax, person_ymin, person_ymax]
-        # recimg = copy.deepcopy(comb)
-        # cv2.rectangle(recimg,(person_xmin,person_ymin),(person_xmax,person_ymax),(0,255,0),2)
-        # cv2.imwrite(f'{args.output_folder}/middle_result/{fname}_rec.png',recimg)
-
-        # detect keypoints
-        keypoints, subset = body_estimation(comb)
-        # print(keypoints, subset, len(subset))
-        if len(subset) != 1 or (len(subset) == 1 and subset[0][-1] < 15):
-            print(
-                f'Processing \'{fname}\'. Please import image contains one person only. Also can check segmentation mask. ')
-            continue
-
-        # canvas = copy.deepcopy(comb)
-        # canvas = util.draw_bodypose(canvas, keypoints, subset, show_number=True)
-        # cv2.imwrite(f'{args.output_folder}/middle_result/{fname}_keypoints.png',canvas)
-
-        comb = crop_img_with_padding(comb, keypoints, rect)
-
-        cv2.imwrite(f'{args.output_folder}/{fname}.png', comb)
-        print(f' -- Finished processing \'{fname}\'. --')
-        # except:
-        # print(f'Processing \'{fname}\'. Not satisfied the alignment strategy.')
-
-
-if __name__ == '__main__':
-    torch.backends.cudnn.benchmark = True
-    torch.backends.cudnn.deterministic = False
-
-    t1 = time.time()
-    arg_formatter = argparse.ArgumentDefaultsHelpFormatter
-    description = 'StyleGAN-Human data process'
-    parser = argparse.ArgumentParser(formatter_class=arg_formatter,
-                                     description=description)
-    parser.add_argument('--image-folder', type=str, dest='image_folder')
-    parser.add_argument('--output-folder',
-                        dest='output_folder', default='results', type=str)
-    # parser.add_argument('--cfg', dest='cfg for segmentation', default='PP_HumanSeg/export_model/ppseg_lite_portrait_398x224_with_softmax/deploy.yaml', type=str)
-
-    print('parsing arguments')
-    cmd_args = parser.parse_args()
-    run(cmd_args)
-
-    print('total time elapsed: ', str(time.time() - t1))
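The deleted script can also be driven programmatically through run(); a sketch with hypothetical folder names, assuming the script is importable as a module and the OpenPose and PP-HumanSeg model files referenced above are in place:

    import argparse
    from alignment import run

    args = argparse.Namespace(image_folder='raw_images',   # hypothetical input dir
                              output_folder='results')
    run(args)  # writes 512x1024 aligned crops into results/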
spaces/ECCV2022/bytetrack/tutorials/ctracker/mot_online/kalman_filter.py
DELETED
@@ -1,269 +0,0 @@
# vim: expandtab:ts=4:sw=4
import numpy as np
import scipy.linalg

"""
Table for the 0.95 quantile of the chi-square distribution with N degrees of
freedom (contains values for N=1, ..., 9). Taken from MATLAB/Octave's chi2inv
function and used as Mahalanobis gating threshold.
"""
chi2inv95 = {
    1: 3.8415,
    2: 5.9915,
    3: 7.8147,
    4: 9.4877,
    5: 11.070,
    6: 12.592,
    7: 14.067,
    8: 15.507,
    9: 16.919}


class KalmanFilter(object):
    """
    A simple Kalman filter for tracking bounding boxes in image space.

    The 8-dimensional state space

        x, y, a, h, vx, vy, va, vh

    contains the bounding box center position (x, y), aspect ratio a,
    height h, and their respective velocities.

    Object motion follows a constant velocity model. The bounding box
    location (x, y, a, h) is taken as a direct observation of the state
    space (linear observation model).
    """

    def __init__(self):
        ndim, dt = 4, 1.

        # Create Kalman filter model matrices.
        self._motion_mat = np.eye(2 * ndim, 2 * ndim)
        for i in range(ndim):
            self._motion_mat[i, ndim + i] = dt
        self._update_mat = np.eye(ndim, 2 * ndim)

        # Motion and observation uncertainty are chosen relative to the
        # current state estimate. These weights control the amount of
        # uncertainty in the model. This is a bit hacky.
        self._std_weight_position = 1. / 20
        self._std_weight_velocity = 1. / 160

    def initiate(self, measurement):
        """Create track from unassociated measurement.

        Parameters
        ----------
        measurement : ndarray
            Bounding box coordinates (x, y, a, h) with center position (x, y),
            aspect ratio a, and height h.

        Returns
        -------
        (ndarray, ndarray)
            Returns the mean vector (8 dimensional) and covariance matrix
            (8x8 dimensional) of the new track. Unobserved velocities are
            initialized to 0 mean.
        """
        mean_pos = measurement
        mean_vel = np.zeros_like(mean_pos)
        mean = np.r_[mean_pos, mean_vel]

        std = [
            2 * self._std_weight_position * measurement[3],
            2 * self._std_weight_position * measurement[3],
            1e-2,
            2 * self._std_weight_position * measurement[3],
            10 * self._std_weight_velocity * measurement[3],
            10 * self._std_weight_velocity * measurement[3],
            1e-5,
            10 * self._std_weight_velocity * measurement[3]]
        covariance = np.diag(np.square(std))
        return mean, covariance

    def predict(self, mean, covariance):
        """Run Kalman filter prediction step.

        Parameters
        ----------
        mean : ndarray
            The 8 dimensional mean vector of the object state at the previous
            time step.
        covariance : ndarray
            The 8x8 dimensional covariance matrix of the object state at the
            previous time step.

        Returns
        -------
        (ndarray, ndarray)
            Returns the mean vector and covariance matrix of the predicted
            state. Unobserved velocities are initialized to 0 mean.
        """
        std_pos = [
            self._std_weight_position * mean[3],
            self._std_weight_position * mean[3],
            1e-2,
            self._std_weight_position * mean[3]]
        std_vel = [
            self._std_weight_velocity * mean[3],
            self._std_weight_velocity * mean[3],
            1e-5,
            self._std_weight_velocity * mean[3]]
        motion_cov = np.diag(np.square(np.r_[std_pos, std_vel]))

        # mean = np.dot(self._motion_mat, mean)
        mean = np.dot(mean, self._motion_mat.T)
        covariance = np.linalg.multi_dot((
            self._motion_mat, covariance, self._motion_mat.T)) + motion_cov

        return mean, covariance

    def project(self, mean, covariance):
        """Project state distribution to measurement space.

        Parameters
        ----------
        mean : ndarray
            The state's mean vector (8 dimensional array).
        covariance : ndarray
            The state's covariance matrix (8x8 dimensional).

        Returns
        -------
        (ndarray, ndarray)
            Returns the projected mean and covariance matrix of the given
            state estimate.
        """
        std = [
            self._std_weight_position * mean[3],
            self._std_weight_position * mean[3],
            1e-1,
            self._std_weight_position * mean[3]]
        innovation_cov = np.diag(np.square(std))

        mean = np.dot(self._update_mat, mean)
        covariance = np.linalg.multi_dot((
            self._update_mat, covariance, self._update_mat.T))
        return mean, covariance + innovation_cov

    def multi_predict(self, mean, covariance):
        """Run Kalman filter prediction step (vectorized version).

        Parameters
        ----------
        mean : ndarray
            The Nx8 dimensional mean matrix of the object states at the
            previous time step.
        covariance : ndarray
            The Nx8x8 dimensional covariance matrices of the object states
            at the previous time step.

        Returns
        -------
        (ndarray, ndarray)
            Returns the mean vector and covariance matrix of the predicted
            state. Unobserved velocities are initialized to 0 mean.
        """
        std_pos = [
            self._std_weight_position * mean[:, 3],
            self._std_weight_position * mean[:, 3],
            1e-2 * np.ones_like(mean[:, 3]),
            self._std_weight_position * mean[:, 3]]
        std_vel = [
            self._std_weight_velocity * mean[:, 3],
            self._std_weight_velocity * mean[:, 3],
            1e-5 * np.ones_like(mean[:, 3]),
            self._std_weight_velocity * mean[:, 3]]
        sqr = np.square(np.r_[std_pos, std_vel]).T

        motion_cov = []
        for i in range(len(mean)):
            motion_cov.append(np.diag(sqr[i]))
        motion_cov = np.asarray(motion_cov)

        mean = np.dot(mean, self._motion_mat.T)
        left = np.dot(self._motion_mat, covariance).transpose((1, 0, 2))
        covariance = np.dot(left, self._motion_mat.T) + motion_cov

        return mean, covariance

    def update(self, mean, covariance, measurement):
        """Run Kalman filter correction step.

        Parameters
        ----------
        mean : ndarray
            The predicted state's mean vector (8 dimensional).
        covariance : ndarray
            The state's covariance matrix (8x8 dimensional).
        measurement : ndarray
            The 4 dimensional measurement vector (x, y, a, h), where (x, y)
            is the center position, a the aspect ratio, and h the height of
            the bounding box.

        Returns
        -------
        (ndarray, ndarray)
            Returns the measurement-corrected state distribution.
        """
        projected_mean, projected_cov = self.project(mean, covariance)

        chol_factor, lower = scipy.linalg.cho_factor(
            projected_cov, lower=True, check_finite=False)
        kalman_gain = scipy.linalg.cho_solve(
            (chol_factor, lower), np.dot(covariance, self._update_mat.T).T,
            check_finite=False).T
        innovation = measurement - projected_mean

        new_mean = mean + np.dot(innovation, kalman_gain.T)
        new_covariance = covariance - np.linalg.multi_dot((
            kalman_gain, projected_cov, kalman_gain.T))
        return new_mean, new_covariance

    def gating_distance(self, mean, covariance, measurements,
                        only_position=False, metric='maha'):
        """Compute gating distance between state distribution and measurements.

        A suitable distance threshold can be obtained from `chi2inv95`. If
        `only_position` is False, the chi-square distribution has 4 degrees
        of freedom, otherwise 2.

        Parameters
        ----------
        mean : ndarray
            Mean vector over the state distribution (8 dimensional).
        covariance : ndarray
            Covariance of the state distribution (8x8 dimensional).
        measurements : ndarray
            An Nx4 dimensional matrix of N measurements, each in
            format (x, y, a, h) where (x, y) is the bounding box center
            position, a the aspect ratio, and h the height.
        only_position : Optional[bool]
            If True, distance computation is done with respect to the
            bounding box center position only.

        Returns
        -------
        ndarray
            Returns an array of length N, where the i-th element contains the
            squared Mahalanobis distance between (mean, covariance) and
            `measurements[i]`.
        """
        mean, covariance = self.project(mean, covariance)
        if only_position:
            mean, covariance = mean[:2], covariance[:2, :2]
            measurements = measurements[:, :2]

        d = measurements - mean
        if metric == 'gaussian':
            return np.sum(d * d, axis=1)
        elif metric == 'maha':
            cholesky_factor = np.linalg.cholesky(covariance)
            z = scipy.linalg.solve_triangular(
                cholesky_factor, d.T, lower=True, check_finite=False,
                overwrite_b=True)
            squared_maha = np.sum(z * z, axis=0)
            return squared_maha
        else:
            raise ValueError('invalid distance metric')
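For orientation, a minimal sketch of how this filter is typically driven from a tracker loop: the detection values below are made up for illustration, and only the `KalmanFilter` class and `chi2inv95` table from the file above are assumed.

import numpy as np

# Hypothetical driver for the KalmanFilter above: one track over two frames.
kf = KalmanFilter()

# First detection: center (50, 60), aspect ratio 0.5, height 80 (made-up numbers).
mean, covariance = kf.initiate(np.array([50., 60., 0.5, 80.]))

# Next frame: propagate the constant-velocity model, gate the new detection
# with the chi-square threshold, and correct the state if it passes.
mean, covariance = kf.predict(mean, covariance)
detection = np.array([52., 61., 0.5, 81.])
gate = kf.gating_distance(mean, covariance, detection[np.newaxis, :])
if gate[0] <= chi2inv95[4]:  # 4 degrees of freedom for a full (x, y, a, h) gate
    mean, covariance = kf.update(mean, covariance, detection)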
spaces/ELam/text_generator/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: Text Generator
emoji: 💻
colorFrom: purple
colorTo: blue
sdk: gradio
sdk_version: 3.11.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_datasets/academic_test.py
DELETED
@@ -1,57 +0,0 @@
# Text Recognition Testing set, including:
# Regular Datasets: IIIT5K, SVT, IC13
# Irregular Datasets: IC15, SVTP, CT80

test_root = 'data/mixture'

test_img_prefix1 = f'{test_root}/IIIT5K/'
test_img_prefix2 = f'{test_root}/svt/'
test_img_prefix3 = f'{test_root}/icdar_2013/'
test_img_prefix4 = f'{test_root}/icdar_2015/'
test_img_prefix5 = f'{test_root}/svtp/'
test_img_prefix6 = f'{test_root}/ct80/'

test_ann_file1 = f'{test_root}/IIIT5K/test_label.txt'
test_ann_file2 = f'{test_root}/svt/test_label.txt'
test_ann_file3 = f'{test_root}/icdar_2013/test_label_1015.txt'
test_ann_file4 = f'{test_root}/icdar_2015/test_label.txt'
test_ann_file5 = f'{test_root}/svtp/test_label.txt'
test_ann_file6 = f'{test_root}/ct80/test_label.txt'

test1 = dict(
    type='OCRDataset',
    img_prefix=test_img_prefix1,
    ann_file=test_ann_file1,
    loader=dict(
        type='AnnFileLoader',
        repeat=1,
        file_format='txt',
        parser=dict(
            type='LineStrParser',
            keys=['filename', 'text'],
            keys_idx=[0, 1],
            separator=' ')),
    pipeline=None,
    test_mode=True)

test2 = {key: value for key, value in test1.items()}
test2['img_prefix'] = test_img_prefix2
test2['ann_file'] = test_ann_file2

test3 = {key: value for key, value in test1.items()}
test3['img_prefix'] = test_img_prefix3
test3['ann_file'] = test_ann_file3

test4 = {key: value for key, value in test1.items()}
test4['img_prefix'] = test_img_prefix4
test4['ann_file'] = test_ann_file4

test5 = {key: value for key, value in test1.items()}
test5['img_prefix'] = test_img_prefix5
test5['ann_file'] = test_ann_file5

test6 = {key: value for key, value in test1.items()}
test6['img_prefix'] = test_img_prefix6
test6['ann_file'] = test_ann_file6

test_list = [test1, test2, test3, test4, test5, test6]
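Worth noting about the idiom above: `{key: value for key, value in test1.items()}` is a shallow copy, so `test2` through `test6` share the nested `loader` dict with `test1`; the config stays consistent because only top-level keys are reassigned. A minimal, self-contained sketch of that behavior, with hypothetical paths that are not part of the config:

# Standalone illustration of the shallow-copy idiom (hypothetical values).
# dict(test1) or test1.copy() would be equivalent to the comprehension.
base = dict(
    type='OCRDataset',
    img_prefix='data/mixture/base/',
    ann_file='data/mixture/base/test_label.txt',
    loader=dict(type='AnnFileLoader', file_format='txt'))

variant = {key: value for key, value in base.items()}
variant['img_prefix'] = 'data/mixture/other/'
variant['ann_file'] = 'data/mixture/other/test_label.txt'

assert variant['loader'] is base['loader']          # nested dicts are shared
assert variant['img_prefix'] != base['img_prefix']  # top-level keys are independent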