parquet-converter committed on
Commit 3038341 · 1 Parent(s): 2250aa7

Update parquet files (step 40 of 121)

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anne (dvd 12) S01 Ep 123 124 Fix.md +0 -19
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download The Sims 2 for PC and Create Your Own Virtual World.md +0 -17
  3. spaces/1pelhydcardo/ChatGPT-prompt-generator/Avengers-Age-Of-Ultron-Full-Movie-Hd-Download-Free-EXCLUSIVE.md +0 -94
  4. spaces/1phancelerku/anime-remove-background/Descarga Clash Royale Hack APK con todas las cartas desbloqueadas y gemas ilimitadas.md +0 -69
  5. spaces/1phancelerku/anime-remove-background/Download Amp Install Google Drive For Desktop HOT!.md +0 -84
  6. spaces/1phancelerku/anime-remove-background/Download Spider Fighter Mod APK and Become the Ultimate Hero.md +0 -70
  7. spaces/2kaara/oreo/README.md +0 -10
  8. spaces/34we12er/newbing/README.md +0 -12
  9. spaces/AI-ZTH-03-23/3.HTML5-Aframe-3dMap-Flight/index.html +0 -46
  10. spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/vis_utils.py +0 -66
  11. spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/tts/syntaspeech/syntactic_graph_encoder.py +0 -193
  12. spaces/AIZerotoHero-Health4All/01-Gradio-Speech2Text2Speech-AIPipeline/app.py +0 -160
  13. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov6/yolov6_m_syncbn_fast_8xb32-300e_coco.py +0 -62
  14. spaces/Aabbhishekk/MistralQnA/README.md +0 -12
  15. spaces/Abs6187/AI_Chatbot/app.py +0 -24
  16. spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Mishalsgpt.py +0 -23
  17. spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/V50.py +0 -67
  18. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/csvtoarray.d.ts +0 -2
  19. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/custom/Custom.d.ts +0 -48
  20. spaces/AlexZou/Deploy_Restoration/net/block.py +0 -477
  21. spaces/Ameaou/academic-chatgpt3.1/crazy_functions/Latex全文润色.py +0 -175
  22. spaces/Andy1621/uniformer_image_detection/configs/ld/ld_r34_gflv1_r101_fpn_coco_1x.py +0 -19
  23. spaces/Anilegna/Colour-Personallity/app.py +0 -172
  24. spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/utils.py +0 -132
  25. spaces/Anonymous-sub/Rerender/ControlNet/annotator/openpose/hand.py +0 -86
  26. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/utils/logger.py +0 -27
  27. spaces/AsakuraMizu/moe-tts/attentions.py +0 -300
  28. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/msgpack/fallback.py +0 -1010
  29. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/__init__.py +0 -182
  30. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/diagram/__init__.py +0 -642
  31. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/config/_validate_pyproject/__init__.py +0 -34
  32. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/api.py +0 -235
  33. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/deploy/export_model.py +0 -235
  34. spaces/BBrother/Pandora/Dockerfile +0 -25
  35. spaces/BartPoint/VoiceChange_Beta/README.md +0 -12
  36. spaces/Beasto/Day_to_Night_Cyclegan/app.py +0 -49
  37. spaces/Benson/text-generation/Examples/Bb-8 Sphero App Descargar Ios.md +0 -73
  38. spaces/Benson/text-generation/Examples/Coche Real Aparcamiento Multijugador Apk Android Oyun Club.md +0 -59
  39. spaces/Benson/text-generation/Examples/Descargar Apk Garena Gratis Fuego Booyah Da.md +0 -115
  40. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/vcs/git.py +0 -526
  41. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/errors.py +0 -34
  42. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tomli/_parser.py +0 -691
  43. spaces/BigSalmon/Paraphrase/app.py +0 -41
  44. spaces/CVPR/Bamboo_ViT-B16_demo/app.py +0 -105
  45. spaces/CVPR/LIVE/pybind11/.github/ISSUE_TEMPLATE/feature-request.md +0 -16
  46. spaces/CVPR/WALT/mmdet/models/backbones/resnet.py +0 -663
  47. spaces/CVPR/regionclip-demo/detectron2/modeling/matcher.py +0 -126
  48. spaces/ChandraMohanNayal/AutoGPT/autogpt/agent/__init__.py +0 -4
  49. spaces/Cicooo/vits-uma-genshin-honkai/attentions.py +0 -300
  50. spaces/CikeyQI/meme-api/meme_generator/memes/love_you/__init__.py +0 -26
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anne (dvd 12) S01 Ep 123 124 Fix.md DELETED
@@ -1,19 +0,0 @@
-
- <h1>Anne with an E: A Review of Season 1 Episodes 123 and 124</h1>
- <p>Anne with an E is a Canadian television series based on the classic novel Anne of Green Gables by Lucy Maud Montgomery. The series follows the adventures of Anne Shirley, a spirited and imaginative orphan girl who finds a new home with an elderly brother and sister on Prince Edward Island.</p>
- <h2>anne (dvd 12) s01 ep 123 124</h2><br /><p><b><b>Download</b> &#9733;&#9733;&#9733; <a href="https://byltly.com/2uKwN5">https://byltly.com/2uKwN5</a></b></p><br /><br />
- <p>In this article, I will review the last two episodes of season 1, which are available on DVD 12. These episodes are titled "Remorse Is the Poison of Life" and "Wherever You Are Is My Home".</p>
- <h2>Remorse Is the Poison of Life</h2>
- <p>This episode deals with the aftermath of a tragic accident that leaves one of Anne's friends in a coma. Anne blames herself for what happened and struggles with guilt and remorse. She also faces the wrath of the town's gossip, Mrs. Lynde, who accuses her of being a bad influence.</p>
- <p>Meanwhile, Marilla and Matthew face a financial crisis that threatens to take away their farm. They try to find a way to save Green Gables, but their options are limited. Marilla also receives a visit from an old flame, who stirs up some memories of her past.</p>
- <p>This episode is emotional and dramatic, as it shows how Anne and the Cuthberts cope with adversity and loss. It also reveals some secrets about Marilla's history and her feelings for Matthew. The performances of Amybeth McNulty as Anne, Geraldine James as Marilla, and R.H. Thomson as Matthew are superb and moving.</p>
- <p></p>
- <h2>Wherever You Are Is My Home</h2>
- <p>This episode is the season finale, and it wraps up some of the main conflicts and challenges that Anne and the Cuthberts faced throughout the season. Anne finally gets to attend the Christmas ball at the Barrys' house, where she hopes to impress Gilbert Blythe, her academic rival and crush. However, things do not go as planned, and Anne has to deal with some unexpected surprises.</p>
- <p>On the other hand, Marilla and Matthew manage to find a solution to their financial troubles, thanks to a generous offer from an unexpected source. They also decide to make Anne's stay at Green Gables permanent, by officially adopting her as their daughter.</p>
- <p>This episode is heartwarming and satisfying, as it shows how Anne's dreams come true and how she finds a true family with the Cuthberts. It also sets up some new possibilities for season 2, such as Anne's friendship with Gilbert, her education at Queen's College, and her exploration of her identity and heritage.</p>
- <h3>Conclusion</h3>
- <p>Anne with an E is a wonderful adaptation of a beloved classic, that brings new life and depth to the characters and the story. The series is beautifully shot, well-written, and superbly acted. It tackles some serious issues such as orphanhood, trauma, prejudice, and identity, while also celebrating the joys of imagination, friendship, and love.</p>
- <p>The last two episodes of season 1 are a testament to the quality and charm of this series. They are emotional, engaging, and uplifting. They leave the viewers wanting more of Anne's adventures and growth.</p> cec2833e83<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download The Sims 2 for PC and Create Your Own Virtual World.md DELETED
@@ -1,17 +0,0 @@
-
- <h1>The Sims 2 Download for PC</h1>
- <p>The Sims 2 is a popular life simulation game that was released in 2004 by Electronic Arts. It is the sequel to The Sims, and it allows players to create and control their own virtual characters, called Sims, in various activities and scenarios. The game has multiple expansion and stuff packs that add new features, content, and gameplay options to the base game.</p>
- <p>If you want to download The Sims 2 for PC, you have a few options. One of them is to get the original game from Old Games Download, which provides a link to download a zip file containing the game ISO and a serial key. You will need to extract the file, mount the ISO, run the setup.exe, and copy the Sims2.exe file from the NoCD folder to the game folder. You will also need to install DirectX from the Redist folder and run the game as administrator.</p>
- <h2>the sims 2 download for pc</h2><br /><p><b><b>Download File</b> &raquo;&raquo;&raquo; <a href="https://byltly.com/2uKwhh">https://byltly.com/2uKwhh</a></b></p><br /><br />
- <p>Another option is to get The Sims 2: Ultimate Collection from Old Games Download, which includes the original game along with all expansion and stuff packs. This is a repack of the game that comes in a rar file that you need to extract and run the Setup.exe. You will also need to install DirectX from the Redist folder and run the game as administrator. The first time you open the game, it may take some time to start and you may see a black screen for a while. You will also need to change the shadows settings from high to low or medium to avoid black squares under your Sims.</p>
- <p>A third option is to get The Sims 2: Ultimate Collection from Origin, which is an online platform that sells and distributes digital games. You will need to create an account on Origin and download their client software. Then you can locate The Sims 2 Ultimate Collection in your Origin Game Library and click on download. You will then be presented with a series of pop-up boxes that will guide you through the installation process.</p>
- <p>Whichever option you choose, you will be able to enjoy The Sims 2 on your PC and create your own virtual world with endless possibilities.</p><h2>The Sims 2 Tips and Tricks</h2>
- <p>Once you have downloaded The Sims 2 for PC, you may want to learn some tips and tricks to make your gameplay more enjoyable and efficient. Here are some of them:</p>
- <ul>
- <li>When you want to go to a community lot, you can use the phone or the computer to call a taxi or a carpool. This will save you time and money, as you won't have to wait for the car to arrive or pay for the gas. You can also invite other Sims to join you by clicking on their portraits in the relationship panel.</li>
- <li>If you want to change the topic of conversation with another Sim, you can use the "Change Topic" interaction. This will allow you to choose from a list of topics that are relevant to the current situation, such as hobbies, interests, weather, etc. This can help you avoid boring or awkward conversations and improve your relationship with other Sims.</li>
- <li>If you have the Seasons expansion pack installed, you can use various features to make your Sims' lives more realistic and fun. For example, you can use the weather machine to change the weather at any time, or you can use the gardening skill to grow your own fruits and vegetables. You can also enjoy seasonal activities such as snowball fights, ice skating, fishing, etc.</li>
- </ul>
- <p>These are just some of the tips and tricks that you can use in The Sims 2. There are many more that you can discover by exploring the game and trying different things. Have fun!</p> ddb901b051<br />
- <br />
- <br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/Avengers-Age-Of-Ultron-Full-Movie-Hd-Download-Free-EXCLUSIVE.md DELETED
@@ -1,94 +0,0 @@
- ## Avengers: Age Of Ultron Full Movie Hd Download Free
-
-
-
-
-
- ![Avengers: Age Of Ultron Full Movie Hd Download Free ((EXCLUSIVE))](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTBHIeSh9UhrWrkROv7I1Kjyj_vkDXfdCbviu8eCZAGzhNgrZgqtFCNTCPM)
-
-
-
-
-
- **Download ○ [https://kneedacexbrew.blogspot.com/?d=2txjhR](https://kneedacexbrew.blogspot.com/?d=2txjhR)**
-
-
-
-
-
-
-
-
-
-
-
-
-
- # How to Watch Avengers: Age Of Ultron Full Movie HD for Free Online
-
-
-
- If you are a fan of Marvel's superhero movies, you might be wondering how to watch Avengers: Age Of Ultron full movie HD for free online. Avengers: Age Of Ultron is the sequel to the 2012 blockbuster The Avengers, and it features the return of Iron Man, Captain America, Thor, Hulk, Black Widow, Hawkeye, and Nick Fury as they face a new threat from a rogue artificial intelligence named Ultron.
-
-
-
- Avengers: Age Of Ultron was released in 2015 and it was a huge success at the box office, grossing over $1.4 billion worldwide. It received positive reviews from critics and audiences alike, who praised its action, humor, visual effects, and performances. The movie also introduced new characters such as Scarlet Witch, Quicksilver, Vision, and Ant-Man to the Marvel Cinematic Universe.
-
-
-
- However, if you missed the chance to watch Avengers: Age Of Ultron in theaters or on streaming platforms, you might be looking for ways to watch it for free online. There are many websites that claim to offer Avengers: Age Of Ultron full movie HD download free, but most of them are either illegal, unsafe, or low-quality. In this article, we will show you how to watch Avengers: Age Of Ultron full movie HD for free online legally and safely.
-
-
-
- ## The Best Way to Watch Avengers: Age Of Ultron Full Movie HD for Free Online
-
-
-
- The best way to watch Avengers: Age Of Ultron full movie HD for free online is to use a reputable and reliable website that offers legal and high-quality streaming services. One of such websites is M4uHD[^1^], which is a popular and trusted site that provides thousands of movies and TV shows for free.
-
-
-
- M4uHD has Avengers: Age Of Ultron full movie HD available for streaming on its website. You can watch it without any registration or subscription. All you need is a stable internet connection and a compatible device such as a computer, smartphone, tablet, or smart TV. You can also choose from different video qualities ranging from 360p to 1080p.
-
-
-
- M4uHD is not only free but also safe and secure. It does not host any illegal or pirated content on its servers. It only links to third-party sources that host the movies and TV shows. It also does not contain any malware or viruses that can harm your device or data. It respects your privacy and does not collect any personal information from you.
-
-
-
- To watch Avengers: Age Of Ultron full movie HD for free online on M4uHD, follow these simple steps:
-
-
-
- 1. Go to M4uHD's website by clicking [here](https://m4uhd.tv/watch-movie-avengers-age-of-ultron-2015-4932.html).
-
- 2. Search for Avengers: Age Of Ultron in the search bar or browse through the categories.
-
- 3. Click on the movie poster or title to open the movie page.
-
- 4. Choose a video quality and a server from the list of options.
-
- 5. Click on the play button and enjoy watching Avengers: Age Of Ultron full movie HD for free online.
-
-
-
- ## Other Ways to Watch Avengers: Age Of Ultron Full Movie HD for Free Online
-
-
-
- If you are looking for other ways to watch Avengers: Age Of Ultron full movie HD for free online, you can also try some of these alternatives:
-
-
-
- - MoviesFlix[^2^]: MoviesFlix is another website that offers free streaming of movies and TV shows in various genres and languages. It has Avengers: Age Of Ultron full movie HD download free option as well as streaming option. You can download the movie in different formats such as 300MB, 480p, 720p, and 1080p. You can also watch it online without any registration or subscription.
-
- - Google Drive[^3^]: Google Drive is a cloud storage service that allows you to store and share files online. You can also use it to watch movies and TV shows that are uploaded 1b8d091108
-
-
-
-
-
-
-
-
-
-
spaces/1phancelerku/anime-remove-background/Descarga Clash Royale Hack APK con todas las cartas desbloqueadas y gemas ilimitadas.md DELETED
@@ -1,69 +0,0 @@
-
- <h1>Clash Royale Hack APK Todas Las Cartas: What You Need to Know</h1>
- <p>Clash Royale is a popular mobile game that combines elements of strategy, card collecting, and tower defense. It is developed by Supercell, the same company behind Clash of Clans. In Clash Royale, you can collect and upgrade over 100 cards featuring characters from the Clash universe, as well as spells and defenses. You can also build your own battle deck and challenge other players in real-time multiplayer matches.</p>
- <h2>clash royale hack apk todas las cartas</h2><br /><p><b><b>DOWNLOAD</b> &#9733;&#9733;&#9733; <a href="https://jinyurl.com/2uNNhq">https://jinyurl.com/2uNNhq</a></b></p><br /><br />
- <p>However, some players are not satisfied with playing Clash Royale the way it is intended. They want to get access to all the cards in the game without spending any money or time. That's why they look for hack APKs for Clash Royale.</p>
- <p>A hack APK is a modified version of an app that allows users to cheat or bypass certain restrictions in the game. For example, a hack APK for Clash Royale may let you get unlimited gems, gold, and cards for free. It may also let you unlock all arenas and chests instantly. In Spanish, "todas las cartas" means "all cards", so a hack APK that claims to offer "todas las cartas" is one that gives you access to every card in the game.</p>
- <p>But before you download and install a hack APK for Clash Royale, you should know that there are many risks and drawbacks involved. In this article, we will explain why using hack APKs for Clash Royale is not a good idea, and how you can play Clash Royale without using them.</p>
- <p></p>
- <h2>Why Do People Use Hack APKs for Clash Royale?</h2>
- <p>There are several reasons why some players use hack APKs for Clash Royale. Here are some of them:</p>
- <ul>
- <li>They want to get unlimited gems, gold, and cards for free. Gems are the premium currency in Clash Royale that can be used to buy chests, cards, gold, and other items. Gold is used to upgrade cards and buy new ones from the shop. Cards are the units that you use to fight in battles. Getting more gems, gold, and cards can help you progress faster in the game and get an edge over your opponents.</li>
- all arenas and chests instantly. Arenas are the different stages in Clash Royale that have different themes and rewards. Chests are the containers that hold cards, gold, and gems. You can get chests by winning battles, completing quests, or buying them with gems. Unlocking all arenas and chests can give you access to more cards and resources.</li>
- <li>They want to gain an unfair advantage over other players. Hack APKs for Clash Royale may allow you to cheat in battles by giving you unlimited elixir, making your cards stronger, or disabling your opponent's cards. Elixir is the resource that you use to deploy cards in battles. Having more elixir than your opponent can give you a huge advantage. Making your cards stronger or disabling your opponent's cards can also make you win easily.</li>
- </ul>
- <p>These are some of the reasons why some players use hack APKs for Clash Royale. However, they may not realize that using hack APKs for Clash Royale is not only illegal, but also risky and unethical.</p>
- <h3>How Do Hack APKs Work for Clash Royale?</h3>
- <p>Hack APKs for Clash Royale work by modifying the original game files, bypassing the security checks, and injecting fake data into the game servers. Here is how they do it:</p>
- <ul>
- <li>They modify the original game files. Hack APKs for Clash Royale are usually downloaded from third-party websites that are not affiliated with Supercell or Google Play. These websites may offer modified versions of the game that have been altered to enable cheating features. For example, they may change the values of gems, gold, and cards in the game code, or add new codes that allow you to manipulate the game mechanics.</li>
- <li>They bypass the security checks. Clash Royale is an online game that requires a constant connection to the game servers. The game servers are responsible for verifying the authenticity of the game files and the data exchanged between the players and the game. Hack APKs for Clash Royale may use various methods to bypass these security checks, such as using proxy servers, encryption, or spoofing techniques. These methods can make the game servers think that the hack APKs are legitimate versions of the game.</li>
- <li>They inject fake data into the game servers. Hack APKs for Clash Royale may also send fake data to the game servers, such as false results of battles, false requests for chests, or false information about cards. These fake data can trick the game servers into giving you more rewards or making you win more battles.</li>
- </ul>
- <h3>What Are the Risks and Drawbacks of Using Hack APKs for Clash Royale?</h3>
- <p>Using hack APKs for Clash Royale may seem tempting, but it comes with many risks and drawbacks. Here are some of them:</p>
- <h4>Legal Issues</h4>
- <p>Using hack APKs for Clash Royale is illegal and can get you in trouble with the law. By using hack APKs, you are violating the terms of service of Clash Royale, which state that you are not allowed to modify, hack, or cheat in the game. You are also infringing the intellectual property rights of Supercell, which own the exclusive rights to the game and its content. If you are caught using hack APKs, you may face legal actions from Supercell, such as lawsuits, fines, or bans.</p>
- <h4>Technical Issues</h4>
- <p>Using hack APKs for Clash Royale is risky and can damage your device and data. Hack APKs are not verified or approved by Supercell or Google Play, and may contain malware, viruses, or spyware that can harm your device or steal your personal information. Hack APKs may also cause data loss, device damage, or compatibility issues, as they may not be compatible with the latest updates or versions of the game. If you use hack APKs, you may lose your progress, account, or device.</p>
- <h4>Ethical Issues</h4>
- <p>Using hack APKs for Clash Royale is unethical and can ruin the game experience for yourself and others. Hack APKs are a form of cheating that gives you an unfair advantage over other players who play by the rules. By using hack APKs, you are ruining the game balance, the game challenge, and the game fun. You are also disrespecting other players who work hard to earn their rewards and achievements legitimately. If you use hack APKs, you may lose your reputation, respect, or friends.</p>
- <h2>How to Play Clash Royale Without Using Hack APKs?</h2>
- <p>Now that you know the risks and drawbacks of using hack APKs for Clash Royale, you may wonder how to play Clash Royale without using them. The answer is simple: just play the game as it is meant to be played. Here are some tips and benefits of playing Clash Royale without using hack APKs:</p>
- <h4>Tips and Tricks for Beginners</h4>
- <p>If you are new to Clash Royale, you may feel overwhelmed by the game's complexity and difficulty. However, there are some tips and tricks that can help you improve your skills and strategies in Clash Royale. Here are some of them:</p>
- <ul>
- <li>Join a clan. A clan is a group of players who can chat, donate cards, request cards, and participate in clan wars together. Joining a clan can help you get more cards, learn from other players, and have more fun.</li>
- <li>Build a balanced deck. A deck is a set of eight cards that you use in battles. A balanced deck should have a mix of different card types, such as troops, spells, and buildings. It should also have a mix of different elixir costs, such as low-cost cards (1-3 elixir), medium-cost cards (4-5 elixir), and high-cost cards (6+ elixir). A balanced deck can help you deal with different situations and opponents.</li>
- <li>Manage your elixir. Elixir is the resource that you use to deploy cards in battles. It regenerates over time at a constant rate. Managing your elixir means spending it wisely and efficiently. You should avoid wasting elixir by deploying cards when they are not needed or by deploying more cards than necessary. You should also try to gain an elixir advantage over your opponent by making positive elixir trades. A positive elixir trade is when you use less elixir than your opponent to counter their cards or to damage their towers.</li>
- <li>Learn from others. One of the best ways to improve your skills and strategies in Clash Royale is to learn from other players who are better than you. You can watch replays of top players in TV Royale or YouTube videos of popular streamers or content creators. You can also ask for advice from your clanmates or friends who play Clash Royale.</li>
- </ul>
- <h4>Features and Benefits of Playing Legally</h4>
- <p>If you play Clash Royale without using hack APKs, you will enjoy many features and benefits that hack APKs cannot offer. Here are some of them:</p>
- <ul>
- it has 13 arenas with different themes and rewards, it has various game modes and events that keep the game fresh and exciting. It also has a fair and balanced matchmaking system that ensures that you face opponents of similar skill level. Playing the game as it is designed will give you a more satisfying and rewarding experience.</li>
- <li>You will earn rewards fairly. Clash Royale is a free-to-play game that offers many ways to earn rewards without spending any money. You can get chests by winning battles, completing quests, or participating in clan wars. You can get gems by opening free chests, completing achievements, or watching ads. You can get gold by opening chests, donating cards, or winning battles. You can get cards by opening chests, requesting cards, or buying them from the shop. Earning rewards fairly will make you appreciate them more and motivate you to play more.</li>
- <li>You will support the developers. Clash Royale is a game that is developed by Supercell, a company that has created many other popular games such as Clash of Clans, Brawl Stars, and Hay Day. Supercell is a company that cares about its players and listens to their feedback. They are constantly updating and improving the game to make it better and more enjoyable. They also provide customer service and technical support to help players with any issues or problems they may encounter. Playing the game without using hack APKs will show your support and gratitude to the developers who work hard to create and maintain the game.</li>
- <li>You will be part of a respectful community. Clash Royale is a game that has a large and active community of players from all over the world. You can interact with other players through chat, emotes, or clan wars. You can also join online forums, social media groups, or fan websites where you can share your thoughts, opinions, or tips about the game. You can also watch or follow professional players, streamers, or content creators who showcase their skills and strategies in the game. Playing the game without using hack APKs will make you a respectful and responsible member of the community who follows the rules and respects other players.</li>
- </ul>
- <h1>Conclusion</h1>
- <p>In conclusion, using hack APKs for Clash Royale is not a good idea, as it can cause many legal, technical, and ethical issues. Hack APKs for Clash Royale are illegal, risky, and unethical. They can get you in trouble with the law, damage your device and data, and ruin the game experience for yourself and others.</p>
- <p>Instead of using hack APKs for Clash Royale, you should play the game without using them. Playing Clash Royale without using hack APKs is simple, safe, and fun. You can improve your skills and strategies in Clash Royale by following some tips and tricks for beginners. You can also enjoy many features and benefits of playing legally, such as enjoying the game's original design, earning rewards fairly, supporting the developers, and being part of a respectful community.</p>
- <p>So what are you waiting for? Download Clash Royale from Google Play or App Store today and start playing without using hack APKs. You will have a blast!</p>
- <h4>FAQs</h4>
- <p>Here are some frequently asked questions about Clash Royale and hack APKs:</p>
- <ul>
- <li>Q: What is Clash Royale?</li>
- <li>A: Clash Royale is a popular mobile game that combines elements of strategy, card collecting, and tower defense. It is developed by Supercell, the same company behind Clash of Clans.</li>
- <li>Q: What are hack APKs for Clash Royale?</li>
- <li>A: Hack APKs for Clash Royale are modified versions of the game that allow users to cheat or bypass certain restrictions in the game. For example, they may let users get unlimited gems, gold, and cards for free.</li>
- <li>Q: Why are hack APKs for Clash Royale illegal?</li>
- <li>A: Hack APKs for Clash Royale are illegal because they violate the terms of service of Clash Royale and infringe the intellectual property rights of Supercell. If users are caught using hack APKs, they may face legal actions from Supercell.</li>
- <li>Q: Why are hack APKs for Clash Royale risky?</li>
- <li>A: Hack APKs for Clash Royale are risky because they may contain malware, viruses, or spyware that can harm users' devices or steal their personal information. They may also cause data loss, device damage, or compatibility issues.</li>
- <li>Q: Why are hack APKs for Clash Royale unethical?</li>
- and the game fun. They also disrespect other players who work hard to earn their rewards and achievements legitimately.</li>
- </ul></p> 197e85843d<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Download Amp Install Google Drive For Desktop HOT!.md DELETED
@@ -1,84 +0,0 @@
1
-
2
- <h1>How to Download and Install Google Drive for Desktop</h1>
3
- <p>Google Drive is a popular online storage service that lets you store and access your files from any device. But did you know that you can also use Google Drive on your desktop computer? With Google Drive for desktop, you can sync your files between the cloud and your computer, open files directly from your computer, back up your photos, collaborate on Microsoft Office files, and more. In this article, we will show you how to download and install Google Drive for desktop, how to use its features, and some tips and tricks to make the most of it.</p>
4
- <h2>download amp; install google drive for desktop</h2><br /><p><b><b>DOWNLOAD</b> &gt;&gt;&gt; <a href="https://jinyurl.com/2uNSTF">https://jinyurl.com/2uNSTF</a></b></p><br /><br />
5
- <h2>What is Google Drive for Desktop and Why You Need It</h2>
6
- <p>Google Drive for desktop is an app that lets you access your Google Drive files and folders on your computer with Windows File Explorer or macOS Finder. It also lets you sync folders from your computer to Google Drive or backup to Google Photos. When you sync, your files download from the cloud and upload from your computer’s hard drive. After you sync, your computer's files match those in the cloud. Your files stay up to date and accessible, any change you make applies across devices.</p>
7
- <p>Some of the benefits of using Google Drive for desktop are:</p>
8
- <ul>
9
- <li>You can save storage space on your computer by streaming files from the cloud instead of downloading them.</li>
10
- <li>You can open files stored on the cloud directly on your computer without using a browser.</li>
11
- <li>You can save files and folders for offline use, including files from shared drives.</li>
12
- <li>You can collaborate on Microsoft Office files in real time with other users.</li>
13
- <li>You can extend the power of Drive with third-party apps.</li>
14
- </ul>
15
- <h2>How to Download Google Drive for Desktop</h2>
16
- <p>To download Google Drive for desktop, follow these steps:</p>
17
- <ol>
18
- <li>Go to <a href="https://www.google.com/drive/download/">https://www.google.com/drive/download/</a> and click <strong>Download</strong> under <strong>Drive for desktop</strong>.</li>
19
- <li>Select <strong>DOWNLOAD FOR WINDOWS</strong> or <strong>DOWNLOAD FOR MAC</strong>, depending on your operating system.</li>
20
- <li>A file named <strong>GoogleDriveSetup.exe</strong> (for Windows) or <strong>GoogleDrive.dmg</strong> (for Mac) will be downloaded to your computer. Open it and follow the on-screen instructions.</li>
21
- </ol>
22
- <h2>How to Install Google Drive for Desktop</h2>
23
- <p>To install Google Drive for desktop, follow these steps:</p>
24
- <ol>
25
- <li>After you open the downloaded file, a window will appear asking you to sign in with your Google account. Enter your email address and password, then click <strong>Next</strong>.</li>
26
- <li>A window will appear asking you to choose how you want to use Google Drive for desktop. You can choose between <strong>Stream files to free up space</strong> or <strong>Sync files to your computer</strong>. The first option lets you stream files from the cloud without downloading them, while the second option lets you sync files between your computer and the cloud. Choose the option that suits your needs and click <strong>Next</strong>.</li>
27
- <li>A window will appear asking you to choose which folders you want to sync or stream. You can select all folders or specific folders from <strong>My Drive</strong> and <strong>Shared drives</strong>. You can also change the location of the Google Drive folder on your computer. Click <strong>Start</strong> when you are done.</li>
28
- <li>A window will appear showing the progress of the installation. Wait until it is complete, then click <strong>Close</strong>.</li>
29
- <li>A Google Drive icon will appear on your taskbar (for Windows) or menu bar (for Mac). Click it to open Google Drive for desktop and access your files.</li>
30
- </ol>
31
- <h2>How to Use Google Drive for Desktop</h2>
32
- <p>Google Drive for desktop has many features and functions that you can use to manage your files and folders. Here are some of the main ones:</p>
34
- <ul>
35
- <li>To open a file from Google Drive on your computer, double-click it in the Google Drive folder. It will open in the default app for that file type, such as Microsoft Word, Adobe Photoshop, or VLC Media Player.</li>
36
- <li>To upload a file or folder to Google Drive from your computer, drag and drop it into the Google Drive folder. It will sync to the cloud and be available on other devices.</li>
37
- <li>To share a file or folder with someone else, right-click it in the Google Drive folder and select <strong>Share with Google Drive</strong>. A window will appear where you can enter the email addresses of the people you want to share with, choose their access level (view, comment, or edit), and add a message. Click <strong>Send</strong> when you are done.</li>
38
- <li>To view or change the settings of Google Drive for desktop, click the Google Drive icon on your taskbar or menu bar and select <strong>Preferences</strong>. A window will appear where you can adjust various options, such as syncing, streaming, notifications, backup, and more.</li>
39
- </ul>
40
- <h2>Tips and Tricks for Google Drive for Desktop</h2>
41
- <p>Here are some tips and tricks to help you make the most of Google Drive for desktop:</p>
42
- <ul>
43
- <li>To save storage space on your computer, you can stream files from the cloud instead of syncing them. To do this, go to <strong>Preferences > Syncing > Stream files to free up space</strong>. You can also choose which folders to stream or sync by going to <strong>Preferences > Syncing > Choose folders</strong>.</li>
44
- <li>To access your Google Drive files offline, you can save them for offline use. To do this, right-click a file or folder in the Google Drive folder and select <strong>Available offline</strong>. A checkmark will appear next to it. You can also see which files are available offline by going to <strong>Preferences > Offline Files</strong>.</li>
45
- <li>To collaborate on Microsoft Office files with other users, you can use Google Workspace plugins for Microsoft Office. To do this, go to <a href="https://workspace.google.com/marketplace/app/drive_for_office/1016028792930">https://workspace.google.com/marketplace/app/drive_for_office/1016028792930</a> and download the plugin for your version of Office. After installing it, you can open Office files from Google Drive on your computer and edit them in real time with other users.</li>
46
- <li>To extend the power of Drive with third-party apps, you can use Google Workspace Marketplace. To do this, go to <a href="https://workspace.google.com/marketplace/">https://workspace.google.com/marketplace/</a> and browse or search for apps that work with Drive. You can find apps for various purposes, such as editing images, creating diagrams, signing documents, and more.</li>
47
- </ul>
48
- <h2>Conclusion</h2>
49
- <p>In this article, we have shown you how to download and install Google Drive for desktop, how to use its features, and some tips and tricks to make the most of it. Google Drive for desktop is a powerful app that lets you access your Google Drive files and folders on your computer with ease. It also lets you sync folders from your computer to Google Drive or backup to Google Photos. You can also open files directly from your computer, save files for offline use, collaborate on Microsoft Office files, and extend the power of Drive with third-party apps. If you haven't tried it yet, we recommend you download it today and see how it can improve your productivity and workflow.</p>
50
- <h2>FAQs</h2> <h3>What are the system requirements for Google Drive for desktop?</h3>
51
- <p>To use Google Drive for desktop, you need to have a computer that meets the following system requirements:</p>
52
- <table>
53
- <tr>
54
- <th>Operating system</th>
55
- <th>Minimum requirements</th>
56
- </tr>
57
- <tr>
58
- <td>Windows</td>
59
- <td>Windows 7 and up. .NET Framework 4.5.2 or higher.</td>
60
- </tr>
61
- <tr>
62
- <td>Mac</td>
63
- <td>El Capitan (10.11) and up.</td>
64
- </tr>
65
- </table>
66
- <h3>How much storage space does Google Drive for desktop use?</h3>
67
- <p>The amount of storage space that Google Drive for desktop uses depends on the option you choose for syncing or streaming files. If you choose to stream files, Google Drive for desktop will use a small amount of disk space to cache some of your files for faster access. You can change the cache size by going to <strong>Preferences > Syncing > Cache size</strong>. If you choose to sync files, Google Drive for desktop will use the same amount of disk space as the files you sync. You can change the folders you sync by going to <strong>Preferences > Syncing > Choose folders</strong>.</p>
68
- <h3>How can I access my Google Drive files offline?</h3>
69
- <p>To access your Google Drive files offline, you need to save them for offline use. To do this, right-click a file or folder in the Google Drive folder and select <strong>Available offline</strong>. A checkmark will appear next to it. You can also see which files are available offline by going to <strong>Preferences > Offline Files</strong>. When you are offline, you can open and edit your offline files as usual. When you are online again, your changes will sync to the cloud.</p>
70
- <h3>How can I collaborate on Microsoft Office files with Google Drive for desktop?</h3>
71
- <p>To collaborate on Microsoft Office files with Google Drive for desktop, you need to use Google Workspace plugins for Microsoft Office. To do this, go to <a href="https://workspace.google.com/marketplace/app/drive_for_office/1016028792930">https://workspace.google.com/marketplace/app/drive_for_office/1016028792930</a> and download the plugin for your version of Office. After installing it, you can open Office files from Google Drive on your computer and edit them in real time with other users. You can also see their comments and suggestions, and save your changes to the cloud.</p>
72
- <h3>How can I troubleshoot errors in Google Drive for desktop?</h3>
73
- <p>If you encounter any errors or issues in Google Drive for desktop, you can try the following steps to fix them:</p>
74
- <ul>
75
- <li>Check your internet connection and make sure it is stable and fast.</li>
76
- <li>Check your disk space and make sure you have enough free space on your computer.</li>
77
- <li>Check your sync settings and make sure they are correct and up to date.</li>
78
- <li>Check your file names and make sure they do not contain any invalid characters or exceed the maximum length.</li>
79
- <li>Check your antivirus or firewall settings and make sure they do not block or interfere with Google Drive for desktop.</li>
80
- <li>Restart Google Drive for desktop or your computer and see if the problem persists.</li>
81
- <li>Contact Google support or visit the help center for more assistance.</li>
82
- </ul></p>
 
spaces/1phancelerku/anime-remove-background/Download Spider Fighter Mod APK and Become the Ultimate Hero.md DELETED
@@ -1,70 +0,0 @@
1
- <br />
2
- <h1>Download Game Spider Fighter Mod Apk: A Guide for Superhero Fans</h1>
3
- <p>Do you love superhero games? Do you want to become a spider hero and fight against the evil forces in the city? If yes, then you should download game spider fighter mod apk, a thrilling action game that will make you feel like a real superhero. In this game, you can craft superskills and spider's powers, fight crime bosses and corrupted forces, and explore the grand city and its secrets. In this article, we will tell you everything you need to know about spider fighter mod apk, including its features, how to download and install it, and why you should download it. So, let's get started!</p>
4
- <h2>download game spider fighter mod apk</h2><br /><p><b><b>Download File</b> &#187; <a href="https://jinyurl.com/2uNPVv">https://jinyurl.com/2uNPVv</a></b></p><br /><br />
5
- <h2>What is Spider Fighter Mod Apk?</h2>
6
- <p>Spider Fighter Mod Apk is a modified version of the original game Spider Fighter, developed by Superhero Academy. It is an action game that lets you play as a spider hero who has to save the city from the mafia and other enemies. You can use your spider's powers, such as web-shooting, wall-crawling, and swinging, to move around the city and fight your foes. You can also craft superskills, such as fireballs, lightning bolts, and ice blasts, to enhance your abilities and defeat your enemies faster. The game has a captivating storyline, realistic physics, and stunning graphics that will keep you hooked for hours.</p>
7
- <h3>Features of Spider Fighter Mod Apk</h3>
8
- <h4>Craft superskills and spider's powers</h4>
9
- <p>One of the most exciting features of spider fighter mod apk is that you can craft superskills and spider's powers to complement your man body and become more powerful. You can choose from different types of skills, such as fire, ice, lightning, earth, wind, and water, and combine them with your spider's powers to create unique effects. For example, you can shoot fireballs from your web-shooter, or freeze your enemies with your ice blast. You can also upgrade your skills and powers as you progress in the game and unlock new ones.</p>
10
- <h4>Fight crime bosses and corrupted forces</h4>
11
- <p>The city is in danger as crime bosses have occupied it and corrupted the police and army forces. You have to fight them and restore peace in the city. You will face different types of enemies, such as thugs, gangsters, snipers, robots, helicopters, tanks, and more. You have to use your skills and powers wisely to defeat them and avoid their attacks. You can also use the environment to your advantage, such as cars, buildings, bridges, and more. You will also encounter boss battles that will challenge your skills and strategy.</p>
37
- <h4>Explore the grand city and its secrets</h4>
38
- <p>The game has a large open-world map that allows you to explore the grand city and its secrets. You can roam around freely and discover different places, such as parks, streets, alleys, rooftops, tunnels, warehouses, and more. You can also interact with various objects and people in the city. You can find hidden items, such as coins, gems, health packs, weapons, and more. You can also complete missions and quests that will reward you with money and experience points. You can use the money to buy new costumes, gadgets, vehicles, and more.</p>
39
- <h3>How to download and install Spider Fighter Mod Apk?</h3>
40
- <h4>Step 1: Download the mod apk file from a trusted source</h4>
41
- <p>The first step to download game spider fighter mod apk is to find a reliable source that offers the mod apk file for free and without any viruses or malware. You can use the link below to download the mod apk file from our website. The file size is about 100 MB and it is compatible with Android 4.4 and above.</p>
42
- <h4>Step 2: Enable unknown sources on your device</h4>
43
- <p>The next step is to enable unknown sources on your device so that you can install the mod apk file. To do this, go to your device settings and look for the security or privacy option. Then, find the unknown sources option and toggle it on. This will allow you to install apps from sources other than the Google Play Store.</p>
44
- <h4>Step 3: Install the mod apk file and enjoy the game</h4>
45
- <p>The final step is to install the mod apk file and enjoy the game. To do this, locate the downloaded mod apk file on your device and tap on it. Then, follow the instructions on the screen to complete the installation process. Once done, you can launch the game and start playing as a spider hero.</p>
46
- <h3>Why should you download Spider Fighter Mod Apk?</h3>
47
- <h4>Unlimited money and gems</h4>
48
- <p>One of the main reasons why you should download spider fighter mod apk is that it gives you unlimited money and gems in the game. Money and gems are the main currencies in the game that you can use to buy new costumes, gadgets, vehicles, and more. You can also use them to upgrade your skills and powers and unlock new ones. With unlimited money and gems, you can enjoy the game without any limitations or restrictions.</p>
49
- <h4>No ads and no root required</h4>
50
- <p>Another reason why you should download spider fighter mod apk is that it removes all the ads and does not require root access on your device. Ads can be annoying and distracting when you are playing the game, especially when they pop up in between the action scenes. They can also consume your data and battery life. With spider fighter mod apk, you can play the game without any ads and interruptions. Moreover, you do not need to root your device to install the mod apk file, which means you do not have to risk damaging your device or voiding its warranty.</p>
51
- <h4>High-quality graphics and sound effects</h4>
52
- <p>The last reason why you should download spider fighter mod apk is that it enhances the graphics and sound effects of the game. The game has high-quality graphics that make the city look realistic and immersive. You can see the details of the buildings, cars, bridges, and more. The game also has amazing sound effects that make you feel like you are in a real superhero movie. You can hear the sounds of your web-shooting, swinging, fighting, and more. The game also has a catchy soundtrack that matches the mood of the game.</p>
53
- <h2>Conclusion</h2>
54
- <p>Spider Fighter Mod Apk is a must-have game for all superhero fans who want to become a spider hero and save the city from evil forces. The game has many features that make it fun and exciting, such as crafting superskills and spider's powers, fighting crime bosses and corrupted forces, exploring the grand city and its secrets, unlimited money and gems, no ads and no root required, high-quality graphics and sound effects, and more. You can download game spider fighter mod apk from our website for free and without any viruses or malware. Just follow the steps we have provided above and enjoy the game.</p>
55
- <h2>FAQs</h2>
56
- <p>Here are some frequently asked questions about spider fighter mod apk:</p>
57
- <ul>
58
- <li><b>Q: Is spider fighter mod apk safe to download?</b></li>
59
- <li>A: Yes, spider fighter mod apk is safe to download as long as you get it from a trusted source like our website. We have tested the mod apk file for any viruses or malware and found none.</li>
60
- <li><b>Q: How can I update spider fighter mod apk?</b></li>
61
- <li>A: You can update spider fighter mod apk by visiting our website regularly and downloading the latest version of the mod apk file. You can also follow us on social media to get notified of any updates.</li>
62
- <li><b>Q: Can I play spider fighter mod apk offline?</b></li>
63
- <li>A: Yes, you can play spider fighter mod apk offline without any internet connection. However, some features may not work properly offline, such as missions, quests, leaderboards, etc.</li>
64
- <li><b>Q: Can I play spider fighter mod apk with my friends?</b></li>
65
- <li>A: Unfortunately, spider fighter mod apk does not support multiplayer mode at the moment. You can only play solo as a spider hero.</li>
66
- <li><b>Q: What are some alternatives to spider fighter mod apk?</b></li>
67
- <li>A: Some alternatives to spider fighter mod apk are Spider-Man: Ultimate Power, Spider Hero: Superhero Fighting, and Spider Rope Hero: Vice Town. You can find these games on the Google Play Store or other sources.</li>
68
- </ul></p>
 
spaces/2kaara/oreo/README.md DELETED
@@ -1,10 +0,0 @@
1
- ---
2
- title: Oreo
3
- emoji: 🏆
4
- colorFrom: pink
5
- colorTo: purple
6
- sdk: docker
7
- pinned: false
8
- ---
9
-
10
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/34we12er/newbing/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Newbing
3
- emoji: 🐨
4
- colorFrom: green
5
- colorTo: pink
6
- sdk: docker
7
- pinned: false
8
- license: mit
9
- app_port: 8080
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/AI-ZTH-03-23/3.HTML5-Aframe-3dMap-Flight/index.html DELETED
@@ -1,46 +0,0 @@
1
- <!DOCTYPE html>
2
- <html>
3
- <head>
4
- <meta charset="utf-8">
5
- <title>Recursive Polygons in 3D</title>
6
- <script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
7
- <script src="https://unpkg.com/aframe-environment-component/dist/aframe-environment-component.min.js"></script>
8
- <style>
9
- #canvas {
10
- height: 500px;
11
- width: 800px;
12
- }
13
- </style>
14
- </head>
15
- <body>
16
- <a-scene>
17
- <a-entity environment="preset: forest"></a-entity>
18
-
19
- <!-- Recursive Polygon Component -->
20
- <a-entity recursive-polygon="
21
- vertices: 6;
22
- scale: 2;
23
- level: 5;
24
- color: #FFC65D;
25
- height: 0.5;
26
- x: 0;
27
- y: 0;
28
- z: -5
29
- "></a-entity>
30
-
31
- <!-- Math Function -->
32
- <a-entity math-function="
33
- func: sin(x^2+y^2)/sqrt(x^2+y^2);
34
- xmin: -5; xmax: 5;
35
- ymin: -5; ymax: 5;
36
- xstep: 0.2; ystep: 0.2;
37
- scale: 0.5;
38
- color: #8CEEEF;
39
- height: 0.1;
40
- z: -5
41
- "></a-entity>
42
-
43
- <a-entity camera position="0 1.6 0" look-controls wasd-controls></a-entity>
44
- </a-scene>
45
- </body>
46
- </html>
 
spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/vis_utils.py DELETED
@@ -1,66 +0,0 @@
1
- from model.rotation2xyz import Rotation2xyz
2
- import numpy as np
3
- from trimesh import Trimesh
4
- import os
5
- import torch
6
- from visualize.simplify_loc2rot import joints2smpl
7
-
8
- class npy2obj:
9
- def __init__(self, npy_path, sample_idx, rep_idx, device=0, cuda=True):
10
- self.npy_path = npy_path
11
- self.motions = np.load(self.npy_path, allow_pickle=True)
12
- if self.npy_path.endswith('.npz'):
13
- self.motions = self.motions['arr_0']
14
- self.motions = self.motions[None][0]
15
- self.rot2xyz = Rotation2xyz(device='cpu')
16
- self.faces = self.rot2xyz.smpl_model.faces
17
- self.bs, self.njoints, self.nfeats, self.nframes = self.motions['motion'].shape
18
- self.opt_cache = {}
19
- self.sample_idx = sample_idx
20
- self.total_num_samples = self.motions['num_samples']
21
- self.rep_idx = rep_idx
22
- self.absl_idx = self.rep_idx*self.total_num_samples + self.sample_idx
23
- self.num_frames = self.motions['motion'][self.absl_idx].shape[-1]
24
- self.j2s = joints2smpl(num_frames=self.num_frames, device_id=device, cuda=cuda)
25
-
26
- if self.nfeats == 3:
27
- print(f'Running SMPLify For sample [{sample_idx}], repetition [{rep_idx}], it may take a few minutes.')
28
- motion_tensor, opt_dict = self.j2s.joint2smpl(self.motions['motion'][self.absl_idx].transpose(2, 0, 1)) # [nframes, njoints, 3]
29
- self.motions['motion'] = motion_tensor.cpu().numpy()
30
- elif self.nfeats == 6:
31
- self.motions['motion'] = self.motions['motion'][[self.absl_idx]]
32
- self.bs, self.njoints, self.nfeats, self.nframes = self.motions['motion'].shape
33
- self.real_num_frames = self.motions['lengths'][self.absl_idx]
34
-
35
- self.vertices = self.rot2xyz(torch.tensor(self.motions['motion']), mask=None,
36
- pose_rep='rot6d', translation=True, glob=True,
37
- jointstype='vertices',
38
- # jointstype='smpl', # for joint locations
39
- vertstrans=True)
40
- self.root_loc = self.motions['motion'][:, -1, :3, :].reshape(1, 1, 3, -1)
41
- self.vertices += self.root_loc
42
-
43
- def get_vertices(self, sample_i, frame_i):
44
- return self.vertices[sample_i, :, :, frame_i].squeeze().tolist()
45
-
46
- def get_trimesh(self, sample_i, frame_i):
47
- return Trimesh(vertices=self.get_vertices(sample_i, frame_i),
48
- faces=self.faces)
49
-
50
- def save_obj(self, save_path, frame_i):
51
- mesh = self.get_trimesh(0, frame_i)
52
- with open(save_path, 'w') as fw:
53
- mesh.export(fw, 'obj')
54
- return save_path
55
-
56
- def save_npy(self, save_path):
57
- data_dict = {
58
- 'motion': self.motions['motion'][0, :, :, :self.real_num_frames],
59
- 'thetas': self.motions['motion'][0, :-1, :, :self.real_num_frames],
60
- 'root_translation': self.motions['motion'][0, -1, :3, :self.real_num_frames],
61
- 'faces': self.faces,
62
- 'vertices': self.vertices[0, :, :, :self.real_num_frames],
63
- 'text': self.motions['text'][0],
64
- 'length': self.real_num_frames,
65
- }
66
- np.save(save_path, data_dict)
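
The `save_obj` method above delegates OBJ export to `trimesh`. The Wavefront OBJ format it produces is simple enough to sketch by hand; the following standalone writer (a hypothetical helper for illustration, not part of this repo) shows the `v`/`f` line structure, including OBJ's 1-based face indices:

```python
def write_obj(vertices, faces):
    """Serialize a triangle mesh to Wavefront OBJ text.

    vertices: list of (x, y, z) tuples; faces: list of (i, j, k) 0-based index triples.
    """
    lines = []
    for x, y, z in vertices:
        lines.append(f"v {x} {y} {z}")              # one vertex position per line
    for i, j, k in faces:
        lines.append(f"f {i + 1} {j + 1} {k + 1}")  # OBJ face indices are 1-based
    return "\n".join(lines) + "\n"

# A single triangle:
obj_text = write_obj([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

A viewer that reads OBJ (or `trimesh.load`) should accept text in this shape; the class above additionally bakes per-frame root translation into the vertices before export.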
 
spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/tts/syntaspeech/syntactic_graph_encoder.py DELETED
@@ -1,193 +0,0 @@
1
- import torch
2
- import torch.nn as nn
3
- import torch.nn.functional as F
4
-
5
- import dgl
6
- from dgl.nn.pytorch import GatedGraphConv
7
-
8
- def sequence_mask(lengths, maxlen, dtype=torch.bool):
9
- if maxlen is None:
10
- maxlen = lengths.max()
11
- mask = ~(torch.ones((len(lengths), maxlen)).to(lengths.device).cumsum(dim=1).t() > lengths).t()
12
- mask = mask.type(dtype)
13
- return mask
14
-
15
-
16
- def group_hidden_by_segs(h, seg_ids, max_len):
17
- """
18
- :param h: [B, T, H]
19
- :param seg_ids: [B, T]
20
- :return: h_ph: [B, T_ph, H]
21
- """
22
- B, T, H = h.shape
23
- h_gby_segs = h.new_zeros([B, max_len + 1, H]).scatter_add_(1, seg_ids[:, :, None].repeat([1, 1, H]), h)
24
- all_ones = h.new_ones(h.shape[:2])
25
- cnt_gby_segs = h.new_zeros([B, max_len + 1]).scatter_add_(1, seg_ids, all_ones).contiguous()
26
- h_gby_segs = h_gby_segs[:, 1:]
27
- cnt_gby_segs = cnt_gby_segs[:, 1:]
28
- h_gby_segs = h_gby_segs / torch.clamp(cnt_gby_segs[:, :, None], min=1)
29
- # assert h_gby_segs.shape[-1] == 192
30
- return h_gby_segs
31
-
32
- class GraphAuxEnc(nn.Module):
33
- def __init__(self, in_dim, hid_dim, out_dim, n_iterations=5, n_edge_types=6):
34
- super(GraphAuxEnc, self).__init__()
35
- self.in_dim = in_dim
36
- self.hid_dim = hid_dim
37
- self.out_dim = out_dim
38
- self.skip_connect = True
39
- self.dropout_after_gae = False
40
-
41
- self.ggc_1 = GatedGraphConv(in_feats=in_dim, out_feats=hid_dim
42
- , n_steps=n_iterations, n_etypes=n_edge_types)
43
- self.ggc_2 = GatedGraphConv(in_feats=hid_dim, out_feats=out_dim
44
- , n_steps=n_iterations, n_etypes=n_edge_types)
45
- self.dropout = nn.Dropout(p=0.5)
46
-
47
- @staticmethod
48
- def ph_encoding_to_word_encoding(ph_encoding, ph2word, word_len):
49
- """
50
- ph_encoding: [batch, t_p, hid]
51
- ph2word: tensor [batch, t_w]
52
- word_len: tensor [batch]
53
- """
54
- word_encoding_for_graph, batch_word_encoding, has_word_row_idx = GraphAuxEnc._process_ph_to_word_encoding(
55
- ph_encoding,
56
- ph2word,
57
- word_len)
58
- # [batch, t_w, hid]
59
- return batch_word_encoding, word_encoding_for_graph
60
-
61
- def pad_word_encoding_to_phoneme(self, word_encoding, ph2word, t_p):
62
- return self._postprocess_word2ph(word_encoding, ph2word, t_p)
63
-
64
- @staticmethod
65
- def _process_ph_to_word_encoding(ph_encoding, ph2word, word_len=None):
66
- """
67
- ph_encoding: [batch, t_p, hid]
68
- ph2word: tensor [batch, t_w]
69
- word_len: tensor [batch]
70
- """
71
- word_len = word_len.reshape([-1,])
72
- max_len = max(word_len)
73
- num_nodes = sum(word_len)
74
-
75
- batch_word_encoding = group_hidden_by_segs(ph_encoding, ph2word, max_len)
76
- bs, t_p, hid = batch_word_encoding.shape
77
- has_word_mask = sequence_mask(word_len, max_len) # [batch, t_p, 1]
78
- word_encoding = batch_word_encoding.reshape([bs * t_p, hid])
79
- has_word_row_idx = has_word_mask.reshape([-1])
80
- word_encoding = word_encoding[has_word_row_idx]
81
- assert word_encoding.shape[0] == num_nodes
82
- return word_encoding, batch_word_encoding, has_word_row_idx
83
-
84
- @staticmethod
85
- def _postprocess_word2ph(word_encoding, ph2word, t_p):
86
- word_encoding = F.pad(word_encoding,[0,0,1,0])
87
- ph2word_ = ph2word[:, :, None].repeat([1, 1, word_encoding.shape[-1]])
88
- out = torch.gather(word_encoding, 1, ph2word_) # [B, T, H]
89
- return out
90
-
91
- @staticmethod
92
- def _repeat_one_sequence(x, d, T):
93
- """Repeat each frame according to duration."""
94
- if d.sum() == 0:
95
- d = d.fill_(1)
96
- hid = x.shape[-1]
97
- expanded_lst = [x_.repeat(int(d_), 1) for x_, d_ in zip(x, d) if d_ != 0]
98
- expanded = torch.cat(expanded_lst, dim=0)
99
- if T > expanded.shape[0]:
100
- expanded = torch.cat([expanded, torch.zeros([T - expanded.shape[0], hid]).to(expanded.device)], dim=0)
101
- return expanded
102
-
103
- def word_forward(self, graph_lst, word_encoding, etypes_lst):
104
- """
105
- word encoding in, word encoding out.
106
- """
107
- batched_graph = dgl.batch(graph_lst)
108
- inp = word_encoding
109
- batched_etypes = torch.cat(etypes_lst) # [num_edges_in_batch, 1]
110
- assert batched_graph.num_nodes() == inp.shape[0]
111
-
112
- gcc1_out = self.ggc_1(batched_graph, inp, batched_etypes)
113
- if self.dropout_after_gae:
114
- gcc1_out = self.dropout(gcc1_out)
115
- gcc2_out = self.ggc_2(batched_graph, gcc1_out, batched_etypes) # [num_nodes_in_batch, hin]
116
- if self.dropout_after_gae:
117
- gcc2_out = self.dropout(gcc2_out)
118
- if self.skip_connect:
119
- assert self.in_dim == self.hid_dim and self.hid_dim == self.out_dim
120
- gcc2_out = inp + gcc1_out + gcc2_out
121
-
122
- word_len = torch.tensor([g.num_nodes() for g in graph_lst]).reshape([-1])
123
- max_len = max(word_len)
124
- has_word_mask = sequence_mask(word_len, max_len) # [batch, t_p, 1]
125
- has_word_row_idx = has_word_mask.reshape([-1])
126
- bs = len(graph_lst)
127
- t_w = max([g.num_nodes() for g in graph_lst])
128
- hid = word_encoding.shape[-1]
129
- output = torch.zeros([bs * t_w, hid]).to(gcc2_out.device)
130
- output[has_word_row_idx] = gcc2_out
131
- output = output.reshape([bs, t_w, hid])
132
- word_level_output = output
133
- return torch.transpose(word_level_output, 1, 2)
134
-
135
- def forward(self, graph_lst, ph_encoding, ph2word, etypes_lst, return_word_encoding=False):
136
- """
137
- graph_lst: [list of dgl_graph]
138
- ph_encoding: [batch, hid, t_p]
139
- ph2word: [list of list[1,2,2,2,3,3,3]]
140
- etypes_lst: [list of etypes]; etypes: torch.LongTensor
141
- """
142
- t_p = ph_encoding.shape[-1]
143
- ph_encoding = ph_encoding.transpose(1,2) # [batch, t_p, hid]
144
- word_len = torch.tensor([g.num_nodes() for g in graph_lst]).reshape([-1])
145
- batched_graph = dgl.batch(graph_lst)
146
- inp, batched_word_encoding, has_word_row_idx = self._process_ph_to_word_encoding(ph_encoding, ph2word,
147
- word_len=word_len) # [num_nodes_in_batch, in_dim]
148
- bs, t_w, hid = batched_word_encoding.shape
149
- batched_etypes = torch.cat(etypes_lst) # [num_edges_in_batch, 1]
150
- gcc1_out = self.ggc_1(batched_graph, inp, batched_etypes)
151
- gcc2_out = self.ggc_2(batched_graph, gcc1_out, batched_etypes) # [num_nodes_in_batch, hin]
152
- # skip connection
153
- gcc2_out = inp + gcc1_out + gcc2_out # [n_nodes, hid]
154
-
155
- output = torch.zeros([bs * t_w, hid]).to(gcc2_out.device)
156
- output[has_word_row_idx] = gcc2_out
157
- output = output.reshape([bs, t_w, hid])
158
- word_level_output = output
159
- output = self._postprocess_word2ph(word_level_output, ph2word, t_p) # [batch, t_p, hid]
160
- output = torch.transpose(output, 1, 2)
161
-
162
- if return_word_encoding:
163
- return output, torch.transpose(word_level_output, 1, 2)
164
- else:
165
- return output
166
-
167
- if __name__ == '__main__':
168
- # Unit Test for batching graphs
169
- from text_to_speech.modules.tts.syntaspeech.syntactic_graph_buider import Sentence2GraphParser, plot_dgl_sentence_graph
170
- parser = Sentence2GraphParser("en")
171
-
172
- # Unit Test for English Graph Builder
173
- text1 = "To be or not to be , that 's a question ."
174
- text2 = "I love you . You love me . Mixue ice-scream and tea ."
175
- graph1, etypes1 = parser.parse(text1)
176
- graph2, etypes2 = parser.parse(text2)
177
- batched_text = "<BOS> " + text1 + " <EOS>" + " " + "<BOS> " + text2 + " <EOS>"
178
- batched_nodes = [graph1.num_nodes(), graph2.num_nodes()]
179
- plot_dgl_sentence_graph(dgl.batch([graph1, graph2]), {i: w for i, w in enumerate(batched_text.split(" "))})
180
- etypes_lst = [etypes1, etypes2]
181
-
182
- # Unit Test for Graph Encoder forward
183
- in_feats = 4
184
- out_feats = 4
185
- enc = GraphAuxEnc(in_dim=in_feats, hid_dim=in_feats, out_dim=out_feats)
186
- ph2word = torch.tensor([
187
- [1, 2, 3, 3, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 0],
188
- [1, 2, 3, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]
189
- ])
190
- inp = torch.randn([2, in_feats, 17]) # [N_sentence, feat, ph_length]
191
- graph_lst = [graph1, graph2]
192
- out = enc(graph_lst, inp, ph2word, etypes_lst)
193
- print(out.shape) # [N_sentence, feat, ph_length]
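The word-to-phoneme expansion performed by `_postprocess_word2ph` above repeats each word-level vector once per phoneme (skipping padding entries where `d_ == 0`) and zero-pads up to the target length. A minimal NumPy sketch of that logic, for illustration only (the function name and shapes are illustrative, not part of the repo):

```python
import numpy as np

def expand_word_to_ph(word_feats, ph2word, t_p):
    # word_feats: [n_words, hid]; ph2word maps each phoneme position to a
    # 1-based word index (0 marks padding, mirroring `if d_ != 0` above)
    hid = word_feats.shape[1]
    out = np.zeros((t_p, hid))
    for i, w in enumerate(ph2word):
        if w != 0:
            out[i] = word_feats[w - 1]
    return out

word_feats = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
expanded = expand_word_to_ph(word_feats, [1, 2, 2, 3, 0], t_p=6)
print(expanded[:, 0])  # [1. 2. 2. 3. 0. 0.]
```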
spaces/AIZerotoHero-Health4All/01-Gradio-Speech2Text2Speech-AIPipeline/app.py DELETED
@@ -1,160 +0,0 @@
- import streamlit as st
- import datetime
- from transformers import pipeline
- import gradio as gr
-
- import tempfile
- from typing import Optional
- import numpy as np
- from TTS.utils.manage import ModelManager
- from TTS.utils.synthesizer import Synthesizer
-
- # PersistDataset -----
- import os
- import csv
- import gradio as gr
- from gradio import inputs, outputs
- import huggingface_hub
- from huggingface_hub import Repository, hf_hub_download, upload_file
- from datetime import datetime
-
- # created new dataset as awacke1/MindfulStory.csv
- #DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/MindfulStory.csv"
- #DATASET_REPO_ID = "awacke1/MindfulStory.csv"
- #DATA_FILENAME = "MindfulStory.csv"
- #DATA_FILE = os.path.join("data", DATA_FILENAME)
- HF_TOKEN = os.environ.get("HF_TOKEN")
-
- # Download dataset repo using hub download
- #try:
- #    hf_hub_download(
- #        repo_id=DATASET_REPO_ID,
- #        filename=DATA_FILENAME,
- #        cache_dir=DATA_DIRNAME,
- #        force_filename=DATA_FILENAME
- #    )
- #except:
- #    print("file not found")
-
- #def AIMemory(name: str, message: str):
- #    if name and message:
- #        with open(DATA_FILE, "a") as csvfile:
- #            writer = csv.DictWriter(csvfile, fieldnames=["name", "message", "time"])
- #            writer.writerow({"name": name, "message": message, "time": str(datetime.now())})
- #        commit_url = repo.push_to_hub()
- #    return {"name": name, "message": message, "time": str(datetime.now())}
-
- #with open('Mindfulness.txt', 'r') as file:
- #    context = file.read()
-
- # Set up cloned dataset from repo for operations
- #repo = Repository(local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN)
-
- # set up ASR
- asr = pipeline("automatic-speech-recognition", "facebook/wav2vec2-base-960h")
-
- # set up TTS
- MODEL_NAMES = [
-     "en/ljspeech/tacotron2-DDC",
-     "en/ljspeech/glow-tts",
-     "en/ljspeech/speedy-speech-wn",
-     "en/ljspeech/vits",
-     "en/sam/tacotron-DDC",
-     "fr/mai/tacotron2-DDC",
-     "de/thorsten/tacotron2-DCA",
- ]
-
- # Use Model Manager to load vocoders
- MODELS = {}
- manager = ModelManager()
- for MODEL_NAME in MODEL_NAMES:
-     print(f"downloading {MODEL_NAME}")
-     model_path, config_path, model_item = manager.download_model(f"tts_models/{MODEL_NAME}")
-     vocoder_name: Optional[str] = model_item["default_vocoder"]
-     vocoder_path = None
-     vocoder_config_path = None
-     if vocoder_name is not None:
-         vocoder_path, vocoder_config_path, _ = manager.download_model(vocoder_name)
-
-     synthesizer = Synthesizer(
-         model_path, config_path, None, vocoder_path, vocoder_config_path,
-     )
-     MODELS[MODEL_NAME] = synthesizer
-
- # transcribe
- def transcribe(audio):
-     text = asr(audio)["text"]
-     return text
-
- # text classifier
- classifier = pipeline("text-classification")
-
-
- def speech_to_text(speech):
-     text = asr(speech)["text"]
-     #rMem = AIMemory("STT", text)
-     return text
-
- def text_to_sentiment(text):
-     sentiment = classifier(text)[0]["label"]
-     #rMem = AIMemory(text, sentiment)
-     return sentiment
-
- def upsert(text):
-     date_time = str(datetime.datetime.today())
-     doc_ref = db.collection('Text2SpeechSentimentSave').document(date_time)
-     doc_ref.set({u'firefield': 'Recognize Speech', u'first': 'https://huggingface.co/spaces/awacke1/TTS-STT-Blocks/', u'last': text, u'born': date_time,})
-     saved = select('TTS-STT', date_time)
-     return saved
-
- def select(collection, document):
-     doc_ref = db.collection(collection).document(document)
-     doc = doc_ref.get()
-     docid = ("The id is: ", doc.id)
-     contents = ("The contents are: ", doc.to_dict())
-     return contents
-
- def selectall(text):
-     docs = db.collection('Text2SpeechSentimentSave').stream()
-     doclist = ''
-     for doc in docs:
-         r = (f'{doc.id} => {doc.to_dict()}')
-         doclist += r
-     return doclist
-
- def tts(text: str, model_name: str):
-     print(text, model_name)
-     synthesizer = MODELS.get(model_name, None)
-     if synthesizer is None:
-         raise NameError("model not found")
-     wavs = synthesizer.tts(text)
-     with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp:
-         synthesizer.save_wav(wavs, fp)
-
-     #rMem = AIMemory("TTS", text + model_name)
-
-     return fp.name
-
- demo = gr.Blocks()
- with demo:
-     audio_file = gr.inputs.Audio(source="microphone", type="filepath")
-     text = gr.Textbox(label="Speech to Text")
-     #label = gr.Label()
-     #saved = gr.Textbox(label="Saved")
-     #savedAll = gr.Textbox(label="SavedAll")
-     TTSchoice = gr.inputs.Radio(label="Pick a Text to Speech Model", choices=MODEL_NAMES, )
-     audio = gr.Audio(label="Output", interactive=False)
-
-     b1 = gr.Button("Recognize Speech")
-     #b2 = gr.Button("Classify Sentiment")
-     #b3 = gr.Button("Save Speech to Text")
-     #b4 = gr.Button("Retrieve All")
-     b5 = gr.Button("Read It Back Aloud")
-
-     b1.click(speech_to_text, inputs=audio_file, outputs=text)
-     #b2.click(text_to_sentiment, inputs=text, outputs=label)
-     #b3.click(upsert, inputs=text, outputs=saved)
-     #b4.click(selectall, inputs=text, outputs=savedAll)
-     b5.click(tts, inputs=[text, TTSchoice], outputs=audio)
-
- demo.launch(share=True)
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov6/yolov6_m_syncbn_fast_8xb32-300e_coco.py DELETED
@@ -1,62 +0,0 @@
- _base_ = './yolov6_s_syncbn_fast_8xb32-300e_coco.py'
-
- # ======================= Possible modified parameters =======================
- # -----model related-----
- # The scaling factor that controls the depth of the network structure
- deepen_factor = 0.6
- # The scaling factor that controls the width of the network structure
- widen_factor = 0.75
-
- # -----train val related-----
- affine_scale = 0.9  # YOLOv5RandomAffine scaling ratio
-
- # ============================== Unmodified in most cases ===================
- model = dict(
-     backbone=dict(
-         type='YOLOv6CSPBep',
-         deepen_factor=deepen_factor,
-         widen_factor=widen_factor,
-         hidden_ratio=2. / 3,
-         block_cfg=dict(type='RepVGGBlock'),
-         act_cfg=dict(type='ReLU', inplace=True)),
-     neck=dict(
-         type='YOLOv6CSPRepPAFPN',
-         deepen_factor=deepen_factor,
-         widen_factor=widen_factor,
-         block_cfg=dict(type='RepVGGBlock'),
-         hidden_ratio=2. / 3,
-         block_act_cfg=dict(type='ReLU', inplace=True)),
-     bbox_head=dict(
-         type='YOLOv6Head', head_module=dict(widen_factor=widen_factor)))
-
- mosaic_affine_pipeline = [
-     dict(
-         type='Mosaic',
-         img_scale=_base_.img_scale,
-         pad_val=114.0,
-         pre_transform=_base_.pre_transform),
-     dict(
-         type='YOLOv5RandomAffine',
-         max_rotate_degree=0.0,
-         max_shear_degree=0.0,
-         scaling_ratio_range=(1 - affine_scale, 1 + affine_scale),
-         # img_scale is (width, height)
-         border=(-_base_.img_scale[0] // 2, -_base_.img_scale[1] // 2),
-         border_val=(114, 114, 114))
- ]
-
- train_pipeline = [
-     *_base_.pre_transform, *mosaic_affine_pipeline,
-     dict(
-         type='YOLOv5MixUp',
-         prob=0.1,
-         pre_transform=[*_base_.pre_transform, *mosaic_affine_pipeline]),
-     dict(type='YOLOv5HSVRandomAug'),
-     dict(type='mmdet.RandomFlip', prob=0.5),
-     dict(
-         type='mmdet.PackDetInputs',
-         meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip',
-                    'flip_direction'))
- ]
-
- train_dataloader = dict(dataset=dict(pipeline=train_pipeline))
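The effect of the two scaling knobs in the config above can be checked with plain arithmetic (this is a standalone sketch, not mmyolo code; the 256-channel example stage is illustrative):

```python
# values set in the config above
widen_factor = 0.75
affine_scale = 0.9

# YOLOv5RandomAffine samples the image scale from this range
scaling_ratio_range = (1 - affine_scale, 1 + affine_scale)
print(round(scaling_ratio_range[0], 2), scaling_ratio_range[1])  # 0.1 1.9

# a hypothetical backbone stage with 256 base channels is narrowed to
print(int(256 * widen_factor))  # 192
```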
spaces/Aabbhishekk/MistralQnA/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: FalconInstructQA
- emoji: 😻
- colorFrom: red
- colorTo: gray
- sdk: streamlit
- sdk_version: 1.21.0
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Abs6187/AI_Chatbot/app.py DELETED
@@ -1,24 +0,0 @@
-
- import gradio
- from transformers import pipeline
-
- import openai
- import gradio
-
- openai.api_key = "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"
-
- messages = [{"role": "system", "content": "You are a Indian Lawyer and Gave Advice according to Indian constitution"}]
-
- def CustomChatGPT(user_input):
-     messages.append({"role": "user", "content": user_input})
-     response = openai.ChatCompletion.create(
-         model = "gpt-3.5-turbo",
-         messages = messages
-     )
-     ChatGPT_reply = response["choices"][0]["message"]["content"]
-     messages.append({"role": "assistant", "content": ChatGPT_reply})
-     return ChatGPT_reply
-
- iface = gradio.Interface(fn=CustomChatGPT, inputs = "text", outputs = "text", title = "AI Chat Bot for Legal Assistance This is Free version and if it doesnt work kindly contact at Email ID [email protected]")
- print(" This is Free version and if it doesnt work kindly contact at Email ID [email protected]")
- iface.launch(inline=False)
spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Mishalsgpt.py DELETED
@@ -1,23 +0,0 @@
- import os, requests, uuid
- from ...typing import sha256, Dict, get_type_hints
-
- url = 'https://mishalsgpt.vercel.app'
- model = ['gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo']
- supports_stream = True
- needs_auth = False
-
- def _create_completion(model: str, messages: list, stream: bool, **kwargs):
-     headers = {
-         'Content-Type': 'application/json',
-     }
-     data = {
-         'model': model,
-         'temperature': 0.7,
-         'messages': messages
-     }
-     response = requests.post(url + '/api/openai/v1/chat/completions',
-                              headers=headers, json=data, stream=True)
-     yield response.json()['choices'][0]['message']['content']
-
- params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
-     '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/V50.py DELETED
@@ -1,67 +0,0 @@
- from __future__ import annotations
-
- import uuid
-
- import requests
-
- from ...typing import Any, CreateResult
- from ..base_provider import BaseProvider
-
-
- class V50(BaseProvider):
-     url = 'https://p5.v50.ltd'
-     supports_gpt_35_turbo = True
-     supports_stream = False
-     needs_auth = False
-     working = False
-
-     @staticmethod
-     def create_completion(
-             model: str,
-             messages: list[dict[str, str]],
-             stream: bool, **kwargs: Any) -> CreateResult:
-
-         conversation = "\n".join(f"{message['role']}: {message['content']}" for message in messages)
-         conversation += "\nassistant: "
-
-         payload = {
-             "prompt": conversation,
-             "options": {},
-             "systemMessage": ".",
-             "temperature": kwargs.get("temperature", 0.4),
-             "top_p": kwargs.get("top_p", 0.4),
-             "model": model,
-             "user": str(uuid.uuid4())
-         }
-
-         headers = {
-             'authority': 'p5.v50.ltd',
-             'accept': 'application/json, text/plain, */*',
-             'accept-language': 'id-ID,id;q=0.9,en-US;q=0.8,en;q=0.7',
-             'content-type': 'application/json',
-             'origin': 'https://p5.v50.ltd',
-             'referer': 'https://p5.v50.ltd/',
-             'sec-ch-ua-platform': '"Windows"',
-             'sec-fetch-dest': 'empty',
-             'sec-fetch-mode': 'cors',
-             'sec-fetch-site': 'same-origin',
-             'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36'
-         }
-         response = requests.post("https://p5.v50.ltd/api/chat-process",
-                                  json=payload, headers=headers, proxies=kwargs['proxy'] if 'proxy' in kwargs else {})
-
-         if "https://fk1.v50.ltd" not in response.text:
-             yield response.text
-
-     @classmethod
-     @property
-     def params(cls):
-         params = [
-             ("model", "str"),
-             ("messages", "list[dict[str, str]]"),
-             ("stream", "bool"),
-             ("temperature", "float"),
-             ("top_p", "int"),
-         ]
-         param = ", ".join([": ".join(p) for p in params])
-         return f"g4f.provider.{cls.__name__} supports: ({param})"
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/csvtoarray.d.ts DELETED
@@ -1,2 +0,0 @@
- import CSVToArray from './data/csvtoarray/CSVToArray';
- export default CSVToArray;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/custom/Custom.d.ts DELETED
@@ -1,48 +0,0 @@
- import Base from '../base/Base';
- import * as Geoms from '../../../plugins/gameobjects/shape/shapes/geoms';
-
- export default Custom;
-
- declare namespace Custom {
-
-     type NameTypes = string | string[] | number;
-
-     type Arc = Geoms.Arc;
-     type Circle = Geoms.Circle;
-     type Curve = Geoms.Curve;
-     type Ellipse = Geoms.Ellipse;
-     type Line = Geoms.Line;
-     type Lines = Geoms.Lines;
-     type Rectangle = Geoms.Rectangle;
-     type RoundRectangle = Geoms.RoundRectangle;
-     type Triangle = Geoms.Triangle;
-     type ShapeTypes = Arc | Circle | Curve | Ellipse |
-         Line | Lines | Rectangle | RoundRectangle | Triangle;
-
-     interface IConfig extends Base.IConfig {
-         create?: {
-             arc?: NameTypes,
-             circle?: NameTypes,
-             ellipse?: NameTypes,
-             line?: NameTypes,
-             lines?: NameTypes,
-             rectangle?: NameTypes,
-             triangle?: NameTypes,
-         } | ((this: Custom) => void);
-
-         update?: (this: Custom) => void;
-
-         type?: string,
-     }
-
- }
-
- declare class Custom extends Base {
-     constructor(
-         scene: Phaser.Scene,
-         config?: Custom.IConfig
-     )
-
-     getShape(name: string): Custom.ShapeTypes;
-     getShapes(): Custom.ShapeTypes[];
- }
spaces/AlexZou/Deploy_Restoration/net/block.py DELETED
@@ -1,477 +0,0 @@
- import torch.nn as nn
- import torch.nn.functional as F
- import torch as th
- import datetime
- import os
- import time
- import timeit
- import copy
- import numpy as np
- from torch.nn import ModuleList
- from torch.nn import Conv2d
- from torch.nn import LeakyReLU
-
-
- # PixelwiseNorm replaces BatchNorm
- class PixelwiseNorm(th.nn.Module):
-     def __init__(self):
-         super(PixelwiseNorm, self).__init__()
-
-     def forward(self, x, alpha=1e-8):
-         """
-         forward pass of the module
-         :param x: input activations volume
-         :param alpha: small number for numerical stability
-         :return: y => pixel normalized activations
-         """
-         y = x.pow(2.).mean(dim=1, keepdim=True).add(alpha).sqrt()  # [N1HW]
-         y = x / y  # normalize the input x volume
-         return y
-
-
- class MinibatchStdDev(th.nn.Module):
-     """
-     Minibatch standard deviation layer for the discriminator
-     """
-
-     def __init__(self):
-         """
-         derived class constructor
-         """
-         super().__init__()
-
-     def forward(self, x, alpha=1e-8):
-         """
-         forward pass of the layer
-         :param x: input activation volume
-         :param alpha: small number for numerical stability
-         :return: y => x appended with standard deviation constant map
-         """
-         batch_size, _, height, width = x.shape
-
-         # [B x C x H x W] Subtract mean over batch.
-         y = x - x.mean(dim=0, keepdim=True)
-
-         # [1 x C x H x W] Calc standard deviation over batch
-         y = th.sqrt(y.pow(2.).mean(dim=0, keepdim=False) + alpha)
-
-         # [1] Take average over feature_maps and pixels.
-         y = y.mean().view(1, 1, 1, 1)
-
-         # [B x 1 x H x W] Replicate over group and pixels.
-         y = y.repeat(batch_size, 1, height, width)
-
-         # [B x C x H x W] Append as new feature_map.
-         y = th.cat([x, y], 1)
-
-         # return the computed values:
-         return y
-
-
- # ==========================================================
- # Equalized learning rate blocks:
- # extending Conv2D and Deconv2D layers for equalized learning rate logic
- # ==========================================================
- class _equalized_conv2d(th.nn.Module):
-     """ conv2d with the concept of equalized learning rate
-     Args:
-         :param c_in: input channels
-         :param c_out: output channels
-         :param k_size: kernel size (h, w) should be a tuple or a single integer
-         :param stride: stride for conv
-         :param pad: padding
-         :param bias: whether to use bias or not
-     """
-
-     def __init__(self, c_in, c_out, k_size, stride=1, pad=0, bias=True):
-         """ constructor for the class """
-         from torch.nn.modules.utils import _pair
-         from numpy import sqrt, prod
-
-         super().__init__()
-
-         # define the weight and bias if to be used
-         self.weight = th.nn.Parameter(th.nn.init.normal_(
-             th.empty(c_out, c_in, *_pair(k_size))
-         ))
-
-         self.use_bias = bias
-         self.stride = stride
-         self.pad = pad
-
-         if self.use_bias:
-             self.bias = th.nn.Parameter(th.FloatTensor(c_out).fill_(0))
-
-         fan_in = prod(_pair(k_size)) * c_in  # value of fan_in
-         self.scale = sqrt(2) / sqrt(fan_in)
-
-     def forward(self, x):
-         """
-         forward pass of the network
-         :param x: input
-         :return: y => output
-         """
-         from torch.nn.functional import conv2d
-
-         return conv2d(input=x,
-                       weight=self.weight * self.scale,  # scale the weight on runtime
-                       bias=self.bias if self.use_bias else None,
-                       stride=self.stride,
-                       padding=self.pad)
-
-     def extra_repr(self):
-         return ", ".join(map(str, self.weight.shape))
-
-
- class _equalized_deconv2d(th.nn.Module):
-     """ Transpose convolution using the equalized learning rate
-     Args:
-         :param c_in: input channels
-         :param c_out: output channels
-         :param k_size: kernel size
-         :param stride: stride for convolution transpose
-         :param pad: padding
-         :param bias: whether to use bias or not
-     """
-
-     def __init__(self, c_in, c_out, k_size, stride=1, pad=0, bias=True):
-         """ constructor for the class """
-         from torch.nn.modules.utils import _pair
-         from numpy import sqrt
-
-         super().__init__()
-
-         # define the weight and bias if to be used
-         self.weight = th.nn.Parameter(th.nn.init.normal_(
-             th.empty(c_in, c_out, *_pair(k_size))
-         ))
-
-         self.use_bias = bias
-         self.stride = stride
-         self.pad = pad
-
-         if self.use_bias:
-             self.bias = th.nn.Parameter(th.FloatTensor(c_out).fill_(0))
-
-         fan_in = c_in  # value of fan_in for deconv
-         self.scale = sqrt(2) / sqrt(fan_in)
-
-     def forward(self, x):
-         """
-         forward pass of the layer
-         :param x: input
-         :return: y => output
-         """
-         from torch.nn.functional import conv_transpose2d
-
-         return conv_transpose2d(input=x,
-                                 weight=self.weight * self.scale,  # scale the weight on runtime
-                                 bias=self.bias if self.use_bias else None,
-                                 stride=self.stride,
-                                 padding=self.pad)
-
-     def extra_repr(self):
-         return ", ".join(map(str, self.weight.shape))
-
-
- # basic convolution block of the encoding part of the generator
- class conv_block(nn.Module):
-     """
-     Convolution Block
-     with two convolution layers
-     """
-     def __init__(self, in_ch, out_ch, use_eql=True):
-         super(conv_block, self).__init__()
-
-         if use_eql:
-             self.conv_1 = _equalized_conv2d(in_ch, out_ch, (1, 1),
-                                             pad=0, bias=True)
-             self.conv_2 = _equalized_conv2d(out_ch, out_ch, (3, 3),
-                                             pad=1, bias=True)
-             self.conv_3 = _equalized_conv2d(out_ch, out_ch, (3, 3),
-                                             pad=1, bias=True)
-
-         else:
-             self.conv_1 = Conv2d(in_ch, out_ch, (3, 3),
-                                  padding=1, bias=True)
-             self.conv_2 = Conv2d(out_ch, out_ch, (3, 3),
-                                  padding=1, bias=True)
-
-         # pixel_wise feature normalizer:
-         self.pixNorm = PixelwiseNorm()
-
-         # leaky_relu:
-         self.lrelu = LeakyReLU(0.2)
-
-     def forward(self, x):
-         """
-         forward pass of the block
-         :param x: input
-         :return: y => output
-         """
-         from torch.nn.functional import interpolate
-
-         # y = interpolate(x, scale_factor=2)
-         y = self.conv_1(self.lrelu(self.pixNorm(x)))
-         residual = y
-         y = self.conv_2(self.lrelu(self.pixNorm(y)))
-         y = self.conv_3(self.lrelu(self.pixNorm(y)))
-         y = y + residual
-
-         return y
-
-
- # basic up-convolution block of the encoding part of the generator
- class up_conv(nn.Module):
-     """
-     Up Convolution Block
-     """
-     def __init__(self, in_ch, out_ch, use_eql=True):
-         super(up_conv, self).__init__()
-         if use_eql:
-             self.conv_1 = _equalized_conv2d(in_ch, out_ch, (1, 1),
-                                             pad=0, bias=True)
-             self.conv_2 = _equalized_conv2d(out_ch, out_ch, (3, 3),
-                                             pad=1, bias=True)
-             self.conv_3 = _equalized_conv2d(out_ch, out_ch, (3, 3),
-                                             pad=1, bias=True)
-
-         else:
-             self.conv_1 = Conv2d(in_ch, out_ch, (3, 3),
-                                  padding=1, bias=True)
-             self.conv_2 = Conv2d(out_ch, out_ch, (3, 3),
-                                  padding=1, bias=True)
-
-         # pixel_wise feature normalizer:
-         self.pixNorm = PixelwiseNorm()
-
-         # leaky_relu:
-         self.lrelu = LeakyReLU(0.2)
-
-     def forward(self, x):
-         """
-         forward pass of the block
-         :param x: input
-         :return: y => output
-         """
-         from torch.nn.functional import interpolate
-
-         x = interpolate(x, scale_factor=2, mode="bilinear")
-         y = self.conv_1(self.lrelu(self.pixNorm(x)))
-         residual = y
-         y = self.conv_2(self.lrelu(self.pixNorm(y)))
-         y = self.conv_3(self.lrelu(self.pixNorm(y)))
-         y = y + residual
-
-         return y
-
-
- # final layer of the discriminator
- class DisFinalBlock(th.nn.Module):
-     """ Final block for the Discriminator """
-
-     def __init__(self, in_channels, use_eql=True):
-         """
-         constructor of the class
-         :param in_channels: number of input channels
-         :param use_eql: whether to use equalized learning rate
-         """
-         from torch.nn import LeakyReLU
-         from torch.nn import Conv2d
-
-         super().__init__()
-
-         # declare the required modules for forward pass
-         self.batch_discriminator = MinibatchStdDev()
-
-         if use_eql:
-             self.conv_1 = _equalized_conv2d(in_channels + 1, in_channels, (3, 3),
-                                             pad=1, bias=True)
-             self.conv_2 = _equalized_conv2d(in_channels, in_channels, (4, 4), stride=2, pad=1,
-                                             bias=True)
-
-             # final layer emulates the fully connected layer
-             self.conv_3 = _equalized_conv2d(in_channels, 1, (1, 1), bias=True)
-
-         else:
-             # modules required:
-             self.conv_1 = Conv2d(in_channels + 1, in_channels, (3, 3), padding=1, bias=True)
-             self.conv_2 = Conv2d(in_channels, in_channels, (4, 4), bias=True)
-
-             # final conv layer emulates a fully connected layer
-             self.conv_3 = Conv2d(in_channels, 1, (1, 1), bias=True)
-
-         # leaky_relu:
-         self.lrelu = LeakyReLU(0.2)
-
-     def forward(self, x):
-         """
-         forward pass of the FinalBlock
-         :param x: input
-         :return: y => output
-         """
-         # minibatch_std_dev layer
-         y = self.batch_discriminator(x)
-
-         # define the computations
-         y = self.lrelu(self.conv_1(y))
-         y = self.lrelu(self.conv_2(y))
-
-         # fully connected layer
-         y = self.conv_3(y)  # This layer has linear activation
-
-         # flatten the output raw discriminator scores
-         return y
-
-
- # basic convolution block of the discriminator
- class DisGeneralConvBlock(th.nn.Module):
-     """ General block in the discriminator """
-
-     def __init__(self, in_channels, out_channels, use_eql=True):
-         """
-         constructor of the class
-         :param in_channels: number of input channels
-         :param out_channels: number of output channels
-         :param use_eql: whether to use equalized learning rate
-         """
-         from torch.nn import AvgPool2d, LeakyReLU
-         from torch.nn import Conv2d
-
-         super().__init__()
-
-         if use_eql:
-             self.conv_1 = _equalized_conv2d(in_channels, in_channels, (3, 3),
-                                             pad=1, bias=True)
-             self.conv_2 = _equalized_conv2d(in_channels, out_channels, (3, 3),
-                                             pad=1, bias=True)
-         else:
-             # convolutional modules
-             self.conv_1 = Conv2d(in_channels, in_channels, (3, 3),
-                                  padding=1, bias=True)
-             self.conv_2 = Conv2d(in_channels, out_channels, (3, 3),
-                                  padding=1, bias=True)
-
-         self.downSampler = AvgPool2d(2)  # downsampler
-
-         # leaky_relu:
-         self.lrelu = LeakyReLU(0.2)
-
-     def forward(self, x):
-         """
-         forward pass of the module
-         :param x: input
-         :return: y => output
-         """
-         # define the computations
-         y = self.lrelu(self.conv_1(x))
-         y = self.lrelu(self.conv_2(y))
-         y = self.downSampler(y)
-
-         return y
-
-
- class from_rgb(nn.Module):
-     """
-     The RGB image is transformed into a multi-channel feature map to be concatenated with
-     the feature map with the same number of channels in the network
-     """
-     def __init__(self, outchannels, use_eql=True):
-         super(from_rgb, self).__init__()
-         if use_eql:
-             self.conv_1 = _equalized_conv2d(3, outchannels, (1, 1), bias=True)
-         else:
-             self.conv_1 = nn.Conv2d(3, outchannels, (1, 1), bias=True)
-         # pixel_wise feature normalizer:
-         self.pixNorm = PixelwiseNorm()
-
-         # leaky_relu:
-         self.lrelu = LeakyReLU(0.2)
-
-     def forward(self, x):
-         """
-         forward pass of the block
-         :param x: input
-         :return: y => output
-         """
-         y = self.pixNorm(self.lrelu(self.conv_1(x)))
-         return y
-
- class to_rgb(nn.Module):
-     """
-     The multi-channel feature map is converted into an RGB image for input to the discriminator
-     """
-     def __init__(self, inchannels, use_eql=True):
-         super(to_rgb, self).__init__()
-         if use_eql:
-             self.conv_1 = _equalized_conv2d(inchannels, 3, (1, 1), bias=True)
-         else:
-             self.conv_1 = nn.Conv2d(inchannels, 3, (1, 1), bias=True)
-
-     def forward(self, x):
-         """
-         forward pass of the block
-         :param x: input
-         :return: y => output
-         """
-         y = self.conv_1(x)
-
-         return y
-
- class Flatten(nn.Module):
-     def forward(self, x):
-         return x.view(x.size(0), -1)
-
-
- class CCA(nn.Module):
-     """
-     CCA Block
-     """
-     def __init__(self, F_g, F_x):
-         super().__init__()
-         self.mlp_x = nn.Sequential(
-             Flatten(),
-             nn.Linear(F_x, F_x))
-         self.mlp_g = nn.Sequential(
-             Flatten(),
-             nn.Linear(F_g, F_x))
-         self.relu = nn.ReLU(inplace=True)
-
-     def forward(self, g, x):
-         # channel-wise attention
-         avg_pool_x = F.avg_pool2d(x, (x.size(2), x.size(3)), stride=(x.size(2), x.size(3)))
-         channel_att_x = self.mlp_x(avg_pool_x)
-         avg_pool_g = F.avg_pool2d(g, (g.size(2), g.size(3)), stride=(g.size(2), g.size(3)))
-         channel_att_g = self.mlp_g(avg_pool_g)
-         channel_att_sum = (channel_att_x + channel_att_g) / 2.0
-         scale = th.sigmoid(channel_att_sum).unsqueeze(2).unsqueeze(3).expand_as(x)
-         x_after_channel = x * scale
-         out = self.relu(x_after_channel)
-         return out
spaces/Ameaou/academic-chatgpt3.1/crazy_functions/Latex全文润色.py DELETED
@@ -1,175 +0,0 @@
- from toolbox import update_ui
- from toolbox import CatchException, report_execption, write_results_to_file
- fast_debug = False
- 
- class PaperFileGroup():
-     def __init__(self):
-         self.file_paths = []
-         self.file_contents = []
-         self.sp_file_contents = []
-         self.sp_file_index = []
-         self.sp_file_tag = []
- 
-         # count_token
-         from request_llm.bridge_all import model_info
-         enc = model_info["gpt-3.5-turbo"]['tokenizer']
-         def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
-         self.get_token_num = get_token_num
- 
-     def run_file_split(self, max_token_limit=1900):
-         """
-         Split long texts into segments that fit within the token limit
-         """
-         for index, file_content in enumerate(self.file_contents):
-             if self.get_token_num(file_content) < max_token_limit:
-                 self.sp_file_contents.append(file_content)
-                 self.sp_file_index.append(index)
-                 self.sp_file_tag.append(self.file_paths[index])
-             else:
-                 from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
-                 segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit)
-                 for j, segment in enumerate(segments):
-                     self.sp_file_contents.append(segment)
-                     self.sp_file_index.append(index)
-                     self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex")
- 
-         print('Segmentation: done')
- 
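`breakdown_txt_to_satisfy_token_limit_for_pdf` is project-specific, but the idea `run_file_split` relies on can be sketched as a greedy line-packer. This is a hypothetical stand-in, using a whitespace word count as a toy token counter (the real code uses the tiktoken tokenizer):

```python
def split_by_token_limit(text, get_token_num, max_token_limit):
    # greedily pack whole lines into segments that stay under the limit
    segments, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and get_token_num(current + line) >= max_token_limit:
            segments.append(current)
            current = ""
        current += line
    if current:
        segments.append(current)
    return segments

toy_tokens = lambda t: len(t.split())
parts = split_by_token_limit("a b\nc d\ne f\n", toy_tokens, max_token_limit=4)
```

Joining the segments reproduces the input exactly, and each segment stays under the limit, which is the invariant the real splitter must also preserve.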
- def 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'):
-     import time, os, re
-     from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
- 
-     # <-------- read the LaTeX files and strip all comments ---------->
-     pfg = PaperFileGroup()
- 
-     for index, fp in enumerate(file_manifest):
-         with open(fp, 'r', encoding='utf-8', errors='replace') as f:
-             file_content = f.read()
-             # regular expression matching LaTeX comments
-             comment_pattern = r'%.*'
-             # find comments with the regex and replace them with empty strings
-             clean_tex_content = re.sub(comment_pattern, '', file_content)
-             # keep the comment-free text
-             pfg.file_paths.append(fp)
-             pfg.file_contents.append(clean_tex_content)
- 
-     # <-------- split LaTeX files that are too long ---------->
-     pfg.run_file_split(max_token_limit=1024)
-     n_split = len(pfg.sp_file_contents)
- 
-     # <-------- abstract extraction (disabled) ---------->
-     # if language == 'en':
-     #     abs_extract_inputs = f"Please write an abstract for this paper"
- 
-     #     # single thread: fetch the paper's meta information
-     #     paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
-     #         inputs=abs_extract_inputs,
-     #         inputs_show_user=f"正在抽取摘要信息。",
-     #         llm_kwargs=llm_kwargs,
-     #         chatbot=chatbot, history=[],
-     #         sys_prompt="Your job is to collect information from materials.",
-     #     )
- 
-     # <-------- start multi-threaded polishing ---------->
-     if language == 'en':
-         inputs_array = ["Below is a section from an academic paper, polish this section to meet the academic standard, improve the grammar, clarity and overall readability, do not modify any latex command such as \section, \cite and equations:" +
-                         f"\n\n{frag}" for frag in pfg.sp_file_contents]
-         inputs_show_user_array = [f"Polish {f}" for f in pfg.sp_file_tag]
-         sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
-     elif language == 'zh':
-         inputs_array = [f"以下是一篇学术论文中的一段内容,请将此部分润色以满足学术标准,提高语法、清晰度和整体可读性,不要修改任何LaTeX命令,例如\section,\cite和方程式:" +
-                         f"\n\n{frag}" for frag in pfg.sp_file_contents]
-         inputs_show_user_array = [f"润色 {f}" for f in pfg.sp_file_tag]
-         sys_prompt_array = ["你是一位专业的中文学术论文作家。" for _ in range(n_split)]
- 
-     gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
-         inputs_array=inputs_array,
-         inputs_show_user_array=inputs_show_user_array,
-         llm_kwargs=llm_kwargs,
-         chatbot=chatbot,
-         history_array=[[""] for _ in range(n_split)],
-         sys_prompt_array=sys_prompt_array,
-         # max_workers=5,  # cap on parallel tasks: at most 5 run at once, the rest queue
-         scroller_max_len=80
-     )
- 
-     # <-------- collect the results and exit ---------->
-     create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md"
-     res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name)
-     history = gpt_response_collection
-     chatbot.append((f"{fp}完成了吗?", res))
-     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
- 
- 
- @CatchException
- def Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-     # basic info: feature description and contributor
-     chatbot.append([
-         "函数插件功能?",
-         "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"])
-     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
- 
-     # try to import dependencies; if any are missing, suggest how to install them
-     try:
-         import tiktoken
-     except:
-         report_execption(chatbot, history,
-                          a=f"解析项目: {txt}",
-                          b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-     history = []  # clear the history to avoid overflowing the input
-     import glob, os
-     if os.path.exists(txt):
-         project_folder = txt
-     else:
-         if txt == "": txt = '空空如也的输入栏'
-         report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
-     if len(file_manifest) == 0:
-         report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-     yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en')
- 
- 
- @CatchException
- def Latex中文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-     # basic info: feature description and contributor
-     chatbot.append([
-         "函数插件功能?",
-         "对整个Latex项目进行润色。函数插件贡献者: Binary-Husky"])
-     yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
- 
-     # try to import dependencies; if any are missing, suggest how to install them
-     try:
-         import tiktoken
-     except:
-         report_execption(chatbot, history,
-                          a=f"解析项目: {txt}",
-                          b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-     history = []  # clear the history to avoid overflowing the input
-     import glob, os
-     if os.path.exists(txt):
-         project_folder = txt
-     else:
-         if txt == "": txt = '空空如也的输入栏'
-         report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-     file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
-     if len(file_manifest) == 0:
-         report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
-         yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-         return
-     yield from 多文件润色(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh')
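One subtlety in the comment-stripping step above: the pattern `r'%.*'` has no lookbehind, so it also truncates lines at an escaped `\%`, which is a literal percent sign in LaTeX. A quick demonstration, with a negative-lookbehind variant as one possible fix (the variant is a suggestion, not what the plugin uses):

```python
import re

comment_pattern = r'%.*'

clean = re.sub(comment_pattern, '', 'E = mc^2  % Einstein')
# escaped percent signs are clobbered too, since the pattern has no lookbehind
lossy = re.sub(comment_pattern, '', r'a 50\% duty cycle')
# a possible refinement: only strip % that is not preceded by a backslash
safer = re.sub(r'(?<!\\)%.*', '', r'a 50\% duty cycle')
```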
spaces/Andy1621/uniformer_image_detection/configs/ld/ld_r34_gflv1_r101_fpn_coco_1x.py DELETED
@@ -1,19 +0,0 @@
- _base_ = ['./ld_r18_gflv1_r101_fpn_coco_1x.py']
- model = dict(
-     pretrained='torchvision://resnet34',
-     backbone=dict(
-         type='ResNet',
-         depth=34,
-         num_stages=4,
-         out_indices=(0, 1, 2, 3),
-         frozen_stages=1,
-         norm_cfg=dict(type='BN', requires_grad=True),
-         norm_eval=True,
-         style='pytorch'),
-     neck=dict(
-         type='FPN',
-         in_channels=[64, 128, 256, 512],
-         out_channels=256,
-         start_level=1,
-         add_extra_convs='on_output',
-         num_outs=5))
spaces/Anilegna/Colour-Personallity/app.py DELETED
@@ -1,172 +0,0 @@
- ### ----------------------------- ###
- ### libraries ###
- ### ----------------------------- ###
- 
- import gradio as gr
- import pandas as pd
- import numpy as np
- from sklearn.model_selection import train_test_split
- from sklearn.linear_model import LogisticRegression
- from sklearn import metrics
- 
- 
- ### ------------------------------ ###
- ### data transformation ###
- ### ------------------------------ ###
- 
- # load dataset
- uncleaned_data = pd.read_csv('data.csv')
- 
- # remove timestamp from dataset (always first column)
- uncleaned_data = uncleaned_data.iloc[:, 1:]
- data = pd.DataFrame()
- 
- # keep track of which columns are categorical and what
- # those columns' value mappings are
- # structure: {colname1: {...}, colname2: {...} }
- cat_value_dicts = {}
- final_colname = uncleaned_data.columns[len(uncleaned_data.columns) - 1]
- 
- # for each column...
- for (colname, colval) in uncleaned_data.iteritems():  # pandas < 2.0; use .items() on newer pandas
- 
-     # check if col is already a number; if so, add col directly
-     # to new dataframe and skip to next column
-     if isinstance(colval.values[0], (np.integer, float)):
-         data[colname] = uncleaned_data[colname].copy()
-         continue
- 
-     # structure: {0: "lilac", 1: "blue", ...}
-     new_dict = {}
-     val = 0  # first index per column
-     transformed_col_vals = []  # new numeric datapoints
- 
-     # if not, for each item in that column...
-     for (row, item) in enumerate(colval.values):
- 
-         # if item is not in this col's dict...
-         if item not in new_dict:
-             new_dict[item] = val
-             val += 1
- 
-         # then add numerical value to transformed dataframe
-         transformed_col_vals.append(new_dict[item])
- 
-     # reverse dictionary only for final col (0, 1) => (vals)
-     if colname == final_colname:
-         new_dict = {value: key for (key, value) in new_dict.items()}
- 
-     cat_value_dicts[colname] = new_dict
-     data[colname] = transformed_col_vals
- 
- 
- ### -------------------------------- ###
- ### model training ###
- ### -------------------------------- ###
- 
- # select features and prediction; automatically selects last column as prediction
- cols = len(data.columns)
- num_features = cols - 1
- x = data.iloc[:, :num_features]
- y = data.iloc[:, num_features:]
- 
- # split data into training and testing sets
- x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)
- 
- # instantiate the model (using default parameters)
- model = LogisticRegression()
- model.fit(x_train, y_train.values.ravel())
- y_pred = model.predict(x_test)
- 
- 
- ### -------------------------------- ###
- ### article generation ###
- ### -------------------------------- ###
- # borrow file reading function from reader.py
- 
- def get_feat():
-     feats = [abs(x) for x in model.coef_[0]]
-     max_val = max(feats)
-     idx = feats.index(max_val)
-     return data.columns[idx]
- 
- acc = str(round(metrics.accuracy_score(y_test, y_pred) * 100, 1)) + "%"
- most_imp_feat = get_feat()
- # info = get_article(acc, most_imp_feat)
- 
- 
- ### ------------------------------- ###
- ### interface creation ###
- ### ------------------------------- ###
- 
- 
- # predictor for generic number of features
- def general_predictor(*args):
-     features = []
- 
-     # transform categorical input
-     for colname, arg in zip(data.columns, args):
-         if (colname in cat_value_dicts):
-             features.append(cat_value_dicts[colname][arg])
-         else:
-             features.append(arg)
- 
-     # predict single datapoint
-     new_input = [features]
-     result = model.predict(new_input)
-     return cat_value_dicts[final_colname][result[0]]
- 
- # add data labels to replace those lost via star-args
- 
- 
- block = gr.Blocks()
- 
- with open('info.md') as f:
-     with block:
-         gr.Markdown(f.readline())
-         gr.Markdown('Take the quiz to get a personalized recommendation using AI.')
- 
-         with gr.Row():
-             with gr.Box():
-                 inputls = []
-                 for colname in data.columns:
-                     # skip last column
-                     if colname == final_colname:
-                         continue
- 
-                     # access categories dict if data is categorical
-                     # otherwise, just use a number input
-                     if colname in cat_value_dicts:
-                         radio_options = list(cat_value_dicts[colname].keys())
-                         inputls.append(gr.inputs.Dropdown(choices=radio_options, type="value", label=colname))
-                     else:
-                         # add numerical input
-                         inputls.append(gr.inputs.Number(label=colname))
-                 gr.Markdown("<br />")
- 
-         submit = gr.Button("Click to see your personalized result!", variant="primary")
-         gr.Markdown("<br />")
-         output = gr.Textbox(label="Your recommendation:", placeholder="your recommendation will appear here")
- 
-         submit.click(fn=general_predictor, inputs=inputls, outputs=output)
-         gr.Markdown("<br />")
- 
-         with gr.Row():
-             with gr.Box():
-                 gr.Markdown(f"<h3>Accuracy: </h3>{acc}")
-             with gr.Box():
-                 gr.Markdown(f"<h3>Most important feature: </h3>{most_imp_feat}")
- 
-         gr.Markdown("<br />")
- 
-         with gr.Box():
-             gr.Markdown('''⭐ Note that model accuracy is based on the uploaded data.csv and reflects how well the AI model can give correct recommendations for <em>that dataset</em>. Model accuracy and most important feature can be helpful for understanding how the model works, but <em>should not be considered absolute facts about the real world</em>.''')
- 
-         with gr.Box():
-             with open('info.md') as f:
-                 f.readline()
-                 gr.Markdown(f.read())
- 
- # show the interface
- block.launch()
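The categorical encoding in the data-transformation loop above boils down to a first-seen-order integer mapping, with the final column's dictionary reversed so predictions can be mapped back to labels. A self-contained sketch of that logic (function name is illustrative):

```python
def encode_column(values):
    # map each unseen category to the next integer, preserving first-seen order
    mapping, encoded = {}, []
    for item in values:
        if item not in mapping:
            mapping[item] = len(mapping)
        encoded.append(mapping[item])
    return mapping, encoded

mapping, encoded = encode_column(["lilac", "blue", "lilac", "teal"])
# reverse the dict for the prediction column: (int) => (label)
reverse = {v: k for k, v in mapping.items()}
```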
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/utils.py DELETED
@@ -1,132 +0,0 @@
- import os
- import re
- from datetime import datetime
- from pathlib import Path
- 
- from modules import github, shared
- from modules.logging_colors import logger
- 
- 
- # Helper function to get multiple values from shared.gradio
- def gradio(*keys):
-     if len(keys) == 1 and type(keys[0]) in [list, tuple]:
-         keys = keys[0]
- 
-     return [shared.gradio[k] for k in keys]
- 
- 
- def save_file(fname, contents):
-     if fname == '':
-         logger.error('File name is empty!')
-         return
- 
-     root_folder = Path(__file__).resolve().parent.parent
-     abs_path = Path(fname).resolve()
-     # NOTE: relative_to() raises ValueError when abs_path lies outside
-     # root_folder, so the '..' check below never actually fires
-     rel_path = abs_path.relative_to(root_folder)
-     if rel_path.parts[0] == '..':
-         logger.error(f'Invalid file path: {fname}')
-         return
- 
-     with open(abs_path, 'w', encoding='utf-8') as f:
-         f.write(contents)
- 
-     logger.info(f'Saved {abs_path}.')
- 
- 
- def delete_file(fname):
-     if fname == '':
-         logger.error('File name is empty!')
-         return
- 
-     root_folder = Path(__file__).resolve().parent.parent
-     abs_path = Path(fname).resolve()
-     # NOTE: same caveat as in save_file: relative_to() raises instead of
-     # returning a '..' component for out-of-tree paths
-     rel_path = abs_path.relative_to(root_folder)
-     if rel_path.parts[0] == '..':
-         logger.error(f'Invalid file path: {fname}')
-         return
- 
-     if abs_path.exists():
-         abs_path.unlink()
-         logger.info(f'Deleted {fname}.')
- 
- 
- def current_time():
-     return f"{datetime.now().strftime('%Y-%m-%d-%H%M%S')}"
- 
- 
- def atoi(text):
-     return int(text) if text.isdigit() else text.lower()
- 
- 
- # Replace multiple string pairs in a string
- def replace_all(text, dic):
-     for i, j in dic.items():
-         text = text.replace(i, j)
- 
-     return text
- 
- 
- def natural_keys(text):
-     return [atoi(c) for c in re.split(r'(\d+)', text)]
- 
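`natural_keys` splits digit runs out of a string and compares them numerically, so "model-10" sorts after "model-2" instead of before it. A quick demonstration using the same two helpers:

```python
import re

def atoi(text):
    return int(text) if text.isdigit() else text.lower()

def natural_keys(text):
    # split on digit runs; re.split with a capture group keeps the digits
    return [atoi(c) for c in re.split(r'(\d+)', text)]

names = ["model-10", "model-2", "Model-1"]
ordered = sorted(names, key=natural_keys)
```

Plain lexicographic sorting would put "model-10" before "model-2"; the natural key fixes that and also makes the comparison case-insensitive.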
- 
- def get_available_models():
-     model_list = []
-     for item in list(Path(f'{shared.args.model_dir}/').glob('*')):
-         if not item.name.endswith(('.txt', '-np', '.pt', '.json', '.yaml', '.py')) and 'llama-tokenizer' not in item.name:
-             model_list.append(re.sub('.pth$', '', item.name))
- 
-     return sorted(model_list, key=natural_keys)
- 
- 
- def get_available_presets():
-     return sorted(set((k.stem for k in Path('presets').glob('*.yaml'))), key=natural_keys)
- 
- 
- def get_available_prompts():
-     prompts = []
-     files = set((k.stem for k in Path('prompts').glob('*.txt')))
-     prompts += sorted([k for k in files if re.match('^[0-9]', k)], key=natural_keys, reverse=True)
-     prompts += sorted([k for k in files if re.match('^[^0-9]', k)], key=natural_keys)
-     prompts += ['None']
-     return prompts
- 
- 
- def get_available_characters():
-     paths = (x for x in Path('characters').iterdir() if x.suffix in ('.json', '.yaml', '.yml'))
-     return sorted(set((k.stem for k in paths)), key=natural_keys)
- 
- 
- def get_available_instruction_templates():
-     path = "instruction-templates"
-     paths = []
-     if os.path.exists(path):
-         paths = (x for x in Path(path).iterdir() if x.suffix in ('.json', '.yaml', '.yml'))
- 
-     return ['None'] + sorted(set((k.stem for k in paths)), key=natural_keys)
- 
- 
- def get_available_extensions():
-     extensions = sorted(set(map(lambda x: x.parts[1], Path('extensions').glob('*/script.py'))), key=natural_keys)
-     extensions = [v for v in extensions if v not in github.new_extensions]
-     return extensions
- 
- 
- def get_available_loras():
-     return sorted([item.name for item in list(Path(shared.args.lora_dir).glob('*')) if not item.name.endswith(('.txt', '-np', '.pt', '.json'))], key=natural_keys)
- 
- 
- def get_datasets(path: str, ext: str):
-     # include subdirectories for raw txt files to allow training from a subdirectory of txt files
-     if ext == "txt":
-         return ['None'] + sorted(set([k.stem for k in list(Path(path).glob('txt')) + list(Path(path).glob('*/')) if k.stem != 'put-trainer-datasets-here']), key=natural_keys)
- 
-     return ['None'] + sorted(set([k.stem for k in Path(path).glob(f'*.{ext}') if k.stem != 'put-trainer-datasets-here']), key=natural_keys)
- 
- 
- def get_available_chat_styles():
-     return sorted(set(('-'.join(k.stem.split('-')[1:]) for k in Path('css').glob('chat_style*.css'))), key=natural_keys)
- 
- 
- def get_available_grammars():
-     return ['None'] + sorted([item.name for item in list(Path('grammars').glob('*.gbnf'))], key=natural_keys)
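The path-containment pattern used in `save_file`/`delete_file` above expects `Path.relative_to` to produce a leading `..` for out-of-tree paths, but it actually raises `ValueError` in that case. A hedged sketch of a containment check that handles this (the helper name is illustrative):

```python
from pathlib import Path

def is_inside(root, candidate):
    # Path.relative_to raises ValueError when candidate is not under root,
    # so a `parts[0] == '..'` test never fires; catch the exception instead
    try:
        Path(candidate).resolve().relative_to(Path(root).resolve())
        return True
    except ValueError:
        return False
```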
spaces/Anonymous-sub/Rerender/ControlNet/annotator/openpose/hand.py DELETED
@@ -1,86 +0,0 @@
- import cv2
- import json
- import numpy as np
- import math
- import time
- from scipy.ndimage.filters import gaussian_filter
- import matplotlib.pyplot as plt
- import matplotlib
- import torch
- from skimage.measure import label
- 
- from .model import handpose_model
- from . import util
- 
- 
- class Hand(object):
-     def __init__(self, model_path):
-         self.model = handpose_model()
-         if torch.cuda.is_available():
-             self.model = self.model.cuda()
-             print('cuda')
-         model_dict = util.transfer(self.model, torch.load(model_path))
-         self.model.load_state_dict(model_dict)
-         self.model.eval()
- 
-     def __call__(self, oriImg):
-         scale_search = [0.5, 1.0, 1.5, 2.0]
-         # scale_search = [0.5]
-         boxsize = 368
-         stride = 8
-         padValue = 128
-         thre = 0.05
-         multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search]
-         heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 22))
-         # paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38))
- 
-         for m in range(len(multiplier)):
-             scale = multiplier[m]
-             imageToTest = cv2.resize(oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
-             imageToTest_padded, pad = util.padRightDownCorner(imageToTest, stride, padValue)
-             im = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5
-             im = np.ascontiguousarray(im)
- 
-             data = torch.from_numpy(im).float()
-             if torch.cuda.is_available():
-                 data = data.cuda()
-             # data = data.permute([2, 0, 1]).unsqueeze(0).float()
-             with torch.no_grad():
-                 output = self.model(data).cpu().numpy()
-                 # output = self.model(data).numpy()
- 
-             # extract outputs, resize, and remove padding
-             heatmap = np.transpose(np.squeeze(output), (1, 2, 0))  # output 1 is heatmaps
-             heatmap = cv2.resize(heatmap, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC)
-             heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
-             heatmap = cv2.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)
- 
-             heatmap_avg += heatmap / len(multiplier)
- 
-         all_peaks = []
-         for part in range(21):
-             map_ori = heatmap_avg[:, :, part]
-             one_heatmap = gaussian_filter(map_ori, sigma=3)
-             binary = np.ascontiguousarray(one_heatmap > thre, dtype=np.uint8)
-             # every value is below the threshold
-             if np.sum(binary) == 0:
-                 all_peaks.append([0, 0])
-                 continue
-             label_img, label_numbers = label(binary, return_num=True, connectivity=binary.ndim)
-             max_index = np.argmax([np.sum(map_ori[label_img == i]) for i in range(1, label_numbers + 1)]) + 1
-             label_img[label_img != max_index] = 0
-             map_ori[label_img == 0] = 0
- 
-             y, x = util.npmax(map_ori)
-             all_peaks.append([x, y])
-         return np.array(all_peaks)
- 
- 
- if __name__ == "__main__":
-     hand_estimation = Hand('../model/hand_pose_model.pth')
- 
-     test_image = '../images/hand.jpg'
-     oriImg = cv2.imread(test_image)  # B,G,R order
-     peaks = hand_estimation(oriImg)
-     canvas = util.draw_handpose(oriImg, peaks, True)
-     cv2.imshow('', canvas)
-     cv2.waitKey(0)
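The last step of `Hand.__call__` finds the `(x, y)` coordinates of the maximum in each keypoint heatmap via `util.npmax`. A dependency-free sketch of that lookup on a plain nested list, keeping the same `[x, y]` ordering the class appends (the function name is illustrative):

```python
def peak_xy(heatmap):
    # return (x, y) of the largest value in a 2-D heatmap,
    # where y indexes rows and x indexes columns
    best, bx, by = float("-inf"), 0, 0
    for y, row in enumerate(heatmap):
        for x, v in enumerate(row):
            if v > best:
                best, bx, by = v, x, y
    return bx, by

hm = [[0.1, 0.2],
      [0.9, 0.3]]
peak = peak_xy(hm)
```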
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/utils/logger.py DELETED
@@ -1,27 +0,0 @@
- import logging
- 
- from annotator.uniformer.mmcv.utils import get_logger
- 
- 
- def get_root_logger(log_file=None, log_level=logging.INFO):
-     """Get the root logger.
- 
-     The logger will be initialized if it has not been initialized. By default a
-     StreamHandler will be added. If `log_file` is specified, a FileHandler will
-     also be added. The name of the root logger is the top-level package name,
-     e.g., "mmseg".
- 
-     Args:
-         log_file (str | None): The log filename. If specified, a FileHandler
-             will be added to the root logger.
-         log_level (int): The root logger level. Note that only the process of
-             rank 0 is affected, while other processes will set the level to
-             "Error" and be silent most of the time.
- 
-     Returns:
-         logging.Logger: The root logger.
-     """
- 
-     logger = get_logger(name='mmseg', log_file=log_file, log_level=log_level)
- 
-     return logger
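mmcv's `get_logger` essentially memoizes a named `logging.Logger` with a StreamHandler and an optional FileHandler. A stdlib-only approximation of that pattern, offered as a sketch rather than mmcv's actual implementation:

```python
import logging

def make_logger(name="mmseg", log_file=None, log_level=logging.INFO):
    # attach handlers only once; repeated calls return the same logger
    logger = logging.getLogger(name)
    if not logger.handlers:
        logger.addHandler(logging.StreamHandler())
        if log_file is not None:
            logger.addHandler(logging.FileHandler(log_file))
    logger.setLevel(log_level)
    return logger

lg = make_logger("mmseg_demo")
```

Because `logging.getLogger` caches by name, calling `make_logger("mmseg_demo")` again yields the very same object, which is the "initialized if it has not been initialized" behavior the docstring above describes.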
spaces/AsakuraMizu/moe-tts/attentions.py DELETED
@@ -1,300 +0,0 @@
- import math
- import torch
- from torch import nn
- from torch.nn import functional as F
- 
- import commons
- from modules import LayerNorm
- 
- 
- class Encoder(nn.Module):
-   def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
-     super().__init__()
-     self.hidden_channels = hidden_channels
-     self.filter_channels = filter_channels
-     self.n_heads = n_heads
-     self.n_layers = n_layers
-     self.kernel_size = kernel_size
-     self.p_dropout = p_dropout
-     self.window_size = window_size
- 
-     self.drop = nn.Dropout(p_dropout)
-     self.attn_layers = nn.ModuleList()
-     self.norm_layers_1 = nn.ModuleList()
-     self.ffn_layers = nn.ModuleList()
-     self.norm_layers_2 = nn.ModuleList()
-     for i in range(self.n_layers):
-       self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
-       self.norm_layers_1.append(LayerNorm(hidden_channels))
-       self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
-       self.norm_layers_2.append(LayerNorm(hidden_channels))
- 
-   def forward(self, x, x_mask):
-     attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
-     x = x * x_mask
-     for i in range(self.n_layers):
-       y = self.attn_layers[i](x, x, attn_mask)
-       y = self.drop(y)
-       x = self.norm_layers_1[i](x + y)
- 
-       y = self.ffn_layers[i](x, x_mask)
-       y = self.drop(y)
-       x = self.norm_layers_2[i](x + y)
-     x = x * x_mask
-     return x
- 
- 
- class Decoder(nn.Module):
-   def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
-     super().__init__()
-     self.hidden_channels = hidden_channels
-     self.filter_channels = filter_channels
-     self.n_heads = n_heads
-     self.n_layers = n_layers
-     self.kernel_size = kernel_size
-     self.p_dropout = p_dropout
-     self.proximal_bias = proximal_bias
-     self.proximal_init = proximal_init
- 
-     self.drop = nn.Dropout(p_dropout)
-     self.self_attn_layers = nn.ModuleList()
-     self.norm_layers_0 = nn.ModuleList()
-     self.encdec_attn_layers = nn.ModuleList()
-     self.norm_layers_1 = nn.ModuleList()
-     self.ffn_layers = nn.ModuleList()
-     self.norm_layers_2 = nn.ModuleList()
-     for i in range(self.n_layers):
-       self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
-       self.norm_layers_0.append(LayerNorm(hidden_channels))
-       self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
-       self.norm_layers_1.append(LayerNorm(hidden_channels))
-       self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
-       self.norm_layers_2.append(LayerNorm(hidden_channels))
- 
-   def forward(self, x, x_mask, h, h_mask):
-     """
-     x: decoder input
-     h: encoder output
-     """
-     self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
-     encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
-     x = x * x_mask
-     for i in range(self.n_layers):
-       y = self.self_attn_layers[i](x, x, self_attn_mask)
-       y = self.drop(y)
-       x = self.norm_layers_0[i](x + y)
- 
-       y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
-       y = self.drop(y)
-       x = self.norm_layers_1[i](x + y)
- 
-       y = self.ffn_layers[i](x, x_mask)
-       y = self.drop(y)
-       x = self.norm_layers_2[i](x + y)
-     x = x * x_mask
-     return x
- 
- 
- class MultiHeadAttention(nn.Module):
-   def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
-     super().__init__()
-     assert channels % n_heads == 0
- 
-     self.channels = channels
-     self.out_channels = out_channels
-     self.n_heads = n_heads
-     self.p_dropout = p_dropout
-     self.window_size = window_size
-     self.heads_share = heads_share
-     self.block_length = block_length
-     self.proximal_bias = proximal_bias
-     self.proximal_init = proximal_init
-     self.attn = None
- 
-     self.k_channels = channels // n_heads
-     self.conv_q = nn.Conv1d(channels, channels, 1)
-     self.conv_k = nn.Conv1d(channels, channels, 1)
-     self.conv_v = nn.Conv1d(channels, channels, 1)
-     self.conv_o = nn.Conv1d(channels, out_channels, 1)
-     self.drop = nn.Dropout(p_dropout)
- 
-     if window_size is not None:
-       n_heads_rel = 1 if heads_share else n_heads
-       rel_stddev = self.k_channels**-0.5
-       self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-       self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- 
-     nn.init.xavier_uniform_(self.conv_q.weight)
-     nn.init.xavier_uniform_(self.conv_k.weight)
-     nn.init.xavier_uniform_(self.conv_v.weight)
-     if proximal_init:
-       with torch.no_grad():
-         self.conv_k.weight.copy_(self.conv_q.weight)
-         self.conv_k.bias.copy_(self.conv_q.bias)
- 
-   def forward(self, x, c, attn_mask=None):
-     q = self.conv_q(x)
-     k = self.conv_k(c)
-     v = self.conv_v(c)
- 
-     x, self.attn = self.attention(q, k, v, mask=attn_mask)
- 
-     x = self.conv_o(x)
-     return x
- 
-   def attention(self, query, key, value, mask=None):
-     # reshape [b, d, t] -> [b, n_h, t, d_k]
-     b, d, t_s, t_t = (*key.size(), query.size(2))
-     query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
-     key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-     value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- 
-     scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
-     if self.window_size is not None:
-       assert t_s == t_t, "Relative attention is only available for self-attention."
-       key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
-       rel_logits = self._matmul_with_relative_keys(query / math.sqrt(self.k_channels), key_relative_embeddings)
-       scores_local = self._relative_position_to_absolute_position(rel_logits)
-       scores = scores + scores_local
-     if self.proximal_bias:
-       assert t_s == t_t, "Proximal bias is only available for self-attention."
-       scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
-     if mask is not None:
-       scores = scores.masked_fill(mask == 0, -1e4)
-       if self.block_length is not None:
-         assert t_s == t_t, "Local attention is only available for self-attention."
-         block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
-         scores = scores.masked_fill(block_mask == 0, -1e4)
-     p_attn = F.softmax(scores, dim=-1)  # [b, n_h, t_t, t_s]
-     p_attn = self.drop(p_attn)
-     output = torch.matmul(p_attn, value)
-     if self.window_size is not None:
-       relative_weights = self._absolute_position_to_relative_position(p_attn)
-       value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
-       output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
-     output = output.transpose(2, 3).contiguous().view(b, d, t_t)  # [b, n_h, t_t, d_k] -> [b, d, t_t]
-     return output, p_attn
- 
-   def _matmul_with_relative_values(self, x, y):
-     """
-     x: [b, h, l, m]
-     y: [h or 1, m, d]
-     ret: [b, h, l, d]
-     """
-     ret = torch.matmul(x, y.unsqueeze(0))
-     return ret
- 
-   def _matmul_with_relative_keys(self, x, y):
-     """
-     x: [b, h, l, d]
-     y: [h or 1, m, d]
-     ret: [b, h, l, m]
-     """
-     ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
-     return ret
- 
-   def _get_relative_embeddings(self, relative_embeddings, length):
-     max_relative_position = 2 * self.window_size + 1
-     # Pad first before slice to avoid using cond ops.
-     pad_length = max(length - (self.window_size + 1), 0)
-     slice_start_position = max((self.window_size + 1) - length, 0)
-     slice_end_position = slice_start_position + 2 * length - 1
-     if pad_length > 0:
-       padded_relative_embeddings = F.pad(
-           relative_embeddings,
-           commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
-     else:
-       padded_relative_embeddings = relative_embeddings
-     used_relative_embeddings = padded_relative_embeddings[:, slice_start_position:slice_end_position]
-     return used_relative_embeddings
- 
-   def _relative_position_to_absolute_position(self, x):
-     """
-     x: [b, h, l, 2*l-1]
-     ret: [b, h, l, l]
-     """
-     batch, heads, length, _ = x.size()
-     # Concat columns of pad to shift from relative to absolute indexing.
-     x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
- 
-     # Concat extra elements so to add up to shape (len+1, 2*len-1).
-     x_flat = x.view([batch, heads, length * 2 * length])
-     x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]))
- 
-     # Reshape and slice out the padded elements.
-     x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[:, :, :length, length - 1:]
226
- return x_final
227
-
228
- def _absolute_position_to_relative_position(self, x):
229
- """
230
- x: [b, h, l, l]
231
- ret: [b, h, l, 2*l-1]
232
- """
233
- batch, heads, length, _ = x.size()
234
- # padd along column
235
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
236
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
237
- # add 0's in the beginning that will skew the elements after reshape
238
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
239
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
240
- return x_final
241
-
242
- def _attention_bias_proximal(self, length):
243
- """Bias for self-attention to encourage attention to close positions.
244
- Args:
245
- length: an integer scalar.
246
- Returns:
247
- a Tensor with shape [1, 1, length, length]
248
- """
249
- r = torch.arange(length, dtype=torch.float32)
250
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
251
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
252
-
253
-
254
- class FFN(nn.Module):
255
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
256
- super().__init__()
257
- self.in_channels = in_channels
258
- self.out_channels = out_channels
259
- self.filter_channels = filter_channels
260
- self.kernel_size = kernel_size
261
- self.p_dropout = p_dropout
262
- self.activation = activation
263
- self.causal = causal
264
-
265
- if causal:
266
- self.padding = self._causal_padding
267
- else:
268
- self.padding = self._same_padding
269
-
270
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
271
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
272
- self.drop = nn.Dropout(p_dropout)
273
-
274
- def forward(self, x, x_mask):
275
- x = self.conv_1(self.padding(x * x_mask))
276
- if self.activation == "gelu":
277
- x = x * torch.sigmoid(1.702 * x)
278
- else:
279
- x = torch.relu(x)
280
- x = self.drop(x)
281
- x = self.conv_2(self.padding(x * x_mask))
282
- return x * x_mask
283
-
284
- def _causal_padding(self, x):
285
- if self.kernel_size == 1:
286
- return x
287
- pad_l = self.kernel_size - 1
288
- pad_r = 0
289
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
290
- x = F.pad(x, commons.convert_pad_shape(padding))
291
- return x
292
-
293
- def _same_padding(self, x):
294
- if self.kernel_size == 1:
295
- return x
296
- pad_l = (self.kernel_size - 1) // 2
297
- pad_r = self.kernel_size // 2
298
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
299
- x = F.pad(x, commons.convert_pad_shape(padding))
300
- return x
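The pad-flatten-reshape trick used by `_relative_position_to_absolute_position` above is easy to miss in tensor form. Below is a minimal sketch of the same index shift written with plain Python lists (a hypothetical single batch/head slice; the deleted method operates on `[b, h, l, 2*l-1]` tensors with `F.pad` and `view`):

```python
def relative_to_absolute(x):
    """Turn per-position relative logits x[l][2*l-1] into absolute scores [l][l].

    Mirrors the deleted _relative_position_to_absolute_position: pad one column,
    flatten, pad l-1 trailing zeros, reshape to (l+1, 2*l-1), then slice.
    """
    l = len(x)
    # Pad one column on the right of each row: shape becomes [l][2*l].
    padded = [row + [0] for row in x]
    # Flatten and append l-1 zeros so the total length is (l+1) * (2*l-1).
    flat = [v for row in padded for v in row] + [0] * (l - 1)
    # Reshape to [l+1][2*l-1]; keep the first l rows and the last l columns.
    rows = [flat[i * (2 * l - 1):(i + 1) * (2 * l - 1)] for i in range(l + 1)]
    return [row[l - 1:] for row in rows[:l]]


rel = [[0, 1, 2], [10, 11, 12]]          # l = 2, relative offsets -1, 0, +1
print(relative_to_absolute(rel))          # -> [[1, 2], [10, 11]]
```

For query position `i`, output column `j` ends up reading relative index `j - i + l - 1`, i.e. the logit for offset `j - i`, which is exactly the shift the padding engineers without any conditional ops.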
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/msgpack/fallback.py DELETED
@@ -1,1010 +0,0 @@
- """Fallback pure Python implementation of msgpack"""
- from datetime import datetime as _DateTime
- import sys
- import struct
-
-
- PY2 = sys.version_info[0] == 2
- if PY2:
-     int_types = (int, long)
-
-     def dict_iteritems(d):
-         return d.iteritems()
-
- else:
-     int_types = int
-     unicode = str
-     xrange = range
-
-     def dict_iteritems(d):
-         return d.items()
-
-
- if sys.version_info < (3, 5):
-     # Ugly hack...
-     RecursionError = RuntimeError
-
-     def _is_recursionerror(e):
-         return (
-             len(e.args) == 1
-             and isinstance(e.args[0], str)
-             and e.args[0].startswith("maximum recursion depth exceeded")
-         )
-
- else:
-
-     def _is_recursionerror(e):
-         return True
-
-
- if hasattr(sys, "pypy_version_info"):
-     # StringIO is slow on PyPy, BytesIO is faster. However, PyPy's own
-     # StringBuilder is fastest.
-     from __pypy__ import newlist_hint
-
-     try:
-         from __pypy__.builders import BytesBuilder as StringBuilder
-     except ImportError:
-         from __pypy__.builders import StringBuilder
-     USING_STRINGBUILDER = True
-
-     class StringIO(object):
-         def __init__(self, s=b""):
-             if s:
-                 self.builder = StringBuilder(len(s))
-                 self.builder.append(s)
-             else:
-                 self.builder = StringBuilder()
-
-         def write(self, s):
-             if isinstance(s, memoryview):
-                 s = s.tobytes()
-             elif isinstance(s, bytearray):
-                 s = bytes(s)
-             self.builder.append(s)
-
-         def getvalue(self):
-             return self.builder.build()
-
- else:
-     USING_STRINGBUILDER = False
-     from io import BytesIO as StringIO
-
-     newlist_hint = lambda size: []
-
-
- from .exceptions import BufferFull, OutOfData, ExtraData, FormatError, StackError
-
- from .ext import ExtType, Timestamp
-
-
- EX_SKIP = 0
- EX_CONSTRUCT = 1
- EX_READ_ARRAY_HEADER = 2
- EX_READ_MAP_HEADER = 3
-
- TYPE_IMMEDIATE = 0
- TYPE_ARRAY = 1
- TYPE_MAP = 2
- TYPE_RAW = 3
- TYPE_BIN = 4
- TYPE_EXT = 5
-
- DEFAULT_RECURSE_LIMIT = 511
-
-
- def _check_type_strict(obj, t, type=type, tuple=tuple):
-     if type(t) is tuple:
-         return type(obj) in t
-     else:
-         return type(obj) is t
-
-
- def _get_data_from_buffer(obj):
-     view = memoryview(obj)
-     if view.itemsize != 1:
-         raise ValueError("cannot unpack from multi-byte object")
-     return view
-
-
- def unpackb(packed, **kwargs):
-     """
-     Unpack an object from `packed`.
-
-     Raises ``ExtraData`` when *packed* contains extra bytes.
-     Raises ``ValueError`` when *packed* is incomplete.
-     Raises ``FormatError`` when *packed* is not valid msgpack.
-     Raises ``StackError`` when *packed* is too deeply nested.
-     Other exceptions can be raised during unpacking.
-
-     See :class:`Unpacker` for options.
-     """
-     unpacker = Unpacker(None, max_buffer_size=len(packed), **kwargs)
-     unpacker.feed(packed)
-     try:
-         ret = unpacker._unpack()
-     except OutOfData:
-         raise ValueError("Unpack failed: incomplete input")
-     except RecursionError as e:
-         if _is_recursionerror(e):
-             raise StackError
-         raise
-     if unpacker._got_extradata():
-         raise ExtraData(ret, unpacker._get_extradata())
-     return ret
-
-
- if sys.version_info < (2, 7, 6):
-
-     def _unpack_from(f, b, o=0):
-         """Explicit type cast for legacy struct.unpack_from"""
-         return struct.unpack_from(f, bytes(b), o)
-
- else:
-     _unpack_from = struct.unpack_from
-
- _NO_FORMAT_USED = ""
- _MSGPACK_HEADERS = {
-     0xC4: (1, _NO_FORMAT_USED, TYPE_BIN),
-     0xC5: (2, ">H", TYPE_BIN),
-     0xC6: (4, ">I", TYPE_BIN),
-     0xC7: (2, "Bb", TYPE_EXT),
-     0xC8: (3, ">Hb", TYPE_EXT),
-     0xC9: (5, ">Ib", TYPE_EXT),
-     0xCA: (4, ">f"),
-     0xCB: (8, ">d"),
-     0xCC: (1, _NO_FORMAT_USED),
-     0xCD: (2, ">H"),
-     0xCE: (4, ">I"),
-     0xCF: (8, ">Q"),
-     0xD0: (1, "b"),
-     0xD1: (2, ">h"),
-     0xD2: (4, ">i"),
-     0xD3: (8, ">q"),
-     0xD4: (1, "b1s", TYPE_EXT),
-     0xD5: (2, "b2s", TYPE_EXT),
-     0xD6: (4, "b4s", TYPE_EXT),
-     0xD7: (8, "b8s", TYPE_EXT),
-     0xD8: (16, "b16s", TYPE_EXT),
-     0xD9: (1, _NO_FORMAT_USED, TYPE_RAW),
-     0xDA: (2, ">H", TYPE_RAW),
-     0xDB: (4, ">I", TYPE_RAW),
-     0xDC: (2, ">H", TYPE_ARRAY),
-     0xDD: (4, ">I", TYPE_ARRAY),
-     0xDE: (2, ">H", TYPE_MAP),
-     0xDF: (4, ">I", TYPE_MAP),
- }
-
-
- class Unpacker(object):
-     """Streaming unpacker.
-
-     Arguments:
-
-     :param file_like:
-         File-like object having `.read(n)` method.
-         If specified, unpacker reads serialized data from it and :meth:`feed()` is not usable.
-
-     :param int read_size:
-         Used as `file_like.read(read_size)`. (default: `min(16*1024, max_buffer_size)`)
-
-     :param bool use_list:
-         If true, unpack msgpack array to Python list.
-         Otherwise, unpack to Python tuple. (default: True)
-
-     :param bool raw:
-         If true, unpack msgpack raw to Python bytes.
-         Otherwise, unpack to Python str by decoding with UTF-8 encoding (default).
-
-     :param int timestamp:
-         Control how timestamp type is unpacked:
-
-         0 - Timestamp
-         1 - float (Seconds from the EPOCH)
-         2 - int (Nanoseconds from the EPOCH)
-         3 - datetime.datetime (UTC). Python 2 is not supported.
-
-     :param bool strict_map_key:
-         If true (default), only str or bytes are accepted for map (dict) keys.
-
-     :param callable object_hook:
-         When specified, it should be callable.
-         Unpacker calls it with a dict argument after unpacking msgpack map.
-         (See also simplejson)
-
-     :param callable object_pairs_hook:
-         When specified, it should be callable.
-         Unpacker calls it with a list of key-value pairs after unpacking msgpack map.
-         (See also simplejson)
-
-     :param str unicode_errors:
-         The error handler for decoding unicode. (default: 'strict')
-         This option should be used only when you have msgpack data which
-         contains invalid UTF-8 strings.
-
-     :param int max_buffer_size:
-         Limits the size of data waiting to be unpacked. 0 means 2**32-1.
-         The default value is 100*1024*1024 (100MiB).
-         Raises `BufferFull` exception when it is insufficient.
-         You should set this parameter when unpacking data from an untrusted source.
-
-     :param int max_str_len:
-         Deprecated, use *max_buffer_size* instead.
-         Limits max length of str. (default: max_buffer_size)
-
-     :param int max_bin_len:
-         Deprecated, use *max_buffer_size* instead.
-         Limits max length of bin. (default: max_buffer_size)
-
-     :param int max_array_len:
-         Limits max length of array.
-         (default: max_buffer_size)
-
-     :param int max_map_len:
-         Limits max length of map.
-         (default: max_buffer_size//2)
-
-     :param int max_ext_len:
-         Deprecated, use *max_buffer_size* instead.
-         Limits max size of ext type. (default: max_buffer_size)
-
-     Example of streaming deserialize from file-like object::
-
-         unpacker = Unpacker(file_like)
-         for o in unpacker:
-             process(o)
-
-     Example of streaming deserialize from socket::
-
-         unpacker = Unpacker()
-         while True:
-             buf = sock.recv(1024**2)
-             if not buf:
-                 break
-             unpacker.feed(buf)
-             for o in unpacker:
-                 process(o)
-
-     Raises ``ExtraData`` when *packed* contains extra bytes.
-     Raises ``OutOfData`` when *packed* is incomplete.
-     Raises ``FormatError`` when *packed* is not valid msgpack.
-     Raises ``StackError`` when *packed* is too deeply nested.
-     Other exceptions can be raised during unpacking.
-     """
-
-     def __init__(
-         self,
-         file_like=None,
-         read_size=0,
-         use_list=True,
-         raw=False,
-         timestamp=0,
-         strict_map_key=True,
-         object_hook=None,
-         object_pairs_hook=None,
-         list_hook=None,
-         unicode_errors=None,
-         max_buffer_size=100 * 1024 * 1024,
-         ext_hook=ExtType,
-         max_str_len=-1,
-         max_bin_len=-1,
-         max_array_len=-1,
-         max_map_len=-1,
-         max_ext_len=-1,
-     ):
-         if unicode_errors is None:
-             unicode_errors = "strict"
-
-         if file_like is None:
-             self._feeding = True
-         else:
-             if not callable(file_like.read):
-                 raise TypeError("`file_like.read` must be callable")
-             self.file_like = file_like
-             self._feeding = False
-
-         #: array of bytes fed.
-         self._buffer = bytearray()
-         #: Position we are currently reading at.
-         self._buff_i = 0
-
-         # When Unpacker is used as an iterable, between the calls to next(),
-         # the buffer is not "consumed" completely, for efficiency's sake.
-         # Instead, it is done sloppily. To make sure we raise BufferFull at
-         # the correct moments, we have to keep track of how sloppy we were.
-         # Furthermore, when the buffer is incomplete (that is: in the case
-         # we raise an OutOfData) we need to rollback the buffer to the correct
-         # state, which _buf_checkpoint records.
-         self._buf_checkpoint = 0
-
-         if not max_buffer_size:
-             max_buffer_size = 2**31 - 1
-         if max_str_len == -1:
-             max_str_len = max_buffer_size
-         if max_bin_len == -1:
-             max_bin_len = max_buffer_size
-         if max_array_len == -1:
-             max_array_len = max_buffer_size
-         if max_map_len == -1:
-             max_map_len = max_buffer_size // 2
-         if max_ext_len == -1:
-             max_ext_len = max_buffer_size
-
-         self._max_buffer_size = max_buffer_size
-         if read_size > self._max_buffer_size:
-             raise ValueError("read_size must be smaller than max_buffer_size")
-         self._read_size = read_size or min(self._max_buffer_size, 16 * 1024)
-         self._raw = bool(raw)
-         self._strict_map_key = bool(strict_map_key)
-         self._unicode_errors = unicode_errors
-         self._use_list = use_list
-         if not (0 <= timestamp <= 3):
-             raise ValueError("timestamp must be 0..3")
-         self._timestamp = timestamp
-         self._list_hook = list_hook
-         self._object_hook = object_hook
-         self._object_pairs_hook = object_pairs_hook
-         self._ext_hook = ext_hook
-         self._max_str_len = max_str_len
-         self._max_bin_len = max_bin_len
-         self._max_array_len = max_array_len
-         self._max_map_len = max_map_len
-         self._max_ext_len = max_ext_len
-         self._stream_offset = 0
-
-         if list_hook is not None and not callable(list_hook):
-             raise TypeError("`list_hook` is not callable")
-         if object_hook is not None and not callable(object_hook):
-             raise TypeError("`object_hook` is not callable")
-         if object_pairs_hook is not None and not callable(object_pairs_hook):
-             raise TypeError("`object_pairs_hook` is not callable")
-         if object_hook is not None and object_pairs_hook is not None:
-             raise TypeError("object_pairs_hook and object_hook are mutually exclusive")
-         if not callable(ext_hook):
-             raise TypeError("`ext_hook` is not callable")
-
-     def feed(self, next_bytes):
-         assert self._feeding
-         view = _get_data_from_buffer(next_bytes)
-         if len(self._buffer) - self._buff_i + len(view) > self._max_buffer_size:
-             raise BufferFull
-
-         # Strip buffer before checkpoint before reading file.
-         if self._buf_checkpoint > 0:
-             del self._buffer[: self._buf_checkpoint]
-             self._buff_i -= self._buf_checkpoint
-             self._buf_checkpoint = 0
-
-         # Use extend here: INPLACE_ADD += doesn't reliably typecast memoryview in jython
-         self._buffer.extend(view)
-
-     def _consume(self):
-         """Gets rid of the used parts of the buffer."""
-         self._stream_offset += self._buff_i - self._buf_checkpoint
-         self._buf_checkpoint = self._buff_i
-
-     def _got_extradata(self):
-         return self._buff_i < len(self._buffer)
-
-     def _get_extradata(self):
-         return self._buffer[self._buff_i :]
-
-     def read_bytes(self, n):
-         ret = self._read(n, raise_outofdata=False)
-         self._consume()
-         return ret
-
-     def _read(self, n, raise_outofdata=True):
-         # (int) -> bytearray
-         self._reserve(n, raise_outofdata=raise_outofdata)
-         i = self._buff_i
-         ret = self._buffer[i : i + n]
-         self._buff_i = i + len(ret)
-         return ret
-
-     def _reserve(self, n, raise_outofdata=True):
-         remain_bytes = len(self._buffer) - self._buff_i - n
-
-         # Fast path: buffer has n bytes already
-         if remain_bytes >= 0:
-             return
-
-         if self._feeding:
-             self._buff_i = self._buf_checkpoint
-             raise OutOfData
-
-         # Strip buffer before checkpoint before reading file.
-         if self._buf_checkpoint > 0:
-             del self._buffer[: self._buf_checkpoint]
-             self._buff_i -= self._buf_checkpoint
-             self._buf_checkpoint = 0
-
-         # Read from file
-         remain_bytes = -remain_bytes
-         if remain_bytes + len(self._buffer) > self._max_buffer_size:
-             raise BufferFull
-         while remain_bytes > 0:
-             to_read_bytes = max(self._read_size, remain_bytes)
-             read_data = self.file_like.read(to_read_bytes)
-             if not read_data:
-                 break
-             assert isinstance(read_data, bytes)
-             self._buffer += read_data
-             remain_bytes -= len(read_data)
-
-         if len(self._buffer) < n + self._buff_i and raise_outofdata:
-             self._buff_i = 0  # rollback
-             raise OutOfData
-
-     def _read_header(self):
-         typ = TYPE_IMMEDIATE
-         n = 0
-         obj = None
-         self._reserve(1)
-         b = self._buffer[self._buff_i]
-         self._buff_i += 1
-         if b & 0b10000000 == 0:
-             obj = b
-         elif b & 0b11100000 == 0b11100000:
-             obj = -1 - (b ^ 0xFF)
-         elif b & 0b11100000 == 0b10100000:
-             n = b & 0b00011111
-             typ = TYPE_RAW
-             if n > self._max_str_len:
-                 raise ValueError("%s exceeds max_str_len(%s)" % (n, self._max_str_len))
-             obj = self._read(n)
-         elif b & 0b11110000 == 0b10010000:
-             n = b & 0b00001111
-             typ = TYPE_ARRAY
-             if n > self._max_array_len:
-                 raise ValueError("%s exceeds max_array_len(%s)" % (n, self._max_array_len))
-         elif b & 0b11110000 == 0b10000000:
-             n = b & 0b00001111
-             typ = TYPE_MAP
-             if n > self._max_map_len:
-                 raise ValueError("%s exceeds max_map_len(%s)" % (n, self._max_map_len))
-         elif b == 0xC0:
-             obj = None
-         elif b == 0xC2:
-             obj = False
-         elif b == 0xC3:
-             obj = True
-         elif 0xC4 <= b <= 0xC6:
-             size, fmt, typ = _MSGPACK_HEADERS[b]
-             self._reserve(size)
-             if len(fmt) > 0:
-                 n = _unpack_from(fmt, self._buffer, self._buff_i)[0]
-             else:
-                 n = self._buffer[self._buff_i]
-             self._buff_i += size
-             if n > self._max_bin_len:
-                 raise ValueError("%s exceeds max_bin_len(%s)" % (n, self._max_bin_len))
-             obj = self._read(n)
-         elif 0xC7 <= b <= 0xC9:
-             size, fmt, typ = _MSGPACK_HEADERS[b]
-             self._reserve(size)
-             L, n = _unpack_from(fmt, self._buffer, self._buff_i)
-             self._buff_i += size
-             if L > self._max_ext_len:
-                 raise ValueError("%s exceeds max_ext_len(%s)" % (L, self._max_ext_len))
-             obj = self._read(L)
-         elif 0xCA <= b <= 0xD3:
-             size, fmt = _MSGPACK_HEADERS[b]
-             self._reserve(size)
-             if len(fmt) > 0:
-                 obj = _unpack_from(fmt, self._buffer, self._buff_i)[0]
-             else:
-                 obj = self._buffer[self._buff_i]
-             self._buff_i += size
-         elif 0xD4 <= b <= 0xD8:
-             size, fmt, typ = _MSGPACK_HEADERS[b]
-             if self._max_ext_len < size:
-                 raise ValueError("%s exceeds max_ext_len(%s)" % (size, self._max_ext_len))
-             self._reserve(size + 1)
-             n, obj = _unpack_from(fmt, self._buffer, self._buff_i)
-             self._buff_i += size + 1
-         elif 0xD9 <= b <= 0xDB:
-             size, fmt, typ = _MSGPACK_HEADERS[b]
-             self._reserve(size)
-             if len(fmt) > 0:
-                 (n,) = _unpack_from(fmt, self._buffer, self._buff_i)
-             else:
-                 n = self._buffer[self._buff_i]
-             self._buff_i += size
-             if n > self._max_str_len:
-                 raise ValueError("%s exceeds max_str_len(%s)" % (n, self._max_str_len))
-             obj = self._read(n)
-         elif 0xDC <= b <= 0xDD:
-             size, fmt, typ = _MSGPACK_HEADERS[b]
-             self._reserve(size)
-             (n,) = _unpack_from(fmt, self._buffer, self._buff_i)
-             self._buff_i += size
-             if n > self._max_array_len:
-                 raise ValueError("%s exceeds max_array_len(%s)" % (n, self._max_array_len))
-         elif 0xDE <= b <= 0xDF:
-             size, fmt, typ = _MSGPACK_HEADERS[b]
-             self._reserve(size)
-             (n,) = _unpack_from(fmt, self._buffer, self._buff_i)
-             self._buff_i += size
-             if n > self._max_map_len:
-                 raise ValueError("%s exceeds max_map_len(%s)" % (n, self._max_map_len))
-         else:
-             raise FormatError("Unknown header: 0x%x" % b)
-         return typ, n, obj
-
-     def _unpack(self, execute=EX_CONSTRUCT):
-         typ, n, obj = self._read_header()
-
-         if execute == EX_READ_ARRAY_HEADER:
-             if typ != TYPE_ARRAY:
-                 raise ValueError("Expected array")
-             return n
-         if execute == EX_READ_MAP_HEADER:
-             if typ != TYPE_MAP:
-                 raise ValueError("Expected map")
-             return n
-         # TODO should we eliminate the recursion?
-         if typ == TYPE_ARRAY:
-             if execute == EX_SKIP:
-                 for i in xrange(n):
-                     # TODO check whether we need to call `list_hook`
-                     self._unpack(EX_SKIP)
-                 return
-             ret = newlist_hint(n)
-             for i in xrange(n):
-                 ret.append(self._unpack(EX_CONSTRUCT))
-             if self._list_hook is not None:
-                 ret = self._list_hook(ret)
-             # TODO is the interaction between `list_hook` and `use_list` ok?
-             return ret if self._use_list else tuple(ret)
-         if typ == TYPE_MAP:
-             if execute == EX_SKIP:
-                 for i in xrange(n):
-                     # TODO check whether we need to call hooks
-                     self._unpack(EX_SKIP)
-                     self._unpack(EX_SKIP)
-                 return
-             if self._object_pairs_hook is not None:
-                 ret = self._object_pairs_hook(
-                     (self._unpack(EX_CONSTRUCT), self._unpack(EX_CONSTRUCT))
-                     for _ in xrange(n)
-                 )
-             else:
-                 ret = {}
-                 for _ in xrange(n):
-                     key = self._unpack(EX_CONSTRUCT)
-                     if self._strict_map_key and type(key) not in (unicode, bytes):
-                         raise ValueError("%s is not allowed for map key" % str(type(key)))
-                     if not PY2 and type(key) is str:
-                         key = sys.intern(key)
-                     ret[key] = self._unpack(EX_CONSTRUCT)
-                 if self._object_hook is not None:
-                     ret = self._object_hook(ret)
-             return ret
-         if execute == EX_SKIP:
-             return
-         if typ == TYPE_RAW:
-             if self._raw:
-                 obj = bytes(obj)
-             else:
-                 obj = obj.decode("utf_8", self._unicode_errors)
-             return obj
-         if typ == TYPE_BIN:
-             return bytes(obj)
-         if typ == TYPE_EXT:
-             if n == -1:  # timestamp
-                 ts = Timestamp.from_bytes(bytes(obj))
-                 if self._timestamp == 1:
-                     return ts.to_unix()
-                 elif self._timestamp == 2:
-                     return ts.to_unix_nano()
-                 elif self._timestamp == 3:
-                     return ts.to_datetime()
-                 else:
-                     return ts
-             else:
-                 return self._ext_hook(n, bytes(obj))
-         assert typ == TYPE_IMMEDIATE
-         return obj
-
-     def __iter__(self):
-         return self
-
-     def __next__(self):
-         try:
-             ret = self._unpack(EX_CONSTRUCT)
-             self._consume()
-             return ret
-         except OutOfData:
-             self._consume()
-             raise StopIteration
-         except RecursionError:
-             raise StackError
-
-     next = __next__
-
-     def skip(self):
-         self._unpack(EX_SKIP)
-         self._consume()
-
-     def unpack(self):
-         try:
-             ret = self._unpack(EX_CONSTRUCT)
-         except RecursionError:
-             raise StackError
-         self._consume()
-         return ret
-
-     def read_array_header(self):
-         ret = self._unpack(EX_READ_ARRAY_HEADER)
-         self._consume()
-         return ret
-
-     def read_map_header(self):
-         ret = self._unpack(EX_READ_MAP_HEADER)
-         self._consume()
-         return ret
-
-     def tell(self):
-         return self._stream_offset
-
-
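The bit-mask dispatch at the top of `_read_header` is the heart of the msgpack format: the single-byte "fix" types encode both the tag and the payload length (or the value itself) in one byte. A standalone sketch of just those first branches (hypothetical helper, not part of msgpack's public API):

```python
def classify_fix_byte(b):
    """Classify one msgpack 'fix-format' type byte, mirroring the first
    branches of Unpacker._read_header above. Returns (kind, value_or_length)."""
    if b & 0b10000000 == 0:
        return ("int", b)                    # positive fixint: 0x00-0x7f, value is the byte
    if b & 0b11100000 == 0b11100000:
        return ("int", -1 - (b ^ 0xFF))      # negative fixint: 0xe0-0xff, value -32..-1
    if b & 0b11100000 == 0b10100000:
        return ("str", b & 0b00011111)       # fixstr: length in the low 5 bits
    if b & 0b11110000 == 0b10010000:
        return ("array", b & 0b00001111)     # fixarray: length in the low 4 bits
    if b & 0b11110000 == 0b10000000:
        return ("map", b & 0b00001111)       # fixmap: length in the low 4 bits
    raise ValueError("not a fix-format byte: 0x%x" % b)


print(classify_fix_byte(0x7F))   # -> ('int', 127)
print(classify_fix_byte(0xA3))   # -> ('str', 3)
```

Everything that does not match a fix format falls through to the `_MSGPACK_HEADERS` table, which maps the remaining type bytes (0xC4-0xDF) to a header size and a `struct` format string.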
- class Packer(object):
-     """
-     MessagePack Packer
-
-     Usage::
-
-         packer = Packer()
-         astream.write(packer.pack(a))
-         astream.write(packer.pack(b))
-
-     Packer's constructor has some keyword arguments:
-
-     :param callable default:
-         Convert user type to builtin type that Packer supports.
-         See also simplejson's document.
-
-     :param bool use_single_float:
-         Use single precision float type for float. (default: False)
-
-     :param bool autoreset:
-         Reset buffer after each pack and return its content as `bytes`. (default: True).
-         If set to false, use `bytes()` to get content and `.reset()` to clear buffer.
-
-     :param bool use_bin_type:
-         Use bin type introduced in msgpack spec 2.0 for bytes.
-         It also enables str8 type for unicode. (default: True)
-
-     :param bool strict_types:
-         If set to true, types will be checked to be exact. Derived classes
-         from serializable types will not be serialized and will be
-         treated as unsupported type and forwarded to default.
-         Additionally tuples will not be serialized as lists.
-         This is useful when trying to implement accurate serialization
-         for python types.
-
-     :param bool datetime:
-         If set to true, datetime with tzinfo is packed into Timestamp type.
-         Note that the tzinfo is stripped in the timestamp.
-         You can get UTC datetime with `timestamp=3` option of the Unpacker.
-         (Python 2 is not supported).
-
-     :param str unicode_errors:
-         The error handler for encoding unicode. (default: 'strict')
-         DO NOT USE THIS!! This option is kept for very specific usage.
-     """
-
-     def __init__(
-         self,
-         default=None,
-         use_single_float=False,
-         autoreset=True,
-         use_bin_type=True,
-         strict_types=False,
-         datetime=False,
-         unicode_errors=None,
-     ):
-         self._strict_types = strict_types
-         self._use_float = use_single_float
-         self._autoreset = autoreset
-         self._use_bin_type = use_bin_type
-         self._buffer = StringIO()
-         if PY2 and datetime:
-             raise ValueError("datetime is not supported in Python 2")
-         self._datetime = bool(datetime)
-         self._unicode_errors = unicode_errors or "strict"
-         if default is not None:
-             if not callable(default):
-                 raise TypeError("default must be callable")
-         self._default = default
-
-     def _pack(
-         self,
-         obj,
-         nest_limit=DEFAULT_RECURSE_LIMIT,
-         check=isinstance,
-         check_type_strict=_check_type_strict,
-     ):
-         default_used = False
-         if self._strict_types:
-             check = check_type_strict
-             list_types = list
-         else:
-             list_types = (list, tuple)
-         while True:
-             if nest_limit < 0:
-                 raise ValueError("recursion limit exceeded")
-             if obj is None:
-                 return self._buffer.write(b"\xc0")
-             if check(obj, bool):
-                 if obj:
-                     return self._buffer.write(b"\xc3")
-                 return self._buffer.write(b"\xc2")
-             if check(obj, int_types):
-                 if 0 <= obj < 0x80:
-                     return self._buffer.write(struct.pack("B", obj))
-                 if -0x20 <= obj < 0:
-                     return self._buffer.write(struct.pack("b", obj))
-                 if 0x80 <= obj <= 0xFF:
-                     return self._buffer.write(struct.pack("BB", 0xCC, obj))
-                 if -0x80 <= obj < 0:
-                     return self._buffer.write(struct.pack(">Bb", 0xD0, obj))
-                 if 0xFF < obj <= 0xFFFF:
-                     return self._buffer.write(struct.pack(">BH", 0xCD, obj))
-                 if -0x8000 <= obj < -0x80:
-                     return self._buffer.write(struct.pack(">Bh", 0xD1, obj))
-                 if 0xFFFF < obj <= 0xFFFFFFFF:
-                     return self._buffer.write(struct.pack(">BI", 0xCE, obj))
-                 if -0x80000000 <= obj < -0x8000:
-                     return self._buffer.write(struct.pack(">Bi", 0xD2, obj))
-                 if 0xFFFFFFFF < obj <= 0xFFFFFFFFFFFFFFFF:
-                     return self._buffer.write(struct.pack(">BQ", 0xCF, obj))
-                 if -0x8000000000000000 <= obj < -0x80000000:
-                     return self._buffer.write(struct.pack(">Bq", 0xD3, obj))
-                 if not default_used and self._default is not None:
-                     obj = self._default(obj)
-                     default_used = True
-                     continue
-                 raise OverflowError("Integer value out of range")
-             if check(obj, (bytes, bytearray)):
-                 n = len(obj)
-                 if n >= 2**32:
-                     raise ValueError("%s is too large" % type(obj).__name__)
-                 self._pack_bin_header(n)
-                 return self._buffer.write(obj)
-             if check(obj, unicode):
-                 obj = obj.encode("utf-8", self._unicode_errors)
-                 n = len(obj)
-                 if n >= 2**32:
-                     raise ValueError("String is too large")
-                 self._pack_raw_header(n)
-                 return self._buffer.write(obj)
-             if check(obj, memoryview):
-                 n = obj.nbytes
-                 if n >= 2**32:
-                     raise ValueError("Memoryview is too large")
-                 self._pack_bin_header(n)
-                 return self._buffer.write(obj)
-             if check(obj, float):
-                 if self._use_float:
-                     return self._buffer.write(struct.pack(">Bf", 0xCA, obj))
-                 return self._buffer.write(struct.pack(">Bd", 0xCB, obj))
-             if check(obj, (ExtType, Timestamp)):
-                 if check(obj, Timestamp):
-                     code = -1
-                     data = obj.to_bytes()
-                 else:
-                     code = obj.code
-                     data = obj.data
-                 assert isinstance(code, int)
-                 assert isinstance(data, bytes)
-                 L = len(data)
-                 if L == 1:
-                     self._buffer.write(b"\xd4")
-                 elif L == 2:
-                     self._buffer.write(b"\xd5")
-                 elif L == 4:
-                     self._buffer.write(b"\xd6")
-                 elif L == 8:
-                     self._buffer.write(b"\xd7")
-                 elif L == 16:
-                     self._buffer.write(b"\xd8")
-                 elif L <= 0xFF:
-                     self._buffer.write(struct.pack(">BB", 0xC7, L))
-                 elif L <= 0xFFFF:
-                     self._buffer.write(struct.pack(">BH", 0xC8, L))
-                 else:
-                     self._buffer.write(struct.pack(">BI", 0xC9, L))
-                 self._buffer.write(struct.pack("b", code))
-                 self._buffer.write(data)
-                 return
-             if check(obj, list_types):
-                 n = len(obj)
-                 self._pack_array_header(n)
-                 for i in xrange(n):
-                     self._pack(obj[i], nest_limit - 1)
-                 return
-             if check(obj, dict):
-                 return self._pack_map_pairs(len(obj), dict_iteritems(obj), nest_limit - 1)
-
-             if self._datetime and check(obj, _DateTime) and obj.tzinfo is not None:
-                 obj = Timestamp.from_datetime(obj)
-                 default_used = 1
-                 continue
-
-             if not default_used and self._default is not None:
-                 obj = self._default(obj)
-                 default_used = 1
-                 continue
-
-             if self._datetime and check(obj, _DateTime):
-                 raise ValueError("Cannot serialize %r where tzinfo=None" % (obj,))
-
-             raise TypeError("Cannot serialize %r" % (obj,))
-
-     def pack(self, obj):
-         try:
-             self._pack(obj)
-         except:
-             self._buffer = StringIO()  # force reset
-             raise
-         if self._autoreset:
-             ret = self._buffer.getvalue()
-             self._buffer = StringIO()
-             return ret
-
-     def pack_map_pairs(self, pairs):
-         self._pack_map_pairs(len(pairs), pairs)
-         if self._autoreset:
-             ret = self._buffer.getvalue()
-             self._buffer = StringIO()
-             return ret
-
-     def pack_array_header(self, n):
-         if n >= 2**32:
-             raise ValueError
-         self._pack_array_header(n)
-         if self._autoreset:
-             ret = self._buffer.getvalue()
-             self._buffer = StringIO()
-             return ret
-
-     def pack_map_header(self, n):
-         if n >= 2**32:
-             raise ValueError
-         self._pack_map_header(n)
-         if self._autoreset:
-             ret = self._buffer.getvalue()
-             self._buffer = StringIO()
-             return ret
-
-     def pack_ext_type(self, typecode, data):
-         if not isinstance(typecode, int):
919
- raise TypeError("typecode must have int type.")
920
- if not 0 <= typecode <= 127:
921
- raise ValueError("typecode should be 0-127")
922
- if not isinstance(data, bytes):
923
- raise TypeError("data must have bytes type")
924
- L = len(data)
925
- if L > 0xFFFFFFFF:
926
- raise ValueError("Too large data")
927
- if L == 1:
928
- self._buffer.write(b"\xd4")
929
- elif L == 2:
930
- self._buffer.write(b"\xd5")
931
- elif L == 4:
932
- self._buffer.write(b"\xd6")
933
- elif L == 8:
934
- self._buffer.write(b"\xd7")
935
- elif L == 16:
936
- self._buffer.write(b"\xd8")
937
- elif L <= 0xFF:
938
- self._buffer.write(b"\xc7" + struct.pack("B", L))
939
- elif L <= 0xFFFF:
940
- self._buffer.write(b"\xc8" + struct.pack(">H", L))
941
- else:
942
- self._buffer.write(b"\xc9" + struct.pack(">I", L))
943
- self._buffer.write(struct.pack("B", typecode))
944
- self._buffer.write(data)
945
-
946
- def _pack_array_header(self, n):
947
- if n <= 0x0F:
948
- return self._buffer.write(struct.pack("B", 0x90 + n))
949
- if n <= 0xFFFF:
950
- return self._buffer.write(struct.pack(">BH", 0xDC, n))
951
- if n <= 0xFFFFFFFF:
952
- return self._buffer.write(struct.pack(">BI", 0xDD, n))
953
- raise ValueError("Array is too large")
954
-
955
- def _pack_map_header(self, n):
956
- if n <= 0x0F:
957
- return self._buffer.write(struct.pack("B", 0x80 + n))
958
- if n <= 0xFFFF:
959
- return self._buffer.write(struct.pack(">BH", 0xDE, n))
960
- if n <= 0xFFFFFFFF:
961
- return self._buffer.write(struct.pack(">BI", 0xDF, n))
962
- raise ValueError("Dict is too large")
963
-
964
- def _pack_map_pairs(self, n, pairs, nest_limit=DEFAULT_RECURSE_LIMIT):
965
- self._pack_map_header(n)
966
- for (k, v) in pairs:
967
- self._pack(k, nest_limit - 1)
968
- self._pack(v, nest_limit - 1)
969
-
970
- def _pack_raw_header(self, n):
971
- if n <= 0x1F:
972
- self._buffer.write(struct.pack("B", 0xA0 + n))
973
- elif self._use_bin_type and n <= 0xFF:
974
- self._buffer.write(struct.pack(">BB", 0xD9, n))
975
- elif n <= 0xFFFF:
976
- self._buffer.write(struct.pack(">BH", 0xDA, n))
977
- elif n <= 0xFFFFFFFF:
978
- self._buffer.write(struct.pack(">BI", 0xDB, n))
979
- else:
980
- raise ValueError("Raw is too large")
981
-
982
- def _pack_bin_header(self, n):
983
- if not self._use_bin_type:
984
- return self._pack_raw_header(n)
985
- elif n <= 0xFF:
986
- return self._buffer.write(struct.pack(">BB", 0xC4, n))
987
- elif n <= 0xFFFF:
988
- return self._buffer.write(struct.pack(">BH", 0xC5, n))
989
- elif n <= 0xFFFFFFFF:
990
- return self._buffer.write(struct.pack(">BI", 0xC6, n))
991
- else:
992
- raise ValueError("Bin is too large")
993
-
994
- def bytes(self):
995
- """Return internal buffer contents as bytes object"""
996
- return self._buffer.getvalue()
997
-
998
- def reset(self):
999
- """Reset internal buffer.
1000
-
1001
- This method is useful only when autoreset=False.
1002
- """
1003
- self._buffer = StringIO()
1004
-
1005
- def getbuffer(self):
1006
- """Return view of internal buffer."""
1007
- if USING_STRINGBUILDER or PY2:
1008
- return memoryview(self.bytes())
1009
- else:
1010
- return self._buffer.getbuffer()
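The header-selection pattern used throughout this Packer (e.g. `_pack_array_header` above: a one-byte fixarray code for small lengths, then `array 16`/`array 32` prefixes with big-endian lengths) can be sketched standalone with only the standard library. The free function name here is illustrative, not part of the deleted module's API:

```python
import struct

def pack_array_header(n: int) -> bytes:
    # Mirrors Packer._pack_array_header: fixarray (0x90..0x9f),
    # then array 16 (0xdc) and array 32 (0xdd) with big-endian lengths.
    if n <= 0x0F:
        return struct.pack("B", 0x90 + n)
    if n <= 0xFFFF:
        return struct.pack(">BH", 0xDC, n)
    if n <= 0xFFFFFFFF:
        return struct.pack(">BI", 0xDD, n)
    raise ValueError("Array is too large")

print(pack_array_header(3).hex())      # fixarray: 93
print(pack_array_header(70000).hex())  # array 32: dd00011170
```

The same three-tier shape (fixed code for tiny payloads, then 16- and 32-bit length prefixes) repeats in the map, raw, and bin header methods with different code bytes.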
 
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/__init__.py DELETED
@@ -1,182 +0,0 @@
1
- # __
2
- # /__) _ _ _ _ _/ _
3
- # / ( (- (/ (/ (- _) / _)
4
- # /
5
-
6
- """
7
- Requests HTTP Library
8
- ~~~~~~~~~~~~~~~~~~~~~
9
-
10
- Requests is an HTTP library, written in Python, for human beings.
11
- Basic GET usage:
12
-
13
- >>> import requests
14
- >>> r = requests.get('https://www.python.org')
15
- >>> r.status_code
16
- 200
17
- >>> b'Python is a programming language' in r.content
18
- True
19
-
20
- ... or POST:
21
-
22
- >>> payload = dict(key1='value1', key2='value2')
23
- >>> r = requests.post('https://httpbin.org/post', data=payload)
24
- >>> print(r.text)
25
- {
26
- ...
27
- "form": {
28
- "key1": "value1",
29
- "key2": "value2"
30
- },
31
- ...
32
- }
33
-
34
- The other HTTP methods are supported - see `requests.api`. Full documentation
35
- is at <https://requests.readthedocs.io>.
36
-
37
- :copyright: (c) 2017 by Kenneth Reitz.
38
- :license: Apache 2.0, see LICENSE for more details.
39
- """
40
-
41
- import warnings
42
-
43
- from pip._vendor import urllib3
44
-
45
- from .exceptions import RequestsDependencyWarning
46
-
47
- charset_normalizer_version = None
48
-
49
- try:
50
- from pip._vendor.chardet import __version__ as chardet_version
51
- except ImportError:
52
- chardet_version = None
53
-
54
-
55
- def check_compatibility(urllib3_version, chardet_version, charset_normalizer_version):
56
- urllib3_version = urllib3_version.split(".")
57
- assert urllib3_version != ["dev"] # Verify urllib3 isn't installed from git.
58
-
59
- # Sometimes, urllib3 only reports its version as 16.1.
60
- if len(urllib3_version) == 2:
61
- urllib3_version.append("0")
62
-
63
- # Check urllib3 for compatibility.
64
- major, minor, patch = urllib3_version # noqa: F811
65
- major, minor, patch = int(major), int(minor), int(patch)
66
- # urllib3 >= 1.21.1, <= 1.26
67
- assert major == 1
68
- assert minor >= 21
69
- assert minor <= 26
70
-
71
- # Check charset_normalizer for compatibility.
72
- if chardet_version:
73
- major, minor, patch = chardet_version.split(".")[:3]
74
- major, minor, patch = int(major), int(minor), int(patch)
75
- # chardet_version >= 3.0.2, < 6.0.0
76
- assert (3, 0, 2) <= (major, minor, patch) < (6, 0, 0)
77
- elif charset_normalizer_version:
78
- major, minor, patch = charset_normalizer_version.split(".")[:3]
79
- major, minor, patch = int(major), int(minor), int(patch)
80
- # charset_normalizer >= 2.0.0 < 4.0.0
81
- assert (2, 0, 0) <= (major, minor, patch) < (4, 0, 0)
82
- else:
83
- raise Exception("You need either charset_normalizer or chardet installed")
84
-
85
-
86
- def _check_cryptography(cryptography_version):
87
- # cryptography < 1.3.4
88
- try:
89
- cryptography_version = list(map(int, cryptography_version.split(".")))
90
- except ValueError:
91
- return
92
-
93
- if cryptography_version < [1, 3, 4]:
94
- warning = "Old version of cryptography ({}) may cause slowdown.".format(
95
- cryptography_version
96
- )
97
- warnings.warn(warning, RequestsDependencyWarning)
98
-
99
-
100
- # Check imported dependencies for compatibility.
101
- try:
102
- check_compatibility(
103
- urllib3.__version__, chardet_version, charset_normalizer_version
104
- )
105
- except (AssertionError, ValueError):
106
- warnings.warn(
107
- "urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
108
- "version!".format(
109
- urllib3.__version__, chardet_version, charset_normalizer_version
110
- ),
111
- RequestsDependencyWarning,
112
- )
113
-
114
- # Attempt to enable urllib3's fallback for SNI support
115
- # if the standard library doesn't support SNI or the
116
- # 'ssl' library isn't available.
117
- try:
118
- # Note: This logic prevents upgrading cryptography on Windows, if imported
119
- # as part of pip.
120
- from pip._internal.utils.compat import WINDOWS
121
- if not WINDOWS:
122
- raise ImportError("pip internals: don't import cryptography on Windows")
123
- try:
124
- import ssl
125
- except ImportError:
126
- ssl = None
127
-
128
- if not getattr(ssl, "HAS_SNI", False):
129
- from pip._vendor.urllib3.contrib import pyopenssl
130
-
131
- pyopenssl.inject_into_urllib3()
132
-
133
- # Check cryptography version
134
- from cryptography import __version__ as cryptography_version
135
-
136
- _check_cryptography(cryptography_version)
137
- except ImportError:
138
- pass
139
-
140
- # urllib3's DependencyWarnings should be silenced.
141
- from pip._vendor.urllib3.exceptions import DependencyWarning
142
-
143
- warnings.simplefilter("ignore", DependencyWarning)
144
-
145
- # Set default logging handler to avoid "No handler found" warnings.
146
- import logging
147
- from logging import NullHandler
148
-
149
- from . import packages, utils
150
- from .__version__ import (
151
- __author__,
152
- __author_email__,
153
- __build__,
154
- __cake__,
155
- __copyright__,
156
- __description__,
157
- __license__,
158
- __title__,
159
- __url__,
160
- __version__,
161
- )
162
- from .api import delete, get, head, options, patch, post, put, request
163
- from .exceptions import (
164
- ConnectionError,
165
- ConnectTimeout,
166
- FileModeWarning,
167
- HTTPError,
168
- JSONDecodeError,
169
- ReadTimeout,
170
- RequestException,
171
- Timeout,
172
- TooManyRedirects,
173
- URLRequired,
174
- )
175
- from .models import PreparedRequest, Request, Response
176
- from .sessions import Session, session
177
- from .status_codes import codes
178
-
179
- logging.getLogger(__name__).addHandler(NullHandler())
180
-
181
- # FileModeWarnings go off per the default.
182
- warnings.simplefilter("default", FileModeWarning, append=True)
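The version normalization that `check_compatibility` above relies on (padding a two-part version such as urllib3's occasional "16.1" to three parts, then comparing numeric tuples rather than strings) can be shown in isolation. The helper name is illustrative only, and unlike the original this sketch does not handle pre-release suffixes:

```python
def parse_version(v: str) -> tuple:
    # Pad "16.1" to "16.1.0", then compare numerically, not lexically.
    parts = v.split(".")
    if len(parts) == 2:
        parts.append("0")
    return tuple(int(p) for p in parts[:3])

assert parse_version("1.26.15") == (1, 26, 15)
assert parse_version("16.1") == (16, 1, 0)
# Range check in the style of the chardet compatibility assertion:
assert (3, 0, 2) <= parse_version("5.2.0") < (6, 0, 0)
```

Numeric tuple comparison is what makes "1.26.15" sort after "1.9.0"; comparing the raw strings would get that wrong.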
 
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/diagram/__init__.py DELETED
@@ -1,642 +0,0 @@
1
- import railroad
2
- import pyparsing
3
- import typing
4
- from typing import (
5
- List,
6
- NamedTuple,
7
- Generic,
8
- TypeVar,
9
- Dict,
10
- Callable,
11
- Set,
12
- Iterable,
13
- )
14
- from jinja2 import Template
15
- from io import StringIO
16
- import inspect
17
-
18
-
19
- jinja2_template_source = """\
20
- <!DOCTYPE html>
21
- <html>
22
- <head>
23
- {% if not head %}
24
- <style type="text/css">
25
- .railroad-heading {
26
- font-family: monospace;
27
- }
28
- </style>
29
- {% else %}
30
- {{ head | safe }}
31
- {% endif %}
32
- </head>
33
- <body>
34
- {{ body | safe }}
35
- {% for diagram in diagrams %}
36
- <div class="railroad-group">
37
- <h1 class="railroad-heading">{{ diagram.title }}</h1>
38
- <div class="railroad-description">{{ diagram.text }}</div>
39
- <div class="railroad-svg">
40
- {{ diagram.svg }}
41
- </div>
42
- </div>
43
- {% endfor %}
44
- </body>
45
- </html>
46
- """
47
-
48
- template = Template(jinja2_template_source)
49
-
50
- # Note: ideally this would be a dataclass, but we're supporting Python 3.5+ so we can't do this yet
51
- NamedDiagram = NamedTuple(
52
- "NamedDiagram",
53
- [("name", str), ("diagram", typing.Optional[railroad.DiagramItem]), ("index", int)],
54
- )
55
- """
56
- A simple structure for associating a name with a railroad diagram
57
- """
58
-
59
- T = TypeVar("T")
60
-
61
-
62
- class EachItem(railroad.Group):
63
- """
64
- Custom railroad item to compose a:
65
- - Group containing a
66
- - OneOrMore containing a
67
- - Choice of the elements in the Each
68
- with the group label indicating that all must be matched
69
- """
70
-
71
- all_label = "[ALL]"
72
-
73
- def __init__(self, *items):
74
- choice_item = railroad.Choice(len(items) - 1, *items)
75
- one_or_more_item = railroad.OneOrMore(item=choice_item)
76
- super().__init__(one_or_more_item, label=self.all_label)
77
-
78
-
79
- class AnnotatedItem(railroad.Group):
80
- """
81
- Simple subclass of Group that creates an annotation label
82
- """
83
-
84
- def __init__(self, label: str, item):
85
- super().__init__(item=item, label="[{}]".format(label) if label else label)
86
-
87
-
88
- class EditablePartial(Generic[T]):
89
- """
90
- Acts like a functools.partial, but can be edited. In other words, it represents a type that hasn't yet been
91
- constructed.
92
- """
93
-
94
- # We need this here because the railroad constructors actually transform the data, so can't be called until the
95
- # entire tree is assembled
96
-
97
- def __init__(self, func: Callable[..., T], args: list, kwargs: dict):
98
- self.func = func
99
- self.args = args
100
- self.kwargs = kwargs
101
-
102
- @classmethod
103
- def from_call(cls, func: Callable[..., T], *args, **kwargs) -> "EditablePartial[T]":
104
- """
105
- If you call this function in the same way that you would call the constructor, it will store the arguments
106
- as you expect. For example EditablePartial.from_call(Fraction, 1, 3)() == Fraction(1, 3)
107
- """
108
- return EditablePartial(func=func, args=list(args), kwargs=kwargs)
109
-
110
- @property
111
- def name(self):
112
- return self.kwargs["name"]
113
-
114
- def __call__(self) -> T:
115
- """
116
- Evaluate the partial and return the result
117
- """
118
- args = self.args.copy()
119
- kwargs = self.kwargs.copy()
120
-
121
- # This is a helpful hack to allow you to specify varargs parameters (e.g. *args) as keyword args (e.g.
122
- # args=['list', 'of', 'things'])
123
- arg_spec = inspect.getfullargspec(self.func)
124
- if arg_spec.varargs in self.kwargs:
125
- args += kwargs.pop(arg_spec.varargs)
126
-
127
- return self.func(*args, **kwargs)
128
-
129
-
130
- def railroad_to_html(diagrams: List[NamedDiagram], **kwargs) -> str:
131
- """
132
- Given a list of NamedDiagram, produce a single HTML string that visualises those diagrams
133
- :params kwargs: kwargs to be passed in to the template
134
- """
135
- data = []
136
- for diagram in diagrams:
137
- if diagram.diagram is None:
138
- continue
139
- io = StringIO()
140
- diagram.diagram.writeSvg(io.write)
141
- title = diagram.name
142
- if diagram.index == 0:
143
- title += " (root)"
144
- data.append({"title": title, "text": "", "svg": io.getvalue()})
145
-
146
- return template.render(diagrams=data, **kwargs)
147
-
148
-
149
- def resolve_partial(partial: "EditablePartial[T]") -> T:
150
- """
151
- Recursively resolves a collection of Partials into whatever type they are
152
- """
153
- if isinstance(partial, EditablePartial):
154
- partial.args = resolve_partial(partial.args)
155
- partial.kwargs = resolve_partial(partial.kwargs)
156
- return partial()
157
- elif isinstance(partial, list):
158
- return [resolve_partial(x) for x in partial]
159
- elif isinstance(partial, dict):
160
- return {key: resolve_partial(x) for key, x in partial.items()}
161
- else:
162
- return partial
163
-
164
-
165
- def to_railroad(
166
- element: pyparsing.ParserElement,
167
- diagram_kwargs: typing.Optional[dict] = None,
168
- vertical: int = 3,
169
- show_results_names: bool = False,
170
- show_groups: bool = False,
171
- ) -> List[NamedDiagram]:
172
- """
173
- Convert a pyparsing element tree into a list of diagrams. This is the recommended entrypoint to diagram
174
- creation if you want to access the Railroad tree before it is converted to HTML
175
- :param element: base element of the parser being diagrammed
176
- :param diagram_kwargs: kwargs to pass to the Diagram() constructor
177
- :param vertical: (optional) - int - limit at which number of alternatives should be
178
- shown vertically instead of horizontally
179
- :param show_results_names - bool to indicate whether results name annotations should be
180
- included in the diagram
181
- :param show_groups - bool to indicate whether groups should be highlighted with an unlabeled
182
- surrounding box
183
- """
184
- # Convert the whole tree underneath the root
185
- lookup = ConverterState(diagram_kwargs=diagram_kwargs or {})
186
- _to_diagram_element(
187
- element,
188
- lookup=lookup,
189
- parent=None,
190
- vertical=vertical,
191
- show_results_names=show_results_names,
192
- show_groups=show_groups,
193
- )
194
-
195
- root_id = id(element)
196
- # Convert the root if it hasn't been already
197
- if root_id in lookup:
198
- if not element.customName:
199
- lookup[root_id].name = ""
200
- lookup[root_id].mark_for_extraction(root_id, lookup, force=True)
201
-
202
- # Now that we're finished, we can convert from intermediate structures into Railroad elements
203
- diags = list(lookup.diagrams.values())
204
- if len(diags) > 1:
205
- # collapse out duplicate diags with the same name
206
- seen = set()
207
- deduped_diags = []
208
- for d in diags:
209
- # don't extract SkipTo elements, they are uninformative as subdiagrams
210
- if d.name == "...":
211
- continue
212
- if d.name is not None and d.name not in seen:
213
- seen.add(d.name)
214
- deduped_diags.append(d)
215
- resolved = [resolve_partial(partial) for partial in deduped_diags]
216
- else:
217
- # special case - if just one diagram, always display it, even if
218
- # it has no name
219
- resolved = [resolve_partial(partial) for partial in diags]
220
- return sorted(resolved, key=lambda diag: diag.index)
221
-
222
-
223
- def _should_vertical(
224
- specification: int, exprs: Iterable[pyparsing.ParserElement]
225
- ) -> bool:
226
- """
227
- Returns true if we should return a vertical list of elements
228
- """
229
- if specification is None:
230
- return False
231
- else:
232
- return len(_visible_exprs(exprs)) >= specification
233
-
234
-
235
- class ElementState:
236
- """
237
- State recorded for an individual pyparsing Element
238
- """
239
-
240
- # Note: this should be a dataclass, but we have to support Python 3.5
241
- def __init__(
242
- self,
243
- element: pyparsing.ParserElement,
244
- converted: EditablePartial,
245
- parent: EditablePartial,
246
- number: int,
247
- name: str = None,
248
- parent_index: typing.Optional[int] = None,
249
- ):
250
- #: The pyparsing element that this represents
251
- self.element: pyparsing.ParserElement = element
252
- #: The name of the element
253
- self.name: typing.Optional[str] = name
254
- #: The output Railroad element in an unconverted state
255
- self.converted: EditablePartial = converted
256
- #: The parent Railroad element, which we store so that we can extract this if it's duplicated
257
- self.parent: EditablePartial = parent
258
- #: The order in which we found this element, used for sorting diagrams if this is extracted into a diagram
259
- self.number: int = number
260
- #: The index of this inside its parent
261
- self.parent_index: typing.Optional[int] = parent_index
262
- #: If true, we should extract this out into a subdiagram
263
- self.extract: bool = False
264
- #: If true, all of this element's children have been filled out
265
- self.complete: bool = False
266
-
267
- def mark_for_extraction(
268
- self, el_id: int, state: "ConverterState", name: str = None, force: bool = False
269
- ):
270
- """
271
- Called when this instance has been seen twice, and thus should eventually be extracted into a sub-diagram
272
- :param el_id: id of the element
273
- :param state: element/diagram state tracker
274
- :param name: name to use for this element's text
275
- :param force: If true, force extraction now, regardless of the state of this. Only useful for extracting the
276
- root element when we know we're finished
277
- """
278
- self.extract = True
279
-
280
- # Set the name
281
- if not self.name:
282
- if name:
283
- # Allow forcing a custom name
284
- self.name = name
285
- elif self.element.customName:
286
- self.name = self.element.customName
287
- else:
288
- self.name = ""
289
-
290
- # Just because this is marked for extraction doesn't mean we can do it yet. We may have to wait for children
291
- # to be added
292
- # Also, if this is just a string literal etc, don't bother extracting it
293
- if force or (self.complete and _worth_extracting(self.element)):
294
- state.extract_into_diagram(el_id)
295
-
296
-
297
- class ConverterState:
298
- """
299
- Stores some state that persists between recursions into the element tree
300
- """
301
-
302
- def __init__(self, diagram_kwargs: typing.Optional[dict] = None):
303
- #: A dictionary mapping ParserElements to state relating to them
304
- self._element_diagram_states: Dict[int, ElementState] = {}
305
- #: A dictionary mapping ParserElement IDs to subdiagrams generated from them
306
- self.diagrams: Dict[int, EditablePartial[NamedDiagram]] = {}
307
- #: The index of the next unnamed element
308
- self.unnamed_index: int = 1
309
- #: The index of the next element. This is used for sorting
310
- self.index: int = 0
311
- #: Shared kwargs that are used to customize the construction of diagrams
312
- self.diagram_kwargs: dict = diagram_kwargs or {}
313
- self.extracted_diagram_names: Set[str] = set()
314
-
315
- def __setitem__(self, key: int, value: ElementState):
316
- self._element_diagram_states[key] = value
317
-
318
- def __getitem__(self, key: int) -> ElementState:
319
- return self._element_diagram_states[key]
320
-
321
- def __delitem__(self, key: int):
322
- del self._element_diagram_states[key]
323
-
324
- def __contains__(self, key: int):
325
- return key in self._element_diagram_states
326
-
327
- def generate_unnamed(self) -> int:
328
- """
329
- Generate a number used in the name of an otherwise unnamed diagram
330
- """
331
- self.unnamed_index += 1
332
- return self.unnamed_index
333
-
334
- def generate_index(self) -> int:
335
- """
336
- Generate a number used to index a diagram
337
- """
338
- self.index += 1
339
- return self.index
340
-
341
- def extract_into_diagram(self, el_id: int):
342
- """
343
- Used when we encounter the same token twice in the same tree. When this
344
- happens, we replace all instances of that token with a terminal, and
345
- create a new subdiagram for the token
346
- """
347
- position = self[el_id]
348
-
349
- # Replace the original definition of this element with a regular block
350
- if position.parent:
351
- ret = EditablePartial.from_call(railroad.NonTerminal, text=position.name)
352
- if "item" in position.parent.kwargs:
353
- position.parent.kwargs["item"] = ret
354
- elif "items" in position.parent.kwargs:
355
- position.parent.kwargs["items"][position.parent_index] = ret
356
-
357
- # If the element we're extracting is a group, skip to its content but keep the title
358
- if position.converted.func == railroad.Group:
359
- content = position.converted.kwargs["item"]
360
- else:
361
- content = position.converted
362
-
363
- self.diagrams[el_id] = EditablePartial.from_call(
364
- NamedDiagram,
365
- name=position.name,
366
- diagram=EditablePartial.from_call(
367
- railroad.Diagram, content, **self.diagram_kwargs
368
- ),
369
- index=position.number,
370
- )
371
-
372
- del self[el_id]
373
-
374
-
375
- def _worth_extracting(element: pyparsing.ParserElement) -> bool:
376
- """
377
- Returns true if this element is worth having its own sub-diagram. Simply, if any of its children
378
- themselves have children, then its complex enough to extract
379
- """
380
- children = element.recurse()
381
- return any(child.recurse() for child in children)
382
-
383
-
384
- def _apply_diagram_item_enhancements(fn):
385
- """
386
- decorator to ensure enhancements to a diagram item (such as results name annotations)
387
- get applied on return from _to_diagram_element (we do this since there are several
388
- returns in _to_diagram_element)
389
- """
390
-
391
- def _inner(
392
- element: pyparsing.ParserElement,
393
- parent: typing.Optional[EditablePartial],
394
- lookup: ConverterState = None,
395
- vertical: int = None,
396
- index: int = 0,
397
- name_hint: str = None,
398
- show_results_names: bool = False,
399
- show_groups: bool = False,
400
- ) -> typing.Optional[EditablePartial]:
401
-
402
- ret = fn(
403
- element,
404
- parent,
405
- lookup,
406
- vertical,
407
- index,
408
- name_hint,
409
- show_results_names,
410
- show_groups,
411
- )
412
-
413
- # apply annotation for results name, if present
414
- if show_results_names and ret is not None:
415
- element_results_name = element.resultsName
416
- if element_results_name:
417
- # add "*" to indicate if this is a "list all results" name
418
- element_results_name += "" if element.modalResults else "*"
419
- ret = EditablePartial.from_call(
420
- railroad.Group, item=ret, label=element_results_name
421
- )
422
-
423
- return ret
424
-
425
- return _inner
426
-
427
-
428
- def _visible_exprs(exprs: Iterable[pyparsing.ParserElement]):
429
- non_diagramming_exprs = (
430
- pyparsing.ParseElementEnhance,
431
- pyparsing.PositionToken,
432
- pyparsing.And._ErrorStop,
433
- )
434
- return [
435
- e
436
- for e in exprs
437
- if not (e.customName or e.resultsName or isinstance(e, non_diagramming_exprs))
438
- ]
439
-
440
-
441
- @_apply_diagram_item_enhancements
442
- def _to_diagram_element(
443
- element: pyparsing.ParserElement,
444
- parent: typing.Optional[EditablePartial],
445
- lookup: ConverterState = None,
446
- vertical: int = None,
447
- index: int = 0,
448
- name_hint: str = None,
449
- show_results_names: bool = False,
450
- show_groups: bool = False,
451
- ) -> typing.Optional[EditablePartial]:
452
- """
453
- Recursively converts a PyParsing Element to a railroad Element
454
- :param lookup: The shared converter state that keeps track of useful things
455
- :param index: The index of this element within the parent
456
- :param parent: The parent of this element in the output tree
457
- :param vertical: Controls at what point we make a list of elements vertical. If this is an integer (the default),
458
- it sets the threshold of the number of items before we go vertical. If True, always go vertical, if False, never
459
- do so
460
- :param name_hint: If provided, this will override the generated name
461
- :param show_results_names: bool flag indicating whether to add annotations for results names
462
- :returns: The converted version of the input element, but as a Partial that hasn't yet been constructed
463
- :param show_groups: bool flag indicating whether to show groups using bounding box
464
- """
465
- exprs = element.recurse()
466
- name = name_hint or element.customName or element.__class__.__name__
467
-
468
- # Python's id() is used to provide a unique identifier for elements
469
- el_id = id(element)
470
-
471
- element_results_name = element.resultsName
472
-
473
- # Here we basically bypass processing certain wrapper elements if they contribute nothing to the diagram
474
- if not element.customName:
475
- if isinstance(
476
- element,
477
- (
478
- # pyparsing.TokenConverter,
479
- # pyparsing.Forward,
480
- pyparsing.Located,
481
- ),
482
- ):
483
- # However, if this element has a useful custom name, and its child does not, we can pass it on to the child
484
- if exprs:
485
- if not exprs[0].customName:
486
- propagated_name = name
487
- else:
488
- propagated_name = None
489
-
490
- return _to_diagram_element(
491
- element.expr,
492
- parent=parent,
493
- lookup=lookup,
494
- vertical=vertical,
495
- index=index,
496
- name_hint=propagated_name,
497
- show_results_names=show_results_names,
498
- show_groups=show_groups,
499
- )
500
-
501
- # If the element isn't worth extracting, we always treat it as the first time we say it
502
- if _worth_extracting(element):
503
- if el_id in lookup:
504
- # If we've seen this element exactly once before, we are only just now finding out that it's a duplicate,
505
- # so we have to extract it into a new diagram.
506
- looked_up = lookup[el_id]
507
- looked_up.mark_for_extraction(el_id, lookup, name=name_hint)
508
- ret = EditablePartial.from_call(railroad.NonTerminal, text=looked_up.name)
509
- return ret
510
-
511
- elif el_id in lookup.diagrams:
512
- # If we have seen the element at least twice before, and have already extracted it into a subdiagram, we
513
- # just put in a marker element that refers to the sub-diagram
514
- ret = EditablePartial.from_call(
515
- railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"]
516
- )
517
- return ret
518
-
519
- # Recursively convert child elements
520
- # Here we find the most relevant Railroad element for matching pyparsing Element
521
- # We use ``items=[]`` here to hold the place for where the child elements will go once created
522
- if isinstance(element, pyparsing.And):
523
- # detect And's created with ``expr*N`` notation - for these use a OneOrMore with a repeat
524
- # (all will have the same name, and resultsName)
525
- if not exprs:
526
- return None
527
- if len(set((e.name, e.resultsName) for e in exprs)) == 1:
528
- ret = EditablePartial.from_call(
529
- railroad.OneOrMore, item="", repeat=str(len(exprs))
530
- )
531
- elif _should_vertical(vertical, exprs):
532
- ret = EditablePartial.from_call(railroad.Stack, items=[])
533
- else:
534
- ret = EditablePartial.from_call(railroad.Sequence, items=[])
535
- elif isinstance(element, (pyparsing.Or, pyparsing.MatchFirst)):
536
- if not exprs:
537
- return None
538
- if _should_vertical(vertical, exprs):
539
- ret = EditablePartial.from_call(railroad.Choice, 0, items=[])
540
- else:
541
- ret = EditablePartial.from_call(railroad.HorizontalChoice, items=[])
542
- elif isinstance(element, pyparsing.Each):
543
- if not exprs:
544
- return None
545
- ret = EditablePartial.from_call(EachItem, items=[])
546
- elif isinstance(element, pyparsing.NotAny):
547
- ret = EditablePartial.from_call(AnnotatedItem, label="NOT", item="")
548
- elif isinstance(element, pyparsing.FollowedBy):
549
- ret = EditablePartial.from_call(AnnotatedItem, label="LOOKAHEAD", item="")
550
- elif isinstance(element, pyparsing.PrecededBy):
551
- ret = EditablePartial.from_call(AnnotatedItem, label="LOOKBEHIND", item="")
552
- elif isinstance(element, pyparsing.Group):
553
- if show_groups:
554
- ret = EditablePartial.from_call(AnnotatedItem, label="", item="")
555
- else:
556
- ret = EditablePartial.from_call(railroad.Group, label="", item="")
557
- elif isinstance(element, pyparsing.TokenConverter):
558
- ret = EditablePartial.from_call(
559
- AnnotatedItem, label=type(element).__name__.lower(), item=""
560
- )
561
- elif isinstance(element, pyparsing.Opt):
562
- ret = EditablePartial.from_call(railroad.Optional, item="")
563
- elif isinstance(element, pyparsing.OneOrMore):
564
- ret = EditablePartial.from_call(railroad.OneOrMore, item="")
565
- elif isinstance(element, pyparsing.ZeroOrMore):
566
- ret = EditablePartial.from_call(railroad.ZeroOrMore, item="")
567
- elif isinstance(element, pyparsing.Group):
568
- ret = EditablePartial.from_call(
569
- railroad.Group, item=None, label=element_results_name
570
- )
571
- elif isinstance(element, pyparsing.Empty) and not element.customName:
572
- # Skip unnamed "Empty" elements
573
- ret = None
574
- elif len(exprs) > 1:
575
- ret = EditablePartial.from_call(railroad.Sequence, items=[])
576
- elif len(exprs) > 0 and not element_results_name:
577
- ret = EditablePartial.from_call(railroad.Group, item="", label=name)
578
- else:
579
- terminal = EditablePartial.from_call(railroad.Terminal, element.defaultName)
580
- ret = terminal
581
-
582
- if ret is None:
583
- return
584
-
585
- # Indicate this element's position in the tree so we can extract it if necessary
586
- lookup[el_id] = ElementState(
587
- element=element,
588
- converted=ret,
589
- parent=parent,
590
- parent_index=index,
591
- number=lookup.generate_index(),
592
- )
593
- if element.customName:
594
- lookup[el_id].mark_for_extraction(el_id, lookup, element.customName)
595
-
596
- i = 0
597
- for expr in exprs:
598
- # Add a placeholder index in case we have to extract the child before we even add it to the parent
599
- if "items" in ret.kwargs:
600
- ret.kwargs["items"].insert(i, None)
601
-
602
- item = _to_diagram_element(
603
- expr,
604
- parent=ret,
605
- lookup=lookup,
606
- vertical=vertical,
607
- index=i,
608
- show_results_names=show_results_names,
609
- show_groups=show_groups,
610
- )
611
-
612
- # Some elements don't need to be shown in the diagram
613
- if item is not None:
614
- if "item" in ret.kwargs:
615
- ret.kwargs["item"] = item
616
- elif "items" in ret.kwargs:
617
- # If we've already extracted the child, don't touch this index, since it's occupied by a nonterminal
618
- ret.kwargs["items"][i] = item
619
- i += 1
620
- elif "items" in ret.kwargs:
621
- # If we're supposed to skip this element, remove it from the parent
622
- del ret.kwargs["items"][i]
623
-
624
- # If all this items children are none, skip this item
625
- if ret and (
626
- ("items" in ret.kwargs and len(ret.kwargs["items"]) == 0)
627
- or ("item" in ret.kwargs and ret.kwargs["item"] is None)
628
- ):
629
- ret = EditablePartial.from_call(railroad.Terminal, name)
630
-
631
- # Mark this element as "complete", ie it has all of its children
632
- if el_id in lookup:
633
- lookup[el_id].complete = True
634
-
635
- if el_id in lookup and lookup[el_id].extract and lookup[el_id].complete:
636
- lookup.extract_into_diagram(el_id)
637
- if ret is not None:
638
- ret = EditablePartial.from_call(
639
- railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"]
640
- )
641
-
642
- return ret
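The deleted converter above builds every railroad node through `EditablePartial.from_call`, a deferred-call wrapper whose kwargs stay mutable until the node is finally constructed. A minimal self-contained sketch of that pattern (an illustration of the idea, not pyparsing's actual implementation):

```python
class EditablePartial:
    """Deferred call whose args and kwargs stay editable until call time."""

    def __init__(self, func, args, kwargs):
        self.func = func
        self.args = list(args)
        self.kwargs = kwargs

    @classmethod
    def from_call(cls, func, *args, **kwargs):
        return cls(func, args, kwargs)

    def __call__(self):
        # Only now is the wrapped constructor actually invoked.
        return self.func(*self.args, **self.kwargs)


# Build a node now, fill in its children later -- the same trick the
# converter uses with its ``items=[]`` placeholders.
node = EditablePartial.from_call(dict, items=[])
node.kwargs["items"].append("child")
assert node() == {"items": ["child"]}
```

This is why the converter can insert `None` placeholders into `ret.kwargs["items"]` and overwrite them after recursing: nothing is frozen until the partial is finally called.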
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/config/_validate_pyproject/__init__.py DELETED
@@ -1,34 +0,0 @@
- from functools import reduce
- from typing import Any, Callable, Dict
-
- from . import formats
- from .error_reporting import detailed_errors, ValidationError
- from .extra_validations import EXTRA_VALIDATIONS
- from .fastjsonschema_exceptions import JsonSchemaException, JsonSchemaValueException
- from .fastjsonschema_validations import validate as _validate
-
- __all__ = [
-     "validate",
-     "FORMAT_FUNCTIONS",
-     "EXTRA_VALIDATIONS",
-     "ValidationError",
-     "JsonSchemaException",
-     "JsonSchemaValueException",
- ]
-
-
- FORMAT_FUNCTIONS: Dict[str, Callable[[str], bool]] = {
-     fn.__name__.replace("_", "-"): fn
-     for fn in formats.__dict__.values()
-     if callable(fn) and not fn.__name__.startswith("_")
- }
-
-
- def validate(data: Any) -> bool:
-     """Validate the given ``data`` object using JSON Schema
-     This function raises ``ValidationError`` if ``data`` is invalid.
-     """
-     with detailed_errors():
-         _validate(data, custom_formats=FORMAT_FUNCTIONS)
-     reduce(lambda acc, fn: fn(acc), EXTRA_VALIDATIONS, data)
-     return True
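The deleted module's `FORMAT_FUNCTIONS` table collects every public callable from its `formats` module and re-keys it with dashes so the names match JSON Schema "format" identifiers. A standalone sketch of the same comprehension, with stand-in checker functions:

```python
# Stand-ins for the public checkers the real ``formats`` module would export.
def uint8(value: str) -> bool:
    """Accept decimal strings in the 0..255 range."""
    return value.isdigit() and int(value) < 256


def python_identifier(value: str) -> bool:
    return value.isidentifier()


def _helper(value: str) -> bool:  # underscore prefix: filtered out below
    return True


# Same shape as the deleted module: public callables only,
# with underscores turned into dashes to match JSON Schema format names.
FORMAT_FUNCTIONS = {
    fn.__name__.replace("_", "-"): fn
    for fn in (uint8, python_identifier, _helper)
    if callable(fn) and not fn.__name__.startswith("_")
}

assert set(FORMAT_FUNCTIONS) == {"uint8", "python-identifier"}
assert FORMAT_FUNCTIONS["python-identifier"]("valid_name")
```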
 
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/api.py DELETED
@@ -1,235 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- import copy
- import logging
- import os
- import torch
- from caffe2.proto import caffe2_pb2
- from torch import nn
-
- from detectron2.config import CfgNode
- from detectron2.utils.file_io import PathManager
-
- from .caffe2_inference import ProtobufDetectionModel
- from .caffe2_modeling import META_ARCH_CAFFE2_EXPORT_TYPE_MAP, convert_batched_inputs_to_c2_format
- from .shared import get_pb_arg_vali, get_pb_arg_vals, save_graph
-
- __all__ = [
-     "add_export_config",
-     "Caffe2Model",
-     "Caffe2Tracer",
- ]
-
-
- def add_export_config(cfg):
-     return cfg
-
-
- class Caffe2Tracer:
-     """
-     Make a detectron2 model traceable with Caffe2 operators.
-     This class creates a traceable version of a detectron2 model which:
-
-     1. Rewrites parts of the model using ops in Caffe2. Note that some ops do
-        not have a GPU implementation in Caffe2.
-     2. Removes post-processing and only produces raw layer outputs.
-
-     After making a traceable model, the class provides methods to export such a
-     model to different deployment formats.
-     Exported graphs produced by this class take two input tensors:
-
-     1. (1, C, H, W) float "data" which is an image (usually in [0, 255]).
-        (H, W) often has to be padded to a multiple of 32 (depending on the model
-        architecture).
-     2. 1x3 float "im_info", each row of which is (height, width, 1.0).
-        Height and width are true image shapes before padding.
-
-     The class currently only supports models using builtin meta architectures.
-     Batch inference is not supported, and contributions are welcome.
-     """
-
-     def __init__(self, cfg: CfgNode, model: nn.Module, inputs):
-         """
-         Args:
-             cfg (CfgNode): a detectron2 config used to construct a caffe2-compatible model.
-             model (nn.Module): An original pytorch model. Must be among a few official models
-                 in detectron2 that can be converted to become caffe2-compatible automatically.
-                 Weights have to be already loaded to this model.
-             inputs: sample inputs that the given model takes for inference.
-                 Will be used to trace the model. For most models, random inputs with
-                 no detected objects will not work as they lead to wrong traces.
-         """
-         assert isinstance(cfg, CfgNode), cfg
-         assert isinstance(model, torch.nn.Module), type(model)
-
-         # TODO make it support custom models, by passing in c2 model directly
-         C2MetaArch = META_ARCH_CAFFE2_EXPORT_TYPE_MAP[cfg.MODEL.META_ARCHITECTURE]
-         self.traceable_model = C2MetaArch(cfg, copy.deepcopy(model))
-         self.inputs = inputs
-         self.traceable_inputs = self.traceable_model.get_caffe2_inputs(inputs)
-
-     def export_caffe2(self):
-         """
-         Export the model to Caffe2's protobuf format.
-         The returned object can be saved with its :meth:`.save_protobuf()` method.
-         The result can be loaded and executed using Caffe2 runtime.
-
-         Returns:
-             :class:`Caffe2Model`
-         """
-         from .caffe2_export import export_caffe2_detection_model
-
-         predict_net, init_net = export_caffe2_detection_model(
-             self.traceable_model, self.traceable_inputs
-         )
-         return Caffe2Model(predict_net, init_net)
-
-     def export_onnx(self):
-         """
-         Export the model to ONNX format.
-         Note that the exported model contains custom ops only available in caffe2, therefore it
-         cannot be directly executed by another runtime (such as onnxruntime or TensorRT).
-         Post-processing or transformation passes may be applied on the model to accommodate
-         different runtimes, but we currently do not provide support for them.
-
-         Returns:
-             onnx.ModelProto: an onnx model.
-         """
-         from .caffe2_export import export_onnx_model as export_onnx_model_impl
-
-         return export_onnx_model_impl(self.traceable_model, (self.traceable_inputs,))
-
-     def export_torchscript(self):
-         """
-         Export the model to a ``torch.jit.TracedModule`` by tracing.
-         The returned object can be saved to a file by ``.save()``.
-
-         Returns:
-             torch.jit.TracedModule: a torch TracedModule
-         """
-         logger = logging.getLogger(__name__)
-         logger.info("Tracing the model with torch.jit.trace ...")
-         with torch.no_grad():
-             return torch.jit.trace(self.traceable_model, (self.traceable_inputs,))
-
-
- class Caffe2Model(nn.Module):
-     """
-     A wrapper around the traced model in Caffe2's protobuf format.
-     The exported graph has different inputs/outputs from the original Pytorch
-     model, as explained in :class:`Caffe2Tracer`. This class wraps around the
-     exported graph to simulate the same interface as the original Pytorch model.
-     It also provides functions to save/load models in Caffe2's format.
-
-     Examples:
-     ::
-         c2_model = Caffe2Tracer(cfg, torch_model, inputs).export_caffe2()
-         inputs = [{"image": img_tensor_CHW}]
-         outputs = c2_model(inputs)
-         orig_outputs = torch_model(inputs)
-     """
-
-     def __init__(self, predict_net, init_net):
-         super().__init__()
-         self.eval()  # always in eval mode
-         self._predict_net = predict_net
-         self._init_net = init_net
-         self._predictor = None
-
-     __init__.__HIDE_SPHINX_DOC__ = True
-
-     @property
-     def predict_net(self):
-         """
-         caffe2.core.Net: the underlying caffe2 predict net
-         """
-         return self._predict_net
-
-     @property
-     def init_net(self):
-         """
-         caffe2.core.Net: the underlying caffe2 init net
-         """
-         return self._init_net
-
-     def save_protobuf(self, output_dir):
-         """
-         Save the model as caffe2's protobuf format.
-         It saves the following files:
-
-         * "model.pb": definition of the graph. Can be visualized with
-           tools like `netron <https://github.com/lutzroeder/netron>`_.
-         * "model_init.pb": model parameters
-         * "model.pbtxt": human-readable definition of the graph. Not
-           needed for deployment.
-
-         Args:
-             output_dir (str): the output directory to save protobuf files.
-         """
-         logger = logging.getLogger(__name__)
-         logger.info("Saving model to {} ...".format(output_dir))
-         if not PathManager.exists(output_dir):
-             PathManager.mkdirs(output_dir)
-
-         with PathManager.open(os.path.join(output_dir, "model.pb"), "wb") as f:
-             f.write(self._predict_net.SerializeToString())
-         with PathManager.open(os.path.join(output_dir, "model.pbtxt"), "w") as f:
-             f.write(str(self._predict_net))
-         with PathManager.open(os.path.join(output_dir, "model_init.pb"), "wb") as f:
-             f.write(self._init_net.SerializeToString())
-
-     def save_graph(self, output_file, inputs=None):
-         """
-         Save the graph as SVG format.
-
-         Args:
-             output_file (str): a SVG file
-             inputs: optional inputs given to the model.
-                 If given, the inputs will be used to run the graph to record
-                 shape of every tensor. The shape information will be
-                 saved together with the graph.
-         """
-         from .caffe2_export import run_and_save_graph
-
-         if inputs is None:
-             save_graph(self._predict_net, output_file, op_only=False)
-         else:
-             size_divisibility = get_pb_arg_vali(self._predict_net, "size_divisibility", 0)
-             device = get_pb_arg_vals(self._predict_net, "device", b"cpu").decode("ascii")
-             inputs = convert_batched_inputs_to_c2_format(inputs, size_divisibility, device)
-             inputs = [x.cpu().numpy() for x in inputs]
-             run_and_save_graph(self._predict_net, self._init_net, inputs, output_file)
-
-     @staticmethod
-     def load_protobuf(dir):
-         """
-         Args:
-             dir (str): a directory used to save Caffe2Model with
-                 :meth:`save_protobuf`.
-                 The files "model.pb" and "model_init.pb" are needed.
-
-         Returns:
-             Caffe2Model: the caffe2 model loaded from this directory.
-         """
-         predict_net = caffe2_pb2.NetDef()
-         with PathManager.open(os.path.join(dir, "model.pb"), "rb") as f:
-             predict_net.ParseFromString(f.read())
-
-         init_net = caffe2_pb2.NetDef()
-         with PathManager.open(os.path.join(dir, "model_init.pb"), "rb") as f:
-             init_net.ParseFromString(f.read())
-
-         return Caffe2Model(predict_net, init_net)
-
-     def __call__(self, inputs):
-         """
-         An interface that wraps around a Caffe2 model and mimics detectron2's models'
-         input/output format. See details about the format at :doc:`/tutorials/models`.
-         This is used to compare the outputs of the caffe2 model with its original torch model.
-
-         Due to the extra conversion between Pytorch/Caffe2, this method is not meant for
-         benchmarking. Because of the conversion, this method also has a dependency
-         on detectron2 in order to convert to detectron2's output format.
-         """
-         if self._predictor is None:
-             self._predictor = ProtobufDetectionModel(self._predict_net, self._init_net)
-         return self._predictor(inputs)
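`Caffe2Model.__call__` above builds its `ProtobufDetectionModel` lazily on first call and caches it for every later call. A standalone sketch of that lazy-initialization idiom, with the heavy predictor replaced by a trivial stand-in:

```python
class LazyModel:
    """Construct an expensive predictor only when it is first needed."""

    def __init__(self):
        self._predictor = None
        self.builds = 0  # count how many times the heavy object is made

    def _build_predictor(self):
        self.builds += 1
        return lambda x: x * 2  # stand-in for the real predictor

    def __call__(self, inputs):
        # Build on first call, then reuse the cached instance.
        if self._predictor is None:
            self._predictor = self._build_predictor()
        return self._predictor(inputs)


m = LazyModel()
assert m(3) == 6 and m(5) == 10
assert m.builds == 1  # built exactly once despite two calls
```

The payoff is that constructing the wrapper stays cheap; the costly runtime object only exists if inference is actually requested.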
 
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/deploy/export_model.py DELETED
@@ -1,235 +0,0 @@
- #!/usr/bin/env python
- # Copyright (c) Facebook, Inc. and its affiliates.
- import argparse
- import os
- from typing import Dict, List, Tuple
- import torch
- from torch import Tensor, nn
-
- import detectron2.data.transforms as T
- from detectron2.checkpoint import DetectionCheckpointer
- from detectron2.config import get_cfg
- from detectron2.data import build_detection_test_loader, detection_utils
- from detectron2.evaluation import COCOEvaluator, inference_on_dataset, print_csv_format
- from detectron2.export import TracingAdapter, dump_torchscript_IR, scripting_with_instances
- from detectron2.modeling import GeneralizedRCNN, RetinaNet, build_model
- from detectron2.modeling.postprocessing import detector_postprocess
- from detectron2.projects.point_rend import add_pointrend_config
- from detectron2.structures import Boxes
- from detectron2.utils.env import TORCH_VERSION
- from detectron2.utils.file_io import PathManager
- from detectron2.utils.logger import setup_logger
-
-
- def setup_cfg(args):
-     cfg = get_cfg()
-     # cuda context is initialized before creating dataloader, so we don't fork anymore
-     cfg.DATALOADER.NUM_WORKERS = 0
-     add_pointrend_config(cfg)
-     cfg.merge_from_file(args.config_file)
-     cfg.merge_from_list(args.opts)
-     cfg.freeze()
-     return cfg
-
-
- def export_caffe2_tracing(cfg, torch_model, inputs):
-     from detectron2.export import Caffe2Tracer
-
-     tracer = Caffe2Tracer(cfg, torch_model, inputs)
-     if args.format == "caffe2":
-         caffe2_model = tracer.export_caffe2()
-         caffe2_model.save_protobuf(args.output)
-         # draw the caffe2 graph
-         caffe2_model.save_graph(os.path.join(args.output, "model.svg"), inputs=inputs)
-         return caffe2_model
-     elif args.format == "onnx":
-         import onnx
-
-         onnx_model = tracer.export_onnx()
-         onnx.save(onnx_model, os.path.join(args.output, "model.onnx"))
-     elif args.format == "torchscript":
-         ts_model = tracer.export_torchscript()
-         with PathManager.open(os.path.join(args.output, "model.ts"), "wb") as f:
-             torch.jit.save(ts_model, f)
-         dump_torchscript_IR(ts_model, args.output)
-
-
- # experimental. API not yet final
- def export_scripting(torch_model):
-     assert TORCH_VERSION >= (1, 8)
-     fields = {
-         "proposal_boxes": Boxes,
-         "objectness_logits": Tensor,
-         "pred_boxes": Boxes,
-         "scores": Tensor,
-         "pred_classes": Tensor,
-         "pred_masks": Tensor,
-         "pred_keypoints": torch.Tensor,
-         "pred_keypoint_heatmaps": torch.Tensor,
-     }
-     assert args.format == "torchscript", "Scripting only supports torchscript format."
-
-     class ScriptableAdapterBase(nn.Module):
-         # Use this adapter to workaround https://github.com/pytorch/pytorch/issues/46944
-         # by not returning instances but dicts. Otherwise the exported model is not deployable
-         def __init__(self):
-             super().__init__()
-             self.model = torch_model
-             self.eval()
-
-     if isinstance(torch_model, GeneralizedRCNN):
-
-         class ScriptableAdapter(ScriptableAdapterBase):
-             def forward(self, inputs: Tuple[Dict[str, torch.Tensor]]) -> List[Dict[str, Tensor]]:
-                 instances = self.model.inference(inputs, do_postprocess=False)
-                 return [i.get_fields() for i in instances]
-
-     else:
-
-         class ScriptableAdapter(ScriptableAdapterBase):
-             def forward(self, inputs: Tuple[Dict[str, torch.Tensor]]) -> List[Dict[str, Tensor]]:
-                 instances = self.model(inputs)
-                 return [i.get_fields() for i in instances]
-
-     ts_model = scripting_with_instances(ScriptableAdapter(), fields)
-     with PathManager.open(os.path.join(args.output, "model.ts"), "wb") as f:
-         torch.jit.save(ts_model, f)
-     dump_torchscript_IR(ts_model, args.output)
-     # TODO inference in Python now missing postprocessing glue code
-     return None
-
-
- # experimental. API not yet final
- def export_tracing(torch_model, inputs):
-     assert TORCH_VERSION >= (1, 8)
-     image = inputs[0]["image"]
-     inputs = [{"image": image}]  # remove other unused keys
-
-     if isinstance(torch_model, GeneralizedRCNN):
-
-         def inference(model, inputs):
-             # use do_postprocess=False so it returns ROI mask
-             inst = model.inference(inputs, do_postprocess=False)[0]
-             return [{"instances": inst}]
-
-     else:
-         inference = None  # assume that we just call the model directly
-
-     traceable_model = TracingAdapter(torch_model, inputs, inference)
-
-     if args.format == "torchscript":
-         ts_model = torch.jit.trace(traceable_model, (image,))
-         with PathManager.open(os.path.join(args.output, "model.ts"), "wb") as f:
-             torch.jit.save(ts_model, f)
-         dump_torchscript_IR(ts_model, args.output)
-     elif args.format == "onnx":
-         with PathManager.open(os.path.join(args.output, "model.onnx"), "wb") as f:
-             torch.onnx.export(traceable_model, (image,), f, opset_version=11)
-     logger.info("Inputs schema: " + str(traceable_model.inputs_schema))
-     logger.info("Outputs schema: " + str(traceable_model.outputs_schema))
-
-     if args.format != "torchscript":
-         return None
-     if not isinstance(torch_model, (GeneralizedRCNN, RetinaNet)):
-         return None
-
-     def eval_wrapper(inputs):
-         """
-         The exported model does not contain the final resize step, which is typically
-         unused in deployment but needed for evaluation. We add it manually here.
-         """
-         input = inputs[0]
-         instances = traceable_model.outputs_schema(ts_model(input["image"]))[0]["instances"]
-         postprocessed = detector_postprocess(instances, input["height"], input["width"])
-         return [{"instances": postprocessed}]
-
-     return eval_wrapper
-
-
- def get_sample_inputs(args):
-
-     if args.sample_image is None:
-         # get a first batch from dataset
-         data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0])
-         first_batch = next(iter(data_loader))
-         return first_batch
-     else:
-         # get a sample data
-         original_image = detection_utils.read_image(args.sample_image, format=cfg.INPUT.FORMAT)
-         # Do same preprocessing as DefaultPredictor
-         aug = T.ResizeShortestEdge(
-             [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST
-         )
-         height, width = original_image.shape[:2]
-         image = aug.get_transform(original_image).apply_image(original_image)
-         image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))
-
-         inputs = {"image": image, "height": height, "width": width}
-
-         # Sample ready
-         sample_inputs = [inputs]
-         return sample_inputs
-
-
- if __name__ == "__main__":
-     parser = argparse.ArgumentParser(description="Export a model for deployment.")
-     parser.add_argument(
-         "--format",
-         choices=["caffe2", "onnx", "torchscript"],
-         help="output format",
-         default="torchscript",
-     )
-     parser.add_argument(
-         "--export-method",
-         choices=["caffe2_tracing", "tracing", "scripting"],
-         help="Method to export models",
-         default="tracing",
-     )
-     parser.add_argument("--config-file", default="", metavar="FILE", help="path to config file")
-     parser.add_argument("--sample-image", default=None, type=str, help="sample image for input")
-     parser.add_argument("--run-eval", action="store_true")
-     parser.add_argument("--output", help="output directory for the converted model")
-     parser.add_argument(
-         "opts",
-         help="Modify config options using the command-line",
-         default=None,
-         nargs=argparse.REMAINDER,
-     )
-     args = parser.parse_args()
-     logger = setup_logger()
-     logger.info("Command line arguments: " + str(args))
-     PathManager.mkdirs(args.output)
-     # Disable respecialization on new shapes. Otherwise --run-eval will be slow
-     torch._C._jit_set_bailout_depth(1)
-
-     cfg = setup_cfg(args)
-
-     # create a torch model
-     torch_model = build_model(cfg)
-     DetectionCheckpointer(torch_model).resume_or_load(cfg.MODEL.WEIGHTS)
-     torch_model.eval()
-
-     # get sample data
-     sample_inputs = get_sample_inputs(args)
-
-     # convert and save model
-     if args.export_method == "caffe2_tracing":
-         exported_model = export_caffe2_tracing(cfg, torch_model, sample_inputs)
-     elif args.export_method == "scripting":
-         exported_model = export_scripting(torch_model)
-     elif args.export_method == "tracing":
-         exported_model = export_tracing(torch_model, sample_inputs)
-
-     # run evaluation with the converted model
-     if args.run_eval:
-         assert exported_model is not None, (
-             "Python inference is not yet implemented for "
-             f"export_method={args.export_method}, format={args.format}."
-         )
-         logger.info("Running evaluation ... this takes a long time if you export to CPU.")
-         dataset = cfg.DATASETS.TEST[0]
-         data_loader = build_detection_test_loader(cfg, dataset)
-         # NOTE: hard-coded evaluator. change to the evaluator for your dataset
-         evaluator = COCOEvaluator(dataset, output_dir=args.output)
-         metrics = inference_on_dataset(exported_model, data_loader, evaluator)
-         print_csv_format(metrics)
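The deleted script's CLI combines a `choices`-restricted `--format` flag with a trailing `opts` positional that uses `argparse.REMAINDER` to collect config overrides verbatim. The pattern in isolation:

```python
import argparse

parser = argparse.ArgumentParser(description="Export a model for deployment.")
parser.add_argument(
    "--format",
    choices=["caffe2", "onnx", "torchscript"],
    default="torchscript",
    help="output format",
)
parser.add_argument(
    "opts",
    default=None,
    nargs=argparse.REMAINDER,
    help="config overrides, collected verbatim",
)

# REMAINDER swallows everything after the recognized flags,
# which is how ``cfg.merge_from_list(args.opts)`` receives its pairs.
args = parser.parse_args(["--format", "onnx", "MODEL.WEIGHTS", "x.pth"])
assert args.format == "onnx"
assert args.opts == ["MODEL.WEIGHTS", "x.pth"]
```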
 
spaces/BBrother/Pandora/Dockerfile DELETED
@@ -1,25 +0,0 @@
- # Use a lighter base image
- FROM python:3.11-slim
-
- # Set the working directory
- WORKDIR /app
-
- # Install git and clean the apt cache to keep layers small
- RUN apt update && \
-     apt install -y git && \
-     rm -rf /var/lib/apt/lists/*
-
- # Clone the code repository
- RUN git clone https://github.com/zhile-io/pandora-cloud-serverless.git /app
-
- # Install Python dependencies
- RUN pip install --no-cache-dir -r requirements.txt
-
- # Expose the port
- EXPOSE 8018
-
- ENV OPENAI_PASSWORD=2aOwQ33HQX
-
- # Set the startup command
- CMD ["python", "main.py"]
 
spaces/BartPoint/VoiceChange_Beta/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: VoiceChange
- emoji: 👀
- colorFrom: blue
- colorTo: purple
- sdk: gradio
- sdk_version: 3.28.3
- app_file: app_multi.py
- pinned: false
- license: mit
- duplicated_from: BartPoint/VoiceChange
- ---
 
spaces/Beasto/Day_to_Night_Cyclegan/app.py DELETED
@@ -1,49 +0,0 @@
- import streamlit as st
- import tensorflow as tf
- import numpy as np
- from PIL import Image
- import tensorflow_addons as tfa
-
- from tensorflow.keras.utils import custom_object_scope
-
- # Define a function to create the InstanceNormalization layer
- def create_in():
-     return tfa.layers.InstanceNormalization()
-
-
- def model_out(model_path, img):
-     with custom_object_scope({'InstanceNormalization': create_in}):
-         model = tf.keras.models.load_model(model_path)
-     img = (img - 127.5) / 127.5
-     img = np.expand_dims(img, 0)
-     pred = model.predict(img)
-     pred = np.asarray(pred)
-     return pred[0]
-
- st.title("Day to night painting cyclegan")
- day_inp = st.file_uploader("Daytime input")
-
- if day_inp is not None:
-     img = Image.open(day_inp)
-     img = img.resize((256, 256))
-     img = np.array(img)
-     pred = model_out('daytonight2.h5', img)
-     st.image(img, caption="Uploaded Image")
-     st.image(((pred + 1) * 127.5).astype(np.uint8), caption="Generated Night-time image")
-
-
- st.header('Which architecture did I use, Resnet blocks or the Unet architecture?')
- st.write('I tried both the Resnet and Unet architectures, but the Resnet architecture produced black patches and did not work very well')
- st.write('The Unet architecture produced clearer and more understandable images')
- st.write('I used the pix2pix generator from the tensorflow examples module, and the same for the discriminator')
- st.header('What datasets did you use to train your CycleGAN model?')
- st.write('For the dataset, I used the Unpaired Day to Night dataset available on Kaggle')
- st.header('What hardware did I train it on?')
- st.write('I trained the model in a Kaggle notebook on a P100 GPU with 13 gigs of RAM, because my PC would not be in a good state if I trained the cyclegan model on Intel HD graphics')
- st.header('How much time did it take?')
- st.write('It took about 70 epochs, each of about 20 seconds, DO THE MATH')
- st.header('Why did I make this model?')
- st.subheader('I made this model to extend my experience, but mostly for FUN!!!!')
- st.write("-------------------------------------------------")
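The deleted app normalizes pixels into the generator's [-1, 1] range before inference and maps predictions back to uint8 for display. The round-trip in isolation (endpoint values only, since truncation to uint8 is not exact for every intermediate value):

```python
import numpy as np

img = np.array([[0, 255]], dtype=np.uint8)

# Forward: scale uint8 [0, 255] into the generator's [-1, 1] input range.
normalized = (img - 127.5) / 127.5
assert normalized.tolist() == [[-1.0, 1.0]]

# Backward: map a [-1, 1] output back to a displayable uint8 image.
restored = ((normalized + 1) * 127.5).astype(np.uint8)
assert np.array_equal(restored, img)
```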
 
spaces/Benson/text-generation/Examples/Bb-8 Sphero App Descargar Ios.md DELETED
@@ -1,73 +0,0 @@
1
-
2
- <h1>Cómo descargar la aplicación BB-8 Sphero para iOS</h1>
3
- <p>Si eres un fan de Star Wars y tienes un juguete BB-8 Sphero, es posible que te estés preguntando cómo descargar la aplicación que te permite controlarlo con tu dispositivo iOS. En este artículo, le mostraremos cómo hacerlo en unos pocos pasos simples. También explicaremos qué puede hacer la aplicación y cómo solucionar algunos problemas comunes que pueda encontrar. </p>
4
- <h2>bb-8 sphero app descargar ios</h2><br /><p><b><b>Download File</b> &bull; <a href="https://bltlly.com/2v6M1o">https://bltlly.com/2v6M1o</a></b></p><br /><br />
5
- <h2>¿Qué es BB-8 Sphero? </h2>
6
- <p>BB-8 Sphero es un juguete robótico que se asemeja al adorable droide de las películas de Star Wars. Puede rodar, hacer sonidos e interactuar con usted y otros robots Sphero. También puede responder a comandos de voz y gestos, y mostrar mensajes holográficos en la pantalla del dispositivo. BB-8 Sphero es un compañero divertido e inteligente que puede dar vida al universo de Star Wars. </p>
7
- <h2>¿Por qué necesitas la aplicación? </h2>
8
- <p>Para disfrutar plenamente de su BB-8 Sphero, es necesario descargar la Star Wars Droids App de Sphero en su dispositivo iOS. Esta aplicación le permite conectar su dispositivo a su BB-8 Sphero a través de Bluetooth y controlarlo con varios modos y características. También puedes acceder a contenido exclusivo, como escenas de películas, juegos y misiones. La aplicación también actualiza el firmware y la personalidad de tu BB-8 Sphero, haciéndolo más receptivo e interactivo. </p>
9
- <h2>Descargar la aplicación desde el App Store</h2>
- <p>La forma más fácil de descargar la aplicación es desde la App Store en tu dispositivo iOS. Estos son los pasos a seguir:</p>
- <p></p>
- <ol>
- <li> Abra la aplicación App Store en su dispositivo y toque en el icono de búsqueda en la esquina inferior derecha. </li>
- <li>Escriba "Star Wars Droids App by Sphero" en la barra de búsqueda y toque en Buscar.</li>
- <li>Toque en el icono de la aplicación que tiene un fondo azul y tres droides blancos (BB-8, BB-9E y R2-D2). </li>
- <li> Toque en el botón Obtener junto al nombre de la aplicación y espere a que se descargue. </li>
- <li>Una vez completada la descarga, toque en Abrir para iniciar la aplicación. </li>
- </ol>
- <h2>Descargar la aplicación de sus compras</h2>
-
- <ol>
- <li> Abra la aplicación App Store en su dispositivo y toque en el icono de perfil en la esquina superior derecha. </li>
- <li>Toque en Comprar y luego toque en Mis compras.</li>
- <li>Encuentra la aplicación Droids de Star Wars por Sphero en su lista de aplicaciones compradas y toque en el icono de la nube al lado. </li>
- <li>Espera a que se descargue y luego toca Abrir para lanzarlo. </li>
- </ol>
- <h2>Usar la aplicación para controlar <h2>Usar la aplicación para controlar tu esférico BB-8</h2>
- <p>Ahora que has descargado la aplicación, puedes usarla para controlar tu esférico BB-8 y divertirte con ella. Estos son los pasos a seguir:</p>
- <ol>
- <li>Encienda su BB-8 Sphero presionando el botón en su base de carga o agitándolo suavemente. Deberías ver una luz azul en su cabeza y escuchar algunos pitidos. </li>
- <li>Coloque su BB-8 Sphero cerca de su dispositivo iOS y asegúrese de que Bluetooth está habilitado en su dispositivo. </li>
- <li>Abra la aplicación Droids de Star Wars por Sphero y toque en Conectar. La aplicación escaneará los robots Sphero cercanos y le mostrará una lista de dispositivos disponibles. </li>
- <li>Elija su BB-8 Sphero de la lista y toque en él. La aplicación se conectará a su BB-8 Sphero y le mostrará un tutorial sobre cómo usarlo. </li>
- <li>Explora las características y modos de la aplicación tocando los iconos en la parte inferior de la pantalla. Puede conducir su BB-8 Sphero con un joystick virtual, ver escenas de Star Wars con él, jugar juegos y misiones, y más. </li>
- </ol>
- <h2>Solución de problemas comunes con la aplicación</h2>
- <p>A veces, puede encontrar algunos problemas con la aplicación o su esférico BB-8. Aquí hay algunos problemas comunes y cómo solucionarlos:</p>
- <tabla>
- <tr>
- <th>Problema</th>
- <th>Solución</th>
- </tr>
- <tr>
- <td>La aplicación no puede encontrar su BB-8 Sphero</td>
- <td>Asegúrese de que su BB-8 Sphero está encendido y cerca de su dispositivo. Asegúrese de que Bluetooth está habilitado en su dispositivo. Intente reiniciar la aplicación o el dispositivo. Intente restablecer el BB-8 Sphero colocándolo en su base de carga y manteniendo pulsado el botón durante 10 segundos. </td>
- </tr>
- <tr>
-
- <td>Asegúrese de que su dispositivo tiene suficiente espacio de almacenamiento y energía de la batería. Asegúrese de que su dispositivo está ejecutando la última versión de iOS. Intente cerrar otras aplicaciones que se ejecutan en segundo plano. Intente eliminar y reinstalar la aplicación. </td>
- </tr>
- <tr>
- <td>La aplicación no funciona con tu versión de iOS</td>
- <td>La aplicación Star Wars Droids de Sphero requiere iOS 10 o posterior para funcionar correctamente. Si el dispositivo está ejecutando una versión anterior de iOS, es posible que experimente algunos problemas de compatibilidad o características que faltan. Intenta actualizar tu dispositivo a la última versión de iOS si es posible. </td>
- </tr>
- </tabla>
- <h1>Conclusión</h1>
- <p>En este artículo, le hemos mostrado cómo descargar la aplicación Star Wars Droids de Sphero para iOS y cómo usarla para controlar su juguete BB-8 Sphero. También hemos explicado lo que la aplicación puede hacer y cómo solucionar algunos problemas comunes que puede encontrar. Esperamos que haya encontrado este artículo útil e informativo. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. </p>
- <p>Si estás buscando más contenido relacionado con Star Wars, echa un vistazo a nuestros otros artículos en nuestro sitio web. También puede suscribirse a nuestro boletín y seguirnos en las redes sociales para las últimas actualizaciones y noticias. ¡Gracias por leer y que la Fuerza esté con ustedes! </p>
- <h3>Preguntas frecuentes</h3>
- <ul>
- <li><b>Q: ¿Cuánto cuesta la aplicación Droides de Star Wars de Sphero? </b></li>
- <li>A: La aplicación es gratuita para descargar y usar. Sin embargo, necesita tener un robot Sphero compatible, como BB-8, BB-9E o R2-D2, para usarlo. </li>
- <li><b>Q: ¿Cómo actualizo el firmware de mi BB-8 Sphero? </b></li>
- <li>A: La aplicación comprobará automáticamente si hay actualizaciones de firmware cuando conecte su BB-8 Sphero a él. Si hay una actualización disponible, verá una notificación en la pantalla de la aplicación. Toque en Actualizar para iniciar el proceso. </li>
- <li><b>Q: ¿Cómo puedo cambiar el nombre de mi BB-8 Sphero? </b></li>
-
- <li><b>Q: ¿Cómo limpio mi esférico BB-8? </b></li>
- <li>A: Puedes limpiar tu BB-8 Sphero limpiándolo con un paño suave o un paño húmedo si está muy sucio. No utilice productos químicos agresivos o materiales abrasivos que puedan dañar su superficie o electrónica. </li>
- <li><b>Q: ¿Cómo: Q: ¿Cómo cargo mi esférico BB-8? </b></li>
- <li>A: Puede cargar su BB-8 Sphero colocándolo en su base de carga y enchufando la base en una toma de corriente. Usted debe ver una luz roja en la base cuando se está cargando y una luz verde cuando está completamente cargada. Se tarda unas tres horas en cargar completamente su BB-8 Sphero.</li>
- </ul></p> 64aa2da5cf<br />
- <br />
- <br />
spaces/Benson/text-generation/Examples/Coche Real Aparcamiento Multijugador Apk Android Oyun Club.md DELETED
@@ -1,59 +0,0 @@
- <br />
- <h1>Real Aparcamiento Multijugador APK Android Oyun Club: Una revisión</h1>
- <p>¿Te gustan los juegos de aparcamiento? ¿Quieres experimentar la emoción de conducir y aparcar coches realistas en un entorno de mundo abierto? ¿Quieres desafiar a tus amigos y otros jugadores en línea en varios modos de juego? Si respondiste sí a cualquiera de estas preguntas, entonces usted debe echa un vistazo a Real Car Parking Multijugador APK Android Oyun Club, un juego de aparcamiento popular que le ofrece la mejor experiencia de conducción de coches con coches modificados. En este artículo, revisaremos este juego y te diremos cómo descargarlo e instalarlo en tu dispositivo Android. </p>
- <h2>¿Qué es Real Car Parking Multijugador? </h2>
- <p>Real Car Parking Multijugador es un juego de aparcamiento de coches desarrollado por Baris Kaplan, un desarrollador de juegos turco. Es una versión modificada de Car Parking Multijugador, un juego desarrollado por olzhass, otro desarrollador de juegos turco. La versión modificada ofrece acceso ilimitado a todas las características y contenido del juego original, como coches, garajes, mapas, modos y más. También elimina anuncios y compras en la aplicación, lo que es más agradable y fácil de usar. </p>
- <h2>coche real aparcamiento multijugador apk android oyun club</h2><br /><p><b><b>Download File</b> > <a href="https://bltlly.com/2v6LWZ">https://bltlly.com/2v6LWZ</a></b></p><br /><br />
- <h3>Características de Real Car Parking Multijugador</h3>
- <p>Real Car Parking Multijugador tiene muchas características que lo convierten en uno de los mejores juegos de estacionamiento de coches en Android. Aquí están algunos de ellos:</p>
- <h4>Modo multijugador de mundo abierto</h4>
- <p>En este modo, puede jugar con sus amigos u otros jugadores en línea en un gran mapa de mundo abierto. Puede chatear con ellos, competir con ellos o simplemente divertirse como desee. También puede unirse o crear diferentes servidores con diferentes reglas y configuraciones. </p>
- <h4>Caminar y conducir libremente</h4>
- <p>Usted no se limita a su coche en este juego. También puede caminar por el mapa y explorar diferentes lugares. También puedes entrar en otros coches o edificios e interactuar con ellos. Incluso puedes usar armas o herramientas para causar caos. </p>
- <h4>Física realista del coche y gráficos</h4>
-
- <h4>Coches y garajes personalizables</h4>
- <p>Puedes personalizar tus coches y garajes en este juego. Puede cambiar el color, las ruedas, la suspensión, el motor, el escape, las pegatinas y más. También puede actualizar sus coches para mejorar su rendimiento y apariencia. También puede comprar coches nuevos o vender los viejos. También puede decorar sus garajes con diferentes artículos y accesorios. </p>
- <h4>Varios modos de juego y desafíos</h4>
- <p>El juego tiene varios modos de juego y desafíos que ponen a prueba sus habilidades y conocimientos de estacionamiento de automóviles. Puedes jugar al modo de aparcamiento clásico, donde tienes que aparcar tu coche en un espacio designado. También puede jugar el modo de deriva, donde usted tiene que la deriva de su coche y ganar puntos. También puedes jugar en el modo de carreras, donde tienes que competir con otros jugadores o IA. También puedes jugar al modo divertido, donde puedes hacer lo que quieras con tu coche. También puedes jugar al modo policía, donde puedes ser policía o un criminal. También puedes jugar al modo zombi, donde tienes que sobrevivir al apocalipsis zombi. </p>
- <h2>¿Cómo descargar e instalar Real Car Parking Multijugador APK Android Oyun Club? </h2>
- <p>Si desea descargar e instalar Real Car Parking Multijugador APK Android Oyun Club en su dispositivo Android, puede seguir estos sencillos pasos:</p>
- <h4>Paso 1: Ir al sitio web oficial de Android Oyun Club</h4>
- <p>Android Oyun Club es un sitio web turco que ofrece archivos APK modificados de varios juegos y aplicaciones. Puede ir a su sitio web haciendo clic [aquí]. </p>
- <p></p>
- <h4>Paso 2: Búsqueda de Real Aparcamiento Multijugador APK</h4>
- <p>En el sitio web, usted puede buscar Real Car Parking Multijugador APK escribiendo el nombre del juego en el cuadro de búsqueda. También puedes navegar por las categorías o las últimas subidas para encontrar el juego. </p>
- <h4>Paso 3: Descargar el archivo APK</h4>
-
- <h4>Paso 4: Habilitar fuentes desconocidas en el dispositivo</h4>
- <p>Antes de poder instalar el archivo APK, debe habilitar fuentes desconocidas en su dispositivo. Esto le permitirá instalar aplicaciones desde fuentes distintas de Google Play Store. Para hacer esto, vaya a la configuración del dispositivo, luego a la seguridad, luego a fuentes desconocidas y enciéndala. </p>
- <h4>Paso 5: Instalar el archivo APK y disfrutar del juego</h4>
- <p>Después de habilitar fuentes desconocidas, puede localizar el archivo APK en su carpeta de descargas y tocar en él para comenzar a instalarlo. Sigue las instrucciones de la pantalla y espera a que termine la instalación. Una vez hecho, puedes abrir el juego y disfrutarlo. </p>
- <h2>Pros y contras de Real Car Parking Multijugador APK Android Oyun Club</h2>
- <p>Como cualquier otra aplicación modificada, Real Car Parking Multijugador APK Android Oyun Club tiene sus pros y contras. Aquí están algunos de ellos:</p>
- <h3>Pros</h3>
- <ul>
- <li>Gratis y fácil de descargar e instalar</li>
- <li>No hay anuncios ni compras en la aplicación</li>
- <li>Acceso ilimitado a todas las características y contenido</li>
- </ul>
- <h3>Contras</h3>
- <ul>
- <li>No disponible en Google Play Store</li>
- <li>Puede no ser compatible con algunos dispositivos o regiones</li>
- <li>Puede contener errores o problemas técnicos</li>
- </ul>
- <h2>Conclusión y preguntas frecuentes</h2>
- <p>En conclusión, Real Car Parking Multijugador APK Android Oyun Club es un gran juego de aparcamiento de coches que ofrece una experiencia de conducción de coches realista y divertido con los coches modificados. Puedes jugar con tus amigos u otros jugadores en línea en un modo multijugador de mundo abierto, o disfrutar de varios modos de juego y desafíos. También puede personalizar sus coches y garajes, y explorar diferentes lugares a pie o en coche. Puedes descargar e instalar este juego gratis desde el sitio web de Android Oyun Club, pero ten en cuenta algunos inconvenientes potenciales, como problemas de compatibilidad o errores. </p>
- <p>Si tienes alguna pregunta sobre este juego, puedes encontrar las respuestas en estas preguntas frecuentes:</p>
- <borde de la tabla="1">
- <tr><td><b>Question</b></td><td><b>Answer</b></td></tr>
-
- <tr><td>Es real de estacionamiento de coches multijugador APK Android Oyun Club legal de usar? </td><td>Depende de las leyes y regulaciones de su país con respecto a las aplicaciones modificadas. Algunos países pueden considerar ilegal el uso de aplicaciones modificadas que violen los términos de servicio de la aplicación original o los derechos de propiedad intelectual. Por lo tanto, debe verificar sus leyes locales antes de usar esta aplicación. </td></tr>
- <tr><td>¿Puedo jugar Real Car Parking Multijugador APK Android Oyun Club fuera de línea? </td><td>No, necesita una conexión a Internet para jugar a este juego, ya que es un juego multijugador en línea. Sin embargo, puedes jugar algunos modos de juego sin conexión, como el modo de estacionamiento clásico o el modo de deriva. </td></tr>
- <tr><td>¿Puedo actualizar Real Car Parking Multijugador APK Android Oyun Club de Google Play Store? </td><td>No, no puede actualizar esta aplicación desde Google Play Store ya que no está disponible allí. Tienes que descargar e instalar la última versión de la aplicación desde el sitio web de Android Oyun Club siempre que haya una actualización. </td></tr>
- <tr><td>¿Puedo jugar Real Car Parking Multijugador APK Android Oyun Club con los jugadores que tienen la aplicación original? </td><td>Sí, puedes jugar con jugadores que tengan la aplicación original siempre y cuando tengas la misma versión del juego. Sin embargo, puede encontrar algunas diferencias o errores debido a las modificaciones. </td></tr>
- </tabla>
- <p>Espero que haya disfrutado de este artículo y aprendido algo nuevo sobre Real Car Parking Multijugador APK Android Oyun Club. Si tiene algún comentario o sugerencia, por favor hágamelo saber en la sección de comentarios a continuación. Gracias por leer y estacionamiento feliz! </p> 64aa2da5cf<br />
- <br />
- <br />
spaces/Benson/text-generation/Examples/Descargar Apk Garena Gratis Fuego Booyah Da.md DELETED
@@ -1,115 +0,0 @@
- <br />
- <h1>Descargar APK Garena Free Fire Booyah Day: Cómo disfrutar de la última versión del popular juego Battle Royale</h1>
- <p>Si eres un fan de los juegos battle royale, debes haber oído hablar de Garena Free Fire, uno de los juegos móviles más descargados y jugados del mundo. ¿Pero sabías que hay una nueva versión del juego llamada Garena Free Fire Booyah Day? Esta versión trae un montón de nuevas características, mejoras y eventos que harán que su experiencia de juego más emocionante y gratificante. En este artículo, le diremos todo lo que necesita saber sobre Garena Free Fire Booyah Day, incluyendo lo que es, lo que ofrece, cómo descargar e instalar utilizando un archivo APK, y cómo jugar y ganar en él. Así que, vamos a empezar! </p>
- <h2>descargar apk garena gratis fuego booyah día</h2><br /><p><b><b>DOWNLOAD</b> &#9881; <a href="https://bltlly.com/2v6Jo5">https://bltlly.com/2v6Jo5</a></b></p><br /><br />
- <h2>¿Qué es Garena Free Fire Booyah Day? </h2>
- <p>Garena Free Fire Booyah Day es la última actualización del popular juego battle royale Garena Free Fire. Fue lanzado el 3 de noviembre de 2022, como parte de la celebración del sexto aniversario del juego. La actualización presenta muchas características nuevas, como:</p>
- <h3>Las características de la actualización del Día de Booyah</h3>
- <ul>
- <li>Un nuevo mapa llamado Bermuda Remastered, que es una versión renovada del mapa original de las Bermudas con nuevas ubicaciones, gráficos y detalles. </li>
- <li>Un nuevo modo de juego llamado Clash Squad Ranked Season 8, que es un modo de combate a muerte por equipos 4v4 con un sistema de clasificación y recompensas. </li>
- <li>Un nuevo personaje llamado Dimitri Vegas, que es un famoso DJ y productor con una habilidad especial llamada Healing Heartbeat que puede curarse a sí mismo y a sus compañeros de equipo dentro de un cierto radio. </li>
- <li>Una nueva arma llamada Vector Akimbo, que es una ametralladora de doble empuñadura que puede disparar rápida y exactamente a corta distancia. </li>
- <li>Una nueva mascota llamada Rockie, que es un mapache lindo con una habilidad especial llamada Stay Chill que puede reducir el tiempo de reutilización de sus habilidades activas. </li>
- <li>Un nuevo elemento llamado Scan, que es un dispositivo que puede revelar la ubicación de enemigos cercanos por un corto tiempo. </li>
-
- </ul>
- <h3>Los beneficios de jugar Garena Free Fire Booyah Day</h3>
- <p>Jugar Garena Free Fire Booyah Day tiene muchos beneficios para jugadores nuevos y viejos. Algunos de ellos son:</p>
- <ul>
- <li> Puedes disfrutar de una experiencia de juego más inmersiva y realista con los gráficos, sonidos y animaciones mejorados. </li>
- <li>Puedes explorar un nuevo mapa con más desafíos y oportunidades para el juego estratégico. </li>
- <li>Puedes probar nuevos modos de juego, armas, personajes, mascotas y objetos que pueden mejorar tus habilidades y rendimiento. </li>
- <li>Puedes participar en varios eventos y actividades que pueden darte más diversión y recompensas. </li>
- <li>Puedes unirte a una gran comunidad de millones de jugadores de todo el mundo que comparten tu pasión por los juegos de battle royale. </li>
- </ul>
- <h2>¿Qué es un archivo APK y por qué lo necesita? </h2>
- <p>Un archivo APK es una aplicación creada para dispositivos Android. Es sinónimo de Android Package Kit o Android Application Package. Es un formato de archivo de paquete que contiene todos los componentes de una aplicación, como el código, los recursos, los activos, los certificados y el archivo de manifiesto. Es similar a un archivo ZIP que puedes extraer e instalar en tu dispositivo Android. </p>
- <p></p>
- <p>Necesita un archivo APK si desea descargar e instalar una aplicación que no está disponible en Google Play Store, o si desea obtener la última versión de una aplicación antes de que se lance oficialmente en Play Store. Por ejemplo, si quieres jugar Garena Free Fire Booyah Day, necesitas descargar e instalar su archivo APK porque aún no está disponible en Play Store.</p>
- <h3>Las ventajas y riesgos de usar un archivo APK</h3>
- <p>El uso de un archivo APK tiene algunas ventajas y riesgos que usted debe tener en cuenta. Algunas de las ventajas son:</p>
- <ul>
- <li>Puede acceder a aplicaciones que no están disponibles en su región o país. </li>
- <li>Puedes obtener las últimas actualizaciones y características de una aplicación antes que nadie. </li>
- <li>Puede personalizar y modificar sus aplicaciones según sus preferencias. </li>
-
- </ul>
- <p>Algunos de los riesgos son:</p>
- <ul>
- <li> Puede descargar e instalar una aplicación falsa o maliciosa que puede dañar su dispositivo o robar sus datos. </li>
- <li>Puede violar los términos y condiciones del desarrollador de la aplicación o de la Play Store.</li>
- <li>Puede encontrar problemas de compatibilidad o rendimiento con su dispositivo u otras aplicaciones. </li>
- <li>Puede perder la garantía o el soporte de su dispositivo o el desarrollador de la aplicación. </li>
- </ul>
- <h2>¿Cómo descargar e instalar APK Garena Free Fire Booyah Day? </h2>
- <p>Si desea descargar e instalar APK Garena Free Fire Booyah Day, debe seguir algunos pasos simples. Aquí están:</p>
- <h3>Los pasos para descargar el archivo APK de una fuente de confianza</h3>
- <ol>
- <li>Ir a un sitio web confiable y confiable que ofrece archivos APK para Garena Free Fire Booyah Day. Algunos ejemplos son APKPure, APKMirror, Uptodown y APKCombo.</li>
- <li>Buscar Garena Free Fire Booyah Day en el sitio web y seleccione la última versión del archivo APK. </li>
- <li>Haga clic en el botón de descarga y espere a que el archivo se descargue en su dispositivo. </li>
- </ol>
- <h3>Los pasos para instalar el archivo APK en su dispositivo Android</h3>
- <ol>
- <li>Antes de instalar el archivo APK, es necesario habilitar la instalación de aplicaciones de fuentes desconocidas en el dispositivo. Para hacer esto, vaya a Configuración > Seguridad > Fuentes desconocidas y conéctelo. </li>
- <li>Localice el archivo APK descargado en su dispositivo utilizando una aplicación de administrador de archivos o la carpeta de descargas de su navegador. </li>
- <li>Toque en el archivo APK y siga las instrucciones en la pantalla para instalarlo en su dispositivo. </li>
- </ol>
- <h3>Los pasos para verificar y lanzar el juego</h3>
- <ol>
- <li>Después de instalar el archivo APK, es necesario verificar que está funcionando correctamente. Para ello, ve a Configuración > Aplicaciones > Garena Free Fire Booyah Day y comprueba si tiene todos los permisos y recursos que necesita. </li>
- <li> Si todo está bien, puede iniciar el juego tocando en su icono en la pantalla de inicio o cajón de aplicaciones. </li>
-
- </ol>
- <h2>¿Cómo jugar y ganar en Garena Free Fire Booyah Day? </h2>
- <p>Garena Free Fire Booyah Day es un juego divertido y desafiante que requiere habilidad, estrategia y suerte. Si quieres jugar y ganar en él, debes seguir algunos consejos y trucos. Estos son algunos de ellos:</p>
- <h3>Los consejos y trucos para sobrevivir y eliminar a tus enemigos</h3>
- <ul>
- <li>Elige un buen lugar de aterrizaje que tenga suficiente botín, cobertura y rutas de escape. Evite aterrizar en zonas calientes donde muchos jugadores caen a menos que esté seguro de sus habilidades. </li>
- <li>Saquea de forma rápida y eficiente. Recoge solo lo que necesitas y prioriza armas, municiones, armaduras, kits de salud y granadas. No pierdas el tiempo saqueando cadáveres a menos que tengan algo valioso. </li>
- <li>Manténgase alerta y consciente de su entorno. Utilice el mini-mapa, señales de sonido, pasos, disparos, vehículos y dispositivos de exploración para localizar a los enemigos. Siempre mire alrededor antes de moverse o saquear. </li>
- <li>Utilice la cubierta y la ocultación. Escóndase detrás de las paredes, de los árboles, de las rocas, de los edificios, o de la hierba. Agacharse o propenso cuando sea necesario. No te expongas demasiado ni te quedes en un lugar demasiado tiempo. </li>
- <li>Enfréntate con sabiduría y táctica. No dispares a cada enemigo que veas a menos que tengas una posición clara y ventajosa. Apunta a la cabeza o el pecho para obtener el máximo daño. Usa granadas, dispositivos de exploración o vehículos para crear desvíos o explosiones. Repliegue o reposicione si está superado en número o en armas. </li>
- <li>Sanar y reponer. Utilice kits de salud, armadura o habilidades de carácter para restaurar su salud y escudo. Usa cajas de munición, cajas de botín o lanzamientos aéreos para rellenar tus municiones y suministros. No olvides recargar tus armas después de cada pelea. </li>
- </ul>
- <h3>Las mejores armas, objetos y personajes para usar</h3>
- <p>Hay muchas armas, artículos y personajes en Garena Free Fire Booyah Day que puedes usar para adaptarse a tu estilo de juego y preferencia. Sin embargo, algunos de ellos son más efectivos y útiles que otros. Estos son algunos de los mejores:</p>
- <ul>
-
- <li>Los mejores objetos son los que pueden darte una ventaja en combate o supervivencia. Algunos ejemplos son el chaleco blindado de nivel 4, el casco de nivel 4, el botiquín, la jeringa de adrenalina, el dispositivo de exploración y la granada. </li>
- <li>Los mejores personajes son los que tienen habilidades únicas y poderosas que pueden complementar su estrategia y tácticas. Algunos ejemplos son Alok, que puede crear un aura curativa alrededor de él y sus aliados; Kla, que puede infligir más daño de puño; Kelly, que puede correr más rápido; Jota, que puede curarse a sí mismo después de matar a un enemigo con una escopeta o un SMG; y Dimitri Vegas, que puede curarse a sí mismo y a sus compañeros de equipo con su habilidad Healing Heartbeat. </li>
- </ul>
- <h3>Las recompensas y logros que puedes ganar</h3>
- <p>Jugar Garena Free Fire Booyah Day no solo es divertido, sino también gratificante. Puedes ganar varias recompensas y logros completando misiones, desafíos, eventos y actividades. Algunos de ellos son:</p>
- <ul>
- <li>Diamantes, que son la moneda premium del juego que se puede utilizar para comprar pieles, personajes, mascotas, artículos, y más. </li>
- <li>Vales, que son cupones que puedes usar para obtener descuentos o artículos gratis de la tienda. </li>
- <li>Monedas de oro, que son la moneda normal del juego que se puede utilizar para comprar armas, artículos, cajas y más. </li>
- <li>Pieles, que son artículos cosméticos que pueden cambiar la apariencia de sus armas, personajes, mascotas, vehículos y más. </li>
- <li>Insignias, que son símbolos que representan tu rango y nivel en el juego. </li>
- <li>Trofeos, que son premios que muestran tus logros y logros en el juego. </li>
- </ul>
- <h2>Conclusión</h2>
-
- <h2>Preguntas frecuentes</h2>
- <p>Aquí hay algunas preguntas frecuentes sobre Garena Free Fire Booyah Day:</p>
- <ol>
- <li> ¿Cuál es el tamaño del archivo APK para Garena Free Fire Booyah Day? </li>
- <p>El tamaño del archivo APK para Garena Free Fire Booyah Day es de unos 46 MB. Sin embargo, es posible que necesite descargar datos adicionales después de instalar el archivo APK, que puede variar dependiendo de su dispositivo y red. </p>
- <li> ¿Garena Free Fire Booyah Day es compatible con mi dispositivo? </li>
- <p>Garena Free Fire Booyah Day es compatible con la mayoría de dispositivos Android que tienen al menos 2 GB de RAM y Android 4.1 o superior. Sin embargo, algunos dispositivos pueden tener problemas de compatibilidad o rendimiento debido a diferentes especificaciones y configuraciones. </p>
- <li> ¿Es seguro descargar e instalar Garena Free Fire Booyah Day? </li>
- <p>Garena Free Fire Booyah Day es seguro de descargar e instalar si lo obtiene de una fuente confiable y confiable, como el sitio web oficial de Garena o los sitios web que mencionamos anteriormente. Sin embargo, siempre debe tener cuidado al descargar e instalar cualquier archivo APK de fuentes desconocidas, ya que pueden contener virus o malware que pueden dañar su dispositivo o robar sus datos. </p>
- <li>¿Puedo jugar Garena Free Fire Booyah Day con mis amigos? </li>
- <p>Sí, puedes jugar Garena Free Fire Booyah Day con tus amigos. Puedes invitarlos a unirse a tu escuadrón o clan, o puedes unirte a su escuadrón o clan. También puedes chatear con ellos usando la función de chat de voz o texto en el juego. También puedes competir con ellos en el modo Clash Squad Ranked o en el evento Booyah Day Calendar. </p>
- <li>¿Cómo puedo obtener más diamantes en Garena Free Fire Booyah Day? </li>
- <p>Los diamantes son la moneda premium del Día de Garena Free Fire Booyah que puedes usar para comprar pieles, personajes, mascotas, artículos y más. Puedes obtener más diamantes por:</p>
- <ul>
- <li>Comprarlos con dinero real usando varios métodos de pago. </li>
- <li>Ganarlos completando encuestas, ofertas o tareas desde aplicaciones o sitios web de terceros. </li>
-
- </ul>
- </ol></p> 64aa2da5cf<br />
- <br />
- <br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/vcs/git.py DELETED
@@ -1,526 +0,0 @@
- import logging
- import os.path
- import pathlib
- import re
- import urllib.parse
- import urllib.request
- from typing import List, Optional, Tuple
-
- from pip._internal.exceptions import BadCommand, InstallationError
- from pip._internal.utils.misc import HiddenText, display_path, hide_url
- from pip._internal.utils.subprocess import make_command
- from pip._internal.vcs.versioncontrol import (
- AuthInfo,
- RemoteNotFoundError,
- RemoteNotValidError,
- RevOptions,
- VersionControl,
- find_path_to_project_root_from_repo_root,
- vcs,
- )
-
- urlsplit = urllib.parse.urlsplit
- urlunsplit = urllib.parse.urlunsplit
-
-
- logger = logging.getLogger(__name__)
-
-
- GIT_VERSION_REGEX = re.compile(
- r"^git version " # Prefix.
- r"(\d+)" # Major.
- r"\.(\d+)" # Dot, minor.
- r"(?:\.(\d+))?" # Optional dot, patch.
- r".*$" # Suffix, including any pre- and post-release segments we don't care about.
- )
-
- HASH_REGEX = re.compile("^[a-fA-F0-9]{40}$")
-
- # SCP (Secure copy protocol) shorthand. e.g. '[email protected]:foo/bar.git'
- SCP_REGEX = re.compile(
- r"""^
- # Optional user, e.g. 'git@'
- (\w+@)?
- # Server, e.g. 'github.com'.
- ([^/:]+):
- # The server-side path. e.g. 'user/project.git'. Must start with an
- # alphanumeric character so as not to be confusable with a Windows paths
- # like 'C:/foo/bar' or 'C:\foo\bar'.
- (\w[^:]*)
- $""",
- re.VERBOSE,
- )
-
-
- def looks_like_hash(sha: str) -> bool:
- return bool(HASH_REGEX.match(sha))
-
-
- class Git(VersionControl):
- name = "git"
- dirname = ".git"
- repo_name = "clone"
- schemes = (
- "git+http",
- "git+https",
- "git+ssh",
- "git+git",
- "git+file",
- )
- # Prevent the user's environment variables from interfering with pip:
- # https://github.com/pypa/pip/issues/1130
- unset_environ = ("GIT_DIR", "GIT_WORK_TREE")
- default_arg_rev = "HEAD"
-
- @staticmethod
- def get_base_rev_args(rev: str) -> List[str]:
- return [rev]
-
- def is_immutable_rev_checkout(self, url: str, dest: str) -> bool:
- _, rev_options = self.get_url_rev_options(hide_url(url))
- if not rev_options.rev:
- return False
- if not self.is_commit_id_equal(dest, rev_options.rev):
- # the current commit is different from rev,
- # which means rev was something else than a commit hash
- return False
- # return False in the rare case rev is both a commit hash
- # and a tag or a branch; we don't want to cache in that case
- # because that branch/tag could point to something else in the future
- is_tag_or_branch = bool(self.get_revision_sha(dest, rev_options.rev)[0])
- return not is_tag_or_branch
-
- def get_git_version(self) -> Tuple[int, ...]:
- version = self.run_command(
- ["version"],
- command_desc="git version",
- show_stdout=False,
- stdout_only=True,
- )
- match = GIT_VERSION_REGEX.match(version)
- if not match:
- logger.warning("Can't parse git version: %s", version)
- return ()
- return tuple(int(c) for c in match.groups())
105
-
106
- @classmethod
107
- def get_current_branch(cls, location: str) -> Optional[str]:
108
- """
109
- Return the current branch, or None if HEAD isn't at a branch
110
- (e.g. detached HEAD).
111
- """
112
- # git-symbolic-ref exits with empty stdout if "HEAD" is a detached
113
- # HEAD rather than a symbolic ref. In addition, the -q causes the
114
- # command to exit with status code 1 instead of 128 in this case
115
- # and to suppress the message to stderr.
116
- args = ["symbolic-ref", "-q", "HEAD"]
117
- output = cls.run_command(
118
- args,
119
- extra_ok_returncodes=(1,),
120
- show_stdout=False,
121
- stdout_only=True,
122
- cwd=location,
123
- )
124
- ref = output.strip()
125
-
126
- if ref.startswith("refs/heads/"):
127
- return ref[len("refs/heads/") :]
128
-
129
- return None
130
-
131
- @classmethod
132
- def get_revision_sha(cls, dest: str, rev: str) -> Tuple[Optional[str], bool]:
133
- """
134
- Return (sha_or_none, is_branch), where sha_or_none is a commit hash
135
- if the revision names a remote branch or tag, otherwise None.
136
-
137
- Args:
138
- dest: the repository directory.
139
- rev: the revision name.
140
- """
141
- # Pass rev to pre-filter the list.
142
- output = cls.run_command(
143
- ["show-ref", rev],
144
- cwd=dest,
145
- show_stdout=False,
146
- stdout_only=True,
147
- on_returncode="ignore",
148
- )
149
- refs = {}
150
- # NOTE: We do not use splitlines here since that would split on other
151
- # unicode separators, which can be maliciously used to install a
152
- # different revision.
153
- for line in output.strip().split("\n"):
154
- line = line.rstrip("\r")
155
- if not line:
156
- continue
157
- try:
158
- ref_sha, ref_name = line.split(" ", maxsplit=2)
159
- except ValueError:
160
- # Include the offending line to simplify troubleshooting if
161
- # this error ever occurs.
162
- raise ValueError(f"unexpected show-ref line: {line!r}")
163
-
164
- refs[ref_name] = ref_sha
165
-
166
- branch_ref = f"refs/remotes/origin/{rev}"
167
- tag_ref = f"refs/tags/{rev}"
168
-
169
- sha = refs.get(branch_ref)
170
- if sha is not None:
171
- return (sha, True)
172
-
173
- sha = refs.get(tag_ref)
174
-
175
- return (sha, False)
176
-
177
- @classmethod
178
- def _should_fetch(cls, dest: str, rev: str) -> bool:
179
- """
180
- Return true if rev is a ref or is a commit that we don't have locally.
181
-
182
- Branches and tags are not considered in this method because they are
183
- assumed to be always available locally (which is a normal outcome of
184
- ``git clone`` and ``git fetch --tags``).
185
- """
186
- if rev.startswith("refs/"):
187
- # Always fetch remote refs.
188
- return True
189
-
190
- if not looks_like_hash(rev):
191
- # Git fetch would fail with abbreviated commits.
192
- return False
193
-
194
- if cls.has_commit(dest, rev):
195
- # Don't fetch if we have the commit locally.
196
- return False
197
-
198
- return True
199
-
200
- @classmethod
201
- def resolve_revision(
202
- cls, dest: str, url: HiddenText, rev_options: RevOptions
203
- ) -> RevOptions:
204
- """
205
- Resolve a revision to a new RevOptions object with the SHA1 of the
206
- branch, tag, or ref if found.
207
-
208
- Args:
209
- rev_options: a RevOptions object.
210
- """
211
- rev = rev_options.arg_rev
212
- # The arg_rev property's implementation for Git ensures that the
213
- # rev return value is always non-None.
214
- assert rev is not None
215
-
216
- sha, is_branch = cls.get_revision_sha(dest, rev)
217
-
218
- if sha is not None:
219
- rev_options = rev_options.make_new(sha)
220
- rev_options.branch_name = rev if is_branch else None
221
-
222
- return rev_options
223
-
224
- # Do not show a warning for the common case of something that has
225
- # the form of a Git commit hash.
226
- if not looks_like_hash(rev):
227
- logger.warning(
228
- "Did not find branch or tag '%s', assuming revision or ref.",
229
- rev,
230
- )
231
-
232
- if not cls._should_fetch(dest, rev):
233
- return rev_options
234
-
235
- # fetch the requested revision
236
- cls.run_command(
237
- make_command("fetch", "-q", url, rev_options.to_args()),
238
- cwd=dest,
239
- )
240
- # Change the revision to the SHA of the ref we fetched
241
- sha = cls.get_revision(dest, rev="FETCH_HEAD")
242
- rev_options = rev_options.make_new(sha)
243
-
244
- return rev_options
245
-
246
- @classmethod
247
- def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:
248
- """
249
- Return whether the current commit hash equals the given name.
250
-
251
- Args:
252
- dest: the repository directory.
253
- name: a string name.
254
- """
255
- if not name:
256
- # Then avoid an unnecessary subprocess call.
257
- return False
258
-
259
- return cls.get_revision(dest) == name
260
-
261
- def fetch_new(
262
- self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int
263
- ) -> None:
264
- rev_display = rev_options.to_display()
265
- logger.info("Cloning %s%s to %s", url, rev_display, display_path(dest))
266
- if verbosity <= 0:
267
- flags: Tuple[str, ...] = ("--quiet",)
268
- elif verbosity == 1:
269
- flags = ()
270
- else:
271
- flags = ("--verbose", "--progress")
272
- if self.get_git_version() >= (2, 17):
273
- # Git added support for partial clone in 2.17
274
- # https://git-scm.com/docs/partial-clone
275
- # Speeds up cloning by functioning without a complete copy of repository
276
- self.run_command(
277
- make_command(
278
- "clone",
279
- "--filter=blob:none",
280
- *flags,
281
- url,
282
- dest,
283
- )
284
- )
285
- else:
286
- self.run_command(make_command("clone", *flags, url, dest))
287
-
288
- if rev_options.rev:
289
- # Then a specific revision was requested.
290
- rev_options = self.resolve_revision(dest, url, rev_options)
291
- branch_name = getattr(rev_options, "branch_name", None)
292
- logger.debug("Rev options %s, branch_name %s", rev_options, branch_name)
293
- if branch_name is None:
294
- # Only do a checkout if the current commit id doesn't match
295
- # the requested revision.
296
- if not self.is_commit_id_equal(dest, rev_options.rev):
297
- cmd_args = make_command(
298
- "checkout",
299
- "-q",
300
- rev_options.to_args(),
301
- )
302
- self.run_command(cmd_args, cwd=dest)
303
- elif self.get_current_branch(dest) != branch_name:
304
- # Then a specific branch was requested, and that branch
305
- # is not yet checked out.
306
- track_branch = f"origin/{branch_name}"
307
- cmd_args = [
308
- "checkout",
309
- "-b",
310
- branch_name,
311
- "--track",
312
- track_branch,
313
- ]
314
- self.run_command(cmd_args, cwd=dest)
315
- else:
316
- sha = self.get_revision(dest)
317
- rev_options = rev_options.make_new(sha)
318
-
319
- logger.info("Resolved %s to commit %s", url, rev_options.rev)
320
-
321
- #: repo may contain submodules
322
- self.update_submodules(dest)
323
-
324
- def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
325
- self.run_command(
326
- make_command("config", "remote.origin.url", url),
327
- cwd=dest,
328
- )
329
- cmd_args = make_command("checkout", "-q", rev_options.to_args())
330
- self.run_command(cmd_args, cwd=dest)
331
-
332
- self.update_submodules(dest)
333
-
334
- def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
335
- # First fetch changes from the default remote
336
- if self.get_git_version() >= (1, 9):
337
- # fetch tags in addition to everything else
338
- self.run_command(["fetch", "-q", "--tags"], cwd=dest)
339
- else:
340
- self.run_command(["fetch", "-q"], cwd=dest)
341
- # Then reset to wanted revision (maybe even origin/master)
342
- rev_options = self.resolve_revision(dest, url, rev_options)
343
- cmd_args = make_command("reset", "--hard", "-q", rev_options.to_args())
344
- self.run_command(cmd_args, cwd=dest)
345
- #: update submodules
346
- self.update_submodules(dest)
347
-
348
- @classmethod
349
- def get_remote_url(cls, location: str) -> str:
350
- """
351
- Return URL of the first remote encountered.
352
-
353
- Raises RemoteNotFoundError if the repository does not have a remote
354
- url configured.
355
- """
356
- # We need to pass 1 for extra_ok_returncodes since the command
357
- # exits with return code 1 if there are no matching lines.
358
- stdout = cls.run_command(
359
- ["config", "--get-regexp", r"remote\..*\.url"],
360
- extra_ok_returncodes=(1,),
361
- show_stdout=False,
362
- stdout_only=True,
363
- cwd=location,
364
- )
365
- remotes = stdout.splitlines()
366
- try:
367
- found_remote = remotes[0]
368
- except IndexError:
369
- raise RemoteNotFoundError
370
-
371
- for remote in remotes:
372
- if remote.startswith("remote.origin.url "):
373
- found_remote = remote
374
- break
375
- url = found_remote.split(" ")[1]
376
- return cls._git_remote_to_pip_url(url.strip())
377
-
378
- @staticmethod
379
- def _git_remote_to_pip_url(url: str) -> str:
380
- """
381
- Convert a remote url from what git uses to what pip accepts.
382
-
383
- There are 3 legal forms **url** may take:
384
-
385
- 1. A fully qualified url: ssh://[email protected]/foo/bar.git
386
- 2. A local project.git folder: /path/to/bare/repository.git
387
- 3. SCP shorthand for form 1: [email protected]:foo/bar.git
388
-
389
- Form 1 is output as-is. Form 2 must be converted to URI and form 3 must
390
- be converted to form 1.
391
-
392
- See the corresponding test test_git_remote_url_to_pip() for examples of
393
- sample inputs/outputs.
394
- """
395
- if re.match(r"\w+://", url):
396
- # This is already valid. Pass it though as-is.
397
- return url
398
- if os.path.exists(url):
399
- # A local bare remote (git clone --mirror).
400
- # Needs a file:// prefix.
401
- return pathlib.PurePath(url).as_uri()
402
- scp_match = SCP_REGEX.match(url)
403
- if scp_match:
404
- # Add an ssh:// prefix and replace the ':' with a '/'.
405
- return scp_match.expand(r"ssh://\1\2/\3")
406
- # Otherwise, bail out.
407
- raise RemoteNotValidError(url)
408
-
409
- @classmethod
410
- def has_commit(cls, location: str, rev: str) -> bool:
411
- """
412
- Check if rev is a commit that is available in the local repository.
413
- """
414
- try:
415
- cls.run_command(
416
- ["rev-parse", "-q", "--verify", "sha^" + rev],
417
- cwd=location,
418
- log_failed_cmd=False,
419
- )
420
- except InstallationError:
421
- return False
422
- else:
423
- return True
424
-
425
- @classmethod
426
- def get_revision(cls, location: str, rev: Optional[str] = None) -> str:
427
- if rev is None:
428
- rev = "HEAD"
429
- current_rev = cls.run_command(
430
- ["rev-parse", rev],
431
- show_stdout=False,
432
- stdout_only=True,
433
- cwd=location,
434
- )
435
- return current_rev.strip()
436
-
437
- @classmethod
438
- def get_subdirectory(cls, location: str) -> Optional[str]:
439
- """
440
- Return the path to Python project root, relative to the repo root.
441
- Return None if the project root is in the repo root.
442
- """
443
- # find the repo root
444
- git_dir = cls.run_command(
445
- ["rev-parse", "--git-dir"],
446
- show_stdout=False,
447
- stdout_only=True,
448
- cwd=location,
449
- ).strip()
450
- if not os.path.isabs(git_dir):
451
- git_dir = os.path.join(location, git_dir)
452
- repo_root = os.path.abspath(os.path.join(git_dir, ".."))
453
- return find_path_to_project_root_from_repo_root(location, repo_root)
454
-
455
- @classmethod
456
- def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]:
457
- """
458
- Prefixes stub URLs like 'user@hostname:user/repo.git' with 'ssh://'.
459
- That's required because although they use SSH they sometimes don't
460
- work with a ssh:// scheme (e.g. GitHub). But we need a scheme for
461
- parsing. Hence we remove it again afterwards and return it as a stub.
462
- """
463
- # Works around an apparent Git bug
464
- # (see https://article.gmane.org/gmane.comp.version-control.git/146500)
465
- scheme, netloc, path, query, fragment = urlsplit(url)
466
- if scheme.endswith("file"):
467
- initial_slashes = path[: -len(path.lstrip("/"))]
468
- newpath = initial_slashes + urllib.request.url2pathname(path).replace(
469
- "\\", "/"
470
- ).lstrip("/")
471
- after_plus = scheme.find("+") + 1
472
- url = scheme[:after_plus] + urlunsplit(
473
- (scheme[after_plus:], netloc, newpath, query, fragment),
474
- )
475
-
476
- if "://" not in url:
477
- assert "file:" not in url
478
- url = url.replace("git+", "git+ssh://")
479
- url, rev, user_pass = super().get_url_rev_and_auth(url)
480
- url = url.replace("ssh://", "")
481
- else:
482
- url, rev, user_pass = super().get_url_rev_and_auth(url)
483
-
484
- return url, rev, user_pass
485
-
486
- @classmethod
487
- def update_submodules(cls, location: str) -> None:
488
- if not os.path.exists(os.path.join(location, ".gitmodules")):
489
- return
490
- cls.run_command(
491
- ["submodule", "update", "--init", "--recursive", "-q"],
492
- cwd=location,
493
- )
494
-
495
- @classmethod
496
- def get_repository_root(cls, location: str) -> Optional[str]:
497
- loc = super().get_repository_root(location)
498
- if loc:
499
- return loc
500
- try:
501
- r = cls.run_command(
502
- ["rev-parse", "--show-toplevel"],
503
- cwd=location,
504
- show_stdout=False,
505
- stdout_only=True,
506
- on_returncode="raise",
507
- log_failed_cmd=False,
508
- )
509
- except BadCommand:
510
- logger.debug(
511
- "could not determine if %s is under git control "
512
- "because git is not available",
513
- location,
514
- )
515
- return None
516
- except InstallationError:
517
- return None
518
- return os.path.normpath(r.rstrip("\r\n"))
519
-
520
- @staticmethod
521
- def should_add_vcs_url_prefix(repo_url: str) -> bool:
522
- """In either https or ssh form, requirements must be prefixed with git+."""
523
- return True
524
-
525
-
526
- vcs.register(Git)
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/errors.py DELETED
@@ -1,34 +0,0 @@
- class ConsoleError(Exception):
-     """An error in console operation."""
-
-
- class StyleError(Exception):
-     """An error in styles."""
-
-
- class StyleSyntaxError(ConsoleError):
-     """Style was badly formatted."""
-
-
- class MissingStyle(StyleError):
-     """No such style."""
-
-
- class StyleStackError(ConsoleError):
-     """Style stack is invalid."""
-
-
- class NotRenderableError(ConsoleError):
-     """Object is not renderable."""
-
-
- class MarkupError(ConsoleError):
-     """Markup was badly formatted."""
-
-
- class LiveError(ConsoleError):
-     """Error related to Live display."""
-
-
- class NoAltScreen(ConsoleError):
-     """Alt screen mode was required."""
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tomli/_parser.py DELETED
@@ -1,691 +0,0 @@
- # SPDX-License-Identifier: MIT
- # SPDX-FileCopyrightText: 2021 Taneli Hukkinen
- # Licensed to PSF under a Contributor Agreement.
-
- from __future__ import annotations
-
- from collections.abc import Iterable
- import string
- from types import MappingProxyType
- from typing import Any, BinaryIO, NamedTuple
-
- from ._re import (
-     RE_DATETIME,
-     RE_LOCALTIME,
-     RE_NUMBER,
-     match_to_datetime,
-     match_to_localtime,
-     match_to_number,
- )
- from ._types import Key, ParseFloat, Pos
-
- ASCII_CTRL = frozenset(chr(i) for i in range(32)) | frozenset(chr(127))
-
- # Neither of these sets include quotation mark or backslash. They are
- # currently handled as separate cases in the parser functions.
- ILLEGAL_BASIC_STR_CHARS = ASCII_CTRL - frozenset("\t")
- ILLEGAL_MULTILINE_BASIC_STR_CHARS = ASCII_CTRL - frozenset("\t\n")
-
- ILLEGAL_LITERAL_STR_CHARS = ILLEGAL_BASIC_STR_CHARS
- ILLEGAL_MULTILINE_LITERAL_STR_CHARS = ILLEGAL_MULTILINE_BASIC_STR_CHARS
-
- ILLEGAL_COMMENT_CHARS = ILLEGAL_BASIC_STR_CHARS
-
- TOML_WS = frozenset(" \t")
- TOML_WS_AND_NEWLINE = TOML_WS | frozenset("\n")
- BARE_KEY_CHARS = frozenset(string.ascii_letters + string.digits + "-_")
- KEY_INITIAL_CHARS = BARE_KEY_CHARS | frozenset("\"'")
- HEXDIGIT_CHARS = frozenset(string.hexdigits)
-
- BASIC_STR_ESCAPE_REPLACEMENTS = MappingProxyType(
-     {
-         "\\b": "\u0008",  # backspace
-         "\\t": "\u0009",  # tab
-         "\\n": "\u000A",  # linefeed
-         "\\f": "\u000C",  # form feed
-         "\\r": "\u000D",  # carriage return
-         '\\"': "\u0022",  # quote
-         "\\\\": "\u005C",  # backslash
-     }
- )
-
-
- class TOMLDecodeError(ValueError):
-     """An error raised if a document is not valid TOML."""
-
-
- def load(__fp: BinaryIO, *, parse_float: ParseFloat = float) -> dict[str, Any]:
-     """Parse TOML from a binary file object."""
-     b = __fp.read()
-     try:
-         s = b.decode()
-     except AttributeError:
-         raise TypeError(
-             "File must be opened in binary mode, e.g. use `open('foo.toml', 'rb')`"
-         ) from None
-     return loads(s, parse_float=parse_float)
-
-
- def loads(__s: str, *, parse_float: ParseFloat = float) -> dict[str, Any]:  # noqa: C901
-     """Parse TOML from a string."""
-
-     # The spec allows converting "\r\n" to "\n", even in string
-     # literals. Let's do so to simplify parsing.
-     src = __s.replace("\r\n", "\n")
-     pos = 0
-     out = Output(NestedDict(), Flags())
-     header: Key = ()
-     parse_float = make_safe_parse_float(parse_float)
-
-     # Parse one statement at a time
-     # (typically means one line in TOML source)
-     while True:
-         # 1. Skip line leading whitespace
-         pos = skip_chars(src, pos, TOML_WS)
-
-         # 2. Parse rules. Expect one of the following:
-         #    - end of file
-         #    - end of line
-         #    - comment
-         #    - key/value pair
-         #    - append dict to list (and move to its namespace)
-         #    - create dict (and move to its namespace)
-         # Skip trailing whitespace when applicable.
-         try:
-             char = src[pos]
-         except IndexError:
-             break
-         if char == "\n":
-             pos += 1
-             continue
-         if char in KEY_INITIAL_CHARS:
-             pos = key_value_rule(src, pos, out, header, parse_float)
-             pos = skip_chars(src, pos, TOML_WS)
-         elif char == "[":
-             try:
-                 second_char: str | None = src[pos + 1]
-             except IndexError:
-                 second_char = None
-             out.flags.finalize_pending()
-             if second_char == "[":
-                 pos, header = create_list_rule(src, pos, out)
-             else:
-                 pos, header = create_dict_rule(src, pos, out)
-             pos = skip_chars(src, pos, TOML_WS)
-         elif char != "#":
-             raise suffixed_err(src, pos, "Invalid statement")
-
-         # 3. Skip comment
-         pos = skip_comment(src, pos)
-
-         # 4. Expect end of line or end of file
-         try:
-             char = src[pos]
-         except IndexError:
-             break
-         if char != "\n":
-             raise suffixed_err(
-                 src, pos, "Expected newline or end of document after a statement"
-             )
-         pos += 1
-
-     return out.data.dict
-
-
- class Flags:
-     """Flags that map to parsed keys/namespaces."""
-
-     # Marks an immutable namespace (inline array or inline table).
-     FROZEN = 0
-     # Marks a nest that has been explicitly created and can no longer
-     # be opened using the "[table]" syntax.
-     EXPLICIT_NEST = 1
-
-     def __init__(self) -> None:
-         self._flags: dict[str, dict] = {}
-         self._pending_flags: set[tuple[Key, int]] = set()
-
-     def add_pending(self, key: Key, flag: int) -> None:
-         self._pending_flags.add((key, flag))
-
-     def finalize_pending(self) -> None:
-         for key, flag in self._pending_flags:
-             self.set(key, flag, recursive=False)
-         self._pending_flags.clear()
-
-     def unset_all(self, key: Key) -> None:
-         cont = self._flags
-         for k in key[:-1]:
-             if k not in cont:
-                 return
-             cont = cont[k]["nested"]
-         cont.pop(key[-1], None)
-
-     def set(self, key: Key, flag: int, *, recursive: bool) -> None:  # noqa: A003
-         cont = self._flags
-         key_parent, key_stem = key[:-1], key[-1]
-         for k in key_parent:
-             if k not in cont:
-                 cont[k] = {"flags": set(), "recursive_flags": set(), "nested": {}}
-             cont = cont[k]["nested"]
-         if key_stem not in cont:
-             cont[key_stem] = {"flags": set(), "recursive_flags": set(), "nested": {}}
-         cont[key_stem]["recursive_flags" if recursive else "flags"].add(flag)
-
-     def is_(self, key: Key, flag: int) -> bool:
-         if not key:
-             return False  # document root has no flags
-         cont = self._flags
-         for k in key[:-1]:
-             if k not in cont:
-                 return False
-             inner_cont = cont[k]
-             if flag in inner_cont["recursive_flags"]:
-                 return True
-             cont = inner_cont["nested"]
-         key_stem = key[-1]
-         if key_stem in cont:
-             cont = cont[key_stem]
-             return flag in cont["flags"] or flag in cont["recursive_flags"]
-         return False
-
-
- class NestedDict:
-     def __init__(self) -> None:
-         # The parsed content of the TOML document
-         self.dict: dict[str, Any] = {}
-
-     def get_or_create_nest(
-         self,
-         key: Key,
-         *,
-         access_lists: bool = True,
-     ) -> dict:
-         cont: Any = self.dict
-         for k in key:
-             if k not in cont:
-                 cont[k] = {}
-             cont = cont[k]
-             if access_lists and isinstance(cont, list):
-                 cont = cont[-1]
-             if not isinstance(cont, dict):
-                 raise KeyError("There is no nest behind this key")
-         return cont
-
-     def append_nest_to_list(self, key: Key) -> None:
-         cont = self.get_or_create_nest(key[:-1])
-         last_key = key[-1]
-         if last_key in cont:
-             list_ = cont[last_key]
-             if not isinstance(list_, list):
-                 raise KeyError("An object other than list found behind this key")
-             list_.append({})
-         else:
-             cont[last_key] = [{}]
-
-
- class Output(NamedTuple):
-     data: NestedDict
-     flags: Flags
-
-
- def skip_chars(src: str, pos: Pos, chars: Iterable[str]) -> Pos:
-     try:
-         while src[pos] in chars:
-             pos += 1
-     except IndexError:
-         pass
-     return pos
-
-
- def skip_until(
-     src: str,
-     pos: Pos,
-     expect: str,
-     *,
-     error_on: frozenset[str],
-     error_on_eof: bool,
- ) -> Pos:
-     try:
-         new_pos = src.index(expect, pos)
-     except ValueError:
-         new_pos = len(src)
-         if error_on_eof:
-             raise suffixed_err(src, new_pos, f"Expected {expect!r}") from None
-
-     if not error_on.isdisjoint(src[pos:new_pos]):
-         while src[pos] not in error_on:
-             pos += 1
-         raise suffixed_err(src, pos, f"Found invalid character {src[pos]!r}")
-     return new_pos
-
-
- def skip_comment(src: str, pos: Pos) -> Pos:
-     try:
-         char: str | None = src[pos]
-     except IndexError:
-         char = None
-     if char == "#":
-         return skip_until(
-             src, pos + 1, "\n", error_on=ILLEGAL_COMMENT_CHARS, error_on_eof=False
-         )
-     return pos
-
-
- def skip_comments_and_array_ws(src: str, pos: Pos) -> Pos:
-     while True:
-         pos_before_skip = pos
-         pos = skip_chars(src, pos, TOML_WS_AND_NEWLINE)
-         pos = skip_comment(src, pos)
-         if pos == pos_before_skip:
-             return pos
-
-
- def create_dict_rule(src: str, pos: Pos, out: Output) -> tuple[Pos, Key]:
-     pos += 1  # Skip "["
-     pos = skip_chars(src, pos, TOML_WS)
-     pos, key = parse_key(src, pos)
-
-     if out.flags.is_(key, Flags.EXPLICIT_NEST) or out.flags.is_(key, Flags.FROZEN):
-         raise suffixed_err(src, pos, f"Cannot declare {key} twice")
-     out.flags.set(key, Flags.EXPLICIT_NEST, recursive=False)
-     try:
-         out.data.get_or_create_nest(key)
-     except KeyError:
-         raise suffixed_err(src, pos, "Cannot overwrite a value") from None
-
-     if not src.startswith("]", pos):
-         raise suffixed_err(src, pos, "Expected ']' at the end of a table declaration")
-     return pos + 1, key
-
-
- def create_list_rule(src: str, pos: Pos, out: Output) -> tuple[Pos, Key]:
-     pos += 2  # Skip "[["
-     pos = skip_chars(src, pos, TOML_WS)
-     pos, key = parse_key(src, pos)
-
-     if out.flags.is_(key, Flags.FROZEN):
-         raise suffixed_err(src, pos, f"Cannot mutate immutable namespace {key}")
-     # Free the namespace now that it points to another empty list item...
-     out.flags.unset_all(key)
-     # ...but this key precisely is still prohibited from table declaration
-     out.flags.set(key, Flags.EXPLICIT_NEST, recursive=False)
-     try:
-         out.data.append_nest_to_list(key)
-     except KeyError:
-         raise suffixed_err(src, pos, "Cannot overwrite a value") from None
-
-     if not src.startswith("]]", pos):
-         raise suffixed_err(src, pos, "Expected ']]' at the end of an array declaration")
-     return pos + 2, key
-
-
- def key_value_rule(
-     src: str, pos: Pos, out: Output, header: Key, parse_float: ParseFloat
- ) -> Pos:
-     pos, key, value = parse_key_value_pair(src, pos, parse_float)
-     key_parent, key_stem = key[:-1], key[-1]
-     abs_key_parent = header + key_parent
-
-     relative_path_cont_keys = (header + key[:i] for i in range(1, len(key)))
-     for cont_key in relative_path_cont_keys:
-         # Check that dotted key syntax does not redefine an existing table
-         if out.flags.is_(cont_key, Flags.EXPLICIT_NEST):
-             raise suffixed_err(src, pos, f"Cannot redefine namespace {cont_key}")
-         # Containers in the relative path can't be opened with the table syntax or
-         # dotted key/value syntax in following table sections.
-         out.flags.add_pending(cont_key, Flags.EXPLICIT_NEST)
-
-     if out.flags.is_(abs_key_parent, Flags.FROZEN):
-         raise suffixed_err(
-             src, pos, f"Cannot mutate immutable namespace {abs_key_parent}"
-         )
-
-     try:
-         nest = out.data.get_or_create_nest(abs_key_parent)
-     except KeyError:
-         raise suffixed_err(src, pos, "Cannot overwrite a value") from None
-     if key_stem in nest:
-         raise suffixed_err(src, pos, "Cannot overwrite a value")
-     # Mark inline table and array namespaces recursively immutable
-     if isinstance(value, (dict, list)):
-         out.flags.set(header + key, Flags.FROZEN, recursive=True)
-     nest[key_stem] = value
-     return pos
-
-
- def parse_key_value_pair(
-     src: str, pos: Pos, parse_float: ParseFloat
- ) -> tuple[Pos, Key, Any]:
-     pos, key = parse_key(src, pos)
-     try:
-         char: str | None = src[pos]
-     except IndexError:
-         char = None
-     if char != "=":
-         raise suffixed_err(src, pos, "Expected '=' after a key in a key/value pair")
-     pos += 1
-     pos = skip_chars(src, pos, TOML_WS)
-     pos, value = parse_value(src, pos, parse_float)
-     return pos, key, value
-
-
- def parse_key(src: str, pos: Pos) -> tuple[Pos, Key]:
-     pos, key_part = parse_key_part(src, pos)
-     key: Key = (key_part,)
-     pos = skip_chars(src, pos, TOML_WS)
-     while True:
-         try:
-             char: str | None = src[pos]
-         except IndexError:
-             char = None
-         if char != ".":
-             return pos, key
-         pos += 1
-         pos = skip_chars(src, pos, TOML_WS)
-         pos, key_part = parse_key_part(src, pos)
-         key += (key_part,)
-         pos = skip_chars(src, pos, TOML_WS)
-
-
- def parse_key_part(src: str, pos: Pos) -> tuple[Pos, str]:
-     try:
-         char: str | None = src[pos]
-     except IndexError:
-         char = None
-     if char in BARE_KEY_CHARS:
-         start_pos = pos
-         pos = skip_chars(src, pos, BARE_KEY_CHARS)
-         return pos, src[start_pos:pos]
-     if char == "'":
-         return parse_literal_str(src, pos)
-     if char == '"':
-         return parse_one_line_basic_str(src, pos)
-     raise suffixed_err(src, pos, "Invalid initial character for a key part")
-
-
- def parse_one_line_basic_str(src: str, pos: Pos) -> tuple[Pos, str]:
-     pos += 1
-     return parse_basic_str(src, pos, multiline=False)
-
-
- def parse_array(src: str, pos: Pos, parse_float: ParseFloat) -> tuple[Pos, list]:
-     pos += 1
-     array: list = []
-
-     pos = skip_comments_and_array_ws(src, pos)
-     if src.startswith("]", pos):
-         return pos + 1, array
-     while True:
-         pos, val = parse_value(src, pos, parse_float)
-         array.append(val)
-         pos = skip_comments_and_array_ws(src, pos)
-
-         c = src[pos : pos + 1]
-         if c == "]":
-             return pos + 1, array
-         if c != ",":
-             raise suffixed_err(src, pos, "Unclosed array")
-         pos += 1
-
-         pos = skip_comments_and_array_ws(src, pos)
-         if src.startswith("]", pos):
-             return pos + 1, array
-
-
- def parse_inline_table(src: str, pos: Pos, parse_float: ParseFloat) -> tuple[Pos, dict]:
-     pos += 1
-     nested_dict = NestedDict()
-     flags = Flags()
-
-     pos = skip_chars(src, pos, TOML_WS)
-     if src.startswith("}", pos):
-         return pos + 1, nested_dict.dict
-     while True:
-         pos, key, value = parse_key_value_pair(src, pos, parse_float)
-         key_parent, key_stem = key[:-1], key[-1]
-         if flags.is_(key, Flags.FROZEN):
-             raise suffixed_err(src, pos, f"Cannot mutate immutable namespace {key}")
-         try:
-             nest = nested_dict.get_or_create_nest(key_parent, access_lists=False)
-         except KeyError:
-             raise suffixed_err(src, pos, "Cannot overwrite a value") from None
-         if key_stem in nest:
-             raise suffixed_err(src, pos, f"Duplicate inline table key {key_stem!r}")
-         nest[key_stem] = value
-         pos = skip_chars(src, pos, TOML_WS)
-         c = src[pos : pos + 1]
-         if c == "}":
-             return pos + 1, nested_dict.dict
-         if c != ",":
-             raise suffixed_err(src, pos, "Unclosed inline table")
-         if isinstance(value, (dict, list)):
-             flags.set(key, Flags.FROZEN, recursive=True)
-         pos += 1
-         pos = skip_chars(src, pos, TOML_WS)
-
-
- def parse_basic_str_escape(
-     src: str, pos: Pos, *, multiline: bool = False
- ) -> tuple[Pos, str]:
-     escape_id = src[pos : pos + 2]
-     pos += 2
-     if multiline and escape_id in {"\\ ", "\\\t", "\\\n"}:
-         # Skip whitespace until next non-whitespace character or end of
-         # the doc. Error if non-whitespace is found before newline.
-         if escape_id != "\\\n":
-             pos = skip_chars(src, pos, TOML_WS)
-             try:
-                 char = src[pos]
-             except IndexError:
-                 return pos, ""
-             if char != "\n":
-                 raise suffixed_err(src, pos, "Unescaped '\\' in a string")
-             pos += 1
-         pos = skip_chars(src, pos, TOML_WS_AND_NEWLINE)
-         return pos, ""
-     if escape_id == "\\u":
-         return parse_hex_char(src, pos, 4)
-     if escape_id == "\\U":
-         return parse_hex_char(src, pos, 8)
-     try:
-         return pos, BASIC_STR_ESCAPE_REPLACEMENTS[escape_id]
-     except KeyError:
-         raise suffixed_err(src, pos, "Unescaped '\\' in a string") from None
-
-
- def parse_basic_str_escape_multiline(src: str, pos: Pos) -> tuple[Pos, str]:
-     return parse_basic_str_escape(src, pos, multiline=True)
-
-
- def parse_hex_char(src: str, pos: Pos, hex_len: int) -> tuple[Pos, str]:
-     hex_str = src[pos : pos + hex_len]
-     if len(hex_str) != hex_len or not HEXDIGIT_CHARS.issuperset(hex_str):
-         raise suffixed_err(src, pos, "Invalid hex value")
-     pos += hex_len
-     hex_int = int(hex_str, 16)
-     if not is_unicode_scalar_value(hex_int):
-         raise suffixed_err(src, pos, "Escaped character is not a Unicode scalar value")
-     return pos, chr(hex_int)
-
-
- def parse_literal_str(src: str, pos: Pos) -> tuple[Pos, str]:
-     pos += 1  # Skip starting apostrophe
-     start_pos = pos
-     pos = skip_until(
-         src, pos, "'", error_on=ILLEGAL_LITERAL_STR_CHARS, error_on_eof=True
-     )
-     return pos + 1, src[start_pos:pos]  # Skip ending apostrophe
-
-
- def parse_multiline_str(src: str, pos: Pos, *, literal: bool) -> tuple[Pos, str]:
-     pos += 3
-     if src.startswith("\n", pos):
-         pos += 1
-
-     if literal:
-         delim = "'"
-         end_pos = skip_until(
-             src,
-             pos,
-             "'''",
-             error_on=ILLEGAL_MULTILINE_LITERAL_STR_CHARS,
-             error_on_eof=True,
-         )
-         result = src[pos:end_pos]
-         pos = end_pos + 3
-     else:
-         delim = '"'
-         pos, result = parse_basic_str(src, pos, multiline=True)
-
-     # Add at maximum two extra apostrophes/quotes if the end sequence
-     # is 4 or 5 chars long instead of just 3.
-     if not src.startswith(delim, pos):
-         return pos, result
-     pos += 1
-     if not src.startswith(delim, pos):
-         return pos, result + delim
-     pos += 1
-     return pos, result + (delim * 2)
-
-
- def parse_basic_str(src: str, pos: Pos, *, multiline: bool) -> tuple[Pos, str]:
-     if multiline:
-         error_on = ILLEGAL_MULTILINE_BASIC_STR_CHARS
-         parse_escapes = parse_basic_str_escape_multiline
-     else:
-         error_on = ILLEGAL_BASIC_STR_CHARS
-         parse_escapes = parse_basic_str_escape
559
- result = ""
560
- start_pos = pos
561
- while True:
562
- try:
563
- char = src[pos]
564
- except IndexError:
565
- raise suffixed_err(src, pos, "Unterminated string") from None
566
- if char == '"':
567
- if not multiline:
568
- return pos + 1, result + src[start_pos:pos]
569
- if src.startswith('"""', pos):
570
- return pos + 3, result + src[start_pos:pos]
571
- pos += 1
572
- continue
573
- if char == "\\":
574
- result += src[start_pos:pos]
575
- pos, parsed_escape = parse_escapes(src, pos)
576
- result += parsed_escape
577
- start_pos = pos
578
- continue
579
- if char in error_on:
580
- raise suffixed_err(src, pos, f"Illegal character {char!r}")
581
- pos += 1
582
-
583
-
584
- def parse_value( # noqa: C901
585
- src: str, pos: Pos, parse_float: ParseFloat
586
- ) -> tuple[Pos, Any]:
587
- try:
588
- char: str | None = src[pos]
589
- except IndexError:
590
- char = None
591
-
592
- # IMPORTANT: order conditions based on speed of checking and likelihood
593
-
594
- # Basic strings
595
- if char == '"':
596
- if src.startswith('"""', pos):
597
- return parse_multiline_str(src, pos, literal=False)
598
- return parse_one_line_basic_str(src, pos)
599
-
600
- # Literal strings
601
- if char == "'":
602
- if src.startswith("'''", pos):
603
- return parse_multiline_str(src, pos, literal=True)
604
- return parse_literal_str(src, pos)
605
-
606
- # Booleans
607
- if char == "t":
608
- if src.startswith("true", pos):
609
- return pos + 4, True
610
- if char == "f":
611
- if src.startswith("false", pos):
612
- return pos + 5, False
613
-
614
- # Arrays
615
- if char == "[":
616
- return parse_array(src, pos, parse_float)
617
-
618
- # Inline tables
619
- if char == "{":
620
- return parse_inline_table(src, pos, parse_float)
621
-
622
- # Dates and times
623
- datetime_match = RE_DATETIME.match(src, pos)
624
- if datetime_match:
625
- try:
626
- datetime_obj = match_to_datetime(datetime_match)
627
- except ValueError as e:
628
- raise suffixed_err(src, pos, "Invalid date or datetime") from e
629
- return datetime_match.end(), datetime_obj
630
- localtime_match = RE_LOCALTIME.match(src, pos)
631
- if localtime_match:
632
- return localtime_match.end(), match_to_localtime(localtime_match)
633
-
634
- # Integers and "normal" floats.
635
- # The regex will greedily match any type starting with a decimal
636
- # char, so needs to be located after handling of dates and times.
637
- number_match = RE_NUMBER.match(src, pos)
638
- if number_match:
639
- return number_match.end(), match_to_number(number_match, parse_float)
640
-
641
- # Special floats
642
- first_three = src[pos : pos + 3]
643
- if first_three in {"inf", "nan"}:
644
- return pos + 3, parse_float(first_three)
645
- first_four = src[pos : pos + 4]
646
- if first_four in {"-inf", "+inf", "-nan", "+nan"}:
647
- return pos + 4, parse_float(first_four)
648
-
649
- raise suffixed_err(src, pos, "Invalid value")
650
-
651
-
652
- def suffixed_err(src: str, pos: Pos, msg: str) -> TOMLDecodeError:
653
- """Return a `TOMLDecodeError` where error message is suffixed with
654
- coordinates in source."""
655
-
656
- def coord_repr(src: str, pos: Pos) -> str:
657
- if pos >= len(src):
658
- return "end of document"
659
- line = src.count("\n", 0, pos) + 1
660
- if line == 1:
661
- column = pos + 1
662
- else:
663
- column = pos - src.rindex("\n", 0, pos)
664
- return f"line {line}, column {column}"
665
-
666
- return TOMLDecodeError(f"{msg} (at {coord_repr(src, pos)})")
667
-
668
-
669
- def is_unicode_scalar_value(codepoint: int) -> bool:
670
- return (0 <= codepoint <= 55295) or (57344 <= codepoint <= 1114111)
671
-
672
-
673
- def make_safe_parse_float(parse_float: ParseFloat) -> ParseFloat:
674
- """A decorator to make `parse_float` safe.
675
-
676
- `parse_float` must not return dicts or lists, because these types
677
- would be mixed with parsed TOML tables and arrays, thus confusing
678
- the parser. The returned decorated callable raises `ValueError`
679
- instead of returning illegal types.
680
- """
681
- # The default `float` callable never returns illegal types. Optimize it.
682
- if parse_float is float: # type: ignore[comparison-overlap]
683
- return float
684
-
685
- def safe_parse_float(float_str: str) -> Any:
686
- float_value = parse_float(float_str)
687
- if isinstance(float_value, (dict, list)):
688
- raise ValueError("parse_float must not return dicts or lists")
689
- return float_value
690
-
691
- return safe_parse_float
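The escape helpers above reject code points in the surrogate range. A minimal standalone sketch of that check and of the hex-escape decoding, mirroring `is_unicode_scalar_value` and `parse_hex_char` from the deleted file (the simplified `parse_hex_escape` signature, which takes an already-extracted hex run instead of a position, is an assumption for illustration):

```python
def is_unicode_scalar_value(codepoint: int) -> bool:
    # Unicode scalar values are all code points except the surrogate
    # range U+D800..U+DFFF: i.e. 0..0xD7FF (55295) and 0xE000..0x10FFFF.
    return (0 <= codepoint <= 0xD7FF) or (0xE000 <= codepoint <= 0x10FFFF)


def parse_hex_escape(hex_str: str) -> str:
    # Decode a \uXXXX / \UXXXXXXXX hex run into the escaped character,
    # rejecting surrogates just like parse_hex_char does.
    hex_int = int(hex_str, 16)
    if not is_unicode_scalar_value(hex_int):
        raise ValueError("Escaped character is not a Unicode scalar value")
    return chr(hex_int)


print(parse_hex_escape("0041"))  # prints "A"
```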
spaces/BigSalmon/Paraphrase/app.py DELETED
@@ -1,41 +0,0 @@
- import torch
- from transformers import T5ForConditionalGeneration, AutoTokenizer, AutoModelForSeq2SeqLM
- import streamlit as st
- st.title("Paraphrase")
- model_name = st.text_input("Pick a Model", "geckos/pegasus-fined-tuned-on-paraphrase")
- model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
- tokenizer = AutoTokenizer.from_pretrained("geckos/pegasus-fined-tuned-on-paraphrase")
-
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- model = model.to(device)
- temp = st.sidebar.slider("Temperature", 0.7, 1.5)
- number_of_outputs = st.sidebar.slider("Number of Outputs", 1, 10)
-
- def translate_to_english(model, tokenizer, text):
-     translated_text = []
-     text = text + " </s>"
-     encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
-     input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
-     beam_outputs = model.generate(
-         input_ids=input_ids, attention_mask=attention_masks,
-         do_sample=True,
-         max_length=256,
-         temperature=temp,
-         top_k=120,
-         top_p=0.98,
-         early_stopping=True,
-         num_return_sequences=number_of_outputs,
-     )
-     for beam_output in beam_outputs:
-         sent = tokenizer.decode(beam_output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
-         print(sent)
-         translated_text.append(sent)
-     return translated_text
-
- text = st.text_input("Okay")
- st.text("What you wrote: ")
- st.write(text)
- st.text("Output: ")
- if text:
-     translated_text = translate_to_english(model, tokenizer, text)
-     st.write(translated_text if translated_text else "No translation found")
spaces/CVPR/Bamboo_ViT-B16_demo/app.py DELETED
@@ -1,105 +0,0 @@
- import argparse
- import requests
- import gradio as gr
- import numpy as np
- import cv2
- import torch
- import torch.nn as nn
- from PIL import Image
- import torchvision
- from timm.data.constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
- from timm.data import create_transform
-
- from timmvit import timmvit
- import json
- from timm.models.hub import download_cached_file
- from PIL import Image
-
- def pil_loader(filepath):
-     with Image.open(filepath) as img:
-         img = img.convert('RGB')
-     return img
-
- def build_transforms(input_size, center_crop=True):
-     transform = torchvision.transforms.Compose([
-         torchvision.transforms.ToPILImage(),
-         torchvision.transforms.Resize(input_size * 8 // 7),
-         torchvision.transforms.CenterCrop(input_size),
-         torchvision.transforms.ToTensor(),
-         torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
-     ])
-     return transform
-
- # Download human-readable labels for Bamboo.
- with open('./trainid2name.json') as f:
-     id2name = json.load(f)
-
-
- '''
- build model
- '''
- model = timmvit(pretrain_path='./Bamboo_v0-1_ViT-B16.pth.tar.convert')
- model.eval()
-
- '''
- borrow code from here: https://github.com/jacobgil/pytorch-grad-cam/blob/master/pytorch_grad_cam/utils/image.py
- '''
- def show_cam_on_image(img: np.ndarray,
-                       mask: np.ndarray,
-                       use_rgb: bool = False,
-                       colormap: int = cv2.COLORMAP_JET) -> np.ndarray:
-     """ This function overlays the cam mask on the image as an heatmap.
-     By default the heatmap is in BGR format.
-     :param img: The base image in RGB or BGR format.
-     :param mask: The cam mask.
-     :param use_rgb: Whether to use an RGB or BGR heatmap, this should be set to True if 'img' is in RGB format.
-     :param colormap: The OpenCV colormap to be used.
-     :returns: The default image with the cam overlay.
-     """
-     heatmap = cv2.applyColorMap(np.uint8(255 * mask), colormap)
-     if use_rgb:
-         heatmap = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB)
-     heatmap = np.float32(heatmap) / 255
-
-     if np.max(img) > 1:
-         raise Exception(
-             "The input image should np.float32 in the range [0, 1]")
-
-     cam = 0.7 * heatmap + 0.3 * img
-     # cam = cam / np.max(cam)
-     return np.uint8(255 * cam)
-
-
- def recognize_image(image):
-     img_t = eval_transforms(image)
-     # compute output
-     output = model(img_t.unsqueeze(0))
-     prediction = output.softmax(-1).flatten()
-     _, top5_idx = torch.topk(prediction, 5)
-     return {id2name[str(i)][0]: float(prediction[i]) for i in top5_idx.tolist()}
-
- eval_transforms = build_transforms(224)
-
-
- image = gr.inputs.Image()
- label = gr.outputs.Label(num_top_classes=5)
-
- gr.Interface(
-     description="Bamboo for Image Recognition Demo (https://github.com/Davidzhangyuanhan/Bamboo). Bamboo knows what this object is and what you are doing in a very fine-grain granularity: fratercula arctica (fig.5) and dribbler (fig.2)).",
-     fn=recognize_image,
-     inputs=["image"],
-     outputs=[
-         label,
-     ],
-     examples=[
-         ["./examples/playing_mahjong.jpg"],
-         ["./examples/dribbler.jpg"],
-         ["./examples/Ferrari-F355.jpg"],
-         ["./examples/northern_oriole.jpg"],
-         ["./examples/fratercula_arctica.jpg"],
-         ["./examples/husky.jpg"],
-         ["./examples/taraxacum_erythrospermum.jpg"],
-     ],
- ).launch()
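The `show_cam_on_image` helper in the deleted file blends a colormapped heatmap into the base image with fixed 0.7/0.3 weights. A hedged pure-NumPy sketch of just that blend step, dropping the OpenCV colormap and color-conversion parts (`blend_cam` is a hypothetical name, and the `np.clip` guard is an addition not present in the original):

```python
import numpy as np


def blend_cam(img: np.ndarray, heatmap: np.ndarray) -> np.ndarray:
    # Both inputs are float arrays in [0, 1]; output is uint8 in [0, 255].
    if np.max(img) > 1:
        raise ValueError("img should be float32 in the range [0, 1]")
    # Same weighting as show_cam_on_image: heatmap dominates at 0.7.
    cam = 0.7 * heatmap + 0.3 * img
    return np.uint8(255 * np.clip(cam, 0, 1))


out = blend_cam(np.zeros((2, 2, 3), np.float32), np.ones((2, 2, 3), np.float32))
print(out[0, 0, 0])  # 178, i.e. 0.7 * 255 truncated to uint8
```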
spaces/CVPR/LIVE/pybind11/.github/ISSUE_TEMPLATE/feature-request.md DELETED
@@ -1,16 +0,0 @@
- ---
- name: Feature Request
- about: File an issue about adding a feature
- title: "[FEAT] "
- ---
-
-
- Make sure you've completed the following steps before submitting your issue -- thank you!
-
- 1. Check if your feature has already been mentioned / rejected / planned in other issues.
- 2. If those resources didn't help, consider asking in the [Gitter chat room][] to see if this is interesting / useful to a larger audience and possible to implement reasonably.
- 3. If you have a useful feature that passes the previous items (or is not suitable for chat), please fill in the details below.
-
- [Gitter chat room]: https://gitter.im/pybind/Lobby
-
- *After reading, remove this checklist.*
spaces/CVPR/WALT/mmdet/models/backbones/resnet.py DELETED
@@ -1,663 +0,0 @@
- import torch.nn as nn
- import torch.utils.checkpoint as cp
- from mmcv.cnn import (build_conv_layer, build_norm_layer, build_plugin_layer,
-                       constant_init, kaiming_init)
- from mmcv.runner import load_checkpoint
- from torch.nn.modules.batchnorm import _BatchNorm
-
- from mmdet.utils import get_root_logger
- from ..builder import BACKBONES
- from ..utils import ResLayer
-
-
- class BasicBlock(nn.Module):
-     expansion = 1
-
-     def __init__(self,
-                  inplanes,
-                  planes,
-                  stride=1,
-                  dilation=1,
-                  downsample=None,
-                  style='pytorch',
-                  with_cp=False,
-                  conv_cfg=None,
-                  norm_cfg=dict(type='BN'),
-                  dcn=None,
-                  plugins=None):
-         super(BasicBlock, self).__init__()
-         assert dcn is None, 'Not implemented yet.'
-         assert plugins is None, 'Not implemented yet.'
-
-         self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1)
-         self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2)
-
-         self.conv1 = build_conv_layer(
-             conv_cfg,
-             inplanes,
-             planes,
-             3,
-             stride=stride,
-             padding=dilation,
-             dilation=dilation,
-             bias=False)
-         self.add_module(self.norm1_name, norm1)
-         self.conv2 = build_conv_layer(
-             conv_cfg, planes, planes, 3, padding=1, bias=False)
-         self.add_module(self.norm2_name, norm2)
-
-         self.relu = nn.ReLU(inplace=True)
-         self.downsample = downsample
-         self.stride = stride
-         self.dilation = dilation
-         self.with_cp = with_cp
-
-     @property
-     def norm1(self):
-         """nn.Module: normalization layer after the first convolution layer"""
-         return getattr(self, self.norm1_name)
-
-     @property
-     def norm2(self):
-         """nn.Module: normalization layer after the second convolution layer"""
-         return getattr(self, self.norm2_name)
-
-     def forward(self, x):
-         """Forward function."""
-
-         def _inner_forward(x):
-             identity = x
-
-             out = self.conv1(x)
-             out = self.norm1(out)
-             out = self.relu(out)
-
-             out = self.conv2(out)
-             out = self.norm2(out)
-
-             if self.downsample is not None:
-                 identity = self.downsample(x)
-
-             out += identity
-
-             return out
-
-         if self.with_cp and x.requires_grad:
-             out = cp.checkpoint(_inner_forward, x)
-         else:
-             out = _inner_forward(x)
-
-         out = self.relu(out)
-
-         return out
-
-
- class Bottleneck(nn.Module):
-     expansion = 4
-
-     def __init__(self,
-                  inplanes,
-                  planes,
-                  stride=1,
-                  dilation=1,
-                  downsample=None,
-                  style='pytorch',
-                  with_cp=False,
-                  conv_cfg=None,
-                  norm_cfg=dict(type='BN'),
-                  dcn=None,
-                  plugins=None):
-         """Bottleneck block for ResNet.
-
-         If style is "pytorch", the stride-two layer is the 3x3 conv layer, if
-         it is "caffe", the stride-two layer is the first 1x1 conv layer.
-         """
-         super(Bottleneck, self).__init__()
-         assert style in ['pytorch', 'caffe']
-         assert dcn is None or isinstance(dcn, dict)
-         assert plugins is None or isinstance(plugins, list)
-         if plugins is not None:
-             allowed_position = ['after_conv1', 'after_conv2', 'after_conv3']
-             assert all(p['position'] in allowed_position for p in plugins)
-
-         self.inplanes = inplanes
-         self.planes = planes
-         self.stride = stride
-         self.dilation = dilation
-         self.style = style
-         self.with_cp = with_cp
-         self.conv_cfg = conv_cfg
-         self.norm_cfg = norm_cfg
-         self.dcn = dcn
-         self.with_dcn = dcn is not None
-         self.plugins = plugins
-         self.with_plugins = plugins is not None
-
-         if self.with_plugins:
-             # collect plugins for conv1/conv2/conv3
-             self.after_conv1_plugins = [
-                 plugin['cfg'] for plugin in plugins
-                 if plugin['position'] == 'after_conv1'
-             ]
-             self.after_conv2_plugins = [
-                 plugin['cfg'] for plugin in plugins
-                 if plugin['position'] == 'after_conv2'
-             ]
-             self.after_conv3_plugins = [
-                 plugin['cfg'] for plugin in plugins
-                 if plugin['position'] == 'after_conv3'
-             ]
-
-         if self.style == 'pytorch':
-             self.conv1_stride = 1
-             self.conv2_stride = stride
-         else:
-             self.conv1_stride = stride
-             self.conv2_stride = 1
-
-         self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1)
-         self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2)
-         self.norm3_name, norm3 = build_norm_layer(
-             norm_cfg, planes * self.expansion, postfix=3)
-
-         self.conv1 = build_conv_layer(
-             conv_cfg,
-             inplanes,
-             planes,
-             kernel_size=1,
-             stride=self.conv1_stride,
-             bias=False)
-         self.add_module(self.norm1_name, norm1)
-         fallback_on_stride = False
-         if self.with_dcn:
-             fallback_on_stride = dcn.pop('fallback_on_stride', False)
-         if not self.with_dcn or fallback_on_stride:
-             self.conv2 = build_conv_layer(
-                 conv_cfg,
-                 planes,
-                 planes,
-                 kernel_size=3,
-                 stride=self.conv2_stride,
-                 padding=dilation,
-                 dilation=dilation,
-                 bias=False)
-         else:
-             assert self.conv_cfg is None, 'conv_cfg must be None for DCN'
-             self.conv2 = build_conv_layer(
-                 dcn,
-                 planes,
-                 planes,
-                 kernel_size=3,
-                 stride=self.conv2_stride,
-                 padding=dilation,
-                 dilation=dilation,
-                 bias=False)
-
-         self.add_module(self.norm2_name, norm2)
-         self.conv3 = build_conv_layer(
-             conv_cfg,
-             planes,
-             planes * self.expansion,
-             kernel_size=1,
-             bias=False)
-         self.add_module(self.norm3_name, norm3)
-
-         self.relu = nn.ReLU(inplace=True)
-         self.downsample = downsample
-
-         if self.with_plugins:
-             self.after_conv1_plugin_names = self.make_block_plugins(
-                 planes, self.after_conv1_plugins)
-             self.after_conv2_plugin_names = self.make_block_plugins(
-                 planes, self.after_conv2_plugins)
-             self.after_conv3_plugin_names = self.make_block_plugins(
-                 planes * self.expansion, self.after_conv3_plugins)
-
-     def make_block_plugins(self, in_channels, plugins):
-         """make plugins for block.
-
-         Args:
-             in_channels (int): Input channels of plugin.
-             plugins (list[dict]): List of plugins cfg to build.
-
-         Returns:
-             list[str]: List of the names of plugin.
-         """
-         assert isinstance(plugins, list)
-         plugin_names = []
-         for plugin in plugins:
-             plugin = plugin.copy()
-             name, layer = build_plugin_layer(
-                 plugin,
-                 in_channels=in_channels,
-                 postfix=plugin.pop('postfix', ''))
-             assert not hasattr(self, name), f'duplicate plugin {name}'
-             self.add_module(name, layer)
-             plugin_names.append(name)
-         return plugin_names
-
-     def forward_plugin(self, x, plugin_names):
-         out = x
-         for name in plugin_names:
-             # Chain the plugins: each one consumes the previous output.
-             out = getattr(self, name)(out)
-         return out
-
-     @property
-     def norm1(self):
-         """nn.Module: normalization layer after the first convolution layer"""
-         return getattr(self, self.norm1_name)
-
-     @property
-     def norm2(self):
-         """nn.Module: normalization layer after the second convolution layer"""
-         return getattr(self, self.norm2_name)
-
-     @property
-     def norm3(self):
-         """nn.Module: normalization layer after the third convolution layer"""
-         return getattr(self, self.norm3_name)
-
-     def forward(self, x):
-         """Forward function."""
-
-         def _inner_forward(x):
-             identity = x
-             out = self.conv1(x)
-             out = self.norm1(out)
-             out = self.relu(out)
-
-             if self.with_plugins:
-                 out = self.forward_plugin(out, self.after_conv1_plugin_names)
-
-             out = self.conv2(out)
-             out = self.norm2(out)
-             out = self.relu(out)
-
-             if self.with_plugins:
-                 out = self.forward_plugin(out, self.after_conv2_plugin_names)
-
-             out = self.conv3(out)
-             out = self.norm3(out)
-
-             if self.with_plugins:
-                 out = self.forward_plugin(out, self.after_conv3_plugin_names)
-
-             if self.downsample is not None:
-                 identity = self.downsample(x)
-
-             out += identity
-
-             return out
-
-         if self.with_cp and x.requires_grad:
-             out = cp.checkpoint(_inner_forward, x)
-         else:
-             out = _inner_forward(x)
-
-         out = self.relu(out)
-
-         return out
-
-
- @BACKBONES.register_module()
- class ResNet(nn.Module):
-     """ResNet backbone.
-
-     Args:
-         depth (int): Depth of resnet, from {18, 34, 50, 101, 152}.
-         stem_channels (int | None): Number of stem channels. If not specified,
-             it will be the same as `base_channels`. Default: None.
-         base_channels (int): Number of base channels of res layer. Default: 64.
-         in_channels (int): Number of input image channels. Default: 3.
-         num_stages (int): Resnet stages. Default: 4.
-         strides (Sequence[int]): Strides of the first block of each stage.
-         dilations (Sequence[int]): Dilation of each stage.
-         out_indices (Sequence[int]): Output from which stages.
-         style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two
-             layer is the 3x3 conv layer, otherwise the stride-two layer is
-             the first 1x1 conv layer.
-         deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv
-         avg_down (bool): Use AvgPool instead of stride conv when
-             downsampling in the bottleneck.
-         frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
-             -1 means not freezing any parameters.
-         norm_cfg (dict): Dictionary to construct and config norm layer.
-         norm_eval (bool): Whether to set norm layers to eval mode, namely,
-             freeze running stats (mean and var). Note: Effect on Batch Norm
-             and its variants only.
-         plugins (list[dict]): List of plugins for stages, each dict contains:
-
-             - cfg (dict, required): Cfg dict to build plugin.
-             - position (str, required): Position inside block to insert
-               plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'.
-             - stages (tuple[bool], optional): Stages to apply plugin, length
-               should be same as 'num_stages'.
-         with_cp (bool): Use checkpoint or not. Using checkpoint will save some
-             memory while slowing down the training speed.
-         zero_init_residual (bool): Whether to use zero init for last norm layer
-             in resblocks to let them behave as identity.
-
-     Example:
-         >>> from mmdet.models import ResNet
-         >>> import torch
-         >>> self = ResNet(depth=18)
-         >>> self.eval()
-         >>> inputs = torch.rand(1, 3, 32, 32)
-         >>> level_outputs = self.forward(inputs)
-         >>> for level_out in level_outputs:
-         ...     print(tuple(level_out.shape))
-         (1, 64, 8, 8)
-         (1, 128, 4, 4)
-         (1, 256, 2, 2)
-         (1, 512, 1, 1)
-     """
-
-     arch_settings = {
-         18: (BasicBlock, (2, 2, 2, 2)),
-         34: (BasicBlock, (3, 4, 6, 3)),
-         50: (Bottleneck, (3, 4, 6, 3)),
-         101: (Bottleneck, (3, 4, 23, 3)),
-         152: (Bottleneck, (3, 8, 36, 3))
-     }
-
-     def __init__(self,
-                  depth,
-                  in_channels=3,
-                  stem_channels=None,
-                  base_channels=64,
-                  num_stages=4,
-                  strides=(1, 2, 2, 2),
-                  dilations=(1, 1, 1, 1),
-                  out_indices=(0, 1, 2, 3),
-                  style='pytorch',
-                  deep_stem=False,
-                  avg_down=False,
-                  frozen_stages=-1,
-                  conv_cfg=None,
-                  norm_cfg=dict(type='BN', requires_grad=True),
-                  norm_eval=True,
-                  dcn=None,
-                  stage_with_dcn=(False, False, False, False),
-                  plugins=None,
-                  with_cp=False,
-                  zero_init_residual=True):
-         super(ResNet, self).__init__()
-         if depth not in self.arch_settings:
-             raise KeyError(f'invalid depth {depth} for resnet')
-         self.depth = depth
-         if stem_channels is None:
-             stem_channels = base_channels
-         self.stem_channels = stem_channels
-         self.base_channels = base_channels
-         self.num_stages = num_stages
-         assert num_stages >= 1 and num_stages <= 4
-         self.strides = strides
-         self.dilations = dilations
-         assert len(strides) == len(dilations) == num_stages
-         self.out_indices = out_indices
-         assert max(out_indices) < num_stages
-         self.style = style
-         self.deep_stem = deep_stem
-         self.avg_down = avg_down
-         self.frozen_stages = frozen_stages
-         self.conv_cfg = conv_cfg
-         self.norm_cfg = norm_cfg
-         self.with_cp = with_cp
-         self.norm_eval = norm_eval
-         self.dcn = dcn
-         self.stage_with_dcn = stage_with_dcn
-         if dcn is not None:
-             assert len(stage_with_dcn) == num_stages
-         self.plugins = plugins
-         self.zero_init_residual = zero_init_residual
-         self.block, stage_blocks = self.arch_settings[depth]
-         self.stage_blocks = stage_blocks[:num_stages]
-         self.inplanes = stem_channels
-
-         self._make_stem_layer(in_channels, stem_channels)
-
-         self.res_layers = []
-         for i, num_blocks in enumerate(self.stage_blocks):
-             stride = strides[i]
-             dilation = dilations[i]
-             dcn = self.dcn if self.stage_with_dcn[i] else None
-             if plugins is not None:
-                 stage_plugins = self.make_stage_plugins(plugins, i)
-             else:
-                 stage_plugins = None
-             planes = base_channels * 2**i
-             res_layer = self.make_res_layer(
-                 block=self.block,
-                 inplanes=self.inplanes,
-                 planes=planes,
-                 num_blocks=num_blocks,
-                 stride=stride,
-                 dilation=dilation,
-                 style=self.style,
-                 avg_down=self.avg_down,
-                 with_cp=with_cp,
-                 conv_cfg=conv_cfg,
-                 norm_cfg=norm_cfg,
-                 dcn=dcn,
-                 plugins=stage_plugins)
-             self.inplanes = planes * self.block.expansion
-             layer_name = f'layer{i + 1}'
-             self.add_module(layer_name, res_layer)
-             self.res_layers.append(layer_name)
-
-         self._freeze_stages()
-
-         self.feat_dim = self.block.expansion * base_channels * 2**(
-             len(self.stage_blocks) - 1)
-
-     def make_stage_plugins(self, plugins, stage_idx):
-         """Make plugins for ResNet ``stage_idx`` th stage.
-
-         Currently we support to insert ``context_block``,
-         ``empirical_attention_block``, ``nonlocal_block`` into the backbone
-         like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of
-         Bottleneck.
-
-         An example of plugins format could be:
-
-         Examples:
-             >>> plugins=[
-             ...     dict(cfg=dict(type='xxx', arg1='xxx'),
-             ...          stages=(False, True, True, True),
-             ...          position='after_conv2'),
-             ...     dict(cfg=dict(type='yyy'),
-             ...          stages=(True, True, True, True),
-             ...          position='after_conv3'),
-             ...     dict(cfg=dict(type='zzz', postfix='1'),
-             ...          stages=(True, True, True, True),
-             ...          position='after_conv3'),
-             ...     dict(cfg=dict(type='zzz', postfix='2'),
-             ...          stages=(True, True, True, True),
-             ...          position='after_conv3')
-             ... ]
-             >>> self = ResNet(depth=18)
-             >>> stage_plugins = self.make_stage_plugins(plugins, 0)
-             >>> assert len(stage_plugins) == 3
-
-         Suppose ``stage_idx=0``, the structure of blocks in the stage would be:
-
-         .. code-block:: none
-
-             conv1-> conv2->conv3->yyy->zzz1->zzz2
-
-         Suppose 'stage_idx=1', the structure of blocks in the stage would be:
-
-         .. code-block:: none
-
-             conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2
-
-         If stages is missing, the plugin would be applied to all stages.
-
-         Args:
-             plugins (list[dict]): List of plugins cfg to build. The postfix is
-                 required if multiple same type plugins are inserted.
-             stage_idx (int): Index of stage to build
-
-         Returns:
-             list[dict]: Plugins for current stage
-         """
-         stage_plugins = []
-         for plugin in plugins:
-             plugin = plugin.copy()
-             stages = plugin.pop('stages', None)
-             assert stages is None or len(stages) == self.num_stages
-             # whether to insert plugin into current stage
-             if stages is None or stages[stage_idx]:
-                 stage_plugins.append(plugin)
-
-         return stage_plugins
-
-     def make_res_layer(self, **kwargs):
-         """Pack all blocks in a stage into a ``ResLayer``."""
-         return ResLayer(**kwargs)
-
-     @property
-     def norm1(self):
-         """nn.Module: the normalization layer named "norm1" """
-         return getattr(self, self.norm1_name)
-
-     def _make_stem_layer(self, in_channels, stem_channels):
-         if self.deep_stem:
-             self.stem = nn.Sequential(
-                 build_conv_layer(
-                     self.conv_cfg,
-                     in_channels,
-                     stem_channels // 2,
-                     kernel_size=3,
-                     stride=2,
-                     padding=1,
-                     bias=False),
-                 build_norm_layer(self.norm_cfg, stem_channels // 2)[1],
-                 nn.ReLU(inplace=True),
-                 build_conv_layer(
-                     self.conv_cfg,
-                     stem_channels // 2,
-                     stem_channels // 2,
-                     kernel_size=3,
-                     stride=1,
-                     padding=1,
-                     bias=False),
-                 build_norm_layer(self.norm_cfg, stem_channels // 2)[1],
-                 nn.ReLU(inplace=True),
-                 build_conv_layer(
-                     self.conv_cfg,
-                     stem_channels // 2,
-                     stem_channels,
-                     kernel_size=3,
-                     stride=1,
-                     padding=1,
-                     bias=False),
-                 build_norm_layer(self.norm_cfg, stem_channels)[1],
-                 nn.ReLU(inplace=True))
-         else:
-             self.conv1 = build_conv_layer(
-                 self.conv_cfg,
-                 in_channels,
-                 stem_channels,
-                 kernel_size=7,
-                 stride=2,
-                 padding=3,
-                 bias=False)
-             self.norm1_name, norm1 = build_norm_layer(
-                 self.norm_cfg, stem_channels, postfix=1)
-             self.add_module(self.norm1_name, norm1)
-             self.relu = nn.ReLU(inplace=True)
-         self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
-     def _freeze_stages(self):
-         if self.frozen_stages >= 0:
-             if self.deep_stem:
-                 self.stem.eval()
-                 for param in self.stem.parameters():
-                     param.requires_grad = False
-             else:
-                 self.norm1.eval()
-                 for m in [self.conv1, self.norm1]:
-                     for param in m.parameters():
-                         param.requires_grad = False
-
-         for i in range(1, self.frozen_stages + 1):
-             m = getattr(self, f'layer{i}')
-             m.eval()
-             for param in m.parameters():
-                 param.requires_grad = False
-
-     def init_weights(self, pretrained=None):
-         """Initialize the weights in backbone.
-
-         Args:
-             pretrained (str, optional): Path to pre-trained weights.
-                 Defaults to None.
-         """
-         if isinstance(pretrained, str):
-             logger = get_root_logger()
-             load_checkpoint(self, pretrained, strict=False, logger=logger)
-         elif pretrained is None:
-             for m in self.modules():
-                 if isinstance(m, nn.Conv2d):
-                     kaiming_init(m)
-                 elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
-                     constant_init(m, 1)
-
-             if self.dcn is not None:
-                 for m in self.modules():
-                     if isinstance(m, Bottleneck) and hasattr(
-                             m.conv2, 'conv_offset'):
-                         constant_init(m.conv2.conv_offset, 0)
-
-             if self.zero_init_residual:
-                 for m in self.modules():
-                     if isinstance(m, Bottleneck):
-                         constant_init(m.norm3, 0)
-                     elif isinstance(m, BasicBlock):
-                         constant_init(m.norm2, 0)
-         else:
-             raise TypeError('pretrained must be a str or None')
-
-     def forward(self, x):
-         """Forward function."""
-         if self.deep_stem:
-             x = self.stem(x)
-         else:
-             x = self.conv1(x)
-             x = self.norm1(x)
-             x = self.relu(x)
-         x = self.maxpool(x)
-         outs = []
-         for i, layer_name in enumerate(self.res_layers):
633
- res_layer = getattr(self, layer_name)
634
- x = res_layer(x)
635
- if i in self.out_indices:
636
- outs.append(x)
637
- return tuple(outs)
638
-
639
- def train(self, mode=True):
640
- """Convert the model into training mode while keep normalization layer
641
- freezed."""
642
- super(ResNet, self).train(mode)
643
- self._freeze_stages()
644
- if mode and self.norm_eval:
645
- for m in self.modules():
646
- # trick: eval have effect on BatchNorm only
647
- if isinstance(m, _BatchNorm):
648
- m.eval()
649
-
650
-
651
- @BACKBONES.register_module()
652
- class ResNetV1d(ResNet):
653
- r"""ResNetV1d variant described in `Bag of Tricks
654
- <https://arxiv.org/pdf/1812.01187.pdf>`_.
655
-
656
- Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in
657
- the input stem with three 3x3 convs. And in the downsampling block, a 2x2
658
- avg_pool with stride 2 is added before conv, whose stride is changed to 1.
659
- """
660
-
661
- def __init__(self, **kwargs):
662
- super(ResNetV1d, self).__init__(
663
- deep_stem=True, avg_down=True, **kwargs)
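
The ResNetV1d docstring above describes swapping the single 7x7 stem conv for three 3x3 convs. As a rough back-of-the-envelope sketch of what that swap costs in weights (ignoring BN and bias, and assuming the usual 64-channel stem with a 32-channel intermediate width, as in `_make_stem_layer` when `stem_channels=64`):

```python
def conv_weights(k, c_in, c_out):
    """Weight count of a single k x k convolution (bias and BN ignored)."""
    return k * k * c_in * c_out

# Classic stem: one 7x7 conv, 3 -> 64 channels
stem_7x7 = conv_weights(7, 3, 64)

# Deep stem: three 3x3 convs, 3 -> 32 -> 32 -> 64 channels
deep_stem = (conv_weights(3, 3, 32)
             + conv_weights(3, 32, 32)
             + conv_weights(3, 32, 64))

print(stem_7x7, deep_stem)  # 9408 28512
```

The deep stem spends more weights, but the stack of three 3x3 convs keeps the same overall stride and 7x7 receptive field while adding two extra nonlinearities.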
spaces/CVPR/regionclip-demo/detectron2/modeling/matcher.py DELETED
@@ -1,126 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- from typing import List
- import torch
-
- from detectron2.layers import nonzero_tuple
-
-
- class Matcher(object):
-     """
-     This class assigns to each predicted "element" (e.g., a box) a ground-truth
-     element. Each predicted element will have exactly zero or one matches; each
-     ground-truth element may be matched to zero or more predicted elements.
-
-     The matching is determined by the MxN match_quality_matrix, which
-     characterizes how well each (ground-truth, prediction) pair matches. For
-     example, if the elements are boxes, this matrix may contain box
-     intersection-over-union overlap values.
-
-     The matcher returns (a) a vector of length N containing the index of the
-     ground-truth element m in [0, M) that matches prediction n in [0, N);
-     (b) a vector of length N containing the labels for each prediction.
-     """
-
-     def __init__(
-         self, thresholds: List[float], labels: List[int], allow_low_quality_matches: bool = False
-     ):
-         """
-         Args:
-             thresholds (list): a list of thresholds used to stratify predictions
-                 into levels.
-             labels (list): a list of values to label predictions belonging at
-                 each level. A label can be one of {-1, 0, 1} signifying
-                 {ignore, negative class, positive class}, respectively.
-             allow_low_quality_matches (bool): if True, produce additional matches
-                 for predictions with maximum match quality lower than high_threshold.
-                 See set_low_quality_matches_ for more details.
-
-             For example,
-                 thresholds = [0.3, 0.5]
-                 labels = [0, -1, 1]
-                 All predictions with iou < 0.3 will be marked with 0 and
-                 thus will be considered as false positives while training.
-                 All predictions with 0.3 <= iou < 0.5 will be marked with -1 and
-                 thus will be ignored.
-                 All predictions with 0.5 <= iou will be marked with 1 and
-                 thus will be considered as true positives.
-         """
-         # Add -inf and +inf to first and last position in thresholds
-         thresholds = thresholds[:]
-         assert thresholds[0] > 0
-         thresholds.insert(0, -float("inf"))
-         thresholds.append(float("inf"))
-         # Currently torchscript does not support all + generator
-         assert all([low <= high for (low, high) in zip(thresholds[:-1], thresholds[1:])])
-         assert all([l in [-1, 0, 1] for l in labels])
-         assert len(labels) == len(thresholds) - 1
-         self.thresholds = thresholds
-         self.labels = labels
-         self.allow_low_quality_matches = allow_low_quality_matches
-
-     def __call__(self, match_quality_matrix):
-         """
-         Args:
-             match_quality_matrix (Tensor[float]): an MxN tensor, containing the
-                 pairwise quality between M ground-truth elements and N predicted
-                 elements. All elements must be >= 0 (due to the use of
-                 `torch.nonzero` for selecting indices in
-                 :meth:`set_low_quality_matches_`).
-
-         Returns:
-             matches (Tensor[int64]): a vector of length N, where matches[i] is a matched
-                 ground-truth index in [0, M)
-             match_labels (Tensor[int8]): a vector of length N, where pred_labels[i] indicates
-                 whether a prediction is a true or false positive or ignored
-         """
-         assert match_quality_matrix.dim() == 2
-         if match_quality_matrix.numel() == 0:
-             default_matches = match_quality_matrix.new_full(
-                 (match_quality_matrix.size(1),), 0, dtype=torch.int64
-             )
-             # When no gt boxes exist, we define IOU = 0 and therefore set labels
-             # to `self.labels[0]`, which usually defaults to background class 0.
-             # To ignore instead, set labels=[-1,0,-1,1] with appropriate thresholds
-             default_match_labels = match_quality_matrix.new_full(
-                 (match_quality_matrix.size(1),), self.labels[0], dtype=torch.int8
-             )
-             return default_matches, default_match_labels
-
-         assert torch.all(match_quality_matrix >= 0)
-
-         # match_quality_matrix is M (gt) x N (predicted)
-         # Max over gt elements (dim 0) to find best gt candidate for each prediction
-         matched_vals, matches = match_quality_matrix.max(dim=0)
-
-         match_labels = matches.new_full(matches.size(), 1, dtype=torch.int8)
-
-         for (l, low, high) in zip(self.labels, self.thresholds[:-1], self.thresholds[1:]):
-             low_high = (matched_vals >= low) & (matched_vals < high)
-             match_labels[low_high] = l
-
-         if self.allow_low_quality_matches:
-             self.set_low_quality_matches_(match_labels, match_quality_matrix)
-
-         return matches, match_labels
-
-     def set_low_quality_matches_(self, match_labels, match_quality_matrix):
-         """
-         Produce additional matches for predictions that have only low-quality matches.
-         Specifically, for each ground-truth G find the set of predictions that have
-         maximum overlap with it (including ties); for each prediction in that set, if
-         it is unmatched, then match it to the ground-truth G.
-
-         This function implements the RPN assignment case (i) in Sec. 3.1.2 of
-         :paper:`Faster R-CNN`.
-         """
-         # For each gt, find the prediction with which it has highest quality
-         highest_quality_foreach_gt, _ = match_quality_matrix.max(dim=1)
-         # Find the highest quality match available, even if it is low, including ties.
-         # Note that the match qualities must be positive due to the use of
-         # `torch.nonzero`.
-         _, pred_inds_with_highest_quality = nonzero_tuple(
-             match_quality_matrix == highest_quality_foreach_gt[:, None]
-         )
-         # If an anchor was labeled positive only due to a low-quality match
-         # with gt_A, but it has larger overlap with gt_B, its matched index will still be gt_B.
-         # This follows the implementation in Detectron, and is found to have no significant impact.
-         match_labels[pred_inds_with_highest_quality] = 1
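
The thresholds/labels example in the `Matcher.__init__` docstring above can be checked with a tiny pure-Python sketch of the same stratification rule; the real class does this vectorized over a torch tensor, and the function name here is illustrative:

```python
def stratify(iou, thresholds=(0.3, 0.5), labels=(0, -1, 1)):
    """Map a match quality to a label using half-open threshold bins."""
    # Pad with -inf / +inf, mirroring Matcher.__init__
    bounds = [float("-inf"), *thresholds, float("inf")]
    for label, low, high in zip(labels, bounds[:-1], bounds[1:]):
        if low <= iou < high:
            return label

print([stratify(v) for v in (0.1, 0.35, 0.5, 0.9)])  # [0, -1, 1, 1]
```

As the docstring says: below 0.3 is a negative (0), the [0.3, 0.5) band is ignored (-1), and 0.5 or above is a positive (1).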
spaces/ChandraMohanNayal/AutoGPT/autogpt/agent/__init__.py DELETED
@@ -1,4 +0,0 @@
- from autogpt.agent.agent import Agent
- from autogpt.agent.agent_manager import AgentManager
-
- __all__ = ["Agent", "AgentManager"]
spaces/Cicooo/vits-uma-genshin-honkai/attentions.py DELETED
@@ -1,300 +0,0 @@
- import math
- import torch
- from torch import nn
- from torch.nn import functional as F
-
- import commons
- from modules import LayerNorm
-
-
- class Encoder(nn.Module):
-     def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
-         super().__init__()
-         self.hidden_channels = hidden_channels
-         self.filter_channels = filter_channels
-         self.n_heads = n_heads
-         self.n_layers = n_layers
-         self.kernel_size = kernel_size
-         self.p_dropout = p_dropout
-         self.window_size = window_size
-
-         self.drop = nn.Dropout(p_dropout)
-         self.attn_layers = nn.ModuleList()
-         self.norm_layers_1 = nn.ModuleList()
-         self.ffn_layers = nn.ModuleList()
-         self.norm_layers_2 = nn.ModuleList()
-         for i in range(self.n_layers):
-             self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
-             self.norm_layers_1.append(LayerNorm(hidden_channels))
-             self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
-             self.norm_layers_2.append(LayerNorm(hidden_channels))
-
-     def forward(self, x, x_mask):
-         attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
-         x = x * x_mask
-         for i in range(self.n_layers):
-             y = self.attn_layers[i](x, x, attn_mask)
-             y = self.drop(y)
-             x = self.norm_layers_1[i](x + y)
-
-             y = self.ffn_layers[i](x, x_mask)
-             y = self.drop(y)
-             x = self.norm_layers_2[i](x + y)
-         x = x * x_mask
-         return x
-
-
- class Decoder(nn.Module):
-     def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
-         super().__init__()
-         self.hidden_channels = hidden_channels
-         self.filter_channels = filter_channels
-         self.n_heads = n_heads
-         self.n_layers = n_layers
-         self.kernel_size = kernel_size
-         self.p_dropout = p_dropout
-         self.proximal_bias = proximal_bias
-         self.proximal_init = proximal_init
-
-         self.drop = nn.Dropout(p_dropout)
-         self.self_attn_layers = nn.ModuleList()
-         self.norm_layers_0 = nn.ModuleList()
-         self.encdec_attn_layers = nn.ModuleList()
-         self.norm_layers_1 = nn.ModuleList()
-         self.ffn_layers = nn.ModuleList()
-         self.norm_layers_2 = nn.ModuleList()
-         for i in range(self.n_layers):
-             self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
-             self.norm_layers_0.append(LayerNorm(hidden_channels))
-             self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
-             self.norm_layers_1.append(LayerNorm(hidden_channels))
-             self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
-             self.norm_layers_2.append(LayerNorm(hidden_channels))
-
-     def forward(self, x, x_mask, h, h_mask):
-         """
-         x: decoder input
-         h: encoder output
-         """
-         self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
-         encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
-         x = x * x_mask
-         for i in range(self.n_layers):
-             y = self.self_attn_layers[i](x, x, self_attn_mask)
-             y = self.drop(y)
-             x = self.norm_layers_0[i](x + y)
-
-             y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
-             y = self.drop(y)
-             x = self.norm_layers_1[i](x + y)
-
-             y = self.ffn_layers[i](x, x_mask)
-             y = self.drop(y)
-             x = self.norm_layers_2[i](x + y)
-         x = x * x_mask
-         return x
-
-
- class MultiHeadAttention(nn.Module):
-     def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
-         super().__init__()
-         assert channels % n_heads == 0
-
-         self.channels = channels
-         self.out_channels = out_channels
-         self.n_heads = n_heads
-         self.p_dropout = p_dropout
-         self.window_size = window_size
-         self.heads_share = heads_share
-         self.block_length = block_length
-         self.proximal_bias = proximal_bias
-         self.proximal_init = proximal_init
-         self.attn = None
-
-         self.k_channels = channels // n_heads
-         self.conv_q = nn.Conv1d(channels, channels, 1)
-         self.conv_k = nn.Conv1d(channels, channels, 1)
-         self.conv_v = nn.Conv1d(channels, channels, 1)
-         self.conv_o = nn.Conv1d(channels, out_channels, 1)
-         self.drop = nn.Dropout(p_dropout)
-
-         if window_size is not None:
-             n_heads_rel = 1 if heads_share else n_heads
-             rel_stddev = self.k_channels**-0.5
-             self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-             self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
-         nn.init.xavier_uniform_(self.conv_q.weight)
-         nn.init.xavier_uniform_(self.conv_k.weight)
-         nn.init.xavier_uniform_(self.conv_v.weight)
-         if proximal_init:
-             with torch.no_grad():
-                 self.conv_k.weight.copy_(self.conv_q.weight)
-                 self.conv_k.bias.copy_(self.conv_q.bias)
-
-     def forward(self, x, c, attn_mask=None):
-         q = self.conv_q(x)
-         k = self.conv_k(c)
-         v = self.conv_v(c)
-
-         x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
-         x = self.conv_o(x)
-         return x
-
-     def attention(self, query, key, value, mask=None):
-         # reshape [b, d, t] -> [b, n_h, t, d_k]
-         b, d, t_s, t_t = (*key.size(), query.size(2))
-         query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
-         key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-         value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
-         scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
-         if self.window_size is not None:
-             assert t_s == t_t, "Relative attention is only available for self-attention."
-             key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
-             rel_logits = self._matmul_with_relative_keys(query / math.sqrt(self.k_channels), key_relative_embeddings)
-             scores_local = self._relative_position_to_absolute_position(rel_logits)
-             scores = scores + scores_local
-         if self.proximal_bias:
-             assert t_s == t_t, "Proximal bias is only available for self-attention."
-             scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
-         if mask is not None:
-             scores = scores.masked_fill(mask == 0, -1e4)
-             if self.block_length is not None:
-                 assert t_s == t_t, "Local attention is only available for self-attention."
-                 block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
-                 scores = scores.masked_fill(block_mask == 0, -1e4)
-         p_attn = F.softmax(scores, dim=-1)  # [b, n_h, t_t, t_s]
-         p_attn = self.drop(p_attn)
-         output = torch.matmul(p_attn, value)
-         if self.window_size is not None:
-             relative_weights = self._absolute_position_to_relative_position(p_attn)
-             value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
-             output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
-         output = output.transpose(2, 3).contiguous().view(b, d, t_t)  # [b, n_h, t_t, d_k] -> [b, d, t_t]
-         return output, p_attn
-
-     def _matmul_with_relative_values(self, x, y):
-         """
-         x: [b, h, l, m]
-         y: [h or 1, m, d]
-         ret: [b, h, l, d]
-         """
-         ret = torch.matmul(x, y.unsqueeze(0))
-         return ret
-
-     def _matmul_with_relative_keys(self, x, y):
-         """
-         x: [b, h, l, d]
-         y: [h or 1, m, d]
-         ret: [b, h, l, m]
-         """
-         ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
-         return ret
-
-     def _get_relative_embeddings(self, relative_embeddings, length):
-         max_relative_position = 2 * self.window_size + 1
-         # Pad first before slice to avoid using cond ops.
-         pad_length = max(length - (self.window_size + 1), 0)
-         slice_start_position = max((self.window_size + 1) - length, 0)
-         slice_end_position = slice_start_position + 2 * length - 1
-         if pad_length > 0:
-             padded_relative_embeddings = F.pad(
-                 relative_embeddings,
-                 commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
-         else:
-             padded_relative_embeddings = relative_embeddings
-         used_relative_embeddings = padded_relative_embeddings[:, slice_start_position:slice_end_position]
-         return used_relative_embeddings
-
-     def _relative_position_to_absolute_position(self, x):
-         """
-         x: [b, h, l, 2*l-1]
-         ret: [b, h, l, l]
-         """
-         batch, heads, length, _ = x.size()
-         # Concat columns of pad to shift from relative to absolute indexing.
-         x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
-         # Concat extra elements so as to add up to shape (len+1, 2*len-1).
-         x_flat = x.view([batch, heads, length * 2 * length])
-         x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]))
-
-         # Reshape and slice out the padded elements.
-         x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[:, :, :length, length - 1:]
-         return x_final
-
-     def _absolute_position_to_relative_position(self, x):
-         """
-         x: [b, h, l, l]
-         ret: [b, h, l, 2*l-1]
-         """
-         batch, heads, length, _ = x.size()
-         # pad along column
-         x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]))
-         x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
-         # add 0's at the beginning that will skew the elements after reshape
-         x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
-         x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
-         return x_final
-
-     def _attention_bias_proximal(self, length):
-         """Bias for self-attention to encourage attention to close positions.
-         Args:
-             length: an integer scalar.
-         Returns:
-             a Tensor with shape [1, 1, length, length]
-         """
-         r = torch.arange(length, dtype=torch.float32)
-         diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
-         return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
- class FFN(nn.Module):
-     def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
-         super().__init__()
-         self.in_channels = in_channels
-         self.out_channels = out_channels
-         self.filter_channels = filter_channels
-         self.kernel_size = kernel_size
-         self.p_dropout = p_dropout
-         self.activation = activation
-         self.causal = causal
-
-         if causal:
-             self.padding = self._causal_padding
-         else:
-             self.padding = self._same_padding
-
-         self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
-         self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
-         self.drop = nn.Dropout(p_dropout)
-
-     def forward(self, x, x_mask):
-         x = self.conv_1(self.padding(x * x_mask))
-         if self.activation == "gelu":
-             x = x * torch.sigmoid(1.702 * x)
-         else:
-             x = torch.relu(x)
-         x = self.drop(x)
-         x = self.conv_2(self.padding(x * x_mask))
-         return x * x_mask
-
-     def _causal_padding(self, x):
-         if self.kernel_size == 1:
-             return x
-         pad_l = self.kernel_size - 1
-         pad_r = 0
-         padding = [[0, 0], [0, 0], [pad_l, pad_r]]
-         x = F.pad(x, commons.convert_pad_shape(padding))
-         return x
-
-     def _same_padding(self, x):
-         if self.kernel_size == 1:
-             return x
-         pad_l = (self.kernel_size - 1) // 2
-         pad_r = self.kernel_size // 2
-         padding = [[0, 0], [0, 0], [pad_l, pad_r]]
-         x = F.pad(x, commons.convert_pad_shape(padding))
-         return x
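
The two padding helpers at the end of `FFN` differ only in where the zeros go. A small sketch of the (left, right) pad amounts they compute for a 1-D conv of size `kernel_size` (the function name here is illustrative, not part of the module):

```python
def pad_amounts(kernel_size, causal):
    """Return (left, right) zero-padding for a 1-D conv that preserves length."""
    if kernel_size == 1:
        return 0, 0
    if causal:
        # All padding on the left: position t never sees inputs after t.
        return kernel_size - 1, 0
    # "Same" padding: split the kernel_size - 1 zeros as evenly as possible.
    return (kernel_size - 1) // 2, kernel_size // 2

print(pad_amounts(5, causal=True))   # (4, 0)
print(pad_amounts(4, causal=False))  # (1, 2)
```

Both variants add `kernel_size - 1` zeros in total, so the conv output has the same length as its input; the causal variant simply shifts all of them to the left so the decoder's FFN cannot leak future frames.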
spaces/CikeyQI/meme-api/meme_generator/memes/love_you/__init__.py DELETED
@@ -1,26 +0,0 @@
- from pathlib import Path
- from typing import List
-
- from PIL.Image import Image as IMG
- from pil_utils import BuildImage
-
- from meme_generator import add_meme
- from meme_generator.utils import save_gif
-
- img_dir = Path(__file__).parent / "images"
-
-
- def love_you(images: List[BuildImage], texts, args):
-     img = images[0].convert("RGBA").square()
-     frames: List[IMG] = []
-     locs = [(68, 65, 70, 70), (63, 59, 80, 80)]
-     for i in range(2):
-         heart = BuildImage.open(img_dir / f"{i}.png")
-         frame = BuildImage.new("RGBA", heart.size, "white")
-         x, y, w, h = locs[i]
-         frame.paste(img.resize((w, h)), (x, y), alpha=True).paste(heart, alpha=True)
-         frames.append(frame.image)
-     return save_gif(frames, 0.2)
-
-
- add_meme("love_you", love_you, min_images=1, max_images=1, keywords=["永远爱你"])