parquet-converter committed
Commit a8e8663 · 1 Parent(s): faaec0e

Update parquet files (step 9 of 121)

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Clutch Blast Tyrant Rar REPACK Download.md +0 -28
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download TextNow Cracked APK for Free Is It Worth It?.md +0 -27
  3. spaces/1gistliPinn/ChatGPT4/Examples/Extra Speed Zte Ac2726 Modem Firmware Upgrade.md +0 -20
  4. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Discover the Best Free Video Games for Every Genre and Platform.md +0 -115
  5. spaces/1phancelerku/anime-remove-background/Free Download General Knowledge Quiz Book PDF with Answers and Explanations.md +0 -261
  6. spaces/7hao/bingo/src/lib/isomorphic/index.ts +0 -17
  7. spaces/A00001/bingothoo/src/components/welcome-screen.tsx +0 -34
  8. spaces/AIWaves/Software_Company/src/agents/Action/base_action.py +0 -49
  9. spaces/Abdulkader/HumanMotionsDetector/README.md +0 -13
  10. spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/midas/midas/midas_net.py +0 -76
  11. spaces/AgentVerse/agentVerse/agentverse/memory_manipulator/__init__.py +0 -9
  12. spaces/AgentVerse/agentVerse/scripts/evaluate_commongen.py +0 -53
  13. spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/text/japanese.py +0 -153
  14. spaces/AlanMars/QYL-AI-Space/run_Linux.sh +0 -31
  15. spaces/Andy1621/uniformer_image_detection/configs/empirical_attention/faster_rcnn_r50_fpn_attention_1111_dcn_1x_coco.py +0 -16
  16. spaces/Andy1621/uniformer_image_detection/configs/fpg/faster_rcnn_r50_fpg_crop640_50e_coco.py +0 -48
  17. spaces/Andy1621/uniformer_image_detection/configs/legacy_1.x/README.md +0 -53
  18. spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_ssd512_coco.py +0 -8
  19. spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x512_160k_ade20k.py +0 -5
  20. spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/dataloader/data_loader.py +0 -238
  21. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/wrappers.py +0 -180
  22. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/__init__.py +0 -3
  23. spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/sampling_util.py +0 -22
  24. spaces/AquaSuisei/ChatGPTXE/modules/llama_func.py +0 -137
  25. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/hooks.py +0 -33
  26. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/filepost.py +0 -98
  27. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/custom_roi_heads.py +0 -185
  28. spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/summarize/+server.ts +0 -83
  29. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/latex.py +0 -521
  30. spaces/BigSalmon/GPT2_Most_Probable/app.py +0 -273
  31. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/datasets/pascal_voc.py +0 -80
  32. spaces/CVPR/LIVE/thrust/thrust/detail/malloc_and_free.h +0 -85
  33. spaces/CVPR/LIVE/thrust/thrust/iterator/reverse_iterator.h +0 -238
  34. spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/reduce.h +0 -59
  35. spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/binary_search.h +0 -157
  36. spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/equal.h +0 -22
  37. spaces/CVPR/ml-talking-face/translator/module.py +0 -59
  38. spaces/CarlDennis/Lovelive-VITS-JPZH/text/japanese.py +0 -132
  39. spaces/CarperAI/pile-v2-eda/load_dataset.py +0 -22
  40. spaces/DataDreamweavers/LegaWeaver/app.py +0 -118
  41. spaces/Dinoking/Guccio-AI-Designer/netdissect/workerpool.py +0 -158
  42. spaces/DragGan/DragGan-Inversion/torch_utils/ops/upfirdn2d.h +0 -59
  43. spaces/ECCV2022/ECCV2022_papers/paper_list.py +0 -129
  44. spaces/Eieichicken/yyayyaya/README.md +0 -10
  45. spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/models/__init__.py +0 -10
  46. spaces/Enigma007/Classifier-Fasttext/README.md +0 -13
  47. spaces/EtTKSf/uu/README.md +0 -10
  48. spaces/EuroPython2022/automatic-speech-recognition-with-next-gen-kaldi/README.md +0 -13
  49. spaces/EuroPython2022/automatic-speech-recognition-with-next-gen-kaldi/examples.py +0 -256
  50. spaces/Everymans-ai/GPT-knowledge-management/README.md +0 -15
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Clutch Blast Tyrant Rar REPACK Download.md DELETED
@@ -1,28 +0,0 @@
1
- <br />
2
- <h1>How to Download Clutch Blast Tyrant Rar File for Free</h1>
3
- <p>Clutch Blast Tyrant is the sixth studio album by the American rock band Clutch, released in 2004. The album features a mix of hard rock, blues, funk and metal elements, and is considered one of the band's best works by fans and critics alike.</p>
4
- <p>If you are looking for a way to download Clutch Blast Tyrant rar file for free, you have come to the right place. In this article, we will show you how to use a torrent client to get the album in high quality and without any viruses or malware.</p>
5
- <h2>Clutch Blast Tyrant Rar Download</h2><br /><p><b><b>Download File</b> &#10084; <a href="https://byltly.com/2uKuZW">https://byltly.com/2uKuZW</a></b></p><br /><br />
6
- <h2>Step 1: Download and Install a Torrent Client</h2>
7
- <p>A torrent client is a software that allows you to download files from other users who are sharing them on the internet. There are many torrent clients available, but we recommend using uTorrent, which is one of the most popular and reliable ones. You can download uTorrent from <a href="https://www.utorrent.com/">here</a>.</p>
8
- <p>Once you have downloaded the uTorrent installer, run it and follow the instructions to install it on your computer. You may need to allow uTorrent to access your network in your firewall settings.</p>
9
- <h2>Step 2: Find and Download the Clutch Blast Tyrant Rar File</h2>
10
- <p>Now that you have uTorrent installed, you need to find the Clutch Blast Tyrant rar file that you want to download. You can use a torrent search engine like <a href="https://thepiratebay.org/">The Pirate Bay</a> or <a href="https://1337x.to/">1337x</a> to look for it. Just type "Clutch Blast Tyrant rar" in the search box and hit enter.</p>
11
- <p>You will see a list of results with different file sizes and qualities. Choose the one that has the most seeders and leechers, as this indicates that the file is more likely to be complete and fast to download. Click on the magnet link or the download button next to the result that you want.</p>
12
- <p>This will open uTorrent and start downloading the file. You can see the progress and speed of the download in uTorrent's interface. Depending on your internet connection and the availability of other users, it may take some time to finish.</p>
13
- <h2>Step 3: Extract and Enjoy the Album</h2>
14
- <p>Once the download is complete, you will have a Clutch Blast Tyrant rar file in your uTorrent downloads folder. You will need a software like WinRAR or 7-Zip to extract it. You can download WinRAR from <a href="https://www.win-rar.com/">here</a> or 7-Zip from <a href="https://www.7-zip.org/">here</a>.</p>
15
- <p>After installing one of these software, right-click on the Clutch Blast Tyrant rar file and choose "Extract Here" or "Extract to Clutch Blast Tyrant". This will create a folder with the same name as the rar file, containing all the tracks of the album in mp3 format.</p>
16
- <p></p>
17
- <p>You can now play the album with your favorite music player or transfer it to your mobile device. Enjoy!</p>
18
-
19
- <h2>Why Download Clutch Blast Tyrant Rar File?</h2>
20
- <p>You may be wondering why you should download Clutch Blast Tyrant rar file instead of buying the album from a legal source. There are several reasons why some people prefer to download music for free from torrent sites.</p>
21
- <p>One reason is that they may not have enough money to afford the album or they may live in a country where the album is not available or too expensive. Another reason is that they may want to try the album before buying it or they may not like all the tracks and only want to keep some of them. A third reason is that they may want to have a backup copy of the album in case they lose or damage their original one.</p>
22
- <p>Whatever your reason is, downloading Clutch Blast Tyrant rar file from torrent sites is a risky and illegal activity. You may expose your computer to viruses or malware, you may face legal consequences if you get caught by the authorities, and you may harm the artists and the music industry by not supporting their work. Therefore, we do not encourage or endorse downloading music for free from torrent sites. We only provide this information for educational purposes and we advise you to buy the album from a legal source if you like it.</p>
23
- <h2>What is Clutch Blast Tyrant About?</h2>
24
- <p>Clutch Blast Tyrant is an album that showcases the band's diverse musical influences and styles. The album has 15 tracks, each with a different theme and mood. Some of the tracks are inspired by science fiction, mythology, history, politics, and religion. The album also features guest appearances by members of other bands such as Mastodon, Fu Manchu, and Sixty Watt Shaman.</p>
25
- <p>The album's title refers to a fictional character named Blast Tyrant, who is a tyrannical ruler of a futuristic world. The album's cover art depicts Blast Tyrant riding a giant worm in a desert landscape. The album's lyrics explore various aspects of Blast Tyrant's personality and actions, as well as the resistance and rebellion against him.</p>
26
- <p>The album received critical acclaim from music critics and fans alike. It was praised for its originality, creativity, energy, and musicianship. It was also ranked as one of the best albums of 2004 by several publications and websites. The album sold over 100,000 copies in the United States and was certified gold by the RIAA.</p> 81aa517590<br />
27
- <br />
28
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download TextNow Cracked APK for Free Is It Worth It?.md DELETED
@@ -1,27 +0,0 @@
1
- <br />
2
- <h1>TextNow Cracked APK Download: How to Get Unlimited Free Calls and Texts</h1>
3
- <p>TextNow is a popular app that allows you to make free calls and texts to any number in the US and Canada. However, the free version of the app has some limitations, such as ads, low-quality calls, and limited features. If you want to enjoy the full benefits of TextNow without paying for a premium subscription, you might be tempted to download a cracked APK of the app. A cracked APK is a modified version of the app that bypasses the security and verification systems of the original app. However, downloading a cracked APK of TextNow is not only illegal, but also risky. In this article, we will explain why you should avoid TextNow cracked APK download and what are some safer alternatives.</p>
4
-
5
- <h2>Why You Should Avoid TextNow Cracked APK Download</h2>
6
- <p>Downloading a cracked APK of TextNow may seem like a good way to save money and enjoy unlimited free calls and texts, but there are many drawbacks and dangers associated with it. Here are some of them:</p>
7
- <h2>textnow cracked apk download</h2><br /><p><b><b>Download</b> &#128279; <a href="https://byltly.com/2uKv5d">https://byltly.com/2uKv5d</a></b></p><br /><br />
8
- <ul>
9
- <li><b>It's illegal.</b> Downloading a cracked APK of TextNow is a form of software piracy, which is a criminal offense in many countries. You could face legal consequences such as fines or even jail time if you are caught downloading or distributing a cracked APK of TextNow.</li>
10
- <li><b>It's unethical.</b> Downloading a cracked APK of TextNow is a form of stealing from the developers and publishers who invested time, money and effort to create and distribute the app. By downloading a cracked APK of TextNow, you are depriving them of their rightful income and discouraging them from making more updates or improvements to the app.</li>
11
- <li><b>It's unsafe.</b> Downloading a cracked APK of TextNow often comes from shady sources that may contain malware, viruses or spyware that can harm your device or compromise your personal information. You may also expose yourself to cyberattacks or identity theft by visiting untrustworthy websites or using peer-to-peer networks to download a cracked APK of TextNow.</li>
12
- <li><b>It's unreliable.</b> Downloading a cracked APK of TextNow may not work properly or at all on your device, as it may be incompatible with your system or missing important files or updates. You may also experience bugs, glitches, crashes or errors that can affect your user experience or functionality of the app.</li>
13
- <li><b>It's unsatisfying.</b> Downloading a cracked APK of TextNow may not give you the same level of quality or satisfaction as using the original app. You may encounter ads, low-quality calls, limited features or poor customer service that can ruin your enjoyment or satisfaction of the app.</li>
14
- </ul>
15
-
16
- <h2>What Are Some Safer Alternatives</h2>
17
- <p>If you want to use TextNow without breaking the law or risking your device or personal information, there are some safer alternatives that you can try. Here are some of them:</p>
18
- <ul>
19
- <li><b>Free version.</b> The free version of TextNow is still available for download from the official website or the Google Play Store. The free version allows you to make free calls and texts to any number in the US and Canada, as well as enjoy some features such as voicemail, call forwarding, group chat and emojis. However, the free version also has some limitations such as ads, low-quality calls and limited features.</li>
20
- <li><b>Premium subscription.</b> The premium subscription of TextNow is available for a monthly or annual fee that varies depending on your plan and location. The premium subscription allows you to enjoy all the benefits of TextNow without any limitations or ads. You can also access additional features such as high-quality calls, international calling credits, custom phone number, call recorder and more.</li>
21
- <li><b>Other apps.</b> There are other apps that offer similar services as TextNow that you can download and use legally and safely. These apps allow you to make free or cheap calls and texts to any number in the world using your internet connection. Some examples of these apps are WhatsApp, Skype, Viber and Telegram.</li>
22
- </ul>
23
-
24
- <h2>Conclusion</h2>
25
- <p>In conclusion, downloading a cracked APK of TextNow is not worth it, as it is illegal, unethical, unsafe,</p> ddb901b051<br />
26
- <br />
27
- <br />
 
spaces/1gistliPinn/ChatGPT4/Examples/Extra Speed Zte Ac2726 Modem Firmware Upgrade.md DELETED
@@ -1,20 +0,0 @@
1
- <h2>Extra Speed Zte Ac2726 Modem Firmware Upgrade</h2><br /><p><b><b>Download Zip</b> &middot;&middot;&middot;&middot;&middot; <a href="https://imgfil.com/2uy1G4">https://imgfil.com/2uy1G4</a></b></p><br /><br />
2
-
3
- The release was delayed due to unknown reasons. Various sources stated that the film would hit the screens in the fourth week of January 2020, however, a poster was released on New Year's Eve, making fans assume that the release date would be January 6, 2020, or sooner.
4
-
5
- In June 2019, the film began production with director Kong Ramesh. The film was officially launched in February 2020. The trailer of the film was launched in February 2020.
6
-
7
- It is an unofficial translation of original Malayalam into Kannada.
8
-
9
- Fell in love with bangalore girl on the websiteLoveSagar.com now...Video downloader with support for many video download sites. 1, free and simple. With Videodownloader you will be able to download any video from free to premium without any limits. 【How to find a common definition or translation of a word?】 A Dictionnaire composed of both official and recent meanings of words.   【All the English words that can be translated into a Kannada or Telugu equivalent.]
10
-
11
- Amdavad a new instalment of Sangli talkies new film series which was released on 19 August 2018 and the film was a hit in the box office. While the music and lyrics of the film were in Telugu, the dialogues were in English. The audio was composed by P. Sreenivasa Rao and its background score was composed by Hamsalekha.
12
-
13
- Here in this post, we bring the best songs of the film in the best possible way. 10) Amaraa (Duet) – Singers : Sriprabha, Vemali Venkata RaoLyrics : Vemali Venkata RaoVideo: Singers: Sriprabha, Vemali Venkata Rao–
14
-
15
- The first music album of Dhananjay Rajendra Janga Rao, the new director of the film. The first song in the movie was composed by G. K. Venkatesh and the song was sung by T. M. Soundararajan and Vani Jairam in the roles of “Kodagu” and “Mukunda,” respectively. The song is called “Mamaththu” which means “child” in Telugu. It is an adaptation of the Malayalam song “Kanne Enkku Enne.”
16
-
17
- Kamakshi Chandra, 4fefd39f24<br />
18
- <br />
19
- <br />
20
- <p></p>
 
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Discover the Best Free Video Games for Every Genre and Platform.md DELETED
@@ -1,115 +0,0 @@
1
-
2
- <h1>Free Download Video Game: How to Find and Play the Best Games for Free</h1>
3
- <p>If you love playing video games, but don't want to spend a lot of money on them, you might be interested in free download video games. These are games that you can download and play on your PC or mobile device without paying anything. Sounds too good to be true, right? Well, not really. There are actually many free download video games available online, and some of them are very high-quality and fun. In this article, we will explain what free download video games are, why you should play them, and how to find them. We will also recommend some of the best free download video games for PC that you can try right now.</p>
4
- <h2>free download video game</h2><br /><p><b><b>Download File</b> &#10038;&#10038;&#10038; <a href="https://urlin.us/2uSYdQ">https://urlin.us/2uSYdQ</a></b></p><br /><br />
5
- <h2>Introduction</h2>
6
- <h3>What are free download video games?</h3>
7
- <p>Free download video games are games that you can download from the internet and play on your computer or mobile device without paying anything. They are also known as freeware, free-to-play, or F2P games. Some of these games are completely free, meaning that they don't have any in-game purchases or ads. Others are partially free, meaning that they have optional in-game purchases or ads that you can choose to ignore or pay for. Some examples of free download video games are Asphalt 9: Legends, Roblox, and FIFA Mobile.</p>
8
- <h3>Why play free download video games?</h3>
9
- <p>There are many reasons why you might want to play free download video games. Here are some of them:</p>
10
- <ul>
11
- <li>You can save money. Playing free download video games means that you don't have to spend any money on buying or renting games. You can enjoy hours of entertainment without breaking the bank.</li>
12
- <li>You can discover new genres and titles. Playing free download video games gives you the opportunity to try out different types of games that you might not normally play. You might find a new favorite game or genre that you didn't know existed.</li>
13
- <li>You can play anytime and anywhere. Playing free download video games means that you can play them whenever and wherever you want, as long as you have an internet connection and a compatible device. You don't have to worry about discs, cartridges, or consoles.</li>
14
- </ul>
15
- <h3>How to find free download video games?</h3>
16
- <p>There are many ways to find free download video games online. Here are some of them:</p>
17
- <ul>
18
- <li>You can use search engines. You can simply type in "free download video game" or something similar in your favorite search engine and see what comes up. You might find some websites that offer free download video games or links to other websites that do.</li>
19
- <li>You can use online platforms or stores. You can visit online platforms or stores that specialize in offering free download video games or have a section for them. Some examples are Microsoft Store, EA Play, Steam, Epic Games Store, Google Play Store, and Apple App Store.</li>
20
- <li>You can use online reviews or recommendations. You can read online reviews or recommendations from other gamers who have played free download video games and see what they think. You might find some useful tips, ratings, feedback, or suggestions for free download video games that you might like.</li>
21
- </ul>
22
- <h2>Best Free Download Video Games for PC</h2>
23
- <h3>Microsoft Store</h3>
24
- <p>One of the best places to find free download video games for PC is Microsoft Store. This is an online platform that offers a variety of apps, games, movies, TV shows, and more for Windows 10 devices. You can browse through different categories and genres of games and find some great free download video games for PC. Here are some of the best ones:</p>
25
- <h4>Asphalt 9: Legends</h4>
26
- <p>Asphalt 9: Legends is a free download video game for PC that lets you experience the thrill of racing in some of the most amazing cars in the world. You can choose from over 50 models of cars from famous brands like Ferrari, Lamborghini, Porsche, and more. You can customize your car with different colors, decals, and parts. You can also compete with other players online or offline in various modes and events. You can perform stunning stunts and drifts on realistic tracks and environments. Asphalt 9: Legends is a fast-paced and adrenaline-pumping game that will keep you on the edge of your seat.</p>
27
- <h4>Roblox</h4>
28
- <p>Roblox is a free download video game for PC that lets you create and play millions of games with other players around the world. Roblox is a platform where you can unleash your imagination and creativity. You can use the Roblox Studio to design your own games, or play games made by other users. You can explore different genres and themes of games, such as adventure, role-playing, simulation, horror, and more. You can also chat and socialize with other players, join groups, and earn virtual currency. Roblox is a game that will never get boring, as there is always something new and exciting to discover.</p>
29
- <p>free download video game for pc<br />
30
- free download video game for android<br />
31
- free download video game for windows 10<br />
32
- free download video game for mac<br />
33
- free download video game for ps4<br />
34
- free download video game for xbox one<br />
35
- free download video game for switch<br />
36
- free download video game for ios<br />
37
- free download video game for linux<br />
38
- free download video game for steam<br />
39
- free download video game apk<br />
40
- free download video game iso<br />
41
- free download video game roms<br />
42
- free download video game emulator<br />
43
- free download video game crack<br />
44
- free download video game full version<br />
45
- free download video game offline<br />
46
- free download video game online<br />
47
- free download video game multiplayer<br />
48
- free download video game single player<br />
49
- free download video game action<br />
50
- free download video game adventure<br />
51
- free download video game racing<br />
52
- free download video game sports<br />
53
- free download video game simulation<br />
54
- free download video game strategy<br />
55
- free download video game puzzle<br />
56
- free download video game rpg<br />
57
- free download video game fps<br />
58
- free download video game horror<br />
59
- free download video game minecraft<br />
60
- free download video game roblox<br />
61
- free download video game fortnite<br />
62
- free download video game pubg<br />
63
- free download video game gta 5<br />
64
- free download video game fifa 21<br />
65
- free download video game call of duty<br />
66
- free download video game halo infinite<br />
67
- free download video game apex legends<br />
68
- free download video game destiny 2<br />
69
- free download video game world of warcraft<br />
70
- free download video game league of legends<br />
71
- free download video game dota 2<br />
72
- free download video game valorant<br />
73
- free download video game among us<br />
74
- free download video game candy crush saga</p>
75
- <h4>Forza Horizon 4 Demo</h4>
76
- <p>Forza Horizon 4 Demo is a free download video game for PC that lets you sample the full version of Forza Horizon 4, one of the best racing games ever made. Forza Horizon 4 is a game that lets you experience the beauty and diversity of Britain in an open-world setting. You can drive over 450 cars from over 100 manufacturers, each with realistic details and performance. You can also enjoy dynamic seasons that change the weather and scenery of the game. You can race solo or with other players online in various modes and events. Forza Horizon 4 Demo is a game that will make you fall in love with driving.</p>
77
- <h3>EA Play</h3>
78
- <p>Another great place to find free download video games for PC is EA Play. This is an online platform that offers access to a collection of games from Electronic Arts, one of the biggest names in the gaming industry. You can play some of the most popular and acclaimed games from EA for free, or subscribe to EA Play Pro for unlimited access to all EA games and exclusive benefits. Here are some of the best free download video games for PC from EA Play:</p>
79
- <h4>FIFA Mobile</h4>
80
- <p>FIFA Mobile is a free download video game for PC that lets you play soccer with your favorite teams and players from around the world. FIFA Mobile is a game that brings you the authentic and immersive experience of soccer on your PC. You can build your own team, customize your players, and compete in various modes and events. You can also join leagues, play with friends, and challenge other players online. FIFA Mobile is a game that will make you feel the passion and excitement of soccer.</p>
81
- <h4>Star Wars: Galaxy of Heroes</h4>
82
- <p>Star Wars: Galaxy of Heroes is a free download video game for PC that lets you collect and battle with iconic characters from the Star Wars universe. Star Wars: Galaxy of Heroes is a game that lets you create your own dream team of heroes and villains from different eras and factions of Star Wars. You can upgrade your characters, equip them with powerful gear, and unleash their unique abilities. You can also fight in epic battles across various locations and scenarios from Star Wars. Star Wars: Galaxy of Heroes is a game that will make you feel the force.</p>
83
- <h4>Apex Legends</h4>
84
- <p>Apex Legends is a free download video game for PC that lets you compete in a battle royale with other players online. Apex Legends is a game that combines elements of shooter, strategy, and survival genres in a fast-paced and thrilling gameplay. You can choose from different characters, each with their own skills and personalities. You can also team up with other players, communicate with them, and coordinate your actions. You can also loot weapons, items, and resources from the map, and use them to your advantage. Apex Legends is a game that will test your skills and nerves.</p>
85
- <h2>Conclusion</h2>
86
- <h3>Summary of the main points</h3>
87
- <p>In conclusion, free download video games are games that you can download and play on your PC or mobile device without paying anything. They are a great way to save money, discover new games, and play anytime and anywhere. There are many ways to find free download video games online, such as using search engines, online platforms or stores, or online reviews or recommendations. Some of the best free download video games for PC are Asphalt 9: Legends, Roblox, Forza Horizon 4 Demo, FIFA Mobile, Star Wars: Galaxy of Heroes, and Apex Legends. If you are looking for some free download video games to play on your PC, you should definitely check them out.</p>
88
- <h3>Call to action</h3>
89
- <p>Now that you know how to find and play the best free download video games for PC, what are you waiting for? Go ahead and download some of them and have fun. You won't regret it. And if you enjoyed this article, please share it with your friends and family who might also be interested in free download video games. Thank you for reading.</p>
90
- <h2>FAQs</h2>
91
- <p>Here are some of the most frequently asked questions about free download video games:</p>
92
- <ul>
93
- <li><b>Are free download video games safe?</b></li>
94
- <p>Most free download video games are safe, as long as you download them from reputable sources and scan them for viruses or malware. However, some free download video games might contain harmful or unwanted software, such as spyware, adware, or ransomware. Therefore, you should always be careful and cautious when downloading free download video games, and avoid clicking on suspicious links or pop-ups.</p>
95
- <li><b>Are free download video games legal?</b></li>
96
- <p>Most free download video games are legal, as long as they are authorized by the developers or publishers to be distributed for free. However, some free download video games might be illegal, such as pirated or cracked versions of paid games. Therefore, you should always respect the intellectual property rights of the creators and owners of the games, and avoid downloading or playing illegal free download video games.</p>
97
- <li><b>Are free download video games good?</b></li>
98
- <p>Most free download video games are good, as they offer high-quality graphics, sound, gameplay, and story. Some free download video games are even better than some paid games, as they have more features, content, and updates. However, some free download video games might be bad, such as low-quality, buggy, or boring games. Therefore, you should always read reviews or ratings of the games before downloading or playing them, and choose the ones that suit your preferences and expectations.</p>
99
- <li><b>How do free download video games make money?</b></li>
100
- <p>Most free download video games make money by using one or more of the following methods:</p>
101
- <ul>
102
- <li>In-game purchases: These are optional purchases that players can make within the game to unlock or enhance certain features, items, or abilities. For example, players can buy coins, gems, skins, weapons, or power-ups.</li>
103
- <li>Ads: These are advertisements that appear within the game or on the screen while playing the game. For example, players can see banners, pop-ups, videos, or interstitials.</li>
104
- <li>Sponsorships: These are partnerships that the game developers or publishers have with other companies or brands to promote their products or services within the game. For example, players can see logos, banners, or characters related to the sponsors.</li>
105
- <li>Donations: These are voluntary contributions that players can make to support the game developers or publishers. For example, players can donate money, items, or feedback.</li>
106
- </ul>
107
- <li><b>How do I uninstall free download video games?</b></li>
108
- <p>To uninstall free download video games from your PC, you can follow these steps:</p>
109
- <ol>
110
- <li>Go to the Control Panel on your PC and click on Programs and Features.</li>
111
- <li>Find the game that you want to uninstall from the list of installed programs and click on Uninstall.</li>
112
- <li>Follow the instructions on the screen to complete the uninstallation process.</li>
113
- </ol></p> 197e85843d<br />
114
- <br />
115
- <br />
 
spaces/1phancelerku/anime-remove-background/Free Download General Knowledge Quiz Book PDF with Answers and Explanations.md DELETED
@@ -1,261 +0,0 @@
1
-
2
- <h1>General Knowledge Quiz Book PDF Free Download</h1>
3
- <p>Do you love to challenge yourself with trivia questions? Do you want to improve your general knowledge and learn something new every day? Do you enjoy reading books that are both entertaining and educational? If you answered yes to any of these questions, then this article is for you.</p>
4
- <h2>general knowledge quiz book pdf free download</h2><br /><p><b><b>Download File</b> &#9889; <a href="https://jinyurl.com/2uNOOH">https://jinyurl.com/2uNOOH</a></b></p><br /><br />
5
- <p>In this article, we will introduce you to the concept of general knowledge, its importance, and its benefits. We will also show you how to download general knowledge quiz book PDF for free from various sources. Finally, we will review the top 10 general knowledge quiz books that you can read online or offline, and compare their features, pros, and cons.</p>
6
- <h2>Introduction</h2>
7
- <h3>Why is general knowledge important?</h3>
8
- <p>General knowledge is the information that covers a wide range of topics, such as history, geography, science, arts, culture, sports, current affairs, and more. It is the knowledge that is not specialized or limited to a specific field or domain. It is the knowledge that is common to everyone or most people.</p>
9
- <p>General knowledge is important for many reasons. Some of them are:</p>
10
- <ul>
11
- <li>It enhances your cognitive abilities, such as memory, reasoning, problem-solving, and creativity.</li>
12
- <li>It broadens your perspective and helps you understand different cultures, viewpoints, and phenomena.</li>
13
- <li>It improves your communication skills and makes you more confident and articulate in expressing your opinions and arguments.</li>
14
- <li>It boosts your academic performance and helps you excel in exams, tests, quizzes, and assignments.</li>
15
- <li>It increases your employability and career prospects, as employers value candidates who have a good grasp of general knowledge.</li>
16
- <li>It enriches your personal life and makes you more interesting and attractive to others.</li>
17
- </ul>
18
- <h3>What are the benefits of reading quiz books?</h3>
19
- <p>Quiz books are books that contain questions and answers on various topics of general knowledge. They are designed to test your knowledge, challenge your mind, and entertain you at the same time. Reading quiz books has many benefits, such as:</p>
20
- <ul>
21
- <li>It stimulates your brain and keeps it active and healthy.</li>
22
- <li>It expands your vocabulary and improves your language skills.</li>
23
- <li>It updates your information and keeps you abreast of the latest developments in the world.</li>
24
- <li>It develops your curiosity and encourages you to research more about the topics that interest you.</li>
25
- <li>It enhances your concentration and attention span.</li>
26
- <li>It reduces your stress and boredom and provides you with fun and relaxation.</li> <h3>How to download general knowledge quiz book PDF for free?</h3>
27
- <p>If you are looking for a way to download general knowledge quiz book PDF for free, you have several options to choose from. Some of them are:</p>
28
- <p>general knowledge trivia book pdf download<br />
29
- free gk quiz book pdf download<br />
30
- general awareness quiz book pdf free<br />
31
- gk questions and answers book pdf download<br />
32
- general knowledge test book pdf free<br />
33
- gk quiz questions book pdf download<br />
34
- general knowledge mcqs book pdf free<br />
35
- gk objective questions book pdf download<br />
36
- general knowledge facts book pdf free<br />
37
- gk current affairs book pdf download<br />
38
- general knowledge ebook pdf free download<br />
39
- gk quiz ebook pdf download<br />
40
- general knowledge questions book pdf free<br />
41
- gk multiple choice questions book pdf download<br />
42
- general knowledge puzzles book pdf free<br />
43
- gk riddles book pdf download<br />
44
- general knowledge games book pdf free<br />
45
- gk brain teasers book pdf download<br />
46
- general knowledge fun book pdf free<br />
47
- gk challenge book pdf download<br />
48
- general knowledge 2020 book pdf free download<br />
49
- gk 2020 quiz book pdf download<br />
50
- general knowledge 2021 book pdf free download<br />
51
- gk 2021 quiz book pdf download<br />
52
- general knowledge world book pdf free download<br />
53
- gk world quiz book pdf download<br />
54
- general knowledge india book pdf free download<br />
55
- gk india quiz book pdf download<br />
56
- general knowledge pakistan book pdf free download<br />
57
- gk pakistan quiz book pdf download<br />
58
- general knowledge history book pdf free download<br />
59
- gk history quiz book pdf download<br />
60
- general knowledge geography book pdf free download<br />
61
- gk geography quiz book pdf download<br />
62
- general knowledge science book pdf free download<br />
63
- gk science quiz book pdf download<br />
64
- general knowledge sports book pdf free download<br />
65
- gk sports quiz book pdf download<br />
66
- general knowledge entertainment book pdf free download<br />
67
- gk entertainment quiz book pdf download<br />
68
- general knowledge literature book pdf free download<br />
69
- gk literature quiz book pdf download<br />
70
- general knowledge art and culture book pdf free download<br />
71
- gk art and culture quiz book pdf download<br />
72
- general knowledge politics and economy book pdf free download<br />
73
- gk politics and economy quiz book pdf download<br />
74
- general knowledge religion and mythology book pdf free download<br />
75
- gk religion and mythology quiz book pdf download</p>
76
- <ul>
77
- <li>Use online platforms that offer free ebooks, such as Project Gutenberg, Open Library, Internet Archive, and ManyBooks. These platforms have a large collection of quiz books and other genres that you can download in various formats, including PDF.</li>
78
- <li>Use online tools that convert web pages or documents into PDF files, such as Web2PDF, PDF Converter, and Smallpdf. These tools allow you to enter the URL or upload the file of the quiz book you want to download, and then generate a PDF file that you can save or share.</li>
79
- <li>Use online sources that provide free PDF downloads of quiz books, such as PDF Drive, Free-Ebooks.net, and BookBoon. These sources have a wide range of quiz books and other categories that you can download for free or with a registration.</li>
80
- </ul>
81
- <p>However, before you download any quiz book PDF for free, make sure that you check the following:</p>
82
- <ul>
83
- <li>The quality and accuracy of the content. Some quiz books may have outdated, incorrect, or incomplete information that may mislead or confuse you.</li>
84
- <li>The legality and safety of the source. Some sources may violate the copyright or privacy of the authors or publishers, or may contain viruses or malware that may harm your device.</li>
85
- <li>The compatibility and accessibility of the format. Some PDF files may not be compatible with your device or software, or may not be easy to read or print.</li>
86
- </ul>
87
- <h2>Top 10 General Knowledge Quiz Books</h2>
88
- <p>Now that you know how to download general knowledge quiz book PDF for free, let's take a look at some of the best quiz books that you can read online or offline. Here are our top 10 picks, along with their overviews, features, pros, and cons.</p>
89
- <h3>The General Knowledge Quiz Book by Day Williams</h3>
90
- <h4>Overview</h4>
91
- <p>The General Knowledge Quiz Book by Day Williams is a comprehensive and challenging quiz book that covers various topics, such as history, geography, science, arts, culture, sports, current affairs, and more. It contains 500 questions and answers that are divided into 50 categories. It also includes fun facts and trivia that will enrich your knowledge and entertain you.</p>
92
- <h4>Features</h4>
93
- <ul>
94
- <li>500 questions and answers on various topics of general knowledge</li>
95
- <li>50 categories, such as animals, astronomy, celebrities, countries, inventions, movies, music, religion, etc.</li>
96
- <li>Fun facts and trivia that provide additional information and insights</li>
97
- <li>Easy to read and follow format</li>
98
- <li>Suitable for all ages and levels of difficulty</li>
99
- </ul>
100
- <h4>Pros and cons</h4>
101
- | Pros | Cons | | --- | --- | | It covers a wide range of topics and categories | It may not be updated with the latest information | | It provides fun facts and trivia that add value and interest | It may have some errors or typos | | It is easy to read and follow | It may not be available in some regions or formats | | It is suitable for all ages and levels of difficulty | It may not have enough questions or variety | <h3>The Ultimate General Knowledge Quiz Book by Terry Dolan</h3>
102
- <h4>Overview</h4>
103
- <p>The Ultimate General Knowledge Quiz Book by Terry Dolan is an extensive and engaging quiz book that covers various topics, such as history, geography, science, arts, culture, sports, current affairs, and more. It contains 1000 questions and answers that are divided into 100 quizzes. It also includes hints and tips that will help you improve your general knowledge and quiz skills.</p>
104
- <h4>Features</h4>
105
- <ul>
106
- <li>1000 questions and answers on various topics of general knowledge</li>
107
- <li>100 quizzes, each with 10 questions and answers</li>
108
- <li>Hints and tips that provide guidance and advice on how to answer the questions</li>
109
- <li>Clear and concise format</li>
110
- <li>Suitable for all ages and levels of difficulty</li>
111
- </ul>
112
- <h4>Pros and cons</h4>
113
- | Pros | Cons | | --- | --- | | It covers a wide range of topics and quizzes | It may not be updated with the latest information | | It provides hints and tips that enhance your general knowledge and quiz skills | It may have some errors or typos | | It is clear and concise | It may not be available in some regions or formats | | <h3>The Great Book of General Knowledge Trivia by Bill O'Neill</h3>
114
- <h4>Overview</h4>
115
- <p>The Great Book of General Knowledge Trivia by Bill O'Neill is a fun and fascinating quiz book that covers various topics, such as history, geography, science, arts, culture, sports, current affairs, and more. It contains 200 questions and answers that are divided into 20 quizzes. It also includes interesting facts and stories that will amaze and entertain you.</p>
116
- <h4>Features</h4>
117
- <ul>
118
- <li>200 questions and answers on various topics of general knowledge</li>
119
- <li>20 quizzes, each with 10 questions and answers</li>
120
- <li>Interesting facts and stories that provide background and context to the questions</li>
121
- <li>Colorful and attractive format</li>
122
- <li>Suitable for all ages and levels of difficulty</li>
123
- </ul>
124
- <h4>Pros and cons</h4>
125
- | Pros | Cons | | --- | --- | | It covers a wide range of topics and quizzes | It may not be updated with the latest information | | It provides interesting facts and stories that make the quiz more enjoyable | It may have some errors or typos | | It is colorful and attractive | It may not be available in some regions or formats | | It is suitable for all ages and levels of difficulty | It may not have enough questions or variety | <h3>The Mega General Knowledge Quiz Book by Jenny Kellett</h3>
126
- <h4>Overview</h4>
127
- <p>The Mega General Knowledge Quiz Book by Jenny Kellett is a massive and comprehensive quiz book that covers various topics, such as history, geography, science, arts, culture, sports, current affairs, and more. It contains 5000 questions and answers that are divided into 500 quizzes. It also includes a scoring system that will help you track your progress and performance.</p>
128
- <h4>Features</h4>
129
- <ul>
130
- <li>5000 questions and answers on various topics of general knowledge</li>
131
- <li>500 quizzes, each with 10 questions and answers</li>
132
- <li>A scoring system that provides feedback and motivation</li>
133
- <li>Simple and straightforward format</li>
134
- <li>Suitable for all ages and levels of difficulty</li>
135
- </ul>
136
- <h4>Pros and cons</h4>
137
- | Pros | Cons | | --- | --- | | It covers a wide range of topics and quizzes | It may not be updated with the latest information | | It provides a scoring system that enhances your general knowledge and quiz skills | It may have some errors or typos | | It is simple and straightforward | It may not be available in some regions or formats | | It is suitable for all ages and levels of difficulty | It may be too long or repetitive for some readers | <h3>The General Knowledge Pub Quiz Book by Geoff Tibballs</h3>
138
- <h4>Overview</h4>
139
- <p>The General Knowledge Pub Quiz Book by Geoff Tibballs is a fun and entertaining quiz book that covers various topics, such as history, geography, science, arts, culture, sports, current affairs, and more. It contains 1000 questions and answers that are divided into 100 quizzes. It also includes a mix of easy, medium, and hard questions that will suit different tastes and abilities.</p>
140
- <h4>Features</h4>
141
- <ul>
142
- <li>1000 questions and answers on various topics of general knowledge</li>
143
- <li>100 quizzes, each with 10 questions and answers</li>
144
- <li>A mix of easy, medium, and hard questions that provide variety and challenge</li>
145
- <li>Funny and witty format</li>
146
- <li>Suitable for all ages and levels of difficulty</li>
147
- </ul>
148
- <h4>Pros and cons</h4>
149
- | Pros | Cons | | --- | --- | | It covers a wide range of topics and quizzes | It may not be updated with the latest information | | It provides a mix of easy, medium, and hard questions that cater to different tastes and abilities | It may have some errors or typos | | It is funny and witty | It may not be available in some regions or formats | | It is suitable for all ages and levels of difficulty | It may not have enough facts or trivia to accompany the questions | <h3>The Big Book of Random Facts by Bill O'Neill</h3>
150
- <h4>Overview</h4>
151
- <p>The Big Book of Random Facts by Bill O'Neill is a fun and interesting quiz book that covers various topics, such as history, geography, science, arts, culture, sports, current affairs, and more. It contains 1000 questions and answers that are divided into 100 quizzes. It also includes random facts and trivia that will surprise and amuse you.</p>
152
- <h4>Features</h4>
153
- <ul>
154
- <li>1000 questions and answers on various topics of general knowledge</li>
155
- <li>100 quizzes, each with 10 questions and answers</li>
156
- <li>Random facts and trivia that provide fun and entertainment</li>
157
- <li>Creative and catchy format</li>
158
- <li>Suitable for all ages and levels of difficulty</li>
159
- </ul>
160
- <h4>Pros and cons</h4>
161
- | Pros | Cons | | --- | --- | | It covers a wide range of topics and quizzes | It may not be updated with the latest information | | It provides random facts and trivia that make the quiz more enjoyable | It may have some errors or typos | | It is creative and catchy | It may not be available in some regions or formats | | It is suitable for all ages and levels of difficulty | It may not have enough questions or variety | <h3>The World's Greatest General Knowledge Quiz Book by Chris Cowlin</h3>
162
- <h4>Overview</h4>
163
- <p>The World's Greatest General Knowledge Quiz Book by Chris Cowlin is a challenging and impressive quiz book that covers various topics, such as history, geography, science, arts, culture, sports, current affairs, and more. It contains 5000 questions and answers that are divided into 200 quizzes. It also includes a table that shows the world records for the most questions answered correctly in a quiz.</p>
164
- <h4>Features</h4>
165
- <ul>
166
- <li>5000 questions and answers on various topics of general knowledge</li>
167
- <li>200 quizzes, each with 25 questions and answers</li>
168
- <li>A table that shows the world records for the most questions answered correctly in a quiz</li>
169
- <li>Professional and elegant format</li>
170
- <li>Suitable for all ages and levels of difficulty</li>
171
- </ul>
172
- <h4>Pros and cons</h4>
173
- | Pros | Cons | | --- | --- | | It covers a wide range of topics and quizzes | It may not be updated with the latest information | | It provides a table that shows the world records for the most questions answered correctly in a quiz | It may have some errors or typos | | It is professional and elegant | It may not be available in some regions or formats | | It is suitable for all ages and levels of difficulty | It may be too difficult or intimidating for some readers | <h3>The Ultimate Trivia Challenge by Editors of Reader's Digest</h3>
174
- <h4>Overview</h4>
175
- <p>The Ultimate Trivia Challenge by Editors of Reader's Digest is a fun and informative quiz book that covers various topics, such as history, geography, science, arts, culture, sports, current affairs, and more. It contains 3000 questions and answers that are divided into 300 quizzes. It also includes illustrations and diagrams that will help you visualize the questions and answers.</p>
176
- <h4>Features</h4>
177
- <ul>
178
- <li>3000 questions and answers on various topics of general knowledge</li>
179
- <li>300 quizzes, each with 10 questions and answers</li>
180
- <li>Illustrations and diagrams that provide visual aids to the questions and answers</li>
181
- <li>Friendly and colorful format</li>
182
- <li>Suitable for all ages and levels of difficulty</li>
183
- </ul>
184
- <h4>Pros and cons</h4>
185
- | Pros | Cons | | --- | --- | | It covers a wide range of topics and quizzes | It may not be updated with the latest information | | It provides illustrations and diagrams that enhance your general knowledge and quiz skills | It may have some errors or typos | | It is friendly and colorful | It may not be available in some regions or formats | | It is suitable for all ages and levels of difficulty | It may not have enough facts or trivia to accompany the questions | <h3>The Fun Knowledge Encyclopedia by Bill O'Neill</h3>
186
- <h4>Overview</h4>
187
- <p>The Fun Knowledge Encyclopedia by Bill O'Neill is a fun and captivating quiz book that covers various topics, such as history, geography, science, arts, culture, sports, current affairs, and more. It contains 1000 questions and answers that are divided into 100 quizzes. It also includes amazing facts and stories that will astonish and delight you.</p>
188
- <h4>Features</h4>
189
- <ul>
190
- <li>1000 questions and answers on various topics of general knowledge</li>
191
- <li>100 quizzes, each with 10 questions and answers</li>
192
- <li>Amazing facts and stories that provide wonder and entertainment</li>
193
- <li>Engaging and lively format</li>
194
- <li>Suitable for all ages and levels of difficulty</li>
195
- </ul>
196
- <h4>Pros and cons</h4>
197
- | Pros | Cons | | --- | --- | | It covers a wide range of topics and quizzes | It may not be updated with the latest information | | It provides amazing facts and stories that make the quiz more enjoyable | It may have some errors or typos | | It is engaging and lively | It may not be available in some regions or formats | | It is suitable for all ages and levels of difficulty | It may not have enough questions or variety | <h3>The Amazing Quiz Book by Jack Goldstein and Frankie Taylor</h3>
198
- <h4>Overview</h4>
199
- <p>The Amazing Quiz Book by Jack Goldstein and Frankie Taylor is a fun and exciting quiz book that covers various topics, such as history, geography, science, arts, culture, sports, current affairs, and more. It contains 500 questions and answers that are divided into 50 quizzes. It also includes a bonus round that will test your speed and accuracy.</p>
200
- <h4>Features</h4>
201
- <ul>
202
- <li>500 questions and answers on various topics of general knowledge</li>
203
- <li>50 quizzes, each with 10 questions and answers</li>
204
- <li>A bonus round that provides a timed challenge</li>
205
- <li>Cool and modern format</li>
206
- <li>Suitable for all ages and levels of difficulty</li>
207
- </ul>
208
- <h4>Pros and cons</h4>
209
- | Pros | Cons | | --- | --- | | It covers a wide range of topics and quizzes | It may not be updated with the latest information | | It provides a bonus round that adds thrill and excitement to the quiz | It may have some errors or typos | | It is cool and modern | It may not be available in some regions or formats | | It is suitable for all ages and levels of difficulty | It may not have enough facts or trivia to accompany the questions | <h2>Conclusion</h2>
210
- <p>In conclusion, general knowledge is an important and beneficial skill that you can improve by reading quiz books. Quiz books are books that contain questions and answers on various topics of general knowledge. They are designed to test your knowledge, challenge your mind, and entertain you at the same time.</p>
211
- <p>In this article, we have shown you how to download general knowledge quiz book PDF for free from various sources. We have also reviewed the top 10 general knowledge quiz books that you can read online or offline, and compared their features, pros, and cons.</p>
212
- <p>We hope that this article has helped you find the best quiz book for your needs and preferences. Whether you want to learn something new, have fun, or impress your friends, there is a quiz book for everyone. So what are you waiting for? Grab your quiz book today and start quizzing!</p>
213
- <h2>Frequently Asked Questions (FAQs)</h2>
214
- <h3>Q: What is the best way to prepare for a general knowledge quiz?</h3>
215
- <p>A: The best way to prepare for a general knowledge quiz is to read widely and regularly on various topics of interest. You can also watch documentaries, listen to podcasts, browse websites, or follow social media accounts that provide general knowledge information. Additionally, you can practice answering quiz questions from books or online sources.</p>
216
- <h3>Q: How can I improve my general knowledge?</h3>
217
- <p>A: You can improve your general knowledge by following these tips:</p>
218
- <ul>
219
- <li>Be curious and ask questions about everything.</li>
220
- <li>Read books, magazines, newspapers, blogs, etc. on different subjects.</li>
221
- <li>Watch TV shows, movies, videos, etc. that are informative and educational.</li>
222
- <li>Listen to radio programs, podcasts, audiobooks, etc. that are relevant and interesting.</li>
223
- <li>Browse websites, apps, games, etc. that are fun and interactive.</li>
224
- <li>Follow social media accounts, newsletters, forums, etc. that are reliable and updated.</li>
225
- <li>Join clubs, groups, events, etc. that are related to your hobbies or passions.</li>
226
- <li>Travel to different places and cultures.</li>
227
- <li>Talk to people who have different backgrounds and experiences.</li>
228
- <li>Play games, puzzles, quizzes, etc. that are fun and challenging.</li>
229
- </ul>
230
- <h3>Q: What are some of the most common topics in general knowledge quizzes?</h3>
231
- <p>A: Some of the most common topics in general knowledge quizzes are:</p>
232
- <ul>
233
- <li>History: This topic covers the events, people, places, dates, etc. that shaped the past.</li>
234
- <li>Geography: This topic covers the physical features, locations, climates, populations, etc. of the world.</li>
235
- <li>Science: This topic covers the natural phenomena, laws, theories, discoveries, inventions, etc. that explain the world.</li>
236
- <li>Arts: This topic covers the forms, styles, movements, creators, works, etc. that express the world.</li>
237
- <li>Culture: This topic covers the beliefs, values, customs, traditions, languages, religions, etc. that define the world.</li>
238
- <li>Sports: This topic covers the games, rules, players, teams, records, etc. that entertain the world.</li>
239
- <li>Current Affairs: This topic covers the news, issues, trends, events, etc. that affect the world.</li>
240
- </ul>
241
- <h3>Q: What are some of the best sources for general knowledge information?</h3>
242
- <p>A: Some of the best sources for general knowledge information are:</p>
243
- <ul>
244
- <li>Books: Books are reliable and authoritative sources of information that cover a wide range of topics and genres. You can find books in libraries, bookstores, or online platforms.</li>
245
- <li>Encyclopedias: Encyclopedias are comprehensive and organized sources of information that cover various topics and fields. You can find encyclopedias in print or online formats.</li>
246
- <li>Dictionaries: Dictionaries are useful and handy sources of information that provide definitions, meanings, synonyms, antonyms, pronunciations, etc. of words. You can find dictionaries in print or online formats.</li>
247
- <li>Almanacs: Almanacs are informative and updated sources of information that provide facts, figures, statistics, charts, tables, etc. on various topics and categories. You can find almanacs in print or online formats.</li>
248
- <li>Atlases: Atlases are visual and interactive sources of information that provide maps, images, diagrams, etc. of various places and regions. You can find atlases in print or online formats.</li>
249
- </ul>
250
- <h3>Q: How can I make my own general knowledge quiz?</h3>
251
- <p>A: You can make your own general knowledge quiz by following these steps (a small script sketch follows the list):</p>
252
- <ol>
253
- <li>Pick a topic or category that interests you or your audience.</li>
254
- <li>Research and gather information on that topic or category from reliable and updated sources.</li>
255
- <li>Create a list of questions and answers based on that information. Make sure to vary the difficulty and format of the questions.</li>
256
- <li>Organize your questions and answers into a quiz format. You can use a template or a tool to help you with this step.</li>
257
- <li>Test your quiz on yourself or others to check for errors or improvements.</li>
258
- <li>Publish or share your quiz with your friends or family and enjoy!</li>
259
- </ol>
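As a hedged illustration of the steps above, here is a tiny, self-contained Python quiz runner. Every question, answer, and scoring rule in it is invented purely for the example.

```python
# A minimal quiz runner sketching the steps above: pick a topic, gather
# question/answer pairs, quiz the user, and report a score.
questions = [
    ("Which planet is known as the Red Planet?", "mars"),
    ("Who wrote 'Romeo and Juliet'?", "shakespeare"),
    ("What is the capital of Japan?", "tokyo"),
]

def run_quiz(items):
    score = 0
    for prompt, answer in items:
        reply = input(prompt + " ").strip().lower()
        if answer in reply:          # lenient matching: the key word is enough
            score += 1
            print("Correct!")
        else:
            print(f"Not quite -- the answer is {answer.title()}.")
    print(f"You scored {score}/{len(items)}.")

if __name__ == "__main__":
    run_quiz(questions)
```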
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/7hao/bingo/src/lib/isomorphic/index.ts DELETED
@@ -1,17 +0,0 @@
1
- 'use client'
2
-
3
- import Default from './browser'
4
-
5
- let exportsModel: any = {}
6
-
7
- if (process.browser) {
8
- Object.assign(exportsModel, require('./browser').default)
9
- } else {
10
- Object.assign(exportsModel, require('./node').default)
11
- }
12
-
13
- export default exportsModel! as typeof Default
14
-
15
- export const fetch: typeof Default.fetch = exportsModel!.fetch
16
- export const WebSocket: typeof Default.WebSocket = exportsModel!.WebSocket
17
- export const debug: typeof Default.debug = exportsModel!.debug
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/A00001/bingothoo/src/components/welcome-screen.tsx DELETED
@@ -1,34 +0,0 @@
1
- import { useBing } from '@/lib/hooks/use-bing'
2
-
3
- const exampleMessages = [
4
- {
5
- heading: '🧐 提出复杂问题',
6
- message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?`
7
- },
8
- {
9
- heading: '🙌 获取更好的答案',
10
- message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?'
11
- },
12
- {
13
- heading: '🎨 获得创意灵感',
14
- message: `以海盗的口吻写一首关于外太空鳄鱼的俳句`
15
- }
16
- ]
17
-
18
- export function WelcomeScreen({ setInput }: Pick<ReturnType<typeof useBing>, 'setInput'>) {
19
- return (
20
- <div className="welcome-container flex">
21
- {exampleMessages.map(example => (
22
- <button key={example.heading} className="welcome-item w-4/5 sm:w-[240px]" type="button" onClick={() => setInput(example.message)}>
23
- <div className="item-title">{example.heading}</div>
24
- <div className="item-content">
25
- <div className="item-body">
26
- <div className="item-header"></div>
27
- <div>&ldquo;{example.message}&rdquo;</div>
28
- </div>
29
- </div>
30
- </button>
31
- ))}
32
- </div>
33
- )
34
- }
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AIWaves/Software_Company/src/agents/Action/base_action.py DELETED
@@ -1,49 +0,0 @@
1
- from Memory import Memory
2
- from utils import extract
3
- class Action:
4
- """
5
- The basic action unit of agent
6
- """
7
- def __init__(self,**kwargs):
8
- self.response = None
9
- self.is_user = False
10
- self.res_dict = {}
11
- self.name = ""
12
- self.role = ""
13
- for key,value in kwargs.items():
14
- setattr(self,key,value)
15
-
16
-
17
- def process(self):
18
- """
19
- processing action
20
- Rerutn : memory(Memory)
21
- """
22
- response = self.response
23
- send_name = self.name
24
- send_role = self.role
25
- all = ""
26
- for res in response:
27
- all += res
28
- parse = f"{send_name}:"
29
-
30
- # 将里面对话的第三人称删了
31
- # The third person in the dialogue was deleted.
32
- while parse in all:
33
- index = all.index(parse) + len(parse)
34
- all = all[index:]
35
-
36
- if not self.is_user:
37
- print(f"{send_name}({send_role}):{all}")
38
- # for software
39
- if "<title>" in all:
40
- title = extract(all,"title")
41
- python = extract(all,"python")
42
- os.makedirs("output_code", exist_ok=True)
43
- file_name = "output_code/" + title
44
- with open(file_name, "w", encoding="utf-8") as f:
45
- f.write(python)
46
- memory = Memory(send_role, send_name, all)
47
- return memory
48
-
49
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Abdulkader/HumanMotionsDetector/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: HumanMotionsDetector
3
- emoji: 🔥
4
- colorFrom: purple
5
- colorTo: gray
6
- sdk: gradio
7
- sdk_version: 3.12.0
8
- app_file: app.py
9
- pinned: false
10
- license: mit
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/midas/midas/midas_net.py DELETED
@@ -1,76 +0,0 @@
1
- """MidashNet: Network for monocular depth estimation trained by mixing several datasets.
2
- This file contains code that is adapted from
3
- https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py
4
- """
5
- import torch
6
- import torch.nn as nn
7
-
8
- from .base_model import BaseModel
9
- from .blocks import FeatureFusionBlock, Interpolate, _make_encoder
10
-
11
-
12
- class MidasNet(BaseModel):
13
- """Network for monocular depth estimation.
14
- """
15
-
16
- def __init__(self, path=None, features=256, non_negative=True):
17
- """Init.
18
-
19
- Args:
20
- path (str, optional): Path to saved model. Defaults to None.
21
- features (int, optional): Number of features. Defaults to 256.
22
- backbone (str, optional): Backbone network for encoder. Defaults to resnet50
23
- """
24
- print("Loading weights: ", path)
25
-
26
- super(MidasNet, self).__init__()
27
-
28
- use_pretrained = False if path is None else True
29
-
30
- self.pretrained, self.scratch = _make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained)
31
-
32
- self.scratch.refinenet4 = FeatureFusionBlock(features)
33
- self.scratch.refinenet3 = FeatureFusionBlock(features)
34
- self.scratch.refinenet2 = FeatureFusionBlock(features)
35
- self.scratch.refinenet1 = FeatureFusionBlock(features)
36
-
37
- self.scratch.output_conv = nn.Sequential(
38
- nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1),
39
- Interpolate(scale_factor=2, mode="bilinear"),
40
- nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1),
41
- nn.ReLU(True),
42
- nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0),
43
- nn.ReLU(True) if non_negative else nn.Identity(),
44
- )
45
-
46
- if path:
47
- self.load(path)
48
-
49
- def forward(self, x):
50
- """Forward pass.
51
-
52
- Args:
53
- x (tensor): input data (image)
54
-
55
- Returns:
56
- tensor: depth
57
- """
58
-
59
- layer_1 = self.pretrained.layer1(x)
60
- layer_2 = self.pretrained.layer2(layer_1)
61
- layer_3 = self.pretrained.layer3(layer_2)
62
- layer_4 = self.pretrained.layer4(layer_3)
63
-
64
- layer_1_rn = self.scratch.layer1_rn(layer_1)
65
- layer_2_rn = self.scratch.layer2_rn(layer_2)
66
- layer_3_rn = self.scratch.layer3_rn(layer_3)
67
- layer_4_rn = self.scratch.layer4_rn(layer_4)
68
-
69
- path_4 = self.scratch.refinenet4(layer_4_rn)
70
- path_3 = self.scratch.refinenet3(path_4, layer_3_rn)
71
- path_2 = self.scratch.refinenet2(path_3, layer_2_rn)
72
- path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
73
-
74
- out = self.scratch.output_conv(path_1)
75
-
76
- return torch.squeeze(out, dim=1)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AgentVerse/agentVerse/agentverse/memory_manipulator/__init__.py DELETED
@@ -1,9 +0,0 @@
1
- from agentverse.registry import Registry
2
-
3
- memory_manipulator_registry = Registry(name="Memory_Manipulator_Registry")
4
-
5
- from .base import BaseMemoryManipulator
6
- from .basic import BasicMemoryManipulator
7
- from .reflection import Reflection
8
- from .plan import Plan
9
-
 
 
 
 
 
 
 
 
 
 
spaces/AgentVerse/agentVerse/scripts/evaluate_commongen.py DELETED
@@ -1,53 +0,0 @@
1
- import argparse
2
- import json
3
- import spacy
4
- from tqdm import tqdm
5
-
6
-
7
- nlp = spacy.load("en_core_web_sm")
8
-
9
-
10
- def coverage_score(preds, concept_sets):
11
- covs = []
12
- missings = []
13
- for p, cs in tqdm(zip(preds, concept_sets), total=len(preds)):
14
- cs = set(cs)
15
- lemmas = set()
16
- for token in nlp(p):
17
- lemmas.add(token.lemma_)
18
- cov = len(lemmas & cs) / len(cs)
19
- covs.append(cov)
20
- missings.append(cs - lemmas)
21
- return sum(covs) / len(covs), missings
22
-
23
-
24
- def scoring(preds, concept_sets):
25
- # Scores, Coverage, Coverage_POS = pivot_score.score(pred, ref, concept, ori_concepts, scoring="steiner_tree", parser="spacy", verbose=False)
26
- coverage, missing_tokens = coverage_score(preds, concept_sets)
27
- # print(f"System level Score: {sum(Scores)/len(Scores)*100:.2f}")
28
- print(f"System level Coverage: {coverage*100:.2f}")
29
- # print(f"System level Coverage_POS: {sum(Coverage_POS)/len(Scores)*100:.2f}")
30
- return coverage, missing_tokens
31
-
32
-
33
- if __name__ == "__main__":
34
- parser = argparse.ArgumentParser()
35
- parser.add_argument("--path", default="", type=str)
36
- args = parser.parse_args()
37
- # nlp.pipeline = [("tagger", nlp.tagger), ("parser", nlp.parser)]
38
-
39
- preds_final = []
40
- preds_first = []
41
- concept_sets = []
42
- with open(args.path) as f:
43
- for line in f:
44
- line = json.loads(line)
45
- preds_final.append(line["response"])
46
- if line["logs"][0]["module"] == "Role Assigner":
47
- preds_first.append(line["logs"][1]["content"])
48
- else:
49
- preds_first.append(line["logs"][0]["content"])
50
- concept_sets.append(line["input"])
51
-
52
- scoring(preds_final, concept_sets)
53
- scoring(preds_first, concept_sets)
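To make the metric concrete, here is a small self-contained sketch of the coverage computation the script performs, using an invented prediction/concept pair. It assumes spaCy and the `en_core_web_sm` model are installed.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

pred = "A dog jumps up to catch the frisbee in the park."
concepts = {"dog", "catch", "frisbee"}

# coverage = fraction of concepts whose lemma appears in the prediction
lemmas = {token.lemma_ for token in nlp(pred)}
coverage = len(lemmas & concepts) / len(concepts)
print(f"coverage: {coverage:.2f}")   # 1.00 -- every concept is covered here
```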
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/text/japanese.py DELETED
@@ -1,153 +0,0 @@
1
- import re
2
- from unidecode import unidecode
3
- import pyopenjtalk
4
-
5
-
6
- # Regular expression matching Japanese without punctuation marks:
7
- _japanese_characters = re.compile(
8
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
9
-
10
- # Regular expression matching non-Japanese characters or punctuation marks:
11
- _japanese_marks = re.compile(
12
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
13
-
14
- # List of (symbol, Japanese) pairs for marks:
15
- _symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
16
- ('%', 'パーセント')
17
- ]]
18
-
19
- # List of (romaji, ipa) pairs for marks:
20
- _romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
21
- ('ts', 'ʦ'),
22
- ('u', 'ɯ'),
23
- ('j', 'ʥ'),
24
- ('y', 'j'),
25
- ('ni', 'n^i'),
26
- ('nj', 'n^'),
27
- ('hi', 'çi'),
28
- ('hj', 'ç'),
29
- ('f', 'ɸ'),
30
- ('I', 'i*'),
31
- ('U', 'ɯ*'),
32
- ('r', 'ɾ')
33
- ]]
34
-
35
- # List of (romaji, ipa2) pairs for marks:
36
- _romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
37
- ('u', 'ɯ'),
38
- ('ʧ', 'tʃ'),
39
- ('j', 'dʑ'),
40
- ('y', 'j'),
41
- ('ni', 'n^i'),
42
- ('nj', 'n^'),
43
- ('hi', 'çi'),
44
- ('hj', 'ç'),
45
- ('f', 'ɸ'),
46
- ('I', 'i*'),
47
- ('U', 'ɯ*'),
48
- ('r', 'ɾ')
49
- ]]
50
-
51
- # List of (consonant, sokuon) pairs:
52
- _real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [
53
- (r'Q([↑↓]*[kg])', r'k#\1'),
54
- (r'Q([↑↓]*[tdjʧ])', r't#\1'),
55
- (r'Q([↑↓]*[sʃ])', r's\1'),
56
- (r'Q([↑↓]*[pb])', r'p#\1')
57
- ]]
58
-
59
- # List of (consonant, hatsuon) pairs:
60
- _real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [
61
- (r'N([↑↓]*[pbm])', r'm\1'),
62
- (r'N([↑↓]*[ʧʥj])', r'n^\1'),
63
- (r'N([↑↓]*[tdn])', r'n\1'),
64
- (r'N([↑↓]*[kg])', r'ŋ\1')
65
- ]]
66
-
67
-
68
- def symbols_to_japanese(text):
69
- for regex, replacement in _symbols_to_japanese:
70
- text = re.sub(regex, replacement, text)
71
- return text
72
-
73
-
74
- def japanese_to_romaji_with_accent(text):
75
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
76
- text = symbols_to_japanese(text)
77
- sentences = re.split(_japanese_marks, text)
78
- marks = re.findall(_japanese_marks, text)
79
- text = ''
80
- for i, sentence in enumerate(sentences):
81
- if re.match(_japanese_characters, sentence):
82
- if text != '':
83
- text += ' '
84
- labels = pyopenjtalk.extract_fullcontext(sentence)
85
- for n, label in enumerate(labels):
86
- phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
87
- if phoneme not in ['sil', 'pau']:
88
- text += phoneme.replace('ch', 'ʧ').replace('sh',
89
- 'ʃ').replace('cl', 'Q')
90
- else:
91
- continue
92
- # n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
93
- a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
94
- a2 = int(re.search(r"\+(\d+)\+", label).group(1))
95
- a3 = int(re.search(r"\+(\d+)/", label).group(1))
96
- if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']:
97
- a2_next = -1
98
- else:
99
- a2_next = int(
100
- re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
101
- # Accent phrase boundary
102
- if a3 == 1 and a2_next == 1:
103
- text += ' '
104
- # Falling
105
- elif a1 == 0 and a2_next == a2 + 1:
106
- text += '↓'
107
- # Rising
108
- elif a2 == 1 and a2_next == 2:
109
- text += '↑'
110
- if i < len(marks):
111
- text += unidecode(marks[i]).replace(' ', '')
112
- return text
113
-
114
-
115
- def get_real_sokuon(text):
116
- for regex, replacement in _real_sokuon:
117
- text = re.sub(regex, replacement, text)
118
- return text
119
-
120
-
121
- def get_real_hatsuon(text):
122
- for regex, replacement in _real_hatsuon:
123
- text = re.sub(regex, replacement, text)
124
- return text
125
-
126
-
127
- def japanese_to_ipa(text):
128
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
129
- text = re.sub(
130
- r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
131
- text = get_real_sokuon(text)
132
- text = get_real_hatsuon(text)
133
- for regex, replacement in _romaji_to_ipa:
134
- text = re.sub(regex, replacement, text)
135
- return text
136
-
137
-
138
- def japanese_to_ipa2(text):
139
- text = japanese_to_romaji_with_accent(text).replace('...', '…')
140
- text = get_real_sokuon(text)
141
- text = get_real_hatsuon(text)
142
- for regex, replacement in _romaji_to_ipa2:
143
- text = re.sub(regex, replacement, text)
144
- return text
145
-
146
-
147
- def japanese_to_ipa3(text):
148
- text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace(
149
- 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a')
150
- text = re.sub(
151
- r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
152
- text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text)
153
- return text
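A brief, hedged usage sketch: it assumes `pyopenjtalk` and `unidecode` are installed and that the module is importable as `text.japanese`, as in the layout above. The exact output strings depend on the pyopenjtalk version, so they are not shown.

```python
# sketch only: convert a Japanese sentence to accented romaji and to IPA
from text.japanese import japanese_to_romaji_with_accent, japanese_to_ipa2

sentence = "こんにちは、世界。"
print(japanese_to_romaji_with_accent(sentence))  # romaji with ↑/↓ pitch-accent marks
print(japanese_to_ipa2(sentence))                # IPA-style transcription used by the TTS frontend
```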
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AlanMars/QYL-AI-Space/run_Linux.sh DELETED
@@ -1,31 +0,0 @@
1
- #!/bin/bash
2
-
3
- # Get the directory this script lives in
4
- script_dir=$(dirname "$(readlink -f "$0")")
5
-
6
- # Change the working directory to the script directory
7
- cd "$script_dir" || exit
8
-
9
- # Check whether the Git repository has updates
10
- git remote update
11
- pwd
12
-
13
- if ! git status -uno | grep 'up to date' > /dev/null; then
14
- # If there are updates, stop the currently running server
15
- pkill -f ChuanhuChatbot.py
16
-
17
- # Pull the latest changes
18
- git pull
19
-
20
- # Install dependencies
21
- pip3 install -r requirements.txt
22
-
23
- # Restart the server
24
- nohup python3 ChuanhuChatbot.py &
25
- fi
26
-
27
- # Check whether ChuanhuChatbot.py is running
28
- if ! pgrep -f ChuanhuChatbot.py > /dev/null; then
29
- # If it is not running, start the server
30
- nohup python3 ChuanhuChatbot.py &
31
- fi
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Andy1621/uniformer_image_detection/configs/empirical_attention/faster_rcnn_r50_fpn_attention_1111_dcn_1x_coco.py DELETED
@@ -1,16 +0,0 @@
1
- _base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
2
- model = dict(
3
- backbone=dict(
4
- plugins=[
5
- dict(
6
- cfg=dict(
7
- type='GeneralizedAttention',
8
- spatial_range=-1,
9
- num_heads=8,
10
- attention_type='1111',
11
- kv_stride=2),
12
- stages=(False, False, True, True),
13
- position='after_conv2')
14
- ],
15
- dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
16
- stage_with_dcn=(False, True, True, True)))
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Andy1621/uniformer_image_detection/configs/fpg/faster_rcnn_r50_fpg_crop640_50e_coco.py DELETED
@@ -1,48 +0,0 @@
1
- _base_ = 'faster_rcnn_r50_fpn_crop640_50e_coco.py'
2
-
3
- norm_cfg = dict(type='BN', requires_grad=True)
4
- model = dict(
5
- neck=dict(
6
- type='FPG',
7
- in_channels=[256, 512, 1024, 2048],
8
- out_channels=256,
9
- inter_channels=256,
10
- num_outs=5,
11
- stack_times=9,
12
- paths=['bu'] * 9,
13
- same_down_trans=None,
14
- same_up_trans=dict(
15
- type='conv',
16
- kernel_size=3,
17
- stride=2,
18
- padding=1,
19
- norm_cfg=norm_cfg,
20
- inplace=False,
21
- order=('act', 'conv', 'norm')),
22
- across_lateral_trans=dict(
23
- type='conv',
24
- kernel_size=1,
25
- norm_cfg=norm_cfg,
26
- inplace=False,
27
- order=('act', 'conv', 'norm')),
28
- across_down_trans=dict(
29
- type='interpolation_conv',
30
- mode='nearest',
31
- kernel_size=3,
32
- norm_cfg=norm_cfg,
33
- order=('act', 'conv', 'norm'),
34
- inplace=False),
35
- across_up_trans=None,
36
- across_skip_trans=dict(
37
- type='conv',
38
- kernel_size=1,
39
- norm_cfg=norm_cfg,
40
- inplace=False,
41
- order=('act', 'conv', 'norm')),
42
- output_trans=dict(
43
- type='last_conv',
44
- kernel_size=3,
45
- order=('act', 'conv', 'norm'),
46
- inplace=False),
47
- norm_cfg=norm_cfg,
48
- skip_inds=[(0, 1, 2, 3), (0, 1, 2), (0, 1), (0, ), ()]))
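For context, a hedged sketch of how a config like this is typically consumed. It assumes an mmcv 1.x installation and that the full mmdetection `configs/` tree, including the `_base_` file referenced above, is present in the working directory.

```python
# sketch: load the config and inspect the FPG neck it defines
from mmcv import Config

cfg = Config.fromfile('configs/fpg/faster_rcnn_r50_fpg_crop640_50e_coco.py')
print(cfg.model.neck.type)         # 'FPG'
print(cfg.model.neck.stack_times)  # 9
```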
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Andy1621/uniformer_image_detection/configs/legacy_1.x/README.md DELETED
@@ -1,53 +0,0 @@
1
- # Legacy Configs in MMDetection V1.x
2
-
3
- [OTHERS]
4
-
5
- Configs in this directory implement the legacy configs used by MMDetection V1.x and its model zoos.
6
-
7
- To help users convert their models from V1.x to MMDetection V2.0, we provide V1.x configs to run inference with the converted V1.x models.
8
- Due to the BC-breaking changes in MMDetection V2.0 from MMDetection V1.x, running inference with the same model weights in these two versions will produce different results. The difference is within 1% absolute AP, as shown in the following table.
9
-
10
- ## Usage
11
-
12
- To upgrade the model version, the users need to do the following steps.
13
-
14
- ### 1. Convert model weights
15
-
16
- There are three main differences in the model weights between the V1.x and V2.0 codebases.
17
-
18
- 1. Since the class order in every detector's classification branch is changed, all the legacy model weights need to go through the conversion process.
19
- 2. The regression and segmentation heads no longer contain the background channel. Weights in these background channels should be removed to fit the current codebase.
20
- 3. For two-stage detectors, their weights need to be upgraded since MMDetection V2.0 refactors all the two-stage detectors with `RoIHead`.
21
-
22
- Users can make the same modifications as mentioned above for their self-implemented
23
- detectors. We provide a script `tools/model_converters/upgrade_model_version.py` to convert the model weights in the V1.x model zoo.
24
-
25
- ```bash
26
- python tools/model_converters/upgrade_model_version.py ${OLD_MODEL_PATH} ${NEW_MODEL_PATH} --num-classes ${NUM_CLASSES}
27
-
28
- ```
29
-
30
- - OLD_MODEL_PATH: the path to load the model weights in 1.x version.
31
- - NEW_MODEL_PATH: the path to save the converted model weights in 2.0 version.
32
- - NUM_CLASSES: number of classes of the original model weights. Usually it is 81 for COCO dataset, 21 for VOC dataset.
33
- The number of classes in V2.0 models should be equal to that in V1.x models - 1.
34
-
35
- ### 2. Use configs with legacy settings
36
-
37
- After converting the model weights, check out the v1.2 release to find the corresponding config file that uses the legacy settings.
38
- The V1.x models usually need these three legacy modules: `LegacyAnchorGenerator`, `LegacyDeltaXYWHBBoxCoder`, and `RoIAlign(align=False)`.
39
- For models using ResNet Caffe backbones, users also need to change the pretrained model name and the corresponding `img_norm_cfg`.
40
- An example is in [`retinanet_r50_caffe_fpn_1x_coco_v1.py`](retinanet_r50_caffe_fpn_1x_coco_v1.py)
41
- Then use the config to test the model weights. For most models, the obtained results should be close to those in V1.x.
42
- We provide configs of some common structures in this directory.
43
-
44
- ## Performance
45
-
46
- The performance changes after converting the models in this directory are listed below.
47
- | Method | Style | Lr schd | V1.x box AP | V1.x mask AP | V2.0 box AP | V2.0 mask AP | Config | Download |
48
- | :-------------: | :-----: | :-----: | :------:| :-----: |:------:| :-----: | :-------: |:------------------------------------------------------------------------------------------------------------------------------: |
49
- | Mask R-CNN R-50-FPN | pytorch | 1x | 37.3 | 34.2 | 36.8 | 33.9 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/legacy_1.x/mask_rcnn_r50_fpn_1x_coco_v1.py) | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/mask_rcnn_r50_fpn_1x_20181010-069fa190.pth)|
50
- | RetinaNet R-50-FPN | caffe | 1x | 35.8 | - | 35.4 | - | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/legacy_1.x/retinanet_r50_caffe_1x_coco_v1.py) | - |
51
- | RetinaNet R-50-FPN | pytorch | 1x | 35.6 | - | 35.2 | - | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/legacy_1.x/retinanet_r50_fpn_1x_coco_v1.py) | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/retinanet_r50_fpn_1x_20181125-7b0c2548.pth) |
52
- | Cascade Mask R-CNN R-50-FPN | pytorch | 1x | 41.2 | 35.7 | 40.8 | 35.6 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/legacy_1.x/cascade_mask_rcnn_r50_fpn_1x_coco_v1.py) | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/cascade_mask_rcnn_r50_fpn_1x_20181123-88b170c9.pth) |
53
- | SSD300-VGG16 | caffe | 120e | 25.7 | - | 25.4 | - | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/legacy_1.x/ssd300_coco_v1.py) | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/ssd300_coco_vgg16_caffe_120e_20181221-84d7110b.pth) |
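Purely as an illustration of step 2 in the conversion notes above (dropping the background channel from a V1.x head), here is a hedged sketch. The tensor name, its shape, and the position of the background row are assumptions; the real conversion, including the class reordering, should be done with `tools/model_converters/upgrade_model_version.py`.

```python
import torch

# pretend V1.x classification weight: 81 rows = 80 COCO classes + 1 background
old_cls_weight = torch.randn(81, 1024)

# keep only the foreground rows (the background row is shown as row 0 purely
# for illustration; the upgrade script also handles the class reordering)
new_cls_weight = old_cls_weight[1:]
print(new_cls_weight.shape)   # torch.Size([80, 1024])
```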
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Andy1621/uniformer_image_detection/configs/pisa/pisa_ssd512_coco.py DELETED
@@ -1,8 +0,0 @@
1
- _base_ = '../ssd/ssd512_coco.py'
2
-
3
- model = dict(
4
- bbox_head=dict(type='PISASSDHead'),
5
- train_cfg=dict(isr=dict(k=2., bias=0.), carl=dict(k=1., bias=0.2)))
6
-
7
- optimizer_config = dict(
8
- _delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
 
 
 
 
 
 
 
 
 
spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x512_160k_ade20k.py DELETED
@@ -1,5 +0,0 @@
1
- _base_ = [
2
- '../_base_/models/fcn_hr18.py', '../_base_/datasets/ade20k.py',
3
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
4
- ]
5
- model = dict(decode_head=dict(num_classes=150))
 
 
 
 
 
 
spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/dataloader/data_loader.py DELETED
@@ -1,238 +0,0 @@
1
- import torch
2
- import torchvision.transforms as transforms
3
- import torch.utils.data as data
4
- from util import task
5
- from .image_folder import make_dataset
6
- import random
7
- import numpy as np
8
- import copy
9
- import skimage.morphology as sm
10
- from PIL import Image, ImageFile, ImageOps
11
- ImageFile.LOAD_TRUNCATED_IMAGES = True
12
-
13
-
14
- ######################################################################################
15
- # Create the dataloader
16
- ######################################################################################
17
- class CreateDataset(data.Dataset):
18
- def __init__(self, opt):
19
- self.opt = opt
20
- self.img_paths, self.img_size = make_dataset(opt.img_file)
21
- if opt.mask_file != 'none': # load the random mask files for training and testing
22
- self.mask_paths, self.mask_size = make_dataset(opt.mask_file)
23
- self.transform = get_transform(opt, convert=False, augment=False)
24
- fixed_opt = copy.deepcopy(opt)
25
- fixed_opt.preprocess = 'scale_longside'
26
- fixed_opt.load_size = fixed_opt.fixed_size
27
- fixed_opt.no_flip = True
28
- self.transform_fixed = get_transform(fixed_opt, convert=True, augment=False)
29
-
30
- def __len__(self):
31
- """return the total number of examples in the dataset"""
32
- return self.img_size
33
-
34
- def __getitem__(self, item):
35
- """return a data point and its metadata information"""
36
- # load the image and conditional input
37
- img_org, img, img_path = self._load_img(item)
38
- if self.opt.batch_size > 1: # padding the image to the same size for batch training
39
- img_org = transforms.functional.pad(img_org, (0, 0, self.opt.fine_size-self.img_h, self.opt.fine_size-self.img_w))
40
- img = transforms.functional.pad(img, (0, 0, self.opt.fixed_size - img.size(-1), self.opt.fixed_size - img.size(-2)))
41
- pad_mask = torch.zeros_like(img_org)
42
- pad_mask[:, :self.img_w, :self.img_h] = 1
43
- # load the mask
44
- mask, mask_type = self._load_mask(item, img_org)
45
- if self.opt.reverse_mask:
46
- if self.opt.isTrain:
47
- mask = 1 - mask if random.random() > 0.8 else mask
48
- else:
49
- mask = 1 - mask
50
- return {'img_org': img_org, 'img': img, 'img_path': img_path, 'mask': mask, 'pad_mask': pad_mask}
51
-
52
- def name(self):
53
- return ""
54
-
55
- def _load_img(self, item):
56
- """load the original image and preprocess image"""
57
- img_path = self.img_paths[item % self.img_size]
58
- img_pil = Image.open(img_path).convert('RGB')
59
- img_org = self.transform(img_pil)
60
- img = self.transform_fixed(img_org)
61
- img_org = transforms.ToTensor()(img_org)
62
- img_pil.close()
63
- self.img_c, self.img_w, self.img_h = img_org.size()
64
- return img_org, img, img_path
65
-
66
- def _mask_dilation(self, mask):
67
- """mask erosion for different region"""
68
- mask = np.array(mask)
69
- pixel = np.random.randint(3, 25)
70
- mask = sm.erosion(mask, sm.square(pixel)).astype(np.uint8)
71
-
72
- return mask
73
-
74
- def _load_mask(self, item, img):
75
- """load the mask for image completion task"""
76
- c, h, w = img.size()
77
- if isinstance(self.opt.mask_type, list):
78
- mask_type_index = random.randint(0, len(self.opt.mask_type) - 1)
79
- mask_type = self.opt.mask_type[mask_type_index]
80
- else:
81
- mask_type = self.opt.mask_type
82
-
83
- if mask_type == 0: # center mask
84
- if random.random() > 0.3 and self.opt.isTrain:
85
- return task.random_regular_mask(img), mask_type # random regular mask
86
- return task.center_mask(img), mask_type
87
- elif mask_type == 1: # random regular mask
88
- return task.random_regular_mask(img), mask_type
89
- elif mask_type == 2: # random irregular mask
90
- return task.random_irregular_mask(img), mask_type
91
- elif mask_type == 3:
92
- # external mask from "Image Inpainting for Irregular Holes Using Partial Convolutions (ECCV18)"
93
- if self.opt.isTrain:
94
- mask_index = random.randint(0, self.mask_size-1)
95
- mask_transform = transforms.Compose(
96
- [
97
- transforms.RandomHorizontalFlip(),
98
- transforms.RandomRotation(10),
99
- transforms.RandomCrop([self.opt.fine_size + 64, self.opt.fine_size + 64]),
100
- transforms.Resize([h, w])
101
- ]
102
- )
103
- else:
104
- mask_index = item
105
- mask_transform = transforms.Compose(
106
- [
107
- transforms.Resize([h, w])
108
- ]
109
- )
110
- mask_pil = Image.open(self.mask_paths[mask_index]).convert('L')
111
- mask = mask_transform(mask_pil)
112
- mask_pil.close()
113
- if self.opt.isTrain:
114
- mask = self._mask_dilation(mask)
115
- else:
116
- mask = np.array(mask) < 128
117
- mask = torch.tensor(mask).view(1, h, w).float()
118
- return mask, mask_type
119
- else:
120
- raise NotImplementedError('mask type [%s] is not implemented' % str(mask_type))
121
-
122
-
123
- def dataloader(opt):
124
- datasets = CreateDataset(opt)
125
- dataset = data.DataLoader(datasets, batch_size=opt.batch_size, shuffle=not opt.no_shuffle,
126
- num_workers=int(opt.nThreads), drop_last=True)
127
-
128
- return dataset
129
-
130
-
131
- ######################################################################################
132
- # Basic image preprocess function
133
- ######################################################################################
134
- def _make_power_2(img, power, method=Image.BICUBIC):
135
- """resize the image to the size of log2(base) times"""
136
- ow, oh = img.size
137
- base = 2 ** power
138
- nw, nh = int(max(1, round(ow / base)) * base), int(max(1, round(oh / base)) * base)
139
- if nw == ow and nh == oh:
140
- return img
141
- return img.resize((nw, nh), method)
142
-
143
-
144
- def _random_zoom(img, target_width, method=Image.BICUBIC):
145
- """random resize the image scale"""
146
- zoom_level = np.random.uniform(0.8, 1.0, size=[2])
147
- ow, oh = img.size
148
- nw, nh = int(round(max(target_width, ow * zoom_level[0]))), int(round(max(target_width, oh * zoom_level[1])))
149
- return img.resize((nw, nh), method)
150
-
151
-
152
- def _scale_shortside(img, target_width, method=Image.BICUBIC):
153
- """resize the short side to the target width"""
154
- ow, oh = img.size
155
- shortsize = min(ow, oh)
156
- scale = target_width / shortsize
157
- return img.resize((round(ow * scale), round(oh * scale)), method)
158
-
159
-
160
- def _scale_longside(img, target_width, method=Image.BICUBIC):
161
- """resize the long side to the target width"""
162
- ow, oh = img.size
163
- longsize = max(ow, oh)
164
- scale = target_width / longsize
165
- return img.resize((round(ow * scale), round(oh * scale)), method)
166
-
167
-
168
- def _scale_randomside(img, target_width, method=Image.BICUBIC):
169
- """resize the side to the target width with random side"""
170
- if random.random() > 0.5:
171
- return _scale_shortside(img, target_width, method)
172
- else:
173
- return _scale_longside(img, target_width, method)
174
-
175
-
176
- def _crop(img, pos=None, size=None):
177
- """crop the image based on the given pos and size"""
178
- ow, oh = img.size
179
- if size is None:
180
- return img
181
- nw = min(ow, size)
182
- nh = min(oh, size)
183
- if (ow > nw or oh > nh):
184
- if pos is None:
185
- x1 = np.random.randint(0, int(ow-nw)+1)
186
- y1 = np.random.randint(0, int(oh-nh)+1)
187
- else:
188
- x1, y1 = pos
189
- return img.crop((x1, y1, x1 + nw, y1 + nh))
190
- return img
191
-
192
-
193
- def _pad(img):
194
- """expand the image to the square size"""
195
- ow, oh = img.size
196
- size = max(ow, oh)
197
- return ImageOps.pad(img, (size, size), centering=(0, 0))
198
-
199
-
200
- def _flip(img, flip):
201
- if flip:
202
- return img.transpose(Image.FLIP_LEFT_RIGHT)
203
- return img
204
-
205
-
206
- def get_transform(opt, params=None, method=Image.BICUBIC, convert=True, augment=False):
207
- """get the transform functions"""
208
- transforms_list = []
209
- if 'resize' in opt.preprocess:
210
- osize = [opt.load_size, opt.load_size]
211
- transforms_list.append(transforms.Resize(osize))
212
- elif 'scale_shortside' in opt.preprocess:
213
- transforms_list.append(transforms.Lambda(lambda img: _scale_shortside(img, opt.load_size, method)))
214
- elif 'scale_longside' in opt.preprocess:
215
- transforms_list.append(transforms.Lambda(lambda img: _scale_longside(img, opt.load_size, method)))
216
- elif "scale_randomside" in opt.preprocess:
217
- transforms_list.append(transforms.Lambda(lambda img: _scale_randomside(img, opt.load_size, method)))
218
-
219
- if 'zoom' in opt.preprocess:
220
- transforms_list.append(transforms.Lambda(lambda img: _random_zoom(img, opt.load_size, method)))
221
-
222
- if 'crop' in opt.preprocess and opt.isTrain:
223
- transforms_list.append(transforms.Lambda(lambda img: _crop(img, size=opt.fine_size)))
224
- if 'pad' in opt.preprocess:
225
- transforms_list.append(transforms.Lambda(lambda img: _pad(img))) # padding image to square
226
-
227
- transforms_list.append(transforms.Lambda(lambda img: _make_power_2(img, opt.data_powers, method)))
228
-
229
- if not opt.no_flip and opt.isTrain:
230
- transforms_list.append(transforms.RandomHorizontalFlip())
231
-
232
- if augment and opt.isTrain:
233
- transforms_list.append(transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2))
234
-
235
- if convert:
236
- transforms_list.append(transforms.ToTensor())
237
-
238
- return transforms.Compose(transforms_list)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/wrappers.py DELETED
@@ -1,180 +0,0 @@
1
- # Copyright (c) OpenMMLab. All rights reserved.
2
- r"""Modified from https://github.com/facebookresearch/detectron2/blob/master/detectron2/layers/wrappers.py # noqa: E501
3
-
4
- Wrap some nn modules to support empty tensor input. Currently, these wrappers
5
- are mainly used in mask heads like fcn_mask_head and maskiou_heads since mask
6
- heads are trained on only positive RoIs.
7
- """
8
- import math
9
-
10
- import torch
11
- import torch.nn as nn
12
- from torch.nn.modules.utils import _pair, _triple
13
-
14
- from .registry import CONV_LAYERS, UPSAMPLE_LAYERS
15
-
16
- if torch.__version__ == 'parrots':
17
- TORCH_VERSION = torch.__version__
18
- else:
19
- # torch.__version__ could be 1.3.1+cu92, we only need the first two
20
- # for comparison
21
- TORCH_VERSION = tuple(int(x) for x in torch.__version__.split('.')[:2])
22
-
23
-
24
- def obsolete_torch_version(torch_version, version_threshold):
25
- return torch_version == 'parrots' or torch_version <= version_threshold
26
-
27
-
28
- class NewEmptyTensorOp(torch.autograd.Function):
29
-
30
- @staticmethod
31
- def forward(ctx, x, new_shape):
32
- ctx.shape = x.shape
33
- return x.new_empty(new_shape)
34
-
35
- @staticmethod
36
- def backward(ctx, grad):
37
- shape = ctx.shape
38
- return NewEmptyTensorOp.apply(grad, shape), None
39
-
40
-
41
- @CONV_LAYERS.register_module('Conv', force=True)
42
- class Conv2d(nn.Conv2d):
43
-
44
- def forward(self, x):
45
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
46
- out_shape = [x.shape[0], self.out_channels]
47
- for i, k, p, s, d in zip(x.shape[-2:], self.kernel_size,
48
- self.padding, self.stride, self.dilation):
49
- o = (i + 2 * p - (d * (k - 1) + 1)) // s + 1
50
- out_shape.append(o)
51
- empty = NewEmptyTensorOp.apply(x, out_shape)
52
- if self.training:
53
- # produce dummy gradient to avoid DDP warning.
54
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0
55
- return empty + dummy
56
- else:
57
- return empty
58
-
59
- return super().forward(x)
60
-
61
-
62
- @CONV_LAYERS.register_module('Conv3d', force=True)
63
- class Conv3d(nn.Conv3d):
64
-
65
- def forward(self, x):
66
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
67
- out_shape = [x.shape[0], self.out_channels]
68
- for i, k, p, s, d in zip(x.shape[-3:], self.kernel_size,
69
- self.padding, self.stride, self.dilation):
70
- o = (i + 2 * p - (d * (k - 1) + 1)) // s + 1
71
- out_shape.append(o)
72
- empty = NewEmptyTensorOp.apply(x, out_shape)
73
- if self.training:
74
- # produce dummy gradient to avoid DDP warning.
75
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0
76
- return empty + dummy
77
- else:
78
- return empty
79
-
80
- return super().forward(x)
81
-
82
-
83
- @CONV_LAYERS.register_module()
84
- @CONV_LAYERS.register_module('deconv')
85
- @UPSAMPLE_LAYERS.register_module('deconv', force=True)
86
- class ConvTranspose2d(nn.ConvTranspose2d):
87
-
88
- def forward(self, x):
89
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
90
- out_shape = [x.shape[0], self.out_channels]
91
- for i, k, p, s, d, op in zip(x.shape[-2:], self.kernel_size,
92
- self.padding, self.stride,
93
- self.dilation, self.output_padding):
94
- out_shape.append((i - 1) * s - 2 * p + (d * (k - 1) + 1) + op)
95
- empty = NewEmptyTensorOp.apply(x, out_shape)
96
- if self.training:
97
- # produce dummy gradient to avoid DDP warning.
98
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0
99
- return empty + dummy
100
- else:
101
- return empty
102
-
103
- return super().forward(x)
104
-
105
-
106
- @CONV_LAYERS.register_module()
107
- @CONV_LAYERS.register_module('deconv3d')
108
- @UPSAMPLE_LAYERS.register_module('deconv3d', force=True)
109
- class ConvTranspose3d(nn.ConvTranspose3d):
110
-
111
- def forward(self, x):
112
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
113
- out_shape = [x.shape[0], self.out_channels]
114
- for i, k, p, s, d, op in zip(x.shape[-3:], self.kernel_size,
115
- self.padding, self.stride,
116
- self.dilation, self.output_padding):
117
- out_shape.append((i - 1) * s - 2 * p + (d * (k - 1) + 1) + op)
118
- empty = NewEmptyTensorOp.apply(x, out_shape)
119
- if self.training:
120
- # produce dummy gradient to avoid DDP warning.
121
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0
122
- return empty + dummy
123
- else:
124
- return empty
125
-
126
- return super().forward(x)
127
-
128
-
129
- class MaxPool2d(nn.MaxPool2d):
130
-
131
- def forward(self, x):
132
- # PyTorch 1.9 does not support empty tensor inference yet
133
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)):
134
- out_shape = list(x.shape[:2])
135
- for i, k, p, s, d in zip(x.shape[-2:], _pair(self.kernel_size),
136
- _pair(self.padding), _pair(self.stride),
137
- _pair(self.dilation)):
138
- o = (i + 2 * p - (d * (k - 1) + 1)) / s + 1
139
- o = math.ceil(o) if self.ceil_mode else math.floor(o)
140
- out_shape.append(o)
141
- empty = NewEmptyTensorOp.apply(x, out_shape)
142
- return empty
143
-
144
- return super().forward(x)
145
-
146
-
147
- class MaxPool3d(nn.MaxPool3d):
148
-
149
- def forward(self, x):
150
- # PyTorch 1.9 does not support empty tensor inference yet
151
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 9)):
152
- out_shape = list(x.shape[:2])
153
- for i, k, p, s, d in zip(x.shape[-3:], _triple(self.kernel_size),
154
- _triple(self.padding),
155
- _triple(self.stride),
156
- _triple(self.dilation)):
157
- o = (i + 2 * p - (d * (k - 1) + 1)) / s + 1
158
- o = math.ceil(o) if self.ceil_mode else math.floor(o)
159
- out_shape.append(o)
160
- empty = NewEmptyTensorOp.apply(x, out_shape)
161
- return empty
162
-
163
- return super().forward(x)
164
-
165
-
166
- class Linear(torch.nn.Linear):
167
-
168
- def forward(self, x):
169
- # empty tensor forward of Linear layer is supported in Pytorch 1.6
170
- if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 5)):
171
- out_shape = [x.shape[0], self.out_features]
172
- empty = NewEmptyTensorOp.apply(x, out_shape)
173
- if self.training:
174
- # produce dummy gradient to avoid DDP warning.
175
- dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0
176
- return empty + dummy
177
- else:
178
- return empty
179
-
180
- return super().forward(x)
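To see what the wrappers buy you, a hedged sketch with an empty batch (the "no positive RoIs" case mentioned in the module docstring). It assumes mmcv is installed and re-exports this `Conv2d` wrapper from `mmcv.cnn`.

```python
import torch
from mmcv.cnn import Conv2d   # the wrapper defined above (assumed re-export path)

conv = Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.zeros(0, 3, 16, 16)          # empty batch, e.g. a mask head with no positive RoIs
out = conv(x)
print(out.shape)                        # torch.Size([0, 8, 16, 16])
```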
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/__init__.py DELETED
@@ -1,3 +0,0 @@
1
- from .evaluation import * # noqa: F401, F403
2
- from .seg import * # noqa: F401, F403
3
- from .utils import * # noqa: F401, F403
 
 
 
 
spaces/Anonymous-sub/Rerender/ControlNet/ldm/models/diffusion/sampling_util.py DELETED
@@ -1,22 +0,0 @@
1
- import torch
2
- import numpy as np
3
-
4
-
5
- def append_dims(x, target_dims):
6
- """Appends dimensions to the end of a tensor until it has target_dims dimensions.
7
- From https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/utils.py"""
8
- dims_to_append = target_dims - x.ndim
9
- if dims_to_append < 0:
10
- raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less')
11
- return x[(...,) + (None,) * dims_to_append]
12
-
13
-
14
- def norm_thresholding(x0, value):
15
- s = append_dims(x0.pow(2).flatten(1).mean(1).sqrt().clamp(min=value), x0.ndim)
16
- return x0 * (value / s)
17
-
18
-
19
- def spatial_norm_thresholding(x0, value):
20
- # b c h w
21
- s = x0.pow(2).mean(1, keepdim=True).sqrt().clamp(min=value)
22
- return x0 * (value / s)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AquaSuisei/ChatGPTXE/modules/llama_func.py DELETED
@@ -1,137 +0,0 @@
1
- import os
2
- import logging
3
-
4
- from llama_index import download_loader
5
- from llama_index import (
6
- Document,
7
- LLMPredictor,
8
- PromptHelper,
9
- QuestionAnswerPrompt,
10
- RefinePrompt,
11
- )
12
- import colorama
13
- import PyPDF2
14
- from tqdm import tqdm
15
-
16
- from modules.presets import *
17
- from modules.utils import *
18
-
19
- def get_index_name(file_src):
20
- file_paths = [x.name for x in file_src]
21
- file_paths.sort(key=lambda x: os.path.basename(x))
22
-
23
- md5_hash = hashlib.md5()
24
- for file_path in file_paths:
25
- with open(file_path, "rb") as f:
26
- while chunk := f.read(8192):
27
- md5_hash.update(chunk)
28
-
29
- return md5_hash.hexdigest()
30
-
31
- def block_split(text):
32
- blocks = []
33
- while len(text) > 0:
34
- blocks.append(Document(text[:1000]))
35
- text = text[1000:]
36
- return blocks
37
-
38
- def get_documents(file_src):
39
- documents = []
40
- logging.debug("Loading documents...")
41
- logging.debug(f"file_src: {file_src}")
42
- for file in file_src:
43
- filepath = file.name
44
- filename = os.path.basename(filepath)
45
- file_type = os.path.splitext(filepath)[1]
46
- logging.info(f"loading file: {filename}")
47
- if file_type == ".pdf":
48
- logging.debug("Loading PDF...")
49
- try:
50
- from modules.pdf_func import parse_pdf
51
- from modules.config import advance_docs
52
- two_column = advance_docs["pdf"].get("two_column", False)
53
- pdftext = parse_pdf(filepath, two_column).text
54
- except:
55
- pdftext = ""
56
- with open(filepath, 'rb') as pdfFileObj:
57
- pdfReader = PyPDF2.PdfReader(pdfFileObj)
58
- for page in tqdm(pdfReader.pages):
59
- pdftext += page.extract_text()
60
- text_raw = pdftext
61
- elif file_type == ".docx":
62
- logging.debug("Loading Word...")
63
- DocxReader = download_loader("DocxReader")
64
- loader = DocxReader()
65
- text_raw = loader.load_data(file=filepath)[0].text
66
- elif file_type == ".epub":
67
- logging.debug("Loading EPUB...")
68
- EpubReader = download_loader("EpubReader")
69
- loader = EpubReader()
70
- text_raw = loader.load_data(file=filepath)[0].text
71
- elif file_type == ".xlsx":
72
- logging.debug("Loading Excel...")
73
- text_raw = excel_to_string(filepath)
74
- else:
75
- logging.debug("Loading text file...")
76
- with open(filepath, "r", encoding="utf-8") as f:
77
- text_raw = f.read()
78
- text = add_space(text_raw)
79
- # text = block_split(text)
80
- # documents += text
81
- documents += [Document(text)]
82
- logging.debug("Documents loaded.")
83
- return documents
84
-
85
-
86
- def construct_index(
87
- api_key,
88
- file_src,
89
- max_input_size=4096,
90
- num_outputs=5,
91
- max_chunk_overlap=20,
92
- chunk_size_limit=600,
93
- embedding_limit=None,
94
- separator=" "
95
- ):
96
- from langchain.chat_models import ChatOpenAI
97
- from llama_index import GPTSimpleVectorIndex, ServiceContext
98
-
99
- os.environ["OPENAI_API_KEY"] = api_key
100
- chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
101
- embedding_limit = None if embedding_limit == 0 else embedding_limit
102
- separator = " " if separator == "" else separator
103
-
104
- llm_predictor = LLMPredictor(
105
- llm=ChatOpenAI(model_name="gpt-3.5-turbo-0301", openai_api_key=api_key)
106
- )
107
- prompt_helper = PromptHelper(max_input_size = max_input_size, num_output = num_outputs, max_chunk_overlap = max_chunk_overlap, embedding_limit=embedding_limit, chunk_size_limit=600, separator=separator)
108
- index_name = get_index_name(file_src)
109
- if os.path.exists(f"./index/{index_name}.json"):
110
- logging.info("找到了缓存的索引文件,加载中……")
111
- return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json")
112
- else:
113
- try:
114
- documents = get_documents(file_src)
115
- logging.info("构建索引中……")
116
- with retrieve_proxy():
117
- service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper, chunk_size_limit=chunk_size_limit)
118
- index = GPTSimpleVectorIndex.from_documents(
119
- documents, service_context=service_context
120
- )
121
- logging.debug("索引构建完成!")
122
- os.makedirs("./index", exist_ok=True)
123
- index.save_to_disk(f"./index/{index_name}.json")
124
- logging.debug("索引已保存至本地!")
125
- return index
126
-
127
- except Exception as e:
128
- logging.error("索引构建失败!", e)
129
- print(e)
130
- return None
131
-
132
-
133
- def add_space(text):
134
- punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "}
135
- for cn_punc, en_punc in punctuations.items():
136
- text = text.replace(cn_punc, en_punc)
137
- return text
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/hooks.py DELETED
@@ -1,33 +0,0 @@
1
- """
2
- requests.hooks
3
- ~~~~~~~~~~~~~~
4
-
5
- This module provides the capabilities for the Requests hooks system.
6
-
7
- Available hooks:
8
-
9
- ``response``:
10
- The response generated from a Request.
11
- """
12
- HOOKS = ["response"]
13
-
14
-
15
- def default_hooks():
16
- return {event: [] for event in HOOKS}
17
-
18
-
19
- # TODO: response is the only one
20
-
21
-
22
- def dispatch_hook(key, hooks, hook_data, **kwargs):
23
- """Dispatches a hook dictionary on a given piece of data."""
24
- hooks = hooks or {}
25
- hooks = hooks.get(key)
26
- if hooks:
27
- if hasattr(hooks, "__call__"):
28
- hooks = [hooks]
29
- for hook in hooks:
30
- _hook_data = hook(hook_data, **kwargs)
31
- if _hook_data is not None:
32
- hook_data = _hook_data
33
- return hook_data
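Since this is the vendored copy of the Requests hooks module, a short sketch of the same mechanism through the public `requests` API may help (assuming `requests` is installed; the URL is just a placeholder).

```python
import requests

def log_status(response, *args, **kwargs):
    # a "response" hook receives the Response object after each request
    print(response.status_code, response.url)

requests.get("https://httpbin.org/get", hooks={"response": log_status})
```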
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/filepost.py DELETED
@@ -1,98 +0,0 @@
1
- from __future__ import absolute_import
2
-
3
- import binascii
4
- import codecs
5
- import os
6
- from io import BytesIO
7
-
8
- from .fields import RequestField
9
- from .packages import six
10
- from .packages.six import b
11
-
12
- writer = codecs.lookup("utf-8")[3]
13
-
14
-
15
- def choose_boundary():
16
- """
17
- Our embarrassingly-simple replacement for mimetools.choose_boundary.
18
- """
19
- boundary = binascii.hexlify(os.urandom(16))
20
- if not six.PY2:
21
- boundary = boundary.decode("ascii")
22
- return boundary
23
-
24
-
25
- def iter_field_objects(fields):
26
- """
27
- Iterate over fields.
28
-
29
- Supports list of (k, v) tuples and dicts, and lists of
30
- :class:`~urllib3.fields.RequestField`.
31
-
32
- """
33
- if isinstance(fields, dict):
34
- i = six.iteritems(fields)
35
- else:
36
- i = iter(fields)
37
-
38
- for field in i:
39
- if isinstance(field, RequestField):
40
- yield field
41
- else:
42
- yield RequestField.from_tuples(*field)
43
-
44
-
45
- def iter_fields(fields):
46
- """
47
- .. deprecated:: 1.6
48
-
49
- Iterate over fields.
50
-
51
- The addition of :class:`~urllib3.fields.RequestField` makes this function
52
- obsolete. Instead, use :func:`iter_field_objects`, which returns
53
- :class:`~urllib3.fields.RequestField` objects.
54
-
55
- Supports list of (k, v) tuples and dicts.
56
- """
57
- if isinstance(fields, dict):
58
- return ((k, v) for k, v in six.iteritems(fields))
59
-
60
- return ((k, v) for k, v in fields)
61
-
62
-
63
- def encode_multipart_formdata(fields, boundary=None):
64
- """
65
- Encode a dictionary of ``fields`` using the multipart/form-data MIME format.
66
-
67
- :param fields:
68
- Dictionary of fields or list of (key, :class:`~urllib3.fields.RequestField`).
69
-
70
- :param boundary:
71
- If not specified, then a random boundary will be generated using
72
- :func:`urllib3.filepost.choose_boundary`.
73
- """
74
- body = BytesIO()
75
- if boundary is None:
76
- boundary = choose_boundary()
77
-
78
- for field in iter_field_objects(fields):
79
- body.write(b("--%s\r\n" % (boundary)))
80
-
81
- writer(body).write(field.render_headers())
82
- data = field.data
83
-
84
- if isinstance(data, int):
85
- data = str(data) # Backwards compatibility
86
-
87
- if isinstance(data, six.text_type):
88
- writer(body).write(data)
89
- else:
90
- body.write(data)
91
-
92
- body.write(b"\r\n")
93
-
94
- body.write(b("--%s--\r\n" % (boundary)))
95
-
96
- content_type = str("multipart/form-data; boundary=%s" % boundary)
97
-
98
- return body.getvalue(), content_type
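Likewise, this is the vendored urllib3 `filepost` module; a minimal sketch through the public urllib3 import path (assuming urllib3 is installed) shows what the encoder returns.

```python
from urllib3.filepost import encode_multipart_formdata

fields = {
    "comment": "hello",
    "upload": ("hello.txt", b"file contents"),   # (filename, data) tuple becomes a file part
}
body, content_type = encode_multipart_formdata(fields)
print(content_type)                  # multipart/form-data; boundary=...
print(len(body), "bytes of encoded body")
```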
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/custom_roi_heads.py DELETED
@@ -1,185 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
- import numpy as np
3
- import json
4
- import math
5
- import torch
6
- from torch import nn
7
- from torch.autograd.function import Function
8
- from typing import Dict, List, Optional, Tuple, Union
9
-
10
- from detectron2.layers import ShapeSpec
11
- from detectron2.structures import Boxes, Instances, pairwise_iou
12
- from detectron2.utils.events import get_event_storage
13
-
14
- from detectron2.modeling.box_regression import Box2BoxTransform
15
- from detectron2.modeling.roi_heads.fast_rcnn import fast_rcnn_inference
16
- from detectron2.modeling.roi_heads.roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads
17
- from detectron2.modeling.roi_heads.cascade_rcnn import CascadeROIHeads
18
- from detectron2.modeling.roi_heads.box_head import build_box_head
19
- from .custom_fast_rcnn import CustomFastRCNNOutputLayers
20
-
21
-
22
- @ROI_HEADS_REGISTRY.register()
23
- class CustomROIHeads(StandardROIHeads):
24
- @classmethod
25
- def _init_box_head(self, cfg, input_shape):
26
- ret = super()._init_box_head(cfg, input_shape)
27
- del ret['box_predictor']
28
- ret['box_predictor'] = CustomFastRCNNOutputLayers(
29
- cfg, ret['box_head'].output_shape)
30
- self.debug = cfg.DEBUG
31
- if self.debug:
32
- self.debug_show_name = cfg.DEBUG_SHOW_NAME
33
- self.save_debug = cfg.SAVE_DEBUG
34
- self.vis_thresh = cfg.VIS_THRESH
35
- self.pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to(
36
- torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1)
37
- self.pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).to(
38
- torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1)
39
- return ret
40
-
41
-    def forward(self, images, features, proposals, targets=None):
-        """
-        enable debug
-        """
-        if not self.debug:
-            del images
-        if self.training:
-            assert targets
-            proposals = self.label_and_sample_proposals(proposals, targets)
-        del targets
-
-        if self.training:
-            losses = self._forward_box(features, proposals)
-            losses.update(self._forward_mask(features, proposals))
-            losses.update(self._forward_keypoint(features, proposals))
-            return proposals, losses
-        else:
-            pred_instances = self._forward_box(features, proposals)
-            pred_instances = self.forward_with_given_boxes(features, pred_instances)
-            if self.debug:
-                from ..debug import debug_second_stage
-                denormalizer = lambda x: x * self.pixel_std + self.pixel_mean
-                debug_second_stage(
-                    [denormalizer(images[0].clone())],
-                    pred_instances, proposals=proposals,
-                    debug_show_name=self.debug_show_name)
-            return pred_instances, {}
-
-
-@ROI_HEADS_REGISTRY.register()
-class CustomCascadeROIHeads(CascadeROIHeads):
-    @classmethod
-    def _init_box_head(self, cfg, input_shape):
-        self.mult_proposal_score = cfg.MODEL.ROI_BOX_HEAD.MULT_PROPOSAL_SCORE
-        ret = super()._init_box_head(cfg, input_shape)
-        del ret['box_predictors']
-        cascade_bbox_reg_weights = cfg.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS
-        box_predictors = []
-        for box_head, bbox_reg_weights in zip(ret['box_heads'], cascade_bbox_reg_weights):
-            box_predictors.append(
-                CustomFastRCNNOutputLayers(
-                    cfg, box_head.output_shape,
-                    box2box_transform=Box2BoxTransform(weights=bbox_reg_weights)
-                ))
-        ret['box_predictors'] = box_predictors
-        self.debug = cfg.DEBUG
-        if self.debug:
-            self.debug_show_name = cfg.DEBUG_SHOW_NAME
-            self.save_debug = cfg.SAVE_DEBUG
-            self.vis_thresh = cfg.VIS_THRESH
-            self.pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to(
-                torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1)
-            self.pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).to(
-                torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1)
-        return ret
-
-
-    def _forward_box(self, features, proposals, targets=None):
-        """
-        Add mult proposal scores at testing
-        """
-        if (not self.training) and self.mult_proposal_score:
-            if len(proposals) > 0 and proposals[0].has('scores'):
-                proposal_scores = [
-                    p.get('scores') for p in proposals]
-            else:
-                proposal_scores = [
-                    p.get('objectness_logits') for p in proposals]
-
-        features = [features[f] for f in self.box_in_features]
-        head_outputs = []  # (predictor, predictions, proposals)
-        prev_pred_boxes = None
-        image_sizes = [x.image_size for x in proposals]
-        for k in range(self.num_cascade_stages):
-            if k > 0:
-                proposals = self._create_proposals_from_boxes(prev_pred_boxes, image_sizes)
-                if self.training:
-                    proposals = self._match_and_label_boxes(proposals, k, targets)
-            predictions = self._run_stage(features, proposals, k)
-            prev_pred_boxes = self.box_predictor[k].predict_boxes(predictions, proposals)
-            head_outputs.append((self.box_predictor[k], predictions, proposals))
-
-        if self.training:
-            losses = {}
-            storage = get_event_storage()
-            for stage, (predictor, predictions, proposals) in enumerate(head_outputs):
-                with storage.name_scope("stage{}".format(stage)):
-                    stage_losses = predictor.losses(predictions, proposals)
-                losses.update({k + "_stage{}".format(stage): v for k, v in stage_losses.items()})
-            return losses
-        else:
-            # Each is a list[Tensor] of length #image. Each tensor is Ri x (K+1)
-            scores_per_stage = [h[0].predict_probs(h[1], h[2]) for h in head_outputs]
-            scores = [
-                sum(list(scores_per_image)) * (1.0 / self.num_cascade_stages)
-                for scores_per_image in zip(*scores_per_stage)
-            ]
-
-            if self.mult_proposal_score:
-                scores = [(s * ps[:, None]) ** 0.5
-                          for s, ps in zip(scores, proposal_scores)]
-
-            predictor, predictions, proposals = head_outputs[-1]
-            boxes = predictor.predict_boxes(predictions, proposals)
-            pred_instances, _ = fast_rcnn_inference(
-                boxes,
-                scores,
-                image_sizes,
-                predictor.test_score_thresh,
-                predictor.test_nms_thresh,
-                predictor.test_topk_per_image,
-            )
-
-            return pred_instances
-
-    def forward(self, images, features, proposals, targets=None):
-        '''
-        enable debug
-        '''
-        if not self.debug:
-            del images
-        if self.training:
-            proposals = self.label_and_sample_proposals(proposals, targets)
-
-        if self.training:
-            losses = self._forward_box(features, proposals, targets)
-            losses.update(self._forward_mask(features, proposals))
-            losses.update(self._forward_keypoint(features, proposals))
-            return proposals, losses
-        else:
-            # import pdb; pdb.set_trace()
-            pred_instances = self._forward_box(features, proposals)
-            pred_instances = self.forward_with_given_boxes(features, pred_instances)
-            if self.debug:
-                from ..debug import debug_second_stage
-                denormalizer = lambda x: x * self.pixel_std + self.pixel_mean
-                debug_second_stage(
-                    [denormalizer(x.clone()) for x in images],
-                    pred_instances, proposals=proposals,
-                    save_debug=self.save_debug,
-                    debug_show_name=self.debug_show_name,
-                    vis_thresh=self.vis_thresh)
-            return pred_instances, {}
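Note on the scoring trick in the deleted head above: when `mult_proposal_score` is enabled, the cascade head fuses the averaged class probabilities with the proposal (or objectness) scores by a geometric mean at inference time. A minimal PyTorch sketch of just that fusion step, with illustrative tensor shapes that are not part of the deleted module:

import torch

# Hypothetical example: 100 proposals, 80 classes plus background, one image.
cls_scores = torch.rand(100, 81)       # per-class probabilities, as returned by predict_probs
proposal_scores = torch.rand(100)      # per-proposal scores ('scores' or 'objectness_logits')

# Geometric mean of the two signals, mirroring the (s * ps[:, None]) ** 0.5 branch above.
fused = (cls_scores * proposal_scores[:, None]) ** 0.5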
spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/summarize/+server.ts DELETED
@@ -1,83 +0,0 @@
- import { PUBLIC_MAX_INPUT_TOKENS, PUBLIC_SEP_TOKEN } from "$env/static/public";
- import { buildPrompt } from "$lib/buildPrompt";
- import { collections } from "$lib/server/database.js";
- import { modelEndpoint } from "$lib/server/modelEndpoint.js";
- import { trimPrefix } from "$lib/utils/trimPrefix.js";
- import { trimSuffix } from "$lib/utils/trimSuffix.js";
- import { textGeneration } from "@huggingface/inference";
- import { error } from "@sveltejs/kit";
- import { ObjectId } from "mongodb";
-
- export async function POST({ params, locals, fetch }) {
-   const convId = new ObjectId(params.id);
-
-   const conversation = await collections.conversations.findOne({
-     _id: convId,
-     sessionId: locals.sessionId,
-   });
-
-   if (!conversation) {
-     throw error(404, "Conversation not found");
-   }
-
-   const firstMessage = conversation.messages.find((m) => m.from === "user");
-
-   const userPrompt =
-     `Please summarize the following message as a single sentence of less than 5 words:\n` +
-     firstMessage?.content;
-
-   const prompt = buildPrompt([{ from: "user", content: userPrompt }]);
-
-   const parameters = {
-     temperature: 0.9,
-     top_p: 0.95,
-     repetition_penalty: 1.2,
-     top_k: 50,
-     watermark: false,
-     max_new_tokens: 1024,
-     truncate: parseInt(PUBLIC_MAX_INPUT_TOKENS),
-     stop: [PUBLIC_SEP_TOKEN],
-     return_full_text: false,
-   };
-
-   const endpoint = modelEndpoint();
-   let { generated_text } = await textGeneration(
-     {
-       model: endpoint.endpoint,
-       inputs: prompt,
-       parameters,
-     },
-     {
-       fetch: (url, options) =>
-         fetch(url, {
-           ...options,
-           headers: { ...options?.headers, Authorization: endpoint.authorization },
-         }),
-     }
-   );
-
-   generated_text = trimSuffix(trimPrefix(generated_text, "<|startoftext|>"), PUBLIC_SEP_TOKEN);
-
-   if (generated_text) {
-     await collections.conversations.updateOne(
-       {
-         _id: convId,
-         sessionId: locals.sessionId,
-       },
-       {
-         $set: { title: generated_text },
-       }
-     );
-   }
-
-   return new Response(
-     JSON.stringify(
-       generated_text
-         ? {
-             title: generated_text,
-           }
-         : {}
-     ),
-     { headers: { "Content-Type": "application/json" } }
-   );
- }
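For orientation, the request this deleted route sends through `textGeneration` maps onto the Hugging Face text-generation HTTP API. A rough Python analogue of the same call, with a hypothetical endpoint URL and token standing in for whatever `modelEndpoint()` resolves at runtime:

import requests

endpoint = "https://api-inference.huggingface.co/models/some-model"  # hypothetical
headers = {"Authorization": "Bearer hf_xxx"}  # hypothetical token

payload = {
    "inputs": "Please summarize the following message as a single sentence of less than 5 words:\nHello there!",
    "parameters": {"temperature": 0.9, "top_p": 0.95, "max_new_tokens": 1024, "return_full_text": False},
}
generated = requests.post(endpoint, headers=headers, json=payload).json()[0]["generated_text"]
print(generated)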
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/formatters/latex.py DELETED
@@ -1,521 +0,0 @@
- """
-     pygments.formatters.latex
-     ~~~~~~~~~~~~~~~~~~~~~~~~~
-
-     Formatter for LaTeX fancyvrb output.
-
-     :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
-     :license: BSD, see LICENSE for details.
- """
-
- from io import StringIO
-
- from pip._vendor.pygments.formatter import Formatter
- from pip._vendor.pygments.lexer import Lexer, do_insertions
- from pip._vendor.pygments.token import Token, STANDARD_TYPES
- from pip._vendor.pygments.util import get_bool_opt, get_int_opt
-
-
- __all__ = ['LatexFormatter']
-
-
- def escape_tex(text, commandprefix):
-     return text.replace('\\', '\x00'). \
-         replace('{', '\x01'). \
-         replace('}', '\x02'). \
-         replace('\x00', r'\%sZbs{}' % commandprefix). \
-         replace('\x01', r'\%sZob{}' % commandprefix). \
-         replace('\x02', r'\%sZcb{}' % commandprefix). \
-         replace('^', r'\%sZca{}' % commandprefix). \
-         replace('_', r'\%sZus{}' % commandprefix). \
-         replace('&', r'\%sZam{}' % commandprefix). \
-         replace('<', r'\%sZlt{}' % commandprefix). \
-         replace('>', r'\%sZgt{}' % commandprefix). \
-         replace('#', r'\%sZsh{}' % commandprefix). \
-         replace('%', r'\%sZpc{}' % commandprefix). \
-         replace('$', r'\%sZdl{}' % commandprefix). \
-         replace('-', r'\%sZhy{}' % commandprefix). \
-         replace("'", r'\%sZsq{}' % commandprefix). \
-         replace('"', r'\%sZdq{}' % commandprefix). \
-         replace('~', r'\%sZti{}' % commandprefix)
-
-
- DOC_TEMPLATE = r'''
- \documentclass{%(docclass)s}
- \usepackage{fancyvrb}
- \usepackage{color}
- \usepackage[%(encoding)s]{inputenc}
- %(preamble)s
-
- %(styledefs)s
-
- \begin{document}
-
- \section*{%(title)s}
-
- %(code)s
- \end{document}
- '''
-
- ## Small explanation of the mess below :)
- #
- # The previous version of the LaTeX formatter just assigned a command to
- # each token type defined in the current style.  That obviously is
- # problematic if the highlighted code is produced for a different style
- # than the style commands themselves.
- #
- # This version works much like the HTML formatter which assigns multiple
- # CSS classes to each <span> tag, from the most specific to the least
- # specific token type, thus falling back to the parent token type if one
- # is not defined.  Here, the classes are there too and use the same short
- # forms given in token.STANDARD_TYPES.
- #
- # Highlighted code now only uses one custom command, which by default is
- # \PY and selectable by the commandprefix option (and in addition the
- # escapes \PYZat, \PYZlb and \PYZrb which haven't been renamed for
- # backwards compatibility purposes).
- #
- # \PY has two arguments: the classes, separated by +, and the text to
- # render in that style.  The classes are resolved into the respective
- # style commands by magic, which serves to ignore unknown classes.
- #
- # The magic macros are:
- # * \PY@it, \PY@bf, etc. are unconditionally wrapped around the text
- #   to render in \PY@do.  Their definition determines the style.
- # * \PY@reset resets \PY@it etc. to do nothing.
- # * \PY@toks parses the list of classes, using magic inspired by the
- #   keyval package (but modified to use plusses instead of commas
- #   because fancyvrb redefines commas inside its environments).
- # * \PY@tok processes one class, calling the \PY@tok@classname command
- #   if it exists.
- # * \PY@tok@classname sets the \PY@it etc. to reflect the chosen style
- #   for its class.
- # * \PY resets the style, parses the classnames and then calls \PY@do.
- #
- # Tip: to read this code, print it out in substituted form using e.g.
- # >>> print STYLE_TEMPLATE % {'cp': 'PY'}
-
- STYLE_TEMPLATE = r'''
- \makeatletter
- \def\%(cp)s@reset{\let\%(cp)s@it=\relax \let\%(cp)s@bf=\relax%%
-     \let\%(cp)s@ul=\relax \let\%(cp)s@tc=\relax%%
-     \let\%(cp)s@bc=\relax \let\%(cp)s@ff=\relax}
- \def\%(cp)s@tok#1{\csname %(cp)s@tok@#1\endcsname}
- \def\%(cp)s@toks#1+{\ifx\relax#1\empty\else%%
-     \%(cp)s@tok{#1}\expandafter\%(cp)s@toks\fi}
- \def\%(cp)s@do#1{\%(cp)s@bc{\%(cp)s@tc{\%(cp)s@ul{%%
-     \%(cp)s@it{\%(cp)s@bf{\%(cp)s@ff{#1}}}}}}}
- \def\%(cp)s#1#2{\%(cp)s@reset\%(cp)s@toks#1+\relax+\%(cp)s@do{#2}}
-
- %(styles)s
-
- \def\%(cp)sZbs{\char`\\}
- \def\%(cp)sZus{\char`\_}
- \def\%(cp)sZob{\char`\{}
- \def\%(cp)sZcb{\char`\}}
- \def\%(cp)sZca{\char`\^}
- \def\%(cp)sZam{\char`\&}
- \def\%(cp)sZlt{\char`\<}
- \def\%(cp)sZgt{\char`\>}
- \def\%(cp)sZsh{\char`\#}
- \def\%(cp)sZpc{\char`\%%}
- \def\%(cp)sZdl{\char`\$}
- \def\%(cp)sZhy{\char`\-}
- \def\%(cp)sZsq{\char`\'}
- \def\%(cp)sZdq{\char`\"}
- \def\%(cp)sZti{\char`\~}
- %% for compatibility with earlier versions
- \def\%(cp)sZat{@}
- \def\%(cp)sZlb{[}
- \def\%(cp)sZrb{]}
- \makeatother
- '''
-
-
- def _get_ttype_name(ttype):
-     fname = STANDARD_TYPES.get(ttype)
-     if fname:
-         return fname
-     aname = ''
-     while fname is None:
-         aname = ttype[-1] + aname
-         ttype = ttype.parent
-         fname = STANDARD_TYPES.get(ttype)
-     return fname + aname
-
-
- class LatexFormatter(Formatter):
-     r"""
-     Format tokens as LaTeX code. This needs the `fancyvrb` and `color`
-     standard packages.
-
-     Without the `full` option, code is formatted as one ``Verbatim``
-     environment, like this:
-
-     .. sourcecode:: latex
-
-         \begin{Verbatim}[commandchars=\\\{\}]
-         \PY{k}{def }\PY{n+nf}{foo}(\PY{n}{bar}):
-             \PY{k}{pass}
-         \end{Verbatim}
-
-     Wrapping can be disabled using the `nowrap` option.
-
-     The special command used here (``\PY``) and all the other macros it needs
-     are output by the `get_style_defs` method.
-
-     With the `full` option, a complete LaTeX document is output, including
-     the command definitions in the preamble.
-
-     The `get_style_defs()` method of a `LatexFormatter` returns a string
-     containing ``\def`` commands defining the macros needed inside the
-     ``Verbatim`` environments.
-
-     Additional options accepted:
-
-     `nowrap`
-         If set to ``True``, don't wrap the tokens at all, not even inside a
-         ``\begin{Verbatim}`` environment. This disables most other options
-         (default: ``False``).
-
-     `style`
-         The style to use, can be a string or a Style subclass (default:
-         ``'default'``).
-
-     `full`
-         Tells the formatter to output a "full" document, i.e. a complete
-         self-contained document (default: ``False``).
-
-     `title`
-         If `full` is true, the title that should be used to caption the
-         document (default: ``''``).
-
-     `docclass`
-         If the `full` option is enabled, this is the document class to use
-         (default: ``'article'``).
-
-     `preamble`
-         If the `full` option is enabled, this can be further preamble commands,
-         e.g. ``\usepackage`` (default: ``''``).
-
-     `linenos`
-         If set to ``True``, output line numbers (default: ``False``).
-
-     `linenostart`
-         The line number for the first line (default: ``1``).
-
-     `linenostep`
-         If set to a number n > 1, only every nth line number is printed.
-
-     `verboptions`
-         Additional options given to the Verbatim environment (see the *fancyvrb*
-         docs for possible values) (default: ``''``).
-
-     `commandprefix`
-         The LaTeX commands used to produce colored output are constructed
-         using this prefix and some letters (default: ``'PY'``).
-
-         .. versionadded:: 0.7
-         .. versionchanged:: 0.10
-            The default is now ``'PY'`` instead of ``'C'``.
-
-     `texcomments`
-         If set to ``True``, enables LaTeX comment lines.  That is, LaTex markup
-         in comment tokens is not escaped so that LaTeX can render it (default:
-         ``False``).
-
-         .. versionadded:: 1.2
-
-     `mathescape`
-         If set to ``True``, enables LaTeX math mode escape in comments.  That
-         is, ``'$...$'`` inside a comment will trigger math mode (default:
-         ``False``).
-
-         .. versionadded:: 1.2
-
-     `escapeinside`
-         If set to a string of length 2, enables escaping to LaTeX. Text
-         delimited by these 2 characters is read as LaTeX code and
-         typeset accordingly. It has no effect in string literals. It has
-         no effect in comments if `texcomments` or `mathescape` is
-         set. (default: ``''``).
-
-         .. versionadded:: 2.0
-
-     `envname`
-         Allows you to pick an alternative environment name replacing Verbatim.
-         The alternate environment still has to support Verbatim's option syntax.
-         (default: ``'Verbatim'``).
-
-         .. versionadded:: 2.0
-     """
-     name = 'LaTeX'
-     aliases = ['latex', 'tex']
-     filenames = ['*.tex']
-
-     def __init__(self, **options):
-         Formatter.__init__(self, **options)
-         self.nowrap = get_bool_opt(options, 'nowrap', False)
-         self.docclass = options.get('docclass', 'article')
-         self.preamble = options.get('preamble', '')
-         self.linenos = get_bool_opt(options, 'linenos', False)
-         self.linenostart = abs(get_int_opt(options, 'linenostart', 1))
-         self.linenostep = abs(get_int_opt(options, 'linenostep', 1))
-         self.verboptions = options.get('verboptions', '')
-         self.nobackground = get_bool_opt(options, 'nobackground', False)
-         self.commandprefix = options.get('commandprefix', 'PY')
-         self.texcomments = get_bool_opt(options, 'texcomments', False)
-         self.mathescape = get_bool_opt(options, 'mathescape', False)
-         self.escapeinside = options.get('escapeinside', '')
-         if len(self.escapeinside) == 2:
-             self.left = self.escapeinside[0]
-             self.right = self.escapeinside[1]
-         else:
-             self.escapeinside = ''
-         self.envname = options.get('envname', 'Verbatim')
-
-         self._create_stylesheet()
-
-     def _create_stylesheet(self):
-         t2n = self.ttype2name = {Token: ''}
-         c2d = self.cmd2def = {}
-         cp = self.commandprefix
-
-         def rgbcolor(col):
-             if col:
-                 return ','.join(['%.2f' % (int(col[i] + col[i + 1], 16) / 255.0)
-                                  for i in (0, 2, 4)])
-             else:
-                 return '1,1,1'
-
-         for ttype, ndef in self.style:
-             name = _get_ttype_name(ttype)
-             cmndef = ''
-             if ndef['bold']:
-                 cmndef += r'\let\$$@bf=\textbf'
-             if ndef['italic']:
-                 cmndef += r'\let\$$@it=\textit'
-             if ndef['underline']:
-                 cmndef += r'\let\$$@ul=\underline'
-             if ndef['roman']:
-                 cmndef += r'\let\$$@ff=\textrm'
-             if ndef['sans']:
-                 cmndef += r'\let\$$@ff=\textsf'
-             if ndef['mono']:
-                 cmndef += r'\let\$$@ff=\textsf'
-             if ndef['color']:
-                 cmndef += (r'\def\$$@tc##1{\textcolor[rgb]{%s}{##1}}' %
-                            rgbcolor(ndef['color']))
-             if ndef['border']:
-                 cmndef += (r'\def\$$@bc##1{{\setlength{\fboxsep}{\string -\fboxrule}'
-                            r'\fcolorbox[rgb]{%s}{%s}{\strut ##1}}}' %
-                            (rgbcolor(ndef['border']),
-                             rgbcolor(ndef['bgcolor'])))
-             elif ndef['bgcolor']:
-                 cmndef += (r'\def\$$@bc##1{{\setlength{\fboxsep}{0pt}'
-                            r'\colorbox[rgb]{%s}{\strut ##1}}}' %
-                            rgbcolor(ndef['bgcolor']))
-             if cmndef == '':
-                 continue
-             cmndef = cmndef.replace('$$', cp)
-             t2n[ttype] = name
-             c2d[name] = cmndef
-
-     def get_style_defs(self, arg=''):
-         """
-         Return the command sequences needed to define the commands
-         used to format text in the verbatim environment. ``arg`` is ignored.
-         """
-         cp = self.commandprefix
-         styles = []
-         for name, definition in self.cmd2def.items():
-             styles.append(r'\@namedef{%s@tok@%s}{%s}' % (cp, name, definition))
-         return STYLE_TEMPLATE % {'cp': self.commandprefix,
-                                  'styles': '\n'.join(styles)}
-
-     def format_unencoded(self, tokensource, outfile):
-         # TODO: add support for background colors
-         t2n = self.ttype2name
-         cp = self.commandprefix
-
-         if self.full:
-             realoutfile = outfile
-             outfile = StringIO()
-
-         if not self.nowrap:
-             outfile.write('\\begin{' + self.envname + '}[commandchars=\\\\\\{\\}')
-             if self.linenos:
-                 start, step = self.linenostart, self.linenostep
-                 outfile.write(',numbers=left' +
-                               (start and ',firstnumber=%d' % start or '') +
-                               (step and ',stepnumber=%d' % step or ''))
-             if self.mathescape or self.texcomments or self.escapeinside:
-                 outfile.write(',codes={\\catcode`\\$=3\\catcode`\\^=7'
-                               '\\catcode`\\_=8\\relax}')
-             if self.verboptions:
-                 outfile.write(',' + self.verboptions)
-             outfile.write(']\n')
-
-         for ttype, value in tokensource:
-             if ttype in Token.Comment:
-                 if self.texcomments:
-                     # Try to guess comment starting lexeme and escape it ...
-                     start = value[0:1]
-                     for i in range(1, len(value)):
-                         if start[0] != value[i]:
-                             break
-                         start += value[i]
-
-                     value = value[len(start):]
-                     start = escape_tex(start, cp)
-
-                     # ... but do not escape inside comment.
-                     value = start + value
-                 elif self.mathescape:
-                     # Only escape parts not inside a math environment.
-                     parts = value.split('$')
-                     in_math = False
-                     for i, part in enumerate(parts):
-                         if not in_math:
-                             parts[i] = escape_tex(part, cp)
-                         in_math = not in_math
-                     value = '$'.join(parts)
-                 elif self.escapeinside:
-                     text = value
-                     value = ''
-                     while text:
-                         a, sep1, text = text.partition(self.left)
-                         if sep1:
-                             b, sep2, text = text.partition(self.right)
-                             if sep2:
-                                 value += escape_tex(a, cp) + b
-                             else:
-                                 value += escape_tex(a + sep1 + b, cp)
-                         else:
-                             value += escape_tex(a, cp)
-                 else:
-                     value = escape_tex(value, cp)
-             elif ttype not in Token.Escape:
-                 value = escape_tex(value, cp)
-             styles = []
-             while ttype is not Token:
-                 try:
-                     styles.append(t2n[ttype])
-                 except KeyError:
-                     # not in current style
-                     styles.append(_get_ttype_name(ttype))
-                 ttype = ttype.parent
-             styleval = '+'.join(reversed(styles))
-             if styleval:
-                 spl = value.split('\n')
-                 for line in spl[:-1]:
-                     if line:
-                         outfile.write("\\%s{%s}{%s}" % (cp, styleval, line))
-                     outfile.write('\n')
-                 if spl[-1]:
-                     outfile.write("\\%s{%s}{%s}" % (cp, styleval, spl[-1]))
-             else:
-                 outfile.write(value)
-
-         if not self.nowrap:
-             outfile.write('\\end{' + self.envname + '}\n')
-
-         if self.full:
-             encoding = self.encoding or 'utf8'
-             # map known existings encodings from LaTeX distribution
-             encoding = {
-                 'utf_8': 'utf8',
-                 'latin_1': 'latin1',
-                 'iso_8859_1': 'latin1',
-             }.get(encoding.replace('-', '_'), encoding)
-             realoutfile.write(DOC_TEMPLATE %
-                 dict(docclass = self.docclass,
-                      preamble = self.preamble,
-                      title = self.title,
-                      encoding = encoding,
-                      styledefs = self.get_style_defs(),
-                      code = outfile.getvalue()))
-
-
- class LatexEmbeddedLexer(Lexer):
-     """
-     This lexer takes one lexer as argument, the lexer for the language
-     being formatted, and the left and right delimiters for escaped text.
-
-     First everything is scanned using the language lexer to obtain
-     strings and comments. All other consecutive tokens are merged and
-     the resulting text is scanned for escaped segments, which are given
-     the Token.Escape type. Finally text that is not escaped is scanned
-     again with the language lexer.
-     """
-     def __init__(self, left, right, lang, **options):
-         self.left = left
-         self.right = right
-         self.lang = lang
-         Lexer.__init__(self, **options)
-
-     def get_tokens_unprocessed(self, text):
-         # find and remove all the escape tokens (replace with an empty string)
-         # this is very similar to DelegatingLexer.get_tokens_unprocessed.
-         buffered = ''
-         insertions = []
-         insertion_buf = []
-         for i, t, v in self._find_safe_escape_tokens(text):
-             if t is None:
-                 if insertion_buf:
-                     insertions.append((len(buffered), insertion_buf))
-                     insertion_buf = []
-                 buffered += v
-             else:
-                 insertion_buf.append((i, t, v))
-         if insertion_buf:
-             insertions.append((len(buffered), insertion_buf))
-         return do_insertions(insertions,
-                              self.lang.get_tokens_unprocessed(buffered))
-
-     def _find_safe_escape_tokens(self, text):
-         """ find escape tokens that are not in strings or comments """
-         for i, t, v in self._filter_to(
-                 self.lang.get_tokens_unprocessed(text),
-                 lambda t: t in Token.Comment or t in Token.String
-         ):
-             if t is None:
-                 for i2, t2, v2 in self._find_escape_tokens(v):
-                     yield i + i2, t2, v2
-             else:
-                 yield i, None, v
-
-     def _filter_to(self, it, pred):
-         """ Keep only the tokens that match `pred`, merge the others together """
-         buf = ''
-         idx = 0
-         for i, t, v in it:
-             if pred(t):
-                 if buf:
-                     yield idx, None, buf
-                     buf = ''
-                 yield i, t, v
-             else:
-                 if not buf:
-                     idx = i
-                 buf += v
-         if buf:
-             yield idx, None, buf
-
-     def _find_escape_tokens(self, text):
-         """ Find escape tokens within text, give token=None otherwise """
-         index = 0
-         while text:
-             a, sep1, text = text.partition(self.left)
-             if a:
-                 yield index, None, a
-                 index += len(a)
-             if sep1:
-                 b, sep2, text = text.partition(self.right)
-                 if sep2:
-                     yield index + len(sep1), Token.Escape, b
-                     index += len(sep1) + len(b) + len(sep2)
-                 else:
-                     yield index, Token.Error, sep1
-                     index += len(sep1)
-                     text = b
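The file above is pip's vendored copy of the Pygments LaTeX formatter. A minimal sketch of how the same formatter is normally driven through the standalone pygments package (assuming Pygments is installed; the pip._vendor path is not meant to be imported directly):

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import LatexFormatter

code = "def foo(bar):\n    pass\n"

# full=True wraps the output in a complete LaTeX document; without it,
# get_style_defs() supplies the \PY macro definitions for your own preamble.
print(highlight(code, PythonLexer(), LatexFormatter(full=True, linenos=True)))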
spaces/BigSalmon/GPT2_Most_Probable/app.py DELETED
@@ -1,273 +0,0 @@
- import streamlit as st
- import numpy as np
- import pandas as pd
- import os
- import torch
- import torch.nn as nn
- from transformers.activations import get_activation
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
-
- first = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.\n\ninformal english: """
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- @st.cache(allow_output_mutation=True)
- def get_model():
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln86Paraphrase")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln86Paraphrase")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln82Paraphrase")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln82Paraphrase")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln79Paraphrase")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln79Paraphrase")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln74Paraphrase")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln74Paraphrase")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln72Paraphrase")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln72Paraphrase")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln64Paraphrase")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln64Paraphrase")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln60Paraphrase")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln60Paraphrase")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo1.3BInformalToFormal")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo1.3BInformalToFormal")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln55")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln55")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln51")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln51")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln45")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln49")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln43")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln43")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln41")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln41")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln38")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln38")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln37")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln37")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln36")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln36")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/MediumInformalToFormalLincoln")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/MediumInformalToFormalLincoln")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln35")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln35")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln31")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln31")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln21")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln21")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsOneSent")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsOneSent")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsToSentence")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsToSentence")
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln89Paraphrase")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln89Paraphrase")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/DefinitionsSynonyms1")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/DefinitionsSynonyms1")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln95Paraphrase")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln95Paraphrase")
-
-     tokenizer = AutoTokenizer.from_pretrained("BigSalmon/AbstractTest")
-     model = AutoModelForCausalLM.from_pretrained("BigSalmon/AbstractTest")
-
-     #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/DefinitionsSynonyms2")
-     #model = AutoModelForCausalLM.from_pretrained("BigSalmon/DefinitionsSynonyms2")
-     #tokenizer2 = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincolnMedium")
-     #model2 = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincolnMedium")
-     return model, tokenizer
-
- model, tokenizer = get_model()
-
- g = """informal english: garage band has made people who know nothing about music good at creating music.
- Translated into the Style of Abraham Lincoln: garage band ( offers the uninitiated in music the ability to produce professional-quality compositions / catapults those for whom music is an uncharted art the ability the realize masterpieces / stimulates music novice's competency to yield sublime arrangements / begets individuals of rudimentary musical talent the proficiency to fashion elaborate suites ).
- informal english: chrome extensions can make doing regular tasks much easier to get done.
- Translated into the Style of Abraham Lincoln: chrome extensions ( yield the boon of time-saving convenience / ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks / turbocharges the velocity with which one can conduct their obligations ).
- informal english: broadband is finally expanding to rural areas, a great development that will thrust them into modern life.
- Translated into the Style of Abraham Lincoln: broadband is ( ( finally / at last / after years of delay ) arriving in remote locations / springing to life in far-flung outposts / inching into even the most backwater corners of the nation ) that will leap-frog them into the twenty-first century.
- informal english: google translate has made talking to people who do not share your language easier.
- Translated into the Style of Abraham Lincoln: google translate ( imparts communicability to individuals whose native tongue differs / mitigates the trials of communication across linguistic barriers / hastens the bridging of semantic boundaries / mollifies the complexity of multilingual communication / avails itself to the internationalization of discussion / flexes its muscles to abet intercultural conversation / calms the tides of linguistic divergence ).
- informal english: corn fields are all across illinois, visible once you leave chicago.
- Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
- informal english: """
-
- number_of_outputs = st.sidebar.slider("Number of Outputs", 5, 100)
- log_nums = st.sidebar.slider("How Many Log Outputs?", 50, 1000)
-
- def BestProbs(prompt):
-     prompt = prompt.strip()
-     text = tokenizer.encode(prompt)
-     myinput, past_key_values = torch.tensor([text]), None
-     myinput = myinput
-     logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
-     logits = logits[0,-1]
-     probabilities = torch.nn.functional.softmax(logits)
-     best_logits, best_indices = logits.topk(10)
-     best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
-     for i in best_words[0:10]:
-         print("_______")
-         st.write(f"${i} $\n")
-         f = (f"${i} $\n")
-         m = (prompt + f"{i}")
-         BestProbs2(m)
-     return f
-
- def BestProbs2(prompt):
-     prompt = prompt.strip()
-     text = tokenizer.encode(prompt)
-     myinput, past_key_values = torch.tensor([text]), None
-     myinput = myinput
-     logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
-     logits = logits[0,-1]
-     probabilities = torch.nn.functional.softmax(logits)
-     best_logits, best_indices = logits.topk(20)
-     best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
-     for i in best_words[0:20]:
-         print(i)
-         st.write(i)
-
- def LogProbs(prompt):
-     col1 = []
-     col2 = []
-     prompt = prompt.strip()
-     text = tokenizer.encode(prompt)
-     myinput, past_key_values = torch.tensor([text]), None
-     myinput = myinput
-     logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
-     logits = logits[0,-1]
-     probabilities = torch.nn.functional.softmax(logits)
-     best_logits, best_indices = logits.topk(10)
-     best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
-     for i in best_words[0:10]:
-         print("_______")
-         f = i
-         col1.append(f)
-         m = (prompt + f"{i}")
-         #print("^^" + f + " ^^")
-         prompt = m.strip()
-         text = tokenizer.encode(prompt)
-         myinput, past_key_values = torch.tensor([text]), None
-         myinput = myinput
-         logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
-         logits = logits[0,-1]
-         probabilities = torch.nn.functional.softmax(logits)
-         best_logits, best_indices = logits.topk(20)
-         best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
-         for i in best_words[0:20]:
-             #print(i)
-             col2.append(i)
-     #print(col1)
-     #print(col2)
-     d = {col1[0]: [col2[0], col2[1], col2[2], col2[3], col2[4], col2[5], col2[6], col2[7], col2[8], col2[9], col2[10], col2[11], col2[12], col2[13], col2[14], col2[15], col2[16], col2[17], col2[18], col2[19]],
-          col1[1]: [col2[20], col2[21], col2[22], col2[23], col2[24], col2[25], col2[26], col2[27], col2[28], col2[29], col2[30], col2[31], col2[32], col2[33], col2[34], col2[35], col2[36], col2[37], col2[38], col2[39]],
-          col1[2]: [col2[40], col2[41], col2[42], col2[43], col2[44], col2[45], col2[46], col2[47], col2[48], col2[49], col2[50], col2[51], col2[52], col2[53], col2[54], col2[55], col2[56], col2[57], col2[58], col2[59]],
-          col1[3]: [col2[60], col2[61], col2[62], col2[63], col2[64], col2[65], col2[66], col2[67], col2[68], col2[69], col2[70], col2[71], col2[72], col2[73], col2[74], col2[75], col2[76], col2[77], col2[78], col2[79]],
-          col1[4]: [col2[80], col2[81], col2[82], col2[83], col2[84], col2[85], col2[86], col2[87], col2[88], col2[89], col2[90], col2[91], col2[92], col2[93], col2[94], col2[95], col2[96], col2[97], col2[98], col2[99]],
-          col1[5]: [col2[100], col2[101], col2[102], col2[103], col2[104], col2[105], col2[106], col2[107], col2[108], col2[109], col2[110], col2[111], col2[112], col2[113], col2[114], col2[115], col2[116], col2[117], col2[118], col2[119]],
-          col1[6]: [col2[120], col2[121], col2[122], col2[123], col2[124], col2[125], col2[126], col2[127], col2[128], col2[129], col2[130], col2[131], col2[132], col2[133], col2[134], col2[135], col2[136], col2[137], col2[138], col2[139]],
-          col1[7]: [col2[140], col2[141], col2[142], col2[143], col2[144], col2[145], col2[146], col2[147], col2[148], col2[149], col2[150], col2[151], col2[152], col2[153], col2[154], col2[155], col2[156], col2[157], col2[158], col2[159]],
-          col1[8]: [col2[160], col2[161], col2[162], col2[163], col2[164], col2[165], col2[166], col2[167], col2[168], col2[169], col2[170], col2[171], col2[172], col2[173], col2[174], col2[175], col2[176], col2[177], col2[178], col2[179]],
-          col1[9]: [col2[180], col2[181], col2[182], col2[183], col2[184], col2[185], col2[186], col2[187], col2[188], col2[189], col2[190], col2[191], col2[192], col2[193], col2[194], col2[195], col2[196], col2[197], col2[198], col2[199]]}
-     df = pd.DataFrame(data=d)
-     print(df)
-     st.write(df)
-     return df
-
- def BestProbs5(prompt):
-     prompt = prompt.strip()
-     text = tokenizer.encode(prompt)
-     myinput, past_key_values = torch.tensor([text]), None
-     myinput = myinput
-     logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
-     logits = logits[0,-1]
-     probabilities = torch.nn.functional.softmax(logits)
-     best_logits, best_indices = logits.topk(number_of_outputs)
-     best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
-     for i in best_words[0:number_of_outputs]:
-         #print(i)
-         print("\n")
-         g = (prompt + i)
-         st.write(g)
-         l = run_generate(g, "hey")
-         st.write(l)
-
- def run_generate(text, bad_words):
-     yo = []
-     input_ids = tokenizer.encode(text, return_tensors='pt')
-     res = len(tokenizer.encode(text))
-     bad_words = bad_words.split()
-     bad_word_ids = [[7829], [40940]]
-     for bad_word in bad_words:
-         bad_word = " " + bad_word
-         ids = tokenizer(bad_word).input_ids
-         bad_word_ids.append(ids)
-     sample_outputs = model.generate(
-         input_ids,
-         do_sample=True,
-         max_length= res + 5,
-         min_length = res + 5,
-         top_k=50,
-         temperature=1.0,
-         num_return_sequences=3,
-         bad_words_ids=bad_word_ids
-     )
-     for i in range(3):
-         e = tokenizer.decode(sample_outputs[i])
-         e = e.replace(text, "")
-         yo.append(e)
-     print(yo)
-     return yo
-
- with st.form(key='my_form'):
-     prompt = st.text_area(label='Enter sentence', value=g, height=500)
-     submit_button = st.form_submit_button(label='Submit')
-     submit_button2 = st.form_submit_button(label='Fast Forward')
-     submit_button3 = st.form_submit_button(label='Fast Forward 2.0')
-     submit_button4 = st.form_submit_button(label='Get Top')
-
-     if submit_button:
-         with torch.no_grad():
-             text = tokenizer.encode(prompt)
-             myinput, past_key_values = torch.tensor([text]), None
-             myinput = myinput
-             myinput= myinput.to(device)
-             logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
-             logits = logits[0,-1]
-             probabilities = torch.nn.functional.softmax(logits)
-             best_logits, best_indices = logits.topk(log_nums)
-             best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
-             text.append(best_indices[0].item())
-             best_probabilities = probabilities[best_indices].tolist()
-             words = []
-             st.write(best_words)
-     if submit_button2:
-         print("----")
-         st.write("___")
-         m = LogProbs(prompt)
-         st.write("___")
-         st.write(m)
-         st.write("___")
-     if submit_button3:
-         print("----")
-         st.write("___")
-         st.write(BestProbs)
-     if submit_button4:
-         BestProbs5(prompt)
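The app above is built around one pattern: encode the prompt, take the logits at the final position, and rank the most probable next tokens. A condensed sketch of that pattern with Hugging Face transformers, using gpt2 as a stand-in checkpoint since the BigSalmon model IDs listed above may move or change:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in for the checkpoints above
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "informal english: "
input_ids = torch.tensor([tokenizer.encode(prompt)])
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # logits for the next token only

probabilities = torch.nn.functional.softmax(logits, dim=-1)
best_probs, best_indices = probabilities.topk(10)
print([tokenizer.decode([idx.item()]) for idx in best_indices])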
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/datasets/pascal_voc.py DELETED
@@ -1,80 +0,0 @@
- # -*- coding: utf-8 -*-
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
- import numpy as np
- import os
- import xml.etree.ElementTree as ET
- from fvcore.common.file_io import PathManager
-
- from detectron2.data import DatasetCatalog, MetadataCatalog
- from detectron2.structures import BoxMode
-
- __all__ = ["register_pascal_voc"]
-
-
- # fmt: off
- CLASS_NAMES = [
-     "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat",
-     "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person",
-     "pottedplant", "sheep", "sofa", "train", "tvmonitor",
- ]
- # fmt: on
-
-
- def load_voc_instances(dirname: str, split: str):
-     """
-     Load Pascal VOC detection annotations to Detectron2 format.
-
-     Args:
-         dirname: Contain "Annotations", "ImageSets", "JPEGImages"
-         split (str): one of "train", "test", "val", "trainval"
-     """
-     with PathManager.open(os.path.join(dirname, "ImageSets", "Main", split + ".txt")) as f:
-         fileids = np.loadtxt(f, dtype=np.str)
-
-     # Needs to read many small annotation files. Makes sense at local
-     annotation_dirname = PathManager.get_local_path(os.path.join(dirname, "Annotations/"))
-     dicts = []
-     for fileid in fileids:
-         anno_file = os.path.join(annotation_dirname, fileid + ".xml")
-         jpeg_file = os.path.join(dirname, "JPEGImages", fileid + ".jpg")
-
-         with PathManager.open(anno_file) as f:
-             tree = ET.parse(f)
-
-         r = {
-             "file_name": jpeg_file,
-             "image_id": fileid,
-             "height": int(tree.findall("./size/height")[0].text),
-             "width": int(tree.findall("./size/width")[0].text),
-         }
-         instances = []
-
-         for obj in tree.findall("object"):
-             cls = obj.find("name").text
-             # We include "difficult" samples in training.
-             # Based on limited experiments, they don't hurt accuracy.
-             # difficult = int(obj.find("difficult").text)
-             # if difficult == 1:
-             # continue
-             bbox = obj.find("bndbox")
-             bbox = [float(bbox.find(x).text) for x in ["xmin", "ymin", "xmax", "ymax"]]
-             # Original annotations are integers in the range [1, W or H]
-             # Assuming they mean 1-based pixel indices (inclusive),
-             # a box with annotation (xmin=1, xmax=W) covers the whole image.
-             # In coordinate space this is represented by (xmin=0, xmax=W)
-             bbox[0] -= 1.0
-             bbox[1] -= 1.0
-             instances.append(
-                 {"category_id": CLASS_NAMES.index(cls), "bbox": bbox, "bbox_mode": BoxMode.XYXY_ABS}
-             )
-         r["annotations"] = instances
-         dicts.append(r)
-     return dicts
-
-
- def register_pascal_voc(name, dirname, split, year):
-     DatasetCatalog.register(name, lambda: load_voc_instances(dirname, split))
-     MetadataCatalog.get(name).set(
-         thing_classes=CLASS_NAMES, dirname=dirname, year=year, split=split
-     )
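For context, this loader is normally wired up once per split and then consumed through detectron2's catalogs. A minimal usage sketch, assuming a hypothetical local checkout at datasets/VOC2007 and an illustrative dataset name:

from detectron2.data import DatasetCatalog, MetadataCatalog

# register_pascal_voc is the function defined in the module above; name and path are illustrative.
register_pascal_voc("my_voc_2007_trainval", "datasets/VOC2007", "trainval", 2007)

dicts = DatasetCatalog.get("my_voc_2007_trainval")
print(len(dicts), MetadataCatalog.get("my_voc_2007_trainval").thing_classes[:3])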
spaces/CVPR/LIVE/thrust/thrust/detail/malloc_and_free.h DELETED
@@ -1,85 +0,0 @@
- /*
-  *  Copyright 2008-2013 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/detail/execution_policy.h>
- #include <thrust/detail/pointer.h>
- #include <thrust/detail/raw_pointer_cast.h>
- #include <thrust/system/detail/generic/memory.h>
- #include <thrust/system/detail/adl/malloc_and_free.h>
-
- namespace thrust
- {
-
- __thrust_exec_check_disable__
- template<typename DerivedPolicy>
- __host__ __device__
- pointer<void,DerivedPolicy> malloc(const thrust::detail::execution_policy_base<DerivedPolicy> &exec, std::size_t n)
- {
-   using thrust::system::detail::generic::malloc;
-
-   // XXX should use a hypothetical thrust::static_pointer_cast here
-   void *raw_ptr = static_cast<void*>(thrust::raw_pointer_cast(malloc(thrust::detail::derived_cast(thrust::detail::strip_const(exec)), n)));
-
-   return pointer<void,DerivedPolicy>(raw_ptr);
- }
-
- __thrust_exec_check_disable__
- template<typename T, typename DerivedPolicy>
- __host__ __device__
- pointer<T,DerivedPolicy> malloc(const thrust::detail::execution_policy_base<DerivedPolicy> &exec, std::size_t n)
- {
-   using thrust::system::detail::generic::malloc;
-
-   T *raw_ptr = static_cast<T*>(thrust::raw_pointer_cast(malloc<T>(thrust::detail::derived_cast(thrust::detail::strip_const(exec)), n)));
-
-   return pointer<T,DerivedPolicy>(raw_ptr);
- }
-
-
- // XXX WAR nvbug 992955
- #if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
- #if CUDART_VERSION < 5000
-
- // cudafe generates unqualified calls to free(int *volatile)
- // which get confused with thrust::free
- // spoof a thrust::free which simply maps to ::free
- inline __host__ __device__
- void free(int *volatile ptr)
- {
-   ::free(ptr);
- }
-
- #endif // CUDART_VERSION
- #endif // THRUST_DEVICE_COMPILER
-
- __thrust_exec_check_disable__
- template<typename DerivedPolicy, typename Pointer>
- __host__ __device__
- void free(const thrust::detail::execution_policy_base<DerivedPolicy> &exec, Pointer ptr)
- {
-   using thrust::system::detail::generic::free;
-
-   free(thrust::detail::derived_cast(thrust::detail::strip_const(exec)), ptr);
- }
-
- // XXX consider another form of free which does not take a system argument and
- //     instead infers the system from the pointer
-
- } // end namespace thrust
spaces/CVPR/LIVE/thrust/thrust/iterator/reverse_iterator.h DELETED
@@ -1,238 +0,0 @@
- /*
-  *  Copyright 2008-2013 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
-
-
- /*! \file thrust/iterator/reverse_iterator.h
-  *  \brief An iterator adaptor which adapts another iterator to traverse backwards
-  */
-
- /*
-  * (C) Copyright David Abrahams 2002.
-  * (C) Copyright Jeremy Siek 2002.
-  * (C) Copyright Thomas Witt 2002.
-  *
-  * Distributed under the Boost Software License, Version 1.0.
-  * (See accompanying NOTICE file for the complete license)
-  *
-  * For more information, see http://www.boost.org
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/detail/type_traits.h>
- #include <thrust/iterator/detail/reverse_iterator_base.h>
- #include <thrust/iterator/iterator_facade.h>
-
- namespace thrust
- {
-
- /*! \addtogroup iterators
-  *  \{
-  */
-
- /*! \addtogroup fancyiterator Fancy Iterators
-  *  \ingroup iterators
-  *  \{
-  */
-
- /*! \p reverse_iterator is an iterator which represents a pointer into a
-  *  reversed view of a given range. In this way, \p reverse_iterator allows
-  *  backwards iteration through a bidirectional input range.
-  *
-  *  It is important to note that although \p reverse_iterator is constructed
-  *  from a given iterator, it points to the element preceding it. In this way,
-  *  the past-the-end \p reverse_iterator of a given range points to the element
-  *  preceding the first element of the input range. By the same token, the first
-  *  \p reverse_iterator of a given range is constructed from a past-the-end iterator
-  *  of the original range yet points to the last element of the input.
-  *
-  *  The following code snippet demonstrates how to create a \p reverse_iterator
-  *  which represents a reversed view of the contents of a \p device_vector.
-  *
-  *  \code
-  *  #include <thrust/iterator/reverse_iterator.h>
-  *  #include <thrust/device_vector.h>
-  *  ...
-  *  thrust::device_vector<float> v(4);
-  *  v[0] = 0.0f;
-  *  v[1] = 1.0f;
-  *  v[2] = 2.0f;
-  *  v[3] = 3.0f;
-  *
-  *  typedef thrust::device_vector<float>::iterator Iterator;
-  *
-  *  // note that we point the iterator to the *end* of the device_vector
-  *  thrust::reverse_iterator<Iterator> iter(values.end());
-  *
-  *  *iter;   // returns 3.0f;
-  *  iter[0]; // returns 3.0f;
-  *  iter[1]; // returns 2.0f;
-  *  iter[2]; // returns 1.0f;
-  *  iter[3]; // returns 0.0f;
-  *
-  *  // iter[4] is an out-of-bounds error
-  *  \endcode
-  *
-  *  Since reversing a range is a common operation, containers like \p device_vector
-  *  have nested typedefs for declaration shorthand and methods for constructing
-  *  reverse_iterators. The following code snippet is equivalent to the previous:
-  *
-  *  \code
-  *  #include <thrust/device_vector.h>
-  *  ...
-  *  thrust::device_vector<float> v(4);
-  *  v[0] = 0.0f;
-  *  v[1] = 1.0f;
-  *  v[2] = 2.0f;
-  *  v[3] = 3.0f;
-  *
-  *  // we use the nested type reverse_iterator to refer to a reversed view of
-  *  // a device_vector and the method rbegin() to create a reverse_iterator pointing
-  *  // to the beginning of the reversed device_vector
-  *  thrust::device_iterator<float>::reverse_iterator iter = values.rbegin();
-  *
-  *  *iter;   // returns 3.0f;
-  *  iter[0]; // returns 3.0f;
-  *  iter[1]; // returns 2.0f;
-  *  iter[2]; // returns 1.0f;
-  *  iter[3]; // returns 0.0f;
-  *
-  *  // iter[4] is an out-of-bounds error
-  *
-  *  // similarly, rend() points to the end of the reversed sequence:
-  *  assert(values.rend() == (iter + 4));
-  *  \endcode
-  *
-  *  Finally, the following code snippet demonstrates how to use reverse_iterator to
-  *  perform a reversed prefix sum operation on the contents of a device_vector:
-  *
-  *  \code
-  *  #include <thrust/device_vector.h>
-  *  #include <thrust/scan.h>
-  *  ...
-  *  thrust::device_vector<int> v(5);
-  *  v[0] = 0;
-  *  v[1] = 1;
-  *  v[2] = 2;
-  *  v[3] = 3;
-  *  v[4] = 4;
-  *
-  *  thrust::device_vector<int> result(5);
-  *
-  *  // exclusive scan v into result in reverse
-  *  thrust::exclusive_scan(v.rbegin(), v.rend(), result.begin());
-  *
-  *  // result is now {0, 4, 7, 9, 10}
-  *  \endcode
-  *
-  *  \see make_reverse_iterator
-  */
- template<typename BidirectionalIterator>
-   class reverse_iterator
-     : public detail::reverse_iterator_base<BidirectionalIterator>::type
- {
-   /*! \cond
-    */
-   private:
-     typedef typename thrust::detail::reverse_iterator_base<
-       BidirectionalIterator
-     >::type super_t;
-
-     friend class thrust::iterator_core_access;
-   /*! \endcond
-    */
-
-   public:
-     /*! Default constructor does nothing.
-      */
-     __host__ __device__
-     reverse_iterator() {}
-
-     /*! \p Constructor accepts a \c BidirectionalIterator pointing to a range
-      *  for this \p reverse_iterator to reverse.
-      *
-      *  \param x A \c BidirectionalIterator pointing to a range to reverse.
-      */
-     __host__ __device__
-     explicit reverse_iterator(BidirectionalIterator x);
-
-     /*! \p Copy constructor allows construction from a related compatible
-      *  \p reverse_iterator.
-      *
-      *  \param r A \p reverse_iterator to copy from.
-      */
-     template<typename OtherBidirectionalIterator>
-     __host__ __device__
-     reverse_iterator(reverse_iterator<OtherBidirectionalIterator> const &r
-                      // XXX msvc screws this up
-                      // XXX remove these guards when we have static_assert
- #if THRUST_HOST_COMPILER != THRUST_HOST_COMPILER_MSVC
-                      , typename thrust::detail::enable_if<
-                          thrust::detail::is_convertible<
-                            OtherBidirectionalIterator,
-                            BidirectionalIterator
-                          >::value
-                        >::type * = 0
- #endif // MSVC
-                     );
-
-   /*! \cond
-    */
-   private:
-     __thrust_exec_check_disable__
-     __host__ __device__
-     typename super_t::reference dereference() const;
-
-     __host__ __device__
-     void increment();
-
-     __host__ __device__
-     void decrement();
-
-     __host__ __device__
-     void advance(typename super_t::difference_type n);
-
-     template<typename OtherBidirectionalIterator>
-     __host__ __device__
-     typename super_t::difference_type
-     distance_to(reverse_iterator<OtherBidirectionalIterator> const &y) const;
-   /*! \endcond
-    */
- }; // end reverse_iterator
-
-
- /*! \p make_reverse_iterator creates a \p reverse_iterator
-  *  from a \c BidirectionalIterator pointing to a range of elements to reverse.
-  *
-  *  \param x A \c BidirectionalIterator pointing to a range to reverse.
-  *  \return A new \p reverse_iterator which reverses the range \p x.
-  */
- template<typename BidirectionalIterator>
- __host__ __device__
- reverse_iterator<BidirectionalIterator> make_reverse_iterator(BidirectionalIterator x);
-
-
- /*! \} // end fancyiterators
-  */
-
- /*! \} // end iterators
-  */
-
- } // end thrust
-
- #include <thrust/iterator/detail/reverse_iterator.inl>
spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/reduce.h DELETED
@@ -1,59 +0,0 @@
- /*
-  *  Copyright 2008-2013 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
-
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/system/detail/generic/tag.h>
- #include <thrust/iterator/iterator_traits.h>
-
- namespace thrust
- {
- namespace system
- {
- namespace detail
- {
- namespace generic
- {
-
-
- template<typename DerivedPolicy, typename InputIterator>
- __host__ __device__
- typename thrust::iterator_traits<InputIterator>::value_type
- reduce(thrust::execution_policy<DerivedPolicy> &exec, InputIterator first, InputIterator last);
-
-
- template<typename DerivedPolicy, typename InputIterator, typename T>
- __host__ __device__
- T reduce(thrust::execution_policy<DerivedPolicy> &exec, InputIterator first, InputIterator last, T init);
-
-
- template<typename DerivedPolicy,
-          typename InputIterator,
-          typename T,
-          typename BinaryFunction>
- __host__ __device__
- T reduce(thrust::execution_policy<DerivedPolicy> &exec, InputIterator first, InputIterator last, T init, BinaryFunction binary_op);
-
-
- } // end namespace generic
- } // end namespace detail
- } // end namespace system
- } // end namespace thrust
-
- #include <thrust/system/detail/generic/reduce.inl>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/binary_search.h DELETED
@@ -1,157 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
-
18
- /*! \file binary_search.h
19
- * \brief Sequential implementation of binary search algorithms.
20
- */
21
-
22
- #pragma once
23
-
24
- #include <thrust/advance.h>
25
- #include <thrust/distance.h>
26
- #include <thrust/iterator/iterator_traits.h>
27
- #include <thrust/detail/function.h>
28
-
29
- namespace thrust
30
- {
31
- namespace system
32
- {
33
- namespace detail
34
- {
35
- namespace sequential
36
- {
37
-
38
-
39
- __thrust_exec_check_disable__
40
- template<typename DerivedPolicy,
41
- typename ForwardIterator,
42
- typename T,
43
- typename StrictWeakOrdering>
44
- __host__ __device__
45
- ForwardIterator lower_bound(sequential::execution_policy<DerivedPolicy> &,
46
- ForwardIterator first,
47
- ForwardIterator last,
48
- const T& val,
49
- StrictWeakOrdering comp)
50
- {
51
- // wrap comp
52
- thrust::detail::wrapped_function<
53
- StrictWeakOrdering,
54
- bool
55
- > wrapped_comp(comp);
56
-
57
- typedef typename thrust::iterator_difference<ForwardIterator>::type difference_type;
58
-
59
- difference_type len = thrust::distance(first, last);
60
-
61
- while(len > 0)
62
- {
63
- difference_type half = len >> 1;
64
- ForwardIterator middle = first;
65
-
66
- thrust::advance(middle, half);
67
-
68
- if(wrapped_comp(*middle, val))
69
- {
70
- first = middle;
71
- ++first;
72
- len = len - half - 1;
73
- }
74
- else
75
- {
76
- len = half;
77
- }
78
- }
79
-
80
- return first;
81
- }
82
-
83
-
84
- __thrust_exec_check_disable__
85
- template<typename DerivedPolicy,
86
- typename ForwardIterator,
87
- typename T,
88
- typename StrictWeakOrdering>
89
- __host__ __device__
90
- ForwardIterator upper_bound(sequential::execution_policy<DerivedPolicy> &,
91
- ForwardIterator first,
92
- ForwardIterator last,
93
- const T& val,
94
- StrictWeakOrdering comp)
95
- {
96
- // wrap comp
97
- thrust::detail::wrapped_function<
98
- StrictWeakOrdering,
99
- bool
100
- > wrapped_comp(comp);
101
-
102
- typedef typename thrust::iterator_difference<ForwardIterator>::type difference_type;
103
-
104
- difference_type len = thrust::distance(first, last);
105
-
106
- while(len > 0)
107
- {
108
- difference_type half = len >> 1;
109
- ForwardIterator middle = first;
110
-
111
- thrust::advance(middle, half);
112
-
113
- if(wrapped_comp(val, *middle))
114
- {
115
- len = half;
116
- }
117
- else
118
- {
119
- first = middle;
120
- ++first;
121
- len = len - half - 1;
122
- }
123
- }
124
-
125
- return first;
126
- }
127
-
128
-
129
- __thrust_exec_check_disable__
130
- template<typename DerivedPolicy,
131
- typename ForwardIterator,
132
- typename T,
133
- typename StrictWeakOrdering>
134
- __host__ __device__
135
- bool binary_search(sequential::execution_policy<DerivedPolicy> &exec,
136
- ForwardIterator first,
137
- ForwardIterator last,
138
- const T& val,
139
- StrictWeakOrdering comp)
140
- {
141
- ForwardIterator iter = sequential::lower_bound(exec, first, last, val, comp);
142
-
143
- // wrap comp
144
- thrust::detail::wrapped_function<
145
- StrictWeakOrdering,
146
- bool
147
- > wrapped_comp(comp);
148
-
149
- return iter != last && !wrapped_comp(val,*iter);
150
- }
151
-
152
-
153
- } // end namespace sequential
154
- } // end namespace detail
155
- } // end namespace system
156
- } // end namespace thrust
157
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/equal.h DELETED
@@ -1,22 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
- #pragma once
18
-
19
- #include <thrust/detail/config.h>
20
-
21
- // this system has no special equal functions
22
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CVPR/ml-talking-face/translator/module.py DELETED
@@ -1,59 +0,0 @@
1
- from .v3 import GoogleAuthTranslation
2
- from pathlib import Path
3
- import yaml
4
- import os
5
-
6
- MAX_ENG_TEXT_LENGTH = int(os.getenv('MAX_ENG_TEXT_LENGTH', 200))
7
- MAX_CJK_TEXT_LENGTH = int(os.getenv('MAX_CJK_TEXT_LENGTH', 100))
8
-
9
- class Translator:
10
- def __init__(self, yaml_path='./lang.yaml'):
11
- self.google_translation = GoogleAuthTranslation(project_id="cvpr-2022-demonstration")
12
- with open(yaml_path) as f:
13
- self.supporting_languages = yaml.load(f, Loader=yaml.FullLoader)
14
-
15
- @staticmethod
16
- def length_check(lang, text):
17
- if lang in ['en']:
18
- if len(text) > MAX_ENG_TEXT_LENGTH:
19
- raise AssertionError(f"Input text is too long. For English, the text length should be less than {MAX_ENG_TEXT_LENGTH}. | Length: {len(text)}")
20
- elif lang in ['ko', 'ja', 'zh-CN', 'zh']:
21
- if len(text) > MAX_CJK_TEXT_LENGTH:
22
- raise AssertionError(f"Input text is too long. For CJK, the text length should be less than {MAX_CJK_TEXT_LENGTH}. | Length: {len(text)}")
23
- else:
24
- raise AssertionError(f"Not in ['ko', 'ja', 'zh-CN', 'zh', 'en'] ! | Language: {lang}")
25
-
26
- return
27
-
28
- def _get_text_with_lang(self, text, lang):
29
- lang_detected = self.google_translation.detect(text)
30
- print(f"Detected as: {lang_detected} | Destination: {lang}")
31
-
32
- if lang is None:
33
- lang = lang_detected
34
-
35
- if lang != lang_detected:
36
- target_text = self.google_translation.translate(text, lang=lang)
37
- else:
38
- target_text = text
39
-
40
- return target_text, lang
41
-
42
- def _convert_lang_from_index(self, lang):
43
- try:
44
- lang = [name for name in self.supporting_languages
45
- if self.supporting_languages[name]['language'] == lang][0]
46
- except Exception as e:
47
- raise RuntimeError(e)
48
-
49
- return lang
50
-
51
- def get_translation(self, text, lang, use_translation=True):
52
- lang_ = self._convert_lang_from_index(lang)
53
-
54
- if use_translation:
55
- target_text, _ = self._get_text_with_lang(text, lang_)
56
- else:
57
- target_text = text
58
-
59
- return target_text, lang_
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CarlDennis/Lovelive-VITS-JPZH/text/japanese.py DELETED
@@ -1,132 +0,0 @@
1
- import re
2
- from unidecode import unidecode
3
- import pyopenjtalk
4
-
5
-
6
- # Regular expression matching Japanese without punctuation marks:
7
- _japanese_characters = re.compile(
8
- r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
9
-
10
- # Regular expression matching non-Japanese characters or punctuation marks:
11
- _japanese_marks = re.compile(
12
- r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]')
13
-
14
- # List of (symbol, Japanese) pairs for marks:
15
- _symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [
16
- ('%', 'パーセント')
17
- ]]
18
-
19
- # List of (romaji, ipa) pairs for marks:
20
- _romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
21
- ('ts', 'ʦ'),
22
- ('u', 'ɯ'),
23
- ('...', '…'),
24
- ('j', 'ʥ'),
25
- ('y', 'j'),
26
- ('ni', 'n^i'),
27
- ('nj', 'n^'),
28
- ('hi', 'çi'),
29
- ('hj', 'ç'),
30
- ('f', 'ɸ'),
31
- ('I', 'i*'),
32
- ('U', 'ɯ*'),
33
- ('r', 'ɾ')
34
- ]]
35
-
36
- # Dictinary of (consonant, sokuon) pairs:
37
- _real_sokuon = {
38
- 'k': 'k#',
39
- 'g': 'k#',
40
- 't': 't#',
41
- 'd': 't#',
42
- 'ʦ': 't#',
43
- 'ʧ': 't#',
44
- 'ʥ': 't#',
45
- 'j': 't#',
46
- 's': 's',
47
- 'ʃ': 's',
48
- 'p': 'p#',
49
- 'b': 'p#'
50
- }
51
-
52
- # Dictinary of (consonant, hatsuon) pairs:
53
- _real_hatsuon = {
54
- 'p': 'm',
55
- 'b': 'm',
56
- 'm': 'm',
57
- 't': 'n',
58
- 'd': 'n',
59
- 'n': 'n',
60
- 'ʧ': 'n^',
61
- 'ʥ': 'n^',
62
- 'k': 'ŋ',
63
- 'g': 'ŋ'
64
- }
65
-
66
-
67
- def symbols_to_japanese(text):
68
- for regex, replacement in _symbols_to_japanese:
69
- text = re.sub(regex, replacement, text)
70
- return text
71
-
72
-
73
- def japanese_to_romaji_with_accent(text):
74
- '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html'''
75
- text = symbols_to_japanese(text)
76
- sentences = re.split(_japanese_marks, text)
77
- marks = re.findall(_japanese_marks, text)
78
- text = ''
79
- for i, sentence in enumerate(sentences):
80
- if re.match(_japanese_characters, sentence):
81
- if text != '':
82
- text += ' '
83
- labels = pyopenjtalk.extract_fullcontext(sentence)
84
- for n, label in enumerate(labels):
85
- phoneme = re.search(r'\-([^\+]*)\+', label).group(1)
86
- if phoneme not in ['sil', 'pau']:
87
- text += phoneme.replace('ch', 'ʧ').replace('sh',
88
- 'ʃ').replace('cl', 'Q')
89
- else:
90
- continue
91
- # n_moras = int(re.search(r'/F:(\d+)_', label).group(1))
92
- a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1))
93
- a2 = int(re.search(r"\+(\d+)\+", label).group(1))
94
- a3 = int(re.search(r"\+(\d+)/", label).group(1))
95
- if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']:
96
- a2_next = -1
97
- else:
98
- a2_next = int(
99
- re.search(r"\+(\d+)\+", labels[n + 1]).group(1))
100
- # Accent phrase boundary
101
- if a3 == 1 and a2_next == 1:
102
- text += ' '
103
- # Falling
104
- elif a1 == 0 and a2_next == a2 + 1:
105
- text += '↓'
106
- # Rising
107
- elif a2 == 1 and a2_next == 2:
108
- text += '↑'
109
- if i < len(marks):
110
- text += unidecode(marks[i]).replace(' ', '')
111
- return text
112
-
113
-
114
- def get_real_sokuon(text):
115
- text=re.sub('Q[↑↓]*(.)',lambda x:_real_sokuon[x.group(1)]+x.group(0)[1:] if x.group(1) in _real_sokuon.keys() else x.group(0),text)
116
- return text
117
-
118
-
119
- def get_real_hatsuon(text):
120
- text=re.sub('N[↑↓]*(.)',lambda x:_real_hatsuon[x.group(1)]+x.group(0)[1:] if x.group(1) in _real_hatsuon.keys() else x.group(0),text)
121
- return text
122
-
123
-
124
- def japanese_to_ipa(text):
125
- text=japanese_to_romaji_with_accent(text)
126
- for regex, replacement in _romaji_to_ipa:
127
- text = re.sub(regex, replacement, text)
128
- text = re.sub(
129
- r'([A-Za-zɯ])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text)
130
- text = get_real_sokuon(text)
131
- text = get_real_hatsuon(text)
132
- return text
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CarperAI/pile-v2-eda/load_dataset.py DELETED
@@ -1,22 +0,0 @@
1
- # import datasets
2
- # import logging
3
- import os
4
- import json
5
- # from tqdm import tqdm
6
- # dataset_subs = os.listdir(PATH)
7
-
8
- # print(dataset_subs)
9
-
10
-
11
- # for ds in tqdm(dataset_subs):
12
- # try:
13
- # print(ds)
14
- # dataset = datasets.load_dataset("CarperAI/pile-v2-small-filtered",data_files=f"data/{ds}/data.json", split="train")
15
- # dataset.save_to_disk(f"cache_ds/{ds}")
16
- # except:
17
- # print(f"Error at {ds}")
18
-
19
- ds_subsets = os.listdir("cache_ds")
20
-
21
- with open("documentation.json","w") as f:
22
- json.dump(ds_subsets,f)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/DataDreamweavers/LegaWeaver/app.py DELETED
@@ -1,118 +0,0 @@
1
- # import packages.data_processor as dp
2
- import streamlit as st
3
- import random,time,os
4
-
5
- from ibm_watson_machine_learning.foundation_models import Model
6
- from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
7
- from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams
8
- from ibm_watson_machine_learning.foundation_models.utils.enums import DecodingMethods
9
-
10
-
11
-
12
- # IBM WatsonX
13
- model_id = ModelTypes.FLAN_UL2
14
-
15
-
16
- credentials = {
17
- "url": "https://us-south.ml.cloud.ibm.com",
18
- "apikey": os.getenv("API_KEY")
19
- }
20
-
21
- project_id = os.getenv("PROJECT_ID")
22
-
23
-
24
- instruction = '''You are a seasoned legal assistant that excels at summarizing legal documents. Generate a brief summary of below document in a formal and instructive manner:
25
-
26
-
27
- document: subject to your compliance with these terms niantic grants you a limited nonexclusive nontransferable non sublicensable license to download and install a copy of the app on a mobile device and to run such copy of the app solely for your own personal noncommercial purposes. except as expressly permitted in these terms you may not a copy modify or create derivative works based on the app b distribute transfer sublicense lease lend or rent the app to any third party c reverse engineer decompile or disassemble the app or d make the functionality of the app available to multiple users through any means. niantic reserves all rights in and to the app not expressly granted to you under these terms. if you accessed or downloaded the app from the apple store then you agree to use the app only a on an apple branded product or device that runs ios apple s proprietary operating system software and b as permitted by the usage rules set forth in the apple store terms of service. if you accessed or downloaded the app from any app store or distribution platform like the apple store google play or amazon appstore each an app provider then you acknowledge and agree that these terms are concluded between you and niantic and not with app provider and that as between us and the app provider niantic is solely responsible for the app. app provider has no obligation to furnish any maintenance and support services with respect to the app. in the event of any failure of the app to conform to any applicable warranty you may notify app provider and app provider will refund the purchase price for the app to you if applicable and to the maximum extent permitted by applicable law app provider will have no other warranty obligation whatsoever with respect to the app. any other claims losses liabilities damages costs or expenses attributable to any failure of an app to conform to any warranty will be the sole responsibility of niantic. app provider is not responsible for addressing any claims you have or any claims of any third party relating to the app or your possession and use of the app including but not limited to i product liability claims ii any claim that the app fails to conform to any applicable legal or regulatory requirement and iii claims arising under consumer protection or similar legislation. in the event of any third party claim that the app or your possession and use of the app infringes that third party s intellectual property rights niantic will be solely responsible for the investigation defense settlement and discharge of any such intellectual property infringement claim to the extent required by these terms. app provider and its subsidiaries are third party beneficiaries of these terms as related to your license of the app and that upon your acceptance of the terms and conditions of these terms app provider will have the right and will be deemed to have accepted the right to enforce these terms as related to your license of the app against you as a third party beneficiary thereof. you must also comply with all applicable third party terms of service when using the app. you agree to comply with all u s and foreign export laws and regulations to ensure that neither the app nor any technical data related thereto nor any direct product thereof is exported or re exported directly or indirectly in violation of or used for any purposes prohibited by such laws and regulations. 
by using the app you represent and warrant that i you are not located in a country that is subject to a u s government embargo or that has been designated by the u s government as a terrorist supporting country and ii you are not listed on any u s government list of prohibited or restricted parties.
28
-
29
- summary: don t copy modify resell distribute or reverse engineer this app.
30
-
31
-
32
- document: any feedback you provide at this site shall be deemed to be non confidential. apple shall be free to use such information on an unrestricted basis.
33
-
34
- summary: apple may use your feedback without restrictions e g. share it publicly.
35
-
36
-
37
- document: except as required by applicable non u s local or national law the laws of the state of california excluding its conflicts of law rules govern these terms and your use of the services.
38
-
39
- summary: the court of law governing the terms is in the state of california.
40
-
41
-
42
- document: thanks for using dropbox. these terms of service terms cover your use and access to the services client software and websites services provided by dropbox inc our privacy policy explains how we collect and use your information while our acceptable use policy outlines your responsibilities when using our services. by using our services you re agreeing to be bound by these terms and to review our privacy and acceptable use policies. if you re using our services for an organization you re agreeing to these terms on behalf of that organization.
43
-
44
- summary: by accepting these terms you also agree to the privacy and acceptable use policies linked in the right.
45
-
46
-
47
- document: we may update this privacy policy to reflect changes to our information practices. if we make any change in how we use your personal information we will notify you by email sent to the e mail address specified in your account. we encourage you to periodically review this page for the latest information on our privacy practices.
48
-
49
- summary: users should revisit the terms periodically although in case of material changes the service will notify.
50
-
51
-
52
- document: jetbrains reserves the exclusive right to revoke authorization to view download and print site content at any time and you shall discontinue such use immediately upon notice from jetbrains.
53
-
54
- summary: the service can delete your account without prior notice and without a reason.
55
-
56
-
57
- document: should any part of these terms and conditions be rendered or declared invalid by an appropriate authority such invalidation of such part or portion of these terms and conditions should not invalidate the remaining portions thereof and they shall remain in full force and effect.
58
-
59
- summary: invalidity of any portion of the terms of service does not entail invalidity of its remainder.
60
-
61
-
62
- document: sharing with third parties we share information with third parties that help us operate provide improve integrate customize support and market our services. service providers we work with third party service providers to provide website and application development hosting maintenance backup storage virtual infrastructure payment processing analysis and other services for us which may require them to access or use information about you. if a service provider needs to access information about you to perform services on our behalf they do so under instruction from us including abiding by policies and procedures designed to protect your information. trello partners we work with third parties who provide consulting sales support and technical services to deliver and implement customer solutions around the services including the atlassian global partner network 7 we may share your information with these third parties in connection with their services such as to assist with billing and collections to provide localized support and to provide customizations. we may also share information with these third parties where you have agreed to that sharing like when you agree to us sharing your information with a trello expert for support related questions. third party apps you your administrator or other service users may choose to add new functionality or change the behavior of the services by enabling third party apps like power ups within the services. doing so may give third party apps access to your account and information about you like your name and email address and any content you choose to use in connection with those apps. if you are an administrator or contact listed on an account we share your details with the third party app provider upon installation. third party app policies and procedures are not controlled by us and this privacy policy does not cover how third party apps use your information. we encourage you to review the privacy policies of third parties before connecting to or using their applications or services to learn more about their privacy and information handling practices. if you object to information about you being shared with these third parties please disable the app. links to third party sites the services may include links that direct you to other websites or services whose privacy practices may differ from ours. your use of and any information you submit to any of those third party sites is governed by their privacy policies not this one. third party widgets some of our services contain widgets and social media features such as the twitter tweet button. these widgets and features collect your ip address which page you are visiting on the services and may set a cookie to enable the feature to function properly. widgets and social media features are either hosted by a third party or hosted directly on our services. your interactions with these features are governed by the privacy policy of the company providing it. with your consent we share information about you with third parties when you give us consent to do so. for example we often display personal testimonials of satisfied customers on our public websites. with your consent we may post your name alongside the testimonial. compliance with enforcement requests and applicable laws. 
enforcement of our rights in exceptional circumstances we may share information about you with a third party if we believe that sharing is reasonably necessary to a comply with any applicable law regulation legal process or governmental request including to meet national security requirements b enforce our agreements policies and terms of service c protect the security or integrity of our products and services d protect trello our customers or the public from harm or illegal activities or e respond to an emergency which we believe in good faith requires us to disclose information to assist in preventing the death or serious bodily injury of any person. for more information on how we respond to government requests see our guidelines for law enforcement8 and our transparency report 9.
63
-
64
- summary: third parties may be involved in operating the service.
65
-
66
-
67
- document: you also agree to register one account only for your use of the site. the site administrators reserve the right to ban users because of accessing the site with more than one account. the information we obtain through your use of this site including your registration data is subject to our privacy policy which is specifically incorporated by reference into these terms of use.
68
-
69
- summary: service does not allow alternative accounts.
70
-
71
-
72
- document: personal data that you transmit are never disclosed or sold by qwant to third parties except for job applications that may be shared with recruiting partners unless you ask us not to.
73
-
74
- summary: this service does not sell your personal data.
75
-
76
-
77
- document: please note that this privacy policy may change from time to time. we will not reduce your rights under this policy without your explicit consent and we expect most such changes will be minor. regardless we will post any policy changes on this page. we may amend this privacy policy at any time by posting the amend terms on the site. all amended terms shall automatically be effective thirty 30 days after they are initially posted on the site. each version of this policy will be identified at the top of the page by its effective date.
78
-
79
- summary: they can change the terms of service any time they see fit even without notification to the user. your use of the service supposedly constitutes acceptance of the changes in the terms.
80
-
81
-
82
- document: '''
83
-
84
-
85
- parameters = {
86
- GenParams.MAX_NEW_TOKENS: 100,
87
- GenParams.MIN_NEW_TOKENS: 10,
88
- GenParams.DECODING_METHOD: DecodingMethods.GREEDY,
89
- GenParams.REPETITION_PENALTY:2,
90
- GenParams.STOP_SEQUENCES: ["^a-zA-Z\d\s,.", "human", "ai"]
91
-
92
- }
93
-
94
- model = Model(
95
- model_id=model_id,
96
- params=parameters,
97
- credentials=credentials,
98
- project_id=project_id
99
- )
100
-
101
-
102
-
103
- ### MAIN FUNCTION ###
104
- def main(title = "LegaWeaver"):
105
- st.markdown("<h1 style='text-align: center; font-size: 50px; color: #0530AD;'>{}</h1>".format(title),
106
- unsafe_allow_html=True)
107
-
108
-
109
- text_message = st.text_area("Input your legal document here !")
110
- if st.button("Summarize"):
111
-
112
- info =model.generate_text(prompt=" ".join([instruction, text_message, "\n\nsummary: "]))
113
-
114
-
115
- st.success('Summary: {}'.format(info))
116
-
117
- if __name__ == "__main__":
118
- main()
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Dinoking/Guccio-AI-Designer/netdissect/workerpool.py DELETED
@@ -1,158 +0,0 @@
1
- '''
2
- WorkerPool and WorkerBase for handling the common problems in managing
3
- a multiprocess pool of workers that aren't done by multiprocessing.Pool,
4
- including setup with per-process state, debugging by putting the worker
5
- on the main thread, and correct handling of unexpected errors, and ctrl-C.
6
-
7
- To use it,
8
- 1. Put the per-process setup and the per-task work in the
9
- setup() and work() methods of your own WorkerBase subclass.
10
- 2. To prepare the process pool, instantiate a WorkerPool, passing your
11
- subclass type as the first (worker) argument, as well as any setup keyword
12
- arguments. The WorkerPool will instantiate one of your workers in each
13
- worker process (passing in the setup arguments in those processes).
14
- If debugging, the pool can have process_count=0 to force all the work
15
- to be done immediately on the main thread; otherwise all the work
16
- will be passed to other processes.
17
- 3. Whenever there is a new piece of work to distribute, call pool.add(*args).
18
- The arguments will be queued and passed as worker.work(*args) to the
19
- next available worker.
20
- 4. When all the work has been distributed, call pool.join() to wait for all
21
- the work to complete and to finish and terminate all the worker processes.
22
- When pool.join() returns, all the work will have been done.
23
-
24
- No arrangement is made to collect the results of the work: for example,
25
- the return value of work() is ignored. If you need to collect the
26
- results, use your own mechanism (filesystem, shared memory object, queue)
27
- which can be distributed using setup arguments.
28
- '''
29
-
30
- from multiprocessing import Process, Queue, cpu_count
31
- import signal
32
- import atexit
33
- import sys
34
-
35
- class WorkerBase(Process):
36
- '''
37
- Subclass this class and override its work() method (and optionally,
38
- setup() as well) to define the units of work to be done in a process
39
- worker in a woker pool.
40
- '''
41
- def __init__(self, i, process_count, queue, initargs):
42
- if process_count > 0:
43
- # Make sure we ignore ctrl-C if we are not on main process.
44
- signal.signal(signal.SIGINT, signal.SIG_IGN)
45
- self.process_id = i
46
- self.process_count = process_count
47
- self.queue = queue
48
- super(WorkerBase, self).__init__()
49
- self.setup(**initargs)
50
- def run(self):
51
- # Do the work until None is dequeued
52
- while True:
53
- try:
54
- work_batch = self.queue.get()
55
- except (KeyboardInterrupt, SystemExit):
56
- print('Exiting...')
57
- break
58
- if work_batch is None:
59
- self.queue.put(None) # for another worker
60
- return
61
- self.work(*work_batch)
62
- def setup(self, **initargs):
63
- '''
64
- Override this method for any per-process initialization.
65
- Keywoard args are passed from WorkerPool constructor.
66
- '''
67
- pass
68
- def work(self, *args):
69
- '''
70
- Override this method for one-time initialization.
71
- Args are passed from WorkerPool.add() arguments.
72
- '''
73
- raise NotImplementedError('worker subclass needed')
74
-
75
- class WorkerPool(object):
76
- '''
77
- Instantiate this object (passing a WorkerBase subclass type
78
- as its first argument) to create a worker pool. Then call
79
- pool.add(*args) to queue args to distribute to worker.work(*args),
80
- and call pool.join() to wait for all the workers to complete.
81
- '''
82
- def __init__(self, worker=WorkerBase, process_count=None, **initargs):
83
- global active_pools
84
- if process_count is None:
85
- process_count = cpu_count()
86
- if process_count == 0:
87
- # zero process_count uses only main process, for debugging.
88
- self.queue = None
89
- self.processes = None
90
- self.worker = worker(None, 0, None, initargs)
91
- return
92
- # Ctrl-C strategy: worker processes should ignore ctrl-C. Set
93
- # this up to be inherited by child processes before forking.
94
- original_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_IGN)
95
- active_pools[id(self)] = self
96
- self.queue = Queue(maxsize=(process_count * 3))
97
- self.processes = None # Initialize before trying to construct workers
98
- self.processes = [worker(i, process_count, self.queue, initargs)
99
- for i in range(process_count)]
100
- for p in self.processes:
101
- p.start()
102
- # The main process should handle ctrl-C. Restore this now.
103
- signal.signal(signal.SIGINT, original_sigint_handler)
104
- def add(self, *work_batch):
105
- if self.queue is None:
106
- if hasattr(self, 'worker'):
107
- self.worker.work(*work_batch)
108
- else:
109
- print('WorkerPool shutting down.', file=sys.stderr)
110
- else:
111
- try:
112
- # The queue can block if the work is so slow it gets full.
113
- self.queue.put(work_batch)
114
- except (KeyboardInterrupt, SystemExit):
115
- # Handle ctrl-C if done while waiting for the queue.
116
- self.early_terminate()
117
- def join(self):
118
- # End the queue, and wait for all worker processes to complete nicely.
119
- if self.queue is not None:
120
- self.queue.put(None)
121
- for p in self.processes:
122
- p.join()
123
- self.queue = None
124
- # Remove myself from the set of pools that need cleanup on shutdown.
125
- try:
126
- del active_pools[id(self)]
127
- except:
128
- pass
129
- def early_terminate(self):
130
- # When shutting down unexpectedly, first end the queue.
131
- if self.queue is not None:
132
- try:
133
- self.queue.put_nowait(None) # Nonblocking put throws if full.
134
- self.queue = None
135
- except:
136
- pass
137
- # But then don't wait: just forcibly terminate workers.
138
- if self.processes is not None:
139
- for p in self.processes:
140
- p.terminate()
141
- self.processes = None
142
- try:
143
- del active_pools[id(self)]
144
- except:
145
- pass
146
- def __del__(self):
147
- if self.queue is not None:
148
- print('ERROR: workerpool.join() not called!', file=sys.stderr)
149
- self.join()
150
-
151
- # Error and ctrl-C handling: kill worker processes if the main process ends.
152
- active_pools = {}
153
- def early_terminate_pools():
154
- for _, pool in list(active_pools.items()):
155
- pool.early_terminate()
156
-
157
- atexit.register(early_terminate_pools)
158
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/DragGan/DragGan-Inversion/torch_utils/ops/upfirdn2d.h DELETED
@@ -1,59 +0,0 @@
1
- // Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2
- //
3
- // NVIDIA CORPORATION and its licensors retain all intellectual property
4
- // and proprietary rights in and to this software, related documentation
5
- // and any modifications thereto. Any use, reproduction, disclosure or
6
- // distribution of this software and related documentation without an express
7
- // license agreement from NVIDIA CORPORATION is strictly prohibited.
8
-
9
- #include <cuda_runtime.h>
10
-
11
- //------------------------------------------------------------------------
12
- // CUDA kernel parameters.
13
-
14
- struct upfirdn2d_kernel_params
15
- {
16
- const void* x;
17
- const float* f;
18
- void* y;
19
-
20
- int2 up;
21
- int2 down;
22
- int2 pad0;
23
- int flip;
24
- float gain;
25
-
26
- int4 inSize; // [width, height, channel, batch]
27
- int4 inStride;
28
- int2 filterSize; // [width, height]
29
- int2 filterStride;
30
- int4 outSize; // [width, height, channel, batch]
31
- int4 outStride;
32
- int sizeMinor;
33
- int sizeMajor;
34
-
35
- int loopMinor;
36
- int loopMajor;
37
- int loopX;
38
- int launchMinor;
39
- int launchMajor;
40
- };
41
-
42
- //------------------------------------------------------------------------
43
- // CUDA kernel specialization.
44
-
45
- struct upfirdn2d_kernel_spec
46
- {
47
- void* kernel;
48
- int tileOutW;
49
- int tileOutH;
50
- int loopMinor;
51
- int loopX;
52
- };
53
-
54
- //------------------------------------------------------------------------
55
- // CUDA kernel selection.
56
-
57
- template <class T> upfirdn2d_kernel_spec choose_upfirdn2d_kernel(const upfirdn2d_kernel_params& p);
58
-
59
- //------------------------------------------------------------------------
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/ECCV2022/ECCV2022_papers/paper_list.py DELETED
@@ -1,129 +0,0 @@
1
- from __future__ import annotations
2
-
3
- import numpy as np
4
- import pandas as pd
5
- import requests
6
- from huggingface_hub.hf_api import SpaceInfo
7
-
8
-
9
- class PaperList:
10
- def __init__(self):
11
- self.organization_name = 'ECCV2022'
12
- self.table = pd.read_csv('papers.csv')
13
- self._preprcess_table()
14
-
15
- self.table_header = '''
16
- <tr>
17
- <td width="50%">Paper Title</td>
18
- <td width="22%">Authors</td>
19
- <td width="4%">pdf</td>
20
- <td width="4%">Session</td>
21
- <td width="4%">arXiv</td>
22
- <td width="4%">GitHub</td>
23
- <td width="4%">HF Spaces</td>
24
- <td width="4%">HF Models</td>
25
- <td width="4%">HF Datasets</td>
26
- </tr>'''
27
-
28
- @staticmethod
29
- def load_space_info(author: str) -> list[SpaceInfo]:
30
- path = 'https://huggingface.co/api/spaces'
31
- r = requests.get(path, params={'author': author})
32
- d = r.json()
33
- return [SpaceInfo(**x) for x in d]
34
-
35
- def add_spaces_to_table(self, organization_name: str,
36
- df: pd.DataFrame) -> pd.DataFrame:
37
- spaces = self.load_space_info(organization_name)
38
- name2space = {
39
- s.id.split('/')[1].lower(): f'https://huggingface.co/spaces/{s.id}'
40
- for s in spaces
41
- }
42
- df['hf_space'] = df.loc[:, ['hf_space', 'github']].apply(
43
- lambda x: x[0] if isinstance(x[0], str) else name2space.get(
44
- x[1].split('/')[-1].lower()
45
- if isinstance(x[1], str) else '', np.nan),
46
- axis=1)
47
- return df
48
-
49
- def _preprcess_table(self) -> None:
50
- self.table = self.add_spaces_to_table(self.organization_name,
51
- self.table)
52
- self.table['title_lowercase'] = self.table.title.str.lower()
53
-
54
- rows = []
55
- for row in self.table.itertuples():
56
- paper = f'<a href="{row.url}" target="_blank">{row.title}</a>' if isinstance(
57
- row.url, str) else row.title
58
- pdf = f'<a href="{row.pdf}" target="_blank">pdf</a>' if isinstance(
59
- row.pdf, str) else ''
60
- arxiv = f'<a href="{row.arxiv}" target="_blank">arXiv</a>' if isinstance(
61
- row.arxiv, str) else ''
62
- github = f'<a href="{row.github}" target="_blank">GitHub</a>' if isinstance(
63
- row.github, str) else ''
64
- hf_space = f'<a href="{row.hf_space}" target="_blank">Space</a>' if isinstance(
65
- row.hf_space, str) else ''
66
- hf_model = f'<a href="{row.hf_model}" target="_blank">Model</a>' if isinstance(
67
- row.hf_model, str) else ''
68
- hf_dataset = f'<a href="{row.hf_dataset}" target="_blank">Dataset</a>' if isinstance(
69
- row.hf_dataset, str) else ''
70
- row = f'''
71
- <tr>
72
- <td>{paper}</td>
73
- <td>{row.authors}</td>
74
- <td>{pdf}</td>
75
- <td>{row.session}</td>
76
- <td>{arxiv}</td>
77
- <td>{github}</td>
78
- <td>{hf_space}</td>
79
- <td>{hf_model}</td>
80
- <td>{hf_dataset}</td>
81
- </tr>'''
82
- rows.append(row)
83
- self.table['html_table_content'] = rows
84
-
85
- def render(self, search_query: str, case_sensitive: bool,
86
- filter_names: list[str],
87
- paper_sessions: list[str]) -> tuple[int, str]:
88
- df = self.add_spaces_to_table(self.organization_name, self.table)
89
- if search_query:
90
- if case_sensitive:
91
- df = df[df.title.str.contains(search_query)]
92
- else:
93
- df = df[df.title_lowercase.str.contains(search_query.lower())]
94
- has_arxiv = 'arXiv' in filter_names
95
- has_github = 'GitHub' in filter_names
96
- has_hf_space = 'HF Space' in filter_names
97
- has_hf_model = 'HF Model' in filter_names
98
- has_hf_dataset = 'HF Dataset' in filter_names
99
- df = self.filter_table(df, has_arxiv, has_github, has_hf_space,
100
- has_hf_model, has_hf_dataset, paper_sessions)
101
- return len(df), self.to_html(df, self.table_header)
102
-
103
- @staticmethod
104
- def filter_table(df: pd.DataFrame, has_arxiv: bool, has_github: bool,
105
- has_hf_space: bool, has_hf_model: bool,
106
- has_hf_dataset: bool,
107
- paper_sessions: list[str]) -> pd.DataFrame:
108
- if has_arxiv:
109
- df = df[~df.arxiv.isna()]
110
- if has_github:
111
- df = df[~df.github.isna()]
112
- if has_hf_space:
113
- df = df[~df.hf_space.isna()]
114
- if has_hf_model:
115
- df = df[~df.hf_model.isna()]
116
- if has_hf_dataset:
117
- df = df[~df.hf_dataset.isna()]
118
- df = df[df.session.isin(set(paper_sessions))]
119
- return df
120
-
121
- @staticmethod
122
- def to_html(df: pd.DataFrame, table_header: str) -> str:
123
- table_data = ''.join(df.html_table_content)
124
- html = f'''
125
- <table>
126
- {table_header}
127
- {table_data}
128
- </table>'''
129
- return html
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Eieichicken/yyayyaya/README.md DELETED
@@ -1,10 +0,0 @@
1
- ---
2
- title: Yyayyaya
3
- emoji: 👁
4
- colorFrom: pink
5
- colorTo: green
6
- sdk: docker
7
- pinned: false
8
- ---
9
-
10
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/models/__init__.py DELETED
@@ -1,10 +0,0 @@
1
- import importlib
2
- from basicsr.utils import scandir
3
- from os import path as osp
4
-
5
- # automatically scan and import model modules for registry
6
- # scan all the files that end with '_model.py' under the model folder
7
- model_folder = osp.dirname(osp.abspath(__file__))
8
- model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')]
9
- # import all the model modules
10
- _model_modules = [importlib.import_module(f'realesrgan.models.{file_name}') for file_name in model_filenames]
 
 
 
 
 
 
 
 
 
 
 
spaces/Enigma007/Classifier-Fasttext/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: FASTTEXT CLASSIFIER
3
- emoji: 🚀
4
- colorFrom: blue
5
- colorTo: blue
6
- sdk: streamlit
7
- sdk_version: 1.25.0
8
- app_file: app.py
9
- pinned: false
10
- license: mit
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/EtTKSf/uu/README.md DELETED
@@ -1,10 +0,0 @@
1
- ---
2
- title: Uu
3
- emoji: 🐠
4
- colorFrom: pink
5
- colorTo: indigo
6
- sdk: docker
7
- pinned: false
8
- ---
9
-
10
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
spaces/EuroPython2022/automatic-speech-recognition-with-next-gen-kaldi/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: Automatic Speech Recognition
3
- emoji: 🌖
4
- colorFrom: yellow
5
- colorTo: green
6
- sdk: gradio
7
- sdk_version: 3.0.26
8
- app_file: app.py
9
- pinned: false
10
- license: apache-2.0
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/EuroPython2022/automatic-speech-recognition-with-next-gen-kaldi/examples.py DELETED
@@ -1,256 +0,0 @@
1
- #!/usr/bin/env python3
2
- #
3
- # Copyright 2022 Xiaomi Corp. (authors: Fangjun Kuang)
4
- #
5
- # See LICENSE for clarification regarding multiple authors
6
- #
7
- # Licensed under the Apache License, Version 2.0 (the "License");
8
- # you may not use this file except in compliance with the License.
9
- # You may obtain a copy of the License at
10
- #
11
- # http://www.apache.org/licenses/LICENSE-2.0
12
- #
13
- # Unless required by applicable law or agreed to in writing, software
14
- # distributed under the License is distributed on an "AS IS" BASIS,
15
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
16
- # See the License for the specific language governing permissions and
17
- # limitations under the License.
18
- examples = [
19
- [
20
- "Chinese+English",
21
- "ptrnull/icefall-asr-conv-emformer-transducer-stateless2-zh",
22
- "greedy_search",
23
- 4,
24
- "./test_wavs/tal_csasr/0.wav",
25
- ],
26
- [
27
- "English",
28
- "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13",
29
- "greedy_search",
30
- 4,
31
- "./test_wavs/librispeech/1089-134686-0001.wav",
32
- ],
33
- [
34
- "Chinese",
35
- "luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2",
36
- "greedy_search",
37
- 4,
38
- "./test_wavs/wenetspeech/DEV_T0000000000.opus",
39
- ],
40
- [
41
- "German",
42
- "csukuangfj/wav2vec2.0-torchaudio",
43
- "greedy_search",
44
- 4,
45
- "./test_wavs/german/20170517-0900-PLENARY-16-de_20170517.wav",
46
- ],
47
- [
48
- "Arabic",
49
- "AmirHussein/icefall-asr-mgb2-conformer_ctc-2022-27-06",
50
- "greedy_search",
51
- 4,
52
- "./test_wavs/arabic/a.wav",
53
- ],
54
- [
55
- "Tibetan",
56
- "syzym/icefall-asr-xbmu-amdo31-pruned-transducer-stateless7-2022-12-02",
57
- "greedy_search",
58
- 4,
59
- "./test_wavs/tibetan/a_0_cacm-A70_31117.wav",
60
- ],
61
- # librispeech
62
- # https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless5-2022-05-13/tree/main/test_wavs
63
- [
64
- "English",
65
- "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13",
66
- "greedy_search",
67
- 4,
68
- "./test_wavs/librispeech/1089-134686-0001.wav",
69
- ],
70
- [
71
- "English",
72
- "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13",
73
- "greedy_search",
74
- 4,
75
- "./test_wavs/librispeech/1221-135766-0001.wav",
76
- ],
77
- [
78
- "English",
79
- "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13",
80
- "greedy_search",
81
- 4,
82
- "./test_wavs/librispeech/1221-135766-0002.wav",
83
- ],
84
- # gigaspeech
85
- [
86
- "English",
87
- "wgb14/icefall-asr-gigaspeech-pruned-transducer-stateless2",
88
- "greedy_search",
89
- 4,
90
- "./test_wavs/gigaspeech/1-minute-audiobook.opus",
91
- ],
92
- [
93
- "English",
94
- "wgb14/icefall-asr-gigaspeech-pruned-transducer-stateless2",
95
- "greedy_search",
96
- 4,
97
- "./test_wavs/gigaspeech/100-seconds-podcast.opus",
98
- ],
99
- [
100
- "English",
101
- "wgb14/icefall-asr-gigaspeech-pruned-transducer-stateless2",
102
- "greedy_search",
103
- 4,
104
- "./test_wavs/gigaspeech/100-seconds-youtube.opus",
105
- ],
106
- # wenetspeech
107
- # https://huggingface.co/luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2/tree/main/test_wavs
108
- [
109
- "Chinese",
110
- "luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2",
111
- "greedy_search",
112
- 4,
113
- "./test_wavs/wenetspeech/DEV_T0000000000.opus",
114
- ],
115
- [
116
- "Chinese",
117
- "luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2",
118
- "greedy_search",
119
- 4,
120
- "./test_wavs/wenetspeech/DEV_T0000000001.opus",
121
- ],
122
- [
123
- "Chinese",
124
- "luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2",
125
- "greedy_search",
126
- 4,
127
- "./test_wavs/wenetspeech/DEV_T0000000002.opus",
128
- ],
129
- # aishell2-A
130
- # https://huggingface.co/yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12/tree/main/test_wavs
131
- [
132
- "Chinese",
133
- "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12",
134
- "greedy_search",
135
- 4,
136
- "./test_wavs/aishell2/ID0012W0030.wav",
137
- ],
138
- [
139
- "Chinese",
140
- "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12",
141
- "greedy_search",
142
- 4,
143
- "./test_wavs/aishell2/ID0012W0162.wav",
144
- ],
145
- [
146
- "Chinese",
147
- "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12",
148
- "greedy_search",
149
- 4,
150
- "./test_wavs/aishell2/ID0012W0215.wav",
151
- ],
152
- # aishell2-B
153
- # https://huggingface.co/yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12/tree/main/test_wavs
154
- [
155
- "Chinese",
156
- "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12",
157
- "greedy_search",
158
- 4,
159
- "./test_wavs/aishell2/ID0012W0030.wav",
160
- ],
161
- [
162
- "Chinese",
163
- "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12",
164
- "greedy_search",
165
- 4,
166
- "./test_wavs/aishell2/ID0012W0162.wav",
167
- ],
168
- [
169
- "Chinese",
170
- "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12",
171
- "greedy_search",
172
- 4,
173
- "./test_wavs/aishell2/ID0012W0215.wav",
174
- ],
175
- # aishell2-B
176
- # https://huggingface.co/luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2/tree/main/test_wavs
177
- [
178
- "Chinese",
179
- "luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2",
180
- "greedy_search",
181
- 4,
182
- "./test_wavs/aidatatang_200zh/T0055G0036S0002.wav",
183
- ],
184
- [
185
- "Chinese",
186
- "luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2",
187
- "greedy_search",
188
- 4,
189
- "./test_wavs/aidatatang_200zh/T0055G0036S0003.wav",
190
- ],
191
- [
192
- "Chinese",
193
- "luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2",
194
- "greedy_search",
195
- 4,
196
- "./test_wavs/aidatatang_200zh/T0055G0036S0004.wav",
197
- ],
198
- # tal_csasr
199
- [
200
- "Chinese+English",
201
- "ptrnull/icefall-asr-conv-emformer-transducer-stateless2-zh",
202
- "greedy_search",
203
- 4,
204
- "./test_wavs/tal_csasr/210_36476_210_8341_1_1533271973_7057520_132.wav",
205
- ],
206
- [
207
- "Chinese+English",
208
- "ptrnull/icefall-asr-conv-emformer-transducer-stateless2-zh",
209
- "greedy_search",
210
- 4,
211
- "./test_wavs/tal_csasr/210_36476_210_8341_1_1533271973_7057520_138.wav",
212
- ],
213
- [
214
- "Chinese+English",
215
- "ptrnull/icefall-asr-conv-emformer-transducer-stateless2-zh",
216
- "greedy_search",
217
- 4,
218
- "./test_wavs/tal_csasr/210_36476_210_8341_1_1533271973_7057520_145.wav",
219
- ],
220
- [
221
- "Tibetan",
222
- "syzym/icefall-asr-xbmu-amdo31-pruned-transducer-stateless7-2022-12-02",
223
- "greedy_search",
224
- 4,
225
- "./test_wavs/tibetan/a_0_cacm-A70_31116.wav",
226
- ],
227
- [
228
- "Tibetan",
229
- "syzym/icefall-asr-xbmu-amdo31-pruned-transducer-stateless7-2022-12-02",
230
- "greedy_search",
231
- 4,
232
- "./test_wavs/tibetan/a_0_cacm-A70_31118.wav",
233
- ],
234
- # arabic
235
- [
236
- "Arabic",
237
- "AmirHussein/icefall-asr-mgb2-conformer_ctc-2022-27-06",
238
- "greedy_search",
239
- 4,
240
- "./test_wavs/arabic/b.wav",
241
- ],
242
- [
243
- "Arabic",
244
- "AmirHussein/icefall-asr-mgb2-conformer_ctc-2022-27-06",
245
- "greedy_search",
246
- 4,
247
- "./test_wavs/arabic/c.wav",
248
- ],
249
- [
250
- "German",
251
- "csukuangfj/wav2vec2.0-torchaudio",
252
- "greedy_search",
253
- 4,
254
- "./test_wavs/german/20120315-0900-PLENARY-14-de_20120315.wav",
255
- ],
256
- ]
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Everymans-ai/GPT-knowledge-management/README.md DELETED
@@ -1,15 +0,0 @@
1
- ---
2
- title: Haystack QA
3
- emoji: 📚
4
- colorFrom: yellow
5
- colorTo: green
6
- sdk: streamlit
7
- sdk_version: 1.15.2
8
- app_file: app.py
9
- pinned: false
10
- license: apache-2.0
11
- duplicated_from: Abhilashvj/haystack_QA
12
- ---
13
-
14
-
15
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference