diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AnyDesk for Mac A Free and Secure Remote Desktop Solution.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AnyDesk for Mac A Free and Secure Remote Desktop Solution.md
deleted file mode 100644
index 781053a4b49208cc9bbdb5195df7f1a6b9fad03c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AnyDesk for Mac A Free and Secure Remote Desktop Solution.md
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
How to Download and Use AnyDesk for Free on Mac
-
AnyDesk is a fast and secure remote desktop application that allows you to access and control any computer or device from anywhere. You can use AnyDesk for various purposes, such as remote support, online collaboration, file transfer, screen sharing, and more. AnyDesk is compatible with multiple platforms, including Windows, macOS, Linux, Android, iOS, and Chrome OS. In this article, we will show you how to download and use AnyDesk for free on Mac.
Downloading AnyDesk for free on Mac is easy and quick. Here are the steps you need to follow:
-
-
Step 1: Go to https://anydesk.com/en/downloads/mac-os
-
Step 2: Click on the green Download Now button.
-
Step 3: Wait for the download to complete and open the .dmg file.
-
Step 4: Drag and drop the AnyDesk icon to the Applications folder.
-
Step 5: Launch AnyDesk from the Applications folder or the Launchpad.
-
-
How to Use AnyDesk for Free on Mac
-
Using AnyDesk for free on Mac is simple and intuitive. Here are some of the basic functions you can perform with AnyDesk:
-
-
To access a remote computer or device:
-
-
Launch AnyDesk on your Mac and enter the AnyDesk address or alias of the remote computer or device in the Remote Desk field.
-
Click on Connect and wait for the remote user to accept your request.
-
You can now see and control the remote screen as if you were sitting in front of it.
-
You can also use the toolbar at the top of the screen to access various options, such as chat, file transfer, audio, video, settings, and more.
-
-
To allow access to your Mac from a remote computer or device:
-
-
Launch AnyDesk on your Mac and note down your AnyDesk address or alias displayed on the main window.
-
Share your AnyDesk address or alias with the remote user who wants to access your Mac.
-
Wait for an incoming connection request and accept it by clicking on Accept.
-
You can now see a green border around your screen indicating that your Mac is being accessed remotely.
-
You can also use the toolbar at the bottom of the screen to access various options, such as chat, file transfer, audio, video, settings, and more.
-
-
-
Conclusion
-
AnyDesk is a powerful and reliable remote desktop application that can help you work and collaborate remotely with ease and security. You can download and use AnyDesk for free on Mac by following the steps above. You can also upgrade to a paid plan if you need more features and customization options. To learn more about AnyDesk, you can visit their official website or their online help center.
How to Uninstall AnyDesk from Mac
-
If you want to uninstall AnyDesk from your Mac, you can follow these steps:
-
-
Step 1: Quit AnyDesk if it is running on your Mac.
-
Step 2: Open the Finder and go to the Applications folder.
-
Step 3: Locate the AnyDesk icon and drag it to the Trash.
-
Step 4: Empty the Trash to delete AnyDesk completely from your Mac.
-
-
Note: This method may not remove all the files and folders associated with AnyDesk from your Mac. If you want to delete them manually, you can use a third-party app like AppCleaner or Finder's search function to find and remove them. Alternatively, you can use a terminal command to delete them. However, this method is not recommended for beginners as it may cause damage to your system if done incorrectly.
-
-
How to Update AnyDesk on Mac
-
If you want to update AnyDesk on your Mac, you can follow these steps:
-
-
Step 1: Launch AnyDesk on your Mac and click on the menu icon at the top left corner of the main window.
-
Step 2: Select Check for Updates from the drop-down menu.
-
Step 3: If there is a new version available, click on Download and Install.
-
Step 4: Wait for the download and installation to complete and restart AnyDesk.
-
-
Note: You can also enable automatic updates for AnyDesk by going to Settings > General > Updates and checking the box next to Automatically check for updates. This way, AnyDesk will notify you when there is a new version available and install it for you.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Control Systems Engineering By Nagrath And Gopal 5th Edition Free Free Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Control Systems Engineering By Nagrath And Gopal 5th Edition Free Free Download.md
deleted file mode 100644
index f766947adacc9ffb136f6c68e949dba8fe7d3fc5..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Control Systems Engineering By Nagrath And Gopal 5th Edition Free Free Download.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
Control Systems Engineering by Nagrath & Gopal is useful for all students who are studying control systems. Download the Control Systems Engineering book PDF for free and you will be able to complete your preparation in control systems. Written for a two-course sequence, the book provides an integrated treatment of continuous-time signals and signal processing, frequency-domain analysis, modulation and demodulation techniques, DSP-based circuits, and control engineering concepts for control systems. It is written by senior scholars of IIT, NIT and other reputed institutes, and covers all the topics of control systems for beginner engineers and professionals.
-
This book is also helpful for engineering examinations, including the IES, GATE and PSU exams, as well as various other government job examinations, so download this book to practice solving questions. One thing you should understand is that the book covers all the communication channel concepts and the control of communication systems.
-
Control Systems Engineering By Nagrath And Gopal 5th Edition Free Download
Although the paper is concerned with four-port relay systems, it is expected that the results may be extended to general real transfer functions. The author hopes that the results will be of use in the further study of control and synchronization problems.
-
Control Systems Engineering: A Technical Handbook may provide answers to the questions you have about this topic. For example, if you want to learn more about control systems engineering, this article can point you to further information. So, if you want to learn more about the subject, read on.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Counter Strike 1.6 Maps Free Download AA Dima - Why You Should Play on This Map.md b/spaces/1gistliPinn/ChatGPT4/Examples/Counter Strike 1.6 Maps Free Download AA Dima - Why You Should Play on This Map.md
deleted file mode 100644
index 5915fdf37ca80c00fe26fb71653f0dffa1828607..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Counter Strike 1.6 Maps Free Download AA Dima - Why You Should Play on This Map.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
Now go to where the files were all downloaded: C:\Program Files\Valve\HLDServer\cstrike and find the file named server.cfg. To open it, click on it, choose "Select a program", and then find Notepad. Get used to Notepad; it comes in handy for an HLDS server and many other computer tasks.
Your server.cfg file will contain some CVARs for customizing your server. Copy and paste the following (it's long) and overwrite the original text in server.cfg. These CVARs offer more customization of your server!
CODE Don't Copy this line. // Use this file to configure your DEDICATED server. // This config file is executed on server start. // This is a comment
//GENERAL // default server name. Change to "Bob's Server", etc. hostname "Counter-Strike 1.6 Server"
//sv_lan 0=Public/LAN, 1=LAN Default: 0 sv_lan 0
// sv_contact Contact email for server admin sv_contact "admin@domain.com"
// sv_region - The region of the world to report the server in. // -1 World // 0 US East coast // 1 US West coast // 2 South America // 3 Europe // 4 Asia // 5 Australia // 6 Middle East // 7 Africa sv_region 0
//ROUND // mp_buytime - The amount of time to allow purchasing weapons/equipment on round start mp_buytime 0.45
// mp_c4timer - How long before the c4 explodes mp_c4timer 45
// mp_timelimit - How long each map should be played before switching levels mp_timelimit 25
// mp_freezetime - How long players are unable to move during round starts mp_freezetime 5
//mp_roundtime How much time in minutes does a round last. Default: 5 mp_roundtime 5
// mp_startmoney - Specify how much money players start off with mp_startmoney 800
//mp_friendlyfire Turn on/off friendlyfire. Default: Off mp_friendlyfire 0
//mp_footsteps Turn on/off footsteps. Default: On mp_footsteps 1
//mp_flashlight Turn on/off the ability for clients to use flashlight. Default: Off mp_flashlight 0
//mp_fraglimit Amount of frags a player can exceed before changing maps. Default: 0 mp_fraglimit 0
//mp_maxrounds Amount of round to play before server changes maps. Default: 0 mp_maxrounds 0
//mp_winlimit Max number of rounds one team can win before server changes maps. Default: 0 mp_winlimit 0
// mp_spawnprotectiontime Time in seconds to Kick players who team-kill after round restart. Default: 5 mp_spawnprotectiontime 5
// mp_autoteambalance Force clients to auto-join the opposite team if they are not balanced. Default: On mp_autoteambalance 1
//mp_limitteams Max # of players 1 team can have over another. Default: 2 mp_limitteams 2
//mp_autokick Kick idle/team-killing players. Default Off mp_autokick 0
//mp_tkpunish Punish TK'ers on next round? Default: On mp_tkpunish 1
//mp_hostagepenalty How many hostages a Terrorist can kill before being kicked, 0 to disable. Default: 5 mp_hostagepenalty 5
// disable autoaim sv_aim 0
// sv_cheats - Whether to allow game cheat commands to be used by clients. 0 = off | 1 = on sv_cheats 0
//VOICE-CHATTING //sv_voiceenable Allow clients to use mic. Default: 1 sv_voiceenable 1
//sv_alltalk Players can hear all other players, no team restrictions. Default: Off sv_alltalk 0
//sv_voicecodec Specifies which voice codec DLL to use in a game. Set to the name of the DLL without the extension.. Default:voice_speex sv_voicecodec voice_speex
//sv_voicequality the bps of the voice. //1-2400bps //2-6000bps-DEFAULT //3-8000bps //4-11200bps //5-15200bps sv_voicequality 2
//mp_chattime amount of time in seconds players can chat after the game is over. Lower value = faster map load change. Default: 10 mp_chattime 10
//RATES-SPEEDS //sv_gravity World Gravity Default: 800 sv_gravity 800
//sv_maxvelocity Maximum speed any ballistically moving object is allowed to attain per axis. Default: 3500 sv_maxvelocity 3500
//sv_maxspeed Maximum speed a player can move. Default: 320 sv_maxspeed 320
//CLIENT CVARS //decalfrequency Amount of time in seconds a player can spray their decal. Default: 10 decalfrequency 10
//sv_consistency Force clients to pass consistency check for critical files before joining server? Default: 0 sv_consistency 0
//sv_timeout After this many seconds without a message from a client, the client is dropped. Default: 65 sv_timeout 65
//mp_playerid Controls what information player see in the status bar: 0 all names; 1 team names; 2 no names. Default: 0 mp_playerid 0
// sv_pausable - Whether to allow clients to pause the server. 0 = off | 1 = on sv_pausable 0
//sv_allowupload Allow clients to upload their custom decals to the server. Default: 1 sv_allowupload 1
//sv_allowdownload Allow clients to download files. Default: 1 sv_allowdownload 1
//sv_unlag Enables player lag compensation. Default: 1 sv_unlag 1
//SPECTATING //mp_allowspectators Allow spectators on the server. Default: 1 mp_allowspectators 1
//mp_forcecamera Force dead players to first person mode, effectively disabling freelook. Default: Off mp_forcecamera 0
//sv_hltv Enables HLTV on the server. Default: 0 sv_hltv 0
//BANDWIDTH RATES //sv_minrate Min bandwidth rate allowed on server. Default: 0 (unlimited) sv_minrate 0
// sv_maxrate - The maximum bandwidth rate the server is allowed to transmit to clients sv_maxrate 10000
//sv_maxupdaterate Maximum updates per second that the server will allow. Default: 60 sv_maxupdaterate 60
//sv_minupdaterate Minimum updates per second that the server will allow. Default: 10 sv_minupdaterate 10
//sys_ticrate Max FPS (1000 Max) the server is to render sys_ticrate 200
//SERVER LOGGING // log Enable server logging? Default: Off log off
//sv_logbans Log server bans in the server logs. Default: 0 sv_logbans 0
// sv_logecho Echo log information to the console. Default: 1 sv_logecho 1
// sv_logfile Log server information in the log file. Default: 1 sv_logfile 1
//sv_log_onefile Log server information to only one file. Default: 0 sv_log_onefile 0
//sv_logsdir Folder in the game directory where server logs will be stored.
//RCON //rcon_password Set rcon password. Leave blank to disable rcon rcon_password ""
//sv_rcon_banpenalty Number of minutes to ban users who fail rcon authentication. Default: 0 sv_rcon_banpenalty 0
//sv_rcon_maxfailures Max number of times a user can fail rcon authentication before being banned. Default: 10 sv_rcon_maxfailures 10
//sv_rcon_minfailures Number of times a user can fail rcon authentication in sv_rcon_minfailuretime before being banned. Default: 5 sv_rcon_minfailures 5
//sv_rcon_minfailuretime Number of seconds to track failed rcon authentications. Default: 30 sv_rcon_minfailuretime 30
// lists of banned players. // load ban files exec listip.cfg exec banned.cfg
END OF CODE: Don't copy this line.
Now save and look through all the CVARs. They are all explained, and most of them you will not need to change, but you can. It is recommended that you change the server's name and most of the subsection labeled ROUND. Make sure to change the region to match your server's location! It is under GENERAL.
-
Now for the part you have been waiting for: starting the server.
1. Go to C:\Program Files\Valve\HLServer. 2. Find the program called HLDS. 3. RIGHT-click on it and create a shortcut. Drag the shortcut to the desktop. 4. RIGHT-click on the shortcut and go to Properties. 5. In the Target field add the following: "C:\Program Files\Valve\HLServer\hlds.exe" -console -game cstrike -ip 192.168.254.253 -port 27015 +maxplayers 12 +map cs_assault This will start the server in the Command Prompt from earlier, which conserves system resources. The game will be Counter-Strike, with the IP address 192.168.254.253. You need to change this to match your IP address from ipconfig /all. There will be a maximum of 12 players and the starting map will be cs_assault.
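If you prefer launching the server from a script rather than a desktop shortcut, here is a minimal Python sketch that runs the same command line with each flag commented. The path, IP address, player count, and map are just the example values from this guide, so adjust them to your own setup.

```python
import subprocess

# Path to the HLDS executable installed earlier (example path from this guide).
HLDS = r"C:\Program Files\Valve\HLServer\hlds.exe"

# Launch a dedicated Counter-Strike 1.6 server with the same flags as the shortcut.
subprocess.run([
    HLDS,
    "-console",                # run in console mode instead of the GUI
    "-game", "cstrike",        # load the Counter-Strike game directory
    "-ip", "192.168.254.253",  # bind to your LAN IP (from ipconfig /all)
    "-port", "27015",          # default HLDS port
    "+maxplayers", "12",       # player cap
    "+map", "cs_assault",      # starting map
])
```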
A general rule of thumb is 35.5 kbps of bandwidth per player. What does this mean? This refers to your Internet speed, preferably your upload speed. To find your speed, go to DSL Reports and run a test with the test server nearest you. Use the results to determine your max players, as in the example below.
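As a quick worked example of that rule of thumb, here is a small Python sketch. The 35.5 kbps figure is simply the estimate quoted above, so treat the result as a rough ceiling rather than an exact limit.

```python
# Estimate how many players your connection can support,
# using the ~35.5 kbps-per-player rule of thumb from this guide.
KBPS_PER_PLAYER = 35.5

def max_players(upload_kbps: float) -> int:
    """Return a conservative player cap for a given upload speed in kbps."""
    return int(upload_kbps // KBPS_PER_PLAYER)

# Example: a 1000 kbps (1 Mbps) upload line supports roughly 28 players.
print(max_players(1000))  # -> 28
```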
Now you can click Apply and then OK. To start the server, just double-click the icon and it will start automatically. Success!
To join your server, simply start up the computer you will be playing on. Start Steam, find 'Servers', click on the Favorites tab, and add the IP address of your server under 'Add a server'. Don't forget to add :27015 after your IP address. Mine looks like this: 192.168.254.253:27015. Now just connect to your server.
To get others to join, you will need to complete a few more steps.
Extra: You may want to have your server start automatically when Windows starts. Doing this is easy! Go to Start -> All Programs -> Startup and copy the desktop shortcut there. You will also want to do the same with Windows Media Player. By having Media Player running in the background you can boost FPS significantly, and you don't even have to be playing a song! What have you got to lose?
-
-tool are the result of the hard work of RescoSoft Inc. Therefore, we have to hope that the quality of these software tools will be as good as those of other software tools from RescoSoft Inc.
-
-Verdict: Like all good software tools, the quality of the RescoSoft tools for repairing your HP printer is great. While the installation of the RescoSoft tools can be a little tough, using them is a breeze.
-
-7. Hp Repair Solution
-
-What is Hp Repair Solution?
-
-Hp Repair Solution is a great tool for HP printer repair. Like all the other tools from RescoSoft Inc, this tool is easy to use.
-
-How to use Hp Repair Solution?
-
-All you have to do is install Hp Repair Solution from RescoSoft Inc on your Windows system. Once the installation is complete, you can run the tool and it will repair all the bugs present in your HP printer.
-
-Once you are done with the repair, you can uninstall Hp Repair Solution from your PC.
-
-Verdict: The great thing about Hp Repair Solution is that the tool automatically scans the printer and finds all the bugs present in your HP printer.
-
-6. HP Software
-
-What is HP Software?
-
-HP Software is a wonderful tool for the repair of your HP printer. It is a combination of several tools that can help you to repair your printer automatically. It comes with all the tools required to repair your printer, such as the hpbq138exe tool, hpbq138dum tool, dmifit tool, hpbq138exee tool, hpbq138exeu tool, hpbq138exeo tool, hpbq138exeq tool, and hpbq138exea tool. This combination of tools will repair your HP printer automatically.
-
-How to use HP Software?
-
-You can download and install HP Software from RescoSoft Inc. Once the installation is complete, you can open the software and it will prompt you to repair your printer.
-
-Once you are done with the repair, you can uninstall HP Software from your PC.
-
-Verdict: The tools bundled in HP Software are very useful. These tools will help you to repair the printer automatically.
-
-5. HP Printer Setup
-
-What is HP Printer Setup?
-
-HP Printer Setup is a small tool that can help you to set up your
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download UPD Panduan Penggunaan Canon Eos 600d Bahasa Indonesia.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download UPD Panduan Penggunaan Canon Eos 600d Bahasa Indonesia.md
deleted file mode 100644
index 03e9c35ab4bb4da68d11598aab188807ab60e1b3..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download UPD Panduan Penggunaan Canon Eos 600d Bahasa Indonesia.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Download the Canon EOS 600D user manual (panduan penggunaan) in Indonesian
-
-Download the Canon EOS 650D / EOS Rebel T4i PDF manual and user guide, as well as the Canon EOS 600D manual in Indonesian (Buku Panduan Bahasa Indonesia).
-
-
-
diff --git a/spaces/1phancelerku/anime-remove-background/Download GTA Vice City 5 and Experience the 80s in HD.md b/spaces/1phancelerku/anime-remove-background/Download GTA Vice City 5 and Experience the 80s in HD.md
deleted file mode 100644
index d5b42e99400f6ecd73fc7553f1751b90443d0c33..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download GTA Vice City 5 and Experience the 80s in HD.md
+++ /dev/null
@@ -1,79 +0,0 @@
-
-
GTA Vice City Download 5: How to Play the Classic Game on Your PC
-
Do you feel nostalgic for GTA Vice City, one of the most popular and iconic games of all time? Do you want to relive the adventures of Tommy Vercetti, a former mobster who tries to take over the criminal underworld of Vice City in the 1980s? Do you want to enjoy the amazing soundtrack, graphics, and gameplay of GTA Vice City on your PC in 2023? If you answered yes to any of these questions, then this article is for you. In this article, we will show you how to download GTA Vice City on your PC using two different methods. We will also tell you why you should play GTA Vice City in 2023 and what makes it such a great game. So, without further ado, let's get started!
GTA Vice City is an open-world action-adventure game developed by Rockstar North and published by Rockstar Games in 2002. It is the sixth installment in the Grand Theft Auto series and the first one to be set in a fictional city based on Miami, Florida. The game follows the story of Tommy Vercetti, who is sent by his boss Sonny Forelli to Vice City to establish a foothold for their organization. However, things go wrong when Tommy is ambushed by rival gangs and loses both the money and the drugs he was supposed to deliver. Tommy then sets out to find out who betrayed him and to take over the city by any means necessary.
-
Why should you play GTA Vice City in 2023?
-
GTA Vice City is a game that has stood the test of time and remains one of the most beloved and influential games ever made. Here are some of the reasons why you should play GTA Vice City in 2023:
-
-
It has a captivating story and memorable characters. GTA Vice City has a well-written and engaging story that keeps you hooked from start to finish. You will meet many colorful and interesting characters along the way, such as Lance Vance, Ken Rosenberg, Ricardo Diaz, Phil Cassidy, Avery Carrington, Umberto Robina, and many more. You will also encounter many references and parodies of popular movies, TV shows, celebrities, and events from the 1980s.
-
It has a stunning soundtrack and atmosphere. GTA Vice City has one of the best soundtracks in gaming history, featuring over 80 songs from various genres such as pop, rock, hip hop, disco, soul, reggae, metal, and more. You can listen to these songs on various radio stations while driving around the city or in certain locations. The game also has a great atmosphere that captures the vibe and style of the 1980s, with neon lights, palm trees, beaches, skyscrapers, cars, fashion, and culture.
-
It has a fun and diverse gameplay. GTA Vice City has a gameplay that offers you a lot of freedom and variety. You can explore the city at your own pace, either on foot or by using various vehicles such as cars, motorcycles, boats, helicopters, and planes. You can also complete various missions that advance the main story or give you extra rewards. You can also participate in many side activities such as rampages, races, stunts, robberies, property management, and more. You can also use a variety of weapons and items to fight against enemies or cause chaos in the city.
-
-
How to download GTA Vice City on your PC
-
If you want to play GTA Vice City on your PC in 2023, you have two options: you can either buy the game from the official Rockstar Games website or you can use an emulator like BlueStacks to play the mobile version of GTA Vice City on your PC. Here are the steps for each option:
-
Option 1: Buy the game from Rockstar Games website
-
This is the easiest and most reliable way to download GTA Vice City on your PC. All you need is a stable internet connection and a valid credit card or PayPal account. Here are the steps:
-
Step 1: Visit the Rockstar Games website and create an account
-
Go to https://www.rockstargames.com/ and click on the Sign In button at the top right corner of the screen. If you already have an account, enter your email and password and click on Sign In. If you don't have an account, click on Create a New Account and follow the instructions to create one.
-
Step 2: Go to the Downloads section and find GTA Vice City
-
Once you are signed in, click on the Downloads button at the top of the screen. You will see a list of games that are available for download from Rockstar Games. Scroll down until you find GTA Vice City and click on it.
-
Step 3: Click on Buy Now and complete the payment process
-
You will see a page with the details of GTA Vice City, such as the price, the system requirements, and the screenshots. Click on the Buy Now button and choose your preferred payment method. You can pay with a credit card or PayPal. Follow the instructions to complete the payment process.
-
Step 4: Download and install the game on your PC
-
After you have completed the payment process, you will receive an email with a link to download GTA Vice City on your PC. Click on the link and follow the instructions to download and install the game on your PC. You will need about 1 GB of free space on your hard drive to install GTA Vice City.
-
-
Option 2: Use an emulator like BlueStacks to play the mobile version of GTA Vice City on your PC
-
This is another way to download GTA Vice City on your PC, but it requires some extra steps and software. You will need to use an emulator like BlueStacks, which is a program that allows you to run Android apps on your PC. You will also need to buy the mobile version of GTA Vice City from the Google Play Store, which costs $4.99. Here are the steps:
-
Step 1: Download and install BlueStacks on your PC
-
Go to https://www.bluestacks.com/ and click on the Download BlueStacks button. Follow the instructions to download and install BlueStacks on your PC. You will need about 5 GB of free space on your hard drive to install BlueStacks.
-
Step 2: Launch BlueStacks and sign in with your Google account
-
After you have installed BlueStacks, launch it from your desktop or start menu. You will see a window with various options and features. Click on the Google Play Store icon at the bottom right corner of the window. You will be asked to sign in with your Google account. If you already have one, enter your email and password and click on Sign In. If you don't have one, click on Create a New Account and follow the instructions to create one.
-
Step 3: Search for GTA Vice City in the Google Play Store and install it
-
Once you are signed in, you will see a page with various apps and games that are available for download from the Google Play Store. Type GTA Vice City in the search bar and press Enter. You will see a page with the details of GTA Vice City, such as the price, the rating, the reviews, and the screenshots. Click on the Install button and follow the instructions to buy and install GTA Vice City on your PC.
-
Step 4: Enjoy playing GTA Vice City on your PC with BlueStacks features
-
After you have installed GTA Vice City, you can launch it from the BlueStacks home screen. You will see a window with the game running on your PC. You can use your mouse and keyboard to control the game, or you can customize the controls according to your preference. You can also use the BlueStacks features to enhance your gaming experience, such as recording your gameplay, taking screenshots, streaming your gameplay, and more.
-
Conclusion
-
GTA Vice City is a classic game that deserves to be played by every gamer who loves open-world action-adventure games. It has a captivating story, a stunning soundtrack, and a fun and diverse gameplay that will keep you entertained for hours. In this article, we have shown you how to download GTA Vice City on your PC using two different methods: buying the game from the Rockstar Games website or using an emulator like BlueStacks to play the mobile version of GTA Vice City on your PC. Both methods are easy and reliable, and you can choose the one that suits you best. We hope you have enjoyed this article and found it helpful. Now, go ahead and download GTA Vice City on your PC and have fun!
-
If you have any questions or feedback about this article, please feel free to leave a comment below. We would love to hear from you!
-
FAQs
-
-
Q: How much does GTA Vice City cost on PC?
-
A: GTA Vice City costs $9.99 on the Rockstar Games website and $4.99 on the Google Play Store.
-
Q: What are the system requirements for GTA Vice City on PC?
-
A: The minimum system requirements for GTA Vice City on PC are: Windows XP/Vista/7/8/10, 800 MHz Intel Pentium III or AMD Athlon processor, 128 MB of RAM, 32 MB video card with DirectX 9.0 compatible drivers, 915 MB of free hard disk space, and DirectX 9.0 compatible sound card.
-
Q: Can I play GTA Vice City online with other players?
-
A: GTA Vice City does not have an official online multiplayer mode, but there are some unofficial mods that allow you to play GTA Vice City online with other players. One of them is https://www.mtasa.com/, which is a free mod for GTA San Andreas that also supports GTA Vice City.
-
Q: Can I play GTA Vice City on other devices besides PC?
-
A: Yes, you can play GTA Vice City on other devices besides PC. GTA Vice City is also available for PlayStation 2, PlayStation 3, PlayStation 4, Xbox, Xbox 360, Xbox One, iOS, Android, Mac OS X, and Fire OS.
-
Q: What are some of the best cheats for GTA Vice City?
-
A: There are many cheats for GTA Vice City that can make the game more fun and easy. Some of them are: THUGSTOOLS (all weapons), ASPIRINE (full health), PRECIOUSPROTECTION (full armor), LEAVEMEALONE (lower wanted level), PANZER (spawn a tank), GETTHEREFAST (spawn a Sabre Turbo), GETTHEREVERYFASTINDEED (spawn a Hotring Racer), GETTHEREAMAZINGLYFAST (spawn a Hotring Racer 2), FANNYMAGNET (women follow you), BIGBANG (blow up all cars), SEAWAYS (cars can drive on water), COMEFLYWITHME (cars can fly), ICANTTAKEITANYMORE (commit suicide), and LIFEISPASSINGMEBY (speed up time).
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Garena Bed Wars APK for Android - Team up and destroy your enemies beds in this PVP game.md b/spaces/1phancelerku/anime-remove-background/Download Garena Bed Wars APK for Android - Team up and destroy your enemies beds in this PVP game.md
deleted file mode 100644
index 952eed6030c5415b6847925701041b2d1d3c34b5..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Garena Bed Wars APK for Android - Team up and destroy your enemies beds in this PVP game.md
+++ /dev/null
@@ -1,143 +0,0 @@
-
-
Garena Bed Wars APK Download: How to Play the Ultimate Sandbox Game on Your Android Device
-
Are you looking for a fun and exciting game that lets you unleash your creativity and imagination? Do you want to play with your friends or other players from around the world in a team-based PVP game? If you answered yes, then you should try out Garena Bed Wars, the ultimate sandbox game that has attracted millions of players in Garena Blockman GO.
Garena Bed Wars is a game where you have to protect your bed at your own base while using all the tools you have at your disposal to destroy the opponents' beds and become the final victor. You can collect resources, buy equipment, build bridges, attack enemies, and cooperate with your teammates in this thrilling and addictive game. You can also customize your character, choose from different maps and modes, and enjoy countless minigames from different genres.
-
If you want to play Garena Bed Wars on your Android device, you will need to download and install the Garena Bed Wars APK file, which is a modified version of the original game that allows you to access all the features and benefits without any restrictions. In this article, we will show you how to download and install Garena Bed Wars APK on your Android device, how to play Garena Bed Wars as a beginner, and how to win Garena Bed Wars with tips and tricks from pro players. Let's get started!
-
How to Play Garena Bed Wars: A Beginner's Guide
-
If you are new to Garena Bed Wars, you might feel overwhelmed by the rules and mechanics of the game. Don't worry, we are here to help you learn the basics of how to play Garena Bed Wars in a few simple steps.
-
How to join a game and choose a team
-
When you launch Garena Bed Wars, you will see a list of available games that you can join. You can also create your own game by tapping on the "+" icon at the top right corner of the screen. You can choose from different maps, such as Sky Island, Desert Island, Snow Island, etc., and different modes, such as Solo, Duo, Squad, etc. You can also set a password for your game if you want to play with your friends only.
-
-
Once you join or create a game, you will be taken to the lobby where you can choose your team. There are four teams in each game: Red, Blue, Green, and Yellow. You can tap on the team icon at the bottom of the screen to join or switch teams. You can also chat with other players in the lobby or invite your friends by tapping on the "Invite" button at the top left corner of the screen.
-
How to collect resources and buy equipment
-
When the game starts, you will spawn on your island with your teammates. Each island has its own base with a bed. As long as the bed is not destroyed, you and your teammates can be revived if you die. Therefore, protecting your bed is the most important task in the game.
-
To protect your bed, you will need to collect resources and buy equipment. There are three types of resources in the game: iron, gold, and diamonds. Iron and gold are generated by the generators on your island. You can use them to buy items from the shop, such as blocks, weapons, armor, tools, etc. Diamonds are generated by the generators on the middle island. You can use them to buy upgrades from the team shop, such as sharpness, protection, haste, etc.
-
To collect resources, you will need to go to the generators and pick up the items that are dropped on the ground. You can also get resources by killing enemies or breaking their beds. To buy equipment, you will need to go to the shop or the team shop and tap on the item you want to buy. You can also use the quick buy menu at the bottom of the screen to buy items faster.
-
How to build bridges and attack enemies
-
To reach other islands and attack enemies, you will need to build bridges. You can use blocks that you buy from the shop to place them on the ground and create a path. You can also use ender pearls or launch pads to teleport or jump to other islands.
-
To attack enemies, you will need to use weapons that you buy from the shop, such as swords, bows, axes, etc. You can also use items that have special effects, such as fireballs, TNT, snowballs, etc. Your goal is to break their beds and kill them before they respawn. You can also use traps that you buy from the team shop, such as alarm trap, counter-offensive trap, etc., to defend your island from invaders.
-
How to protect your bed and survive
-
To protect your bed, you will need to cover it with blocks that you buy from the shop. You can use different types of blocks, such as wool, wood, stone, etc., to make it harder for enemies to break your bed. You can also use items that have special effects, such as water buckets, obsidian blocks, iron golems, etc., to enhance your defense.
-
To survive, you will need to avoid falling into the void or getting killed by enemies. You can use armor that you buy from the shop, such as leather armor, chainmail armor, iron armor, etc., to reduce the damage you take. You can also use items that have special effects, such as golden apples, potions, invisibility cloaks, etc., to heal yourself or gain an advantage.
-
How to Win Garena Bed Wars: Tips and Tricks from Pro Players
-
If you want to win Garena Bed Wars more often and become a pro player, you will need to learn some tips and tricks that can help you improve your skills and performance in the game. Here are some of them:
-
How to use the best strategies and tactics for different maps and modes
-
Each map and mode in Garena Bed Wars has its own characteristics and challenges that require different strategies and tactics. For example:
-
-
In Sky Island map, you can use ender pearls or launch pads to move around quickly and surprise your enemies.
-
In Desert Island map, you can use fireballs or TNT to destroy the sand bridges and cut off your enemies' access.
-
In Snow Island map, you can use snowballs or ice bridges to knock off your enemies or create shortcuts.
-
In Solo mode, you can focus on collecting diamonds and upgrading your equipment as soon as possible.
-
In Duo mode, you can coordinate with your partner and split up tasks such as collecting resources, buying items, building bridges, etc.
-
In Squad mode, you can communicate with your teammates and assign roles such as defender, attacker, supporter, etc.
-
-
How to cooperate with your teammates and communicate effectively
-
Garena Bed Wars is a team-based game that requires cooperation and communication among teammates. You can use the chat function or the voice chat function to communicate with your teammates and share information, such as enemy locations, resource status, attack plans, etc. You can also use the team signals or the emoticons to express your emotions or intentions, such as happy, angry, sad, etc.
-
When cooperating with your teammates, you should follow some basic etiquette and rules, such as:
-
-
Respect your teammates and do not insult or troll them.
-
Listen to your teammates and do not ignore or contradict them.
-
Help your teammates and do not abandon or betray them.
-
Share your resources and do not hog or waste them.
-
Follow your team's strategy and do not act on your own or sabotage it.
-
-
How to avoid common mistakes and deal with hackers
-
Garena Bed Wars is a game that requires skill and strategy, but also luck and chance. Sometimes, you might make some common mistakes that can cost you the game, such as:
-
-
Leaving your bed unprotected or poorly defended.
-
Rushing to attack without proper equipment or backup.
-
Being too greedy or reckless and exposing yourself to danger.
-
Being too passive or timid and missing opportunities to strike.
-
Being too predictable or repetitive and allowing your enemies to counter you.
-
-
To avoid these mistakes, you should always be aware of your surroundings and your enemies' actions. You should also learn from your mistakes and improve your skills and strategies. You can also watch some videos or streams of pro players and learn from their tips and tricks.
-
Sometimes, you might encounter some hackers who use cheats or hacks to gain an unfair advantage in the game, such as flying, speed hacking, aimbotting, etc. These hackers can ruin the fun and balance of the game and make you lose unfairly. To deal with hackers, you should report them to the developers by tapping on the "Report" button at the end of the game. You can also block them by tapping on their name and choosing "Block". You can also avoid playing with hackers by joining games that have anti-cheat systems or moderators.
-
Conclusion
-
Garena Bed Wars is a game that offers endless fun and excitement for players who love sandbox games and PVP games. You can play with your friends or other players from around the world in a team-based game where you have to protect your bed and destroy the opponents' beds. You can also enjoy various features and benefits by downloading and installing Garena Bed Wars APK on your Android device.
-
If you want to play Garena Bed Wars on your Android device, you can follow these steps:
-
-
Download Garena Bed Wars APK from a trusted source by clicking on this link: [Garena Bed Wars APK Download].
-
Allow unknown sources on your device by going to Settings > Security > Unknown Sources.
-
Install Garena Bed Wars APK by tapping on the file and following the instructions.
-
Launch Garena Bed Wars and enjoy the game!
-
-
We hope this article has helped you learn how to play Garena Bed Wars on your Android device. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!
-
FAQs
-
What are the minimum requirements to play Garena Bed Wars on Android?
-
The minimum requirements to play Garena Bed Wars on Android are:
-
-
Android 4.4 or higher
-
2 GB of RAM or higher
-
500 MB of free storage space or higher
-
A stable internet connection
-
-
Is Garena Bed Wars safe and legal to download and play?
-
Garena Bed Wars is safe and legal to download and play as long as you download it from a trusted source, such as [Garena Blockman GO] or [Garena Bed Wars APK Download]. However, you should be careful of downloading Garena Bed Wars from unknown sources, as they might contain viruses or malware that can harm your device or steal your personal information. You should also avoid using cheats or hacks that can get you banned from the game or cause other problems.
-
How can I update Garena Bed Wars to the latest version?
-
You can update Garena Bed Wars to the latest version by following these steps:
-
-
Go to [Garena Blockman GO] or [Garena Bed Wars APK Download] and check if there is a new version available.
-
If there is a new version, download it and install it on your device.
-
Launch Garena Bed Wars and enjoy the new features and improvements.
-
-
You can also enable the auto-update function on your device by going to Settings > Apps > Garena Bed Wars > Auto-update. This way, you will always have the latest version of Garena Bed Wars on your device.
-
How can I contact the developers of Garena Bed Wars for support or feedback?
-
If you have any issues or suggestions regarding Garena Bed Wars, you can contact the developers of Garena Bed Wars by using one of these methods:
-
-
Email: You can send an email to [support@blockmango.net] and describe your problem or idea in detail. You should also attach some screenshots or videos if possible.
-
Facebook: You can visit the official Facebook page of Garena Blockman GO at [https://www.facebook.com/Blockmango-608882679545354/] and leave a message or comment.
-
Discord: You can join the official Discord server of Garena Blockman GO at [https://discord.gg/8fJ9Z7F] and chat with other players or moderators.
-
-
The developers of Garena Bed Wars are always happy to hear from their players and will try their best to solve your problems or implement your suggestions.
-
How can I play Garena Bed Wars with my friends?
-
If you want to play Garena Bed Wars with your friends, you can follow these steps:
-
-
Add your friends as contacts by tapping on the "Contacts" button at the bottom of the screen and entering their usernames or IDs.
-
Create a game by tapping on the "+" icon at the top right corner of the screen and choosing a map and a mode.
-
Set a password for your game by tapping on the "Password" button at the top left corner of the screen and entering a code.
-
Invite your friends by tapping on the "Invite" button at the top left corner of the screen and selecting your contacts.
-
Wait for your friends to join your game and choose a team.
-
Start the game by tapping on the "Start" button at the bottom of the screen and have fun!
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Warpath Jurassic Park for Android and Unleash Your Inner Dinosaur.md b/spaces/1phancelerku/anime-remove-background/Download Warpath Jurassic Park for Android and Unleash Your Inner Dinosaur.md
deleted file mode 100644
index 967bfc1f085c9ea30c72c42778fb14729443d1c8..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Warpath Jurassic Park for Android and Unleash Your Inner Dinosaur.md
+++ /dev/null
@@ -1,67 +0,0 @@
-
-
Jurassic Park Warpath: How to Download and Play on Android
-
Introduction
-
If you are a fan of dinosaurs and fighting games, you might be interested in Jurassic Park Warpath, a classic PlayStation game that lets you control various prehistoric creatures and battle against each other in different locations from the Jurassic Park movies. But did you know that you can also play this game on your Android device? In this article, we will show you how to download and play Jurassic Park Warpath on Android, as well as some tips and tricks to help you enjoy this game even more.
-
What is Jurassic Park Warpath?
-
Jurassic Park Warpath is a fighting video game released on the PlayStation console in 1999. It is a spin-off of the films Jurassic Park and The Lost World: Jurassic Park, and features 14 playable dinosaurs, each with their own moves, animations, and sounds. The game has four modes: arcade, versus, survival, and exhibition. In arcade mode, you choose a dinosaur and fight through eight stages against different opponents. In versus mode, you can play against another player or the computer. In survival mode, you have to defeat as many enemies as possible without dying. In exhibition mode, you can watch two dinosaurs fight without any input from you.
Playing Jurassic Park Warpath on Android has several advantages over playing it on the original PlayStation. First of all, you don't need to buy or own a PlayStation console or a physical copy of the game. You can simply download an emulator and the game file from the internet for free. Second, you can play the game anytime and anywhere, as long as you have your Android device with you. You don't need to plug in any wires or cables, or worry about battery life. Third, you can customize the game settings to your liking, such as changing the graphics quality, the sound volume, the controller layout, and more. You can also save and load your progress at any point in the game.
-
How to download Jurassic Park Warpath for Android
-
Step 1: Download an emulator
-
An emulator is a software that allows you to run games from other platforms on your device. To play Jurassic Park Warpath on Android, you need an emulator that can run PlayStation games. There are many emulators available for Android, but one of the most popular and reliable ones is ePSXe. You can download ePSXe from the Google Play Store or from its official website . After downloading ePSXe, install it on your device and open it.
-
Step 2: Download the game file
-
The game file is the data that contains the actual game content. To play Jurassic Park Warpath on Android, you need to download the game file in ISO or BIN format. There are many websites that offer free downloads of PlayStation games, but one of the most trusted ones is Internet Archive . You can search for "Warpath Jurassic Park" on Internet Archive and download the file that has "USA" in its name. After downloading the file, save it in a folder that you can easily access.
-
Step 3: Install and run the game
-
Now that you have both the emulator and the game file, you are ready to install and run the game. To do this, open ePSXe, choose Run Game, and browse to the folder where you saved the ISO or BIN file; select it and the game should start. If you enjoy Jurassic Park Warpath, you might also like similar games such as War of the Monsters. You can also check out our website for more recommendations on dinosaur and fighting games.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Dragon Ball Z Budokai Tenkaichi 3 PC Version How to Get and Install It.md b/spaces/1phancelerku/anime-remove-background/Dragon Ball Z Budokai Tenkaichi 3 PC Version How to Get and Install It.md
deleted file mode 100644
index 4327ea6a0890e458ed6de10610869856d107d421..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Dragon Ball Z Budokai Tenkaichi 3 PC Version How to Get and Install It.md
+++ /dev/null
@@ -1,173 +0,0 @@
-
-
How to Download Dragon Ball Z Budokai Tenkaichi 3 on PC
-
Dragon Ball Z Budokai Tenkaichi 3 is one of the most popular and beloved games in the Dragon Ball franchise. It features over 150 characters, 30 stages, and a variety of game modes that will keep you entertained for hours. But what if you want to play it on your PC instead of your PlayStation 2 or Wii console? Well, you're in luck, because in this article, we will show you how to download and play Dragon Ball Z Budokai Tenkaichi 3 on PC using a free emulator called Dolphin. We will also give you some tips and tricks to enhance your gaming experience. So, without further ado, let's get started!
-
Dragon Ball Z Budokai Tenkaichi 3 is a fighting game based on the anime and manga series Dragon Ball. It was developed by Spike and published by Atari for the PlayStation 2 and Wii in 2007. It is the third and final installment in the Budokai Tenkaichi series, which is also known as Dragon Ball Z: Sparking! in Japan.
-
The game follows the events of the Dragon Ball story from the Saiyan Saga to the Kid Buu Saga, as well as some original scenarios and what-if scenarios. You can choose from over 150 playable characters, each with their own unique moves, transformations, and abilities. You can also customize your characters with various items and skills that affect their stats and performance.
-
The game offers several modes of play, such as Story Mode, where you can relive the epic battles of the series; Dragon History, where you can explore different timelines and scenarios; Ultimate Battle, where you can test your skills against various opponents; World Tournament, where you can compete for prizes and glory; Duel, where you can fight against a friend or the CPU; Training, where you can practice your moves and combos; Evolution Z, where you can edit your character's attributes; Data Center, where you can view your records and achievements; Replay, where you can watch your saved replays; and Options, where you can adjust the game settings.
-
Why play it on PC?
-
Dragon Ball Z Budokai Tenkaichi 3 is a great game that deserves to be played by every fan of the series. However, not everyone has access to a PlayStation 2 or Wii console, or maybe they just prefer to play games on their PC. That's why playing it on PC using an emulator is a good option for many reasons:
-
-
You can enjoy the game in high-definition graphics and sound quality, thanks to the emulator's enhancements and features.
-
You can use any controller or keyboard that suits your preference and comfort.
-
You can save your progress anytime and anywhere, thanks to the emulator's save states and memory cards.
-
You can unlock all the characters and stages without having to complete the game or use cheat codes.
-
You can use cheats and mods to modify the game according to your liking.
-
You can play online with other players using netplay or LAN.
-
You can avoid any issues or errors that may arise from aging console hardware or worn game discs.
Requirements and Preparations
-
System Requirements
-
Before you download and play Dragon Ball Z Budokai Tenkaichi 3 on PC, you need to make sure that your PC meets the minimum system requirements for running the emulator and the game. Here are the recommended system requirements for Dolphin Emulator:
-
-
-
| Component | Minimum | Recommended |
| --- | --- | --- |
| Operating System | Windows 7 (x64) or above, macOS 10.10 or above, Linux | Windows 10 (x64), macOS 10.13 or above, Linux |
| Processor (CPU) | A CPU with SSE2 support and a high single-thread performance rating | An Intel Core i5-4670K or AMD Ryzen 5 3600 or better |
| Memory (RAM) | 2 GB or more | 4 GB or more |
| Graphics Card (GPU) | A GPU that supports DirectX 11.1 or OpenGL 4.4 | A GPU that supports Vulkan, DirectX 12, or Metal |
| Storage Space | At least 5 GB of free space for the emulator and the game files | At least 10 GB of free space for the emulator and the game files |
| Controller or Keyboard | A compatible controller or keyboard that can be mapped to the emulator | A PlayStation 2 or Wii controller with an adapter, or a controller that mimics their layout |
-
-
-
Emulator and Game Files
-
The emulator that we will use to play Dragon Ball Z Budokai Tenkaichi 3 on PC is Dolphin Emulator, which is free and open-source software that can run games for the Nintendo GameCube and Wii consoles. You can download the latest version of Dolphin Emulator from its official website or from its GitHub page. You can choose between the stable version, which is more thoroughly tested, and the development version, which is more up to date and has more features but may have some bugs and issues.
-
The game file that we will use to play Dragon Ball Z Budokai Tenkaichi 3 on PC is an ISO file, which is a disc image of the original game disc. You can either rip your own game disc using a DVD drive and software like ImgBurn, or you can download an ISO file from a reputable source online. However, downloading an ISO file may be illegal in some countries, so we do not condone or encourage piracy. Please only download an ISO file if you own a legal copy of the game.
-
how to install dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc download free full version
-dragon ball z budokai tenkaichi 3 pc game download
-how to play dragon ball z budokai tenkaichi 3 on pc with dolphin emulator
-dragon ball z budokai tenkaichi 3 pc download utorrent
-dragon ball z budokai tenkaichi 3 pc system requirements
-dragon ball z budokai tenkaichi 3 pc download original version
-how to get dragon ball z budokai tenkaichi 3 on pc for free
-dragon ball z budokai tenkaichi 3 pc download highly compressed
-dragon ball z budokai tenkaichi 3 pc iso download
-how to run dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc download windows 10
-dragon ball z budokai tenkaichi 3 pc gameplay
-how to configure dolphin emulator for dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc download rar
-how to use ps2 controller for dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc mods download
-how to fix lag in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc download no survey
-how to unlock all characters in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc cheats codes
-how to save game in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc online multiplayer
-how to change language in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc download ocean of games
-how to do fusion in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc keyboard controls
-how to transform in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc download softonic
-how to use ultimate attacks in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc requirements test
-how to play story mode in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc download apunkagames
-how to do potara fusion in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc best settings
-how to play vs mode in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc download mega.nz
-how to do team battle in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc download zip file
-how to do special moves in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc steam
-how to play tournament mode in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc download google drive
-how to do super saiyan in dragon ball z budokai tenkaichi 3 on pc
-dragon ball z budokai tenkaichi 3 pc crack download
-how to play survival mode in dragon ball z budokai tenkaichi 3 on pc
-
Controller and Keyboard Settings
-
To play Dragon Ball Z Budokai Tenkaichi 3 on PC, you will need a controller or a keyboard that can be configured to the emulator. You can use any controller or keyboard that is compatible with your PC, but we recommend using a PlayStation 2 or Wii controller with an adapter, or a controller that mimics their layout, such as a Logitech F310 or an Xbox One controller. This is because the game was designed for these controllers, and using them will give you a more authentic and comfortable experience.
-
To configure your controller or keyboard to the emulator, you will need to go to the Controllers menu in Dolphin Emulator and select the appropriate device for each port. For example, if you want to use a PlayStation 2 controller for Port 1, you will need to select Emulated Wii Remote for Port 1, and then click Configure. Then, you will need to map each button and axis of your controller to the corresponding input of the Wii Remote. You can also adjust the sensitivity and deadzone of each input if needed. You can also save your configuration as a profile for future use.
-
If you want to use a keyboard for Port 1, you will need to select Emulated Wii Remote for Port 1, and then click Configure. Then, you will need to map each key of your keyboard to the corresponding input of the Wii Remote. You can also adjust the sensitivity and deadzone of each input if needed. You can also save your configuration as a profile for future use.
-
You can also configure other ports if you want to play with multiple players or use other devices. For example, if you want to use a GameCube controller for Port 2, you will need to select Standard Controller for Port 2, and then click Configure. Then, you will need to map each button and axis of your controller to the corresponding input of the GameCube controller. You can also adjust the sensitivity and deadzone of each input if needed. You can also save your configuration as a profile for future use.
-
Steps to Download and Play Dragon Ball Z Budokai Tenkaichi 3 on PC
-
Step 1: Download and Install Dolphin Emulator
-
The first step to download and play Dragon Ball Z Budokai Tenkaichi 3 on PC is to download and install Dolphin Emulator on your PC. You can download the latest version of Dolphin Emulator from its official website or from its GitHub page. You can choose between the stable version or the development version, depending on your preference. Once you have downloaded the emulator, you will need to extract it to a folder of your choice using a software like WinRAR or 7-Zip. Then, you will need to run the Dolphin.exe file to launch the emulator.
-
Step 2: Download and Extract Dragon Ball Z Budokai Tenkaichi 3 ISO File
-
The second step to download and play Dragon Ball Z Budokai Tenkaichi 3 on PC is to download and extract the Dragon Ball Z Budokai Tenkaichi 3 ISO file on your PC. You can either rip your own game disc using a DVD drive and a software like ImgBurn, or you can download an ISO file from a reputable source online. However, downloading an ISO file may be illegal in some countries, so we do not condone or encourage piracy. Please only download an ISO file if you own a legal copy of the game.
-
Once you have downloaded the ISO file, you will need to extract it to a folder of your choice using a software like WinRAR or 7-Zip. Then, you will need to move the ISO file to a folder where you can easily access it from the emulator.
-
Step 3: Configure Dolphin Emulator Settings
-
The third step to download and play Dragon Ball Z Budokai Tenkaichi 3 on PC is to configure Dolphin Emulator settings to optimize the game performance and quality. You can access the settings menu by clicking on the Options tab in Dolphin Emulator. Here are some of the settings that you can adjust:
-
-
General: Here, you can change the language, theme, interface, and hotkeys of the emulator.
-
Graphics: Here, you can change the video backend, resolution, aspect ratio, fullscreen mode, anti-aliasing, anisotropic filtering, enhancements, hacks, and advanced options of the emulator.
-
Audio: Here, you can change the audio backend, volume, latency, stretching, and DSP options of the emulator.
-
Controllers: Here, you can configure your controller or keyboard settings for each port.
-
Paths: Here, you can add or remove folders where the emulator will search for game files.
-
Config: Here, you can change the general, interface, audio, gamecube, wii, advanced, and debug options of the emulator.
-
-
You can experiment with different settings to find the best ones for your PC and game. However, here are some recommended settings that work well for most users:
-
-
Graphics: Video Backend - Vulkan (if supported by your GPU), Resolution - Native (640x528) or higher (up to 4x), Aspect Ratio - Auto or Force 16:9 (if you want widescreen), Fullscreen Mode - On (if you want fullscreen), Anti-Aliasing - None or MSAA 2x (if your GPU can handle it), Anisotropic Filtering - 1x or higher (up to 16x), Enhancements - Scaled EFB Copy (On), Force Texture Filtering (On), Disable Fog (Off), Per-Pixel Lighting (Off), Widescreen Hack (On if you want widescreen), Hacks - Skip EFB Access from CPU (Off), Ignore Format Changes (On), Store EFB Copies to Texture Only (On), Texture Cache Accuracy (Fast), External Frame Buffer (Disable), Fast Depth Calculation (On), Disable Bounding Box (On).
-
Audio: Audio Backend - Cubeb or XAudio2 (depending on your OS), Volume - 100% or lower (depending on your preference), Latency - Low or Medium (depending on your CPU), Stretching - Off or Low (depending on your preference), DSP Options - DSP HLE Emulation (Fast) or DSP LLE Recompiler (Accurate).
-
Controllers: Configure your controller or keyboard settings according to your preference and comfort.
-
Paths: Add the folder where you have the Dragon Ball Z Budokai Tenkaichi 3 ISO file.
-
Config: General - Enable Dual Core (On), Enable Idle Skipping (On), JIT Recompiler (Recommended), Interface - Use Panic Handlers (Off), Audio - DSP Emulator Engine (DSP HLE Emulation or DSP LLE Recompiler), GameCube - Device Settings (None for all ports), Wii - Aspect Ratio (16:9 if you want widescreen), Advanced - CPU Clock Override (100% or higher if your CPU can handle it), Debug - Enable CPU Clock Override (On if you want to use CPU Clock Override).
-
-
Step 4: Load the Game and Enjoy
-
The final step to download and play Dragon Ball Z Budokai Tenkaichi 3 on PC is to load the game and enjoy it. You can load the game by clicking on the Open button in Dolphin Emulator and browsing to the folder where you have the ISO file. Alternatively, you can drag and drop the ISO file to the Dolphin Emulator window. The game will start automatically and you will see the title screen. You can use your controller or keyboard to navigate the menus and select your game mode. You can also access the emulator's menu by pressing Esc or F1 on your keyboard. From there, you can save or load your progress, change your settings, take screenshots, record videos, and more.
-
Tips and Tricks for Playing Dragon Ball Z Budokai Tenkaichi 3 on PC
-
How to Unlock All Characters and Stages
-
One of the best features of Dragon Ball Z Budokai Tenkaichi 3 is the huge roster of characters and stages that you can choose from. However, not all of them are available from the start. You will need to unlock them by completing certain tasks or using cheat codes. Here are some ways to unlock all characters and stages:
-
-
Complete Story Mode: By completing Story Mode, you will unlock most of the characters and stages in the game. You will also unlock Dragon History, where you can play different scenarios and what-if stories.
-
Complete Ultimate Battle: By completing Ultimate Battle, you will unlock some of the hidden characters and stages in the game. You will also unlock Sim Dragon, where you can create your own team of fighters and compete against other teams.
-
Use Cheat Codes: By using cheat codes, you can unlock all characters and stages in the game instantly. However, this may affect your game experience and achievements. To use cheat codes, you will need to enable cheats in Dolphin Emulator's settings, and then download a cheat file for Dragon Ball Z Budokai Tenkaichi 3 from a reliable source online. Then, you will need to place the cheat file in the Cheats folder of Dolphin Emulator, and then activate the cheats in the game properties.
-
-
How to Use Cheats and Mods
-
Besides unlocking all characters and stages, you can also use cheats and mods to modify the game according to your liking. You can use cheats to change your character's stats, abilities, costumes, transformations, and more. You can also use mods to add new characters, stages, music, sound effects, graphics, and more. Here are some ways to use cheats and mods:
-
-
Use Cheat Codes: As mentioned above, you can use cheat codes to modify the game by enabling cheats in Dolphin Emulator's settings, downloading a cheat file for Dragon Ball Z Budokai Tenkaichi 3 from a reliable source online, placing it in the Cheats folder of Dolphin Emulator, and activating it in the game properties.
-
Use Mods: To use mods, you will need to download a mod file for Dragon Ball Z Budokai Tenkaichi 3 from a reputable source online, such as YouTube or Reddit. Then, you will need to extract it to a folder of your choice using a software like WinRAR or 7-Zip. Then, you will need to replace or add the mod files to the ISO file of Dragon Ball Z Budokai Tenkaichi 3 using a software like Wii Backup Manager or WiiScrubber. Then, you will need to load the modified ISO file in Dolphin Emulator and enjoy the mod.
-
-
How to Fix Common Issues and Errors
-
While playing Dragon Ball Z Budokai Tenkaichi 3 on PC using Dolphin Emulator, you may encounter some issues or errors that may affect your game performance or quality. Here are some common issues and errors that you may face and how to fix them:
-
-
Black Screen or Freezing: This may happen if your PC does not meet the system requirements for running the emulator or the game, or if your emulator or game settings are not optimal. To fix this, you can try the following solutions: - Lower your resolution, anti-aliasing, anisotropic filtering, and enhancements settings in the emulator's graphics menu. - Disable any cheats or mods that may be causing conflicts or errors in the game. - Update your video drivers and DirectX or Vulkan libraries to the latest versions. - Run the emulator and the game as administrator and in compatibility mode for Windows 7 or 8. - Check your ISO file for any corruption or damage using a software like WinMD5 or HashMyFiles.
-
Slowdown or Lag: This may happen if your PC is not powerful enough to run the emulator or the game at full speed, or if your emulator or game settings are too high. To fix this, you can try the following solutions: - Enable Dual Core and Idle Skipping in the emulator's general settings menu. - Enable JIT Recompiler and Disable Bounding Box in the emulator's advanced settings menu. - Enable CPU Clock Override and set it to a higher percentage in the emulator's debug settings menu. - Lower your resolution, anti-aliasing, anisotropic filtering, and enhancements settings in the emulator's graphics menu. - Disable any cheats or mods that may be slowing down the game.
-
Audio Issues: This may happen if your audio settings are not compatible with the game, or if your PC's sound card or speakers are not working properly. To fix this, you can try the following solutions: - Change your audio backend to Cubeb or XAudio2 in the emulator's audio settings menu. - Lower your latency and enable stretching in the emulator's audio settings menu. - Use DSP HLE Emulation or DSP LLE Recompiler in the emulator's audio settings menu. - Update your sound drivers and codecs to the latest versions. - Check your sound card and speakers for any defects or malfunctions.
-
Controller Issues: This may happen if your controller is not configured correctly to the emulator, or if your controller is not working properly. To fix this, you can try the following solutions: - Configure your controller settings for each port in the emulator's controllers menu. - Adjust the sensitivity and deadzone of each input in the emulator's controllers menu. - Save your controller configuration as a profile in the emulator's controllers menu. - Update your controller drivers and firmware to the latest versions. - Check your controller for any defects or malfunctions.
-
-
Conclusion
-
Dragon Ball Z Budokai Tenkaichi 3 is a fantastic game that every fan of Dragon Ball should play. However, if you don't have a PlayStation 2 or Wii console, you can still enjoy it on your PC using Dolphin Emulator. In this article, we have shown you how to download and play Dragon Ball Z Budokai Tenkaichi 3 on PC using Dolphin Emulator. We have also given you some tips and tricks to unlock all characters and stages, use cheats and mods, and fix common issues and errors. We hope that this article has been helpful and informative for you. Now, go ahead and have fun playing Dragon Ball Z Budokai Tenkaichi 3 on PC!
-
FAQs
-
Here are some frequently asked questions about playing Dragon Ball Z Budokai Tenkaichi 3 on PC:
-
-
Q: Is Dolphin Emulator safe and legal to use? - A: Yes, Dolphin Emulator is safe and legal to use, as long as you download it from its official website or GitHub page, and as long as you don't use it for piracy or illegal activities.
-
Q: Is Dragon Ball Z Budokai Tenkaichi 3 compatible with Dolphin Emulator? - A: Yes, Dragon Ball Z Budokai Tenkaichi 3 is compatible with Dolphin Emulator, both for the PlayStation 2 and Wii versions. However, some minor glitches or bugs may occur depending on your PC and game settings.
-
Q: How can I play Dragon Ball Z Budokai Tenkaichi 3 online with other players? - A: You can play Dragon Ball Z Budokai Tenkaichi 3 online with other players using netplay or LAN in Dolphin Emulator. To use netplay, you will need to join or host a netplay session with other players who have the same version of Dolphin Emulator and Dragon Ball Z Budokai Tenkaichi 3 as you. To use LAN, you will need to connect your PC with other PCs that have Dolphin Emulator and Dragon Ball Z Budokai Tenkaichi 3 installed on them.
-
Q: How can I improve my game performance and quality? - A: You can improve your game performance and quality by adjusting your emulator and game settings according to your PC's specifications and preferences. You can also update your PC's drivers and libraries to the latest versions. You can also use cheats and mods to enhance your game features and graphics.
-
Q: Where can I find more information and support for playing Dragon Ball Z Budokai Tenkaichi 3 on PC? - A: You can find more information and support for playing Dragon Ball Z Budokai Tenkaichi 3 on PC by visiting the official website and forums of Dolphin Emulator, or by searching online for guides, tutorials, videos, and reviews from other users and experts.
")
-
- with gr.Row().style(equal_height=False):
- with gr.Column(variant='panel'):
- with gr.Tabs(elem_id="sadtalker_source_image"):
- with gr.TabItem('Upload image'):
- with gr.Row():
- source_image = gr.Image(label="Source image", source="upload", type="filepath", elem_id="img2img_image").style(width=512)
-
- with gr.Tabs(elem_id="sadtalker_driven_audio"):
- with gr.TabItem('Upload OR TTS'):
- with gr.Column(variant='panel'):
- driven_audio = gr.Audio(label="Input audio", source="upload", type="filepath")
-
- with gr.Column(variant='panel'):
- with gr.Tabs(elem_id="sadtalker_checkbox"):
- with gr.TabItem('Settings'):
- gr.Markdown("need help? please visit our [best practice page](https://github.com/OpenTalker/SadTalker/blob/main/docs/best_practice.md) for more detials")
- with gr.Column(variant='panel'):
- # width = gr.Slider(minimum=64, elem_id="img2img_width", maximum=2048, step=8, label="Manually Crop Width", value=512) # img2img_width
- # height = gr.Slider(minimum=64, elem_id="img2img_height", maximum=2048, step=8, label="Manually Crop Height", value=512) # img2img_width
- pose_style = gr.Slider(minimum=0, maximum=46, step=1, label="Pose style", value=0) #
- size_of_image = gr.Radio([256, 512], value=256, label='face model resolution', info="use 256/512 model?") #
- preprocess_type = gr.Radio(['crop', 'resize','full', 'extcrop', 'extfull'], value='crop', label='preprocess', info="How to handle input image?")
- is_still_mode = gr.Checkbox(label="Still Mode (fewer hand motion, works with preprocess `full`)")
- batch_size = gr.Slider(label="batch size in generation", step=1, maximum=10, value=2)
- enhancer = gr.Checkbox(label="GFPGAN as Face enhancer")
- submit = gr.Button('Generate', elem_id="sadtalker_generate", variant='primary')
-
- with gr.Tabs(elem_id="sadtalker_genearted"):
- gen_video = gr.Video(label="Generated video", format="mp4").style(width=256)
-
- submit.click(
- fn=sad_talker.test,
- inputs=[source_image,
- driven_audio,
- preprocess_type,
- is_still_mode,
- enhancer,
- batch_size,
- size_of_image,
- pose_style
- ],
- outputs=[gen_video]
- )
-
-
-
-demo.queue().launch()
\ No newline at end of file
diff --git a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/buffer.cpp b/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/buffer.cpp
deleted file mode 100644
index 0ac0fa7bc3ced0447ba4caa359355dd4252670b3..0000000000000000000000000000000000000000
--- a/spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/buffer.cpp
+++ /dev/null
@@ -1,87 +0,0 @@
-#include "libipc/buffer.h"
-#include "libipc/utility/pimpl.h"
-
-#include
-
-namespace ipc {
-
-bool operator==(buffer const & b1, buffer const & b2) {
- return (b1.size() == b2.size()) && (std::memcmp(b1.data(), b2.data(), b1.size()) == 0);
-}
-
-bool operator!=(buffer const & b1, buffer const & b2) {
- return !(b1 == b2);
-}
-
-class buffer::buffer_ : public pimpl {
-public:
- void* p_;
- std::size_t s_;
- void* a_;
- buffer::destructor_t d_;
-
- buffer_(void* p, std::size_t s, buffer::destructor_t d, void* a)
- : p_(p), s_(s), a_(a), d_(d) {
- }
-
- ~buffer_() {
- if (d_ == nullptr) return;
- d_((a_ == nullptr) ? p_ : a_, s_);
- }
-};
-
-buffer::buffer()
- : buffer(nullptr, 0, nullptr, nullptr) {
-}
-
-buffer::buffer(void* p, std::size_t s, destructor_t d)
- : p_(p_->make(p, s, d, nullptr)) {
-}
-
-buffer::buffer(void* p, std::size_t s, destructor_t d, void* additional)
- : p_(p_->make(p, s, d, additional)) {
-}
-
-buffer::buffer(void* p, std::size_t s)
- : buffer(p, s, nullptr) {
-}
-
-buffer::buffer(char const & c)
- : buffer(const_cast(&c), 1) {
-}
-
-buffer::buffer(buffer&& rhs)
- : buffer() {
- swap(rhs);
-}
-
-buffer::~buffer() {
- p_->clear();
-}
-
-void buffer::swap(buffer& rhs) {
- std::swap(p_, rhs.p_);
-}
-
-buffer& buffer::operator=(buffer rhs) {
- swap(rhs);
- return *this;
-}
-
-bool buffer::empty() const noexcept {
- return (impl(p_)->p_ == nullptr) || (impl(p_)->s_ == 0);
-}
-
-void* buffer::data() noexcept {
- return impl(p_)->p_;
-}
-
-void const * buffer::data() const noexcept {
- return impl(p_)->p_;
-}
-
-std::size_t buffer::size() const noexcept {
- return impl(p_)->s_;
-}
-
-} // namespace ipc
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/pndm.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/pndm.md
deleted file mode 100644
index 0cb4799b3c8110587696f93113461518fd7d011d..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/pndm.md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-# PNDM
-
-[Pseudo Numerical methods for Diffusion Models on manifolds](https://huggingface.co/papers/2202.09778) (PNDM) is by Luping Liu, Yi Ren, Zhijie Lin and Zhou Zhao.
-
-The abstract from the paper is:
-
-*Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules.*
-
-The original codebase can be found at [luping-liu/PNDM](https://github.com/luping-liu/PNDM).
-
-
-
-Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
-
-
-
-## PNDMPipeline
-[[autodoc]] PNDMPipeline
- - all
- - __call__
-
-## ImagePipelineOutput
-[[autodoc]] pipelines.ImagePipelineOutput
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/depth2img.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/depth2img.md
deleted file mode 100644
index 09814f387b724071d5c29a28dec9efd9b2bfc02f..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/depth2img.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-# Depth-to-image
-
-The Stable Diffusion model can also infer depth based on an image using [MiDas](https://github.com/isl-org/MiDaS). This allows you to pass a text prompt and an initial image to condition the generation of new images as well as a `depth_map` to preserve the image structure.
-
-
-
-Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
-
-If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!
-
-
-
-## StableDiffusionDepth2ImgPipeline
-
-[[autodoc]] StableDiffusionDepth2ImgPipeline
- - all
- - __call__
- - enable_attention_slicing
- - disable_attention_slicing
- - enable_xformers_memory_efficient_attention
- - disable_xformers_memory_efficient_attention
- - load_textual_inversion
- - load_lora_weights
- - save_lora_weights
-
-## StableDiffusionPipelineOutput
-
-[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/dance_diffusion/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/dance_diffusion/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r50-d8_512x512_80k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r50-d8_512x512_80k_ade20k.py
deleted file mode 100644
index 835375cb0447378fc76431158eb0b8fc011d36bc..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r50-d8_512x512_80k_ade20k.py
+++ /dev/null
@@ -1,6 +0,0 @@
-_base_ = [
- '../_base_/models/encnet_r50-d8.py', '../_base_/datasets/ade20k.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
-model = dict(
- decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/point_rend/pointrend_r101_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/point_rend/pointrend_r101_512x1024_80k_cityscapes.py
deleted file mode 100644
index a8c14c8cf91d7cbcc05065a6dc387101dff8cdf6..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/point_rend/pointrend_r101_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './pointrend_r50_512x1024_80k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/Ankush05/Newcode/getvalues.py b/spaces/Ankush05/Newcode/getvalues.py
deleted file mode 100644
index e063f5e00b0f84eb9a83de1cb95a47d6987b82b1..0000000000000000000000000000000000000000
--- a/spaces/Ankush05/Newcode/getvalues.py
+++ /dev/null
@@ -1,87 +0,0 @@
-
-import re
-# from listen import *
-
-# find time in the string input provided by the user
-def findTime(input):
- # time = re.search(r'\d{1,2}:\d{2}', input)
- # meridiem = re.search(r'\b(am|pm)\b', input)
- # if time:
- # tvalue = f"{time.group()} {meridiem.group()}"
- # return tvalue
- # else:
- # return "notime"
- time_regex1 = r"(1[0-2]|[1-9]):[0-5][0-9] (am|AM|PM|pm)"
- time_search = re.search(time_regex1, input)
- if time_search:
- time = time_search.group(0)
- # meridian = time_search.group(2)
- return time
- else:
- time_regex2 = r"(1[0-2]|[1-9])\s?(am|AM|pm|PM)"
- time_search = re.search(time_regex2, input)
- if time_search:
- time = time_search.group(0)
- # meridian = time_search.group(2)
- return time
- else:
- return "notime"
-
-# find number in the string input provided by the user
-def findNumber(input):
- number = re.search(r'\d+(?:st|nd|rd|th)', input)
- if number:
- return number.group()
- else:
- return "nonum"
-
-# # find date in the string input provided by the user
-def findDate(input):
- date = re.search(r'\d{1,2}/\d{1,2}/\d{4}', input)
- if date:
- return date.group()
- else:
- return "nodate"
-
-# find month in the string input provided by the user
-def findMonth(input):
- month = re.search(r'\b(january|february|march|april|may|june|july|august|september|october|november|december|next month)\b', input)
- if month:
- return month.group()
- else:
- return "nomonth"
-
-# find day in the string input provided by the user
-def findDay(input):
- day = re.search(r'\b(monday|tuesday|wednesday|thursday|friday|saturday|sunday|tomorrow|day after tomorrow|this week|next week|today)\b', input)
- if day:
- return day.group()
- else:
- return "noday"
-
-def findrepeat(input):
- repeat = re.search(r'\b(daily|everyday|every week|every month|every sunday|every monday|every tuesday|every wednesday|every thursday|every friday|every saturday)\b', input)
- if repeat:
- return repeat.group()
- else:
- return "norepeat"
-
-
-def getValues(query):
- time = findTime(query)
- num = findNumber(query)
- reps = findrepeat(query)
- date = findDate(query)
- month = findMonth(query)
- day = findDay(query)
- message = query.lower().replace(num, "").replace(month,"").replace(time, "").replace(day, "").replace(reps, "").replace("create a reminder", "").replace("remind me to", "").replace("cosmo", "").replace("remind", "").replace("at", "")
- values = {"message": message,
- "time": time,
- "day": day,
- "date": date,
- "reps": reps,
- "num": num,
- "month": month
- }
- return values
-
\ No newline at end of file
diff --git a/spaces/Artrajz/vits-simple-api/bert_vits2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/Artrajz/vits-simple-api/bert_vits2/bert/chinese-roberta-wwm-ext-large/README.md
deleted file mode 100644
index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000
--- a/spaces/Artrajz/vits-simple-api/bert_vits2/bert/chinese-roberta-wwm-ext-large/README.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-language:
-- zh
-tags:
-- bert
-license: "apache-2.0"
----
-
-# Please use 'Bert' related functions to load this model!
-
-## Chinese BERT with Whole Word Masking
-For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
-
-**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
-Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
-
-This repository is developed based on:https://github.com/google-research/bert
-
-You may also interested in,
-- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
-- Chinese MacBERT: https://github.com/ymcui/MacBERT
-- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
-- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
-- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
-
-More resources by HFL: https://github.com/ymcui/HFL-Anthology
-
-## Citation
-If you find the technical report or resource is useful, please cite the following technical report in your paper.
-- Primary: https://arxiv.org/abs/2004.13922
-```
-@inproceedings{cui-etal-2020-revisiting,
- title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
- author = "Cui, Yiming and
- Che, Wanxiang and
- Liu, Ting and
- Qin, Bing and
- Wang, Shijin and
- Hu, Guoping",
- booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
- month = nov,
- year = "2020",
- address = "Online",
- publisher = "Association for Computational Linguistics",
- url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
- pages = "657--668",
-}
-```
-- Secondary: https://arxiv.org/abs/1906.08101
-```
-@article{chinese-bert-wwm,
- title={Pre-Training with Whole Word Masking for Chinese BERT},
- author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
- journal={arXiv preprint arXiv:1906.08101},
- year={2019}
- }
-```
\ No newline at end of file
diff --git a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/tokenizer.py b/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/tokenizer.py
deleted file mode 100644
index ee4d28450ec5dd12a79daf38cf3088e9e73c2cd5..0000000000000000000000000000000000000000
--- a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/tokenizer.py
+++ /dev/null
@@ -1,197 +0,0 @@
-""" CLIP tokenizer
-
-Copied from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI.
-"""
-import gzip
-import html
-import os
-from functools import lru_cache
-from typing import Union, List
-
-import ftfy
-import regex as re
-import torch
-
-
-@lru_cache()
-def default_bpe():
- return os.path.join(
- os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz"
- )
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
- Returns list of utf-8 byte and a corresponding list of unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
- This is a signficant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
- And avoids mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = (
- list(range(ord("!"), ord("~") + 1))
- + list(range(ord("¡"), ord("¬") + 1))
- + list(range(ord("®"), ord("ÿ") + 1))
- )
- cs = bs[:]
- n = 0
- for b in range(2**8):
- if b not in bs:
- bs.append(b)
- cs.append(2**8 + n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
-
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-def basic_clean(text):
- text = ftfy.fix_text(text)
- text = html.unescape(html.unescape(text))
- return text.strip()
-
-
-def whitespace_clean(text):
- text = re.sub(r"\s+", " ", text)
- text = text.strip()
- return text
-
-
-class SimpleTokenizer(object):
- def __init__(self, bpe_path: str = default_bpe(), special_tokens=None):
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- merges = gzip.open(bpe_path).read().decode("utf-8").split("\n")
- merges = merges[1 : 49152 - 256 - 2 + 1]
- merges = [tuple(merge.split()) for merge in merges]
- vocab = list(bytes_to_unicode().values())
- vocab = vocab + [v + "" for v in vocab]
- for merge in merges:
- vocab.append("".join(merge))
- if not special_tokens:
- special_tokens = ["", ""]
- else:
- special_tokens = ["", ""] + special_tokens
- vocab.extend(special_tokens)
- self.encoder = dict(zip(vocab, range(len(vocab))))
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.bpe_ranks = dict(zip(merges, range(len(merges))))
- self.cache = {t: t for t in special_tokens}
- special = "|".join(special_tokens)
- self.pat = re.compile(
- special + r"""|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""",
- re.IGNORECASE,
- )
-
- self.vocab_size = len(self.encoder)
- self.all_special_ids = [self.encoder[t] for t in special_tokens]
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
- word = tuple(token[:-1]) + (token[-1] + "",)
- pairs = get_pairs(word)
-
- if not pairs:
- return token + ""
-
- while True:
- bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except:
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
- new_word.append(first + second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = " ".join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text):
- bpe_tokens = []
- text = whitespace_clean(basic_clean(text)).lower()
- for token in re.findall(self.pat, text):
- token = "".join(self.byte_encoder[b] for b in token.encode("utf-8"))
- bpe_tokens.extend(
- self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" ")
- )
- return bpe_tokens
-
- def decode(self, tokens):
- text = "".join([self.decoder[token] for token in tokens])
- text = (
- bytearray([self.byte_decoder[c] for c in text])
- .decode("utf-8", errors="replace")
- .replace("", " ")
- )
- return text
-
-
-_tokenizer = SimpleTokenizer()
-
-
-def tokenize(
- texts: Union[str, List[str]], context_length: int = 77
-) -> torch.LongTensor:
- """
- Returns the tokenized representation of given input string(s)
-
- Parameters
- ----------
- texts : Union[str, List[str]]
- An input string or a list of input strings to tokenize
- context_length : int
- The context length to use; all CLIP models use 77 as the context length
-
- Returns
- -------
- A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]
- """
- if isinstance(texts, str):
- texts = [texts]
-
- sot_token = _tokenizer.encoder[""]
- eot_token = _tokenizer.encoder[""]
- all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
- result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
-
- for i, tokens in enumerate(all_tokens):
- if len(tokens) > context_length:
- tokens = tokens[:context_length] # Truncate
- result[i, : len(tokens)] = torch.tensor(tokens)
-
- return result
diff --git a/spaces/Axolotlily/DalleMini/README.md b/spaces/Axolotlily/DalleMini/README.md
deleted file mode 100644
index fbe96503a91dc4d908c5ebf607a10511cb41bd17..0000000000000000000000000000000000000000
--- a/spaces/Axolotlily/DalleMini/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: DalleMini
-emoji: 🚀
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.0.20
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/BAAI/AltDiffusion-m9/ui_functions.py b/spaces/BAAI/AltDiffusion-m9/ui_functions.py
deleted file mode 100644
index da68ac28f05fb26c7f468047dcdbab750319c84f..0000000000000000000000000000000000000000
--- a/spaces/BAAI/AltDiffusion-m9/ui_functions.py
+++ /dev/null
@@ -1,240 +0,0 @@
-import re
-import gradio as gr
-from PIL import Image, ImageFont, ImageDraw, ImageFilter, ImageOps
-from io import BytesIO
-import base64
-import re
-
-def change_img_choices(sample_size):
- choices = []
- for i in range(int(sample_size)):
- choices.append(
- '图片{}(img{})'.format(i+1,i+1)
- )
- update_choices = gr.update(choices=choices)
- return update_choices
-
-def change_image_editor_mode(choice, cropped_image, masked_image, resize_mode, width, height):
- if choice == "Mask":
- update_image_result = update_image_mask(cropped_image, resize_mode, width, height)
- return [gr.update(visible=False), update_image_result, gr.update(visible=False), gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), gr.update(visible=True)]
-
- update_image_result = update_image_mask(masked_image["image"] if masked_image is not None else None, resize_mode, width, height)
- return [update_image_result, gr.update(visible=False), gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), gr.update(visible=False), gr.update(visible=False)]
-
-def update_image_mask(cropped_image, resize_mode, width, height):
- resized_cropped_image = resize_image(resize_mode, cropped_image, width, height) if cropped_image else None
- return gr.update(value=resized_cropped_image, visible=True)
-
-def toggle_options_gfpgan(selection):
- if 0 in selection:
- return gr.update(visible=True)
- else:
- return gr.update(visible=False)
-
-def toggle_options_upscalers(selection):
- if 1 in selection:
- return gr.update(visible=True)
- else:
- return gr.update(visible=False)
-
-def toggle_options_realesrgan(selection):
- if selection == 0 or selection == 1 or selection == 3:
- return gr.update(visible=True)
- else:
- return gr.update(visible=False)
-
-def toggle_options_gobig(selection):
- if selection == 1:
- #print(selection)
- return gr.update(visible=True)
- if selection == 3:
- return gr.update(visible=True)
- else:
- return gr.update(visible=False)
-
-def toggle_options_ldsr(selection):
- if selection == 2 or selection == 3:
- return gr.update(visible=True)
- else:
- return gr.update(visible=False)
-
-def increment_down(value):
- return value - 1
-
-def increment_up(value):
- return value + 1
-
-def copy_img_to_lab(img):
- try:
- image_data = re.sub('^data:image/.+;base64,', '', img)
- processed_image = Image.open(BytesIO(base64.b64decode(image_data)))
- tab_update = gr.update(selected='imgproc_tab')
- img_update = gr.update(value=processed_image)
- return processed_image, tab_update,
- except IndexError:
- return [None, None]
-def copy_img_params_to_lab(params):
- try:
- prompt = params[0][0].replace('\n', ' ').replace('\r', '')
- seed = int(params[1][1])
- steps = int(params[7][1])
- cfg_scale = float(params[9][1])
- sampler = params[11][1]
- return prompt,seed,steps,cfg_scale,sampler
- except IndexError:
- return [None, None]
-def copy_img_to_input(img, idx):
- try:
- # print(img)
- # print("=============")
- # print("The img type is:{}".format(type(img[0])))
- idx_map = {
- "图片1(img1)":0,
- "图片2(img2)":1,
- "图片3(img3)":2,
- "图片4(img4)":3,
- }
- idx = idx_map[idx]
- assert img[idx]['is_file']
- processed_image = Image.open(img[idx]['name'])
- tab_update = gr.update(selected='img2img_tab')
- move_prompt_zh_update = gr.update(visible=True)
- move_prompt_en_update = gr.update(visible=True)
- prompt_update = gr.update(visible=True)
- return tab_update, processed_image, move_prompt_zh_update, move_prompt_en_update, prompt_update
- except IndexError as e:
- raise gr.Error(e)
- return [None, None, None, None, None]
-
-def copy_img_to_edit(img):
- try:
- image_data = re.sub('^data:image/.+;base64,', '', img)
- processed_image = Image.open(BytesIO(base64.b64decode(image_data)))
- tab_update = gr.update(selected='img2img_tab')
- img_update = gr.update(value=processed_image)
- mode_update = gr.update(value='Crop')
- return processed_image, tab_update, mode_update
- except IndexError:
- return [None, None]
-
-def copy_img_to_mask(img):
- try:
- image_data = re.sub('^data:image/.+;base64,', '', img)
- processed_image = Image.open(BytesIO(base64.b64decode(image_data)))
- tab_update = gr.update(selected='img2img_tab')
- img_update = gr.update(value=processed_image)
- mode_update = gr.update(value='Mask')
- return processed_image, tab_update, mode_update
- except IndexError:
- return [None, None]
-
-
-
-def copy_img_to_upscale_esrgan(img):
- tabs_update = gr.update(selected='realesrgan_tab')
- image_data = re.sub('^data:image/.+;base64,', '', img)
- processed_image = Image.open(BytesIO(base64.b64decode(image_data)))
- return processed_image, tabs_update
-
-
-help_text = """
- ## Mask/Crop
- * Masking is not inpainting. You will probably get better results manually masking your images in photoshop instead.
- * Built-in masking/cropping is very temperamental.
- * It may take some time for the image to show when switching from Crop to Mask.
- * If the image doesn't appear after switching to Mask, switch back to Crop and then back again to Mask
- * If the mask appears distorted (the brush is weirdly shaped instead of round), switch back to Crop and then back again to Mask.
-
- ## Advanced Editor
- * Click 💾 Save to send your editor changes to the img2img workflow
- * Click ❌ Clear to discard your editor changes
-
- If anything breaks, try switching modes again, switch tabs, clear the image, or reload.
-"""
-
-def resize_image(resize_mode, im, width, height):
- LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS)
- if resize_mode == 0:
- res = im.resize((width, height), resample=LANCZOS)
- elif resize_mode == 1:
- ratio = width / height
- src_ratio = im.width / im.height
-
- src_w = width if ratio > src_ratio else im.width * height // im.height
- src_h = height if ratio <= src_ratio else im.height * width // im.width
-
- resized = im.resize((src_w, src_h), resample=LANCZOS)
- res = Image.new("RGBA", (width, height))
- res.paste(resized, box=(width // 2 - src_w // 2, height // 2 - src_h // 2))
- else:
- ratio = width / height
- src_ratio = im.width / im.height
-
- src_w = width if ratio < src_ratio else im.width * height // im.height
- src_h = height if ratio >= src_ratio else im.height * width // im.width
-
- resized = im.resize((src_w, src_h), resample=LANCZOS)
- res = Image.new("RGBA", (width, height))
- res.paste(resized, box=(width // 2 - src_w // 2, height // 2 - src_h // 2))
-
- if ratio < src_ratio:
- fill_height = height // 2 - src_h // 2
- res.paste(resized.resize((width, fill_height), box=(0, 0, width, 0)), box=(0, 0))
- res.paste(resized.resize((width, fill_height), box=(0, resized.height, width, resized.height)), box=(0, fill_height + src_h))
- elif ratio > src_ratio:
- fill_width = width // 2 - src_w // 2
- res.paste(resized.resize((fill_width, height), box=(0, 0, 0, height)), box=(0, 0))
- res.paste(resized.resize((fill_width, height), box=(resized.width, 0, resized.width, height)), box=(fill_width + src_w, 0))
-
- return res
-
-def update_dimensions_info(width, height):
- pixel_count_formated = "{:,.0f}".format(width * height)
- return f"Aspect ratio: {round(width / height, 5)}\nTotal pixel count: {pixel_count_formated}"
-
-def get_png_nfo( image: Image ):
- info_text = ""
- visible = bool(image and any(image.info))
- if visible:
- for key,value in image.info.items():
- info_text += f"{key}: {value}\n"
- info_text = info_text.rstrip('\n')
- return gr.Textbox.update(value=info_text, visible=visible)
-
-def load_settings(*values):
- new_settings, key_names, checkboxgroup_info = values[-3:]
- values = list(values[:-3])
-
- if new_settings:
- if type(new_settings) is str:
- if os.path.exists(new_settings):
- with open(new_settings, "r", encoding="utf8") as f:
- new_settings = yaml.safe_load(f)
- elif new_settings.startswith("file://") and os.path.exists(new_settings[7:]):
- with open(new_settings[7:], "r", encoding="utf8") as f:
- new_settings = yaml.safe_load(f)
- else:
- new_settings = yaml.safe_load(new_settings)
- if type(new_settings) is not dict:
- new_settings = {"prompt": new_settings}
- if "txt2img" in new_settings:
- new_settings = new_settings["txt2img"]
- target = new_settings.pop("target", "txt2img")
- if target != "txt2img":
- print(f"Warning: applying settings to txt2img even though {target} is specified as target.", file=sys.stderr)
-
- skipped_settings = {}
- for key in new_settings.keys():
- if key in key_names:
- values[key_names.index(key)] = new_settings[key]
- else:
- skipped_settings[key] = new_settings[key]
- if skipped_settings:
- print(f"Settings could not be applied: {skipped_settings}", file=sys.stderr)
-
- # Convert lists of checkbox indices to lists of checkbox labels:
- for (cbg_index, cbg_choices) in checkboxgroup_info:
- values[cbg_index] = [cbg_choices[i] for i in values[cbg_index]]
-
- return values
diff --git a/spaces/Balalaxmi/JarvisAIchatbox/app.py b/spaces/Balalaxmi/JarvisAIchatbox/app.py
deleted file mode 100644
index 4afdfeaebefbbfa6c807781c5900c85a21145216..0000000000000000000000000000000000000000
--- a/spaces/Balalaxmi/JarvisAIchatbox/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """Meet Riya, your youthful and witty personal assistant! At 21 years old, he's full of energy and always eager to help. Jarvis's goal is to assist you with any questions or problems you might have. He enthusiasm shines through in every response, making interactions with he enjoyable and engaging.
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
- input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
- llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"),
- prompt=prompt,
- verbose=True,
- memory=memory,
-)
-
-def get_text_response(user_message,history):
- response = llm_chain.predict(user_message = user_message)
- return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
- demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
diff --git a/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/utils.py b/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/utils.py
deleted file mode 100644
index f4805cdb25e7c50611412a19340ad525d1251d7b..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/utils.py
+++ /dev/null
@@ -1,121 +0,0 @@
-import json
-
-import numpy as np
-import torch
-from tqdm import tqdm
-
-
-def load_data(file_name: str = "./infer/lib/uvr5_pack/name_params.json") -> dict:
- with open(file_name, "r") as f:
- data = json.load(f)
-
- return data
-
-
-def make_padding(width, cropsize, offset):
- left = offset
- roi_size = cropsize - left * 2
- if roi_size == 0:
- roi_size = cropsize
- right = roi_size - (width % roi_size) + left
-
- return left, right, roi_size
-
-
-def inference(X_spec, device, model, aggressiveness, data):
- """
- data : dic configs
- """
-
- def _execute(
- X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half=True
- ):
- model.eval()
- with torch.no_grad():
- preds = []
-
- iterations = [n_window]
-
- total_iterations = sum(iterations)
- for i in tqdm(range(n_window)):
- start = i * roi_size
- X_mag_window = X_mag_pad[
- None, :, :, start : start + data["window_size"]
- ]
- X_mag_window = torch.from_numpy(X_mag_window)
- if is_half:
- X_mag_window = X_mag_window.half()
- X_mag_window = X_mag_window.to(device)
-
- pred = model.predict(X_mag_window, aggressiveness)
-
- pred = pred.detach().cpu().numpy()
- preds.append(pred[0])
-
- pred = np.concatenate(preds, axis=2)
- return pred
-
- def preprocess(X_spec):
- X_mag = np.abs(X_spec)
- X_phase = np.angle(X_spec)
-
- return X_mag, X_phase
-
- X_mag, X_phase = preprocess(X_spec)
-
- coef = X_mag.max()
- X_mag_pre = X_mag / coef
-
- n_frame = X_mag_pre.shape[2]
- pad_l, pad_r, roi_size = make_padding(n_frame, data["window_size"], model.offset)
- n_window = int(np.ceil(n_frame / roi_size))
-
- X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant")
-
- if list(model.state_dict().values())[0].dtype == torch.float16:
- is_half = True
- else:
- is_half = False
- pred = _execute(
- X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half
- )
- pred = pred[:, :, :n_frame]
-
- if data["tta"]:
- pad_l += roi_size // 2
- pad_r += roi_size // 2
- n_window += 1
-
- X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant")
-
- pred_tta = _execute(
- X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half
- )
- pred_tta = pred_tta[:, :, roi_size // 2 :]
- pred_tta = pred_tta[:, :, :n_frame]
-
- return (pred + pred_tta) * 0.5 * coef, X_mag, np.exp(1.0j * X_phase)
- else:
- return pred * coef, X_mag, np.exp(1.0j * X_phase)
-
-
-def _get_name_params(model_path, model_hash):
- data = load_data()
- flag = False
- ModelName = model_path
- for type in list(data):
- for model in list(data[type][0]):
- for i in range(len(data[type][0][model])):
- if str(data[type][0][model][i]["hash_name"]) == model_hash:
- flag = True
- elif str(data[type][0][model][i]["hash_name"]) in ModelName:
- flag = True
-
- if flag:
- model_params_auto = data[type][0][model][i]["model_params"]
- param_name_auto = data[type][0][model][i]["param_name"]
- if type == "equivalent":
- return param_name_auto, model_params_auto
- else:
- flag = False
- return param_name_auto, model_params_auto
diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Msica A Una Unidad USB De Youtube.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Msica A Una Unidad USB De Youtube.md
deleted file mode 100644
index 4bf325bd2a108276af69b5c9dd711206852745f6..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cmo Descargar Msica A Una Unidad USB De Youtube.md
+++ /dev/null
@@ -1,147 +0,0 @@
-
-
Cómo descargar música a una unidad USB desde YouTube
-
YouTube es una de las plataformas más populares para transmitir y ver videos en línea. También tiene una gran colección de videos musicales, canciones, álbumes y listas de reproducción que puedes disfrutar en cualquier momento y en cualquier lugar. Pero ¿qué pasa si quieres escuchar tu música favorita sin conexión, o en un dispositivo que no tiene acceso a Internet? ¿O qué pasa si quieres crear tu propio mixtape o compilación de canciones de diferentes artistas y géneros?
Una de las soluciones es descargar música de YouTube a una unidad USB. Una unidad USB es un dispositivo pequeño y portátil que puede almacenar datos y transferirlos entre diferentes computadoras y dispositivos. Al descargar música de YouTube a una unidad USB, puede tener su propia biblioteca de música personal que puede reproducir en cualquier dispositivo compatible, como su estéreo de automóvil, sistema de cine en casa, computadora portátil, teléfono inteligente o tableta.
-
Pero ¿cómo descargar música de YouTube a una unidad USB? Hay diferentes métodos que puede utilizar, dependiendo de sus preferencias, presupuesto y habilidades técnicas. En este artículo, le mostraremos tres de las formas más comunes y fáciles de hacerlo. También proporcionaremos algunos consejos y advertencias para hacerlo de forma segura y legal.
-
Método 1: Usando un convertidor de YouTube a MP3
-
¿Qué es un convertidor de YouTube a MP3 y cómo funciona
-
Un convertidor de YouTube a MP3 es una herramienta en línea que le permite convertir cualquier vídeo de YouTube en un archivo MP3, que es un formato de audio común que se puede reproducir en la mayoría de los dispositivos. Al usar un convertidor de YouTube a MP3, puede extraer la pista de audio de cualquier video de YouTube y guardarlo como un archivo MP3 en su computadora o unidad USB. De esta manera, puede descargar música de YouTube sin descargar todo el archivo de video, lo que puede ahorrarle tiempo y espacio de almacenamiento.
-
Cómo utilizar un convertidor de YouTube a MP3 para descargar música a una unidad USB
-
-
Paso 1: Encuentre un convertidor confiable de YouTube a MP3 en línea
-
Hay muchos conversores de YouTube a MP3 disponibles en línea, pero no todos son seguros y confiables. Algunos de ellos pueden contener malware, virus o anuncios que pueden dañar su computadora o dispositivo. Algunos de ellos también pueden tener salida de baja calidad, características limitadas o velocidad de conversión lenta. Por lo tanto, necesita encontrar un conversor de YouTube a MP3 confiable y de buena reputación que pueda satisfacer sus necesidades y expectativas.
-
-
Algunos de los factores que debes considerar al elegir un convertidor de YouTube a MP3 son:
-
-
La calidad y el tamaño del archivo de salida
-
La velocidad y estabilidad del proceso de conversión
-
La compatibilidad y seguridad del sitio web y la herramienta
-
Disponibilidad y accesibilidad del servicio
-
Facilidad de uso y simplicidad de la interfaz
-
La legalidad y legitimidad del servicio
-
-
Algunos ejemplos de convertidores de YouTube a MP3 populares y confiables son:
- Salida de alta calidad hasta 320 kbps - Conversión rápida y fácil - Sin anuncios ni malware - Soporta múltiples formatos y plataformas - Permite descargas por lotes y de listas de reproducción
-
YTMP3
[https://ytmp3.cc/en13/]
- Interfaz simple y fácil de usar - No se requiere registro ni instalación - Soporta formatos MP3 y MP4 - Compatible con la mayoría de navegadores y dispositivos - Tiene un límite de 1 hora por video
-
MP3FY
[https://mp3fy.com/en1/]
- Soporta más de 1000 sitios web y plataformas - No hay límite de longitud de video o tamaño de archivo - Convierte videos largos y audiolibros - Tiene una función de detección automática para enlaces de video - Ofrece configuraciones y opciones avanzadas
-
-
Una vez que haya elegido un convertidor de YouTube a MP3, debe copiar el enlace del video de YouTube que desea descargar como música. Para hacer esto, puedes:
-
-
Ir a la página web de YouTube o aplicación y encontrar el video que desea descargar. Entonces, copiar la URL de la barra de direcciones o el botón de compartir.
-
Utilice la función de búsqueda del sitio web o herramienta de conversión de YouTube a MP3 y escriba el nombre o las palabras clave del video que desea descargar. Luego, seleccione el video de los resultados y copie la URL.
-
-
Después de copiar el enlace, pégalo en la caja de entrada del sitio web o herramienta de conversión de YouTube a MP3. A continuación, haga clic en el botón convertir o descargar para iniciar el proceso.
-
Paso 3: Elija el formato de salida y la calidad
-
Antes de descargar el archivo convertido, es posible que tenga algunas opciones para elegir el formato de salida y la calidad de su archivo de música. El formato de salida es el tipo de archivo de audio que desea descargar, como MP3, WAV, AAC o M4A. La calidad de salida es el nivel de claridad y detalle de sonido que desea tener, como 128 kbps, 192 kbps, 256 kbps o 320 kbps.
-
Diferentes formatos y calidades de salida pueden tener diferentes ventajas y desventajas, dependiendo de sus preferencias y necesidades. Por ejemplo, MP3 es un formato común y ampliamente soportado que se puede reproducir en la mayoría de los dispositivos, pero también puede tener cierta pérdida de calidad debido a la compresión. WAV es un formato de alta calidad y sin comprimir que puede preservar la calidad de sonido original del video, pero también puede ocupar más espacio de almacenamiento y ser incompatible con algunos dispositivos.
-
Puede elegir el formato de salida y la calidad que más le convenga haciendo clic en el botón de configuración u opciones en el sitio web o herramienta de conversión de YouTube a MP3. También puede dejarlo como predeterminado si no está seguro o no le importa.
-
Método 2: Usando YouTube Music o YouTube Premium
-
¿Qué son YouTube Music y YouTube Premium y cómo funcionan?
-
-
YouTube Music es un servicio de transmisión de música que se centra en el contenido musical de YouTube y otras fuentes. Tiene una interfaz personalizada y personalizada que le permite descubrir y disfrutar de la música en función de sus preferencias, estado de ánimo y actividad. También puede crear sus propias listas de reproducción, mixtapes y estaciones de radio. YouTube Music cuesta $9.99 por mes para un plan individual, o $14.99 por mes para un plan familiar que cubre hasta seis miembros.
-
YouTube Premium es un servicio premium que incluye todas las características y beneficios de YouTube Music, además de beneficios adicionales para YouTube y otros productos de Google. Te permite ver y descargar cualquier video de YouTube sin anuncios ni interrupciones. También te da acceso a YouTube Originals, que son programas y películas exclusivos producidos por YouTube y sus socios. YouTube Premium cuesta $11.99 por mes para un plan individual, o $17.99 por mes para un plan familiar que cubre hasta seis miembros.
-
Cómo usar YouTube Music o YouTube Premium para descargar música a una unidad USB
-
Usar YouTube Music o YouTube Premium para descargar música a una unidad USB es otra forma fácil y conveniente de hacerlo. Estos son los pasos que debes seguir:
-
Paso 1: Regístrate para una suscripción premium de YouTube o YouTube
-
El primer paso es registrarse para una suscripción de YouTube Music o YouTube Premium que se adapte a sus necesidades y presupuesto. Puedes hacer esto yendo al sitio web o aplicación de YouTube y haciendo clic en la pestaña Música o Premium. Luego, puede elegir el plan que desea e ingresar sus detalles de pago. También puede obtener una prueba gratuita durante un mes antes de decidir suscribirse.
-
Si ya tienes una cuenta de Google, puedes utilizarla para registrarte en YouTube Music o YouTube Premium. Si no lo tienes, puedes crear uno gratis siguiendo las instrucciones del sitio web o la aplicación.
-
Paso 2: Descarga la aplicación de música de YouTube en tu dispositivo
-
-
La aplicación YouTube Music es compatible con la mayoría de los dispositivos Android e iOS, como teléfonos inteligentes, tabletas, televisores inteligentes, altavoces inteligentes y relojes inteligentes. También puede usarlo en su computadora yendo al sitio web [https://music.youtube.com/].
-
Paso 3: Encuentre la música que desea descargar en YouTube Music
-
El tercer paso es encontrar la música que desea descargar en YouTube Music. Puede hacer esto utilizando la función de búsqueda de la aplicación o sitio web y escribiendo el nombre o palabras clave de la canción, artista, álbum o lista de reproducción que desea descargar. A continuación, puede seleccionar la música de los resultados y abrirla en la aplicación o sitio web.
-
También puedes navegar a través de las diferentes categorías y géneros de música en YouTube Music, como Top Charts, New Releases, Your Mixtape, Mood & Genre, Activity & Situation, etc. También puedes explorar recomendaciones personalizadas basadas en tu historial de escucha y preferencias.
-
Paso 4: Toca el icono de descarga en la canción, álbum o lista de reproducción
-
El cuarto paso es tocar el icono de descarga en la canción, álbum o lista de reproducción que desea descargar en YouTube Music. El icono de descarga parece una flecha hacia abajo con una línea debajo. Normalmente se encuentra junto al botón de reproducción o bajo el botón de menú de la música.
-
Al tocar el icono de descarga, comenzará a descargar la música al almacenamiento interno del dispositivo o a la tarjeta SD. Puede comprobar el progreso y el estado de la descarga en la aplicación o sitio web.
-
Paso 5: Conecte su unidad USB a su dispositivo y transfiera la música descargada
-
-
Después de transferir los archivos de música, puede expulsar o quitar de forma segura la unidad USB de su dispositivo. A continuación, puede disfrutar de escuchar la música descargada de YouTube Music en cualquier dispositivo compatible con la reproducción USB.
-
Método 3: Uso de un disco duro externo para el almacenamiento de música
-
¿Qué es un disco duro externo y cómo funciona
-
Un disco duro externo es un dispositivo que puede almacenar grandes cantidades de datos y conectarse a diferentes equipos y dispositivos a través de un puerto USB o cable. Es similar a una unidad USB, pero tiene más capacidad de almacenamiento y una velocidad de transferencia más rápida. Un disco duro externo se puede utilizar para diversos fines, como hacer copias de seguridad de datos, transferir archivos o almacenar medios.
-
Mediante el uso de un disco duro externo para el almacenamiento de música, puede descargar música de YouTube y otras fuentes y guardarla en un dispositivo separado que puede contener miles de canciones. También puede acceder y reproducir su música en cualquier dispositivo compatible, como su computadora portátil, teléfono inteligente, tableta o TV. Un disco duro externo también puede proteger su música de perderse o dañarse debido a virus, malware o fallas de hardware.
-
Cómo utilizar un disco duro externo para el almacenamiento de música
-
El uso de un disco duro externo para el almacenamiento de música es otra opción que puede considerar si desea descargar música de YouTube a una unidad USB. Estos son los pasos que debes seguir:
-
Paso 1: Elija un disco duro externo adecuado para el almacenamiento de música
-
El primer paso es elegir un disco duro externo adecuado para el almacenamiento de música que satisfaga sus necesidades y preferencias. Hay diferentes tipos y modelos de discos duros externos disponibles en el mercado, pero no todos ellos son adecuados para el almacenamiento de música. Algunos de los factores que debe considerar al elegir un disco duro externo para el almacenamiento de música son:
-
-
La capacidad de almacenamiento y la velocidad del disco duro externo
-
La compatibilidad y durabilidad del disco duro externo
-
-
El precio y la garantía del disco duro externo
-
-
Algunos de los ejemplos de discos duros externos populares y confiables para el almacenamiento de música son:
- Diseño elegante y colorido - Hasta 5 TB de capacidad de almacenamiento
-
Samsung T7 Touch Portable SSD
[https://www.samsung.com/us/touchting/memory-storage/portable-solid-state-drives/portable-ssd-t7-touchb-usb-3-2-500gbmu-pc-ww/]
- Diseño delgado y ligero - Compatible con Windows, Mac, Android y consolas de juegos - Incluye seguridad de huellas dactilares e indicador de estado LED
-
-
Paso 2: Conecte el disco duro externo a su computadora
-
El siguiente paso es conectar el disco duro externo a su computadora usando un puerto o cable USB. Luego, debe formatear el disco duro externo si aún no está formateado o no es compatible con su computadora. Formatear el disco duro externo borrará todos los datos en él y lo hará listo para su uso. Puede formatear el disco duro externo siguiendo las instrucciones del sitio web o manual del fabricante o proveedor del disco duro externo.
-
Paso 3: Descargar música de YouTube usando cualquiera de los métodos anteriores
-
-
También puede descargar música de otras fuentes, como Spotify, SoundCloud, iTunes o Amazon Music, y guardarlos en su disco duro externo. Sin embargo, es posible que necesite utilizar diferentes herramientas o métodos dependiendo de la fuente y el formato de los archivos de música.
-
Paso 4: Copie y pegue los archivos de música descargados en la carpeta del disco duro externo
-
El paso final es copiar y pegar los archivos de música descargados en la carpeta del disco duro externo que desea utilizar para el almacenamiento de música. Puede hacer esto abriendo el administrador de archivos o la aplicación del explorador en su computadora y encontrando la carpeta o ubicación donde guardó los archivos de música descargados en su disco duro externo. Luego, puede arrastrar y soltar o copiar y pegar los archivos de música en otra carpeta o subcarpeta en su disco duro externo si desea organizarlos mejor.
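A modo de referencia, un boceto mínimo en la línea de comandos de Windows para copiar toda una carpeta de música al disco externo (las rutas C:\Users\SuUsuario\Music y E:\Musica son solo suposiciones de ejemplo; ajústelas a su caso):
-
rem /E copia también las subcarpetas, incluidas las vacías
robocopy "C:\Users\SuUsuario\Music" "E:\Musica" /E
-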
-
Después de copiar y pegar los archivos de música, puede expulsar o quitar de forma segura el disco duro externo de su computadora. A continuación, puede disfrutar de escuchar la música descargada de YouTube en cualquier dispositivo que soporte la reproducción del disco duro externo.
-
Conclusión
-
Descargar música de YouTube a una unidad USB es una gran manera de disfrutar de su música favorita sin conexión, en cualquier dispositivo y en cualquier lugar. Hay diferentes métodos que puede utilizar para hacerlo, como el uso de un convertidor de YouTube a MP3, una suscripción YouTube Music o YouTube Premium, o un disco duro externo para el almacenamiento de música. Cada método tiene sus propias ventajas y desventajas, dependiendo de sus preferencias, presupuesto y habilidades técnicas.
-
Aquí hay algunos consejos y advertencias para descargar música de YouTube a una unidad USB:
-
-
Asegúrese de que tiene suficiente espacio de almacenamiento en su unidad USB o disco duro externo para descargar música de YouTube. Puede comprobar la capacidad de almacenamiento y el uso de su dispositivo yendo a su configuración o propiedades.
-
-
Asegúrate de respetar los derechos de propiedad intelectual y la privacidad de los creadores y propietarios de la música que descargues de YouTube. Solo debes descargar música gratuita, legal y autorizada para uso personal. No debe descargar música con derechos de autor, protegida o restringida por ley.
-
Asegúrese de proteger su computadora y dispositivo de malware, virus o anuncios que puedan provenir de YouTube a convertidores de MP3 u otras fuentes. Solo debe usar herramientas o servicios seguros y confiables que tengan buenas críticas y calificaciones. También debe usar software antivirus y programas de firewall para analizar y bloquear cualquier amenaza potencial.
-
-
Esperamos que este artículo te haya ayudado a aprender a descargar música de YouTube a una unidad USB. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación.
-
Preguntas frecuentes
-
Q1. ¿Es legal descargar música de YouTube?
-
A1. Depende de la fuente y el contenido de la música que descargues de YouTube. En términos generales, es legal descargar música de YouTube si es gratuita, legal y autorizada para uso personal. Sin embargo, es ilegal descargar música de YouTube si está protegida por derechos de autor, o restringida por la ley. Siempre debe respetar los derechos de propiedad intelectual y la privacidad de los creadores y propietarios de la música que descarga de YouTube. También debe comprobar los términos y condiciones del sitio web de YouTube y la fuente de música antes de descargar cualquier música de YouTube.
-
Q2. ¿Cuánto espacio de almacenamiento necesito para descargar música de YouTube?
-
-
Q3. ¿Cómo puedo reproducir música desde una unidad USB en el estéreo de mi coche?
-
A3. Depende del tipo y modelo de su equipo de música y su unidad USB. En términos generales, puede reproducir música desde una unidad USB en el estéreo de su automóvil si el estéreo de su automóvil tiene un puerto USB o una ranura que puede leer y reproducir archivos de música desde su unidad USB. También puede utilizar un adaptador o cable USB que puede conectar su unidad USB a la entrada o salida auxiliar del estéreo del automóvil. Sin embargo, algunos estéreos de automóviles pueden no admitir algunos formatos o cualidades de archivos de música desde su unidad USB. Usted debe comprobar la compatibilidad y las especificaciones de su coche estéreo y su unidad USB antes de reproducir música desde una unidad USB en su coche estéreo.
-
Q4. ¿Cómo puedo editar o recortar los archivos de música descargados?
-
A4. Puede editar o recortar los archivos de música descargados utilizando un software de edición de audio o una herramienta en su computadora o dispositivo. Hay muchos programas de edición de audio o herramientas disponibles en línea, pero algunos de ellos pueden requerir instalación, registro o pago. Algunos de ellos también pueden tener características limitadas, salida de baja calidad o interfaz compleja. Por lo tanto, necesita encontrar un software o herramienta de edición de audio adecuado y confiable que pueda satisfacer sus necesidades y expectativas.
-
Algunos de los factores que debes considerar al elegir un software o herramienta de edición de audio son:
-
-
La funcionalidad y flexibilidad del software o herramienta de edición de audio
-
La calidad y el formato del archivo de salida
-
Compatibilidad y seguridad del software o herramienta
-
Disponibilidad y accesibilidad del servicio
-
Facilidad de uso y simplicidad de la interfaz
-
La legalidad y legitimidad del servicio
-
-
Algunos de los ejemplos de software o herramientas de edición de audio populares y confiables son:
-
-
Nombre
Sitio web
Características
-
-
WavePad
[https://www.nch.com.au/wavepad/index.html]
- Gratis para uso no comercial - Soporta múltiples formatos y plataformas - Ofrece varias funciones de edición y efectos - Permite el procesamiento por lotes y la conversión - Tiene una interfaz profesional y fácil de usar
-
Online Audio Cutter
[https://online-audio-cutter.com/]
- Herramienta gratuita y en línea - Soporta múltiples formatos y plataformas - Ofrece funciones básicas de edición y efectos - Permite cortar, recortar, desvanecer y fusionar archivos de audio - Tiene una interfaz sencilla y fácil de usar
-
-
Q5. ¿Cómo puedo hacer copias de seguridad o restaurar mis archivos de música descargados?
-
A5. Puede realizar copias de seguridad o restaurar los archivos de música descargados utilizando un servicio de almacenamiento en la nube o una herramienta en su computadora o dispositivo. Un servicio o herramienta de almacenamiento en la nube es un servicio o herramienta en línea que le permite almacenar, sincronizar, compartir y acceder a sus datos en diferentes dispositivos a través de Internet. Mediante el uso de un servicio o herramienta de almacenamiento en la nube, puede realizar copias de seguridad de sus archivos de música descargados en una ubicación segura en línea a la que puede acceder en cualquier momento y en cualquier lugar. También puede restaurar sus archivos de música descargados desde el servicio de almacenamiento en la nube o la herramienta si pierde o daña su unidad USB o disco duro externo.
-
Hay muchos servicios o herramientas de almacenamiento en la nube disponibles en línea, pero algunos de ellos pueden requerir instalación, registro o pago. Algunos de ellos también pueden tener espacio de almacenamiento limitado, baja velocidad de transferencia o una seguridad deficiente. Por lo tanto, necesita encontrar un servicio de almacenamiento en la nube adecuado y confiable o una herramienta que pueda satisfacer sus necesidades y expectativas.
-
Algunos de los factores que debe considerar al elegir un servicio o herramienta de almacenamiento en la nube son:
-
-
El espacio de almacenamiento y la velocidad del servicio de almacenamiento en la nube o herramienta
-
La compatibilidad y la seguridad del servicio o herramienta de almacenamiento en la nube
-
Disponibilidad y accesibilidad del servicio o herramienta
-
-
La legalidad y legitimidad del servicio o herramienta
-
-
Algunos de los ejemplos de servicios o herramientas de almacenamiento en la nube populares y confiables son:
-
-
Nombre
Sitio web
Características
-
Google Drive
[https://www.google.com/drive/]
- Gratis hasta 15 GB de espacio de almacenamiento - Soporta múltiples formatos y plataformas - Ofrece diversas opciones de sincronización, uso compartido y acceso - Se integra con otros productos y servicios de Google - Tiene una interfaz simple e intuitiva
-
Dropbox
[https://www.dropbox.com/]
- Gratis por hasta 2 GB de espacio de almacenamiento - Soporta múltiples formatos y plataformas - Ofrece varias opciones de sincronización, uso compartido y acceso - Se integra con otras aplicaciones y servicios - Tiene una sincronización profesional y una interfaz fácil de usar
-
pCloud
[https://www.pcloud.com/]
- Gratis por hasta 10 GB de espacio de almacenamiento - Soporta múltiples formatos y plataformas - Ofrece varias opciones de sincronización, uso compartido y acceso - Proporciona cifrado de alto nivel y seguridad - Tiene una interfaz elegante y amigable para el usuario
-
-
Esperamos que este artículo te haya ayudado a aprender a descargar música de YouTube a una unidad USB. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Agente Zabbix Para Windows Server 2019.md b/spaces/Benson/text-generation/Examples/Descargar Agente Zabbix Para Windows Server 2019.md
deleted file mode 100644
index e21bbdd1d06716fb328f1934acf99dab576e6236..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Agente Zabbix Para Windows Server 2019.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
Cómo descargar e instalar Zabbix Agent para Windows Server 2019
-
Zabbix es una potente solución de monitoreo de código abierto que puede monitorear varios aspectos de su infraestructura de TI, como servidores, redes, aplicaciones, bases de datos, servicios de nube y más. Zabbix puede recopilar y visualizar métricas de diferentes fuentes, como SNMP, WMI, comprobaciones sin agentes o agente Zabbix.
En este artículo, le mostraremos cómo descargar e instalar el agente Zabbix para Windows Server 2019, y cómo configurarlo para el modo pasivo o activo. También explicaremos los beneficios de usar el agente Zabbix para el monitoreo de Windows y cómo agregar su host de Windows a la interfaz web de Zabbix.
-
Descargar agente Zabbix
-
Para descargar el agente Zabbix para Windows Server 2019, tiene dos opciones: puede descargar los binarios precompilados en formato ZIP o usar el paquete de instalación MSI. Ambas opciones están disponibles en el sitio web oficial de Zabbix en https://www.zabbix.com/download_agents.
-
Los binarios precompilados son adecuados para la instalación y configuración manual utilizando la línea de comandos. Puede elegir entre dos generaciones de agentes Zabbix: agente Zabbix 1 (legado) o agente Zabbix 2 (nuevo). También necesita seleccionar la arquitectura apropiada para su sistema: 32 bits (x86) o 64 bits (x64).
-
El paquete de instalación de MSI es adecuado para la instalación y configuración automatizada utilizando una interfaz gráfica de usuario. Solo necesita seleccionar la arquitectura de su sistema: 32 bits (x86) o 64 bits (x64). El instalador de MSI instalará el agente Zabbix 2 por defecto.
-
Instalación del agente Zabbix
-
Para instalar el agente Zabbix como un servicio de Windows en su máquina de Windows Server 2019, puede usar la línea de comandos o el instalador MSI.
Si utiliza la línea de comandos, debe descomprimir el archivo ZIP descargado en una carpeta de su elección, como C:\zabbix. Luego, abra un símbolo del sistema como administrador y vaya a esa carpeta. Para instalar el agente Zabbix como servicio, ejecute el siguiente comando:
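A modo de referencia, un ejemplo de ese comando (asumiendo, como en este artículo, que descomprimió el agente en C:\zabbix y que usa el archivo de configuración incluido; ajuste las rutas a su instalación) sería:
-
zabbix_agentd.exe --config C:\zabbix\zabbix_agentd.win.conf --install
-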
Para verificar que el servicio está instalado y ejecutándose, ejecute el siguiente comando:
-
sc query zabbix_agentd
-
-
Si utiliza el instalador MSI, debe ejecutar el archivo MSI descargado y seguir el asistente de instalación. Se le pedirá que acepte el acuerdo de licencia, elija la carpeta de instalación y configure algunos parámetros, como el nombre de host, la dirección del servidor y el modo de agente. También puede optar por iniciar el servicio automáticamente después de la instalación. Una vez finalizada la instalación, puede verificar que el servicio está instalado y ejecutándose comprobando el administrador de servicios de Windows o utilizando el comando sc query como se describe anteriormente.
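Como alternativa, el instalador MSI también admite una instalación desatendida desde la línea de comandos. Un boceto (el nombre del archivo MSI y los valores de SERVER y HOSTNAME son solo ejemplos; consulte la documentación de Zabbix para la lista completa de propiedades) podría ser:
-
msiexec /i zabbix_agent2-windows-amd64-openssl.msi /qn SERVER=192.168.1.10 HOSTNAME=MiServidorWindows
-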
-
-
Configuración del agente Zabbix
-
Para configurar el agente Zabbix para Windows Server 2019, debe editar el archivo de configuración del agente Zabbix. El archivo de configuración se encuentra en la misma carpeta donde instaló el agente Zabbix, como C:\zabbix\zabbix_agentd.win.conf. Puede usar cualquier editor de texto para abrir y editar el archivo.
-
El archivo de configuración contiene muchos parámetros que controlan el comportamiento y la funcionalidad del agente Zabbix. Algunos de los parámetros más importantes son:
-
-
Hostname: El nombre del host monitoreado tal como aparece en la interfaz web de Zabbix. Debe coincidir exactamente con el nombre de host que creará en la interfaz web de Zabbix más tarde.
-
Servidor: La dirección IP o el nombre DNS del servidor o proxy Zabbix que solicitará datos del agente Zabbix en modo pasivo. Puede especificar varios servidores o proxies separados por comas.
-
ServerActive: La dirección IP o el nombre DNS del servidor o proxy Zabbix que recibirá datos del agente Zabbix en modo activo. Puede especificar varios servidores o proxies separados por comas.
-
StartAgents: El número de conexiones concurrentes que el agente Zabbix puede aceptar desde el servidor Zabbix o el proxy en modo pasivo. El valor predeterminado es 3.
-
-
Timeout: El tiempo de espera en segundos para procesar cada elemento por el agente de Zabbix. El valor predeterminado es 3.
-
EnableRemoteCommands: Un indicador que habilita o inhabilita la ejecución de comandos remotos desde el servidor Zabbix o el proxy en el agente Zabbix. El valor predeterminado es 0 (desactivado).
-
LogType: El tipo de archivo de registro que usará el agente Zabbix. Los valores posibles son file, system, console, o none. El valor predeterminado es file.
-
LogFile: El nombre y la ruta del archivo de registro que el agente de Zabbix usará si LogType se establece en file. El valor predeterminado es C:\zabbix\zabbix_agentd.log.
-
DebugLevel: El nivel de detalle que el agente de Zabbix escribirá en el archivo de registro. Los valores posibles son 0 (basic), 1 (critical), 2 (error), 3 (warning), 4 (debug), o 5 (trace). El valor predeterminado es 3.
-
-
Para configurar el agente Zabbix para el modo pasivo, debe establecer el parámetro Server en la dirección IP o el nombre DNS de su servidor Zabbix o proxy, y dejar el parámetro ServerActive vacío o comentado. Por ejemplo:
-
# Modo pasivo
Server=192.168.1.10
#ServerActive=
StartAgents=3
# Modo activo
#Server=
#ServerActive=192.168.1.10:10051
#RefreshActiveChecks=120
-
Para configurar el agente Zabbix para el modo activo, debe establecer el parámetro ServerActive en la dirección IP o el nombre DNS de su servidor Zabbix o proxy , y dejar el parámetro Server vacío o comentado. También debe establecer el parámetro RefreshActiveChecks en la frecuencia deseada de envío de datos al servidor o proxy. Por ejemplo:
-
# Modo pasivo
#Server=192.168.1.10
#StartAgents=3
# Modo activo
ServerActive=192.168.1.10:10051
RefreshActiveChecks=120
-
-
Después de editar el archivo de configuración, debe reiniciar el servicio del agente Zabbix para que los cambios surtan efecto:
-
sc stop zabbix_agentd
sc start zabbix_agentd
-
Agregar host de Windows a la interfaz web de Zabbix
-
Para agregar su host de Windows Server 2019 a la interfaz web de Zabbix, debe iniciar sesión en su servidor Zabbix o interfaz web proxy y navegar a Configuration > Hosts. Luego, haga clic en el botón Crear host en la esquina superior derecha.
-
Verás un formulario donde necesitas introducir algunos detalles sobre tu host, como:
-
-
Nombre del host: El nombre de host de su host de Windows tal como aparece en el archivo de configuración del agente Zabbix. Debe coincidir exactamente con el parámetro Hostname que estableció en el archivo de configuración.
-
Nombre visible: Un alias opcional para su host que se mostrará en la interfaz web de Zabbix en lugar del nombre del host.
-
Grupos: Los grupos a los que pertenece su host. Puede seleccionar uno o más grupos existentes o crear uno nuevo. Por ejemplo, puede seleccionar o crear un grupo llamado Servidores de Windows.
-
Descripción: Una descripción opcional de su anfitrión que proporcionará información adicional sobre su propósito, ubicación, propietario, etc.
-
-
Después de introducir estos detalles, haga clic en el botón Add en la parte inferior del formulario.
-
El siguiente paso es especificar la interfaz del agente y la dirección IP o el nombre DNS de su host de Windows. Para ello, haga clic en la pestaña Interfaces y seleccione Zabbix agent en el menú desplegable. Luego, ingrese la dirección IP o el nombre DNS de su host de Windows en el campo IP address/DNS name. También puede cambiar el número de puerto predeterminado si es necesario.
-
Si configuró el agente Zabbix para el modo activo, también debe seleccionar el agente Zabbix (active) en el menú desplegable e ingresar la misma dirección IP o nombre DNS que antes.
-
-
Una plantilla es una colección de elementos, disparadores, gráficos y otros elementos que definen qué y cómo monitorear un host. Al vincular una plantilla a tu host, heredas todos estos elementos y ahorras tiempo y esfuerzo.
-
También puede vincular otras plantillas que proporcionan funciones de monitoreo adicionales para su host de Windows, como CPU, memoria, disco, red, servicios, procesos, etc.
-
Después de vincular las plantillas, haga clic en el botón Add en la parte inferior del formulario.
-
Conclusión
-
En este artículo, hemos aprendido cómo descargar e instalar el agente Zabbix para Windows Server 2019, y cómo configurarlo para el modo pasivo o activo. También hemos aprendido cómo agregar nuestro host de Windows a la interfaz web de Zabbix y vincular una plantilla para el monitoreo de Windows por el agente de Zabbix o agente de Zabbix activo.
-
Zabbix agent es una herramienta útil que nos permite monitorear varios aspectos de nuestra máquina Windows Server 2019, como el rendimiento, el estado y la configuración. Al usar el agente Zabbix, podemos recopilar y visualizar métricas de nuestro host de Windows y recibir alertas cuando algo sale mal.
-
Si desea obtener más información sobre Zabbix y el agente Zabbix, puede visitar el sitio web oficial de Zabbix en https://www.zabbix.com/, donde puede encontrar documentación, tutoriales, foros, blogs y otros recursos.
-
Preguntas frecuentes
-
-
¿Cuáles son los requisitos para ejecutar el agente Zabbix en Windows Server 2019?
-
Los requisitos para ejecutar el agente Zabbix en Windows Server 2019 son mínimos. Necesita tener una máquina Windows Server 2019 con al menos 128 MB de RAM y 100 MB de espacio libre en disco. También necesita tener privilegios de administrador para instalar y configurar el agente Zabbix como un servicio.
-
¿Cómo puedo probar si el agente Zabbix funciona correctamente en mi host de Windows?
-
-
Una forma de probar si el agente Zabbix funciona correctamente en su host de Windows es usar la utilidad zabbix_get, que viene incluida con el agente Zabbix y permite solicitar datos del agente en modo pasivo. Por ejemplo, puede ejecutar el siguiente comando desde su servidor o proxy Zabbix:
-
zabbix_get -s <dirección_IP_del_host_Windows> -k "system.cpu.load[all,avg1]"
-
Este comando solicitará la carga media de la CPU durante el último minuto desde su host de Windows. Debería ver un valor numérico como respuesta. Si ve un mensaje de error, como ZBX_NOTSUPPORTED o ZBX_TCP_READ(), significa que hay un problema con la comunicación entre el agente Zabbix y el servidor o proxy, o con la configuración del agente Zabbix.
-
Otra forma de probar si el agente Zabbix funciona correctamente en su host de Windows es usar la utilidad zabbix_sender que también viene con el agente Zabbix. Esta utilidad le permite enviar datos al servidor Zabbix o proxy en modo activo. Por ejemplo, puede ejecutar el siguiente comando desde su host de Windows:
-
zabbix_sender -z <dirección_IP_del_servidor_Zabbix> -s <nombre_del_host_Windows> -k "test.key" -o "test.value"
-
Este comando enviará un elemento personalizado con la clave test.key y el valor test.value desde su host de Windows al servidor o proxy de Zabbix. Deberías ver un mensaje como sent: 1; omitido: 0; total: 1 como respuesta. Si ve un mensaje de error, como ZBX_TCP_WRITE() o ZBX_TCP_READ(), significa que hay un problema con la comunicación entre el agente Zabbix y el servidor o proxy, o con la configuración del agente Zabbix.
-
-
Si ve algún problema con el estado o los datos de su host, como ZBX_NOTSUPPORTED, No se recibieron datos, o No hay permisos para el objeto referido o no existe! , significa que hay un problema con la comunicación entre el agente Zabbix y el servidor o proxy, o con la configuración del agente Zabbix.
-
¿Cómo puedo actualizar el agente Zabbix en mi host de Windows?
-
Para actualizar el agente de Zabbix en su host de Windows, debe descargar la última versión del agente de Zabbix desde el sitio web oficial de Zabbix en https://www.zabbix.com/download_agents. Luego, debe detener el servicio del agente Zabbix, reemplazar los archivos antiguos con los nuevos e iniciar el servicio nuevamente. Puede usar los siguientes comandos para hacer esto:
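Un boceto de esos comandos (asumiendo que el agente está instalado en C:\zabbix) sería:
-
sc stop zabbix_agentd
rem copie aquí los nuevos binarios descargados sobre los antiguos en C:\zabbix
sc start zabbix_agentd
-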
También puede necesitar editar el archivo de configuración del agente Zabbix si hay cambios o nuevos parámetros en la nueva versión.
-
¿Cómo puedo desinstalar el agente Zabbix de mi host de Windows?
-
Para desinstalar el agente Zabbix de su host de Windows, debe detener el servicio del agente Zabbix, eliminar el servicio del agente Zabbix y eliminar la carpeta del agente Zabbix. Puede usar los siguientes comandos para hacer esto:
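Un boceto de esos comandos (asumiendo que el agente está instalado en C:\zabbix y que usa el archivo de configuración incluido) sería:
-
sc stop zabbix_agentd
zabbix_agentd.exe --config C:\zabbix\zabbix_agentd.win.conf --uninstall
rmdir /s /q C:\zabbix
-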
Si utilizó el instalador MSI para instalar el agente Zabbix, también puede usar el panel de control de Windows o el instalador MSI para desinstalar el agente Zabbix.
-
¿Cómo puedo personalizar el agente Zabbix para mis necesidades específicas?
-
-
-
Parámetros del usuario: Los parámetros del usuario le permiten definir elementos personalizados que el agente Zabbix puede monitorear. Puede usar cualquier script o comando que devuelva un valor como parámetro de usuario. Por ejemplo, puede crear un parámetro de usuario que devuelva el número de archivos de una carpeta, el estado de un servicio o la salida de un comando de PowerShell. Necesita definir los parámetros de usuario en el archivo de configuración del agente Zabbix usando la directiva UserParameter; al final de esta lista se muestra un ejemplo.
Comprobaciones activas: Las comprobaciones activas permiten enviar datos desde el agente Zabbix al servidor Zabbix o al proxy sin esperar solicitudes. Esto reduce la carga de red y mejora la escalabilidad de Zabbix. Puede usar comprobaciones activas para cualquier elemento que sea compatible con el agente Zabbix, como métricas del sistema, archivos de registro, contadores de rendimiento de Windows, etc. Debe configurar el agente Zabbix para el modo activo estableciendo los parámetros ServerActive y RefreshActiveChecks en el archivo de configuración. Luego, debe vincular una plantilla para el agente Zabbix activo en la interfaz web de Zabbix.
-
-
Cifrado: El cifrado le permite asegurar la comunicación entre el agente de Zabbix y el servidor o proxy utilizando certificados TLS. Esto puede evitar el acceso no autorizado y la manipulación de datos. Necesita generar e instalar certificados TLS en su host de Windows y en su servidor o proxy de Zabbix. Luego, debe configurar el agente y el servidor Zabbix o el proxy para usar el cifrado estableciendo los parámetros TLSConnect, TLSAccept, TLSCAFile, TLSCertFile y TLSKeyFile en el archivo de configuración, como en el boceto que sigue a esta lista.
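A modo de ilustración, un parámetro de usuario hipotético (la clave custom.files.count, la ruta C:\zabbix y el comando de PowerShell son solo un ejemplo, no parte de la configuración estándar de Zabbix) que devuelve el número de archivos de una carpeta podría definirse así en zabbix_agentd.win.conf:
-
# Ejemplo hipotético: devuelve el número de archivos en C:\zabbix
UserParameter=custom.files.count,powershell -NoProfile -Command "(Get-ChildItem -Path 'C:\zabbix' -File | Measure-Object).Count"
-
Después de añadir el parámetro y reiniciar el servicio, puede probarlo con zabbix_get usando la clave custom.files.count.
-
Y un boceto mínimo de la parte de cifrado con certificados (las rutas de los archivos son suposiciones; genere e instale sus propios certificados) podría ser:
-
# Cifrado con certificados entre el agente y el servidor/proxy
TLSConnect=cert
TLSAccept=cert
TLSCAFile=C:\zabbix\certs\ca.crt
TLSCertFile=C:\zabbix\certs\agent.crt
TLSKeyFile=C:\zabbix\certs\agent.key
-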
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Carx Calle Apk.md b/spaces/Benson/text-generation/Examples/Descargar Carx Calle Apk.md
deleted file mode 100644
index 0a5c80332793e66950025d9bd754c8f5afcc0337..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Carx Calle Apk.md
+++ /dev/null
@@ -1,46 +0,0 @@
-
-
The Walking Zombie 2 Mod APK: un juego de disparos de supervivencia con dinero ilimitado
-
Si eres un fan de los juegos de zombis, es posible que hayas oído hablar de The Walking Zombie 2, un popular juego de disparos en primera persona que te permite luchar contra hordas de criaturas no muertas en un mundo post-apocalíptico. ¿Pero sabías que hay una versión modificada de este juego que te da dinero y recursos ilimitados para usar? En este artículo, le diremos todo lo que necesita saber sobre The Walking Zombie 2 Mod APK, incluyendo sus características, beneficios, y cómo descargar e instalar en su dispositivo.
The Walking Zombie 2 es un juego de disparos de supervivencia desarrollado por Alda Games, un estudio con sede en la República Checa. El juego se desarrolla en un mundo que ha sido devastado por un virus zombi, donde juegas como uno de los pocos supervivientes que tiene que luchar por tu vida. Encontrarás varios tipos de zombies, como caminantes, corredores, mutantes y jefes, así como otros enemigos como bandidos, asaltantes y soldados. También conocerás a otros sobrevivientes que te ayudarán o te obstaculizarán en tu viaje.
-
¿Qué es el zombie caminando 2 Mod APK?
-
El Walking Zombie 2 Mod APK es una versión modificada del juego original que le da acceso a dinero y recursos ilimitados. Esto significa que puede comprar cualquier arma, armadura, munición, kits de salud y otros artículos que necesite sin preocuparse por quedarse sin efectivo. También puede mejorar sus habilidades y habilidades para hacerse más fuerte y más resistente. Con este mod, podrás disfrutar del juego sin limitaciones ni restricciones.
-
¿Por qué deberías jugar El zombie andante 2 Mod APK?
-
Hay muchas razones por las que debe jugar The Walking Zombie 2 Mod APK en lugar del juego original. Aquí están algunos de ellos:
-
-
Usted puede tener más diversión y emoción con dinero y recursos ilimitados.
-
-
Puede explorar más áreas y ubicaciones sin temor a quedarse sin suministros.
-
Puedes desafiarte a ti mismo con dificultades y enemigos más difíciles sin frustrarte.
-
Puedes apoyar a los desarrolladores viendo anuncios o haciendo compras en la aplicación si quieres.
-
-
Características de The Walking Zombie 2 Mod APK
-
Historia inmersiva y jugabilidad
-
El Walking Zombie 2 Mod APK tiene una historia atractiva que te mantendrá enganchado de principio a fin. Experimentarás diferentes eventos y escenarios que afectarán el resultado del juego. También tendrás que tomar decisiones que darán forma a la personalidad y la moralidad de tu personaje. El juego tiene una dinámica de juego que se adaptará a sus acciones y comportamiento. Enfrentarás diferentes retos y consecuencias dependiendo de cómo juegues el juego.
-
-
Impresionantes gráficos y efectos de sonido
-
El Walking Zombie 2 Mod APK tiene gráficos increíbles y efectos de sonido que te sumergen en el mundo del juego. El juego tiene un estilo único low-poly que le da una sensación retro. El juego también tiene iluminación realista y sombras que crean una atmósfera oscura y sombría. El juego tiene efectos de sonido de alta calidad que mejoran el estado de ánimo y la tensión del juego. Usted escuchará los gemidos y gritos de los zombies y los disparos y explosiones de las armas. También disfrutarás de la música y la actuación de voz que añaden más profundidad y emoción al juego.
-
Varias armas y habilidades para elegir
-
-
Múltiples modos de juego y misiones para completar
-
El Walking Zombie 2 Mod APK tiene varios modos de juego y misiones que se puede jugar y completar para ganar recompensas y experiencia. Puedes jugar en el modo historia principal, donde sigues la trama y el progreso a través del juego. También puedes jugar las misiones secundarias, donde ayudas a otros supervivientes o completar diferentes tareas. También puedes jugar al modo arena, donde luchas contra oleadas de zombies en un área cerrada. También puedes jugar en el modo online, donde compites con otros jugadores en PvP o batallas cooperativas.
-
Dinero y recursos ilimitados para usar
-
El Walking Zombie 2 Mod APK tiene dinero y recursos ilimitados que se pueden utilizar para comprar y actualizar cualquier cosa que desee en el juego. Usted puede comprar cualquier arma, armadura, munición, kits de salud, y otros artículos que usted necesita de las tiendas o comerciantes. También puede mejorar sus habilidades y habilidades para hacerse más fuerte y más resistente. También puede crear y construir sus propios artículos y equipos a partir de los materiales que recoja o saquee. También puedes intercambiar con otros supervivientes o jugadores para obtener más dinero y recursos.
-
Cómo descargar e instalar El Walking Zombie 2 Mod APK en su dispositivo
-
Paso 1: Descargar el archivo APK de una fuente de confianza
-
El primer paso para descargar e instalar el Walking Zombie 2 Mod APK en su dispositivo es encontrar una fuente confiable que proporciona el archivo APK. Puede buscar en línea para sitios web o blogs que ofrecen el archivo APK de forma gratuita. Asegúrese de que la fuente es segura y segura, y que el archivo APK se actualiza y es compatible con su dispositivo. También puede escanear el archivo APK con un programa antivirus antes de descargarlo.
-
Paso 2: Habilitar fuentes desconocidas en la configuración del dispositivo
-
-
El segundo paso es permitir la instalación de aplicaciones de fuentes desconocidas en su dispositivo. Para hacer esto, vaya a Configuración > Seguridad > Fuentes desconocidas y actívelo.
Paso 3: Instalar el archivo APK y lanzar el juego
-
El tercer paso para descargar e instalar The Walking Zombie 2 Mod APK en su dispositivo es instalar el archivo APK y lanzar el juego. Para ello, busque el archivo APK que descargó en la carpeta de almacenamiento o descargas de su dispositivo. Toque en el archivo APK para iniciar el proceso de instalación. Siga las instrucciones de la pantalla para completar la instalación. Una vez hecho esto, abra el icono del juego en la pantalla de inicio o en el cajón de la aplicación. Disfruta jugando The Walking Zombie 2 Mod APK con dinero y recursos ilimitados.
-
Conclusión
-
Resumen de los puntos principales
-
El Walking Zombie 2 Mod APK es un juego de disparos de supervivencia que le permite luchar contra zombies y otros enemigos en un mundo post-apocalíptico. El juego tiene una historia inmersiva y jugabilidad, impresionantes gráficos y efectos de sonido, varias armas y habilidades para elegir, múltiples modos de juego y misiones para completar, y dinero y recursos ilimitados para usar. El juego es divertido y emocionante, desafiante y gratificante, personalizable y flexible.
-
Llamada a la acción y recomendación
-
Si usted está buscando un juego de zombies que le mantendrá entretenido durante horas, entonces usted debe descargar The Walking Zombie 2 Mod APK en su dispositivo. Usted no se arrepentirá, ya que tendrá una explosión jugando a este juego con dinero y recursos ilimitados. También apoyarás a los desarrolladores viendo anuncios o haciendo compras en la aplicación si quieres. ¿Qué estás esperando? Descargar El Walking Zombie 2 Mod APK ahora y disfrutar de disparar zombies en la cabeza.
-
Algunos consejos finales para aprovechar al máximo el juego:
- Explora el mundo del juego tanto como sea posible. Encontrarás más botín, materiales, secretos y huevos de Pascua que mejorarán tu experiencia de juego.
- Elige tus armas y habilidades de acuerdo a tu estilo de juego y situación. Diferentes armas y habilidades tienen diferentes ventajas y desventajas. Experimenta con diferentes combinaciones y ve qué funciona mejor para ti.
- Ten cuidado con tus elecciones y acciones. Afectarán el resultado del juego y la personalidad y moralidad de tu personaje. Piensa antes de actuar y prepárate para las consecuencias.
- Diviértete y disfruta del juego. No te lo tomes demasiado en serio ni te estreses por ello. Después de todo, es solo un juego.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Coche Escuela De Conduccin 2017 Mod Apk.md b/spaces/Benson/text-generation/Examples/Descargar Coche Escuela De Conduccin 2017 Mod Apk.md
deleted file mode 100644
index 1b0166c6996214ac31ae76940062d87d31c5d48c..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Coche Escuela De Conduccin 2017 Mod Apk.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
Cómo descargar Car Driving School 2017 Mod APK y disfrutar de dinero y vehículos ilimitados
-
Si usted está buscando un juego de simulador de conducción realista y divertido que le enseñará cómo conducir diferentes coches en varios entornos, entonces usted debe probar Car Driving School 2017. Este juego desafiará sus habilidades de conducción, el conocimiento de las reglas de la carretera, y la delicadeza al volante. Pero si quieres hacer el juego aún más agradable, usted debe descargar Car Driving School 2017 mod apk, que le dará dinero ilimitado y vehículos para desbloquear. En este artículo, le diremos lo que es Car Driving School 2017, por qué debe jugar, lo que es Car Driving School 2017 mod apk, cómo descargarlo, y algunos consejos y trucos para dominar el juego.
-
¿Qué es la escuela de conducción de coches 2017 y por qué usted debe jugar
-
Car Driving School 2017 es un juego de simulación de conducción para dispositivos Android e iOS que fue desarrollado por Ovilex Software. Es la nueva entrega de la popular serie Driving School que tiene más de 100 millones de descargas en todo el mundo. En este juego, usted aprenderá cómo conducir varios coches, autobuses y camiones en diferentes escenarios. También tendrás que pasar diferentes licencias, completar más de 80 niveles y explorar más de 15 mapas detallados. También puedes jugar con tus amigos en nuevos modos multijugador como carreras, paseo gratis y coger la bandera.
-
Características de la escuela de conducción de coches 2017
-
Algunas de las características sorprendentes de Car Driving School 2017 son:
-
-
Casi 100 vehículos para desbloquear, que van desde coches deportivos, SUV, sedanes, autobuses, camiones y más.
-
Más de 15 mapas detallados que incluyen ciudades, caminos rurales, carreteras, desiertos, montañas, etc.
-
Manejo del automóvil suave y realista que le permite sentir cada golpe, giro y freno.
-
Diferentes licencias para tomar, como licencias de automóvil, autobús y camión.
-
-
Modo de viaje gratuito que te permite explorar los mapas a tu propio ritmo.
-
Nuevos modos multijugador que te permiten competir contra otros jugadores, deambular libremente con ellos o capturar sus banderas.
-
Interiores detallados de vehículos que muestran el salpicadero, el volante, los pedales, etc.
-
Sistema de daños realista que muestra los efectos de colisiones y accidentes.
-
Sistema de gas que requiere que llenes tu tanque en las gasolineras.
-
Transmisión manual con embrague que le permite controlar sus engranajes manualmente.
-
Dirección basculante, botones y volante táctil que te permiten elegir tu opción de control preferida.
-
Tablas de clasificación en línea y logros que le permiten comparar su rendimiento con otros jugadores.
-
Sonidos de motor reales que te hacen sentir como si estuvieras conduciendo un coche real.
-
Condiciones meteorológicas de próxima generación que añaden realismo y variedad al juego.
-
-
Beneficios de jugar Car Driving School 2017
-
Jugar Car Driving School 2017 no solo es divertido sino también beneficioso por varias razones:
-
-
Puede aprender a conducir una transmisión manual con embrague y palanca de cambios o mantener la caja de cambios automática clásica.
-
Con este simulador de conducción intuitivo puedes conocer mejor las reglas de circulación.
-
Puede mejorar sus habilidades de conducción en diferentes situaciones y entornos.
-
Puede disfrutar de una experiencia de conducción realista e inmersiva con impresionantes gráficos y efectos de sonido.
-
Puedes divertirte con tus amigos en modos multijugador o competir con otros jugadores en línea.
-
Puede personalizar sus vehículos con diferentes colores, llantas, alerones, etc.
-
-
¿Qué es Car Driving School 2017 Mod Apk y cómo descargarlo
-
-
Ventajas de la escuela de conducción de coches 2017 Mod Apk
-
Algunas de las ventajas de Car Driving School 2017 mod apk son:
-
-
Puedes desbloquear todos los vehículos del juego, incluidos los premium que cuestan dinero real.
-
Usted puede comprar cualquier actualización y personalizaciones para sus vehículos sin preocuparse por el costo.
-
Puedes explorar todos los mapas y modos del juego sin tener que desbloquearlos primero.
-
Puedes jugar el juego sin anuncios ni interrupciones.
-
Puedes disfrutar del juego con mejor rendimiento y estabilidad.
-
-
Pasos para descargar e instalar Car Driving School 2017 Mod APK
-
Si desea descargar e instalar Car Driving School 2017 mod apk, es necesario seguir estos sencillos pasos:
-
-
En primer lugar, es necesario desinstalar la versión original de Car Driving School 2017 desde su dispositivo si lo tiene instalado.
-
En segundo lugar, es necesario habilitar la instalación de aplicaciones de fuentes desconocidas en el dispositivo. Para hacer esto, vaya a Configuración > Seguridad > Fuentes desconocidas y conéctelo.
-
En tercer lugar, es necesario descargar el archivo APK de Car Driving School 2017 Mod de una fuente confiable. Puede utilizar este enlace para descargarlo de forma segura y rápida.
-
Cuarto, es necesario localizar el archivo descargado en el dispositivo y toque en él para iniciar el proceso de instalación.
-
Quinto, debe seguir las instrucciones en la pantalla y esperar a que termine la instalación.
-
Sexto, es necesario lanzar el juego y disfrutar de dinero y vehículos ilimitados.
-
-
Consejos y trucos para dominar la escuela de conducción de automóviles 2017
-
Car Driving School 2017 es un juego divertido y desafiante que requiere habilidad y estrategia. Aquí hay algunos consejos y trucos que te ayudarán a dominar el juego:
-
Sigue la ley y el límite de velocidad
-
-
Desactivar el modo deportivo y aprender el mapa
-
Si desea mejorar sus habilidades de conducción y pasar los niveles más fácilmente, debe desactivar el modo deportivo en la configuración. El modo deportivo hace que tu coche sea más rápido y sensible, pero también más difícil de controlar. También consume más gasolina y causa más daños. Por lo tanto, es mejor apagarlo y conducir más suave y cuidadosamente. También debe aprender el mapa de cada nivel antes de iniciarlo. De esta manera, sabrás a dónde ir, qué esperar y cómo evitar obstáculos.
-
Diviértete en modo libre y modos multijugador
-
Si quieres tomarte un descanso de los niveles y licencias, puedes divertirte en modo libre o en modo multijugador. En modo libre, puede conducir alrededor de cualquier mapa sin ningún objetivo o restricciones. También puede cambiar entre diferentes vehículos y personalizarlos a su gusto. En los modos multijugador, puedes jugar con tus amigos u otros jugadores en línea en varios modos como carreras, paseos gratis o coger la bandera. También puedes chatear con ellos y hacer nuevos amigos.
-
-
Conclusión
-
Car Driving School 2017 es un gran juego de simulación de conducción que le enseñará cómo conducir diferentes vehículos en escenarios realistas. También se divertirá con varias características, modos, mapas y vehículos. Pero si quieres hacer el juego aún más agradable, usted debe descargar Car Driving School 2017 mod apk que le dará dinero ilimitado y vehículos. También puedes utilizar algunos consejos y trucos que te ayudarán a dominar el juego. ¿Qué estás esperando? Descargar Car Driving School 2017 mod apk y disfrutar de dinero ilimitado y vehículos.
-
Preguntas frecuentes
-
Aquí hay algunas preguntas frecuentes sobre Car Driving School 2017 y Car Driving School 2017 mod apk:
-
-
-
Pregunta
-
Respuesta
-
-
-
¿Es Car Driving School 2017 gratis para jugar?
-
Sí, Car Driving School 2017 es gratuito, pero incluye anuncios y compras dentro de la aplicación para desbloquear vehículos y mejoras premium.
-
-
¿Es Car Driving School 2017 Mod APK seguro de usar?
-
Sí, Car Driving School 2017 mod apk es seguro de usar siempre y cuando se descarga de una fuente confiable. Sin embargo, siempre debes tener cuidado al instalar aplicaciones de fuentes desconocidas y escanearlas en busca de virus o malware.
-
-
-
¿Puedo jugar Car Driving School 2017 sin conexión?
-
Sí, puedes jugar Car Driving School 2017 sin conexión en modo para un jugador. Sin embargo, necesitará una conexión a Internet para jugar modos multijugador o acceder a funciones en línea.
-
-
-
¿Puedo jugar Car Driving School 2017 en PC?
-
Sí, puedes jugar Car Driving School 2017 en PC usando un emulador de Android como Bluestacks o Nox Player. También puedes usar un teclado y un ratón para controlar el juego.
-
-
-
¿Cómo puedo contactar a los desarrolladores de Car Driving School 2017?
-
Puede ponerse en contacto con los desarrolladores de Car Driving School 2017 enviándoles un correo electrónico a support@ovilex.com o visitando su sitio web en https://www.ovilex.com/.
-
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tenacity/_asyncio.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tenacity/_asyncio.py
deleted file mode 100644
index 2e50cd7b40ef18e7f7ee56c0f528bf0ef88b167a..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/tenacity/_asyncio.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright 2016 Étienne Bersac
-# Copyright 2016 Julien Danjou
-# Copyright 2016 Joshua Harlow
-# Copyright 2013-2014 Ray Holder
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import functools
-import sys
-import typing as t
-from asyncio import sleep
-
-from pip._vendor.tenacity import AttemptManager
-from pip._vendor.tenacity import BaseRetrying
-from pip._vendor.tenacity import DoAttempt
-from pip._vendor.tenacity import DoSleep
-from pip._vendor.tenacity import RetryCallState
-
-WrappedFnReturnT = t.TypeVar("WrappedFnReturnT")
-WrappedFn = t.TypeVar("WrappedFn", bound=t.Callable[..., t.Awaitable[t.Any]])
-
-
-class AsyncRetrying(BaseRetrying):
- sleep: t.Callable[[float], t.Awaitable[t.Any]]
-
- def __init__(self, sleep: t.Callable[[float], t.Awaitable[t.Any]] = sleep, **kwargs: t.Any) -> None:
- super().__init__(**kwargs)
- self.sleep = sleep
-
- async def __call__( # type: ignore[override]
- self, fn: WrappedFn, *args: t.Any, **kwargs: t.Any
- ) -> WrappedFnReturnT:
- self.begin()
-
- retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
- while True:
- do = self.iter(retry_state=retry_state)
- if isinstance(do, DoAttempt):
- try:
- result = await fn(*args, **kwargs)
- except BaseException: # noqa: B902
- retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
- else:
- retry_state.set_result(result)
- elif isinstance(do, DoSleep):
- retry_state.prepare_for_next_attempt()
- await self.sleep(do)
- else:
- return do # type: ignore[no-any-return]
-
- def __iter__(self) -> t.Generator[AttemptManager, None, None]:
- raise TypeError("AsyncRetrying object is not iterable")
-
- def __aiter__(self) -> "AsyncRetrying":
- self.begin()
- self._retry_state = RetryCallState(self, fn=None, args=(), kwargs={})
- return self
-
- async def __anext__(self) -> AttemptManager:
- while True:
- do = self.iter(retry_state=self._retry_state)
- if do is None:
- raise StopAsyncIteration
- elif isinstance(do, DoAttempt):
- return AttemptManager(retry_state=self._retry_state)
- elif isinstance(do, DoSleep):
- self._retry_state.prepare_for_next_attempt()
- await self.sleep(do)
- else:
- raise StopAsyncIteration
-
- def wraps(self, fn: WrappedFn) -> WrappedFn:
- fn = super().wraps(fn)
- # Ensure wrapper is recognized as a coroutine function.
-
- @functools.wraps(fn)
- async def async_wrapped(*args: t.Any, **kwargs: t.Any) -> t.Any:
- return await fn(*args, **kwargs)
-
- # Preserve attributes
- async_wrapped.retry = fn.retry # type: ignore[attr-defined]
- async_wrapped.retry_with = fn.retry_with # type: ignore[attr-defined]
-
- return async_wrapped # type: ignore[return-value]
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/windows_support.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/windows_support.py
deleted file mode 100644
index 1ca64fbb54fd1ce2e62f946827b78feafd6c0078..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/windows_support.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import platform
-
-
-def windows_only(func):
- if platform.system() != 'Windows':
- return lambda *args, **kwargs: None
- return func
-
-
-@windows_only
-def hide_file(path):
- """
- Set the hidden attribute on a file or directory.
-
- From http://stackoverflow.com/questions/19622133/
-
- `path` must be text.
- """
- import ctypes
- __import__('ctypes.wintypes')
- SetFileAttributes = ctypes.windll.kernel32.SetFileAttributesW
- SetFileAttributes.argtypes = ctypes.wintypes.LPWSTR, ctypes.wintypes.DWORD
- SetFileAttributes.restype = ctypes.wintypes.BOOL
-
- FILE_ATTRIBUTE_HIDDEN = 0x02
-
- ret = SetFileAttributes(path, FILE_ATTRIBUTE_HIDDEN)
- if not ret:
- raise ctypes.WinError()
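A small usage sketch of the helper above: because of the `windows_only` decorator, the call degrades to a no-op on non-Windows platforms, so it is safe to call unconditionally (the directory name is illustrative):

```python
import tempfile

from setuptools.windows_support import hide_file

# Create a scratch directory and mark it hidden; on Linux/macOS the
# decorated function returns None without touching the filesystem.
scratch = tempfile.mkdtemp(prefix="pkg-cache-")
hide_file(scratch)
```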
diff --git a/spaces/Billyosoro/ESRGAN/scripts/generate_multiscale_DF2K.py b/spaces/Billyosoro/ESRGAN/scripts/generate_multiscale_DF2K.py
deleted file mode 100644
index d4f5d8324b1624e4cb6163754703b8dac2d188fd..0000000000000000000000000000000000000000
--- a/spaces/Billyosoro/ESRGAN/scripts/generate_multiscale_DF2K.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import argparse
-import glob
-import os
-from PIL import Image
-
-
-def main(args):
- # For DF2K, we consider the following three scales,
- # and the smallest image whose shortest edge is 400
- scale_list = [0.75, 0.5, 1 / 3]
- shortest_edge = 400
-
- path_list = sorted(glob.glob(os.path.join(args.input, '*')))
- for path in path_list:
- print(path)
- basename = os.path.splitext(os.path.basename(path))[0]
-
- img = Image.open(path)
- width, height = img.size
- for idx, scale in enumerate(scale_list):
- print(f'\t{scale:.2f}')
- rlt = img.resize((int(width * scale), int(height * scale)), resample=Image.LANCZOS)
- rlt.save(os.path.join(args.output, f'{basename}T{idx}.png'))
-
-        # save the smallest image whose shortest edge is 400
- if width < height:
- ratio = height / width
- width = shortest_edge
- height = int(width * ratio)
- else:
- ratio = width / height
- height = shortest_edge
- width = int(height * ratio)
- rlt = img.resize((int(width), int(height)), resample=Image.LANCZOS)
- rlt.save(os.path.join(args.output, f'{basename}T{idx+1}.png'))
-
-
-if __name__ == '__main__':
- """Generate multi-scale versions for GT images with LANCZOS resampling.
- It is now used for DF2K dataset (DIV2K + Flickr 2K)
- """
- parser = argparse.ArgumentParser()
- parser.add_argument('--input', type=str, default='datasets/DF2K/DF2K_HR', help='Input folder')
- parser.add_argument('--output', type=str, default='datasets/DF2K/DF2K_multiscale', help='Output folder')
- args = parser.parse_args()
-
- os.makedirs(args.output, exist_ok=True)
- main(args)
diff --git a/spaces/Boadiwaa/Recipes/openai/api_resources/abstract/deletable_api_resource.py b/spaces/Boadiwaa/Recipes/openai/api_resources/abstract/deletable_api_resource.py
deleted file mode 100644
index 3a6e83ff0e1802279852ce5032f7aced06a52f31..0000000000000000000000000000000000000000
--- a/spaces/Boadiwaa/Recipes/openai/api_resources/abstract/deletable_api_resource.py
+++ /dev/null
@@ -1,24 +0,0 @@
-from urllib.parse import quote_plus
-
-from openai import error
-from openai.api_resources.abstract.api_resource import APIResource
-from openai.util import ApiType
-
-class DeletableAPIResource(APIResource):
- @classmethod
- def delete(cls, sid, api_type=None, api_version=None, **params):
- if isinstance(cls, APIResource):
- raise ValueError(".delete may only be called as a class method now.")
-
- base = cls.class_url()
- extn = quote_plus(sid)
-
- typed_api_type, api_version = cls._get_api_type_and_version(api_type, api_version)
- if typed_api_type == ApiType.AZURE:
- url = "/%s%s/%s?api-version=%s" % (cls.azure_api_prefix, base, extn, api_version)
- elif typed_api_type == ApiType.OPEN_AI:
- url = "%s/%s" % (base, extn)
- else:
- raise error.InvalidAPIType('Unsupported API type %s' % api_type)
-
- return cls._static_request("delete", url, api_type=api_type, api_version=api_version, **params)
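In the pre-1.0 `openai` SDK, resource classes such as `File` mix in `DeletableAPIResource`, so the class-level `delete` above ends up issuing the HTTP DELETE. A rough usage sketch; the API key and file ID are placeholders, and this assumes the legacy (<1.0) client:

```python
import openai

openai.api_key = "sk-..."  # placeholder key

# File inherits the delete() classmethod shown above: it builds
# "<class_url>/<quoted id>" and performs a DELETE request.
openai.File.delete("file-abc123")  # hypothetical file ID
```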
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/CONTRIBUTING.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/CONTRIBUTING.md
deleted file mode 100644
index b59346ca1a20abbb02a2502967a7ba3854dbe676..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/CONTRIBUTING.md
+++ /dev/null
@@ -1,52 +0,0 @@
-# Contributing to detectron2
-We want to make contributing to this project as easy and transparent as
-possible.
-
-## Issues
-We use GitHub issues to track public bugs and questions.
-Please make sure to follow one of the
-[issue templates](https://github.com/facebookresearch/detectron2/issues/new/choose)
-when reporting any issues.
-
-Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe
-disclosure of security bugs. In those cases, please go through the process
-outlined on that page and do not file a public issue.
-
-## Pull Requests
-We actively welcome your pull requests.
-
-However, if you're adding any significant features, please
-make sure to have a corresponding issue to discuss your motivation and proposals,
-before sending a PR. We do not always accept new features, and we take the following
-factors into consideration:
-
-1. Whether the same feature can be achieved without modifying detectron2.
-Detectron2 is designed so that you can implement many extensions from the outside, e.g.
-those in [projects](https://github.com/facebookresearch/detectron2/tree/master/projects).
-If some part is not as extensible, you can also bring up the issue to make it more extensible.
-2. Whether the feature is potentially useful to a large audience, or only to a small portion of users.
-3. Whether the proposed solution has a good design / interface.
-4. Whether the proposed solution adds extra mental/practical overhead to users who don't
-   need such a feature.
-5. Whether the proposed solution breaks existing APIs.
-
-When sending a PR, please do:
-
-1. If a PR contains multiple orthogonal changes, split it into several PRs.
-2. If you've added code that should be tested, add tests.
-3. For PRs that need experiments (e.g. adding a new model), you don't need to update model zoo,
- but do provide experiment results in the description of the PR.
-4. If APIs are changed, update the documentation.
-5. Ensure the test suite passes.
-6. Make sure your code lints with `./dev/linter.sh`.
-
-
-## Contributor License Agreement ("CLA")
-In order to accept your pull request, we need you to submit a CLA. You only need
-to do this once to work on any of Facebook's open source projects.
-
-Complete your CLA here:
-
-## License
-By contributing to detectron2, you agree that your contributions will be licensed
-under the LICENSE file in the root directory of this source tree.
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/shape_spec.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/shape_spec.py
deleted file mode 100644
index ed7f0d08268a2342cfb8246cc032686f2343ef8f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/shape_spec.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from collections import namedtuple
-
-
-class ShapeSpec(namedtuple("_ShapeSpec", ["channels", "height", "width", "stride"])):
- """
- A simple structure that contains basic shape specification about a tensor.
- It is often used as the auxiliary inputs/outputs of models,
- to obtain the shape inference ability among pytorch modules.
-
- Attributes:
- channels:
- height:
- width:
- stride:
- """
-
- def __new__(cls, *, channels=None, height=None, width=None, stride=None):
- return super().__new__(cls, channels, height, width, stride)
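Since `ShapeSpec` is a namedtuple with a keyword-only constructor, unspecified fields default to `None` and values read like attributes; a quick sketch (assuming detectron2 is installed, where this class is re-exported from `detectron2.layers`):

```python
from detectron2.layers import ShapeSpec

# Describe a feature map with 256 channels at stride 4; spatial size unknown.
spec = ShapeSpec(channels=256, stride=4)
print(spec.channels, spec.stride, spec.height)  # 256 4 None
```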
diff --git a/spaces/CVPR/Image-Animation-using-Thin-Plate-Spline-Motion-Model/style.css b/spaces/CVPR/Image-Animation-using-Thin-Plate-Spline-Motion-Model/style.css
deleted file mode 100644
index 435ebb5987b8913a52f73664c54022374d0c3ed7..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Image-Animation-using-Thin-Plate-Spline-Motion-Model/style.css
+++ /dev/null
@@ -1,19 +0,0 @@
-h1 {
- text-align: center;
-}
-img#overview {
- max-width: 1000px;
- max-height: 600px;
- display: block;
- margin: auto;
-}
-img#style-image {
- max-width: 1000px;
- max-height: 600px;
- display: block;
- margin: auto;
-}
-img#visitor-badge {
- display: block;
- margin: auto;
-}
\ No newline at end of file
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/functional/operators/bitwise_operators.h b/spaces/CVPR/LIVE/thrust/thrust/detail/functional/operators/bitwise_operators.h
deleted file mode 100644
index a6461f9d493132f6f7c331dedb619cc2fa79f8a9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/functional/operators/bitwise_operators.h
+++ /dev/null
@@ -1,338 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include
-#include
-#include
-#include
-#include
-
-namespace thrust
-{
-namespace detail
-{
-namespace functional
-{
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator>,
- actor,
- typename as_actor::type
- >
->
-operator&(const actor &_1, const T2 &_2)
-{
- return compose(transparent_binary_operator>(),
- make_actor(_1),
- make_actor(_2));
-} // end operator&()
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator>,
- typename as_actor::type,
- actor
- >
->
-operator&(const T1 &_1, const actor &_2)
-{
- return compose(transparent_binary_operator>(),
- make_actor(_1),
- make_actor(_2));
-} // end operator&()
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator>,
- actor,
- actor
- >
->
-operator&(const actor &_1, const actor &_2)
-{
- return compose(transparent_binary_operator>(),
- make_actor(_1),
- make_actor(_2));
-} // end operator&()
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator>,
- actor,
- typename as_actor::type
- >
->
-operator|(const actor &_1, const T2 &_2)
-{
- return compose(transparent_binary_operator>(),
- make_actor(_1),
- make_actor(_2));
-} // end operator|()
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator>,
- typename as_actor::type,
- actor
- >
->
-operator|(const T1 &_1, const actor &_2)
-{
- return compose(transparent_binary_operator>(),
- make_actor(_1),
- make_actor(_2));
-} // end operator|()
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator>,
- actor,
- actor
- >
->
-operator|(const actor &_1, const actor &_2)
-{
- return compose(transparent_binary_operator>(),
- make_actor(_1),
- make_actor(_2));
-} // end operator|()
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator>,
- actor,
- typename as_actor::type
- >
->
-operator^(const actor &_1, const T2 &_2)
-{
- return compose(transparent_binary_operator>(),
- make_actor(_1),
- make_actor(_2));
-} // end operator^()
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator>,
- typename as_actor::type,
- actor
- >
->
-operator^(const T1 &_1, const actor &_2)
-{
- return compose(transparent_binary_operator>(),
- make_actor(_1),
- make_actor(_2));
-} // end operator^()
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator>,
- actor,
- actor
- >
->
-operator^(const actor &_1, const actor &_2)
-{
- return compose(transparent_binary_operator>(),
- make_actor(_1),
- make_actor(_2));
-} // end operator^()
-
-
-// there's no standard bit_not functional, so roll an ad hoc one here
-struct bit_not
-{
- using is_transparent = void;
-
- __thrust_exec_check_disable__
- template
- __host__ __device__
- constexpr auto operator()(T1&& t1) const
- noexcept(noexcept(~THRUST_FWD(t1))) -> decltype(~THRUST_FWD(t1))
- {
- return ~THRUST_FWD(t1);
- }
-}; // end bit_not
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_unary_operator,
- actor
- >
->
-operator~(const actor &_1)
-{
- return compose(transparent_unary_operator(), _1);
-} // end operator~()
-
-// there's no standard bit_lshift functional, so roll an ad hoc one here
-struct bit_lshift
-{
- using is_transparent = void;
-
- __thrust_exec_check_disable__
- template
- __host__ __device__
- constexpr auto operator()(T1&& t1, T2&& t2) const
- noexcept(noexcept(THRUST_FWD(t1) << THRUST_FWD(t2)))
- -> decltype(THRUST_FWD(t1) << THRUST_FWD(t2))
- {
- return THRUST_FWD(t1) << THRUST_FWD(t2);
- }
-};
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator,
- actor,
- typename as_actor::type
- >
->
-operator<<(const actor &_1, const T2 &_2)
-{
- return compose(transparent_binary_operator(),
- make_actor(_1),
- make_actor(_2));
-} // end operator<<()
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator,
- typename as_actor::type,
- actor
- >
->
-operator<<(const T1 &_1, const actor &_2)
-{
- return compose(transparent_binary_operator(),
- make_actor(_1),
- make_actor(_2));
-} // end operator<<()
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator,
- actor,
- actor
- >
->
-operator<<(const actor &_1, const actor &_2)
-{
- return compose(transparent_binary_operator(),
- make_actor(_1),
- make_actor(_2));
-} // end operator<<()
-
-// there's no standard bit_rshift functional, so roll an ad hoc one here
-struct bit_rshift
-{
- using is_transparent = void;
-
- __thrust_exec_check_disable__
- template
- __host__ __device__
-    constexpr auto operator()(T1&& t1, T2&& t2) const
- noexcept(noexcept(THRUST_FWD(t1) >> THRUST_FWD(t2)))
- -> decltype(THRUST_FWD(t1) >> THRUST_FWD(t2))
- {
- return THRUST_FWD(t1) >> THRUST_FWD(t2);
- }
-};
-
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator,
- actor,
- typename as_actor::type
- >
->
-operator>>(const actor &_1, const T2 &_2)
-{
- return compose(transparent_binary_operator(),
- make_actor(_1),
- make_actor(_2));
-} // end operator>>()
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator,
- typename as_actor::type,
- actor
- >
->
-operator>>(const T1 &_1, const actor &_2)
-{
- return compose(transparent_binary_operator(),
- make_actor(_1),
- make_actor(_2));
-} // end operator>>()
-
-template
-__host__ __device__
-actor<
- composite<
- transparent_binary_operator,
- actor,
- actor
- >
->
-operator>>(const actor &_1, const actor &_2)
-{
- return compose(transparent_binary_operator(),
- make_actor(_1),
- make_actor(_2));
-} // end operator>>()
-
-} // end functional
-} // end detail
-} // end thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/replace.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/replace.h
deleted file mode 100644
index d8fb5746f1ced28be6571c8535ce0d8615863234..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/replace.h
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include
-
-// the purpose of this header is to #include the replace.h header
-// of the sequential, host, and device systems. It should be #included in any
-// code which uses adl to dispatch replace
-
-#include
-
-// SCons can't see through the #defines below to figure out what this header
-// includes, so we fake it out by specifying all possible files we might end up
-// including inside an #if 0.
-#if 0
-#include
-#include
-#include
-#include
-#endif
-
-#define __THRUST_HOST_SYSTEM_REPLACE_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/replace.h>
-#include __THRUST_HOST_SYSTEM_REPLACE_HEADER
-#undef __THRUST_HOST_SYSTEM_REPLACE_HEADER
-
-#define __THRUST_DEVICE_SYSTEM_REPLACE_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/replace.h>
-#include __THRUST_DEVICE_SYSTEM_REPLACE_HEADER
-#undef __THRUST_DEVICE_SYSTEM_REPLACE_HEADER
-
diff --git a/spaces/CVPR/WALT/mmdet/models/backbones/darknet.py b/spaces/CVPR/WALT/mmdet/models/backbones/darknet.py
deleted file mode 100644
index 517fe26259217792e0dad80ca3824d914cfe3904..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/backbones/darknet.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright (c) 2019 Western Digital Corporation or its affiliates.
-
-import logging
-
-import torch.nn as nn
-from mmcv.cnn import ConvModule, constant_init, kaiming_init
-from mmcv.runner import load_checkpoint
-from torch.nn.modules.batchnorm import _BatchNorm
-
-from ..builder import BACKBONES
-
-
-class ResBlock(nn.Module):
- """The basic residual block used in Darknet. Each ResBlock consists of two
- ConvModules and the input is added to the final output. Each ConvModule is
-    composed of Conv, BN, and LeakyReLU. In the YOLOv3 paper, the first conv
-    layer has half as many filters as the second conv layer. The first conv
-    layer has a filter size of 1x1 and the second has a filter size of 3x3.
-
- Args:
- in_channels (int): The input channels. Must be even.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Dictionary to construct and config norm layer.
- Default: dict(type='BN', requires_grad=True)
- act_cfg (dict): Config dict for activation layer.
- Default: dict(type='LeakyReLU', negative_slope=0.1).
- """
-
- def __init__(self,
- in_channels,
- conv_cfg=None,
- norm_cfg=dict(type='BN', requires_grad=True),
- act_cfg=dict(type='LeakyReLU', negative_slope=0.1)):
- super(ResBlock, self).__init__()
- assert in_channels % 2 == 0 # ensure the in_channels is even
- half_in_channels = in_channels // 2
-
- # shortcut
- cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)
-
- self.conv1 = ConvModule(in_channels, half_in_channels, 1, **cfg)
- self.conv2 = ConvModule(
- half_in_channels, in_channels, 3, padding=1, **cfg)
-
- def forward(self, x):
- residual = x
- out = self.conv1(x)
- out = self.conv2(out)
- out = out + residual
-
- return out
-
-
-@BACKBONES.register_module()
-class Darknet(nn.Module):
- """Darknet backbone.
-
- Args:
- depth (int): Depth of Darknet. Currently only support 53.
- out_indices (Sequence[int]): Output from which stages.
- frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
- -1 means not freezing any parameters. Default: -1.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Dictionary to construct and config norm layer.
- Default: dict(type='BN', requires_grad=True)
- act_cfg (dict): Config dict for activation layer.
- Default: dict(type='LeakyReLU', negative_slope=0.1).
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only.
-
- Example:
- >>> from mmdet.models import Darknet
- >>> import torch
- >>> self = Darknet(depth=53)
- >>> self.eval()
- >>> inputs = torch.rand(1, 3, 416, 416)
- >>> level_outputs = self.forward(inputs)
- >>> for level_out in level_outputs:
- ... print(tuple(level_out.shape))
- ...
- (1, 256, 52, 52)
- (1, 512, 26, 26)
- (1, 1024, 13, 13)
- """
-
- # Dict(depth: (layers, channels))
- arch_settings = {
- 53: ((1, 2, 8, 8, 4), ((32, 64), (64, 128), (128, 256), (256, 512),
- (512, 1024)))
- }
-
- def __init__(self,
- depth=53,
- out_indices=(3, 4, 5),
- frozen_stages=-1,
- conv_cfg=None,
- norm_cfg=dict(type='BN', requires_grad=True),
- act_cfg=dict(type='LeakyReLU', negative_slope=0.1),
- norm_eval=True):
- super(Darknet, self).__init__()
- if depth not in self.arch_settings:
- raise KeyError(f'invalid depth {depth} for darknet')
- self.depth = depth
- self.out_indices = out_indices
- self.frozen_stages = frozen_stages
- self.layers, self.channels = self.arch_settings[depth]
-
- cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)
-
- self.conv1 = ConvModule(3, 32, 3, padding=1, **cfg)
-
- self.cr_blocks = ['conv1']
- for i, n_layers in enumerate(self.layers):
- layer_name = f'conv_res_block{i + 1}'
- in_c, out_c = self.channels[i]
- self.add_module(
- layer_name,
- self.make_conv_res_block(in_c, out_c, n_layers, **cfg))
- self.cr_blocks.append(layer_name)
-
- self.norm_eval = norm_eval
-
- def forward(self, x):
- outs = []
- for i, layer_name in enumerate(self.cr_blocks):
- cr_block = getattr(self, layer_name)
- x = cr_block(x)
- if i in self.out_indices:
- outs.append(x)
-
- return tuple(outs)
-
- def init_weights(self, pretrained=None):
- if isinstance(pretrained, str):
- logger = logging.getLogger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
- constant_init(m, 1)
-
- else:
- raise TypeError('pretrained must be a str or None')
-
- def _freeze_stages(self):
- if self.frozen_stages >= 0:
- for i in range(self.frozen_stages):
- m = getattr(self, self.cr_blocks[i])
- m.eval()
- for param in m.parameters():
- param.requires_grad = False
-
- def train(self, mode=True):
- super(Darknet, self).train(mode)
- self._freeze_stages()
- if mode and self.norm_eval:
- for m in self.modules():
- if isinstance(m, _BatchNorm):
- m.eval()
-
- @staticmethod
- def make_conv_res_block(in_channels,
- out_channels,
- res_repeat,
- conv_cfg=None,
- norm_cfg=dict(type='BN', requires_grad=True),
- act_cfg=dict(type='LeakyReLU',
- negative_slope=0.1)):
- """In Darknet backbone, ConvLayer is usually followed by ResBlock. This
- function will make that. The Conv layers always have 3x3 filters with
- stride=2. The number of the filters in Conv layer is the same as the
- out channels of the ResBlock.
-
- Args:
- in_channels (int): The number of input channels.
- out_channels (int): The number of output channels.
- res_repeat (int): The number of ResBlocks.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Dictionary to construct and config norm layer.
- Default: dict(type='BN', requires_grad=True)
- act_cfg (dict): Config dict for activation layer.
- Default: dict(type='LeakyReLU', negative_slope=0.1).
- """
-
- cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg)
-
- model = nn.Sequential()
- model.add_module(
- 'conv',
- ConvModule(
- in_channels, out_channels, 3, stride=2, padding=1, **cfg))
- for idx in range(res_repeat):
- model.add_module('res{}'.format(idx),
- ResBlock(out_channels, **cfg))
- return model
diff --git a/spaces/CVPR/WALT/mmdet/models/losses/cross_entropy_loss.py b/spaces/CVPR/WALT/mmdet/models/losses/cross_entropy_loss.py
deleted file mode 100644
index 3fa908d2789e291616acf969912bf4429b1b07bf..0000000000000000000000000000000000000000
--- a/spaces/CVPR/WALT/mmdet/models/losses/cross_entropy_loss.py
+++ /dev/null
@@ -1,216 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..builder import LOSSES
-from .utils import weight_reduce_loss
-
-
-def cross_entropy(pred,
- label,
- weight=None,
- reduction='mean',
- avg_factor=None,
- class_weight=None):
- """Calculate the CrossEntropy loss.
-
- Args:
- pred (torch.Tensor): The prediction with shape (N, C), C is the number
- of classes.
- label (torch.Tensor): The learning label of the prediction.
- weight (torch.Tensor, optional): Sample-wise loss weight.
- reduction (str, optional): The method used to reduce the loss.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- class_weight (list[float], optional): The weight for each class.
-
- Returns:
- torch.Tensor: The calculated loss
- """
- # element-wise losses
- loss = F.cross_entropy(pred, label, weight=class_weight, reduction='none')
-
- # apply weights and do the reduction
- if weight is not None:
- weight = weight.float()
- loss = weight_reduce_loss(
- loss, weight=weight, reduction=reduction, avg_factor=avg_factor)
-
- return loss
-
-
-def _expand_onehot_labels(labels, label_weights, label_channels):
- bin_labels = labels.new_full((labels.size(0), label_channels), 0)
- inds = torch.nonzero(
- (labels >= 0) & (labels < label_channels), as_tuple=False).squeeze()
- if inds.numel() > 0:
- bin_labels[inds, labels[inds]] = 1
-
- if label_weights is None:
- bin_label_weights = None
- else:
- bin_label_weights = label_weights.view(-1, 1).expand(
- label_weights.size(0), label_channels)
-
- return bin_labels, bin_label_weights
-
-
-def binary_cross_entropy(pred,
- label,
- weight=None,
- reduction='mean',
- avg_factor=None,
- class_weight=None):
- """Calculate the binary CrossEntropy loss.
-
- Args:
- pred (torch.Tensor): The prediction with shape (N, 1).
- label (torch.Tensor): The learning label of the prediction.
- weight (torch.Tensor, optional): Sample-wise loss weight.
- reduction (str, optional): The method used to reduce the loss.
- Options are "none", "mean" and "sum".
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- class_weight (list[float], optional): The weight for each class.
-
- Returns:
- torch.Tensor: The calculated loss
- """
- if pred.dim() != label.dim():
- label, weight = _expand_onehot_labels(label, weight, pred.size(-1))
-
- # weighted element-wise losses
- if weight is not None:
- weight = weight.float()
- loss = F.binary_cross_entropy_with_logits(
- pred, label.float(), pos_weight=class_weight, reduction='none')
- # do the reduction for the weighted loss
- loss = weight_reduce_loss(
- loss, weight, reduction=reduction, avg_factor=avg_factor)
-
- return loss
-
-
-def mask_cross_entropy(pred,
- target,
- label,
- reduction='mean',
- avg_factor=None,
- class_weight=None):
- """Calculate the CrossEntropy loss for masks.
-
- Args:
- pred (torch.Tensor): The prediction with shape (N, C, *), C is the
- number of classes. The trailing * indicates arbitrary shape.
- target (torch.Tensor): The learning label of the prediction.
-        label (torch.Tensor): ``label`` indicates the class label of the object
-            that the mask corresponds to. It is used to select the mask channel
-            of that class when the mask prediction is not class-agnostic.
- reduction (str, optional): The method used to reduce the loss.
- Options are "none", "mean" and "sum".
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
- class_weight (list[float], optional): The weight for each class.
-
- Returns:
- torch.Tensor: The calculated loss
-
- Example:
- >>> N, C = 3, 11
- >>> H, W = 2, 2
- >>> pred = torch.randn(N, C, H, W) * 1000
- >>> target = torch.rand(N, H, W)
- >>> label = torch.randint(0, C, size=(N,))
- >>> reduction = 'mean'
- >>> avg_factor = None
- >>> class_weights = None
- >>> loss = mask_cross_entropy(pred, target, label, reduction,
- >>> avg_factor, class_weights)
- >>> assert loss.shape == (1,)
- """
- # TODO: handle these two reserved arguments
- assert reduction == 'mean' and avg_factor is None
- num_rois = pred.size()[0]
- inds = torch.arange(0, num_rois, dtype=torch.long, device=pred.device)
- pred_slice = pred[inds, label].squeeze(1)
- return F.binary_cross_entropy_with_logits(
- pred_slice, target, weight=class_weight, reduction='mean')[None]
-
-
-@LOSSES.register_module()
-class CrossEntropyLoss(nn.Module):
-
- def __init__(self,
- use_sigmoid=False,
- use_mask=False,
- reduction='mean',
- class_weight=None,
- loss_weight=1.0):
- """CrossEntropyLoss.
-
- Args:
- use_sigmoid (bool, optional): Whether the prediction uses sigmoid
-                or softmax. Defaults to False.
- use_mask (bool, optional): Whether to use mask cross entropy loss.
- Defaults to False.
-            reduction (str, optional): The method used to reduce the loss.
-                Options are "none", "mean" and "sum". Defaults to 'mean'.
- class_weight (list[float], optional): Weight of each class.
- Defaults to None.
- loss_weight (float, optional): Weight of the loss. Defaults to 1.0.
- """
- super(CrossEntropyLoss, self).__init__()
- assert (use_sigmoid is False) or (use_mask is False)
- self.use_sigmoid = use_sigmoid
- self.use_mask = use_mask
- self.reduction = reduction
- self.loss_weight = loss_weight
- self.class_weight = class_weight
-
- if self.use_sigmoid:
- self.cls_criterion = binary_cross_entropy
- elif self.use_mask:
- self.cls_criterion = mask_cross_entropy
- else:
- self.cls_criterion = cross_entropy
-
- def forward(self,
- cls_score,
- label,
- weight=None,
- avg_factor=None,
- reduction_override=None,
- **kwargs):
- """Forward function.
-
- Args:
- cls_score (torch.Tensor): The prediction.
- label (torch.Tensor): The learning label of the prediction.
- weight (torch.Tensor, optional): Sample-wise loss weight.
- avg_factor (int, optional): Average factor that is used to average
- the loss. Defaults to None.
-            reduction_override (str, optional): The reduction method used to
-                override ``self.reduction``. Options are "none", "mean" and "sum".
- Returns:
- torch.Tensor: The calculated loss
- """
- assert reduction_override in (None, 'none', 'mean', 'sum')
- reduction = (
- reduction_override if reduction_override else self.reduction)
- if self.class_weight is not None:
- class_weight = cls_score.new_tensor(
- self.class_weight, device=cls_score.device)
- else:
- class_weight = None
- loss_cls = self.loss_weight * self.cls_criterion(
- cls_score,
- label,
- weight,
- class_weight=class_weight,
- reduction=reduction,
- avg_factor=avg_factor,
- **kwargs)
- return loss_cls
-
-
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/hold_tight/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/hold_tight/__init__.py
deleted file mode 100644
index 98dc735a4250c2e6e8b93cc89ce90646dad7fc15..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/hold_tight/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from pathlib import Path
-from typing import List
-
-from pil_utils import BuildImage
-
-from meme_generator import add_meme
-
-img_dir = Path(__file__).parent / "images"
-
-
-def hold_tight(images: List[BuildImage], texts, args):
- img = images[0].convert("RGBA").resize((159, 171), keep_ratio=True)
- frame = BuildImage.open(img_dir / "0.png")
- frame.paste(img, (113, 205), below=True)
- return frame.save_jpg()
-
-
-add_meme("hold_tight", hold_tight, min_images=1, max_images=1, keywords=["抱紧"])
diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/midas/midas/__init__.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/midas/midas/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/README.md b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/README.md
deleted file mode 100644
index 8ae85e0567cbe71ef1f1df4137cbf549240065d2..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/README.md
+++ /dev/null
@@ -1,90 +0,0 @@
-# Setting Up Datasets
-This file describes how to perform training on other datasets.
-
-Currently, only the Pascal VOC dataset can be loaded from its original format and evaluated with Pascal-style results.
-
-We expect annotations from other datasets to be converted to COCO json format, and
-the output will be COCO-style (i.e. AP, AP50, AP75, APs, APm, APl for bbox and segm).
-
-## Creating Symlinks for PASCAL VOC
-
-We assume that your symlinked `datasets/voc/VOC` directory has the following structure:
-
-```
-VOC
-|_ JPEGImages
-| |_ .jpg
-| |_ ...
-| |_ .jpg
-|_ Annotations
-| |_ pascal_train.json (optional)
-| |_ pascal_val.json (optional)
-| |_ pascal_test.json (optional)
-| |_ .xml
-| |_ ...
-| |_ .xml
-|_ VOCdevkit
-```
-
-Create symlinks for `voc/VOC`:
-
-```
-cd ~/github/maskrcnn-benchmark
-mkdir -p datasets/voc/VOC
-ln -s /path/to/VOC datasets/voc/VOC
-```
-Example configuration files for PASCAL VOC could be found [here](https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/configs/pascal_voc/).
-
-### PASCAL VOC Annotations in COCO Format
-To output COCO-style evaluation results, PASCAL VOC annotations in COCO json format are required and can be downloaded from [here](https://storage.googleapis.com/coco-dataset/external/PASCAL_VOC.zip)
-via http://cocodataset.org/#external.
-
-## Creating Symlinks for Cityscapes:
-
-We assume that your symlinked `datasets/cityscapes` directory has the following structure:
-
-```
-cityscapes
-|_ images
-| |_ .jpg
-| |_ ...
-| |_ .jpg
-|_ annotations
-| |_ instanceonly_gtFile_train.json
-| |_ ...
-|_ raw
- |_ gtFine
- |_ ...
- |_ README.md
-```
-
-Create symlinks for `cityscapes`:
-
-```
-cd ~/github/maskrcnn-benchmark
-mkdir -p datasets/cityscapes
-ln -s /path/to/cityscapes datasets/cityscapes
-```
-
-### Steps to convert Cityscapes Annotations to COCO Format
-1. Download gtFine_trainvaltest.zip from https://www.cityscapes-dataset.com/downloads/ (login required)
-2. Extract it to /path/to/gtFine_trainvaltest
-```
-cityscapes
-|_ gtFine_trainvaltest.zip
-|_ gtFine_trainvaltest
- |_ gtFine
-```
-3. Run the below commands to convert the annotations
-
-```
-cd ~/github
-git clone https://github.com/mcordts/cityscapesScripts.git
-cd cityscapesScripts
-cp ~/github/maskrcnn-benchmark/tools/cityscapes/instances2dict_with_polygons.py cityscapesscripts/evaluation
-python setup.py install
-cd ~/github/maskrcnn-benchmark
-python tools/cityscapes/convert_cityscapes_to_coco.py --datadir /path/to/cityscapes --outdir /path/to/cityscapes/annotations
-```
-
-Example configuration files for Cityscapes could be found [here](https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/configs/cityscapes/).
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/ttGlyphPen.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/ttGlyphPen.py
deleted file mode 100644
index de2ccaeeb45c18c80caae049f3bd26b4ff22e99e..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/ttGlyphPen.py
+++ /dev/null
@@ -1,335 +0,0 @@
-from array import array
-from typing import Any, Callable, Dict, Optional, Tuple
-from fontTools.misc.fixedTools import MAX_F2DOT14, floatToFixedToFloat
-from fontTools.misc.loggingTools import LogMixin
-from fontTools.pens.pointPen import AbstractPointPen
-from fontTools.misc.roundTools import otRound
-from fontTools.pens.basePen import LoggingPen, PenError
-from fontTools.pens.transformPen import TransformPen, TransformPointPen
-from fontTools.ttLib.tables import ttProgram
-from fontTools.ttLib.tables._g_l_y_f import flagOnCurve, flagCubic
-from fontTools.ttLib.tables._g_l_y_f import Glyph
-from fontTools.ttLib.tables._g_l_y_f import GlyphComponent
-from fontTools.ttLib.tables._g_l_y_f import GlyphCoordinates
-from fontTools.ttLib.tables._g_l_y_f import dropImpliedOnCurvePoints
-import math
-
-
-__all__ = ["TTGlyphPen", "TTGlyphPointPen"]
-
-
-class _TTGlyphBasePen:
- def __init__(
- self,
- glyphSet: Optional[Dict[str, Any]],
- handleOverflowingTransforms: bool = True,
- ) -> None:
- """
- Construct a new pen.
-
- Args:
- glyphSet (Dict[str, Any]): A glyphset object, used to resolve components.
- handleOverflowingTransforms (bool): See below.
-
- If ``handleOverflowingTransforms`` is True, the components' transform values
- are checked that they don't overflow the limits of a F2Dot14 number:
- -2.0 <= v < +2.0. If any transform value exceeds these, the composite
- glyph is decomposed.
-
- An exception to this rule is done for values that are very close to +2.0
- (both for consistency with the -2.0 case, and for the relative frequency
- these occur in real fonts). When almost +2.0 values occur (and all other
- values are within the range -2.0 <= x <= +2.0), they are clamped to the
- maximum positive value that can still be encoded as an F2Dot14: i.e.
- 1.99993896484375.
-
- If False, no check is done and all components are translated unmodified
- into the glyf table, followed by an inevitable ``struct.error`` once an
- attempt is made to compile them.
-
- If both contours and components are present in a glyph, the components
- are decomposed.
- """
- self.glyphSet = glyphSet
- self.handleOverflowingTransforms = handleOverflowingTransforms
- self.init()
-
- def _decompose(
- self,
- glyphName: str,
- transformation: Tuple[float, float, float, float, float, float],
- ):
- tpen = self.transformPen(self, transformation)
- getattr(self.glyphSet[glyphName], self.drawMethod)(tpen)
-
- def _isClosed(self):
- """
- Check if the current path is closed.
- """
- raise NotImplementedError
-
- def init(self) -> None:
- self.points = []
- self.endPts = []
- self.types = []
- self.components = []
-
- def addComponent(
- self,
- baseGlyphName: str,
- transformation: Tuple[float, float, float, float, float, float],
- identifier: Optional[str] = None,
- **kwargs: Any,
- ) -> None:
- """
- Add a sub glyph.
- """
- self.components.append((baseGlyphName, transformation))
-
- def _buildComponents(self, componentFlags):
- if self.handleOverflowingTransforms:
- # we can't encode transform values > 2 or < -2 in F2Dot14,
- # so we must decompose the glyph if any transform exceeds these
- overflowing = any(
- s > 2 or s < -2
- for (glyphName, transformation) in self.components
- for s in transformation[:4]
- )
- components = []
- for glyphName, transformation in self.components:
- if glyphName not in self.glyphSet:
- self.log.warning(f"skipped non-existing component '{glyphName}'")
- continue
- if self.points or (self.handleOverflowingTransforms and overflowing):
- # can't have both coordinates and components, so decompose
- self._decompose(glyphName, transformation)
- continue
-
- component = GlyphComponent()
- component.glyphName = glyphName
- component.x, component.y = (otRound(v) for v in transformation[4:])
- # quantize floats to F2Dot14 so we get same values as when decompiled
- # from a binary glyf table
- transformation = tuple(
- floatToFixedToFloat(v, 14) for v in transformation[:4]
- )
- if transformation != (1, 0, 0, 1):
- if self.handleOverflowingTransforms and any(
- MAX_F2DOT14 < s <= 2 for s in transformation
- ):
- # clamp values ~= +2.0 so we can keep the component
- transformation = tuple(
- MAX_F2DOT14 if MAX_F2DOT14 < s <= 2 else s
- for s in transformation
- )
- component.transform = (transformation[:2], transformation[2:])
- component.flags = componentFlags
- components.append(component)
- return components
-
- def glyph(
- self,
- componentFlags: int = 0x04,
- dropImpliedOnCurves: bool = False,
- *,
- round: Callable[[float], int] = otRound,
- ) -> Glyph:
- """
- Returns a :py:class:`~._g_l_y_f.Glyph` object representing the glyph.
-
- Args:
- componentFlags: Flags to use for component glyphs. (default: 0x04)
-
- dropImpliedOnCurves: Whether to remove implied-oncurve points. (default: False)
- """
- if not self._isClosed():
- raise PenError("Didn't close last contour.")
- components = self._buildComponents(componentFlags)
-
- glyph = Glyph()
- glyph.coordinates = GlyphCoordinates(self.points)
- glyph.endPtsOfContours = self.endPts
- glyph.flags = array("B", self.types)
- self.init()
-
- if components:
- # If both components and contours were present, they have by now
- # been decomposed by _buildComponents.
- glyph.components = components
- glyph.numberOfContours = -1
- else:
- glyph.numberOfContours = len(glyph.endPtsOfContours)
- glyph.program = ttProgram.Program()
- glyph.program.fromBytecode(b"")
- if dropImpliedOnCurves:
- dropImpliedOnCurvePoints(glyph)
- glyph.coordinates.toInt(round=round)
-
- return glyph
-
-
-class TTGlyphPen(_TTGlyphBasePen, LoggingPen):
- """
- Pen used for drawing to a TrueType glyph.
-
- This pen can be used to construct or modify glyphs in a TrueType format
- font. After using the pen to draw, use the ``.glyph()`` method to retrieve
- a :py:class:`~._g_l_y_f.Glyph` object representing the glyph.
- """
-
- drawMethod = "draw"
- transformPen = TransformPen
-
- def __init__(
- self,
- glyphSet: Optional[Dict[str, Any]] = None,
- handleOverflowingTransforms: bool = True,
- outputImpliedClosingLine: bool = False,
- ) -> None:
- super().__init__(glyphSet, handleOverflowingTransforms)
- self.outputImpliedClosingLine = outputImpliedClosingLine
-
- def _addPoint(self, pt: Tuple[float, float], tp: int) -> None:
- self.points.append(pt)
- self.types.append(tp)
-
- def _popPoint(self) -> None:
- self.points.pop()
- self.types.pop()
-
- def _isClosed(self) -> bool:
- return (not self.points) or (
- self.endPts and self.endPts[-1] == len(self.points) - 1
- )
-
- def lineTo(self, pt: Tuple[float, float]) -> None:
- self._addPoint(pt, flagOnCurve)
-
- def moveTo(self, pt: Tuple[float, float]) -> None:
- if not self._isClosed():
- raise PenError('"move"-type point must begin a new contour.')
- self._addPoint(pt, flagOnCurve)
-
- def curveTo(self, *points) -> None:
- assert len(points) % 2 == 1
- for pt in points[:-1]:
- self._addPoint(pt, flagCubic)
-
- # last point is None if there are no on-curve points
- if points[-1] is not None:
- self._addPoint(points[-1], 1)
-
- def qCurveTo(self, *points) -> None:
- assert len(points) >= 1
- for pt in points[:-1]:
- self._addPoint(pt, 0)
-
- # last point is None if there are no on-curve points
- if points[-1] is not None:
- self._addPoint(points[-1], 1)
-
- def closePath(self) -> None:
- endPt = len(self.points) - 1
-
- # ignore anchors (one-point paths)
- if endPt == 0 or (self.endPts and endPt == self.endPts[-1] + 1):
- self._popPoint()
- return
-
- if not self.outputImpliedClosingLine:
- # if first and last point on this path are the same, remove last
- startPt = 0
- if self.endPts:
- startPt = self.endPts[-1] + 1
- if self.points[startPt] == self.points[endPt]:
- self._popPoint()
- endPt -= 1
-
- self.endPts.append(endPt)
-
- def endPath(self) -> None:
- # TrueType contours are always "closed"
- self.closePath()
-
-
-class TTGlyphPointPen(_TTGlyphBasePen, LogMixin, AbstractPointPen):
- """
- Point pen used for drawing to a TrueType glyph.
-
- This pen can be used to construct or modify glyphs in a TrueType format
- font. After using the pen to draw, use the ``.glyph()`` method to retrieve
- a :py:class:`~._g_l_y_f.Glyph` object representing the glyph.
- """
-
- drawMethod = "drawPoints"
- transformPen = TransformPointPen
-
- def init(self) -> None:
- super().init()
- self._currentContourStartIndex = None
-
- def _isClosed(self) -> bool:
- return self._currentContourStartIndex is None
-
- def beginPath(self, identifier: Optional[str] = None, **kwargs: Any) -> None:
- """
- Start a new sub path.
- """
- if not self._isClosed():
- raise PenError("Didn't close previous contour.")
- self._currentContourStartIndex = len(self.points)
-
- def endPath(self) -> None:
- """
- End the current sub path.
- """
- # TrueType contours are always "closed"
- if self._isClosed():
- raise PenError("Contour is already closed.")
- if self._currentContourStartIndex == len(self.points):
- # ignore empty contours
- self._currentContourStartIndex = None
- return
-
- contourStart = self.endPts[-1] + 1 if self.endPts else 0
- self.endPts.append(len(self.points) - 1)
- self._currentContourStartIndex = None
-
- # Resolve types for any cubic segments
- flags = self.types
- for i in range(contourStart, len(flags)):
- if flags[i] == "curve":
- j = i - 1
- if j < contourStart:
- j = len(flags) - 1
- while flags[j] == 0:
- flags[j] = flagCubic
- j -= 1
- flags[i] = flagOnCurve
-
- def addPoint(
- self,
- pt: Tuple[float, float],
- segmentType: Optional[str] = None,
- smooth: bool = False,
- name: Optional[str] = None,
- identifier: Optional[str] = None,
- **kwargs: Any,
- ) -> None:
- """
- Add a point to the current sub path.
- """
- if self._isClosed():
- raise PenError("Can't add a point to a closed contour.")
- if segmentType is None:
- self.types.append(0)
- elif segmentType in ("line", "move"):
- self.types.append(flagOnCurve)
- elif segmentType == "qcurve":
- self.types.append(flagOnCurve)
- elif segmentType == "curve":
- self.types.append("curve")
- else:
- raise AssertionError(segmentType)
-
- self.points.append(pt)
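A short sketch of the segment-pen workflow above: draw one closed contour with `TTGlyphPen`, then call `.glyph()` to get a glyf-table `Glyph`; no glyphSet is needed when there are no components:

```python
from fontTools.pens.ttGlyphPen import TTGlyphPen

pen = TTGlyphPen(glyphSet=None)

# One square contour; closePath() records the contour's end point index.
pen.moveTo((0, 0))
pen.lineTo((0, 700))
pen.lineTo((700, 700))
pen.lineTo((700, 0))
pen.closePath()

glyph = pen.glyph()  # _g_l_y_f.Glyph with one contour and no components
print(glyph.numberOfContours)  # 1
```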
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/kerning.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/kerning.py
deleted file mode 100644
index 8a1dca5b680fdd02d1e6ef5797e33e617005c254..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/kerning.py
+++ /dev/null
@@ -1,91 +0,0 @@
-def lookupKerningValue(
- pair, kerning, groups, fallback=0, glyphToFirstGroup=None, glyphToSecondGroup=None
-):
- """
- Note: This expects kerning to be a flat dictionary
- of kerning pairs, not the nested structure used
- in kerning.plist.
-
- >>> groups = {
- ... "public.kern1.O" : ["O", "D", "Q"],
- ... "public.kern2.E" : ["E", "F"]
- ... }
- >>> kerning = {
- ... ("public.kern1.O", "public.kern2.E") : -100,
- ... ("public.kern1.O", "F") : -200,
- ... ("D", "F") : -300
- ... }
- >>> lookupKerningValue(("D", "F"), kerning, groups)
- -300
- >>> lookupKerningValue(("O", "F"), kerning, groups)
- -200
- >>> lookupKerningValue(("O", "E"), kerning, groups)
- -100
- >>> lookupKerningValue(("O", "O"), kerning, groups)
- 0
- >>> lookupKerningValue(("E", "E"), kerning, groups)
- 0
- >>> lookupKerningValue(("E", "O"), kerning, groups)
- 0
- >>> lookupKerningValue(("X", "X"), kerning, groups)
- 0
- >>> lookupKerningValue(("public.kern1.O", "public.kern2.E"),
- ... kerning, groups)
- -100
- >>> lookupKerningValue(("public.kern1.O", "F"), kerning, groups)
- -200
- >>> lookupKerningValue(("O", "public.kern2.E"), kerning, groups)
- -100
- >>> lookupKerningValue(("public.kern1.X", "public.kern2.X"), kerning, groups)
- 0
- """
- # quickly check to see if the pair is in the kerning dictionary
- if pair in kerning:
- return kerning[pair]
- # create glyph to group mapping
- if glyphToFirstGroup is not None:
- assert glyphToSecondGroup is not None
- if glyphToSecondGroup is not None:
- assert glyphToFirstGroup is not None
- if glyphToFirstGroup is None:
- glyphToFirstGroup = {}
- glyphToSecondGroup = {}
- for group, groupMembers in groups.items():
- if group.startswith("public.kern1."):
- for glyph in groupMembers:
- glyphToFirstGroup[glyph] = group
- elif group.startswith("public.kern2."):
- for glyph in groupMembers:
- glyphToSecondGroup[glyph] = group
- # get group names and make sure first and second are glyph names
- first, second = pair
- firstGroup = secondGroup = None
- if first.startswith("public.kern1."):
- firstGroup = first
- first = None
- else:
- firstGroup = glyphToFirstGroup.get(first)
- if second.startswith("public.kern2."):
- secondGroup = second
- second = None
- else:
- secondGroup = glyphToSecondGroup.get(second)
- # make an ordered list of pairs to look up
- pairs = [
- (first, second),
- (first, secondGroup),
- (firstGroup, second),
- (firstGroup, secondGroup),
- ]
- # look up the pairs and return any matches
- for pair in pairs:
- if pair in kerning:
- return kerning[pair]
- # use the fallback value
- return fallback
-
-
-if __name__ == "__main__":
- import doctest
-
- doctest.testmod()
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-edf307d2.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-edf307d2.css
deleted file mode 100644
index 690ed736f2c29c32ba8499343659e9fde81f2098..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-edf307d2.css
+++ /dev/null
@@ -1 +0,0 @@
-div.svelte-1yrv54 .math.inline{fill:var(--body-text-color);display:inline-block;vertical-align:middle;padding:var(--size-1-5) -var(--size-1);color:var(--body-text-color)}div.svelte-1yrv54 .math.inline svg{display:inline;margin-bottom:.22em}div.svelte-1yrv54{max-width:100%}.min.svelte-1yrv54{min-height:var(--size-24)}.hide.svelte-1yrv54{display:none}div.svelte-1ed2p3z{transition:.15s}.pending.svelte-1ed2p3z{opacity:.2}
diff --git a/spaces/DaleChen/AutoGPT/autogpt/commands/improve_code.py b/spaces/DaleChen/AutoGPT/autogpt/commands/improve_code.py
deleted file mode 100644
index e3440d8b7c6ee8cb62d73df48623ab757c973c59..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/commands/improve_code.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from __future__ import annotations
-
-import json
-
-from autogpt.llm_utils import call_ai_function
-
-
-def improve_code(suggestions: list[str], code: str) -> str:
- """
-    A function that takes in code and suggestions and returns a response from a
-    create chat completion API call.
-
- Parameters:
- suggestions (List): A list of suggestions around what needs to be improved.
- code (str): Code to be improved.
- Returns:
- A result string from create chat completion. Improved code in response.
- """
-
- function_string = (
- "def generate_improved_code(suggestions: List[str], code: str) -> str:"
- )
- args = [json.dumps(suggestions), code]
- description_string = (
- "Improves the provided code based on the suggestions"
- " provided, making no other changes."
- )
-
- return call_ai_function(function_string, args, description_string)
diff --git a/spaces/DarkyMan/URPM/app.py b/spaces/DarkyMan/URPM/app.py
deleted file mode 100644
index 72ff631b0e066b0ab3bd45c889910051d4a0a818..0000000000000000000000000000000000000000
--- a/spaces/DarkyMan/URPM/app.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import gradio as gr
-import torch
-import numpy as np
-import modin.pandas as pd
-from PIL import Image
-from diffusers import DiffusionPipeline
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-pipe = DiffusionPipeline.from_pretrained("ductridev/uber-realistic-porn-merge-urpm", torch_dtype=torch.float16, safety_checker=None)
-pipe = pipe.to(device)
-
-def genie (prompt, scale, steps, Seed):
- generator = torch.Generator(device=device).manual_seed(Seed)
- images = pipe(prompt, num_inference_steps=steps, guidance_scale=scale, generator=generator).images[0]
- return images
-
-gr.Interface(fn=genie, inputs=[gr.Textbox(label='What you want the AI to generate. 77 Token Limit.'),
- gr.Slider(1, maximum=25, value=10, step=.25, label='Prompt Guidance Scale:', interactive=True),
- gr.Slider(1, maximum=200, value=100, step=1, label='Number of Iterations: 50 is typically fine.'),
- gr.Slider(minimum=1, step=10, maximum=999999999999999999, randomize=True, interactive=True)],
- outputs=gr.Image(label='512x512 Generated Image'),
- title="OpenJourney V4 GPU",
- description="OJ V4 GPU. Ultra Fast, now running on a T4
Warning: This Demo is capable of producing NSFW content.",
- article = "Code Monkey: Manjushri").launch(debug=True, max_threads=True)
\ No newline at end of file
diff --git a/spaces/DarwinAnim8or/Pythia-Greentext-Playground/README.md b/spaces/DarwinAnim8or/Pythia-Greentext-Playground/README.md
deleted file mode 100644
index a89126359c84fef1b545b0d46bc19bb8dacd6b56..0000000000000000000000000000000000000000
--- a/spaces/DarwinAnim8or/Pythia-Greentext-Playground/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Pythia Greentext Playground
-emoji: ✍️
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-license: mit
-duplicated_from: DarwinAnim8or/GPT-Greentext-Playground
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/DiweshUIT/Spectrometer/README.md b/spaces/DiweshUIT/Spectrometer/README.md
deleted file mode 100644
index e379dec0d195301b639c2ba800b981e39356f4e3..0000000000000000000000000000000000000000
--- a/spaces/DiweshUIT/Spectrometer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Spectrometer
-emoji: ⚡
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.0.20
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/PP_HumanSeg/export_model/download_export_model.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/PP_HumanSeg/export_model/download_export_model.py
deleted file mode 100644
index 152f598bd48724fe13c5f650f809fdac06c2bae2..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/stylegan_human/PP_HumanSeg/export_model/download_export_model.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# coding: utf8
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserve.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from paddleseg.utils.download import download_file_and_uncompress
-import sys
-import os
-
-LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
-TEST_PATH = os.path.join(LOCAL_PATH, "../../../", "test")
-sys.path.append(TEST_PATH)
-
-
-model_urls = {
- "pphumanseg_lite_portrait_398x224_with_softmax":
- "https://paddleseg.bj.bcebos.com/dygraph/ppseg/ppseg_lite_portrait_398x224_with_softmax.tar.gz",
- "deeplabv3p_resnet50_os8_humanseg_512x512_100k_with_softmax":
- "https://paddleseg.bj.bcebos.com/dygraph/humanseg/export/deeplabv3p_resnet50_os8_humanseg_512x512_100k_with_softmax.zip",
- "fcn_hrnetw18_small_v1_humanseg_192x192_with_softmax":
- "https://paddleseg.bj.bcebos.com/dygraph/humanseg/export/fcn_hrnetw18_small_v1_humanseg_192x192_with_softmax.zip",
- "pphumanseg_lite_generic_humanseg_192x192_with_softmax":
- "https://paddleseg.bj.bcebos.com/dygraph/humanseg/export/pphumanseg_lite_generic_192x192_with_softmax.zip",
-}
-
-if __name__ == "__main__":
- for model_name, url in model_urls.items():
- download_file_and_uncompress(
- url=url,
- savepath=LOCAL_PATH,
- extrapath=LOCAL_PATH,
- extraname=model_name)
-
- print("Export model download success!")
diff --git a/spaces/DragGan/DragGan/stylegan_human/dnnlib/tflib/tfutil.py b/spaces/DragGan/DragGan/stylegan_human/dnnlib/tflib/tfutil.py
deleted file mode 100644
index 7b04c59e41a1b1548bc798379ceb551a488ed2a6..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/dnnlib/tflib/tfutil.py
+++ /dev/null
@@ -1,254 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.
-#
-# This work is made available under the Nvidia Source Code License-NC.
-# To view a copy of this license, visit
-# https://nvlabs.github.io/stylegan2/license.html
-
-"""Miscellaneous helper utils for Tensorflow."""
-
-import os
-import numpy as np
-import tensorflow as tf
-
-# Silence deprecation warnings from TensorFlow 1.13 onwards
-import logging
-logging.getLogger('tensorflow').setLevel(logging.ERROR)
-import tensorflow.contrib # requires TensorFlow 1.x!
-tf.contrib = tensorflow.contrib
-
-from typing import Any, Iterable, List, Union
-
-TfExpression = Union[tf.Tensor, tf.Variable, tf.Operation]
-"""A type that represents a valid Tensorflow expression."""
-
-TfExpressionEx = Union[TfExpression, int, float, np.ndarray]
-"""A type that can be converted to a valid Tensorflow expression."""
-
-
-def run(*args, **kwargs) -> Any:
- """Run the specified ops in the default session."""
- assert_tf_initialized()
- return tf.get_default_session().run(*args, **kwargs)
-
-
-def is_tf_expression(x: Any) -> bool:
- """Check whether the input is a valid Tensorflow expression, i.e., Tensorflow Tensor, Variable, or Operation."""
- return isinstance(x, (tf.Tensor, tf.Variable, tf.Operation))
-
-
-def shape_to_list(shape: Iterable[tf.Dimension]) -> List[Union[int, None]]:
- """Convert a Tensorflow shape to a list of ints. Retained for backwards compatibility -- use TensorShape.as_list() in new code."""
- return [dim.value for dim in shape]
-
-
-def flatten(x: TfExpressionEx) -> TfExpression:
- """Shortcut function for flattening a tensor."""
- with tf.name_scope("Flatten"):
- return tf.reshape(x, [-1])
-
-
-def log2(x: TfExpressionEx) -> TfExpression:
- """Logarithm in base 2."""
- with tf.name_scope("Log2"):
- return tf.log(x) * np.float32(1.0 / np.log(2.0))
-
-
-def exp2(x: TfExpressionEx) -> TfExpression:
- """Exponent in base 2."""
- with tf.name_scope("Exp2"):
- return tf.exp(x * np.float32(np.log(2.0)))
-
-
-def lerp(a: TfExpressionEx, b: TfExpressionEx, t: TfExpressionEx) -> TfExpressionEx:
- """Linear interpolation."""
- with tf.name_scope("Lerp"):
- return a + (b - a) * t
-
-
-def lerp_clip(a: TfExpressionEx, b: TfExpressionEx, t: TfExpressionEx) -> TfExpression:
- """Linear interpolation with clip."""
- with tf.name_scope("LerpClip"):
- return a + (b - a) * tf.clip_by_value(t, 0.0, 1.0)
-
-
-def absolute_name_scope(scope: str) -> tf.name_scope:
- """Forcefully enter the specified name scope, ignoring any surrounding scopes."""
- return tf.name_scope(scope + "/")
-
-
-def absolute_variable_scope(scope: str, **kwargs) -> tf.variable_scope:
- """Forcefully enter the specified variable scope, ignoring any surrounding scopes."""
- return tf.variable_scope(tf.VariableScope(name=scope, **kwargs), auxiliary_name_scope=False)
-
-
-def _sanitize_tf_config(config_dict: dict = None) -> dict:
- # Defaults.
- cfg = dict()
- cfg["rnd.np_random_seed"] = None # Random seed for NumPy. None = keep as is.
- cfg["rnd.tf_random_seed"] = "auto" # Random seed for TensorFlow. 'auto' = derive from NumPy random state. None = keep as is.
- cfg["env.TF_CPP_MIN_LOG_LEVEL"] = "1" # 0 = Print all available debug info from TensorFlow. 1 = Print warnings and errors, but disable debug info.
- cfg["graph_options.place_pruned_graph"] = True # False = Check that all ops are available on the designated device. True = Skip the check for ops that are not used.
- cfg["gpu_options.allow_growth"] = True # False = Allocate all GPU memory at the beginning. True = Allocate only as much GPU memory as needed.
-
- # Remove defaults for environment variables that are already set.
- for key in list(cfg):
- fields = key.split(".")
- if fields[0] == "env":
- assert len(fields) == 2
- if fields[1] in os.environ:
- del cfg[key]
-
- # User overrides.
- if config_dict is not None:
- cfg.update(config_dict)
- return cfg
-
-
-def init_tf(config_dict: dict = None) -> None:
- """Initialize TensorFlow session using good default settings."""
- # Skip if already initialized.
- if tf.get_default_session() is not None:
- return
-
- # Setup config dict and random seeds.
- cfg = _sanitize_tf_config(config_dict)
- np_random_seed = cfg["rnd.np_random_seed"]
- if np_random_seed is not None:
- np.random.seed(np_random_seed)
- tf_random_seed = cfg["rnd.tf_random_seed"]
- if tf_random_seed == "auto":
- tf_random_seed = np.random.randint(1 << 31)
- if tf_random_seed is not None:
- tf.set_random_seed(tf_random_seed)
-
- # Setup environment variables.
- for key, value in cfg.items():
- fields = key.split(".")
- if fields[0] == "env":
- assert len(fields) == 2
- os.environ[fields[1]] = str(value)
-
- # Create default TensorFlow session.
- create_session(cfg, force_as_default=True)
-
-
-def assert_tf_initialized():
- """Check that TensorFlow session has been initialized."""
- if tf.get_default_session() is None:
- raise RuntimeError("No default TensorFlow session found. Please call dnnlib.tflib.init_tf().")
-
-
-def create_session(config_dict: dict = None, force_as_default: bool = False) -> tf.Session:
- """Create tf.Session based on config dict."""
- # Setup TensorFlow config proto.
- cfg = _sanitize_tf_config(config_dict)
- config_proto = tf.ConfigProto()
- for key, value in cfg.items():
- fields = key.split(".")
- if fields[0] not in ["rnd", "env"]:
- obj = config_proto
- for field in fields[:-1]:
- obj = getattr(obj, field)
- setattr(obj, fields[-1], value)
-
- # Create session.
- session = tf.Session(config=config_proto)
- if force_as_default:
- # pylint: disable=protected-access
- session._default_session = session.as_default()
- session._default_session.enforce_nesting = False
- session._default_session.__enter__()
- return session
-
-
-def init_uninitialized_vars(target_vars: List[tf.Variable] = None) -> None:
- """Initialize all tf.Variables that have not already been initialized.
-
- Equivalent to the following, but more efficient and does not bloat the tf graph:
- tf.variables_initializer(tf.report_uninitialized_variables()).run()
- """
- assert_tf_initialized()
- if target_vars is None:
- target_vars = tf.global_variables()
-
- test_vars = []
- test_ops = []
-
- with tf.control_dependencies(None): # ignore surrounding control_dependencies
- for var in target_vars:
- assert is_tf_expression(var)
-
- try:
- tf.get_default_graph().get_tensor_by_name(var.name.replace(":0", "/IsVariableInitialized:0"))
- except KeyError:
- # Op does not exist => variable may be uninitialized.
- test_vars.append(var)
-
- with absolute_name_scope(var.name.split(":")[0]):
- test_ops.append(tf.is_variable_initialized(var))
-
- init_vars = [var for var, inited in zip(test_vars, run(test_ops)) if not inited]
- run([var.initializer for var in init_vars])
-
-
-def set_vars(var_to_value_dict: dict) -> None:
- """Set the values of given tf.Variables.
-
- Equivalent to the following, but more efficient and does not bloat the tf graph:
-    tflib.run([tf.assign(var, value) for var, value in var_to_value_dict.items()])
- """
- assert_tf_initialized()
- ops = []
- feed_dict = {}
-
- for var, value in var_to_value_dict.items():
- assert is_tf_expression(var)
-
- try:
- setter = tf.get_default_graph().get_tensor_by_name(var.name.replace(":0", "/setter:0")) # look for existing op
- except KeyError:
- with absolute_name_scope(var.name.split(":")[0]):
- with tf.control_dependencies(None): # ignore surrounding control_dependencies
- setter = tf.assign(var, tf.placeholder(var.dtype, var.shape, "new_value"), name="setter") # create new setter
-
- ops.append(setter)
- feed_dict[setter.op.inputs[1]] = value
-
- run(ops, feed_dict)
-
-
-def create_var_with_large_initial_value(initial_value: np.ndarray, *args, **kwargs):
- """Create tf.Variable with large initial value without bloating the tf graph."""
- assert_tf_initialized()
- assert isinstance(initial_value, np.ndarray)
- zeros = tf.zeros(initial_value.shape, initial_value.dtype)
- var = tf.Variable(zeros, *args, **kwargs)
- set_vars({var: initial_value})
- return var
-
-
-def convert_images_from_uint8(images, drange=[-1,1], nhwc_to_nchw=False):
- """Convert a minibatch of images from uint8 to float32 with configurable dynamic range.
- Can be used as an input transformation for Network.run().
- """
- images = tf.cast(images, tf.float32)
- if nhwc_to_nchw:
- images = tf.transpose(images, [0, 3, 1, 2])
- return images * ((drange[1] - drange[0]) / 255) + drange[0]
-
-
-def convert_images_to_uint8(images, drange=[-1,1], nchw_to_nhwc=False, shrink=1):
- """Convert a minibatch of images from float32 to uint8 with configurable dynamic range.
- Can be used as an output transformation for Network.run().
- """
- images = tf.cast(images, tf.float32)
- if shrink > 1:
- ksize = [1, 1, shrink, shrink]
- images = tf.nn.avg_pool(images, ksize=ksize, strides=ksize, padding="VALID", data_format="NCHW")
- if nchw_to_nhwc:
- images = tf.transpose(images, [0, 2, 3, 1])
- scale = 255 / (drange[1] - drange[0])
- images = images * scale + (0.5 - drange[0] * scale)
- return tf.saturate_cast(images, tf.uint8)
diff --git a/spaces/Eitan177/mutation_profiler/Sigprofile.py b/spaces/Eitan177/mutation_profiler/Sigprofile.py
deleted file mode 100644
index 3f572cb884fd2525afb3ad3c54ee90d116c3445d..0000000000000000000000000000000000000000
--- a/spaces/Eitan177/mutation_profiler/Sigprofile.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import streamlit as st
-import zipfile
-import urllib.request
-import glob
-import SigProfilerMatrixGenerator
-from SigProfilerMatrixGenerator import install as genInstall
-import shutil
-import os
-from SigProfilerExtractor import sigpro as sig
-import sys
-import base64
-import streamlit.components.v1 as components
-
-curdir= os.getcwd()
-
-def remove_old_vcf():
- vcfrem=glob.glob('input/*.vcf')
- for filepath in vcfrem:
- os.remove(filepath)
- vcfrem=glob.glob('input/input/*.vcf')
- for filepath in vcfrem:
- os.remove(filepath)
-
-def show_pdf(file_path):
- with open(file_path,"rb") as f:
- base64_pdf = base64.b64encode(f.read()).decode('utf-8')
-        pdf_display = f'<iframe src="data:application/pdf;base64,{base64_pdf}" width="700" height="1000" type="application/pdf"></iframe>'
- st.markdown(pdf_display, unsafe_allow_html=True)
-
-
-
-
-if st.button('get reference genome'):
- st.write(os.path.dirname(SigProfilerMatrixGenerator.__file__))
- dirtest=os.path.dirname(SigProfilerMatrixGenerator.__file__)
- #st.write(sys.path)
- urllib.request.urlretrieve('https://dl.dropboxusercontent.com/s/et97ewsct862x7m/references.zip?dl=0','references.zip')
- with zipfile.ZipFile('references.zip', 'r') as zip_ref:
- zip_ref.extractall(dirtest)
- seev=glob.glob('/home/appuser/venv/lib/python3.9/site-packages/SigProfilerMatrixGenerator/references/*')
- for i in seev:
- st.write(i)
- ##genInstall.install('GRCh37')
-
-if not os.path.exists('input'):
- os.mkdir('input')
-
-if not os.path.exists('input/input'):
- os.mkdir('input/input')
-
-file_to_lookat=st.file_uploader('VCF upload here',type=[".vcf"],accept_multiple_files=True)
-remove_old_vcf()
-
-if file_to_lookat !=[]:
- bytes_data=file_to_lookat[0].read()
- with open(os.path.join("input",file_to_lookat[0].name),"wb") as f:
- f.write(bytes_data)
- f.close()
-
- #vcfuse=glob.glob('file_to_lookat[0].name')[0]
- #shutil.copy2(vcfuse,'input/'+vcfuse)
- #pdb.set_trace()
- with st.spinner('computing signatures'):
- sig.sigProfilerExtractor("vcf", "output", "input", minimum_signatures=1, maximum_signatures=3)
-
- show_pdf('output/SBS96/Suggested_Solution/COSMIC_SBS96_Decomposed_Solution/SBS96_Decomposition_Plots.pdf')
-
- components.iframe("https://cancer.sanger.ac.uk/signatures/sbs/", height=3000,width=800)
- show_pdf('output/ID83/Suggested_Solution/COSMIC_ID83_Decomposed_Solution/ID83_Decomposition_Plots.pdf')
- components.iframe("https://cancer.sanger.ac.uk/signatures/id/",height=3000,width=800)
- remove_old_vcf()
\ No newline at end of file
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textdet/psenet/psenet_r50_fpnf_600e_ctw1500.py b/spaces/EuroPython2022/mmocr-demo/configs/textdet/psenet/psenet_r50_fpnf_600e_ctw1500.py
deleted file mode 100644
index 483a2b2e1e7e584dfba26c7c5f506ce544953db8..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/textdet/psenet/psenet_r50_fpnf_600e_ctw1500.py
+++ /dev/null
@@ -1,35 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/schedules/schedule_adam_step_600e.py',
- '../../_base_/det_models/psenet_r50_fpnf.py',
- '../../_base_/det_datasets/ctw1500.py',
- '../../_base_/det_pipelines/psenet_pipeline.py'
-]
-
-model = {{_base_.model_poly}}
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline_ctw1500 = {{_base_.test_pipeline_ctw1500}}
-
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_ctw1500),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline_ctw1500))
-
-evaluation = dict(interval=10, metric='hmean-iou')
diff --git a/spaces/FathomNet/UWROV_Deepsea_Detector/tator_inference.py b/spaces/FathomNet/UWROV_Deepsea_Detector/tator_inference.py
deleted file mode 100644
index 5593e9ccccb6bbd222b50220b5ffe65ab486c300..0000000000000000000000000000000000000000
--- a/spaces/FathomNet/UWROV_Deepsea_Detector/tator_inference.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import os
-import logging
-from tempfile import TemporaryFile
-
-import cv2
-import numpy as np
-from PIL import Image
-
-import tator
-import inference
-
-
-logger = logging.getLogger(__name__)
-logger.setLevel(logging.INFO)
-
-# Read environment variables that are provided from TATOR
-host = os.getenv('HOST')
-token = os.getenv('TOKEN')
-project_id = int(os.getenv('PROJECT_ID'))
-media_ids = [int(id_) for id_ in os.getenv('MEDIA_IDS').split(',')]
-frames_per_inference = int(os.getenv('FRAMES_PER_INFERENCE', 30))
-
-# Set up the TATOR API.
-api = tator.get_api(host, token)
-
-# Iterate through each video.
-for media_id in media_ids:
-
- # Download video.
- media = api.get_media(media_id)
- logger.info(f"Downloading {media.name}...")
- out_path = f"/tmp/{media.name}"
- for progress in tator.util.download_media(api, media, out_path):
- logger.info(f"Download progress: {progress}%")
-
- # Do inference on each video.
- logger.info(f"Doing inference on {media.name}...")
- localizations = []
- vid = cv2.VideoCapture(out_path)
- frame_number = 0
-
- # Read *every* frame from the video, break when at the end.
- while True:
- ret, frame = vid.read()
- if not ret:
- break
-
-        # Save the frame (converted from BGR to RGB) into a temporary JPEG file.
-        framefile = TemporaryFile(suffix='.jpg')
-        im = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
-        im.save(framefile, format='JPEG')
-
- # For every N frames, make a prediction; append prediction results
- # to a list, increase the frame count.
- if frame_number % frames_per_inference == 0:
-
-            # Prediction results are accessed as a pandas dataframe.
-            predictions = inference.run_inference(framefile)
-
-            for i, r in predictions.pandas().xyxy[0].iterrows():
-
-                # Use a fresh spec dict per detection so entries are not overwritten.
-                spec = {}
- spec['media_id'] = media_id
- spec['type'] = None # Unsure, docs not specific
- spec['frame'] = frame_number
-
- x, y, x2, y2 = r['xmin'], r['ymin'], r['xmax'], r['ymax']
- w, h = x2 - x, y2 - y
-
- spec['x'] = x
- spec['y'] = y
- spec['width'] = w
- spec['height'] = h
- spec['class_category'] = r['name']
- spec['confidence'] = r['confidence']
-
- localizations.append(spec)
-
- frame_number += 1
-
- # End interaction with video properly.
- vid.release()
-
- logger.info(f"Uploading object detections on {media.name}...")
-
- # Create the localizations in the video.
- num_created = 0
- for response in tator.util.chunked_create(api.create_localization_list,
- project_id,
- localization_spec=localizations):
- num_created += len(response.id)
-
- # Output pretty logging information.
- logger.info(f"Successfully created {num_created} localizations on "
- f"{media.name}!")
-
- logger.info("-------------------------------------------------")
-
-logger.info(f"Completed inference on {len(media_ids)} files.")
\ No newline at end of file
diff --git a/spaces/Fawaz/nlx-gpt/VQAX_p/nle_gpt2_tokenizer_0/README.md b/spaces/Fawaz/nlx-gpt/VQAX_p/nle_gpt2_tokenizer_0/README.md
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/cluster/train_cluster.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/cluster/train_cluster.py
deleted file mode 100644
index 8644566388a4107c4442da14c0de090bcd4a91b8..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/so-vits-svc-models-pcr/cluster/train_cluster.py
+++ /dev/null
@@ -1,84 +0,0 @@
-import time,pdb
-import tqdm
-from time import time as ttime
-import os
-from pathlib import Path
-import logging
-import argparse
-from kmeans import KMeansGPU
-import torch
-import numpy as np
-from sklearn.cluster import KMeans,MiniBatchKMeans
-
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger(__name__)
-from time import time as ttime
-import pynvml,torch
-
-def train_cluster(in_dir, n_clusters, use_minibatch=True, verbose=False, use_gpu=False):  # gpu minibatch performs poorly; the library supports it, but it is not worth using
- logger.info(f"Loading features from {in_dir}")
- features = []
- nums = 0
- for path in tqdm.tqdm(in_dir.glob("*.soft.pt")):
- # for name in os.listdir(in_dir):
- # path="%s/%s"%(in_dir,name)
- features.append(torch.load(path,map_location="cpu").squeeze(0).numpy().T)
- # print(features[-1].shape)
- features = np.concatenate(features, axis=0)
- print(nums, features.nbytes/ 1024**2, "MB , shape:",features.shape, features.dtype)
- features = features.astype(np.float32)
- logger.info(f"Clustering features of shape: {features.shape}")
- t = time.time()
- if(use_gpu==False):
- if use_minibatch:
- kmeans = MiniBatchKMeans(n_clusters=n_clusters,verbose=verbose, batch_size=4096, max_iter=80).fit(features)
- else:
- kmeans = KMeans(n_clusters=n_clusters,verbose=verbose).fit(features)
- else:
- kmeans = KMeansGPU(n_clusters=n_clusters, mode='euclidean', verbose=2 if verbose else 0,max_iter=500,tol=1e-2)#
- features=torch.from_numpy(features)#.to(device)
- labels = kmeans.fit_predict(features)#
-
- print(time.time()-t, "s")
-
- x = {
- "n_features_in_": kmeans.n_features_in_ if use_gpu==False else features.shape[1],
- "_n_threads": kmeans._n_threads if use_gpu==False else 4,
- "cluster_centers_": kmeans.cluster_centers_ if use_gpu==False else kmeans.centroids.cpu().numpy(),
- }
- print("end")
-
- return x
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument('--dataset', type=Path, default="./dataset/44k",
- help='path of training data directory')
- parser.add_argument('--output', type=Path, default="logs/44k",
- help='path of model output directory')
- parser.add_argument('--gpu',action='store_true', default=False ,
- help='to use GPU')
-
-
- args = parser.parse_args()
-
- checkpoint_dir = args.output
- dataset = args.dataset
- use_gpu = args.gpu
- n_clusters = 10000
-
- ckpt = {}
- for spk in os.listdir(dataset):
- if os.path.isdir(dataset/spk):
- print(f"train kmeans for {spk}...")
- in_dir = dataset/spk
- x = train_cluster(in_dir, n_clusters,use_minibatch=False,verbose=False,use_gpu=use_gpu)
- ckpt[spk] = x
-
- checkpoint_path = checkpoint_dir / f"kmeans_{n_clusters}.pt"
- checkpoint_path.parent.mkdir(exist_ok=True, parents=True)
- torch.save(
- ckpt,
- checkpoint_path,
- )
-
diff --git a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/nn.py b/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/nn.py
deleted file mode 100644
index 1a5b2f19e692dace9ade33d845632cea0479cc88..0000000000000000000000000000000000000000
--- a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/nn.py
+++ /dev/null
@@ -1,105 +0,0 @@
-"""
-Various utilities for neural networks.
-"""
-
-import math
-
-import torch as th
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class GroupNorm32(nn.GroupNorm):
- def __init__(self, num_groups, num_channels, swish, eps=1e-5):
- super().__init__(num_groups=num_groups, num_channels=num_channels, eps=eps)
- self.swish = swish
-
- def forward(self, x):
- y = super().forward(x.float()).to(x.dtype)
- if self.swish == 1.0:
- y = F.silu(y)
- elif self.swish:
- y = y * F.sigmoid(y * float(self.swish))
- return y
-
-
-def conv_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D convolution module.
- """
- if dims == 1:
- return nn.Conv1d(*args, **kwargs)
- elif dims == 2:
- return nn.Conv2d(*args, **kwargs)
- elif dims == 3:
- return nn.Conv3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-def linear(*args, **kwargs):
- """
- Create a linear module.
- """
- return nn.Linear(*args, **kwargs)
-
-
-def avg_pool_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D average pooling module.
- """
- if dims == 1:
- return nn.AvgPool1d(*args, **kwargs)
- elif dims == 2:
- return nn.AvgPool2d(*args, **kwargs)
- elif dims == 3:
- return nn.AvgPool3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-
-def scale_module(module, scale):
- """
- Scale the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().mul_(scale)
- return module
-
-
-def normalization(channels, swish=0.0):
- """
- Make a standard normalization layer, with an optional swish activation.
-
- :param channels: number of input channels.
- :return: an nn.Module for normalization.
- """
- return GroupNorm32(num_channels=channels, num_groups=32, swish=swish)
-
-
-def timestep_embedding(timesteps, dim, max_period=10000):
- """
- Create sinusoidal timestep embeddings.
-
- :param timesteps: a 1-D Tensor of N indices, one per batch element.
- These may be fractional.
- :param dim: the dimension of the output.
- :param max_period: controls the minimum frequency of the embeddings.
- :return: an [N x dim] Tensor of positional embeddings.
- """
- half = dim // 2
- freqs = th.exp(
- -math.log(max_period) * th.arange(start=0, end=half, dtype=th.float32) / half
- ).to(device=timesteps.device)
- args = timesteps[:, None].float() * freqs[None]
- embedding = th.cat([th.cos(args), th.sin(args)], dim=-1)
- if dim % 2:
- embedding = th.cat([embedding, th.zeros_like(embedding[:, :1])], dim=-1)
- return embedding
diff --git a/spaces/Froleptan/stablediffusion-infinity/app.py b/spaces/Froleptan/stablediffusion-infinity/app.py
deleted file mode 100644
index 382db1074b5e41100078c68eafc524be9277df46..0000000000000000000000000000000000000000
--- a/spaces/Froleptan/stablediffusion-infinity/app.py
+++ /dev/null
@@ -1,1037 +0,0 @@
-import io
-import base64
-import os
-import sys
-
-import numpy as np
-import torch
-from torch import autocast
-import diffusers
-from diffusers.configuration_utils import FrozenDict
-from diffusers import (
- StableDiffusionPipeline,
- StableDiffusionInpaintPipeline,
- StableDiffusionImg2ImgPipeline,
- StableDiffusionInpaintPipelineLegacy,
- DDIMScheduler,
- LMSDiscreteScheduler,
-)
-from diffusers.models import AutoencoderKL
-from PIL import Image
-from PIL import ImageOps
-import gradio as gr
-import base64
-import skimage
-import skimage.measure
-import yaml
-import json
-from enum import Enum
-
-try:
- abspath = os.path.abspath(__file__)
- dirname = os.path.dirname(abspath)
- os.chdir(dirname)
-except:
- pass
-
-from utils import *
-
-assert diffusers.__version__ >= "0.6.0", "Please upgrade diffusers to 0.6.0"
-
-USE_NEW_DIFFUSERS = True
-RUN_IN_SPACE = "RUN_IN_HG_SPACE" in os.environ
-
-
-class ModelChoice(Enum):
- INPAINTING = "stablediffusion-inpainting"
- INPAINTING_IMG2IMG = "stablediffusion-inpainting+img2img-v1.5"
- MODEL_1_5 = "stablediffusion-v1.5"
- MODEL_1_4 = "stablediffusion-v1.4"
-
-
-try:
- from sd_grpcserver.pipeline.unified_pipeline import UnifiedPipeline
-except:
- UnifiedPipeline = StableDiffusionInpaintPipeline
-
-# sys.path.append("./glid_3_xl_stable")
-
-USE_GLID = False
-# try:
-# from glid3xlmodel import GlidModel
-# except:
-# USE_GLID = False
-
-try:
- cuda_available = torch.cuda.is_available()
-except:
- cuda_available = False
-finally:
- if sys.platform == "darwin":
- device = "mps" if torch.backends.mps.is_available() else "cpu"
- elif cuda_available:
- device = "cuda"
- else:
- device = "cpu"
-
-import contextlib
-
-autocast = contextlib.nullcontext
-
-with open("config.yaml", "r") as yaml_in:
- yaml_object = yaml.safe_load(yaml_in)
- config_json = json.dumps(yaml_object)
-
-
-def load_html():
- body, canvaspy = "", ""
- with open("index.html", encoding="utf8") as f:
- body = f.read()
- with open("canvas.py", encoding="utf8") as f:
- canvaspy = f.read()
- body = body.replace("- paths:\n", "")
- body = body.replace(" - ./canvas.py\n", "")
- body = body.replace("from canvas import InfCanvas", canvaspy)
- return body
-
-
-def test(x):
- x = load_html()
- return f""""""
-
-
-DEBUG_MODE = False
-
-try:
- SAMPLING_MODE = Image.Resampling.LANCZOS
-except Exception as e:
- SAMPLING_MODE = Image.LANCZOS
-
-try:
- contain_func = ImageOps.contain
-except Exception as e:
-
- def contain_func(image, size, method=SAMPLING_MODE):
- # from PIL: https://pillow.readthedocs.io/en/stable/reference/ImageOps.html#PIL.ImageOps.contain
- im_ratio = image.width / image.height
- dest_ratio = size[0] / size[1]
- if im_ratio != dest_ratio:
- if im_ratio > dest_ratio:
- new_height = int(image.height / image.width * size[0])
- if new_height != size[1]:
- size = (size[0], new_height)
- else:
- new_width = int(image.width / image.height * size[1])
- if new_width != size[0]:
- size = (new_width, size[1])
- return image.resize(size, resample=method)
-
-
-import argparse
-
-parser = argparse.ArgumentParser(description="stablediffusion-infinity")
-parser.add_argument("--port", type=int, help="listen port", dest="server_port")
-parser.add_argument("--host", type=str, help="host", dest="server_name")
-parser.add_argument("--share", action="store_true", help="share this app?")
-parser.add_argument("--debug", action="store_true", help="debug mode")
-parser.add_argument("--fp32", action="store_true", help="using full precision")
-parser.add_argument("--encrypt", action="store_true", help="using https?")
-parser.add_argument("--ssl_keyfile", type=str, help="path to ssl_keyfile")
-parser.add_argument("--ssl_certfile", type=str, help="path to ssl_certfile")
-parser.add_argument("--ssl_keyfile_password", type=str, help="ssl_keyfile_password")
-parser.add_argument(
- "--auth", nargs=2, metavar=("username", "password"), help="use username password"
-)
-parser.add_argument(
- "--remote_model",
- type=str,
- help="use a model (e.g. dreambooth fined) from huggingface hub",
- default="",
-)
-parser.add_argument(
- "--local_model", type=str, help="use a model stored on your PC", default=""
-)
-
-if __name__ == "__main__" and not RUN_IN_SPACE:
- args = parser.parse_args()
-else:
- args = parser.parse_args()
-# args = parser.parse_args(["--debug"])
-if args.auth is not None:
- args.auth = tuple(args.auth)
-
-model = {}
-
-
-def get_token():
- token = ""
- if os.path.exists(".token"):
- with open(".token", "r") as f:
- token = f.read()
- token = os.environ.get("hftoken", token)
- return token
-
-
-def save_token(token):
- with open(".token", "w") as f:
- f.write(token)
-
-
-def prepare_scheduler(scheduler):
- if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
- new_config = dict(scheduler.config)
- new_config["steps_offset"] = 1
- scheduler._internal_dict = FrozenDict(new_config)
- return scheduler
-
-
-def my_resize(width, height):
- if width >= 512 and height >= 512:
- return width, height
- if width == height:
- return 512, 512
- smaller = min(width, height)
- larger = max(width, height)
- if larger >= 608:
- return width, height
- factor = 1
- if smaller < 290:
- factor = 2
- elif smaller < 330:
- factor = 1.75
- elif smaller < 384:
- factor = 1.375
- elif smaller < 400:
- factor = 1.25
- elif smaller < 450:
- factor = 1.125
- return int(factor * width)//8*8, int(factor * height)//8*8
-
-
-def load_learned_embed_in_clip(
- learned_embeds_path, text_encoder, tokenizer, token=None
-):
- # https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb
- loaded_learned_embeds = torch.load(learned_embeds_path, map_location="cpu")
-
- # separate token and the embeds
- trained_token = list(loaded_learned_embeds.keys())[0]
- embeds = loaded_learned_embeds[trained_token]
-
- # cast to dtype of text_encoder
- dtype = text_encoder.get_input_embeddings().weight.dtype
-    embeds = embeds.to(dtype)
-
- # add the token in tokenizer
- token = token if token is not None else trained_token
- num_added_tokens = tokenizer.add_tokens(token)
- if num_added_tokens == 0:
- raise ValueError(
- f"The tokenizer already contains the token {token}. Please pass a different `token` that is not already in the tokenizer."
- )
-
- # resize the token embeddings
- text_encoder.resize_token_embeddings(len(tokenizer))
-
- # get the id for the token and assign the embeds
- token_id = tokenizer.convert_tokens_to_ids(token)
- text_encoder.get_input_embeddings().weight.data[token_id] = embeds
-
-
-scheduler_dict = {"PLMS": None, "DDIM": None, "K-LMS": None}
-
-
-class StableDiffusionInpaint:
- def __init__(
- self, token: str = "", model_name: str = "", model_path: str = "", **kwargs,
- ):
- self.token = token
- original_checkpoint = False
- if model_path and os.path.exists(model_path):
- if model_path.endswith(".ckpt"):
- original_checkpoint = True
- elif model_path.endswith(".json"):
- model_name = os.path.dirname(model_path)
- else:
- model_name = model_path
- vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
- vae.to(torch.float16)
- if original_checkpoint:
- print(f"Converting & Loading {model_path}")
- from convert_checkpoint import convert_checkpoint
-
- pipe = convert_checkpoint(model_path, inpainting=True)
- if device == "cuda":
- pipe.to(torch.float16)
- inpaint = StableDiffusionInpaintPipeline(
- vae=vae,
- text_encoder=pipe.text_encoder,
- tokenizer=pipe.tokenizer,
- unet=pipe.unet,
- scheduler=pipe.scheduler,
- safety_checker=pipe.safety_checker,
- feature_extractor=pipe.feature_extractor,
- )
- else:
- print(f"Loading {model_name}")
- if device == "cuda":
- inpaint = StableDiffusionInpaintPipeline.from_pretrained(
- model_name,
- revision="fp16",
- torch_dtype=torch.float16,
- use_auth_token=token,
- vae=vae
- )
- else:
- inpaint = StableDiffusionInpaintPipeline.from_pretrained(
- model_name, use_auth_token=token,
- )
- if os.path.exists("./embeddings"):
- print("Note that StableDiffusionInpaintPipeline + embeddings is untested")
- for item in os.listdir("./embeddings"):
- if item.endswith(".bin"):
- load_learned_embed_in_clip(
- os.path.join("./embeddings", item),
- inpaint.text_encoder,
- inpaint.tokenizer,
- )
- inpaint.to(device)
- # if device == "mps":
- # _ = text2img("", num_inference_steps=1)
- scheduler_dict["PLMS"] = inpaint.scheduler
- scheduler_dict["DDIM"] = prepare_scheduler(
- DDIMScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- clip_sample=False,
- set_alpha_to_one=False,
- )
- )
- scheduler_dict["K-LMS"] = prepare_scheduler(
- LMSDiscreteScheduler(
- beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
- )
- )
- self.safety_checker = inpaint.safety_checker
- save_token(token)
- try:
- total_memory = torch.cuda.get_device_properties(0).total_memory // (
- 1024 ** 3
- )
- if total_memory <= 5:
- inpaint.enable_attention_slicing()
- except:
- pass
- self.inpaint = inpaint
-
- def run(
- self,
- image_pil,
- prompt="",
- negative_prompt="",
- guidance_scale=7.5,
- resize_check=True,
- enable_safety=True,
- fill_mode="patchmatch",
- strength=0.75,
- step=50,
- enable_img2img=False,
- use_seed=False,
- seed_val=-1,
- generate_num=1,
- scheduler="",
- scheduler_eta=0.0,
- **kwargs,
- ):
- inpaint = self.inpaint
- selected_scheduler = scheduler_dict.get(scheduler, scheduler_dict["PLMS"])
- for item in [inpaint]:
- item.scheduler = selected_scheduler
- if enable_safety:
- item.safety_checker = self.safety_checker
- else:
- item.safety_checker = lambda images, **kwargs: (images, False)
- width, height = image_pil.size
- sel_buffer = np.array(image_pil)
- img = sel_buffer[:, :, 0:3]
- mask = sel_buffer[:, :, -1]
- nmask = 255 - mask
- process_width = width
- process_height = height
- if resize_check:
- process_width, process_height = my_resize(width, height)
-            process_width = process_width // 8 * 8
-            process_height = process_height // 8 * 8
- extra_kwargs = {
- "num_inference_steps": step,
- "guidance_scale": guidance_scale,
- "eta": scheduler_eta,
- }
- if USE_NEW_DIFFUSERS:
- extra_kwargs["negative_prompt"] = negative_prompt
- extra_kwargs["num_images_per_prompt"] = generate_num
- if use_seed:
- generator = torch.Generator(inpaint.device).manual_seed(seed_val)
- extra_kwargs["generator"] = generator
- if True:
- img, mask = functbl[fill_mode](img, mask)
- mask = 255 - mask
- mask = skimage.measure.block_reduce(mask, (8, 8), np.max)
- mask = mask.repeat(8, axis=0).repeat(8, axis=1)
- extra_kwargs["strength"] = strength
- inpaint_func = inpaint
- init_image = Image.fromarray(img)
- mask_image = Image.fromarray(mask)
- # mask_image=mask_image.filter(ImageFilter.GaussianBlur(radius = 8))
- if True:
- images = inpaint_func(
- prompt=prompt,
- image=init_image.resize(
- (process_width, process_height), resample=SAMPLING_MODE
- ),
- mask_image=mask_image.resize((process_width, process_height)),
- width=process_width,
- height=process_height,
- **extra_kwargs,
- )["images"]
- return images
-
-
-class StableDiffusion:
- def __init__(
- self,
- token: str = "",
- model_name: str = "runwayml/stable-diffusion-v1-5",
- model_path: str = None,
- inpainting_model: bool = False,
- **kwargs,
- ):
- self.token = token
- original_checkpoint = False
- vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
- vae.to(torch.float16)
- if model_path and os.path.exists(model_path):
- if model_path.endswith(".ckpt"):
- original_checkpoint = True
- elif model_path.endswith(".json"):
- model_name = os.path.dirname(model_path)
- else:
- model_name = model_path
- if original_checkpoint:
- print(f"Converting & Loading {model_path}")
- from convert_checkpoint import convert_checkpoint
-
- text2img = convert_checkpoint(model_path)
- if device == "cuda" and not args.fp32:
- text2img.to(torch.float16)
- else:
- print(f"Loading {model_name}")
- if device == "cuda" and not args.fp32:
- text2img = StableDiffusionPipeline.from_pretrained(
- "runwayml/stable-diffusion-v1-5",
- revision="fp16",
- torch_dtype=torch.float16,
- use_auth_token=token,
- vae=vae
- )
- else:
- text2img = StableDiffusionPipeline.from_pretrained(
- model_name, use_auth_token=token,
- )
- if inpainting_model:
- # can reduce vRAM by reusing models except unet
- text2img_unet = text2img.unet
- del text2img.vae
- del text2img.text_encoder
- del text2img.tokenizer
- del text2img.scheduler
- del text2img.safety_checker
- del text2img.feature_extractor
- import gc
-
- gc.collect()
- if device == "cuda":
- inpaint = StableDiffusionInpaintPipeline.from_pretrained(
- "runwayml/stable-diffusion-inpainting",
- revision="fp16",
- torch_dtype=torch.float16,
- use_auth_token=token,
- vae=vae
- ).to(device)
- else:
- inpaint = StableDiffusionInpaintPipeline.from_pretrained(
- "runwayml/stable-diffusion-inpainting", use_auth_token=token,
- ).to(device)
- text2img_unet.to(device)
- del text2img
- gc.collect()
- text2img = StableDiffusionPipeline(
- vae=inpaint.vae,
- text_encoder=inpaint.text_encoder,
- tokenizer=inpaint.tokenizer,
- unet=text2img_unet,
- scheduler=inpaint.scheduler,
- safety_checker=inpaint.safety_checker,
- feature_extractor=inpaint.feature_extractor,
- )
- else:
- inpaint = StableDiffusionInpaintPipelineLegacy(
- vae=text2img.vae,
- text_encoder=text2img.text_encoder,
- tokenizer=text2img.tokenizer,
- unet=text2img.unet,
- scheduler=text2img.scheduler,
- safety_checker=text2img.safety_checker,
- feature_extractor=text2img.feature_extractor,
- ).to(device)
- text_encoder = text2img.text_encoder
- tokenizer = text2img.tokenizer
- if os.path.exists("./embeddings"):
- for item in os.listdir("./embeddings"):
- if item.endswith(".bin"):
- load_learned_embed_in_clip(
- os.path.join("./embeddings", item),
- text2img.text_encoder,
- text2img.tokenizer,
- )
- text2img.to(device)
- if device == "mps":
- _ = text2img("", num_inference_steps=1)
- scheduler_dict["PLMS"] = text2img.scheduler
- scheduler_dict["DDIM"] = prepare_scheduler(
- DDIMScheduler(
- beta_start=0.00085,
- beta_end=0.012,
- beta_schedule="scaled_linear",
- clip_sample=False,
- set_alpha_to_one=False,
- )
- )
- scheduler_dict["K-LMS"] = prepare_scheduler(
- LMSDiscreteScheduler(
- beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear"
- )
- )
- self.safety_checker = text2img.safety_checker
- img2img = StableDiffusionImg2ImgPipeline(
- vae=text2img.vae,
- text_encoder=text2img.text_encoder,
- tokenizer=text2img.tokenizer,
- unet=text2img.unet,
- scheduler=text2img.scheduler,
- safety_checker=text2img.safety_checker,
- feature_extractor=text2img.feature_extractor,
- ).to(device)
- save_token(token)
- try:
- total_memory = torch.cuda.get_device_properties(0).total_memory // (
- 1024 ** 3
- )
- if total_memory <= 5:
- inpaint.enable_attention_slicing()
- except:
- pass
- self.text2img = text2img
- self.inpaint = inpaint
- self.img2img = img2img
- self.unified = UnifiedPipeline(
- vae=text2img.vae,
- text_encoder=text2img.text_encoder,
- tokenizer=text2img.tokenizer,
- unet=text2img.unet,
- scheduler=text2img.scheduler,
- safety_checker=text2img.safety_checker,
- feature_extractor=text2img.feature_extractor,
- ).to(device)
- self.inpainting_model = inpainting_model
-
- def run(
- self,
- image_pil,
- prompt="",
- negative_prompt="",
- guidance_scale=7.5,
- resize_check=True,
- enable_safety=True,
- fill_mode="patchmatch",
- strength=0.75,
- step=50,
- enable_img2img=False,
- use_seed=False,
- seed_val=-1,
- generate_num=1,
- scheduler="",
- scheduler_eta=0.0,
- **kwargs,
- ):
- text2img, inpaint, img2img, unified = (
- self.text2img,
- self.inpaint,
- self.img2img,
- self.unified,
- )
- selected_scheduler = scheduler_dict.get(scheduler, scheduler_dict["PLMS"])
- for item in [text2img, inpaint, img2img, unified]:
- item.scheduler = selected_scheduler
- if enable_safety:
- item.safety_checker = self.safety_checker
- else:
- item.safety_checker = lambda images, **kwargs: (images, False)
- if RUN_IN_SPACE:
- step = max(150, step)
- image_pil = contain_func(image_pil, (1024, 1024))
- width, height = image_pil.size
- sel_buffer = np.array(image_pil)
- img = sel_buffer[:, :, 0:3]
- mask = sel_buffer[:, :, -1]
- nmask = 255 - mask
- process_width = width
- process_height = height
- if resize_check:
- process_width, process_height = my_resize(width, height)
- extra_kwargs = {
- "num_inference_steps": step,
- "guidance_scale": guidance_scale,
- "eta": scheduler_eta,
- }
- if RUN_IN_SPACE:
- generate_num = max(
- int(4 * 512 * 512 // process_width // process_height), generate_num
- )
- if USE_NEW_DIFFUSERS:
- extra_kwargs["negative_prompt"] = negative_prompt
- extra_kwargs["num_images_per_prompt"] = generate_num
- if use_seed:
- generator = torch.Generator(text2img.device).manual_seed(seed_val)
- extra_kwargs["generator"] = generator
- if nmask.sum() < 1 and enable_img2img:
- init_image = Image.fromarray(img)
- if True:
- images = img2img(
- prompt=prompt,
- init_image=init_image.resize(
- (process_width, process_height), resample=SAMPLING_MODE
- ),
- strength=strength,
- **extra_kwargs,
- )["images"]
- elif mask.sum() > 0:
- if fill_mode == "g_diffuser" and not self.inpainting_model:
- mask = 255 - mask
- mask = mask[:, :, np.newaxis].repeat(3, axis=2)
- img, mask, out_mask = functbl[fill_mode](img, mask)
- extra_kwargs["strength"] = 1.0
- extra_kwargs["out_mask"] = Image.fromarray(out_mask)
- inpaint_func = unified
- else:
- img, mask = functbl[fill_mode](img, mask)
- mask = 255 - mask
- mask = skimage.measure.block_reduce(mask, (8, 8), np.max)
- mask = mask.repeat(8, axis=0).repeat(8, axis=1)
- extra_kwargs["strength"] = strength
- inpaint_func = inpaint
- init_image = Image.fromarray(img)
- mask_image = Image.fromarray(mask)
- # mask_image=mask_image.filter(ImageFilter.GaussianBlur(radius = 8))
- if True:
- input_image = init_image.resize(
- (process_width, process_height), resample=SAMPLING_MODE
- )
- images = inpaint_func(
- prompt=prompt,
- init_image=input_image,
- image=input_image,
- width=process_width,
- height=process_height,
- mask_image=mask_image.resize((process_width, process_height)),
- **extra_kwargs,
- )["images"]
- else:
- if True:
- images = text2img(
- prompt=prompt,
-                    height=process_height,
-                    width=process_width,
- **extra_kwargs,
- )["images"]
- return images
-
-
-def get_model(token="", model_choice="", model_path=""):
- if "model" not in model:
- model_name = ""
- if model_choice == ModelChoice.INPAINTING.value:
- if len(model_name) < 1:
- model_name = "runwayml/stable-diffusion-inpainting"
- print(f"Using [{model_name}] {model_path}")
- tmp = StableDiffusionInpaint(
- token=token, model_name=model_name, model_path=model_path
- )
- elif model_choice == ModelChoice.INPAINTING_IMG2IMG.value:
- print(
- f"Note that {ModelChoice.INPAINTING_IMG2IMG.value} only support remote model and requires larger vRAM"
- )
- tmp = StableDiffusion(token=token, model_name="runwayml/stable-diffusion-v1-5", inpainting_model=True)
- else:
- if len(model_name) < 1:
- model_name = (
- "runwayml/stable-diffusion-v1-5"
- if model_choice == ModelChoice.MODEL_1_5.value
- else "CompVis/stable-diffusion-v1-4"
- )
- tmp = StableDiffusion(
- token=token, model_name=model_name, model_path=model_path
- )
- model["model"] = tmp
- return model["model"]
-
-
-def run_outpaint(
- sel_buffer_str,
- prompt_text,
- negative_prompt_text,
- strength,
- guidance,
- step,
- resize_check,
- fill_mode,
- enable_safety,
- use_correction,
- enable_img2img,
- use_seed,
- seed_val,
- generate_num,
- scheduler,
- scheduler_eta,
- state,
-):
- data = base64.b64decode(str(sel_buffer_str))
- pil = Image.open(io.BytesIO(data))
- width, height = pil.size
- sel_buffer = np.array(pil)
- cur_model = get_model()
- images = cur_model.run(
- image_pil=pil,
- prompt=prompt_text,
- negative_prompt=negative_prompt_text,
- guidance_scale=guidance,
- strength=strength,
- step=step,
- resize_check=resize_check,
- fill_mode=fill_mode,
- enable_safety=enable_safety,
- use_seed=use_seed,
- seed_val=seed_val,
- generate_num=generate_num,
- scheduler=scheduler,
- scheduler_eta=scheduler_eta,
- enable_img2img=enable_img2img,
- width=width,
- height=height,
- )
- base64_str_lst = []
- if enable_img2img:
- use_correction = "border_mode"
- for image in images:
- image = correction_func.run(pil.resize(image.size), image, mode=use_correction)
- resized_img = image.resize((width, height), resample=SAMPLING_MODE,)
- out = sel_buffer.copy()
- out[:, :, 0:3] = np.array(resized_img)
- out[:, :, -1] = 255
- out_pil = Image.fromarray(out)
- out_buffer = io.BytesIO()
- out_pil.save(out_buffer, format="PNG")
- out_buffer.seek(0)
- base64_bytes = base64.b64encode(out_buffer.read())
- base64_str = base64_bytes.decode("ascii")
- base64_str_lst.append(base64_str)
- return (
- gr.update(label=str(state + 1), value=",".join(base64_str_lst),),
- gr.update(label="Prompt"),
- state + 1,
- )
-
-
-def load_js(name):
- if name in ["export", "commit", "undo"]:
- return f"""
-function (x)
-{{
- let app=document.querySelector("gradio-app");
- app=app.shadowRoot??app;
- let frame=app.querySelector("#sdinfframe").contentWindow.document;
- let button=frame.querySelector("#{name}");
- button.click();
- return x;
-}}
-"""
- ret = ""
- with open(f"./js/{name}.js", "r") as f:
- ret = f.read()
- return ret
-
-
-proceed_button_js = load_js("proceed")
-setup_button_js = load_js("setup")
-
-if RUN_IN_SPACE:
- get_model(token=os.environ.get("hftoken", ""), model_choice=ModelChoice.INPAINTING.value)
-
-blocks = gr.Blocks(
- title="StableDiffusion-Infinity",
- css="""
-.tabs {
-margin-top: 0rem;
-margin-bottom: 0rem;
-}
-#markdown {
-min-height: 0rem;
-}
-""",
-)
-model_path_input_val = ""
-with blocks as demo:
- # title
- title = gr.Markdown(
- """
- **stablediffusion-infinity**: Outpainting with Stable Diffusion on an infinite canvas: [https://github.com/lkwq007/stablediffusion-infinity](https://github.com/lkwq007/stablediffusion-infinity) \[[Open In Colab](https://colab.research.google.com/github/lkwq007/stablediffusion-infinity/blob/master/stablediffusion_infinity_colab.ipynb)\] \[[Setup Locally](https://github.com/lkwq007/stablediffusion-infinity/blob/master/docs/setup_guide.md)\]
- """,
- elem_id="markdown",
- )
- # frame
- frame = gr.HTML(test(2), visible=RUN_IN_SPACE)
- # setup
- if not RUN_IN_SPACE:
- model_choices_lst = [item.value for item in ModelChoice]
- if args.local_model:
- model_path_input_val = args.local_model
- # model_choices_lst.insert(0, "local_model")
- elif args.remote_model:
- model_path_input_val = args.remote_model
- # model_choices_lst.insert(0, "remote_model")
- with gr.Row(elem_id="setup_row"):
- with gr.Column(scale=4, min_width=350):
- token = gr.Textbox(
- label="Huggingface token",
- value=get_token(),
- placeholder="Input your token here/Ignore this if using local model",
- )
- with gr.Column(scale=3, min_width=320):
- model_selection = gr.Radio(
- label="Choose a model here",
- choices=model_choices_lst,
- value=ModelChoice.INPAINTING.value,
- )
- with gr.Column(scale=1, min_width=100):
- canvas_width = gr.Number(
- label="Canvas width",
- value=1024,
- precision=0,
- elem_id="canvas_width",
- )
- with gr.Column(scale=1, min_width=100):
- canvas_height = gr.Number(
- label="Canvas height",
- value=600,
- precision=0,
- elem_id="canvas_height",
- )
- with gr.Column(scale=1, min_width=100):
- selection_size = gr.Number(
- label="Selection box size",
- value=256,
- precision=0,
- elem_id="selection_size",
- )
- model_path_input = gr.Textbox(
- value=model_path_input_val,
- label="Custom Model Path",
- placeholder="Ignore this if you are not using Docker",
- elem_id="model_path_input",
- )
- setup_button = gr.Button("Click to Setup (may take a while)", variant="primary")
- with gr.Row():
- with gr.Column(scale=3, min_width=270):
- init_mode = gr.Radio(
- label="Init Mode",
- choices=[
- "patchmatch",
- "edge_pad",
- "cv2_ns",
- "cv2_telea",
- "perlin",
- "gaussian",
- ],
- value="patchmatch",
- type="value",
- )
- postprocess_check = gr.Radio(
- label="Photometric Correction Mode",
- choices=["disabled", "mask_mode", "border_mode",],
- value="disabled",
- type="value",
- )
- # canvas control
-
- with gr.Column(scale=3, min_width=270):
- sd_prompt = gr.Textbox(
- label="Prompt", placeholder="input your prompt here!", lines=2
- )
- sd_negative_prompt = gr.Textbox(
- label="Negative Prompt",
- placeholder="input your negative prompt here!",
- lines=2,
- )
- with gr.Column(scale=2, min_width=150):
- with gr.Group():
- with gr.Row():
- sd_generate_num = gr.Number(
- label="Sample number", value=1, precision=0
- )
- sd_strength = gr.Slider(
- label="Strength",
- minimum=0.0,
- maximum=1.0,
- value=0.75,
- step=0.01,
- )
- with gr.Row():
- sd_scheduler = gr.Dropdown(
- list(scheduler_dict.keys()), label="Scheduler", value="PLMS"
- )
- sd_scheduler_eta = gr.Number(label="Eta", value=0.0)
- with gr.Column(scale=1, min_width=80):
- sd_step = gr.Number(label="Step", value=50, precision=0)
- sd_guidance = gr.Number(label="Guidance", value=7.5)
-
- proceed_button = gr.Button("Proceed", elem_id="proceed", visible=DEBUG_MODE)
- xss_js = load_js("xss").replace("\n", " ")
- xss_html = gr.HTML(
- value=f"""
- """,
- visible=False,
- )
- xss_keyboard_js = load_js("keyboard").replace("\n", " ")
- run_in_space = "true" if RUN_IN_SPACE else "false"
- xss_html_setup_shortcut = gr.HTML(
- value=f"""
- """,
- visible=False,
- )
- # sd pipeline parameters
- sd_img2img = gr.Checkbox(label="Enable Img2Img", value=False, visible=False)
- sd_resize = gr.Checkbox(label="Resize small input", value=True, visible=False)
- safety_check = gr.Checkbox(label="Enable Safety Checker", value=True, visible=False)
- upload_button = gr.Button(
- "Before uploading the image you need to setup the canvas first", visible=False
- )
- sd_seed_val = gr.Number(label="Seed", value=0, precision=0, visible=False)
- sd_use_seed = gr.Checkbox(label="Use seed", value=False, visible=False)
- model_output = gr.Textbox(visible=DEBUG_MODE, elem_id="output", label="0")
- model_input = gr.Textbox(visible=DEBUG_MODE, elem_id="input", label="Input")
- upload_output = gr.Textbox(visible=DEBUG_MODE, elem_id="upload", label="0")
- model_output_state = gr.State(value=0)
- upload_output_state = gr.State(value=0)
- cancel_button = gr.Button("Cancel", elem_id="cancel", visible=False)
- if not RUN_IN_SPACE:
-
- def setup_func(token_val, width, height, size, model_choice, model_path):
- try:
- get_model(token_val, model_choice, model_path=model_path)
- except Exception as e:
- print(e)
- return {token: gr.update(value=str(e))}
- return {
- token: gr.update(visible=False),
- canvas_width: gr.update(visible=False),
- canvas_height: gr.update(visible=False),
- selection_size: gr.update(visible=False),
- setup_button: gr.update(visible=False),
- frame: gr.update(visible=True),
- upload_button: gr.update(value="Upload Image"),
- model_selection: gr.update(visible=False),
- model_path_input: gr.update(visible=False),
- }
-
- setup_button.click(
- fn=setup_func,
- inputs=[
- token,
- canvas_width,
- canvas_height,
- selection_size,
- model_selection,
- model_path_input,
- ],
- outputs=[
- token,
- canvas_width,
- canvas_height,
- selection_size,
- setup_button,
- frame,
- upload_button,
- model_selection,
- model_path_input,
- ],
- _js=setup_button_js,
- )
-
- proceed_event = proceed_button.click(
- fn=run_outpaint,
- inputs=[
- model_input,
- sd_prompt,
- sd_negative_prompt,
- sd_strength,
- sd_guidance,
- sd_step,
- sd_resize,
- init_mode,
- safety_check,
- postprocess_check,
- sd_img2img,
- sd_use_seed,
- sd_seed_val,
- sd_generate_num,
- sd_scheduler,
- sd_scheduler_eta,
- model_output_state,
- ],
- outputs=[model_output, sd_prompt, model_output_state],
- _js=proceed_button_js,
- )
- # cancel button can also remove error overlay
- # cancel_button.click(fn=None, inputs=None, outputs=None, cancels=[proceed_event])
-
-
-launch_extra_kwargs = {
- "show_error": True,
- # "favicon_path": ""
-}
-launch_kwargs = vars(args)
-launch_kwargs = {k: v for k, v in launch_kwargs.items() if v is not None}
-launch_kwargs.pop("remote_model", None)
-launch_kwargs.pop("local_model", None)
-launch_kwargs.pop("fp32", None)
-launch_kwargs.update(launch_extra_kwargs)
-try:
- import google.colab
-
- launch_kwargs["debug"] = True
-except:
- pass
-
-if RUN_IN_SPACE:
- demo.launch()
-elif args.debug:
- launch_kwargs["server_name"] = "0.0.0.0"
- demo.queue().launch(**launch_kwargs)
-else:
- demo.queue().launch(**launch_kwargs)
-
diff --git a/spaces/Future-Tense/Slo-Mo-YOLO-Video/app.py b/spaces/Future-Tense/Slo-Mo-YOLO-Video/app.py
deleted file mode 100644
index 664bd04de28a3d83424d5d13f3aa7b789438e436..0000000000000000000000000000000000000000
--- a/spaces/Future-Tense/Slo-Mo-YOLO-Video/app.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import gradio as gr
-import time
-import cv2 # opencv2 package for python.
-import torch
-from pytube import YouTube
-from ultralyticsplus import YOLO, render_result
-
-
-model = YOLO('ultralyticsplus/yolov8s')
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-URL = "https://www.youtube.com/watch?v=6NBwbKMyzEE" #URL to parse
-
-# set model parameters
-model.overrides['conf'] = 0.50 # NMS confidence threshold
-model.overrides['iou'] = 0.45 # NMS IoU threshold
-model.overrides['agnostic_nms'] = False # NMS class-agnostic
-model.overrides['max_det'] = 1000 # maximum number of detections per image
-model.to(device)
-
-
-def load(URL):
-
- yt = YouTube(URL)
- vid_cap = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().last().download(filename="tmp.mp4")
- global player
- player = cv2.VideoCapture(vid_cap)
- frame_num = int(player.get(cv2.CAP_PROP_POS_FRAMES))
- frame_count = int(player.get(cv2.CAP_PROP_FRAME_COUNT))
- frame_fps = (player.get(cv2.CAP_PROP_FPS))
- tog = 0
- return vid_cap,frame_num,frame_count,frame_fps,tog
-
-def vid_play(cap,frame_num):
- assert player.isOpened() # Make sure that their is a stream.
- player.set(cv2.CAP_PROP_POS_FRAMES, int(frame_num))
-    ret, frame_bgr = player.read()
- frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
- results = model.predict(frame)
- render = render_result(model=model, image=frame, result=results[0])
- return render
-
-def fw_fn(cur,last):
- next = cur+1
- if next > last:
- next = last
- return next
-def bk_fn(cur):
- next = cur-1
- if next < 0:
- next = 0
- return next
-def tog_on():
- return 1,gr.Markdown.update("""
- )
-}
diff --git a/spaces/GeorgeOrville/bingo/src/lib/bots/bing/utils.ts b/spaces/GeorgeOrville/bingo/src/lib/bots/bing/utils.ts
deleted file mode 100644
index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000
--- a/spaces/GeorgeOrville/bingo/src/lib/bots/bing/utils.ts
+++ /dev/null
@@ -1,87 +0,0 @@
-import { ChatResponseMessage, BingChatResponse } from './types'
-
-export function convertMessageToMarkdown(message: ChatResponseMessage): string {
- if (message.messageType === 'InternalSearchQuery') {
- return message.text
- }
- for (const card of message.adaptiveCards??[]) {
- for (const block of card.body) {
- if (block.type === 'TextBlock') {
- return block.text
- }
- }
- }
- return ''
-}
-
-const RecordSeparator = String.fromCharCode(30)
-
-export const websocketUtils = {
- packMessage(data: any) {
- return `${JSON.stringify(data)}${RecordSeparator}`
- },
- unpackMessage(data: string | ArrayBuffer | Blob) {
- if (!data) return {}
- return data
- .toString()
- .split(RecordSeparator)
- .filter(Boolean)
- .map((s) => {
- try {
- return JSON.parse(s)
- } catch (e) {
- return {}
- }
- })
- },
-}
-
-export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise<string | undefined> {
- const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`,
- {
- method: 'HEAD',
- headers,
- redirect: 'manual'
- },
- );
-
- if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) {
- throw new Error('请求异常,请检查 cookie 是否有效')
- }
-
- const resultId = RegExp.$1;
- let count = 0
- const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`;
-
- do {
- await sleep(3000);
- const content = await fetch(imageThumbUrl, { headers, method: 'GET' })
-
- // @ts-ignore
- if (content.headers.get('content-length') > 1) {
- const text = await content.text()
-      return (text?.match(/<img[^>]+src="[^"]+/mg) || [])
-        .map(target => target?.split('src="').pop()?.replace(/&amp;/g, '&'))
-        .map(img => `<img src="${img}" />`).join(' ')
- }
- } while(count ++ < 10);
-}
-
-
-export async function* streamAsyncIterable(stream: ReadableStream) {
- const reader = stream.getReader()
- try {
- while (true) {
- const { done, value } = await reader.read()
- if (done) {
- return
- }
- yield value
- }
- } finally {
- reader.releaseLock()
- }
-}
-
-export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms))
-
diff --git "a/spaces/Gmq-x/gpt-academic/crazy_functions/\346\211\271\351\207\217\347\277\273\350\257\221PDF\346\226\207\346\241\243_\345\244\232\347\272\277\347\250\213.py" "b/spaces/Gmq-x/gpt-academic/crazy_functions/\346\211\271\351\207\217\347\277\273\350\257\221PDF\346\226\207\346\241\243_\345\244\232\347\272\277\347\250\213.py"
deleted file mode 100644
index 4adb9a464bc71ec4a177b76536d5e5fab619ef2d..0000000000000000000000000000000000000000
--- "a/spaces/Gmq-x/gpt-academic/crazy_functions/\346\211\271\351\207\217\347\277\273\350\257\221PDF\346\226\207\346\241\243_\345\244\232\347\272\277\347\250\213.py"
+++ /dev/null
@@ -1,131 +0,0 @@
-from toolbox import CatchException, report_execption, write_results_to_file
-from toolbox import update_ui
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
-from .crazy_utils import read_and_clean_pdf_text
-from colorful import *
-
-@CatchException
-def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port):
- import glob
- import os
-
-    # Basic info: feature and contributors
- chatbot.append([
- "函数插件功能?",
- "批量翻译PDF文档。函数插件贡献者: Binary-Husky"])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-    # Try to import dependencies; if any are missing, give installation advice
- try:
- import fitz
- import tiktoken
- except:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}",
- b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken```。")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Clear the history to avoid input overflow
- history = []
-
-    # Check the input arguments; exit immediately if none are given
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "":
- txt = '空空如也的输入栏'
- report_execption(chatbot, history,
- a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Search for the list of files to process
- file_manifest = [f for f in glob.glob(
- f'{project_folder}/**/*.pdf', recursive=True)]
-
-    # If no files were found
- if len(file_manifest) == 0:
- report_execption(chatbot, history,
- a=f"解析项目: {txt}", b=f"找不到任何.tex或.pdf文件: {txt}")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
-    # Start the actual task
- yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt)
-
-
-def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt):
- import os
- import tiktoken
- TOKEN_LIMIT_PER_FRAGMENT = 1280
- generated_conclusion_files = []
- for index, fp in enumerate(file_manifest):
-
-        # Read the PDF file
- file_content, page_one = read_and_clean_pdf_text(fp)
-
-        # Recursively split the PDF file
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
- paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
- txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
- page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
- txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
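-        # Note (illustrative): the full text is split into fragments of at most 1280 tokens for translation,
-        # while the first page is split into fragments of at most 320 tokens, used only to extract the paper metadata below.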
-
- # 为了更好的效果,我们剥离Introduction之后的部分(如果有)
- paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
-
- # 单线,获取文章meta信息
- paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=f"以下是一篇学术论文的基础信息,请从中提取出“标题”、“收录会议或期刊”、“作者”、“摘要”、“编号”、“作者邮箱”这六个部分。请用markdown格式输出,最后用中文翻译摘要部分。请提取:{paper_meta}",
- inputs_show_user=f"请从{fp}中提取出“标题”、“收录会议或期刊”等基本信息。",
- llm_kwargs=llm_kwargs,
- chatbot=chatbot, history=[],
-        sys_prompt="Your job is to collect information from materials.",
- )
-
- # 多线,翻译
- gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
- inputs_array=[
- f"你需要翻译以下内容:\n{frag}" for frag in paper_fragments],
- inputs_show_user_array=[f"\n---\n 原文: \n\n {frag.replace('#', '')} \n---\n 翻译:\n " for frag in paper_fragments],
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history_array=[[paper_meta] for _ in paper_fragments],
- sys_prompt_array=[
- "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in paper_fragments],
- # max_workers=5 # OpenAI所允许的最大并行过载
- )
-
- # 整理报告的格式
- for i,k in enumerate(gpt_response_collection):
- if i%2==0:
- gpt_response_collection[i] = f"\n\n---\n\n ## 原文[{i//2}/{len(gpt_response_collection)//2}]: \n\n {paper_fragments[i//2].replace('#', '')} \n\n---\n\n ## 翻译[{i//2}/{len(gpt_response_collection)//2}]:\n "
- else:
- gpt_response_collection[i] = gpt_response_collection[i]
- final = ["一、论文概况\n\n---\n\n", paper_meta_info.replace('# ', '### ') + '\n\n---\n\n', "二、论文翻译", ""]
- final.extend(gpt_response_collection)
- create_report_file_name = f"{os.path.basename(fp)}.trans.md"
- res = write_results_to_file(final, file_name=create_report_file_name)
-
- # 更新UI
- generated_conclusion_files.append(f'./gpt_log/{create_report_file_name}')
- chatbot.append((f"{fp}完成了吗?", res))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-
- # 准备文件的下载
- import shutil
- for pdf_path in generated_conclusion_files:
- # 重命名文件
- rename_file = f'./gpt_log/总结论文-{os.path.basename(pdf_path)}'
- if os.path.exists(rename_file):
- os.remove(rename_file)
- shutil.copyfile(pdf_path, rename_file)
- if os.path.exists(pdf_path):
- os.remove(pdf_path)
- chatbot.append(("给出输出文件清单", str(generated_conclusion_files)))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
diff --git a/spaces/Gmq-x/gpt-academic/docs/README_EN.md b/spaces/Gmq-x/gpt-academic/docs/README_EN.md
deleted file mode 100644
index db214f5327b8cdcd84ed1c57390c3b24ba83d78f..0000000000000000000000000000000000000000
--- a/spaces/Gmq-x/gpt-academic/docs/README_EN.md
+++ /dev/null
@@ -1,291 +0,0 @@
-> **Note**
->
-> This English README is automatically generated by the markdown translation plugin in this project, and may not be 100% correct.
->
-
-# ChatGPT Academic Optimization
-
-**If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a [README in English](docs/README_EN.md) translated by this project itself.**
-
-> **Note**
->
-> 1. Please note that only **functions with red color** support reading files, and some functions are located in the **dropdown menu** of plugins. Additionally, we welcome any new plugin PRs and handle them with the **highest priority**!
->
-> 2. The functionality of each file in this project is detailed in the project's self-translation report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As the version iterates, you can also click the relevant function plugins at any time to call GPT and regenerate the project's self-analysis report. The FAQ summary is in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98) section.
->
-
-
-
-
-Function | Description
---- | ---
-One-Click Polish | Supports one-click polishing and finding grammar errors in academic papers.
-One-Key Translation Between Chinese and English | One-click translation between Chinese and English.
-One-Key Code Interpretation | Can correctly display and interpret code.
-[Custom Shortcut Keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys.
-[Configure Proxy Server](https://www.bilibili.com/video/BV1rc411W7Dr) | Supports configuring proxy servers.
-Modular Design | Supports custom high-order function plugins and [function plugins], and plugins support [hot updates](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
-[Self-programming Analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plugin] [One-Key Read](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) the source code of this project
-[Program Analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plugin] One-click analysis of the project tree of other Python/C/C++/Java/Lua/... projects
-Read the Paper | [Function Plugin] One-click interpretation of the full text of latex paper and generation of abstracts
-Latex Full Text Translation, Proofreading | [Function Plugin] One-click translation or proofreading of latex papers.
-Batch Comment Generation | [Function Plugin] One-click batch generation of function comments
-Chat Analysis Report Generation | [Function Plugin] After running, an automatic summary report will be generated
-[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function Plugin] Enter the arxiv article url to translate the abstract and download the PDF with one click
-[Full-text Translation Function of PDF Paper](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function Plugin] Extract the title & abstract of the PDF paper + translate the full text (multithreading)
-[Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function Plugin] Given any Google Scholar search page URL, let gpt help you choose interesting articles.
-Formula / Picture / Table Display | Can display both the tex form and the rendered form of formulas at the same time, supports formula and code highlighting
-Multithreaded Function Plugin Support | Supports multi-threaded calls to chatgpt, one-click processing of massive texts or programs
-Start Dark Gradio [Theme](https://github.com/binary-husky/chatgpt_academic/issues/173) | Add ```/?__dark-theme=true``` at the end of the browser url to switch to dark theme
-[Multiple LLM Models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | It must feel nice to be served by GPT3.5, GPT4, and [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) at the same time!
-Huggingface [Online Experience](https://huggingface.co/spaces/qingxu98/gpt-academic) without a proxy | After logging in to huggingface, duplicate [this space](https://huggingface.co/spaces/qingxu98/gpt-academic)
-... | ...
-
-
-
-
-- New interface (switch between "left-right layout" and "up-down layout" by modifying the LAYOUT option in config.py)
-
-
-
-
-
-- All buttons are dynamically generated by reading functional.py, so custom functionality can be added at will, freeing up the clipboard
-
-
-
-
-- Proofreading / correcting
-
-
-
-
-- If the output contains formulas, it will be displayed in both the tex form and the rendering form at the same time, which is convenient for copying and reading
-
-
-
-
-- Don't want to read the project code? Just feed the whole project to chatgpt
-
-
-
-
-- Mixed calls to multiple major language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
-
-
-
-
-Mixed calls to multiple major language models in the [huggingface beta version](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (the huggingface version does not support chatglm)
-
-
----
-
-## Installation-Method 1: Run directly (Windows, Linux or MacOS)
-
-1. Download project
-```sh
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-```
-
-2. Configure API_KEY and proxy settings
-
-
-In `config.py`, configure the overseas Proxy and OpenAI API KEY as follows:
-```
-1. If you are in China, you need to set up an overseas proxy to use the OpenAI API smoothly. Please read config.py carefully for setup details (1. Modify USE_PROXY to True; 2. Modify proxies according to the instructions).
-2. Configure the OpenAI API KEY. You need to register and obtain an API KEY on the OpenAI website. Once you get the API KEY, you can configure it in the config.py file.
-3. Issues related to proxy networks (network timeouts, proxy failures) are summarized at https://github.com/binary-husky/chatgpt_academic/issues/1
-```
-(P.S. When the program runs, it first checks whether a private configuration file named `config_private.py` exists and, if so, uses the same-name options in it to override those in `config.py`. Therefore, if you understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py` and transfer (copy) the configuration from `config.py` to `config_private.py`. `config_private.py` is not tracked by git and keeps your private information more secure.)
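-
-A minimal sketch of what such a `config_private.py` could look like (illustrative placeholder values; the exact option names and accepted formats are defined in `config.py`):
-```python
-# config_private.py: any option defined here overrides the same-name option in config.py
-API_KEY = "sk-..."   # your OpenAI API key (placeholder)
-USE_PROXY = True     # enable the proxy if you need one to reach the OpenAI API
-proxies = {          # example proxy endpoints; adjust to your own setup
-    "http": "socks5h://localhost:11284",
-    "https": "socks5h://localhost:11284",
-}
-```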
-
-
-3. Install dependencies
-```sh
-# (Option One) Recommended
-python -m pip install -r requirements.txt
-
-# (Option Two) If you use anaconda, the steps are similar:
-# (Option Two.1) conda create -n gptac_venv python=3.11
-# (Option Two.2) conda activate gptac_venv
-# (Option Two.3) python -m pip install -r requirements.txt
-
-# Note: Use official pip source or Ali pip source. Other pip sources (such as some university pips) may have problems, and temporary replacement methods are as follows:
-# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-```
-
-If you need to support Tsinghua ChatGLM, you need to install additional dependencies (if you are not familiar with python or your machine is not powerful, we recommend not trying):
-```sh
-python -m pip install -r request_llm/requirements_chatglm.txt
-```
-
-4. Run
-```sh
-python main.py
-```
-
-5. Test function plugins
-```
-- Test Python project analysis
- In the input area, enter `./crazy_functions/test_project/python/dqn`, and then click "Analyze the entire Python project"
-- Test self-code interpretation
- Click "[Multithreading Demo] Interpretation of This Project Itself (Source Code Interpretation)"
-- Test experimental function template function (requires gpt to answer what happened today in history). You can use this function as a template to implement more complex functions.
- Click "[Function Plugin Template Demo] Today in History"
-- There are more functions to choose from in the function plugin area drop-down menu.
-```
-
-## Installation-Method 2: Use Docker (Linux)
-
-1. ChatGPT only (recommended for most people)
-``` sh
-# download project
-git clone https://github.com/binary-husky/chatgpt_academic.git
-cd chatgpt_academic
-# configure overseas Proxy and OpenAI API KEY
-Edit config.py with any text editor
-# Install
-docker build -t gpt-academic .
-# Run
-docker run --rm -it --net=host gpt-academic
-
-# Test function plug-in
-## Test function plugin template function (requires gpt to answer what happened today in history). You can use this function as a template to implement more complex functions.
-Click "[Function Plugin Template Demo] Today in History"
-## Test Abstract Writing for Latex Projects
-Enter ./crazy_functions/test_project/latex/attention in the input area, and then click "Read Tex Paper and Write Abstract"
-## Test Python Project Analysis
-Enter ./crazy_functions/test_project/python/dqn in the input area and click "Analyze the entire Python project."
-
-More functions are available in the function plugin area drop-down menu.
-```
-
-2. ChatGPT+ChatGLM (requires strong familiarity with docker + strong computer configuration)
-
-``` sh
-# Modify dockerfile
-cd docs && nano Dockerfile+ChatGLM
-# How to build (Dockerfile+ChatGLM is under the docs path, so cd docs first)
-docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
-# How to run (1) run directly:
-docker run --rm -it --net=host --gpus=all gpt-academic
-# How to run (2) enter the container first if you want to make adjustments before running:
-docker run --rm -it --net=host --gpus=all gpt-academic bash
-```
-
-
-## Installation-Method 3: Other Deployment Methods
-
-1. Remote Cloud Server Deployment
-Please visit [Deployment Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-2. Use WSL2 (Windows Subsystem for Linux)
-Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-
-## Installation-Proxy Configuration
-### Method 1: Conventional method
-[Configure Proxy](https://github.com/binary-husky/chatgpt_academic/issues/1)
-
-### Method 2: Step-by-step tutorial for newcomers
-[Step-by-step tutorial for newcomers](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
-
----
-
-## Customizing Convenient Buttons (Customizing Academic Shortcuts)
-Open `core_functional.py` with any text editor and add an entry as follows, then restart the program. (If the button has already been added and is visible, both the prefix and the suffix support hot modification and take effect without restarting the program.) For example:
-```
-"Super English to Chinese translation": {
- # Prefix, which will be added before your input. For example, to describe your requirements, such as translation, code interpretation, polishing, etc.
- "Prefix": "Please translate the following content into Chinese and use a markdown table to interpret the proprietary terms in the text one by one:\n\n",
-
- # Suffix, which will be added after your input. For example, combined with the prefix, you can put your input content in quotes.
- "Suffix": "",
-},
-```
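-
-Conceptually, when such a button is clicked, the program wraps your input between the two strings before sending it to the model. A rough sketch (not the project's actual implementation; `entry` and `user_input` are hypothetical names):
-```python
-entry = {
-    "Prefix": "Please translate the following content into Chinese ...:\n\n",
-    "Suffix": "",
-}
-user_input = "Attention is all you need."
-prompt = entry["Prefix"] + user_input + entry["Suffix"]  # what gets sent to the model
-```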
-
-
-
-
----
-
-
-## Some Function Displays
-
-### Image Display:
-
-
-You are a professional academic paper translator.
-
-
-
-
-
-### If a program can understand and analyze itself:
-
-
-
-
-
-
-
-
-
-### Analysis of any Python/Cpp project:
-
-
-
-
-
-
-
-
-### One-click reading comprehension and summary generation of Latex papers
-
-
-
-
-### Automatic report generation
-
-
-
-
-
-
-### Modular functional design
-
-
-
-
-
-### Source code translation to English
-
-
-
-
-
-## Todo and version planning:
-- version 3.2+ (todo): Function plugin supports more parameter interfaces
-- version 3.1: Support for querying multiple GPT models at the same time! Support for api2d, support for load balancing across multiple api-keys
-- version 3.0: Support for chatglm and other small llms
-- version 2.6: Refactored the plugin structure, improved interactivity, added more plugins
-- version 2.5: Self-updating, solves the problem of text being too long and token overflowing when summarizing large project source code
-- version 2.4: (1) Added PDF full text translation function; (2) Added function to switch input area position; (3) Added vertical layout option; (4) Multi-threaded function plugin optimization.
-- version 2.3: Enhanced multi-threaded interactivity
-- version 2.2: Function plugin supports hot reloading
-- version 2.1: Foldable layout
-- version 2.0: Introduction of modular function plugins
-- version 1.0: Basic functions
-
-## Reference and learning
-
-```
-The code design of this project has referenced many other excellent projects, including:
-
-# Reference project 1: Borrowed many tips from ChuanhuChatGPT
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Reference project 2: Tsinghua ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-```
-
diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/Img_to_Sqlite.py b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/Img_to_Sqlite.py
deleted file mode 100644
index 6f761e681e84433f4060bd2ec9abedddbc261381..0000000000000000000000000000000000000000
--- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/Waifu2x/Img_to_Sqlite.py
+++ /dev/null
@@ -1,123 +0,0 @@
-"""
-Split images into small patches and insert them into sqlite db. Reading and Inserting speeds are much better than
-Ubuntu's (18.04) file system when the number of patches is larger than 20k. And it has smaller size than using h5 format
-
-Recommend to check or filter out small size patches as their content vary little. 128x128 seems better than 64x64.
-
-
-"""
-import sqlite3
-from torch.utils.data import DataLoader
-from tqdm import trange
-from Dataloader import Image2Sqlite
-
-conn = sqlite3.connect("dataset/image_yandere.db")
-cursor = conn.cursor()
-
-with conn:
- cursor.execute("PRAGMA SYNCHRONOUS = OFF")
-
-table_name = "train_images_size_128_noise_1_rgb"
-lr_col = "lr_img"
-hr_col = "hr_img"
-
-with conn:
- conn.execute(
- f"CREATE TABLE IF NOT EXISTS {table_name} ({lr_col} BLOB, {hr_col} BLOB)"
- )
-
-dat = Image2Sqlite(
- img_folder="./dataset/yande.re_test_shrink",
- patch_size=256,
- shrink_size=2,
- noise_level=1,
- down_sample_method=None,
- color_mod="RGB",
- dummy_len=None,
-)
-print(f"Total images {len(dat)}")
-
-img_dat = DataLoader(dat, num_workers=6, batch_size=6, shuffle=True)
-
-num_batches = 20
-for i in trange(num_batches):
- bulk = []
- for lrs, hrs in img_dat:
- patches = [(lrs[i], hrs[i]) for i in range(len(lrs))]
- # patches = [(lrs[i], hrs[i]) for i in range(len(lrs)) if len(lrs[i]) > 14000]
-
- bulk.extend(patches)
-
- bulk = [
- i for i in bulk if len(i[0]) > 15000
- ] # for 128x128, 14000 is fair. Around 20% of patches are filtered out
- cursor.executemany(
- f"INSERT INTO {table_name}({lr_col}, {hr_col}) VALUES (?,?)", bulk
- )
- conn.commit()
-
-cursor.execute(f"select max(rowid) from {table_name}")
-print(cursor.fetchall())
-conn.commit()
-# +++++++++++++++++++++++++++++++++++++
-# Used for Create Test Database
-# -------------------------------------
-
-# cursor.execute(f"SELECT ROWID FROM {table_name} ORDER BY LENGTH({lr_col}) DESC LIMIT 400")
-# rowdis = cursor.fetchall()
-# rowdis = ",".join([str(i[0]) for i in rowdis])
-#
-# cursor.execute(f"DELETE FROM {table_name} WHERE ROWID NOT IN ({rowdis})")
-# conn.commit()
-# cursor.execute("vacuum")
-#
-# cursor.execute("""
-# CREATE TABLE IF NOT EXISTS train_images_size_128_noise_1_rgb_small AS
-# SELECT *
-# FROM train_images_size_128_noise_1_rgb
-# WHERE length(lr_img) < 14000;
-# """)
-#
-# cursor.execute("""
-# DELETE
-# FROM train_images_size_128_noise_1_rgb
-# WHERE length(lr_img) < 14000;
-# """)
-
-# reset index
-cursor.execute("VACUUM")
-conn.commit()
-
-# +++++++++++++++++++++++++++++++++++++
-# check image size
-# -------------------------------------
-#
-
-from PIL import Image
-import io
-
-cursor.execute(
- f"""
- select {hr_col} from {table_name}
- ORDER BY LENGTH({hr_col}) desc
- limit 100
-"""
-)
-# WHERE LENGTH({lr_col}) BETWEEN 14000 AND 16000
-
-# small = cursor.fetchall()
-# print(len(small))
-for idx, i in enumerate(cursor):
- img = Image.open(io.BytesIO(i[0]))
- img.save(f"dataset/check/{idx}.png")
-
-# +++++++++++++++++++++++++++++++++++++
-# Check Image Variance
-# -------------------------------------
-
-import pandas as pd
-import matplotlib.pyplot as plt
-
-dat = pd.read_sql(f"SELECT length({lr_col}) from {table_name}", conn)
-dat.hist(bins=20)
-plt.show()
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gfl/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gfl/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py
deleted file mode 100644
index eab622b2e8bdc03c717b9b04d043da46f25a7cb3..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gfl/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py
+++ /dev/null
@@ -1,14 +0,0 @@
-_base_ = './gfl_r50_fpn_mstrain_2x_coco.py'
-model = dict(
- pretrained='torchvision://resnet101',
- backbone=dict(
- type='ResNet',
- depth=101,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
- stage_with_dcn=(False, True, True, True),
- norm_eval=True,
- style='pytorch'))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/schedules/schedule_40k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/schedules/schedule_40k.py
deleted file mode 100644
index cdbf841abcb26eed87bf76ab816aff4bae0630ee..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/schedules/schedule_40k.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# optimizer
-optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
-optimizer_config = dict()
-# learning policy
-lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False)
-# runtime settings
-runner = dict(type='IterBasedRunner', max_iters=40000)
-checkpoint_config = dict(by_epoch=False, interval=4000)
-evaluation = dict(interval=4000, metric='mIoU')
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x1024_80k_cityscapes.py
deleted file mode 100644
index 03734310d7338c75d48c914cb325500961c04a79..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x1024_80k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/danet_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_769x769_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_769x769_40k_cityscapes.py
deleted file mode 100644
index e5f3a3fae18cb769fd04b0c669785c5728cf479f..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r101_769x769_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './upernet_r50_769x769_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py
deleted file mode 100644
index 632a69e9f4bd98d33abb689c15557c818d0e35ea..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py
+++ /dev/null
@@ -1,210 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import gc
-import os
-import os.path as osp
-import random
-import numpy as np
-import tqdm
-import torch
-
-from collections import namedtuple
-
-import faiss
-
-import fairseq
-import soundfile as sf
-
-
-def get_parser():
- parser = argparse.ArgumentParser(
- description="compute kmeans codebook from kaldi-computed feats"
- )
- # fmt: off
- parser.add_argument('data', help='location of tsv files')
- parser.add_argument('--save-dir', help='where to save the output', required=True)
- parser.add_argument('--checkpoint', type=str, help='checkpoint for wav2vec model (if using wav2vec features)', required=True)
- parser.add_argument('--sample-pct', '-r', type=float, help='percentage of timesteps to sample', default=0)
- parser.add_argument('--layer', '-l', type=int, help='which layer to read', default=14)
- parser.add_argument('--faiss-specs', '-f', type=str,
- help='faiss index specs; separated by space '
- 'format is: PCAx_NORM_CLUSx_SPHERICAL -> '
- 'PCAx if exists first apply PCA '
- 'NORM if exists, normalize the vector by L2 norm '
- 'CLUSx must exist, cluster to x clusters '
-                            'SPHERICAL if exists, apply spherical kmeans',
- default='l2')
- # fmt: on
-
- return parser
-
-
-faiss_spec = namedtuple("faiss_spec", ["pca", "norm", "n_clus", "sphere", "spec_str"])
-
-
-def parse_faiss_specs(specs_str):
- specs = []
- for ss in specs_str.split():
- comps = ss.split("_")
- pca = 0
- norm = False
- n_clus = 0
- sphere = False
- for c in comps:
- if c.startswith("PCA"):
- pca = int(c[3:])
- elif c == "NORM":
- norm = True
- elif c.startswith("CLUS"):
- n_clus = int(c[4:])
- elif c == "SPHERICAL":
- sphere = True
- assert n_clus > 0
- specs.append(
- faiss_spec(pca=pca, norm=norm, n_clus=n_clus, sphere=sphere, spec_str=ss)
- )
- return specs
-
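-# Example (illustrative): parse_faiss_specs("PCA512_NORM_CLUS128_SPHERICAL")
-# -> [faiss_spec(pca=512, norm=True, n_clus=128, sphere=True, spec_str="PCA512_NORM_CLUS128_SPHERICAL")]
-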
-
-class Wav2VecFeatureReader(object):
- def __init__(self, cp_file, layer):
- state = fairseq.checkpoint_utils.load_checkpoint_to_cpu(cp_file)
-
- self.layer = layer
-
- if "cfg" in state:
- w2v_args = state["cfg"]
- task = fairseq.tasks.setup_task(w2v_args.task)
- model = task.build_model(w2v_args.model)
- else:
- w2v_args = state["args"]
- task = fairseq.tasks.setup_task(w2v_args)
- model = task.build_model(w2v_args)
- model.load_state_dict(state["model"], strict=True)
- model.eval()
- model.cuda()
- self.model = model
-
- def read_audio(self, fname):
- """Load an audio file and return PCM along with the sample rate"""
- wav, sr = sf.read(fname)
- assert sr == 16e3
-
- return wav
-
- def get_feats(self, loc):
- x = self.read_audio(loc)
- with torch.no_grad():
- source = torch.from_numpy(x).view(1, -1).float().cuda()
- res = self.model(
- source=source, mask=False, features_only=True, layer=self.layer
- )
- return res["layer_results"][self.layer][0].squeeze(1)
-
-
-def get_iterator(args):
- with open(args.data, "r") as fp:
- lines = fp.read().split("\n")
- root = lines.pop(0).strip()
- files = [osp.join(root, line.split("\t")[0]) for line in lines if len(line) > 0]
-
- if getattr(args, "sample_pct", 0) > 0:
- files = random.sample(files, int(args.sample_pct * len(files)))
- num = len(files)
- reader = Wav2VecFeatureReader(args.checkpoint, args.layer)
-
- def iterate():
- for fname in files:
- feats = reader.get_feats(fname)
- yield feats.cpu().numpy()
-
- return iterate, num
-
-
-def main():
- parser = get_parser()
- args = parser.parse_args()
-
- faiss_specs = parse_faiss_specs(args.faiss_specs)
- print("Faiss Specs:", faiss_specs)
-
- feat_path = osp.join(args.save_dir, "features")
- if osp.exists(feat_path + ".npy"):
- feats = np.load(feat_path + ".npy")
- else:
- generator, num = get_iterator(args)
- iterator = generator()
-
- feats = []
- for f in tqdm.tqdm(iterator, total=num):
- feats.append(f)
-
- del iterator
- del generator
-
- feats = np.concatenate(feats)
-
- print(feats.shape)
-
- os.makedirs(args.save_dir, exist_ok=True)
- # np.save(feat_path, feats)
-
- gc.collect()
- torch.cuda.empty_cache()
-
- reload = False
- for spec in faiss_specs:
- print("Processing spec", spec)
-
- if reload:
- print("Reloading...")
- del feats
- gc.collect()
- feats = np.load(feat_path + ".npy")
-
- save_path = osp.join(args.save_dir, spec.spec_str)
- os.makedirs(save_path, exist_ok=True)
- d = feats.shape[-1]
- x = feats
- if spec.pca > 0:
- print("Computing PCA")
- pca = faiss.PCAMatrix(d, spec.pca)
- pca.train(x)
- d = spec.pca
- b = faiss.vector_to_array(pca.b)
- A = faiss.vector_to_array(pca.A).reshape(pca.d_out, pca.d_in)
- np.save(osp.join(save_path, "pca_A"), A.T)
- np.save(osp.join(save_path, "pca_b"), b)
- print("Applying PCA")
- x = pca.apply_py(x)
-
- if spec.norm:
- reload = spec.pca <= 0
- print("Normalizing")
- faiss.normalize_L2(x)
-
- print("Computing kmeans")
- kmeans = faiss.Kmeans(
- d,
- spec.n_clus,
- niter=50,
- verbose=True,
- spherical=spec.sphere,
- max_points_per_centroid=feats.shape[0],
- gpu=True,
- nredo=3,
- )
- kmeans.train(x)
- np.save(osp.join(save_path, "centroids"), kmeans.centroids)
- del kmeans
- del x
- gc.collect()
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/model_parallel/models/roberta/model.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/model_parallel/models/roberta/model.py
deleted file mode 100644
index 77a80ef72057219110b34678a38705549910edd3..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/model_parallel/models/roberta/model.py
+++ /dev/null
@@ -1,225 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-RoBERTa: A Robustly Optimized BERT Pretraining Approach.
-"""
-
-import logging
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.model_parallel.models.transformer import ModelParallelTransformerEncoder
-from fairseq.models import register_model, register_model_architecture
-from fairseq.models.roberta import (
- roberta_base_architecture,
- roberta_prenorm_architecture,
- RobertaEncoder,
- RobertaModel,
-)
-from fairseq.modules import LayerNorm
-
-
-try:
- from fairseq.model_parallel.megatron.mpu import (
- copy_to_model_parallel_region,
- gather_from_model_parallel_region,
- ColumnParallelLinear,
- VocabParallelEmbedding,
- )
-
- has_megatron_submodule = True
-except (ImportError, ModuleNotFoundError):
- has_megatron_submodule = False
-
-logger = logging.getLogger(__name__)
-
-
-@register_model("model_parallel_roberta")
-class ModelParallelRobertaModel(RobertaModel):
- def __init__(self, args, encoder):
- super().__init__(args, encoder)
-
- self.classification_heads = nn.ModuleDict()
-
- @staticmethod
- def add_args(parser):
- RobertaModel.add_args(parser)
- parser.add_argument(
- "--no-final-layer-norm",
- action="store_true",
- help=(
- "don't add final layernorm (only applicable when "
- "--encoder-normalize-before=True"
- ),
- )
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
-
- # make sure all arguments are present
- base_architecture(args)
-
- task.source_dictionary.pad_to_multiple_(args.model_parallel_size * 8)
- task.target_dictionary.pad_to_multiple_(args.model_parallel_size * 8)
-
- if not hasattr(args, "max_positions"):
- args.max_positions = args.tokens_per_sample
-
- if getattr(args, "untie_weights_roberta", False):
- raise NotImplementedError(
- "--untie-weights-roberta is not supported in model parallel mode"
- )
-
- encoder = ModelParallelRobertaEncoder(args, task.source_dictionary)
- return cls(args, encoder)
-
- def forward(
- self,
- src_tokens,
- features_only=False,
- return_all_hiddens=False,
- classification_head_name=None,
- **kwargs
- ):
- if classification_head_name is not None:
- features_only = True
-
- x, extra = self.encoder(src_tokens, features_only, return_all_hiddens, **kwargs)
-
- if classification_head_name is not None:
- x = self.classification_heads[classification_head_name](x)
- return x, extra
-
- def register_classification_head(
- self, name, num_classes=None, inner_dim=None, **kwargs
- ):
- """Register a classification head."""
- if name in self.classification_heads:
- prev_num_classes = self.classification_heads[name].out_proj.out_features
- prev_inner_dim = self.classification_heads[name].dense.out_features
- if num_classes != prev_num_classes or inner_dim != prev_inner_dim:
- logger.warning(
- 're-registering head "{}" with num_classes {} (prev: {}) '
- "and inner_dim {} (prev: {})".format(
- name, num_classes, prev_num_classes, inner_dim, prev_inner_dim
- )
- )
- self.classification_heads[name] = ModelParallelRobertaClassificationHead(
- self.args.encoder_embed_dim,
- inner_dim or self.args.encoder_embed_dim,
- num_classes,
- self.args.pooler_activation_fn,
- self.args.pooler_dropout,
- )
-
-
-class ModelParallelRobertaLMHead(nn.Module):
- """Head for masked language modeling."""
-
- def __init__(self, embed_dim, output_dim, activation_fn, weight=None):
- super().__init__()
- self.dense = ColumnParallelLinear(embed_dim, embed_dim, gather_output=True)
- self.activation_fn = utils.get_activation_fn(activation_fn)
- self.layer_norm = LayerNorm(embed_dim)
-
- if weight is None:
- weight = nn.Linear(embed_dim, output_dim, bias=False).weight
- self.weight = weight
- self.bias = nn.Parameter(torch.zeros(output_dim))
-
- def forward(self, features, masked_tokens=None, **kwargs):
- # Only project the unmasked tokens while training,
- # saves both memory and computation
- if masked_tokens is not None:
- features = features[masked_tokens, :]
-
- x = self.dense(features)
- x = self.activation_fn(x)
- x = self.layer_norm(x)
-
- x = copy_to_model_parallel_region(x)
- # project back to size of vocabulary with bias
- x = F.linear(x, self.weight)
- x = gather_from_model_parallel_region(x).contiguous()
- x = x + self.bias
- return x
-
-
-class ModelParallelRobertaClassificationHead(nn.Module):
- """Head for sentence-level classification tasks."""
-
- def __init__(
- self, input_dim, inner_dim, num_classes, activation_fn, pooler_dropout
- ):
- super().__init__()
- self.dense = ColumnParallelLinear(input_dim, inner_dim, gather_output=True)
- self.activation_fn = utils.get_activation_fn(activation_fn)
- self.dropout = nn.Dropout(p=pooler_dropout)
- self.out_proj = nn.Linear(inner_dim, num_classes)
-
- def forward(self, features, **kwargs):
- x = features[:, 0, :] # take token (equiv. to [CLS])
- x = self.dropout(x)
- x = self.dense(x)
- x = self.activation_fn(x)
- x = self.dropout(x)
- x = self.out_proj(x)
- return x
-
-
-class ModelParallelRobertaEncoder(RobertaEncoder):
- """RoBERTa encoder."""
-
- def __init__(self, args, dictionary):
- super().__init__(args, dictionary)
- assert not self.args.untie_weights_roberta
-
- def build_embedding(self, vocab_size, embedding_dim, padding_idx):
- return VocabParallelEmbedding(vocab_size, embedding_dim, padding_idx)
-
- def build_encoder(self, args, dictionary, embed_tokens):
- return ModelParallelTransformerEncoder(args, dictionary, embed_tokens)
-
- def build_lm_head(self, embed_dim, output_dim, activation_fn, weight):
- return ModelParallelRobertaLMHead(embed_dim, output_dim, activation_fn, weight)
-
-
-@register_model_architecture("model_parallel_roberta", "model_parallel_roberta")
-def base_architecture(args):
- args.no_final_layer_norm = getattr(args, "no_final_layer_norm", False)
- # model parallel RoBERTa defaults to "Pre-LN" formulation
- roberta_prenorm_architecture(args)
-
-
-# earlier versions of model parallel RoBERTa removed the final layer norm
-@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_v1")
-def model_parallel_roberta_v1_architecture(args):
- args.no_final_layer_norm = getattr(args, "no_final_layer_norm", True)
- base_architecture(args)
-
-
-@register_model_architecture(
- "model_parallel_roberta", "model_parallel_roberta_postnorm"
-)
-def model_parallel_roberta_postnorm_architecture(args):
- # the original BERT/RoBERTa uses the "Post-LN" formulation
- roberta_base_architecture(args)
-
-
-@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_base")
-def model_parallel_roberta_base_architecture(args):
- base_architecture(args)
-
-
-@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_large")
-def model_parallel_roberta_large_architecture(args):
- args.encoder_layers = getattr(args, "encoder_layers", 24)
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16)
- base_architecture(args)
diff --git a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/transformer/mingpt.py b/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/transformer/mingpt.py
deleted file mode 100644
index d14b7b68117f4b9f297b2929397cd4f55089334c..0000000000000000000000000000000000000000
--- a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/transformer/mingpt.py
+++ /dev/null
@@ -1,415 +0,0 @@
-"""
-taken from: https://github.com/karpathy/minGPT/
-GPT model:
-- the initial stem consists of a combination of token encoding and a positional encoding
-- the meat of it is a uniform sequence of Transformer blocks
- - each Transformer is a sequential combination of a 1-hidden-layer MLP block and a self-attention block
- - all blocks feed into a central residual pathway similar to resnets
-- the final decoder is a linear projection into a vanilla Softmax classifier
-"""
-
-import math
-import logging
-
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-from transformers import top_k_top_p_filtering
-
-logger = logging.getLogger(__name__)
-
-
-class GPTConfig:
- """ base GPT config, params common to all GPT versions """
- embd_pdrop = 0.1
- resid_pdrop = 0.1
- attn_pdrop = 0.1
-
- def __init__(self, vocab_size, block_size, **kwargs):
- self.vocab_size = vocab_size
- self.block_size = block_size
- for k,v in kwargs.items():
- setattr(self, k, v)
-
-
-class GPT1Config(GPTConfig):
- """ GPT-1 like network roughly 125M params """
- n_layer = 12
- n_head = 12
- n_embd = 768
-
-
-class CausalSelfAttention(nn.Module):
- """
- A vanilla multi-head masked self-attention layer with a projection at the end.
- It is possible to use torch.nn.MultiheadAttention here but I am including an
- explicit implementation here to show that there is nothing too scary here.
- """
-
- def __init__(self, config):
- super().__init__()
- assert config.n_embd % config.n_head == 0
- # key, query, value projections for all heads
- self.key = nn.Linear(config.n_embd, config.n_embd)
- self.query = nn.Linear(config.n_embd, config.n_embd)
- self.value = nn.Linear(config.n_embd, config.n_embd)
- # regularization
- self.attn_drop = nn.Dropout(config.attn_pdrop)
- self.resid_drop = nn.Dropout(config.resid_pdrop)
- # output projection
- self.proj = nn.Linear(config.n_embd, config.n_embd)
- # causal mask to ensure that attention is only applied to the left in the input sequence
- mask = torch.tril(torch.ones(config.block_size,
- config.block_size))
- if hasattr(config, "n_unmasked"):
- mask[:config.n_unmasked, :config.n_unmasked] = 1
- self.register_buffer("mask", mask.view(1, 1, config.block_size, config.block_size))
- self.n_head = config.n_head
-
- def forward(self, x, layer_past=None):
- B, T, C = x.size()
-
- # calculate query, key, values for all heads in batch and move head forward to be the batch dim
- k = self.key(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
- q = self.query(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
- v = self.value(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
-
- present = torch.stack((k, v))
- if layer_past is not None:
- past_key, past_value = layer_past
- k = torch.cat((past_key, k), dim=-2)
- v = torch.cat((past_value, v), dim=-2)
-
- # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T)
- att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
- if layer_past is None:
- att = att.masked_fill(self.mask[:,:,:T,:T] == 0, float('-inf'))
-
- att = F.softmax(att, dim=-1)
- att = self.attn_drop(att)
- y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
- y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side
-
- # output projection
- y = self.resid_drop(self.proj(y))
- return y, present # TODO: check that this does not break anything
-
-
-class Block(nn.Module):
- """ an unassuming Transformer block """
- def __init__(self, config):
- super().__init__()
- self.ln1 = nn.LayerNorm(config.n_embd)
- self.ln2 = nn.LayerNorm(config.n_embd)
- self.attn = CausalSelfAttention(config)
- self.mlp = nn.Sequential(
- nn.Linear(config.n_embd, 4 * config.n_embd),
- nn.GELU(), # nice
- nn.Linear(4 * config.n_embd, config.n_embd),
- nn.Dropout(config.resid_pdrop),
- )
-
- def forward(self, x, layer_past=None, return_present=False):
- # TODO: check that training still works
- if return_present: assert not self.training
- # layer past: tuple of length two with B, nh, T, hs
- attn, present = self.attn(self.ln1(x), layer_past=layer_past)
-
- x = x + attn
- x = x + self.mlp(self.ln2(x))
- if layer_past is not None or return_present:
- return x, present
- return x
-
-
-class GPT(nn.Module):
- """ the full GPT language model, with a context size of block_size """
- def __init__(self, vocab_size, block_size, n_layer=12, n_head=8, n_embd=256,
- embd_pdrop=0., resid_pdrop=0., attn_pdrop=0., n_unmasked=0):
- super().__init__()
- config = GPTConfig(vocab_size=vocab_size, block_size=block_size,
- embd_pdrop=embd_pdrop, resid_pdrop=resid_pdrop, attn_pdrop=attn_pdrop,
- n_layer=n_layer, n_head=n_head, n_embd=n_embd,
- n_unmasked=n_unmasked)
- # input embedding stem
- self.tok_emb = nn.Embedding(config.vocab_size, config.n_embd)
- self.pos_emb = nn.Parameter(torch.zeros(1, config.block_size, config.n_embd))
- self.drop = nn.Dropout(config.embd_pdrop)
- # transformer
- self.blocks = nn.Sequential(*[Block(config) for _ in range(config.n_layer)])
- # decoder head
- self.ln_f = nn.LayerNorm(config.n_embd)
- self.head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
- self.block_size = config.block_size
- self.apply(self._init_weights)
- self.config = config
- logger.info("number of parameters: %e", sum(p.numel() for p in self.parameters()))
-
- def get_block_size(self):
- return self.block_size
-
- def _init_weights(self, module):
- if isinstance(module, (nn.Linear, nn.Embedding)):
- module.weight.data.normal_(mean=0.0, std=0.02)
- if isinstance(module, nn.Linear) and module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
-
- def forward(self, idx, embeddings=None, targets=None):
- # forward the GPT model
- token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector
-
- if embeddings is not None: # prepend explicit embeddings
- token_embeddings = torch.cat((embeddings, token_embeddings), dim=1)
-
- t = token_embeddings.shape[1]
- assert t <= self.block_size, "Cannot forward, model block size is exhausted."
- position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector
- x = self.drop(token_embeddings + position_embeddings)
- x = self.blocks(x)
- x = self.ln_f(x)
- logits = self.head(x)
-
- # if we are given some desired targets also calculate the loss
- loss = None
- if targets is not None:
- loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
-
- return logits, loss
-
- def forward_with_past(self, idx, embeddings=None, targets=None, past=None, past_length=None):
- # inference only
- assert not self.training
- token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector
- if embeddings is not None: # prepend explicit embeddings
- token_embeddings = torch.cat((embeddings, token_embeddings), dim=1)
-
- if past is not None:
- assert past_length is not None
- past = torch.cat(past, dim=-2) # n_layer, 2, b, nh, len_past, dim_head
- past_shape = list(past.shape)
- expected_shape = [self.config.n_layer, 2, idx.shape[0], self.config.n_head, past_length, self.config.n_embd//self.config.n_head]
- assert past_shape == expected_shape, f"{past_shape} =/= {expected_shape}"
- position_embeddings = self.pos_emb[:, past_length, :] # each position maps to a (learnable) vector
- else:
- position_embeddings = self.pos_emb[:, :token_embeddings.shape[1], :]
-
- x = self.drop(token_embeddings + position_embeddings)
- presents = [] # accumulate over layers
- for i, block in enumerate(self.blocks):
- x, present = block(x, layer_past=past[i, ...] if past is not None else None, return_present=True)
- presents.append(present)
-
- x = self.ln_f(x)
- logits = self.head(x)
- # if we are given some desired targets also calculate the loss
- loss = None
- if targets is not None:
- loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
-
- return logits, loss, torch.stack(presents) # _, _, n_layer, 2, b, nh, 1, dim_head
-
-
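-# Example usage of GPT (illustrative, hypothetical sizes):
-#   gpt = GPT(vocab_size=1024, block_size=256, n_layer=12, n_head=8, n_embd=256)
-#   logits, loss = gpt(idx, targets=targets)  # idx, targets: LongTensor of shape (batch, seq_len)
-
-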
-class DummyGPT(nn.Module):
- # for debugging
- def __init__(self, add_value=1):
- super().__init__()
- self.add_value = add_value
-
- def forward(self, idx):
- return idx + self.add_value, None
-
-
-class CodeGPT(nn.Module):
- """Takes in semi-embeddings"""
- def __init__(self, vocab_size, block_size, in_channels, n_layer=12, n_head=8, n_embd=256,
- embd_pdrop=0., resid_pdrop=0., attn_pdrop=0., n_unmasked=0):
- super().__init__()
- config = GPTConfig(vocab_size=vocab_size, block_size=block_size,
- embd_pdrop=embd_pdrop, resid_pdrop=resid_pdrop, attn_pdrop=attn_pdrop,
- n_layer=n_layer, n_head=n_head, n_embd=n_embd,
- n_unmasked=n_unmasked)
- # input embedding stem
- self.tok_emb = nn.Linear(in_channels, config.n_embd)
- self.pos_emb = nn.Parameter(torch.zeros(1, config.block_size, config.n_embd))
- self.drop = nn.Dropout(config.embd_pdrop)
- # transformer
- self.blocks = nn.Sequential(*[Block(config) for _ in range(config.n_layer)])
- # decoder head
- self.ln_f = nn.LayerNorm(config.n_embd)
- self.head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
- self.block_size = config.block_size
- self.apply(self._init_weights)
- self.config = config
- logger.info("number of parameters: %e", sum(p.numel() for p in self.parameters()))
-
- def get_block_size(self):
- return self.block_size
-
- def _init_weights(self, module):
- if isinstance(module, (nn.Linear, nn.Embedding)):
- module.weight.data.normal_(mean=0.0, std=0.02)
- if isinstance(module, nn.Linear) and module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
-
- def forward(self, idx, embeddings=None, targets=None):
- # forward the GPT model
- token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector
-
- if embeddings is not None: # prepend explicit embeddings
- token_embeddings = torch.cat((embeddings, token_embeddings), dim=1)
-
- t = token_embeddings.shape[1]
- assert t <= self.block_size, "Cannot forward, model block size is exhausted."
- position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector
- x = self.drop(token_embeddings + position_embeddings)
- x = self.blocks(x)
-        x = self.ln_f(x)
- logits = self.head(x)
-
- # if we are given some desired targets also calculate the loss
- loss = None
- if targets is not None:
- loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
-
- return logits, loss
-
-
-
-#### sampling utils
-
-def top_k_logits(logits, k):
- v, ix = torch.topk(logits, k)
- out = logits.clone()
- out[out < v[:, [-1]]] = -float('Inf')
- return out
-
-@torch.no_grad()
-def sample(model, x, steps, temperature=1.0, sample=False, top_k=None):
- """
- take a conditioning sequence of indices in x (of shape (b,t)) and predict the next token in
- the sequence, feeding the predictions back into the model each time. Clearly the sampling
- has quadratic complexity unlike an RNN that is only linear, and has a finite context window
- of block_size, unlike an RNN that has an infinite context window.
- """
- block_size = model.get_block_size()
- model.eval()
- for k in range(steps):
- x_cond = x if x.size(1) <= block_size else x[:, -block_size:] # crop context if needed
- logits, _ = model(x_cond)
- # pluck the logits at the final step and scale by temperature
- logits = logits[:, -1, :] / temperature
- # optionally crop probabilities to only the top k options
- if top_k is not None:
- logits = top_k_logits(logits, top_k)
- # apply softmax to convert to probabilities
- probs = F.softmax(logits, dim=-1)
- # sample from the distribution or take the most likely
- if sample:
- ix = torch.multinomial(probs, num_samples=1)
- else:
- _, ix = torch.topk(probs, k=1, dim=-1)
- # append to the sequence and continue
- x = torch.cat((x, ix), dim=1)
-
- return x
-
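-# Example call (illustrative; assumes a trained `model` and a LongTensor `context` of shape (b, t) with token ids):
-#   out = sample(model, context, steps=256, temperature=1.0, sample=True, top_k=100)
-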
-
-@torch.no_grad()
-def sample_with_past(x, model, steps, temperature=1., sample_logits=True,
- top_k=None, top_p=None, callback=None):
- # x is conditioning
- sample = x
- cond_len = x.shape[1]
- past = None
- for n in range(steps):
- if callback is not None:
- callback(n)
- logits, _, present = model.forward_with_past(x, past=past, past_length=(n+cond_len-1))
- if past is None:
- past = [present]
- else:
- past.append(present)
- logits = logits[:, -1, :] / temperature
- if top_k is not None:
- logits = top_k_top_p_filtering(logits, top_k=top_k, top_p=top_p)
-
- probs = F.softmax(logits, dim=-1)
- if not sample_logits:
- _, x = torch.topk(probs, k=1, dim=-1)
- else:
- x = torch.multinomial(probs, num_samples=1)
- # append to the sequence and continue
- sample = torch.cat((sample, x), dim=1)
- del past
- sample = sample[:, cond_len:] # cut conditioning off
- return sample
-
-
-#### clustering utils
-
-class KMeans(nn.Module):
- def __init__(self, ncluster=512, nc=3, niter=10):
- super().__init__()
- self.ncluster = ncluster
- self.nc = nc
- self.niter = niter
- self.shape = (3,32,32)
- self.register_buffer("C", torch.zeros(self.ncluster,nc))
- self.register_buffer('initialized', torch.tensor(0, dtype=torch.uint8))
-
- def is_initialized(self):
- return self.initialized.item() == 1
-
- @torch.no_grad()
- def initialize(self, x):
- N, D = x.shape
- assert D == self.nc, D
- c = x[torch.randperm(N)[:self.ncluster]] # init clusters at random
- for i in range(self.niter):
- # assign all pixels to the closest codebook element
- a = ((x[:, None, :] - c[None, :, :])**2).sum(-1).argmin(1)
- # move each codebook element to be the mean of the pixels that assigned to it
- c = torch.stack([x[a==k].mean(0) for k in range(self.ncluster)])
- # re-assign any poorly positioned codebook elements
- nanix = torch.any(torch.isnan(c), dim=1)
- ndead = nanix.sum().item()
- print('done step %d/%d, re-initialized %d dead clusters' % (i+1, self.niter, ndead))
- c[nanix] = x[torch.randperm(N)[:ndead]] # re-init dead clusters
-
- self.C.copy_(c)
- self.initialized.fill_(1)
-
-
- def forward(self, x, reverse=False, shape=None):
- if not reverse:
- # flatten
- bs,c,h,w = x.shape
- assert c == self.nc
- x = x.reshape(bs,c,h*w,1)
- C = self.C.permute(1,0)
- C = C.reshape(1,c,1,self.ncluster)
- a = ((x-C)**2).sum(1).argmin(-1) # bs, h*w indices
- return a
- else:
- # flatten
- bs, HW = x.shape
- """
- c = self.C.reshape( 1, self.nc, 1, self.ncluster)
- c = c[bs*[0],:,:,:]
- c = c[:,:,HW*[0],:]
- x = x.reshape(bs, 1, HW, 1)
- x = x[:,3*[0],:,:]
- x = torch.gather(c, dim=3, index=x)
- """
- x = self.C[x]
- x = x.permute(0,2,1)
- shape = shape if shape is not None else self.shape
- x = x.reshape(bs, *shape)
-
- return x
diff --git a/spaces/ICML2022/OFA/fairseq/examples/multilingual/train_multilingual_model.sh b/spaces/ICML2022/OFA/fairseq/examples/multilingual/train_multilingual_model.sh
deleted file mode 100644
index cc050bd3f02de8a2f303737f187442d2eb80e4ef..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/multilingual/train_multilingual_model.sh
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-path_2_data=$1  # directory that contains binarized data for each direction
-lang_list=$2    # file with the list of languages (passed to --lang-dict below)
-lang_pairs=$3   # a list of language pairs to train multilingual models, e.g. "en-fr,en-cs,fr-en,cs-en"
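-
-# Example invocation (illustrative values):
-#   bash train_multilingual_model.sh data-bin/multilingual lang_list.txt "en-fr,fr-en"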
-
-fairseq-train "$path_2_data" \
- --encoder-normalize-before --decoder-normalize-before \
- --arch transformer --layernorm-embedding \
- --task translation_multi_simple_epoch \
- --sampling-method "temperature" \
- --sampling-temperature 1.5 \
- --encoder-langtok "src" \
- --decoder-langtok \
- --lang-dict "$lang_list" \
- --lang-pairs "$lang_pairs" \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.2 \
- --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \
- --lr-scheduler inverse_sqrt --lr 3e-05 --warmup-updates 2500 --max-update 40000 \
- --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \
- --max-tokens 1024 --update-freq 2 \
- --save-interval 1 --save-interval-updates 5000 --keep-interval-updates 10 --no-epoch-checkpoints \
- --seed 222 --log-format simple --log-interval 2
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/__init__.py
deleted file mode 100644
index 7cbe00a10520331709441e5e77991bd2edca8c06..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/data/encoders/__init__.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import importlib
-import os
-
-from fairseq import registry
-
-
-build_tokenizer, register_tokenizer, TOKENIZER_REGISTRY, _ = registry.setup_registry(
- "--tokenizer",
- default=None,
-)
-
-
-build_bpe, register_bpe, BPE_REGISTRY, _ = registry.setup_registry(
- "--bpe",
- default=None,
-)
-
-
-# automatically import any Python files in the encoders/ directory
-for file in sorted(os.listdir(os.path.dirname(__file__))):
- if file.endswith(".py") and not file.startswith("_"):
- module = file[: file.find(".py")]
- importlib.import_module("fairseq.data.encoders." + module)
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py b/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py
deleted file mode 100644
index 35c50ee1521963c5cb6dfb7036ccf43401c6c6ac..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py
+++ /dev/null
@@ -1,87 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-
-
-try:
- from fairseq.model_parallel.megatron.mpu.cross_entropy import (
- vocab_parallel_cross_entropy,
- )
-
- has_megatron_submodule = True
-except (ImportError, ModuleNotFoundError):
- has_megatron_submodule = False
-
-
-@register_criterion("vocab_parallel_cross_entropy")
-class VocabParallelCrossEntropyCriterion(FairseqCriterion):
- def __init__(self, task, sentence_avg):
- super().__init__(task)
- self.sentence_avg = sentence_avg
- if not has_megatron_submodule:
- raise ImportError(
- "\n\nPlease install the megatron submodule:"
- "\n\n git submodule update --init "
- "fairseq/model_parallel/megatron"
- )
-
- def forward(self, model, sample, reduce=True):
- """Compute the loss for the given sample.
-
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
- net_output = model(**sample["net_input"])
- target = sample["target"]
-
- loss = vocab_parallel_cross_entropy(net_output[0].float(), target)
- loss = (loss * (target != self.padding_idx)).sum()
- sample_size = (
- sample["target"].size(0) if self.sentence_avg else sample["ntokens"]
- )
- logging_output = {
- "loss": utils.item(loss.data) if reduce else loss.data,
- "ntokens": sample["ntokens"],
- "nsentences": sample["target"].size(0),
- "sample_size": sample_size,
- }
- return loss, sample_size, logging_output
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- loss_sum = sum(log.get("loss", 0) for log in logging_outputs)
- ntokens = sum(log.get("ntokens", 0) for log in logging_outputs)
- sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
-
- metrics.log_scalar(
- "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
- )
- if sample_size != ntokens:
- metrics.log_scalar(
- "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3
- )
- metrics.log_derived(
- "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg)
- )
- else:
- metrics.log_derived(
- "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg)
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
- to True will improve distributed training speed.
- """
- return True
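A note on `reduce_metrics()` above: the summed NLL is in nats, so dividing by `sample_size` (or `ntokens`) and by `math.log(2)` yields a per-token loss in bits, and perplexity is then derived from that base-2 value. A minimal standalone sketch of the same conversion, using made-up logging outputs in place of real training logs:

import math

# stand-ins for the per-worker logging dicts that get aggregated
logging_outputs = [
    {"loss": 690.0, "ntokens": 100, "sample_size": 100},
    {"loss": 720.0, "ntokens": 110, "sample_size": 110},
]

loss_sum = sum(log["loss"] for log in logging_outputs)      # summed NLL in nats
ntokens = sum(log["ntokens"] for log in logging_outputs)
sample_size = sum(log["sample_size"] for log in logging_outputs)

loss_bits = loss_sum / sample_size / math.log(2)            # per-token loss in bits
ppl = 2 ** (loss_sum / ntokens / math.log(2))               # perplexity from base-2 NLL
print(round(loss_bits, 3), round(ppl, 3))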
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py b/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py
deleted file mode 100644
index eb81ded341257ba0a43c4d0867e8f3c83f276bc7..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py
+++ /dev/null
@@ -1,600 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from collections import namedtuple
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import options, utils
-from fairseq.modules import (
- AdaptiveSoftmax,
- LayerNorm,
- MultiheadAttention,
- PositionalEmbedding,
-)
-
-
-EncoderOut = namedtuple(
- "TransformerEncoderOut",
- [
- "encoder_out", # T x B x C
- "encoder_padding_mask", # B x T
- "encoder_embedding", # B x T x C
- "encoder_states", # List[T x B x C]
- ],
-)
-
-
-class TransformerEncoderEmbedding(nn.Module):
- """ Encoder Embedding + Positional Embedding """
-
- def __init__(self, args, embed_tokens):
- super().__init__()
- self.dropout = args.dropout
- self.max_source_positions = args.max_source_positions
- self.embed_tokens = embed_tokens
- if isinstance(embed_tokens, nn.ModuleList):
- self.padding_idx = embed_tokens[0].padding_idx
- embed_dim = sum(e.embedding_dim for e in embed_tokens)
- else:
- self.padding_idx = embed_tokens.padding_idx
- embed_dim = embed_tokens.embedding_dim
- self.embed_scale = math.sqrt(embed_dim)
- self.embed_positions = (
- PositionalEmbedding(
- args.max_source_positions,
- embed_dim,
- self.padding_idx,
- learned=args.encoder_learned_pos,
- )
- if not args.no_token_positional_embeddings
- else None
- )
- if getattr(args, "layernorm_embedding", False):
- self.layernorm_embedding = LayerNorm(embed_dim)
- else:
- self.layernorm_embedding = None
-
- def forward(self, input):
- # embed tokens and positions
- src_tokens = input[0]
- prev_output_tokens = input[2]
- if isinstance(self.embed_tokens, nn.ModuleList):
- x_embed_list = []
- for embed_tokens_part in self.embed_tokens:
- x_embed_list.append(embed_tokens_part(src_tokens))
-
- embedded = torch.cat(x_embed_list, dim=-1)
- else:
- embedded = self.embed_tokens(src_tokens)
- x = embed = self.embed_scale * embedded
- if self.embed_positions is not None:
- x = embed + self.embed_positions(src_tokens)
- if self.layernorm_embedding:
- x = self.layernorm_embedding(x)
- x = F.dropout(x, p=self.dropout, training=self.training)
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- # compute padding mask
- encoder_padding_mask = src_tokens.eq(self.padding_idx)
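- # e.g. with padding_idx == 1 and src_tokens == [[5, 6, 1, 1]], the mask is
- # [[False, False, True, True]]: True marks padded positions to be ignored.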
- return (x, encoder_padding_mask, prev_output_tokens)
-
-
-class TransformerEncoderLayerNorm(nn.Module):
- """
- Layer norm at the end of all encoder layers if
- args.encoder_normalize_before = True
- """
-
- def __init__(self, args, embed_dim):
- super().__init__()
- if args.encoder_normalize_before:
- self.layer_norm = LayerNorm(embed_dim)
- else:
- self.layer_norm = None
-
- def forward(self, input):
- x = input[0]
- encoder_padding_mask = input[1]
- prev_output_tokens = input[2]
- if self.layer_norm:
- x = self.layer_norm(x)
- # keeping track of the incremental_state is not supported yet
- return (x, encoder_padding_mask, prev_output_tokens)
-
-
-class TransformerDecoderEmbedding(nn.Module):
- """ Decoder Embedding + Positional Embedding """
-
- def __init__(self, args, embed_tokens):
- super().__init__()
- self.dropout = args.dropout
- self.share_input_output_embed = args.share_decoder_input_output_embed
- input_embed_dim = (
- sum(e.embedding_dim for e in embed_tokens)
- if isinstance(embed_tokens, nn.ModuleList)
- else embed_tokens.embedding_dim
- )
- embed_dim = args.decoder_embed_dim
- self.output_embed_dim = args.decoder_output_dim
-
- padding_idx = (
- embed_tokens[0].padding_idx
- if isinstance(embed_tokens, nn.ModuleList)
- else embed_tokens.padding_idx
- )
- self.max_target_positions = args.max_target_positions
-
- self.embed_tokens = embed_tokens
- self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim
-
- self.project_in_dim = (
- Linear(input_embed_dim, embed_dim, bias=False)
- if embed_dim != input_embed_dim
- else None
- )
-
- self.embed_positions = (
- PositionalEmbedding(
- args.max_target_positions,
- embed_dim,
- padding_idx,
- learned=args.decoder_learned_pos,
- )
- if not args.no_token_positional_embeddings
- else None
- )
-
- def forward(self, input):
- mt_task = False
- if isinstance(input, tuple):
- if len(input) == 3:
- encoder_out = input[0]
- encoder_padding_mask = input[1]
- prev_output_tokens = input[2]
- incremental_state = None # Hardcoding to avoid passing of None objects
- mt_task = True
- else:
- # HACK for now, need to fix (TODO sidgoyal)
- prev_output_tokens = input[0]
- # discard "src_lengths"
- encoder_out = None
- encoder_padding_mask = None
- incremental_state = None
-
- else:
- prev_output_tokens = input
- encoder_out = None
- encoder_padding_mask = None
- incremental_state = None
-
- positions = (
- self.embed_positions(
- prev_output_tokens,
- incremental_state=incremental_state,
- )
- if self.embed_positions is not None
- else None
- )
-
- if incremental_state is not None:
- prev_output_tokens = prev_output_tokens[:, -1:]
- if positions is not None:
- positions = positions[:, -1:]
-
- # embed tokens and positions
-
- if isinstance(self.embed_tokens, nn.ModuleList):
- x_embed_list = []
- for embed_tokens_part in self.embed_tokens:
- x_embed_list.append(embed_tokens_part(prev_output_tokens))
-
- x = self.embed_scale * torch.cat(x_embed_list, dim=-1)
- else:
- x = self.embed_scale * self.embed_tokens(prev_output_tokens)
-
- if self.project_in_dim is not None:
- x = self.project_in_dim(x)
-
- if positions is not None:
- x += positions
- x = F.dropout(x, p=self.dropout, training=self.training)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
- if mt_task:
- return (x, encoder_out, encoder_padding_mask)
- return x
-
-
-class TransformerDecoderOutputLayer(nn.Module):
- def __init__(self, args, embed_tokens, dictionary):
- super().__init__()
- self.share_input_output_embed = args.share_decoder_input_output_embed
- self.embed_tokens = embed_tokens
- self.output_embed_dim = args.decoder_output_dim
- embed_dim = args.decoder_embed_dim
-
- self.project_out_dim = (
- Linear(embed_dim, self.output_embed_dim, bias=False)
- if embed_dim != self.output_embed_dim and not args.tie_adaptive_weights
- else None
- )
- self.adaptive_softmax = None
- if args.adaptive_softmax_cutoff is not None:
- assert not isinstance(embed_tokens, nn.ModuleList)
- self.adaptive_softmax = AdaptiveSoftmax(
- len(dictionary),
- self.output_embed_dim,
- options.eval_str_list(args.adaptive_softmax_cutoff, type=int),
- dropout=args.adaptive_softmax_dropout,
- adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None,
- factor=args.adaptive_softmax_factor,
- tie_proj=args.tie_adaptive_proj,
- )
- elif not self.share_input_output_embed:
- self.embed_tokens = nn.Parameter(
- torch.Tensor(len(dictionary), self.output_embed_dim)
- )
- nn.init.normal_(
- self.embed_tokens, mean=0, std=self.output_embed_dim ** -0.5
- )
-
- if args.decoder_normalize_before and not getattr(
- args, "no_decoder_final_norm", False
- ):
- self.layer_norm = LayerNorm(embed_dim)
- else:
- self.layer_norm = None
-
- def forward(self, input, apply_final_proj=True):
- if isinstance(input, tuple):
- x = input[0]
- else:
- x = input
-
- if self.layer_norm:
- x = self.layer_norm(x)
-
- # T x B x C -> B x T x C
- x = x.transpose(0, 1)
-
- if self.project_out_dim is not None:
- x = self.project_out_dim(x)
- if apply_final_proj:
- x = self.output_layer(x)
- return x
-
- def output_layer(self, features, **kwargs):
- """Project features to the vocabulary size."""
- if self.adaptive_softmax is None:
- # project back to size of vocabulary
- if self.share_input_output_embed:
- if isinstance(self.embed_tokens, nn.ModuleList):
- output = None
- for i, emb in enumerate(self.embed_tokens):
- sidx = i * emb.embedding_dim
- eidx = (i + 1) * emb.embedding_dim
- if output is None:
- output = F.linear(features[:, :, sidx:eidx], emb.weight)
- else:
- output += F.linear(features[:, :, sidx:eidx], emb.weight)
-
- return output
- else:
- return F.linear(features, self.embed_tokens.weight)
- else:
- return F.linear(features, self.embed_tokens)
- else:
- return features
-
-
-class TransformerEncoderLayer(nn.Module):
- """Encoder layer block.
- In the original paper each operation (multi-head attention or FFN) is
- postprocessed with: `dropout -> add residual -> layernorm`. In the
- tensor2tensor code they suggest that learning is more robust when
- preprocessing each layer with layernorm and postprocessing with:
- `dropout -> add residual`. We default to the approach in the paper, but the
- tensor2tensor approach can be enabled by setting
- *args.encoder_normalize_before* to ``True``.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- """
-
- def __init__(self, args):
- super().__init__()
- self.embed_dim = args.encoder_embed_dim
- self.self_attn = MultiheadAttention(
- self.embed_dim,
- args.encoder_attention_heads,
- dropout=args.attention_dropout,
- self_attention=True,
- )
- self.self_attn_layer_norm = LayerNorm(self.embed_dim)
- self.dropout = args.dropout
- self.activation_fn = utils.get_activation_fn(
- activation=getattr(args, "activation_fn", "relu")
- )
- self.activation_dropout = getattr(args, "activation_dropout", 0)
- if self.activation_dropout == 0:
- # for backwards compatibility with models that use args.relu_dropout
- self.activation_dropout = getattr(args, "relu_dropout", 0)
- self.normalize_before = args.encoder_normalize_before
- self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim)
- self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim)
- self.final_layer_norm = LayerNorm(self.embed_dim)
-
- def upgrade_state_dict_named(self, state_dict, name):
- """
- Rename layer norm states from `...layer_norms.0.weight` to
- `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to
- `...final_layer_norm.weight`
- """
- layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"}
- for old, new in layer_norm_map.items():
- for m in ("weight", "bias"):
- k = "{}.layer_norms.{}.{}".format(name, old, m)
- if k in state_dict:
- state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k]
- del state_dict[k]
-
- def forward(self, input):
- """
- Args:
- input (Tuple):
- input[0] (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
- input[1] (ByteTensor/FloatTensor): encoder padding mask -
- binary ByteTensor of shape `(batch, src_len)` where padding elements
- are indicated by ``1``.
- input[2] (LongTensor): previous decoder outputs of shape
- `(batch, tgt_len)`, for teacher forcing
- Returns:
- output (Tuple):
- output[0] (Tensor): encoded output of shape `(seq_len, batch, embed_dim)`
- output[1] (ByteTensor/FloatTensor): encoder padding mask
- output[2] (LongTensor): previous decoder outputs
- """
- x = input[0]
- encoder_padding_mask = input[1]
- prev_output_tokens = input[2]
- residual = x
- x = self.maybe_layer_norm(self.self_attn_layer_norm, x, before=True)
- x, _ = self.self_attn(
- query=x, key=x, value=x, key_padding_mask=encoder_padding_mask
- )
- x = F.dropout(x, p=self.dropout, training=self.training)
- x = residual + x
- x = self.maybe_layer_norm(self.self_attn_layer_norm, x, after=True)
-
- residual = x
- x = self.maybe_layer_norm(self.final_layer_norm, x, before=True)
- x = self.activation_fn(self.fc1(x))
- x = F.dropout(x, p=self.activation_dropout, training=self.training)
- x = self.fc2(x)
- x = F.dropout(x, p=self.dropout, training=self.training)
- x = residual + x
- x = self.maybe_layer_norm(self.final_layer_norm, x, after=True)
- return (x, encoder_padding_mask, prev_output_tokens)
-
- def maybe_layer_norm(self, layer_norm, x, before=False, after=False):
- assert before ^ after
- if after ^ self.normalize_before:
- return layer_norm(x)
- else:
- return x
-
-
-class TransformerDecoderLayer(nn.Module):
- """Decoder layer block.
-
- In the original paper each operation (multi-head attention, encoder
- attention or FFN) is postprocessed with: `dropout -> add residual ->
- layernorm`. In the tensor2tensor code they suggest that learning is more
- robust when preprocessing each layer with layernorm and postprocessing with:
- `dropout -> add residual`. We default to the approach in the paper, but the
- tensor2tensor approach can be enabled by setting
- *args.decoder_normalize_before* to ``True``.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- no_encoder_attn (bool, optional): whether to attend to encoder outputs
- (default: False).
- """
-
- def __init__(
- self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False
- ):
- super().__init__()
- self.embed_dim = args.decoder_embed_dim
- self.self_attn = MultiheadAttention(
- embed_dim=self.embed_dim,
- num_heads=args.decoder_attention_heads,
- dropout=args.attention_dropout,
- add_bias_kv=add_bias_kv,
- add_zero_attn=add_zero_attn,
- self_attention=True,
- )
- self.dropout = args.dropout
- self.activation_fn = utils.get_activation_fn(
- activation=getattr(args, "activation_fn", "relu")
- )
- self.activation_dropout = getattr(args, "activation_dropout", 0)
- if self.activation_dropout == 0:
- # for backwards compatibility with models that use args.relu_dropout
- self.activation_dropout = getattr(args, "relu_dropout", 0)
- self.normalize_before = args.decoder_normalize_before
-
- # use layerNorm rather than FusedLayerNorm for exporting.
- # char_inputs can be used to determine this.
- # TODO remove this once we update apex with the fix
- export = getattr(args, "char_inputs", False)
- self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=export)
-
- if no_encoder_attn:
- self.encoder_attn = None
- self.encoder_attn_layer_norm = None
- else:
- self.encoder_attn = MultiheadAttention(
- self.embed_dim,
- args.decoder_attention_heads,
- kdim=getattr(args, "encoder_embed_dim", None),
- vdim=getattr(args, "encoder_embed_dim", None),
- dropout=args.attention_dropout,
- encoder_decoder_attention=True,
- )
- self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=export)
-
- self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim)
- self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim)
-
- self.final_layer_norm = LayerNorm(self.embed_dim, export=export)
- self.need_attn = True
-
- self.onnx_trace = False
-
- def prepare_for_onnx_export_(self):
- self.onnx_trace = True
-
- def forward(self, input):
- """
- Args:
- input (Tuple):
- input[0] (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
- input[1] (Tensor): encoder output of shape `(batch, src_len, embed_dim)`
- input[2] (ByteTensor/FloatTensor): encoder padding mask -
- binary ByteTensor of shape `(batch, src_len)` where padding elements
- are indicated by ``1``.
- Returns:
- output (Tuple):
- output[0] (Tensor): decoded output of shape `(seq_len, batch, embed_dim)`
- output[1] (Tensor): encoder output, passed through for the next layer
- output[2] (ByteTensor/FloatTensor): encoder padding mask
- """
- # Note: incremental state is not yet supported
- mt_task = False
- if isinstance(input, tuple):
- x = input[0]
- encoder_out = input[1]
- encoder_padding_mask = input[2]
- incremental_state = None
- mt_task = True
- else:
- x = input
- encoder_out = None
- encoder_padding_mask = None
- incremental_state = None
-
- if incremental_state is None:
- self_attn_mask = self.buffered_future_mask(x)
- else:
- self_attn_mask = None
-
- # TODO: add back prev_self_attn_state, prev_attn_state,
- # self_attn_padding_mask
- prev_self_attn_state = None
- prev_attn_state = None
- self_attn_padding_mask = None
-
- residual = x
- x = self.maybe_layer_norm(self.self_attn_layer_norm, x, before=True)
- if prev_self_attn_state is not None:
- if incremental_state is None:
- incremental_state = {}
- prev_key, prev_value = prev_self_attn_state
- saved_state = {"prev_key": prev_key, "prev_value": prev_value}
- self.self_attn._set_input_buffer(incremental_state, saved_state)
- x, attn = self.self_attn(
- query=x,
- key=x,
- value=x,
- key_padding_mask=self_attn_padding_mask,
- incremental_state=incremental_state,
- need_weights=False,
- attn_mask=self_attn_mask,
- )
- x = F.dropout(x, p=self.dropout, training=self.training)
- x = residual + x
- x = self.maybe_layer_norm(self.self_attn_layer_norm, x, after=True)
-
- if self.encoder_attn is not None:
- residual = x
- x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True)
- if prev_attn_state is not None:
- if incremental_state is None:
- incremental_state = {}
- prev_key, prev_value = prev_attn_state
- saved_state = {"prev_key": prev_key, "prev_value": prev_value}
- self.encoder_attn._set_input_buffer(incremental_state, saved_state)
- x, attn = self.encoder_attn(
- query=x,
- key=encoder_out,
- value=encoder_out,
- key_padding_mask=encoder_padding_mask,
- incremental_state=incremental_state,
- static_kv=True,
- need_weights=(not self.training and self.need_attn),
- )
- x = F.dropout(x, p=self.dropout, training=self.training)
- x = residual + x
- x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True)
-
- residual = x
- x = self.maybe_layer_norm(self.final_layer_norm, x, before=True)
- x = self.activation_fn(self.fc1(x))
- x = F.dropout(x, p=self.activation_dropout, training=self.training)
- x = self.fc2(x)
- x = F.dropout(x, p=self.dropout, training=self.training)
- x = residual + x
- x = self.maybe_layer_norm(self.final_layer_norm, x, after=True)
-
- if mt_task:
- return (x, encoder_out, encoder_padding_mask)
- return x
-
- def buffered_future_mask(self, tensor):
- dim = tensor.size(0)
- if (
- not hasattr(self, "_future_mask")
- or self._future_mask is None
- or self._future_mask.device != tensor.device
- ):
- self._future_mask = torch.triu(
- utils.fill_with_neg_inf(tensor.new(dim, dim)), 1
- )
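- # e.g. for dim == 3 the mask is
- # [[0., -inf, -inf],
- #  [0.,   0., -inf],
- #  [0.,   0.,   0.]]
- # so position i can only attend to positions <= i.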
- if self._future_mask.size(0) < dim:
- self._future_mask = torch.triu(
- utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1
- )
- return self._future_mask[:dim, :dim]
-
- def maybe_layer_norm(self, layer_norm, x, before=False, after=False):
- assert before ^ after
- if after ^ self.normalize_before:
- return layer_norm(x)
- else:
- return x
-
- def make_generation_fast_(self, need_attn=False, **kwargs):
- self.need_attn = need_attn
-
-
-def Embedding(num_embeddings, embedding_dim, padding_idx):
- m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
- nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5)
- nn.init.constant_(m.weight[padding_idx], 0)
- return m
-
-
-def Linear(in_features, out_features, bias=True):
- m = nn.Linear(in_features, out_features, bias)
- nn.init.xavier_uniform_(m.weight)
- if bias:
- nn.init.constant_(m.bias, 0.0)
- return m
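The `maybe_layer_norm` helpers above toggle between the paper's post-norm order (sublayer -> dropout -> add residual -> layernorm) and the tensor2tensor pre-norm order (layernorm -> sublayer -> dropout -> add residual) via the test `after ^ self.normalize_before`. A minimal, self-contained sketch of that switch (the class and variable names here are illustrative, not taken from the file):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualSublayer(nn.Module):
    """Wrap a sublayer with residual + LayerNorm in pre- or post-norm order."""

    def __init__(self, dim, sublayer, normalize_before=False, dropout=0.1):
        super().__init__()
        self.sublayer = sublayer
        self.layer_norm = nn.LayerNorm(dim)
        self.normalize_before = normalize_before
        self.dropout = dropout

    def maybe_layer_norm(self, x, before=False, after=False):
        assert before ^ after
        # same rule as above: the "before" call normalizes iff normalize_before is True,
        # the "after" call normalizes iff it is False
        if after ^ self.normalize_before:
            return self.layer_norm(x)
        return x

    def forward(self, x):
        residual = x
        x = self.maybe_layer_norm(x, before=True)    # applied only in pre-norm mode
        x = self.sublayer(x)
        x = F.dropout(x, p=self.dropout, training=self.training)
        x = residual + x
        return self.maybe_layer_norm(x, after=True)  # applied only in post-norm mode

# post-norm (paper default) vs pre-norm (tensor2tensor style)
ffn = nn.Linear(8, 8)
post = ResidualSublayer(8, ffn, normalize_before=False)
pre = ResidualSublayer(8, ffn, normalize_before=True)
x = torch.randn(4, 8)
print(post(x).shape, pre(x).shape)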
diff --git a/spaces/INDONESIA-AI/Lobe/app.py b/spaces/INDONESIA-AI/Lobe/app.py
deleted file mode 100644
index d502e2454e1a496a88aa16f0a52b8d86497aa8ed..0000000000000000000000000000000000000000
--- a/spaces/INDONESIA-AI/Lobe/app.py
+++ /dev/null
@@ -1,245 +0,0 @@
-"""
-Stable Diffusion Webui Version 1.6
-https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.6.0
-
-"""
-#commit_id=r"5ef669de080814067961f28357256e8fe27544f4" #Version 1.6.0
-import os
-from sys import executable
-import subprocess
-import pathlib
-import gc
-import time
-import subprocess
-def Gitclone(URI:str,ClonePath:pathlib.Path ) -> int :
- if pathlib.Path.exists(ClonePath):
- return 0
- for z in range(10):
- i=subprocess.run([r"git",r"clone",str(URI),str(ClonePath)])
- if(i.returncode == 0 ):
- del i
- return 0
- else :
- del i
- raise Exception(str.format("clone \'{0}\' failed",URI))
-
-
-def DownLoad(URI:str,DownloadPath:pathlib.Path,DownLoadFileName:str ) -> int:
- if (DownloadPath / DownLoadFileName).is_file(): return 0
- for z in range(10):
- i=subprocess.run([r"aria2c",r"-c",r"-x" ,r"16", r"-s",r"16", r"-k" ,r"1M" ,r"-m",r"0",r"--enable-mmap=false",r"--console-log-level=error",r"-d",str(DownloadPath),r"-o",DownLoadFileName,URI]);
- if(i.returncode == 0 ):
- del i
- gc.collect()
- return 0
- else :
- del i
- raise Exception(str.format("download \'{0}\' failed",URI))
-
-user_home =pathlib.Path.home().resolve()
-os.chdir(str(user_home))
-#clone stable-diffusion-webui repo
-print("cloning stable-diffusion-webui repo")
-Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui.git",user_home / r"stable-diffusion-webui")
-#os.chdir(str(user_home / r"stable-diffusion-webui"))
-#os.system("git reset --hard "+commit_id)
-os.chdir(user_home / r"stable-diffusion-webui")
-Gitclone(r"https://github.com/vorstcavry/ncpt_colab_timer",user_home / r"stable-diffusion-webui" / r"extensions" / r"ncpt_colab_timer")
-Gitclone(r"https://github.com/vorstcavry/static",user_home / r"stable-diffusion-webui" / r"static")
-
-def run_echo_command():
- try:
- start_huggingface
- except NameError:
- start_huggingface = int(time.time()) - 5
-
- cmd = f"echo -n {start_huggingface} > /home/user/app/stable-diffusion-webui/static/colabTimer.txt"
- subprocess.run(cmd, shell=True)
-
- # Example call to run_echo_command:
-run_echo_command()
-os.chdir(user_home / r"stable-diffusion-webui")
-#install extensions
-print("installing extensions")
-Gitclone(r"https://github.com/vorstcavry/embeddings",user_home / r"stable-diffusion-webui" / r"embeddings" / r"negative")
-Gitclone(r"https://github.com/vorstcavry/lora",user_home / r"stable-diffusion-webui" / r"models" / r"Lora" / r"positive")
-Gitclone(r"https://github.com/vorstcavry/Checkpoint-Model",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint")
-
-DownLoad(r"https://huggingface.co/embed/upscale/resolve/main/4x-UltraSharp.pth",user_home / r"stable-diffusion-webui" / r"models" / r"ESRGAN" ,r"4x-UltraSharp.pth")
-while (True):
- i=subprocess.run([r"wget",r"https://raw.githubusercontent.com/camenduru/stable-diffusion-webui-scripts/main/run_n_times.py",r"-O",str(user_home / r"stable-diffusion-webui" / r"scripts" / r"run_n_times.py")])
- if(i.returncode == 0 ):
- del i
- gc.collect()
- break
- else :
- del i
-#Gitclone(r"https://github.com/deforum-art/deforum-for-automatic1111-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"deforum-for-automatic1111-webui" )
-#Gitclone(r"https://github.com/AlUlkesh/stable-diffusion-webui-images-browser",user_home / r"stable-diffusion-webui" / r"extensions"/ r"stable-diffusion-webui-images-browser")
-Gitclone(r"https://github.com/camenduru/stable-diffusion-webui-huggingface",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-huggingface")
-Gitclone(r"https://github.com/BlafKing/sd-civitai-browser-plus",user_home / r"stable-diffusion-webui" / r"extensions" / r"civitai-browser")
-Gitclone(r"https://github.com/kohya-ss/sd-webui-additional-networks",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks")
-Gitclone(r"https://github.com/Mikubill/sd-webui-controlnet",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-controlnet")
-Gitclone(r"https://github.com/fkunn1326/openpose-editor",user_home / r"stable-diffusion-webui" / r"extensions" / r"openpose-editor")
-Gitclone(r"https://tinyurl.com/LOBE-Repo",user_home / r"stable-diffusion-webui" / r"extensions" / r"LOBE")
-#Gitclone(r"https://github.com/jexom/sd-webui-depth-lib",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-depth-lib")
-#Gitclone(r"https://github.com/hnmr293/posex",user_home / r"stable-diffusion-webui" / r"extensions" / r"posex")
-#Gitclone(r"https://github.com/nonnonstop/sd-webui-3d-open-pose-editor",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-3d-open-pose-editor")
-#Gitclone(r"https://github.com/hnmr293/posex",user_home / r"stable-diffusion-webui" / r"extensions" / r"posex")
-Gitclone(r"https://github.com/EdithForsaken/sd-webui-cloud-inference.git",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-cloud-inference")
-Gitclone(r"https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" , user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-tagcomplete")
-Gitclone(r"https://github.com/camenduru/sd-webui-tunnels",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-tunnels")
-Gitclone(r"https://github.com/etherealxx/batchlinks-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"batchlinks-webui")
-Gitclone(r"https://github.com/zanllp/sd-webui-infinite-image-browsing",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-infinite-image-browsing")
-#Gitclone(r"https://github.com/catppuccin/stable-diffusion-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-catppuccin")
-#Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-rembg")
-Gitclone(r"https://tinyurl.com/aspect-ratio-v",user_home / r"stable-diffusion-webui" / r"extensions" / r"aspect-ratio")
-#Gitclone(r"https://github.com/vorstcavry/cleaner",user_home / r"stable-diffusion-webui" / r"extensions" / r"cleaner")
-Gitclone(r"https://github.com/hnmr293/sd-webui-llul",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-llul")
-Gitclone(r"https://github.com/IDEA-Research/DWPose",user_home / r"stable-diffusion-webui" / r"extensions" / r"DWPose")
-Gitclone(r"https://github.com/Bing-su/adetailer",user_home / r"stable-diffusion-webui" / r"extensions" / r"adetailer")
-Gitclone(r"https://github.com/NoCrypt/sd_hf_out",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd_hf_out")
-
-
-#Gitclone(r"https://github.com/NoCrypt/sd_hf_out",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd_hf_out")
-#Gitclone(r"https://github.com/Iyashinouta/sd-model-downloader",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-model-downloader")
-#Gitclone(r"https://github.com/AIrjen/OneButtonPrompt",user_home / r"stable-diffusion-webui" / r"extensions" / r"OneButtonPrompt")
-#Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-wildcards")
-#Gitclone(r"https://github.com/adieyal/sd-dynamic-prompts",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-dynamic-prompts")
-#Gitclone(r"https://github.com/d8ahazard/sd_dreambooth_extension",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd_dreambooth_extension")
-#Gitclone(r"https://github.com/yfszzx/stable-diffusion-webui-inspiration",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-inspiration")
-#Gitclone(r"https://github.com/Coyote-A/ultimate-upscale-for-automatic1111",user_home / r"stable-diffusion-webui" / r"extensions" / r"ultimate-upscale-for-automatic1111")
-os.chdir(user_home / r"stable-diffusion-webui")
-#download ControlNet models
-print("extensions dolwnload done .\ndownloading ControlNet models")
-dList =[ r"https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors",
- r"https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_openpose_fp16.safetensors",
- r"https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_lineart_fp16.safetensors",
- r"https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors",
- r"https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_canny_fp16.safetensors"]
- for url in dList: DownLoad(url, user_home / r"stable-diffusion-webui" / r"models" / r"ControlNet", pathlib.Path(url).name)
-del dList
-
-#download ControlNet models
-#print("extensions dolwnload done .\ndownloading ControlNet models")
-#dList =[r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_scribble_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_seg_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_ip2p_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_shuffle_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_canny_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1p_sd15_depth_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_inpaint_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_lineart_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_mlsd_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_normalbae_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_openpose_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_scribble_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_seg_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_softedge_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15s2_lineart_anime_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1e_sd15_tile_fp16.yaml",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_style_sd14v1.pth",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd14v1.pth",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_seg_sd14v1.pth",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_openpose_sd14v1.pth",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_keypose_sd14v1.pth",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd14v1.pth",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd14v1.pth",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd15v2.pth",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd15v2.pth",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd15v2.pth",
-# r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_zoedepth_sd15v1.pth"]
-#for i in range(0,len(dList)): DownLoad(dList[i],user_home / r"stable-diffusion-webui" / r"extensions" / "sd-webui-controlnet" / r"models",pathlib.Path(dList[i]).name)
-#del dList
-#download model
-#you can change model download address here
-print("ControlNet models download done.\ndownloading model")
-#Stable Diffusion Checkpoint Model
-#anything version4.5
-#DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.5-pruned.ckpt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"anything-v4.5-pruned.ckpt")
-#DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.0.vae.pt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"anything-v4.0.vae.pt")
-#Counterfeit-V3.0
-#DownLoad(r"https://huggingface.co/gsdf/Counterfeit-V3.0/resolve/main/Counterfeit-V3.0_fp16.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"Counterfeit-V3.0_fp16.safetensors")
-#AbyssOrangeMix2 sfw
-#DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_sfw.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"AbyssOrangeMix2_sfw.safetensors")
-#DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"orangemix.vae.pt")
-#MeinaPastelV5
-#DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Baked%20VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"MeinaPastelV5_BakedVAE.safetensors")
-#DownLoad(r"https://huggingface.co/AnonPerson/ChilloutMix/resolve/main/ChilloutMix-ni-fp16.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"ChilloutMix-ni-fp16.safetensors")
-#DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV4%20-%20Without%20VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"MeinaPastelV4%20-%20Without%20VAE.safetensors")
-#DownLoad(r"https://huggingface.co/ckpt/perfect_world/resolve/main/perfectWorld_v2Baked.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"perfectWorld_v2Baked.safetensors")
-#DownLoad(r"https://huggingface.co/vorstcavry/figurestyle1/resolve/main/figure.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"figure.safetensors")
-#DownLoad(r"https://huggingface.co/vorstcavry/dosmix/resolve/main/ddosmix_V2.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"ddosmix_V2.safetensors")
-#DownLoad(r"https://huggingface.co/ckpt/rev-animated/resolve/main/revAnimated_v11.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"revAnimated_v11.safetensors")
-#DownLoad(r"https://huggingface.co/ckpt/MeinaMix/resolve/main/Meina_V8_baked_VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"Meina_V8_baked_VAE.safetensors")
-#DownLoad(r"https://huggingface.co/ckpt/CyberRealistic/resolve/main/cyberrealistic_v13.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"cyberrealistic_v13.safetensors")
-DownLoad(r"https://huggingface.co/vorstcavry/mymodel/resolve/main/Cavry_V2.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"Cavry_V2.safetensors")
-#downloadvae
-DownLoad(r"https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"VAE",r"vae-ft-mse-840000-ema-pruned.safetensors")
-
-#Lora Model
-#Better Light
-#DownLoad(r"https://civitai.com/api/download/models/39885",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"Better_light.safetensors")
-#DownLoad(r"https://civitai.com/api/download/models/39885",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"Better_light.safetensors")
-#LAS
-#DownLoad(r"https://civitai.com/api/download/models/21065",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"LAS.safetensors")
-#DownLoad(r"https://civitai.com/api/download/models/21065",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"LAS.safetensors")
-#Backlighting
-#DownLoad(r"https://civitai.com/api/download/models/39164",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"backlighting.safetensors")
-#DownLoad(r"https://civitai.com/api/download/models/39164",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"backlighting.safetensors")
-DownLoad(r"https://huggingface.co/vorstcavry/loraasia1/resolve/main/japaneseDollLikeness_v15.safetensors",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"japaneseDollLikeness_v15.safetensors")
-DownLoad(r"https://huggingface.co/vorstcavry/loraasia1/resolve/main/koreanDollLikeness_v20.safetensors",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"koreanDollLikeness_v20.safetensors")
-DownLoad(r"https://huggingface.co/vorstcavry/loraasia1/resolve/main/taiwanDollLikeness_v15.safetensors",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"taiwanDollLikeness_v15.safetensors")
-
-
-
-
-#GFPGAN Model
-#detection Resnet50
-DownLoad(r"https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"detection_Resnet50_Final.pth")
-#parsing_parsenet
-DownLoad(r"https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"parsing_parsenet.pth")
-#GFPGANv1.4
-DownLoad(r"https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"GFPGANv1.4.pth")
- #start Stable Diffusion Webui
-print("Done\nStarting Webui...")
-os.chdir(user_home / r"stable-diffusion-webui")
-import subprocess
-import pathlib
-
-user_home = pathlib.Path("/home/user") # Gantilah dengan path yang sesuai
-
-args = [
- executable,
- user_home / "stable-diffusion-webui" / "launch.py",
- "--precision", "full",
- "--no-half",
- "--no-half-vae",
- "--enable-insecure-extension-access",
- "--medvram",
- "--skip-torch-cuda-test",
- "--enable-console-prompts",
- "--ui-settings-file=" + str(pathlib.Path(__file__).parent / "config.json"),
- "--hf-token-out",
- "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" # Gantilah dengan token yang sesuai
-]
-
-args = [arg.as_posix() if isinstance(arg, pathlib.PosixPath) else arg for arg in args]
-
-try:
- ret = subprocess.run(args)
-except Exception as e:
- print("Error:", e)
- del os, user_home, executable, subprocess
\ No newline at end of file
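Both `Gitclone` and `DownLoad` in the app above retry an external command in a loop, returning on the first zero exit code and raising otherwise. A generic sketch of that retry pattern (the helper name and the example command are illustrative only):

import subprocess

def run_with_retries(cmd, attempts=10):
    """Run `cmd` until it exits with code 0, retrying up to `attempts` times."""
    for attempt in range(1, attempts + 1):
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return 0
        print(f"attempt {attempt}/{attempts} failed (exit code {result.returncode})")
    raise RuntimeError(f"command failed after {attempts} attempts: {cmd}")

# usage, mirroring the git clone calls above (left commented out to avoid side effects):
# run_with_retries(["git", "clone",
#                   "https://github.com/AUTOMATIC1111/stable-diffusion-webui.git",
#                   "stable-diffusion-webui"])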
diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/eval/webpage/styles.css b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/eval/webpage/styles.css
deleted file mode 100644
index 5c5a08dd92dc799298d0d70dec8fefe9482be0ff..0000000000000000000000000000000000000000
--- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/eval/webpage/styles.css
+++ /dev/null
@@ -1,105 +0,0 @@
-body {
- font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
- background: linear-gradient(135deg, #f5f7fa 0%, #4285f4 100%);
-}
-
-.navbar-dark .navbar-nav .nav-link {
- color: #f1cf68;
- font-size: 1.1rem;
- padding: 0.5rem 0.6rem;
-}
-
-.card-header {
- font-weight: bold;
-}
-
-.card {
- box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
- transition: 0.3s;
-}
-
-.card:hover {
- box-shadow: 0 8px 16px rgba(0, 0, 0, 0.2);
-}
-
-button {
- transition: background-color 0.3s;
-}
-
-button:hover {
- background-color: #007bff;
-}
-
-@media (max-width: 767px) {
- .form-row .form-group {
- margin-bottom: 10px;
- }
-}
-
-/* Extra styles */
-
-.expandable-card .card-text-container {
- max-height: 200px;
- overflow-y: hidden;
- position: relative;
-}
-
-.expandable-card.expanded .card-text-container {
- max-height: none;
-}
-
-.expand-btn {
- position: relative;
- display: none;
- background-color: rgba(255, 255, 255, 0.8);
- color: #510c75;
- border-color: transparent;
-}
-
-.expand-btn:hover {
- background-color: rgba(200, 200, 200, 0.8);
- text-decoration: none;
- border-color: transparent;
- color: #510c75;
-}
-
-.expand-btn:focus {
- outline: none;
- text-decoration: none;
-}
-
-.expandable-card:not(.expanded) .card-text-container:after {
- content: "";
- position: absolute;
- bottom: 0;
- left: 0;
- width: 100%;
- height: 90px;
- background: linear-gradient(rgba(255, 255, 255, 0.2), rgba(255, 255, 255, 1));
-}
-
-.expandable-card:not(.expanded) .expand-btn {
- margin-top: -40px;
-}
-
-.card-body {
- padding-bottom: 5px;
-}
-
-.vertical-flex-layout {
- justify-content: center;
- align-items: center;
- height: 100%;
- display: flex;
- flex-direction: column;
- gap: 5px;
-}
-
-.figure-img {
- max-width: 100%;
- height: auto;
-}
-
-.adjustable-font-size {
- font-size: calc(0.5rem + 2vw);
-}
\ No newline at end of file
diff --git a/spaces/Iqbalzz/hololive-rvc-models/infer_pack/models_onnx_moess.py b/spaces/Iqbalzz/hololive-rvc-models/infer_pack/models_onnx_moess.py
deleted file mode 100644
index 12efb0629a2e3d0d746a34f467254536c2bdbe5f..0000000000000000000000000000000000000000
--- a/spaces/Iqbalzz/hololive-rvc-models/infer_pack/models_onnx_moess.py
+++ /dev/null
@@ -1,849 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of sine waveform (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_threshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1 # the % 1 means the n_har products cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1) # % 1 # applying % 1 here would leave the later cumsum unable to be optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsidM(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
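-        # rnd is caller-supplied noise, so z_p is a reparameterized sample
-        # from the prior N(m_p, exp(logs_p)^2), masked to the valid frames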
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
-    Simplified synthesizer: runs the flow in reverse for inference (no posterior encoder)
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
-    ):  # y (the spectrogram) is no longer needed here
-        g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis and broadcasts
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
diff --git a/spaces/IsaacK/streamlit-test/pages/utils.py b/spaces/IsaacK/streamlit-test/pages/utils.py
deleted file mode 100644
index bd3babe6f835cab4ff16bf969b7513a66e836919..0000000000000000000000000000000000000000
--- a/spaces/IsaacK/streamlit-test/pages/utils.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import re
-import random
-import datetime
-import os.path
-import string
-import sqlite3
-
-def empty(string):
- return len(string) == 0
-
-def add_blanks(word, sentence, blank = "__"):
- return re.sub(word, blank, sentence, flags=re.IGNORECASE)
-
-def chunker(seq, size):
- return (seq[pos:pos + size] for pos in range(0, len(seq), size))
-
-def random_session_id():
- alphabet = string.ascii_lowercase + string.digits
- return ''.join(random.choices(alphabet, k=12))
-
-def check_answer(item, answer):
- return item == answer
-
-def clean_string(string):
-    return re.sub(r'[^0-9a-zA-Z\s,]+', '', string)
-
-def split_string(string, split_on = ","):
- return [x.strip().upper() for x in string.split(split_on)]
-
-def make_subquery(terms, column = 'tags', operator = 'AND'):
- return f' {operator} '.join([f"{column} LIKE '%{x}%'" for x in terms if len(x) > 0])
-
-def make_query(subquery, limit = 10):
- return f"""SELECT * FROM vocab WHERE {subquery} ORDER BY RANDOM() LIMIT {str(limit)}"""
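-# Illustrative example (hypothetical tag values):
-#   make_query(make_subquery(["NOUN", "B1"])) ->
-#   SELECT * FROM vocab WHERE tags LIKE '%NOUN%' AND tags LIKE '%B1%' ORDER BY RANDOM() LIMIT 10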
-
-def db_path(database):
- dir = os.path.dirname(os.path.abspath(__file__))
- return os.path.join(dir, database)
-
-def db_connect(database):
- conn = sqlite3.connect(database)
- c = conn.cursor()
- return c, conn
-
-def chk_conn(conn):
- try:
- conn.cursor()
- return True
- except Exception as ex:
- return False
-
-def get_tables(cursor):
- cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
- return cursor.fetchall()
\ No newline at end of file
diff --git a/spaces/Jack1804/stabilityai-stable-diffusion-xl-refiner-1.0/app.py b/spaces/Jack1804/stabilityai-stable-diffusion-xl-refiner-1.0/app.py
deleted file mode 100644
index f0854c17140255ac783638680bc5dce595cc9fd0..0000000000000000000000000000000000000000
--- a/spaces/Jack1804/stabilityai-stable-diffusion-xl-refiner-1.0/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/stabilityai/stable-diffusion-xl-refiner-1.0").launch()
\ No newline at end of file
diff --git a/spaces/Jackflack09/diffuse-custom/Waifu2x/Dataloader.py b/spaces/Jackflack09/diffuse-custom/Waifu2x/Dataloader.py
deleted file mode 100644
index 8e38548e9c3feab16951e3078041658ea2f32487..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/Waifu2x/Dataloader.py
+++ /dev/null
@@ -1,215 +0,0 @@
-import glob
-import io
-import numpy as np
-import re
-import os
-import random
-from io import BytesIO
-from uuid import uuid4
-import sqlite3
-import h5py
-import torch
-from PIL import Image
-from torch.utils.data import Dataset
-from torchvision.transforms import RandomCrop
-from torchvision.transforms.functional import to_tensor
-
-
-class ImageH5Data(Dataset):
- def __init__(self, h5py_file, folder_name):
- self.data = h5py.File(h5py_file, 'r')[folder_name]
- self.data_hr = self.data['train_hr']
- self.data_lr = self.data['train_lr']
- self.len_imgs = len(self.data_hr)
- self.h5py_file = h5py_file
- self.folder_name = folder_name
-
- def __len__(self):
- # with h5py.File(self.h5py_file, 'r') as f:
- # return len(f[self.folder_name]['train_lr'])
- return self.len_imgs
-
- def __getitem__(self, index):
- # with h5py.File(self.h5py_file, 'r') as f:
- # data_lr = f[self.folder_name]['train_lr'][index]
- # data_hr = f[self.folder_name]['train_lr'][index]
- #
- # return data_lr, data_hr
- return self.data_lr[index], self.data_hr[index]
-
-
-class ImageData(Dataset):
- def __init__(self,
- img_folder,
- patch_size=96,
- shrink_size=2,
- noise_level=1,
- down_sample_method=None,
- color_mod='RGB',
- dummy_len=None):
-
- self.img_folder = img_folder
- all_img = glob.glob(self.img_folder + "/**", recursive=True)
-        self.img = [x for x in all_img if x.endswith(('png', 'jpg', 'jpeg'))]
- self.total_img = len(self.img)
- self.dummy_len = dummy_len if dummy_len is not None else self.total_img
- self.random_cropper = RandomCrop(size=patch_size)
- self.color_mod = color_mod
- self.img_augmenter = ImageAugment(shrink_size, noise_level, down_sample_method)
-
- def get_img_patches(self, img_file):
- img_pil = Image.open(img_file).convert("RGB")
- img_patch = self.random_cropper(img_pil)
- lr_hr_patches = self.img_augmenter.process(img_patch)
- return lr_hr_patches
-
- def __len__(self):
- return self.dummy_len # len(self.img)
-
- def __getitem__(self, index):
- idx = random.choice(range(0, self.total_img))
- img = self.img[idx]
- patch = self.get_img_patches(img)
- if self.color_mod == 'RGB':
- lr_img = patch[0].convert("RGB")
- hr_img = patch[1].convert("RGB")
- elif self.color_mod == 'YCbCr':
- lr_img, _, _ = patch[0].convert('YCbCr').split()
- hr_img, _, _ = patch[1].convert('YCbCr').split()
- else:
- raise KeyError('Either RGB or YCbCr')
- return to_tensor(lr_img), to_tensor(hr_img)
-
-
-class Image2Sqlite(ImageData):
- def __getitem__(self, item):
- img = self.img[item]
- lr_hr_patch = self.get_img_patches(img)
- if self.color_mod == 'RGB':
- lr_img = lr_hr_patch[0].convert("RGB")
- hr_img = lr_hr_patch[1].convert("RGB")
- elif self.color_mod == 'YCbCr':
- lr_img, _, _ = lr_hr_patch[0].convert('YCbCr').split()
- hr_img, _, _ = lr_hr_patch[1].convert('YCbCr').split()
- else:
- raise KeyError('Either RGB or YCbCr')
- lr_byte = self.convert_to_bytevalue(lr_img)
- hr_byte = self.convert_to_bytevalue(hr_img)
- return [lr_byte, hr_byte]
-
- @staticmethod
- def convert_to_bytevalue(pil_img):
- img_byte = io.BytesIO()
- pil_img.save(img_byte, format='png')
- return img_byte.getvalue()
-
-
-class ImageDBData(Dataset):
- def __init__(self, db_file, db_table="images", lr_col="lr_img", hr_col="hr_img", max_images=None):
- self.db_file = db_file
- self.db_table = db_table
- self.lr_col = lr_col
- self.hr_col = hr_col
- self.total_images = self.get_num_rows(max_images)
- # self.lr_hr_images = self.get_all_images()
-
- def __len__(self):
- return self.total_images
-
- # def get_all_images(self):
- # with sqlite3.connect(self.db_file) as conn:
- # cursor = conn.cursor()
- # cursor.execute(f"SELECT * FROM {self.db_table} LIMIT {self.total_images}")
- # return cursor.fetchall()
-
- def get_num_rows(self, max_images):
- with sqlite3.connect(self.db_file) as conn:
- cursor = conn.cursor()
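-            # MAX(ROWID) matches the number of rows as long as none have been deleted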
- cursor.execute(f"SELECT MAX(ROWID) FROM {self.db_table}")
- db_rows = cursor.fetchone()[0]
- if max_images:
- return min(max_images, db_rows)
- else:
- return db_rows
-
- def __getitem__(self, item):
- # lr, hr = self.lr_hr_images[item]
- # lr = Image.open(io.BytesIO(lr))
- # hr = Image.open(io.BytesIO(hr))
- # return to_tensor(lr), to_tensor(hr)
- # note sqlite rowid starts with 1
- with sqlite3.connect(self.db_file) as conn:
- cursor = conn.cursor()
- cursor.execute(f"SELECT {self.lr_col}, {self.hr_col} FROM {self.db_table} WHERE ROWID={item + 1}")
- lr, hr = cursor.fetchone()
- lr = Image.open(io.BytesIO(lr)).convert("RGB")
- hr = Image.open(io.BytesIO(hr)).convert("RGB")
- # lr = np.array(lr) # use scale [0, 255] instead of [0,1]
- # hr = np.array(hr)
- return to_tensor(lr), to_tensor(hr)
-
-
-class ImagePatchData(Dataset):
- def __init__(self, lr_folder, hr_folder):
- self.lr_folder = lr_folder
- self.hr_folder = hr_folder
- self.lr_imgs = glob.glob(os.path.join(lr_folder, "**"))
- self.total_imgs = len(self.lr_imgs)
-
- def __len__(self):
- return self.total_imgs
-
- def __getitem__(self, item):
- lr_file = self.lr_imgs[item]
- hr_path = re.sub("lr", 'hr', os.path.dirname(lr_file))
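-        # assumes the HR folder mirrors the LR folder layout, with every "lr"
-        # in the directory path replaced by "hr" (the substitution hits all matches)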
- filename = os.path.basename(lr_file)
- hr_file = os.path.join(hr_path, filename)
- return to_tensor(Image.open(lr_file)), to_tensor(Image.open(hr_file))
-
-
-class ImageAugment:
- def __init__(self,
- shrink_size=2,
- noise_level=1,
- down_sample_method=None
- ):
-        # noise_level (int): 0: no noise; 1: 75-95% quality; 2: 50-75% quality
- if noise_level == 0:
- self.noise_level = [0, 0]
- elif noise_level == 1:
- self.noise_level = [5, 25]
- elif noise_level == 2:
- self.noise_level = [25, 50]
- else:
- raise KeyError("Noise level should be either 0, 1, 2")
- self.shrink_size = shrink_size
- self.down_sample_method = down_sample_method
-
- def shrink_img(self, hr_img):
-
- if self.down_sample_method is None:
- resample_method = random.choice([Image.BILINEAR, Image.BICUBIC, Image.LANCZOS])
- else:
- resample_method = self.down_sample_method
- img_w, img_h = tuple(map(lambda x: int(x / self.shrink_size), hr_img.size))
- lr_img = hr_img.resize((img_w, img_h), resample_method)
- return lr_img
-
- def add_jpeg_noise(self, hr_img):
- quality = 100 - round(random.uniform(*self.noise_level))
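-        # e.g. noise_level=1 -> uniform(5, 25) -> JPEG quality roughly in 75-95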
- lr_img = BytesIO()
- hr_img.save(lr_img, format='JPEG', quality=quality)
- lr_img.seek(0)
- lr_img = Image.open(lr_img)
- return lr_img
-
- def process(self, hr_patch_pil):
- lr_patch_pil = self.shrink_img(hr_patch_pil)
- if self.noise_level[1] > 0:
- lr_patch_pil = self.add_jpeg_noise(lr_patch_pil)
-
- return lr_patch_pil, hr_patch_pil
-
- def up_sample(self, img, resample):
- width, height = img.size
- return img.resize((self.shrink_size * width, self.shrink_size * height), resample=resample)
diff --git a/spaces/Jaehan/Text-Generation-3/README.md b/spaces/Jaehan/Text-Generation-3/README.md
deleted file mode 100644
index c78b153f151e9a4f6303961cbc468d8d814330de..0000000000000000000000000000000000000000
--- a/spaces/Jaehan/Text-Generation-3/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Text Generation 3
-emoji: 📚
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/detection/yolov5face/__init__.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/detection/yolov5face/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Jo0xFF/4xArText/utils/architecture/SRVGG.py b/spaces/Jo0xFF/4xArText/utils/architecture/SRVGG.py
deleted file mode 100644
index 556be991309ec4c541d2aea1b8d74fdf00b80040..0000000000000000000000000000000000000000
--- a/spaces/Jo0xFF/4xArText/utils/architecture/SRVGG.py
+++ /dev/null
@@ -1,109 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-import math
-from collections import OrderedDict
-from typing import Union
-
-import torch.nn as nn
-import torch.nn.functional as F
-from torch import Tensor
-
-
-class SRVGGNetCompact(nn.Module):
- """A compact VGG-style network structure for super-resolution.
- It is a compact network structure, which performs upsampling in the last layer and no convolution is
- conducted on the HR feature space.
- Args:
- num_in_ch (int): Channel number of inputs. Default: 3.
- num_out_ch (int): Channel number of outputs. Default: 3.
- num_feat (int): Channel number of intermediate features. Default: 64.
- num_conv (int): Number of convolution layers in the body network. Default: 16.
- upscale (int): Upsampling factor. Default: 4.
- act_type (str): Activation type, options: 'relu', 'prelu', 'leakyrelu'. Default: prelu.
- """
-
- def __init__(
- self,
- state_dict,
- act_type: str = "prelu",
- ):
- super(SRVGGNetCompact, self).__init__()
- self.act_type = act_type
-
- self.state = state_dict
-
- if "params" in self.state:
- self.state = self.state["params"]
-
- self.key_arr = list(self.state.keys())
-
- self.num_in_ch = self.get_in_nc()
- self.num_feat = self.get_num_feats()
- self.num_conv = self.get_num_conv()
- self.num_out_ch = self.num_in_ch # :(
- self.scale = self.get_scale()
-
- self.body = nn.ModuleList()
- # the first conv
- self.body.append(nn.Conv2d(self.num_in_ch, self.num_feat, 3, 1, 1))
- # the first activation
- if act_type == "relu":
- activation = nn.ReLU(inplace=True)
- elif act_type == "prelu":
- activation = nn.PReLU(num_parameters=self.num_feat)
- elif act_type == "leakyrelu":
- activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
- self.body.append(activation)
-
- # the body structure
- for _ in range(self.num_conv):
- self.body.append(nn.Conv2d(self.num_feat, self.num_feat, 3, 1, 1))
- # activation
- if act_type == "relu":
- activation = nn.ReLU(inplace=True)
- elif act_type == "prelu":
- activation = nn.PReLU(num_parameters=self.num_feat)
- elif act_type == "leakyrelu":
- activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
- self.body.append(activation)
-
- # the last conv
- self.body.append(nn.Conv2d(self.num_feat, self.pixelshuffle_shape, 3, 1, 1))
- # upsample
- self.upsampler = nn.PixelShuffle(self.scale)
-
- self.load_state_dict(self.state, strict=False)
-
- def get_num_conv(self) -> int:
- return (int(self.key_arr[-1].split(".")[1]) - 2) // 2
-
- def get_num_feats(self) -> int:
- return self.state[self.key_arr[0]].shape[0]
-
- def get_in_nc(self) -> int:
- return self.state[self.key_arr[0]].shape[1]
-
- def get_scale(self) -> int:
- self.pixelshuffle_shape = self.state[self.key_arr[-1]].shape[0]
- # Assume out_nc is the same as in_nc
-        # I can't think of a better way to do that
- self.num_out_ch = self.num_in_ch
- scale = math.sqrt(self.pixelshuffle_shape / self.num_out_ch)
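-        # PixelShuffle needs out_nc * scale**2 channels in the last conv,
-        # e.g. a 4x RGB model stores 3 * 4**2 = 48 channels -> sqrt(48 / 3) = 4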
- if scale - int(scale) > 0:
- print(
- "out_nc is probably different than in_nc, scale calculation might be wrong"
- )
- scale = int(scale)
- return scale
-
- def forward(self, x):
- out = x
- for i in range(0, len(self.body)):
- out = self.body[i](out)
-
- out = self.upsampler(out)
- # add the nearest upsampled image, so that the network learns the residual
- base = F.interpolate(x, scale_factor=self.scale, mode="nearest")
- out += base
- return out
diff --git a/spaces/Joabutt/test/style.css b/spaces/Joabutt/test/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/Joabutt/test/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/models/phase1.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/models/phase1.py
deleted file mode 100644
index b30aea6350609071c915c2f051fbe5419253e656..0000000000000000000000000000000000000000
--- a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/models/phase1.py
+++ /dev/null
@@ -1,65 +0,0 @@
-import torch
-import numpy as np
-from salad.models.base_model import BaseModel
-from salad.utils import nputil, thutil
-from salad.utils.spaghetti_util import clip_eigenvalues, project_eigenvectors
-
-class Phase1Model(BaseModel):
- def __init__(self, network, variance_schedule, **kwargs):
- super().__init__(network, variance_schedule, **kwargs)
-
- @torch.no_grad()
- def sample(
- self,
- batch_size=0,
- return_traj=False,
- ):
- x_T = torch.randn([batch_size, 16, 16]).to(self.device)
-
- traj = {self.var_sched.num_steps: x_T}
- for t in range(self.var_sched.num_steps, 0, -1):
- z = torch.randn_like(x_T) if t > 1 else torch.zeros_like(x_T)
- alpha = self.var_sched.alphas[t]
- alpha_bar = self.var_sched.alpha_bars[t]
- sigma = self.var_sched.get_sigmas(t, flexibility=0)
-
- c0 = 1.0 / torch.sqrt(alpha)
- c1 = (1 - alpha) / torch.sqrt(1 - alpha_bar)
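-            # standard DDPM reverse step:
-            #   x_{t-1} = (x_t - c1 * eps_theta) / sqrt(alpha_t) + sigma_t * z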
-
- x_t = traj[t]
-
- beta = self.var_sched.betas[[t] * batch_size]
- e_theta = self.net(x_t, beta=beta)
- # print(e_theta.norm(-1).mean())
-
- x_next = c0 * (x_t - c1 * e_theta) + sigma * z
- traj[t - 1] = x_next.detach()
-
- traj[t] = traj[t].cpu()
-
- if not return_traj:
- del traj[t]
- if return_traj:
- return traj
- else:
- return traj[0]
-
- def sampling_gaussians(self, num_shapes):
- """
- Return:
- ldm_gaus: np.ndarray
- gt_gaus: np.ndarray
- """
- ldm_gaus = self.sample(num_shapes)
-
- if self.hparams.get("global_normalization"):
- if not hasattr(self, "data_val"):
- self._build_dataset("val")
- if self.hparams.get("global_normalization") == "partial":
- ldm_gaus = self.data_val.unnormalize_global_static(ldm_gaus, slice(12,None))
- elif self.hparams.get("global_normalization") == "all":
- ldm_gaus = self.data_val.unnormalize_global_static(ldm_gaus, slice(None))
-
- ldm_gaus = clip_eigenvalues(ldm_gaus)
- ldm_gaus = project_eigenvectors(ldm_gaus)
- return ldm_gaus
diff --git a/spaces/KarmKarma/genshinimpact-rvc-models-v2/lib/infer_pack/models.py b/spaces/KarmKarma/genshinimpact-rvc-models-v2/lib/infer_pack/models.py
deleted file mode 100644
index 44c08d361bcb13b84b38dc29beff5cdaddad4ea2..0000000000000000000000000000000000000000
--- a/spaces/KarmKarma/genshinimpact-rvc-models-v2/lib/infer_pack/models.py
+++ /dev/null
@@ -1,1124 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from lib.infer_pack import modules
-from lib.infer_pack import attentions
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from lib.infer_pack.commons import init_weights
-import numpy as np
-from lib.infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
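-        # reparameterization: sample z ~ N(m, exp(logs)^2), masked to the valid frames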
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # % 1 means the n_har products can no longer be optimized away afterwards
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying % 1 here would prevent the later cumsum from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
-            ).transpose(2, 1)
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
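-            # cumsum of the per-sample phase increments (plus the wrap corrections
-            # above) gives the instantaneous phase; the sine is sin(2*pi*phase)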
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id, shape [bs, 1]
-        # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis and broadcasts
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
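-        # sample z_p from the prior with a 0.66666 noise scale (temperature),
-        # then invert the flow and decode with the NSF generator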
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id, shape [bs, 1]
-        # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis and broadcasts
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis and broadcasts
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis and broadcasts
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/ssd_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/ssd_head.py
deleted file mode 100644
index c3b46fa3d8942ff1eb41e067b8e9b361542b6362..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/ssd_head.py
+++ /dev/null
@@ -1,362 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Dict, List, Optional, Sequence, Tuple
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule, DepthwiseSeparableConvModule
-from torch import Tensor
-
-from mmdet.registry import MODELS, TASK_UTILS
-from mmdet.utils import ConfigType, InstanceList, MultiConfig, OptInstanceList
-from ..losses import smooth_l1_loss
-from ..task_modules.samplers import PseudoSampler
-from ..utils import multi_apply
-from .anchor_head import AnchorHead
-
-
-# TODO: add loss evaluator for SSD
-@MODELS.register_module()
-class SSDHead(AnchorHead):
-    """Implementation of `SSD head <https://arxiv.org/abs/1512.02325>`_
-
- Args:
- num_classes (int): Number of categories excluding the background
- category.
- in_channels (Sequence[int]): Number of channels in the input feature
- map.
- stacked_convs (int): Number of conv layers in cls and reg tower.
- Defaults to 0.
- feat_channels (int): Number of hidden channels when stacked_convs
- > 0. Defaults to 256.
- use_depthwise (bool): Whether to use DepthwiseSeparableConv.
- Defaults to False.
- conv_cfg (:obj:`ConfigDict` or dict, Optional): Dictionary to construct
- and config conv layer. Defaults to None.
- norm_cfg (:obj:`ConfigDict` or dict, Optional): Dictionary to construct
- and config norm layer. Defaults to None.
- act_cfg (:obj:`ConfigDict` or dict, Optional): Dictionary to construct
- and config activation layer. Defaults to None.
- anchor_generator (:obj:`ConfigDict` or dict): Config dict for anchor
- generator.
- bbox_coder (:obj:`ConfigDict` or dict): Config of bounding box coder.
- reg_decoded_bbox (bool): If true, the regression loss would be
- applied directly on decoded bounding boxes, converting both
- the predicted boxes and regression targets to absolute
- coordinates format. Defaults to False. It should be `True` when
- using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head.
- train_cfg (:obj:`ConfigDict` or dict, Optional): Training config of
- anchor head.
- test_cfg (:obj:`ConfigDict` or dict, Optional): Testing config of
- anchor head.
- init_cfg (:obj:`ConfigDict` or dict or list[:obj:`ConfigDict` or \
- dict], Optional): Initialization config dict.
- """ # noqa: W605
-
- def __init__(
- self,
- num_classes: int = 80,
- in_channels: Sequence[int] = (512, 1024, 512, 256, 256, 256),
- stacked_convs: int = 0,
- feat_channels: int = 256,
- use_depthwise: bool = False,
- conv_cfg: Optional[ConfigType] = None,
- norm_cfg: Optional[ConfigType] = None,
- act_cfg: Optional[ConfigType] = None,
- anchor_generator: ConfigType = dict(
- type='SSDAnchorGenerator',
- scale_major=False,
- input_size=300,
- strides=[8, 16, 32, 64, 100, 300],
- ratios=([2], [2, 3], [2, 3], [2, 3], [2], [2]),
- basesize_ratio_range=(0.1, 0.9)),
- bbox_coder: ConfigType = dict(
- type='DeltaXYWHBBoxCoder',
- clip_border=True,
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0],
- ),
- reg_decoded_bbox: bool = False,
- train_cfg: Optional[ConfigType] = None,
- test_cfg: Optional[ConfigType] = None,
- init_cfg: MultiConfig = dict(
- type='Xavier', layer='Conv2d', distribution='uniform', bias=0)
- ) -> None:
- super(AnchorHead, self).__init__(init_cfg=init_cfg)
- self.num_classes = num_classes
- self.in_channels = in_channels
- self.stacked_convs = stacked_convs
- self.feat_channels = feat_channels
- self.use_depthwise = use_depthwise
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.act_cfg = act_cfg
-
- self.cls_out_channels = num_classes + 1 # add background class
- self.prior_generator = TASK_UTILS.build(anchor_generator)
-
- # Usually the number of anchors is the same for each level,
- # except in SSD detectors. So it is an int in most dense
- # heads but a list of ints in SSDHead.
- self.num_base_priors = self.prior_generator.num_base_priors
-
- self._init_layers()
-
- self.bbox_coder = TASK_UTILS.build(bbox_coder)
- self.reg_decoded_bbox = reg_decoded_bbox
- self.use_sigmoid_cls = False
- self.cls_focal_loss = False
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
- if self.train_cfg:
- self.assigner = TASK_UTILS.build(self.train_cfg['assigner'])
- if self.train_cfg.get('sampler', None) is not None:
- self.sampler = TASK_UTILS.build(
- self.train_cfg['sampler'], default_args=dict(context=self))
- else:
- self.sampler = PseudoSampler(context=self)
-
- def _init_layers(self) -> None:
- """Initialize layers of the head."""
- self.cls_convs = nn.ModuleList()
- self.reg_convs = nn.ModuleList()
- # TODO: Use registry to choose ConvModule type
- conv = DepthwiseSeparableConvModule \
- if self.use_depthwise else ConvModule
-
- for channel, num_base_priors in zip(self.in_channels,
- self.num_base_priors):
- cls_layers = []
- reg_layers = []
- in_channel = channel
- # build stacked conv tower, not used in default ssd
- for i in range(self.stacked_convs):
- cls_layers.append(
- conv(
- in_channel,
- self.feat_channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- reg_layers.append(
- conv(
- in_channel,
- self.feat_channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- in_channel = self.feat_channels
- # SSD-Lite head
- if self.use_depthwise:
- cls_layers.append(
- ConvModule(
- in_channel,
- in_channel,
- 3,
- padding=1,
- groups=in_channel,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- reg_layers.append(
- ConvModule(
- in_channel,
- in_channel,
- 3,
- padding=1,
- groups=in_channel,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg))
- cls_layers.append(
- nn.Conv2d(
- in_channel,
- num_base_priors * self.cls_out_channels,
- kernel_size=1 if self.use_depthwise else 3,
- padding=0 if self.use_depthwise else 1))
- reg_layers.append(
- nn.Conv2d(
- in_channel,
- num_base_priors * 4,
- kernel_size=1 if self.use_depthwise else 3,
- padding=0 if self.use_depthwise else 1))
- self.cls_convs.append(nn.Sequential(*cls_layers))
- self.reg_convs.append(nn.Sequential(*reg_layers))
-
- def forward(self, x: Tuple[Tensor]) -> Tuple[List[Tensor], List[Tensor]]:
- """Forward features from the upstream network.
-
- Args:
- x (tuple[Tensor]): Features from the upstream network, each is
- a 4D-tensor.
-
- Returns:
- tuple[list[Tensor], list[Tensor]]: A tuple of cls_scores list and
- bbox_preds list.
-
- - cls_scores (list[Tensor]): Classification scores for all scale \
- levels, each is a 4D-tensor, the channels number is \
- num_anchors * num_classes.
- - bbox_preds (list[Tensor]): Box energies / deltas for all scale \
- levels, each is a 4D-tensor, the channels number is \
- num_anchors * 4.
- """
- cls_scores = []
- bbox_preds = []
- for feat, reg_conv, cls_conv in zip(x, self.reg_convs, self.cls_convs):
- cls_scores.append(cls_conv(feat))
- bbox_preds.append(reg_conv(feat))
- return cls_scores, bbox_preds
-
- def loss_by_feat_single(self, cls_score: Tensor, bbox_pred: Tensor,
- anchor: Tensor, labels: Tensor,
- label_weights: Tensor, bbox_targets: Tensor,
- bbox_weights: Tensor,
- avg_factor: int) -> Tuple[Tensor, Tensor]:
- """Compute loss of a single image.
-
- Args:
- cls_score (Tensor): Box scores for each image
- with shape (num_total_anchors, num_classes).
- bbox_pred (Tensor): Box energies / deltas for each image
- with shape (num_total_anchors, 4).
- anchor (Tensor): Box reference for each scale level with shape
- (num_total_anchors, 4).
- labels (Tensor): Labels of each anchor with shape
- (num_total_anchors,).
- label_weights (Tensor): Label weights of each anchor with shape
- (num_total_anchors,).
- bbox_targets (Tensor): BBox regression targets of each anchor
- with shape (num_total_anchors, 4).
- bbox_weights (Tensor): BBox regression loss weights of each anchor
- with shape (num_total_anchors, 4).
- avg_factor (int): Average factor that is used to average
- the loss. When using sampling method, avg_factor is usually
- the sum of positive and negative priors. When using
- `PseudoSampler`, `avg_factor` is usually equal to the number
- of positive priors.
-
- Returns:
- Tuple[Tensor, Tensor]: A tuple of cls loss and bbox loss of one
- feature map.
- """
-
- loss_cls_all = F.cross_entropy(
- cls_score, labels, reduction='none') * label_weights
- # FG cat_id: [0, num_classes -1], BG cat_id: num_classes
- pos_inds = ((labels >= 0) & (labels < self.num_classes)).nonzero(
- as_tuple=False).reshape(-1)
- neg_inds = (labels == self.num_classes).nonzero(
- as_tuple=False).view(-1)
-
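- # Descriptive note (not in the original): hard negative mining, i.e. only the
- # `neg_pos_ratio` * num_pos negatives with the highest classification loss are
- # kept, so easy background anchors do not dominate the loss.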
- num_pos_samples = pos_inds.size(0)
- num_neg_samples = self.train_cfg['neg_pos_ratio'] * num_pos_samples
- if num_neg_samples > neg_inds.size(0):
- num_neg_samples = neg_inds.size(0)
- topk_loss_cls_neg, _ = loss_cls_all[neg_inds].topk(num_neg_samples)
- loss_cls_pos = loss_cls_all[pos_inds].sum()
- loss_cls_neg = topk_loss_cls_neg.sum()
- loss_cls = (loss_cls_pos + loss_cls_neg) / avg_factor
-
- if self.reg_decoded_bbox:
- # When the regression loss (e.g. `IouLoss`, `GIouLoss`)
- # is applied directly on the decoded bounding boxes, it
- # decodes the already encoded coordinates to absolute format.
- bbox_pred = self.bbox_coder.decode(anchor, bbox_pred)
-
- loss_bbox = smooth_l1_loss(
- bbox_pred,
- bbox_targets,
- bbox_weights,
- beta=self.train_cfg['smoothl1_beta'],
- avg_factor=avg_factor)
- return loss_cls[None], loss_bbox
-
- def loss_by_feat(
- self,
- cls_scores: List[Tensor],
- bbox_preds: List[Tensor],
- batch_gt_instances: InstanceList,
- batch_img_metas: List[dict],
- batch_gt_instances_ignore: OptInstanceList = None
- ) -> Dict[str, List[Tensor]]:
- """Compute losses of the head.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_anchors * 4, H, W)
- batch_gt_instances (list[:obj:`InstanceData`]): Batch of
- gt_instance. It usually includes ``bboxes`` and ``labels``
- attributes.
- batch_img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- batch_gt_instances_ignore (list[:obj:`InstanceData`], Optional):
- Batch of gt_instances_ignore. It includes ``bboxes`` attribute
- data that is ignored during training and testing.
- Defaults to None.
-
- Returns:
- dict[str, list[Tensor]]: A dictionary of loss components. The dict
- has the components below:
-
- - loss_cls (list[Tensor]): A list containing each feature map \
- classification loss.
- - loss_bbox (list[Tensor]): A list containing each feature map \
- regression loss.
- """
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- assert len(featmap_sizes) == self.prior_generator.num_levels
-
- device = cls_scores[0].device
-
- anchor_list, valid_flag_list = self.get_anchors(
- featmap_sizes, batch_img_metas, device=device)
- cls_reg_targets = self.get_targets(
- anchor_list,
- valid_flag_list,
- batch_gt_instances,
- batch_img_metas,
- batch_gt_instances_ignore=batch_gt_instances_ignore,
- unmap_outputs=True)
- (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list,
- avg_factor) = cls_reg_targets
-
- num_images = len(batch_img_metas)
- all_cls_scores = torch.cat([
- s.permute(0, 2, 3, 1).reshape(
- num_images, -1, self.cls_out_channels) for s in cls_scores
- ], 1)
- all_labels = torch.cat(labels_list, -1).view(num_images, -1)
- all_label_weights = torch.cat(label_weights_list,
- -1).view(num_images, -1)
- all_bbox_preds = torch.cat([
- b.permute(0, 2, 3, 1).reshape(num_images, -1, 4)
- for b in bbox_preds
- ], -2)
- all_bbox_targets = torch.cat(bbox_targets_list,
- -2).view(num_images, -1, 4)
- all_bbox_weights = torch.cat(bbox_weights_list,
- -2).view(num_images, -1, 4)
-
- # concat all level anchors to a single tensor
- all_anchors = []
- for i in range(num_images):
- all_anchors.append(torch.cat(anchor_list[i]))
-
- losses_cls, losses_bbox = multi_apply(
- self.loss_by_feat_single,
- all_cls_scores,
- all_bbox_preds,
- all_anchors,
- all_labels,
- all_label_weights,
- all_bbox_targets,
- all_bbox_weights,
- avg_factor=avg_factor)
- return dict(loss_cls=losses_cls, loss_bbox=losses_bbox)
diff --git a/spaces/Liu-LAB/GPT-academic/docs/waifu_plugin/waifu.css b/spaces/Liu-LAB/GPT-academic/docs/waifu_plugin/waifu.css
deleted file mode 100644
index 42639df0794e46fc58f66e2c772e2bf9ba605eed..0000000000000000000000000000000000000000
--- a/spaces/Liu-LAB/GPT-academic/docs/waifu_plugin/waifu.css
+++ /dev/null
@@ -1,290 +0,0 @@
-.waifu {
- position: fixed;
- bottom: 0;
- z-index: 1;
- font-size: 0;
- -webkit-transform: translateY(3px);
- transform: translateY(3px);
-}
-.waifu:hover {
- -webkit-transform: translateY(0);
- transform: translateY(0);
-}
-.waifu-tips {
- opacity: 0;
- margin: -20px 20px;
- padding: 5px 10px;
- border: 1px solid rgba(224, 186, 140, 0.62);
- border-radius: 12px;
- background-color: rgba(236, 217, 188, 0.5);
- box-shadow: 0 3px 15px 2px rgba(191, 158, 118, 0.2);
- text-overflow: ellipsis;
- overflow: hidden;
- position: absolute;
- animation-delay: 5s;
- animation-duration: 50s;
- animation-iteration-count: infinite;
- animation-name: shake;
- animation-timing-function: ease-in-out;
-}
-.waifu-tool {
- display: none;
- color: #aaa;
- top: 50px;
- right: 10px;
- position: absolute;
-}
-.waifu:hover .waifu-tool {
- display: block;
-}
-.waifu-tool span {
- display: block;
- cursor: pointer;
- color: #5b6c7d;
- transition: 0.2s;
-}
-.waifu-tool span:hover {
- color: #34495e;
-}
-.waifu #live2d{
- position: relative;
-}
-
-@keyframes shake {
- 2% {
- transform: translate(0.5px, -1.5px) rotate(-0.5deg);
- }
-
- 4% {
- transform: translate(0.5px, 1.5px) rotate(1.5deg);
- }
-
- 6% {
- transform: translate(1.5px, 1.5px) rotate(1.5deg);
- }
-
- 8% {
- transform: translate(2.5px, 1.5px) rotate(0.5deg);
- }
-
- 10% {
- transform: translate(0.5px, 2.5px) rotate(0.5deg);
- }
-
- 12% {
- transform: translate(1.5px, 1.5px) rotate(0.5deg);
- }
-
- 14% {
- transform: translate(0.5px, 0.5px) rotate(0.5deg);
- }
-
- 16% {
- transform: translate(-1.5px, -0.5px) rotate(1.5deg);
- }
-
- 18% {
- transform: translate(0.5px, 0.5px) rotate(1.5deg);
- }
-
- 20% {
- transform: translate(2.5px, 2.5px) rotate(1.5deg);
- }
-
- 22% {
- transform: translate(0.5px, -1.5px) rotate(1.5deg);
- }
-
- 24% {
- transform: translate(-1.5px, 1.5px) rotate(-0.5deg);
- }
-
- 26% {
- transform: translate(1.5px, 0.5px) rotate(1.5deg);
- }
-
- 28% {
- transform: translate(-0.5px, -0.5px) rotate(-0.5deg);
- }
-
- 30% {
- transform: translate(1.5px, -0.5px) rotate(-0.5deg);
- }
-
- 32% {
- transform: translate(2.5px, -1.5px) rotate(1.5deg);
- }
-
- 34% {
- transform: translate(2.5px, 2.5px) rotate(-0.5deg);
- }
-
- 36% {
- transform: translate(0.5px, -1.5px) rotate(0.5deg);
- }
-
- 38% {
- transform: translate(2.5px, -0.5px) rotate(-0.5deg);
- }
-
- 40% {
- transform: translate(-0.5px, 2.5px) rotate(0.5deg);
- }
-
- 42% {
- transform: translate(-1.5px, 2.5px) rotate(0.5deg);
- }
-
- 44% {
- transform: translate(-1.5px, 1.5px) rotate(0.5deg);
- }
-
- 46% {
- transform: translate(1.5px, -0.5px) rotate(-0.5deg);
- }
-
- 48% {
- transform: translate(2.5px, -0.5px) rotate(0.5deg);
- }
-
- 50% {
- transform: translate(-1.5px, 1.5px) rotate(0.5deg);
- }
-
- 52% {
- transform: translate(-0.5px, 1.5px) rotate(0.5deg);
- }
-
- 54% {
- transform: translate(-1.5px, 1.5px) rotate(0.5deg);
- }
-
- 56% {
- transform: translate(0.5px, 2.5px) rotate(1.5deg);
- }
-
- 58% {
- transform: translate(2.5px, 2.5px) rotate(0.5deg);
- }
-
- 60% {
- transform: translate(2.5px, -1.5px) rotate(1.5deg);
- }
-
- 62% {
- transform: translate(-1.5px, 0.5px) rotate(1.5deg);
- }
-
- 64% {
- transform: translate(-1.5px, 1.5px) rotate(1.5deg);
- }
-
- 66% {
- transform: translate(0.5px, 2.5px) rotate(1.5deg);
- }
-
- 68% {
- transform: translate(2.5px, -1.5px) rotate(1.5deg);
- }
-
- 70% {
- transform: translate(2.5px, 2.5px) rotate(0.5deg);
- }
-
- 72% {
- transform: translate(-0.5px, -1.5px) rotate(1.5deg);
- }
-
- 74% {
- transform: translate(-1.5px, 2.5px) rotate(1.5deg);
- }
-
- 76% {
- transform: translate(-1.5px, 2.5px) rotate(1.5deg);
- }
-
- 78% {
- transform: translate(-1.5px, 2.5px) rotate(0.5deg);
- }
-
- 80% {
- transform: translate(-1.5px, 0.5px) rotate(-0.5deg);
- }
-
- 82% {
- transform: translate(-1.5px, 0.5px) rotate(-0.5deg);
- }
-
- 84% {
- transform: translate(-0.5px, 0.5px) rotate(1.5deg);
- }
-
- 86% {
- transform: translate(2.5px, 1.5px) rotate(0.5deg);
- }
-
- 88% {
- transform: translate(-1.5px, 0.5px) rotate(1.5deg);
- }
-
- 90% {
- transform: translate(-1.5px, -0.5px) rotate(-0.5deg);
- }
-
- 92% {
- transform: translate(-1.5px, -1.5px) rotate(1.5deg);
- }
-
- 94% {
- transform: translate(0.5px, 0.5px) rotate(-0.5deg);
- }
-
- 96% {
- transform: translate(2.5px, -0.5px) rotate(-0.5deg);
- }
-
- 98% {
- transform: translate(-1.5px, -1.5px) rotate(-0.5deg);
- }
-
- 0%, 100% {
- transform: translate(0, 0) rotate(0);
- }
-}
-@font-face {
- font-family: 'Flat-UI-Icons';
- src: url('flat-ui-icons-regular.eot');
- src: url('flat-ui-icons-regular.eot?#iefix') format('embedded-opentype'), url('flat-ui-icons-regular.woff') format('woff'), url('flat-ui-icons-regular.ttf') format('truetype'), url('flat-ui-icons-regular.svg#flat-ui-icons-regular') format('svg');
-}
-[class^="fui-"],
-[class*="fui-"] {
- font-family: 'Flat-UI-Icons';
- speak: none;
- font-style: normal;
- font-weight: normal;
- font-variant: normal;
- text-transform: none;
- -webkit-font-smoothing: antialiased;
- -moz-osx-font-smoothing: grayscale;
-}
-.fui-cross:before {
- content: "\e609";
-}
-.fui-info-circle:before {
- content: "\e60f";
-}
-.fui-photo:before {
- content: "\e62a";
-}
-.fui-eye:before {
- content: "\e62c";
-}
-.fui-chat:before {
- content: "\e62d";
-}
-.fui-home:before {
- content: "\e62e";
-}
-.fui-user:before {
- content: "\e631";
-}
\ No newline at end of file
diff --git a/spaces/Luelll/ChuanhuChatGPT/modules/models.py b/spaces/Luelll/ChuanhuChatGPT/modules/models.py
deleted file mode 100644
index 25b18b1904910e183a997a763008403d960868d6..0000000000000000000000000000000000000000
--- a/spaces/Luelll/ChuanhuChatGPT/modules/models.py
+++ /dev/null
@@ -1,625 +0,0 @@
-from __future__ import annotations
-from typing import TYPE_CHECKING, List
-
-import logging
-import json
-import commentjson as cjson
-import os
-import sys
-import requests
-import urllib3
-import platform
-import base64
-from io import BytesIO
-from PIL import Image
-
-from tqdm import tqdm
-import colorama
-from duckduckgo_search import ddg
-import asyncio
-import aiohttp
-from enum import Enum
-import uuid
-
-from .presets import *
-from .llama_func import *
-from .utils import *
-from . import shared
-from .config import retrieve_proxy
-from modules import config
-from .base_model import BaseLLMModel, ModelType
-
-
-class OpenAIClient(BaseLLMModel):
- def __init__(
- self,
- model_name,
- api_key,
- system_prompt=INITIAL_SYSTEM_PROMPT,
- temperature=1.0,
- top_p=1.0,
- ) -> None:
- super().__init__(
- model_name=model_name,
- temperature=temperature,
- top_p=top_p,
- system_prompt=system_prompt,
- )
- self.api_key = api_key
- self.need_api_key = True
- self._refresh_header()
-
- def get_answer_stream_iter(self):
- response = self._get_response(stream=True)
- if response is not None:
- iter = self._decode_chat_response(response)
- partial_text = ""
- for i in iter:
- partial_text += i
- yield partial_text
- else:
- yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG
-
- def get_answer_at_once(self):
- response = self._get_response()
- response = json.loads(response.text)
- content = response["choices"][0]["message"]["content"]
- total_token_count = response["usage"]["total_tokens"]
- return content, total_token_count
-
- def count_token(self, user_input):
- input_token_count = count_token(construct_user(user_input))
- if self.system_prompt is not None and len(self.all_token_counts) == 0:
- system_prompt_token_count = count_token(
- construct_system(self.system_prompt)
- )
- return input_token_count + system_prompt_token_count
- return input_token_count
-
- def billing_info(self):
- try:
- curr_time = datetime.datetime.now()
- last_day_of_month = get_last_day_of_month(
- curr_time).strftime("%Y-%m-%d")
- first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d")
- usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}"
- try:
- usage_data = self._get_billing_data(usage_url)
- except Exception as e:
- logging.error(f"获取API使用情况失败:" + str(e))
- return i18n("**获取API使用情况失败**")
- rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100)
- return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}"
- except requests.exceptions.ConnectTimeout:
- status_text = (
- STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
- )
- return status_text
- except requests.exceptions.ReadTimeout:
- status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG
- return status_text
- except Exception as e:
- import traceback
- traceback.print_exc()
- logging.error(i18n("获取API使用情况失败:") + str(e))
- return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG
-
- def set_token_upper_limit(self, new_upper_limit):
- pass
-
- @shared.state.switching_api_key  # this decorator has no effect unless multi-account mode is enabled
- def _get_response(self, stream=False):
- openai_api_key = self.api_key
- system_prompt = self.system_prompt
- history = self.history
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {openai_api_key}",
- }
-
- if system_prompt is not None:
- history = [construct_system(system_prompt), *history]
-
- payload = {
- "model": self.model_name,
- "messages": history,
- "temperature": self.temperature,
- "top_p": self.top_p,
- "n": self.n_choices,
- "stream": stream,
- "presence_penalty": self.presence_penalty,
- "frequency_penalty": self.frequency_penalty,
- }
-
- if self.max_generation_token is not None:
- payload["max_tokens"] = self.max_generation_token
- if self.stop_sequence is not None:
- payload["stop"] = self.stop_sequence
- if self.logit_bias is not None:
- payload["logit_bias"] = self.logit_bias
- if self.user_identifier is not None:
- payload["user"] = self.user_identifier
-
- if stream:
- timeout = TIMEOUT_STREAMING
- else:
- timeout = TIMEOUT_ALL
-
- # if a custom api-host is configured, send the request to it; otherwise use the default endpoint
- if shared.state.completion_url != COMPLETION_URL:
- logging.info(f"使用自定义API URL: {shared.state.completion_url}")
-
- with retrieve_proxy():
- try:
- response = requests.post(
- shared.state.completion_url,
- headers=headers,
- json=payload,
- stream=stream,
- timeout=timeout,
- )
- except Exception:
- return None
- return response
-
- def _refresh_header(self):
- self.headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {self.api_key}",
- }
-
- def _get_billing_data(self, billing_url):
- with retrieve_proxy():
- response = requests.get(
- billing_url,
- headers=self.headers,
- timeout=TIMEOUT_ALL,
- )
-
- if response.status_code == 200:
- data = response.json()
- return data
- else:
- raise Exception(
- f"API request failed with status code {response.status_code}: {response.text}"
- )
-
- def _decode_chat_response(self, response):
- error_msg = ""
- for chunk in response.iter_lines():
- if chunk:
- chunk = chunk.decode()
- chunk_length = len(chunk)
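- # Descriptive note (not in the original): streamed chunks are SSE lines of the form
- # "data: {...}", so chunk[6:] below strips the "data: " prefix before JSON parsing.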
- try:
- chunk = json.loads(chunk[6:])
- except json.JSONDecodeError:
- print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}")
- error_msg += chunk
- continue
- if chunk_length > 6 and "delta" in chunk["choices"][0]:
- if chunk["choices"][0]["finish_reason"] == "stop":
- break
- try:
- yield chunk["choices"][0]["delta"]["content"]
- except Exception as e:
- # logging.error(f"Error: {e}")
- continue
- if error_msg:
- raise Exception(error_msg)
-
- def set_key(self, new_access_key):
- ret = super().set_key(new_access_key)
- self._refresh_header()
- return ret
-
-
-class ChatGLM_Client(BaseLLMModel):
- def __init__(self, model_name) -> None:
- super().__init__(model_name=model_name)
- from transformers import AutoTokenizer, AutoModel
- import torch
- global CHATGLM_TOKENIZER, CHATGLM_MODEL
- if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None:
- system_name = platform.system()
- model_path = None
- if os.path.exists("models"):
- model_dirs = os.listdir("models")
- if model_name in model_dirs:
- model_path = f"models/{model_name}"
- if model_path is not None:
- model_source = model_path
- else:
- model_source = f"THUDM/{model_name}"
- CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained(
- model_source, trust_remote_code=True
- )
- quantified = False
- if "int4" in model_name:
- quantified = True
- model = AutoModel.from_pretrained(
- model_source, trust_remote_code=True
- )
- if torch.cuda.is_available():
- # run on CUDA
- logging.info("CUDA is available, using CUDA")
- model = model.half().cuda()
- # MPS acceleration still has some issues, so it is only used when the model is local and not quantized
- elif system_name == "Darwin" and model_path is not None and not quantified:
- logging.info("Running on macOS, using MPS")
- # running on macOS and model already downloaded
- model = model.half().to("mps")
- else:
- logging.info("GPU is not available, using CPU")
- model = model.float()
- model = model.eval()
- CHATGLM_MODEL = model
-
- def _get_glm_style_input(self):
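- # Descriptive note (not in the original): a flat history such as [u1, a1, u2]
- # becomes history=[[u1, a1]] with query=u2, the (query, response) pair format
- # expected by ChatGLM's chat/stream_chat interface.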
- history = [x["content"] for x in self.history]
- query = history.pop()
- logging.debug(colorama.Fore.YELLOW +
- f"{history}" + colorama.Fore.RESET)
- assert (
- len(history) % 2 == 0
- ), f"History should be even length. current history is: {history}"
- history = [[history[i], history[i + 1]]
- for i in range(0, len(history), 2)]
- return history, query
-
- def get_answer_at_once(self):
- history, query = self._get_glm_style_input()
- response, _ = CHATGLM_MODEL.chat(
- CHATGLM_TOKENIZER, query, history=history)
- return response, len(response)
-
- def get_answer_stream_iter(self):
- history, query = self._get_glm_style_input()
- for response, history in CHATGLM_MODEL.stream_chat(
- CHATGLM_TOKENIZER,
- query,
- history,
- max_length=self.token_upper_limit,
- top_p=self.top_p,
- temperature=self.temperature,
- ):
- yield response
-
-
-class LLaMA_Client(BaseLLMModel):
- def __init__(
- self,
- model_name,
- lora_path=None,
- ) -> None:
- super().__init__(model_name=model_name)
- from lmflow.datasets.dataset import Dataset
- from lmflow.pipeline.auto_pipeline import AutoPipeline
- from lmflow.models.auto_model import AutoModel
- from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments
-
- self.max_generation_token = 1000
- self.end_string = "\n\n"
- # We don't need input data
- data_args = DatasetArguments(dataset_path=None)
- self.dataset = Dataset(data_args)
- self.system_prompt = ""
-
- global LLAMA_MODEL, LLAMA_INFERENCER
- if LLAMA_MODEL is None or LLAMA_INFERENCER is None:
- model_path = None
- if os.path.exists("models"):
- model_dirs = os.listdir("models")
- if model_name in model_dirs:
- model_path = f"models/{model_name}"
- if model_path is not None:
- model_source = model_path
- else:
- model_source = f"decapoda-research/{model_name}"
- # raise Exception(f"models目录下没有这个模型: {model_name}")
- if lora_path is not None:
- lora_path = f"lora/{lora_path}"
- model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None,
- use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True)
- pipeline_args = InferencerArguments(
- local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16')
-
- with open(pipeline_args.deepspeed, "r") as f:
- ds_config = json.load(f)
- LLAMA_MODEL = AutoModel.get_model(
- model_args,
- tune_strategy="none",
- ds_config=ds_config,
- )
- LLAMA_INFERENCER = AutoPipeline.get_pipeline(
- pipeline_name="inferencer",
- model_args=model_args,
- data_args=data_args,
- pipeline_args=pipeline_args,
- )
-
- def _get_llama_style_input(self):
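- # Descriptive note (not in the original): the context built below looks like
- # "Instruction: <system>\nInput: <user>\n\nOutput: <assistant>\n\n...\n\nOutput: ",
- # so the model is prompted to continue after the final "Output: ".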
- history = []
- instruction = ""
- if self.system_prompt:
- instruction = (f"Instruction: {self.system_prompt}\n")
- for x in self.history:
- if x["role"] == "user":
- history.append(f"{instruction}Input: {x['content']}")
- else:
- history.append(f"Output: {x['content']}")
- context = "\n\n".join(history)
- context += "\n\nOutput: "
- return context
-
- def get_answer_at_once(self):
- context = self._get_llama_style_input()
-
- input_dataset = self.dataset.from_dict(
- {"type": "text_only", "instances": [{"text": context}]}
- )
-
- output_dataset = LLAMA_INFERENCER.inference(
- model=LLAMA_MODEL,
- dataset=input_dataset,
- max_new_tokens=self.max_generation_token,
- temperature=self.temperature,
- )
-
- response = output_dataset.to_dict()["instances"][0]["text"]
- return response, len(response)
-
- def get_answer_stream_iter(self):
- context = self._get_llama_style_input()
- partial_text = ""
- step = 1
- for _ in range(0, self.max_generation_token, step):
- input_dataset = self.dataset.from_dict(
- {"type": "text_only", "instances": [
- {"text": context + partial_text}]}
- )
- output_dataset = LLAMA_INFERENCER.inference(
- model=LLAMA_MODEL,
- dataset=input_dataset,
- max_new_tokens=step,
- temperature=self.temperature,
- )
- response = output_dataset.to_dict()["instances"][0]["text"]
- if response == "" or response == self.end_string:
- break
- partial_text += response
- yield partial_text
-
-
-class XMChat(BaseLLMModel):
- def __init__(self, api_key):
- super().__init__(model_name="xmchat")
- self.api_key = api_key
- self.session_id = None
- self.reset()
- self.image_bytes = None
- self.image_path = None
- self.xm_history = []
- self.url = "https://xmbot.net/web"
- self.last_conv_id = None
-
- def reset(self):
- self.session_id = str(uuid.uuid4())
- self.last_conv_id = None
- return [], "已重置"
-
- def image_to_base64(self, image_path):
- # open and load the image
- img = Image.open(image_path)
-
- # get the image width and height
- width, height = img.size
-
- # compute the scale ratio so that the longest side does not exceed max_dimension (2048 px)
- max_dimension = 2048
- scale_ratio = min(max_dimension / width, max_dimension / height)
-
- if scale_ratio < 1:
- # resize the image according to the scale ratio
- new_width = int(width * scale_ratio)
- new_height = int(height * scale_ratio)
- img = img.resize((new_width, new_height), Image.LANCZOS)  # LANCZOS replaces the ANTIALIAS alias removed in Pillow 10
-
- # convert the image to JPEG binary data
- buffer = BytesIO()
- if img.mode == "RGBA":
- img = img.convert("RGB")
- img.save(buffer, format='JPEG')
- binary_image = buffer.getvalue()
-
- # Base64-encode the binary data
- base64_image = base64.b64encode(binary_image).decode('utf-8')
-
- return base64_image
-
- def try_read_image(self, filepath):
- def is_image_file(filepath):
- # check whether the file is an image
- valid_image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"]
- file_extension = os.path.splitext(filepath)[1].lower()
- return file_extension in valid_image_extensions
-
- if is_image_file(filepath):
- logging.info(f"读取图片文件: {filepath}")
- self.image_bytes = self.image_to_base64(filepath)
- self.image_path = filepath
- else:
- self.image_bytes = None
- self.image_path = None
-
- def like(self):
- if self.last_conv_id is None:
- return "点赞失败,你还没发送过消息"
- data = {
- "uuid": self.last_conv_id,
- "appraise": "good"
- }
- response = requests.post(self.url, json=data)
- return "👍点赞成功,,感谢反馈~"
-
- def dislike(self):
- if self.last_conv_id is None:
- return "点踩失败,你还没发送过消息"
- data = {
- "uuid": self.last_conv_id,
- "appraise": "bad"
- }
- response = requests.post(self.url, json=data)
- return "👎点踩成功,感谢反馈~"
-
- def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot):
- fake_inputs = real_inputs
- display_append = ""
- limited_context = False
- return limited_context, fake_inputs, display_append, real_inputs, chatbot
-
- def handle_file_upload(self, files, chatbot):
- """if the model accepts multi modal input, implement this function"""
- if files:
- for file in files:
- if file.name:
- logging.info(f"尝试读取图像: {file.name}")
- self.try_read_image(file.name)
- if self.image_path is not None:
- chatbot = chatbot + [((self.image_path,), None)]
- if self.image_bytes is not None:
- logging.info("使用图片作为输入")
- # XMChat can effectively handle only one image per conversation turn
- self.reset()
- conv_id = str(uuid.uuid4())
- data = {
- "user_id": self.api_key,
- "session_id": self.session_id,
- "uuid": conv_id,
- "data_type": "imgbase64",
- "data": self.image_bytes
- }
- response = requests.post(self.url, json=data)
- response = json.loads(response.text)
- logging.info(f"图片回复: {response['data']}")
- return None, chatbot, None
-
- def get_answer_at_once(self):
- question = self.history[-1]["content"]
- conv_id = str(uuid.uuid4())
- self.last_conv_id = conv_id
- data = {
- "user_id": self.api_key,
- "session_id": self.session_id,
- "uuid": conv_id,
- "data_type": "text",
- "data": question
- }
- response = requests.post(self.url, json=data)
- try:
- response = json.loads(response.text)
- return response["data"], len(response["data"])
- except Exception as e:
- return response.text, len(response.text)
-
-
-
-
-def get_model(
- model_name,
- lora_model_path=None,
- access_key=None,
- temperature=None,
- top_p=None,
- system_prompt=None,
-) -> BaseLLMModel:
- msg = i18n("模型设置为了:") + f" {model_name}"
- model_type = ModelType.get_type(model_name)
- lora_selector_visibility = False
- lora_choices = []
- dont_change_lora_selector = False
- if model_type != ModelType.OpenAI:
- config.local_embedding = True
- # del current_model.model
- model = None
- try:
- if model_type == ModelType.OpenAI:
- logging.info(f"正在加载OpenAI模型: {model_name}")
- model = OpenAIClient(
- model_name=model_name,
- api_key=access_key,
- system_prompt=system_prompt,
- temperature=temperature,
- top_p=top_p,
- )
- elif model_type == ModelType.ChatGLM:
- logging.info(f"正在加载ChatGLM模型: {model_name}")
- model = ChatGLM_Client(model_name)
- elif model_type == ModelType.LLaMA and lora_model_path == "":
- msg = f"现在请为 {model_name} 选择LoRA模型"
- logging.info(msg)
- lora_selector_visibility = True
- if os.path.isdir("lora"):
- lora_choices = get_file_names(
- "lora", plain=True, filetypes=[""])
- lora_choices = ["No LoRA"] + lora_choices
- elif model_type == ModelType.LLaMA and lora_model_path != "":
- logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}")
- dont_change_lora_selector = True
- if lora_model_path == "No LoRA":
- lora_model_path = None
- msg += " + No LoRA"
- else:
- msg += f" + {lora_model_path}"
- model = LLaMA_Client(model_name, lora_model_path)
- elif model_type == ModelType.XMChat:
- if os.environ.get("XMCHAT_API_KEY") != "":
- access_key = os.environ.get("XMCHAT_API_KEY")
- model = XMChat(api_key=access_key)
- elif model_type == ModelType.Unknown:
- raise ValueError(f"未知模型: {model_name}")
- logging.info(msg)
- except Exception as e:
- logging.error(e)
- msg = f"{STANDARD_ERROR_MSG}: {e}"
- if dont_change_lora_selector:
- return model, msg
- else:
- return model, msg, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility)
-
-
-if __name__ == "__main__":
- with open("config.json", "r") as f:
- openai_api_key = cjson.load(f)["openai_api_key"]
- # set logging level to debug
- logging.basicConfig(level=logging.DEBUG)
- # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key)
- client = get_model(model_name="chatglm-6b-int4")
- chatbot = []
- stream = False
- # test the billing feature
- logging.info(colorama.Back.GREEN + "测试账单功能" + colorama.Back.RESET)
- logging.info(client.billing_info())
- # test question answering
- logging.info(colorama.Back.GREEN + "测试问答" + colorama.Back.RESET)
- question = "巴黎是中国的首都吗?"
- for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
- logging.info(i)
- logging.info(f"测试问答后history : {client.history}")
- # test memory
- logging.info(colorama.Back.GREEN + "测试记忆力" + colorama.Back.RESET)
- question = "我刚刚问了你什么问题?"
- for i in client.predict(inputs=question, chatbot=chatbot, stream=stream):
- logging.info(i)
- logging.info(f"测试记忆力后history : {client.history}")
- # test the retry feature
- logging.info(colorama.Back.GREEN + "测试重试功能" + colorama.Back.RESET)
- for i in client.retry(chatbot=chatbot, stream=stream):
- logging.info(i)
- logging.info(f"重试后history : {client.history}")
- # # test the summarization feature
- # print(colorama.Back.GREEN + "测试总结功能" + colorama.Back.RESET)
- # chatbot, msg = client.reduce_token_size(chatbot=chatbot)
- # print(chatbot, msg)
- # print(f"总结后history: {client.history}")
diff --git a/spaces/LuxOAI/ChatGpt-Web/docs/faq-en.md b/spaces/LuxOAI/ChatGpt-Web/docs/faq-en.md
deleted file mode 100644
index 319fc7dea861e0451b3d17c8391dfce82daf2c26..0000000000000000000000000000000000000000
--- a/spaces/LuxOAI/ChatGpt-Web/docs/faq-en.md
+++ /dev/null
@@ -1,136 +0,0 @@
-# Frequently Asked Questions
-
-## How to get help quickly?
-1. Ask ChatGPT / Bing / Baidu / Google, etc.
-2. Ask online friends. Please provide background information and a detailed description of the problem. High-quality questions are more likely to get useful answers.
-
-# Deployment Related Questions
-
-## Why does the Docker deployment version always prompt for updates
-The Docker version is equivalent to the stable version, and the latest Docker is always consistent with the latest release version. Currently, our release frequency is once every one to two days, so the Docker version will always be one to two days behind the latest commit, which is expected.
-
-## How to deploy on Vercel
-1. Register a Github account and fork this project.
-2. Register Vercel (mobile phone verification required, Chinese number can be used), and connect your Github account.
-3. Create a new project on Vercel, select the project you forked on Github, fill in the required environment variables, and start deploying. After deployment, you can access your project through the domain provided by Vercel. (Requires proxy in mainland China)
-* If you need to access it directly in China: At your DNS provider, add a CNAME record for the domain name, pointing to cname.vercel-dns.com. Then set up your domain access on Vercel.
-
-## How to modify Vercel environment variables
-- Enter the Vercel console page;
-- Select your chatgpt-next-web project;
-- Click on the Settings option at the top of the page;
-- Find the Environment Variables option in the sidebar;
-- Modify the corresponding values as needed.
-
-## What is the environment variable CODE? Is it necessary to set it?
-This is your custom access password, you can choose:
-1. Do not set it, delete the environment variable. Be cautious: anyone can access your project at this time.
-2. When deploying the project, set the environment variable CODE (supports multiple passwords, separated by commas). After setting the access password, users need to enter the access password in the settings page to use it. See [related instructions](https://github.com/Yidadaa/ChatGPT-Next-Web#access-password)
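-
-For example, an illustrative value (not taken from the project docs) that sets two access passwords at once:
-```
-CODE=first-password,second-password
-```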
-
-## Why doesn't the version I deployed have streaming response
-> Related discussion: [#386](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/386)
-
-If you use nginx reverse proxy, you need to add the following code to the configuration file:
-```
-# No caching, support streaming output
-proxy_cache off; # Turn off caching
-proxy_buffering off; # Turn off proxy buffering
-chunked_transfer_encoding on; # Turn on chunked transfer encoding
-tcp_nopush on; # Turn on TCP NOPUSH option, disable Nagle algorithm
-tcp_nodelay on; # Turn on TCP NODELAY option, disable delay ACK algorithm
-keepalive_timeout 300; # Set keep-alive timeout to 300 seconds
-```
-
-If you are deploying on Netlify, this issue is still waiting to be resolved; please be patient.
-
-## I've deployed, but it's not accessible
-Please check and troubleshoot the following issues:
-- Is the service started?
-- Is the port correctly mapped?
-- Is the firewall port open?
-- Is the route to the server okay?
-- Is the domain name resolved correctly?
-
-# Usage Related Questions
-
-## Why does it always prompt "An error occurred, please try again later"
-There could be many reasons, please check the following in order:
-- First, check if your code version is the latest version, update to the latest version and try again;
-- Check if the api key is set correctly, the environment variable name must be uppercase with underscores;
-- Check if the api key is available;
-- If you still cannot determine the problem after going through the above steps, please submit a new issue in the issue area and attach the runtime log of vercel or the log of docker runtime.
-
-## Why does ChatGPT's reply get garbled
-In the settings page - model settings, there is an item called `temperature`. If this value is greater than 1, it may cause garbled replies. Adjust it back to within 1.
-
-## It prompts "Now it's unauthorized, please enter the access password on the settings page" when using?
-The project has set an access password through the environment variable CODE. When using it for the first time, you need to go to settings and enter the access code to use.
-
-## It prompts "You exceeded your current quota, ..." when using?
-The API key is problematic: its balance is insufficient.
-
-## What is a proxy and how to use it?
-Due to IP restrictions of OpenAI, China and some other countries/regions cannot directly connect to OpenAI API and need to go through a proxy. You can use a proxy server (forward proxy) or a pre-configured OpenAI API reverse proxy.
-- Forward proxy example: a VPN or similar tunnel. For a Docker deployment, set the environment variable HTTP_PROXY to your proxy address (http://address:port); see the example after this list.
-- Reverse proxy example: You can use someone else's proxy address or set it up for free through Cloudflare. Set the project environment variable BASE_URL to your proxy address.
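-
-As a minimal sketch of the forward-proxy case for a Docker deployment (the image name and port below are illustrative assumptions, not taken from the original docs):
-```
-docker run -d -p 3000:3000 -e HTTP_PROXY="http://address:port" yidadaa/chatgpt-next-web
-```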
-
-## Can I deploy it on a server in China?
-It is possible but there are issues to be addressed:
-- Proxy is required to connect to websites such as Github and OpenAI;
-- Domain name resolution requires an ICP filing for servers in China;
-- Chinese policy restricts proxy access to foreign websites/ChatGPT-related applications, which may be blocked.
-
-# Network Service Related Questions
-## What is Cloudflare?
-Cloudflare (CF) is a network service provider offering CDN, domain management, static page hosting, edge computing function deployment, and more. Common use cases: purchase and/or host your domain (resolution, dynamic domain, etc.), apply CDN to your server (can hide IP to avoid being blocked), deploy websites (CF Pages). CF offers most services for free.
-
-## What is Vercel?
-Vercel is a global cloud platform designed to help developers build and deploy modern web applications more quickly. This project and many web applications can be deployed on Vercel with a single click for free. No need to understand code, Linux, have a server, pay, or set up an OpenAI API proxy. The downside is that you need to bind a domain name to access it without restrictions in China.
-
-## How to obtain a domain name?
-1. Register with a domain provider, such as Namesilo (supports Alipay) or Cloudflare for international providers, and Wanwang for domestic providers in China.
-2. Free domain name providers: eu.org (second-level domain), etc.
-3. Ask friends for a free second-level domain.
-
-## How to obtain a server
-- Examples of international server providers: Amazon Web Services, Google Cloud, Vultr, Bandwagon, Hostdare, etc.
- International server considerations: Server lines affect access speed in China; CN2 GIA and CN2 lines are recommended. If the server has difficulty accessing in China (serious packet loss, etc.), you can try using a CDN (from providers like Cloudflare).
-- Domestic server providers: Alibaba Cloud, Tencent, etc.
- Domestic server considerations: Domain name resolution requires filing; domestic server bandwidth is relatively expensive; accessing foreign websites (Github, OpenAI, etc.) requires a proxy.
-
-# OpenAI-related Questions
-## How to register an OpenAI account?
-Go to chat.openai.com to register. You will need:
-- A good VPN (OpenAI only allows native IP addresses of supported regions)
-- A supported email (e.g., Gmail or a company/school email, not Outlook or QQ email)
-- A way to receive SMS verification (e.g., SMS-activate website)
-
-## How to activate OpenAI API? How to check API balance?
-Official website (requires VPN): https://platform.openai.com/account/usage
-Some users have set up a proxy to check the balance without a VPN; ask online friends for access. Please verify the source is reliable to avoid API Key leakage.
-
-## Why doesn't my new OpenAI account have an API balance?
-(Updated April 6th) Newly registered accounts usually display API balance within 24 hours. New accounts are currently given a $5 balance.
-
-## How to recharge OpenAI API?
-OpenAI only accepts credit cards from designated regions (Chinese credit cards cannot be used). If credit cards from your region are not supported, some options include:
-1. Depay virtual credit card
-2. Apply for a foreign credit card
-3. Find someone online to top up
-
-## How to access the GPT-4 API?
-(Updated April 6th) Access to the GPT-4 API requires a separate application. Go to the following address and enter your information to join the waitlist (prepare your OpenAI organization ID): https://openai.com/waitlist/gpt-4-api
-Wait for email updates afterwards.
-
-## How to use the Azure OpenAI interface
-Please refer to: [#371](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/371)
-
-## Why is my Token consumed so fast?
-> Related discussion: [#518](https://github.com/Yidadaa/ChatGPT-Next-Web/issues/518)
-- If you have GPT-4 access and use GPT-4 API regularly, your bill will increase rapidly since GPT-4 pricing is about 15 times higher than GPT-3.5;
-- If you are using GPT-3.5 and not using it frequently, but still find your bill increasing fast, please troubleshoot immediately using these steps:
- - Check your API key consumption record on the OpenAI website; if your token is consumed every hour and each time consumes tens of thousands of tokens, your key must have been leaked. Please delete it and regenerate it immediately. **Do not check your balance on random websites.**
- - If your password is short, such as 5 characters or fewer, the cost of brute-forcing is very low. It is recommended to search the docker logs to confirm whether someone has tried a large number of password combinations (see the example command after this list). Keyword: got access code
-- By following these two methods, you can locate the reason for your token's rapid consumption:
- - If the OpenAI consumption record is abnormal but the Docker log has no issues, it means your API key has been leaked;
- - If the Docker log shows a large number of got access code brute-force attempts, your password has been cracked.
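-
-An illustrative way to check the Docker logs for such attempts (replace the container name with your own):
-```
-docker logs <your-container-name> 2>&1 | grep "got access code"
-```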
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/transforms/base.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/transforms/base.py
deleted file mode 100644
index eb5a2deb3c44f5aed7530fd1e299fff1273737b8..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/inference/transforms/base.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import torch
-
-
-class BaseTransform(object):
- def __init__(self):
- self.image_changed = False
-
- def transform(self, image_nd, clicks_lists):
- raise NotImplementedError
-
- def inv_transform(self, prob_map):
- raise NotImplementedError
-
- def reset(self):
- raise NotImplementedError
-
- def get_state(self):
- raise NotImplementedError
-
- def set_state(self, state):
- raise NotImplementedError
-
-
-class SigmoidForPred(BaseTransform):
- def transform(self, image_nd, clicks_lists):
- return image_nd, clicks_lists
-
- def inv_transform(self, prob_map):
- return torch.sigmoid(prob_map)
-
- def reset(self):
- pass
-
- def get_state(self):
- return None
-
- def set_state(self, state):
- pass
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/syncbn/modules/functional/_csrc.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/syncbn/modules/functional/_csrc.py
deleted file mode 100644
index d0c14098f0cfa422920f01fe4985dbeb7fedc2d1..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/syncbn/modules/functional/_csrc.py
+++ /dev/null
@@ -1,54 +0,0 @@
-"""
-/*****************************************************************************/
-
-Extension module loader
-
-code referenced from : https://github.com/facebookresearch/maskrcnn-benchmark
-
-/*****************************************************************************/
-"""
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import glob
-import os.path
-
-import torch
-
-try:
- from torch.utils.cpp_extension import load
- from torch.utils.cpp_extension import CUDA_HOME
-except ImportError:
- raise ImportError(
- "The cpp layer extensions requires PyTorch 0.4 or higher")
-
-
-def _load_C_extensions():
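- # Descriptive note (not in the original): JIT-compiles the C++ sources under ./csrc
- # via torch.utils.cpp_extension.load, adding the CUDA kernels only when both
- # CUDA_HOME and a CUDA device are available.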
- this_dir = os.path.dirname(os.path.abspath(__file__))
- this_dir = os.path.join(this_dir, "csrc")
-
- main_file = glob.glob(os.path.join(this_dir, "*.cpp"))
- sources_cpu = glob.glob(os.path.join(this_dir, "cpu", "*.cpp"))
- sources_cuda = glob.glob(os.path.join(this_dir, "cuda", "*.cu"))
-
- sources = main_file + sources_cpu
-
- extra_cflags = []
- extra_cuda_cflags = []
- if torch.cuda.is_available() and CUDA_HOME is not None:
- sources.extend(sources_cuda)
- extra_cflags = ["-O3", "-DWITH_CUDA"]
- extra_cuda_cflags = ["--expt-extended-lambda"]
- sources = [os.path.join(this_dir, s) for s in sources]
- extra_include_paths = [this_dir]
- return load(
- name="ext_lib",
- sources=sources,
- extra_cflags=extra_cflags,
- extra_include_paths=extra_include_paths,
- extra_cuda_cflags=extra_cuda_cflags,
- )
-
-
-_backend = _load_C_extensions()
diff --git a/spaces/Manjushri/OJ-V4-CPU/README.md b/spaces/Manjushri/OJ-V4-CPU/README.md
deleted file mode 100644
index 5a5e26e4692572691da5ca174035609226627c3a..0000000000000000000000000000000000000000
--- a/spaces/Manjushri/OJ-V4-CPU/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: OpenJourney V4 CPU
-emoji: 🔥
-colorFrom: red
-colorTo: red
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: Manjushri/SD-2.1-CPU
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Marshalls/testmtd/analysis/pymo/Quaternions.py b/spaces/Marshalls/testmtd/analysis/pymo/Quaternions.py
deleted file mode 100644
index d4b754871310a264e2bd2675479db9a79d24358e..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/analysis/pymo/Quaternions.py
+++ /dev/null
@@ -1,468 +0,0 @@
-import numpy as np
-
-class Quaternions:
- """
- Quaternions is a wrapper around a numpy ndarray
- that allows it to act as if it were an ndarray of
- a quaternion data type.
-
- Therefore addition, subtraction, multiplication,
- division, negation, and absolute value are all defined
- in terms of quaternion operations such as quaternion
- multiplication.
-
- This allows for much neater code and many routines
- which conceptually do the same thing to be written
- in the same way for point data and for rotation data.
-
- The Quaternions class has been desgined such that it
- should support broadcasting and slicing in all of the
- usual ways.
- """
-
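- # Usage sketch (illustrative, not part of the original file):
- # q = Quaternions(np.array([[1.0, 0.0, 0.0, 0.0]])) # identity rotation
- # pts = np.array([[1.0, 2.0, 3.0]])
- # rotated = q * pts # rotates pts by q, see __mul__
-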
- def __init__(self, qs):
- if isinstance(qs, np.ndarray):
-
- if len(qs.shape) == 1: qs = np.array([qs])
- self.qs = qs
- return
-
- if isinstance(qs, Quaternions):
- self.qs = qs.qs
- return
-
- raise TypeError('Quaternions must be constructed from iterable, numpy array, or Quaternions, not %s' % type(qs))
-
- def __str__(self): return "Quaternions("+ str(self.qs) + ")"
- def __repr__(self): return "Quaternions("+ repr(self.qs) + ")"
-
- """ Helper Methods for Broadcasting and Data extraction """
-
- @classmethod
- def _broadcast(cls, sqs, oqs, scalar=False):
-
- if isinstance(oqs, float): return sqs, oqs * np.ones(sqs.shape[:-1])
-
- ss = np.array(sqs.shape) if not scalar else np.array(sqs.shape[:-1])
- os = np.array(oqs.shape)
-
- if len(ss) != len(os):
- raise TypeError('Quaternions cannot broadcast together shapes %s and %s' % (sqs.shape, oqs.shape))
-
- if np.all(ss == os): return sqs, oqs
-
- if not np.all((ss == os) | (os == np.ones(len(os))) | (ss == np.ones(len(ss)))):
- raise TypeError('Quaternions cannot broadcast together shapes %s and %s' % (sqs.shape, oqs.shape))
-
- sqsn, oqsn = sqs.copy(), oqs.copy()
-
- for a in np.where(ss == 1)[0]: sqsn = sqsn.repeat(os[a], axis=a)
- for a in np.where(os == 1)[0]: oqsn = oqsn.repeat(ss[a], axis=a)
-
- return sqsn, oqsn
-
- """ Adding Quaterions is just Defined as Multiplication """
-
- def __add__(self, other): return self * other
- def __sub__(self, other): return self / other
-
- """ Quaterion Multiplication """
-
- def __mul__(self, other):
- """
- Quaternion multiplication has three main methods.
-
- When multiplying a Quaternions array by Quaternions
- normal quaternion multiplication is performed.
-
- When multiplying a Quaternions array by a vector
- array of the same shape, where the last axis is 3,
- it is assumed to be a Quaternion by 3D-Vector
- multiplication and the 3D-Vectors are rotated
- in space by the Quaternions.
-
- When multiplying a Quaternions array by a scalar
- or vector of a different shape, it is assumed to be
- a Quaternions by Scalars multiplication and the
- Quaternions are scaled using Slerp and the identity
- quaternions.
- """
-
- """ If Quaternions type do Quaternions * Quaternions """
- if isinstance(other, Quaternions):
-
- sqs, oqs = Quaternions._broadcast(self.qs, other.qs)
-
- q0 = sqs[...,0]; q1 = sqs[...,1];
- q2 = sqs[...,2]; q3 = sqs[...,3];
- r0 = oqs[...,0]; r1 = oqs[...,1];
- r2 = oqs[...,2]; r3 = oqs[...,3];
-
- qs = np.empty(sqs.shape)
- qs[...,0] = r0 * q0 - r1 * q1 - r2 * q2 - r3 * q3
- qs[...,1] = r0 * q1 + r1 * q0 - r2 * q3 + r3 * q2
- qs[...,2] = r0 * q2 + r1 * q3 + r2 * q0 - r3 * q1
- qs[...,3] = r0 * q3 - r1 * q2 + r2 * q1 + r3 * q0
-
- return Quaternions(qs)
-
- """ If array type do Quaternions * Vectors """
- if isinstance(other, np.ndarray) and other.shape[-1] == 3:
- vs = Quaternions(np.concatenate([np.zeros(other.shape[:-1] + (1,)), other], axis=-1))
- return (self * (vs * -self)).imaginaries
-
- """ If float do Quaternions * Scalars """
- if isinstance(other, np.ndarray) or isinstance(other, float):
- return Quaternions.slerp(Quaternions.id_like(self), self, other)
-
- raise TypeError('Cannot multiply/add Quaternions with type %s' % str(type(other)))
-
- def __div__(self, other):
- """
- When a Quaternion type is supplied, division is defined
- as multiplication by the inverse of that Quaternion.
-
- When a scalar or vector is supplied it is defined
- as multiplication by one over the supplied value.
- Essentially a scaling.
- """
-
- if isinstance(other, Quaternions): return self * (-other)
- if isinstance(other, np.ndarray): return self * (1.0 / other)
- if isinstance(other, float): return self * (1.0 / other)
- raise TypeError('Cannot divide/subtract Quaternions with type %s' % str(type(other)))
-
- def __eq__(self, other): return self.qs == other.qs
- def __ne__(self, other): return self.qs != other.qs
-
- def __neg__(self):
- """ Invert Quaternions """
- return Quaternions(self.qs * np.array([[1, -1, -1, -1]]))
-
- def __abs__(self):
- """ Unify Quaternions To Single Pole """
- qabs = self.normalized().copy()
- top = np.sum(( qabs.qs) * np.array([1,0,0,0]), axis=-1)
- bot = np.sum((-qabs.qs) * np.array([1,0,0,0]), axis=-1)
- qabs.qs[top < bot] = -qabs.qs[top < bot]
- return qabs
-
- def __iter__(self): return iter(self.qs)
- def __len__(self): return len(self.qs)
-
- def __getitem__(self, k): return Quaternions(self.qs[k])
- def __setitem__(self, k, v): self.qs[k] = v.qs
-
- @property
- def lengths(self):
- return np.sum(self.qs**2.0, axis=-1)**0.5
-
- @property
- def reals(self):
- return self.qs[...,0]
-
- @property
- def imaginaries(self):
- return self.qs[...,1:4]
-
- @property
- def shape(self): return self.qs.shape[:-1]
-
- def repeat(self, n, **kwargs):
- return Quaternions(self.qs.repeat(n, **kwargs))
-
- def normalized(self):
- return Quaternions(self.qs / self.lengths[...,np.newaxis])
-
- def log(self):
- norm = abs(self.normalized())
- imgs = norm.imaginaries
- lens = np.sqrt(np.sum(imgs**2, axis=-1))
- lens = np.arctan2(lens, norm.reals) / (lens + 1e-10)
- return imgs * lens[...,np.newaxis]
-
- def constrained(self, axis):
-
- rl = self.reals
- im = np.sum(axis * self.imaginaries, axis=-1)
-
- t1 = -2 * np.arctan2(rl, im) + np.pi
- t2 = -2 * np.arctan2(rl, im) - np.pi
-
- top = Quaternions.exp(axis[np.newaxis] * (t1[:,np.newaxis] / 2.0))
- bot = Quaternions.exp(axis[np.newaxis] * (t2[:,np.newaxis] / 2.0))
- img = self.dot(top) > self.dot(bot)
-
- ret = top.copy()
- ret[ img] = top[ img]
- ret[~img] = bot[~img]
- return ret
-
- def constrained_x(self): return self.constrained(np.array([1,0,0]))
- def constrained_y(self): return self.constrained(np.array([0,1,0]))
- def constrained_z(self): return self.constrained(np.array([0,0,1]))
-
- def dot(self, q): return np.sum(self.qs * q.qs, axis=-1)
-
- def copy(self): return Quaternions(np.copy(self.qs))
-
- def reshape(self, s):
- self.qs = self.qs.reshape(s)
- return self
-
- def interpolate(self, ws):
- return Quaternions.exp(np.average(abs(self).log(), axis=0, weights=ws))
-
- def euler(self, order='xyz'):
-
- q = self.normalized().qs
- q0 = q[...,0]
- q1 = q[...,1]
- q2 = q[...,2]
- q3 = q[...,3]
- es = np.zeros(self.shape + (3,))
-
- if order == 'xyz':
- es[...,0] = np.arctan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1 * q1 + q2 * q2))
- es[...,1] = np.arcsin((2 * (q0 * q2 - q3 * q1)).clip(-1,1))
- es[...,2] = np.arctan2(2 * (q0 * q3 + q1 * q2), 1 - 2 * (q2 * q2 + q3 * q3))
- elif order == 'yzx':
- es[...,0] = np.arctan2(2 * (q1 * q0 - q2 * q3), -q1 * q1 + q2 * q2 - q3 * q3 + q0 * q0)
- es[...,1] = np.arctan2(2 * (q2 * q0 - q1 * q3), q1 * q1 - q2 * q2 - q3 * q3 + q0 * q0)
- es[...,2] = np.arcsin((2 * (q1 * q2 + q3 * q0)).clip(-1,1))
- else:
- raise NotImplementedError('Cannot convert from ordering %s' % order)
-
- """
-
- # These conversions don't appear to work correctly for Maya.
- # http://bediyap.com/programming/convert-quaternion-to-euler-rotations/
-
- if order == 'xyz':
- es[...,0] = np.arctan2(2 * (q0 * q3 - q1 * q2), q0 * q0 + q1 * q1 - q2 * q2 - q3 * q3)
- es[...,1] = np.arcsin((2 * (q1 * q3 + q0 * q2)).clip(-1,1))
- es[...,2] = np.arctan2(2 * (q0 * q1 - q2 * q3), q0 * q0 - q1 * q1 - q2 * q2 + q3 * q3)
- elif order == 'yzx':
- es[...,0] = np.arctan2(2 * (q0 * q1 - q2 * q3), q0 * q0 - q1 * q1 + q2 * q2 - q3 * q3)
- es[...,1] = np.arcsin((2 * (q1 * q2 + q0 * q3)).clip(-1,1))
- es[...,2] = np.arctan2(2 * (q0 * q2 - q1 * q3), q0 * q0 + q1 * q1 - q2 * q2 - q3 * q3)
- elif order == 'zxy':
- es[...,0] = np.arctan2(2 * (q0 * q2 - q1 * q3), q0 * q0 - q1 * q1 - q2 * q2 + q3 * q3)
- es[...,1] = np.arcsin((2 * (q0 * q1 + q2 * q3)).clip(-1,1))
- es[...,2] = np.arctan2(2 * (q0 * q3 - q1 * q2), q0 * q0 - q1 * q1 + q2 * q2 - q3 * q3)
- elif order == 'xzy':
- es[...,0] = np.arctan2(2 * (q0 * q2 + q1 * q3), q0 * q0 + q1 * q1 - q2 * q2 - q3 * q3)
- es[...,1] = np.arcsin((2 * (q0 * q3 - q1 * q2)).clip(-1,1))
- es[...,2] = np.arctan2(2 * (q0 * q1 + q2 * q3), q0 * q0 - q1 * q1 + q2 * q2 - q3 * q3)
- elif order == 'yxz':
- es[...,0] = np.arctan2(2 * (q1 * q2 + q0 * q3), q0 * q0 - q1 * q1 + q2 * q2 - q3 * q3)
- es[...,1] = np.arcsin((2 * (q0 * q1 - q2 * q3)).clip(-1,1))
- es[...,2] = np.arctan2(2 * (q1 * q3 + q0 * q2), q0 * q0 - q1 * q1 - q2 * q2 + q3 * q3)
- elif order == 'zyx':
- es[...,0] = np.arctan2(2 * (q0 * q1 + q2 * q3), q0 * q0 - q1 * q1 - q2 * q2 + q3 * q3)
- es[...,1] = np.arcsin((2 * (q0 * q2 - q1 * q3)).clip(-1,1))
- es[...,2] = np.arctan2(2 * (q0 * q3 + q1 * q2), q0 * q0 + q1 * q1 - q2 * q2 - q3 * q3)
- else:
- raise KeyError('Unknown ordering %s' % order)
-
- """
-
- # https://github.com/ehsan/ogre/blob/master/OgreMain/src/OgreMatrix3.cpp
- # Use this class and convert from matrix
-
- return es
-
-
- def average(self):
-
- if len(self.shape) == 1:
-
- # Accumulate the outer products q q^T over the batch.
- system = (self.qs[:,:,np.newaxis] * self.qs[:,np.newaxis,:]).sum(axis=0)
- w, v = np.linalg.eigh(system)
- qiT_dot_qref = (self.qs[:,:,np.newaxis] * v[np.newaxis,:,:]).sum(axis=1)
- return Quaternions(v[:,np.argmin((1.-qiT_dot_qref**2).sum(axis=0))])
-
- else:
-
- raise NotImplementedError('Cannot average multi-dimensional Quaternions')
-
- def angle_axis(self):
-
- norm = self.normalized()
- s = np.sqrt(1 - (norm.reals**2.0))
- s[s == 0] = 0.001
-
- angles = 2.0 * np.arccos(norm.reals)
- axis = norm.imaginaries / s[...,np.newaxis]
-
- return angles, axis
-
-
- def transforms(self):
-
- qw = self.qs[...,0]
- qx = self.qs[...,1]
- qy = self.qs[...,2]
- qz = self.qs[...,3]
-
- x2 = qx + qx; y2 = qy + qy; z2 = qz + qz;
- xx = qx * x2; yy = qy * y2; wx = qw * x2;
- xy = qx * y2; yz = qy * z2; wy = qw * y2;
- xz = qx * z2; zz = qz * z2; wz = qw * z2;
-
- m = np.empty(self.shape + (3,3))
- m[...,0,0] = 1.0 - (yy + zz)
- m[...,0,1] = xy - wz
- m[...,0,2] = xz + wy
- m[...,1,0] = xy + wz
- m[...,1,1] = 1.0 - (xx + zz)
- m[...,1,2] = yz - wx
- m[...,2,0] = xz - wy
- m[...,2,1] = yz + wx
- m[...,2,2] = 1.0 - (xx + yy)
-
- return m
-
- def ravel(self):
- return self.qs.ravel()
-
- @classmethod
- def id(cls, n):
-
- if isinstance(n, tuple):
- qs = np.zeros(n + (4,))
- qs[...,0] = 1.0
- return Quaternions(qs)
-
- if isinstance(n, int):
- qs = np.zeros((n,4))
- qs[:,0] = 1.0
- return Quaternions(qs)
-
- raise TypeError('Cannot Construct Quaternion from %s type' % str(type(n)))
-
- @classmethod
- def id_like(cls, a):
- qs = np.zeros(a.shape + (4,))
- qs[...,0] = 1.0
- return Quaternions(qs)
-
- @classmethod
- def exp(cls, ws):
-
- ts = np.sum(ws**2.0, axis=-1)**0.5
- ts[ts == 0] = 0.001
- ls = np.sin(ts) / ts
-
- qs = np.empty(ws.shape[:-1] + (4,))
- qs[...,0] = np.cos(ts)
- qs[...,1] = ws[...,0] * ls
- qs[...,2] = ws[...,1] * ls
- qs[...,3] = ws[...,2] * ls
-
- return Quaternions(qs).normalized()
-
- @classmethod
- def slerp(cls, q0s, q1s, a):
-
- fst, snd = cls._broadcast(q0s.qs, q1s.qs)
- fst, a = cls._broadcast(fst, a, scalar=True)
- snd, a = cls._broadcast(snd, a, scalar=True)
-
- len = np.sum(fst * snd, axis=-1)
-
- neg = len < 0.0
- len[neg] = -len[neg]
- snd[neg] = -snd[neg]
-
- amount0 = np.zeros(a.shape)
- amount1 = np.zeros(a.shape)
-
- linear = (1.0 - len) < 0.01
- omegas = np.arccos(len[~linear])
- sinoms = np.sin(omegas)
-
- amount0[ linear] = 1.0 - a[linear]
- amount1[ linear] = a[linear]
- amount0[~linear] = np.sin((1.0 - a[~linear]) * omegas) / sinoms
- amount1[~linear] = np.sin( a[~linear] * omegas) / sinoms
-
- return Quaternions(
- amount0[...,np.newaxis] * fst +
- amount1[...,np.newaxis] * snd)
-
- @classmethod
- def between(cls, v0s, v1s):
- a = np.cross(v0s, v1s)
- w = np.sqrt((v0s**2).sum(axis=-1) * (v1s**2).sum(axis=-1)) + (v0s * v1s).sum(axis=-1)
- return Quaternions(np.concatenate([w[...,np.newaxis], a], axis=-1)).normalized()
-
- @classmethod
- def from_angle_axis(cls, angles, axis):
- axis = axis / (np.sqrt(np.sum(axis**2, axis=-1)) + 1e-10)[...,np.newaxis]
- sines = np.sin(angles / 2.0)[...,np.newaxis]
- cosines = np.cos(angles / 2.0)[...,np.newaxis]
- return Quaternions(np.concatenate([cosines, axis * sines], axis=-1))
-
- @classmethod
- def from_euler(cls, es, order='xyz', world=False):
-
- axis = {
- 'x' : np.array([1,0,0]),
- 'y' : np.array([0,1,0]),
- 'z' : np.array([0,0,1]),
- }
-
- q0s = Quaternions.from_angle_axis(es[...,0], axis[order[0]])
- q1s = Quaternions.from_angle_axis(es[...,1], axis[order[1]])
- q2s = Quaternions.from_angle_axis(es[...,2], axis[order[2]])
-
- return (q2s * (q1s * q0s)) if world else (q0s * (q1s * q2s))
-
- @classmethod
- def from_transforms(cls, ts):
-
- d0, d1, d2 = ts[...,0,0], ts[...,1,1], ts[...,2,2]
-
- q0 = ( d0 + d1 + d2 + 1.0) / 4.0
- q1 = ( d0 - d1 - d2 + 1.0) / 4.0
- q2 = (-d0 + d1 - d2 + 1.0) / 4.0
- q3 = (-d0 - d1 + d2 + 1.0) / 4.0
-
- q0 = np.sqrt(q0.clip(0,None))
- q1 = np.sqrt(q1.clip(0,None))
- q2 = np.sqrt(q2.clip(0,None))
- q3 = np.sqrt(q3.clip(0,None))
-
- c0 = (q0 >= q1) & (q0 >= q2) & (q0 >= q3)
- c1 = (q1 >= q0) & (q1 >= q2) & (q1 >= q3)
- c2 = (q2 >= q0) & (q2 >= q1) & (q2 >= q3)
- c3 = (q3 >= q0) & (q3 >= q1) & (q3 >= q2)
-
- q1[c0] *= np.sign(ts[c0,2,1] - ts[c0,1,2])
- q2[c0] *= np.sign(ts[c0,0,2] - ts[c0,2,0])
- q3[c0] *= np.sign(ts[c0,1,0] - ts[c0,0,1])
-
- q0[c1] *= np.sign(ts[c1,2,1] - ts[c1,1,2])
- q2[c1] *= np.sign(ts[c1,1,0] + ts[c1,0,1])
- q3[c1] *= np.sign(ts[c1,0,2] + ts[c1,2,0])
-
- q0[c2] *= np.sign(ts[c2,0,2] - ts[c2,2,0])
- q1[c2] *= np.sign(ts[c2,1,0] + ts[c2,0,1])
- q3[c2] *= np.sign(ts[c2,2,1] + ts[c2,1,2])
-
- q0[c3] *= np.sign(ts[c3,1,0] - ts[c3,0,1])
- q1[c3] *= np.sign(ts[c3,2,0] + ts[c3,0,2])
- q2[c3] *= np.sign(ts[c3,2,1] + ts[c3,1,2])
-
- qs = np.empty(ts.shape[:-2] + (4,))
- qs[...,0] = q0
- qs[...,1] = q1
- qs[...,2] = q2
- qs[...,3] = q3
-
- return cls(qs)
-
-
-
\ No newline at end of file
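The `Quaternions` class removed above is a small batched quaternion toolkit: construction from Euler angles, axis-angle pairs or rotation matrices, slerp, vector rotation via `*`, and conversion back to Euler angles or matrices. The snippet below is a minimal usage sketch, not taken from the original repository; it assumes the class is importable as `Quaternions` and that its `_broadcast` helper (defined earlier in the same file) accepts the shapes shown.

```python
import numpy as np
# from Quaternions import Quaternions  # assumed import path for the class above

# Two single-element rotations from Euler angles (radians, 'xyz' order).
q0 = Quaternions.from_euler(np.array([[0.0, 0.0, 0.0]]))
q1 = Quaternions.from_euler(np.array([[0.0, np.pi / 2, 0.0]]))

# Interpolate a quarter of the way from q0 towards q1.
q = Quaternions.slerp(q0, q1, np.array([0.25]))

# Rotate a batch of 3D vectors, then read the result back as Euler angles.
v = q * np.array([[1.0, 0.0, 0.0]])
print(v, q.euler(order='xyz'))
```

Because `*` dispatches on the right operand's type, the same operator handles quaternion composition, vector rotation, and scalar-weighted slerp, which is why the class raises `TypeError` for anything else.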
diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/custom_solver.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/custom_solver.py
deleted file mode 100644
index 0284ae14ed2e93b2664ef52ad938061f78363516..0000000000000000000000000000000000000000
--- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/custom_solver.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from enum import Enum
-import itertools
-from typing import Any, Callable, Dict, Iterable, List, Set, Type, Union
-import torch
-
-from detectron2.config import CfgNode
-
-from detectron2.solver.build import maybe_add_gradient_clipping
-
-def match_name_keywords(n, name_keywords):
- out = False
- for b in name_keywords:
- if b in n:
- out = True
- break
- return out
-
-def build_custom_optimizer(cfg: CfgNode, model: torch.nn.Module) -> torch.optim.Optimizer:
- """
- Build an optimizer from config.
- """
- params: List[Dict[str, Any]] = []
- memo: Set[torch.nn.parameter.Parameter] = set()
- custom_multiplier_name = cfg.SOLVER.CUSTOM_MULTIPLIER_NAME
- optimizer_type = cfg.SOLVER.OPTIMIZER
- for key, value in model.named_parameters(recurse=True):
- if not value.requires_grad:
- continue
- # Avoid duplicating parameters
- if value in memo:
- continue
- memo.add(value)
- lr = cfg.SOLVER.BASE_LR
- weight_decay = cfg.SOLVER.WEIGHT_DECAY
- if "backbone" in key:
- lr = lr * cfg.SOLVER.BACKBONE_MULTIPLIER
- if match_name_keywords(key, custom_multiplier_name):
- lr = lr * cfg.SOLVER.CUSTOM_MULTIPLIER
- print('Custom LR', key, lr)
- param = {"params": [value], "lr": lr}
- if optimizer_type != 'ADAMW':
- param['weight_decay'] = weight_decay
- params += [param]
-
- def maybe_add_full_model_gradient_clipping(optim): # optim: the optimizer class
- # detectron2 doesn't have full model gradient clipping now
- clip_norm_val = cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE
- enable = (
- cfg.SOLVER.CLIP_GRADIENTS.ENABLED
- and cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model"
- and clip_norm_val > 0.0
- )
-
- class FullModelGradientClippingOptimizer(optim):
- def step(self, closure=None):
- all_params = itertools.chain(*[x["params"] for x in self.param_groups])
- torch.nn.utils.clip_grad_norm_(all_params, clip_norm_val)
- super().step(closure=closure)
-
- return FullModelGradientClippingOptimizer if enable else optim
-
-
- if optimizer_type == 'SGD':
- optimizer = maybe_add_full_model_gradient_clipping(torch.optim.SGD)(
- params, cfg.SOLVER.BASE_LR, momentum=cfg.SOLVER.MOMENTUM,
- nesterov=cfg.SOLVER.NESTEROV
- )
- elif optimizer_type == 'ADAMW':
- optimizer = maybe_add_full_model_gradient_clipping(torch.optim.AdamW)(
- params, cfg.SOLVER.BASE_LR,
- weight_decay=cfg.SOLVER.WEIGHT_DECAY
- )
- else:
- raise NotImplementedError(f"no optimizer type {optimizer_type}")
- if not cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model":
- optimizer = maybe_add_gradient_clipping(cfg, optimizer)
- return optimizer
\ No newline at end of file
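As a rough illustration of how the deleted `build_custom_optimizer` might be exercised, here is a hedged sketch. It assumes detectron2 is installed and that the non-default config keys the function reads (`SOLVER.OPTIMIZER`, `SOLVER.BACKBONE_MULTIPLIER`, `SOLVER.CUSTOM_MULTIPLIER`, `SOLVER.CUSTOM_MULTIPLIER_NAME`) are added by hand, as the project's own config setup would normally do; the linear layer is just a stand-in for a real detection model.

```python
import torch
from detectron2.config import get_cfg
# from detic.custom_solver import build_custom_optimizer  # function defined above

cfg = get_cfg()
cfg.SOLVER.BASE_LR = 1e-4
# Keys assumed to be added on top of the detectron2 defaults.
cfg.SOLVER.OPTIMIZER = 'ADAMW'
cfg.SOLVER.BACKBONE_MULTIPLIER = 0.1
cfg.SOLVER.CUSTOM_MULTIPLIER = 2.0
cfg.SOLVER.CUSTOM_MULTIPLIER_NAME = ['roi_heads']

model = torch.nn.Linear(16, 4)  # stand-in model; no 'backbone' params, so BASE_LR applies
optimizer = build_custom_optimizer(cfg, model)
print(optimizer)
```

With the default `CLIP_GRADIENTS.ENABLED = False`, the full-model clipping wrapper is skipped and a plain `AdamW` instance is returned.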
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/commands/p_command.py b/spaces/Mellow-ai/PhotoAI_Mellow/rembg/commands/p_command.py
deleted file mode 100644
index 2163bfbb2332f5b23f2fa9c305b15df9c6c425ff..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/commands/p_command.py
+++ /dev/null
@@ -1,181 +0,0 @@
-import json
-import pathlib
-import time
-from typing import cast
-
-import click
-import filetype
-from tqdm import tqdm
-from watchdog.events import FileSystemEvent, FileSystemEventHandler
-from watchdog.observers import Observer
-
-from ..bg import remove
-from ..session_factory import new_session
-from ..sessions import sessions_names
-
-
-@click.command(
- name="p",
- help="for a folder as input",
-)
-@click.option(
- "-m",
- "--model",
- default="u2net",
- type=click.Choice(sessions_names),
- show_default=True,
- show_choices=True,
- help="model name",
-)
-@click.option(
- "-a",
- "--alpha-matting",
- is_flag=True,
- show_default=True,
- help="use alpha matting",
-)
-@click.option(
- "-af",
- "--alpha-matting-foreground-threshold",
- default=240,
- type=int,
- show_default=True,
- help="trimap fg threshold",
-)
-@click.option(
- "-ab",
- "--alpha-matting-background-threshold",
- default=10,
- type=int,
- show_default=True,
- help="trimap bg threshold",
-)
-@click.option(
- "-ae",
- "--alpha-matting-erode-size",
- default=10,
- type=int,
- show_default=True,
- help="erode size",
-)
-@click.option(
- "-om",
- "--only-mask",
- is_flag=True,
- show_default=True,
- help="output only the mask",
-)
-@click.option(
- "-ppm",
- "--post-process-mask",
- is_flag=True,
- show_default=True,
- help="post process the mask",
-)
-@click.option(
- "-w",
- "--watch",
- default=False,
- is_flag=True,
- show_default=True,
- help="watches a folder for changes",
-)
-@click.option(
- "-bgc",
- "--bgcolor",
- default=None,
- type=(int, int, int, int),
- nargs=4,
- help="Background color (R G B A) to replace the removed background with",
-)
-@click.option("-x", "--extras", type=str)
-@click.argument(
- "input",
- type=click.Path(
- exists=True,
- path_type=pathlib.Path,
- file_okay=False,
- dir_okay=True,
- readable=True,
- ),
-)
-@click.argument(
- "output",
- type=click.Path(
- exists=False,
- path_type=pathlib.Path,
- file_okay=False,
- dir_okay=True,
- writable=True,
- ),
-)
-def p_command(
- model: str,
- extras: str,
- input: pathlib.Path,
- output: pathlib.Path,
- watch: bool,
- **kwargs,
-) -> None:
- try:
- kwargs.update(json.loads(extras))
- except Exception:
- pass
-
- session = new_session(model)
-
- def process(each_input: pathlib.Path) -> None:
- try:
- mimetype = filetype.guess(each_input)
- if mimetype is None:
- return
- if mimetype.mime.find("image") < 0:
- return
-
- each_output = (output / each_input.name).with_suffix(".png")
- each_output.parents[0].mkdir(parents=True, exist_ok=True)
-
- if not each_output.exists():
- each_output.write_bytes(
- cast(
- bytes,
- remove(each_input.read_bytes(), session=session, **kwargs),
- )
- )
-
- if watch:
- print(
- f"processed: {each_input.absolute()} -> {each_output.absolute()}"
- )
- except Exception as e:
- print(e)
-
- inputs = list(input.glob("**/*"))
- if not watch:
- inputs = tqdm(inputs)
-
- for each_input in inputs:
- if not each_input.is_dir():
- process(each_input)
-
- if watch:
- observer = Observer()
-
- class EventHandler(FileSystemEventHandler):
- def on_any_event(self, event: FileSystemEvent) -> None:
- if not (
- event.is_directory or event.event_type in ["deleted", "closed"]
- ):
- process(pathlib.Path(event.src_path))
-
- event_handler = EventHandler()
- observer.schedule(event_handler, input, recursive=False)
- observer.start()
-
- try:
- while True:
- time.sleep(1)
-
- finally:
- observer.stop()
- observer.join()
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/module_losses/db_module_loss.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/module_losses/db_module_loss.py
deleted file mode 100644
index ba8487310f2ce9592a2fa5b8b20621b870a9fe05..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/module_losses/db_module_loss.py
+++ /dev/null
@@ -1,300 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from typing import Dict, List, Sequence, Tuple, Union
-
-import cv2
-import numpy as np
-import torch
-from mmdet.models.utils import multi_apply
-from shapely.geometry import Polygon
-from torch import Tensor
-
-from mmocr.registry import MODELS
-from mmocr.structures import TextDetDataSample
-from mmocr.utils import offset_polygon
-from mmocr.utils.typing_utils import ArrayLike
-from .seg_based_module_loss import SegBasedModuleLoss
-
-
-@MODELS.register_module()
-class DBModuleLoss(SegBasedModuleLoss):
- r"""The class for implementing DBNet loss.
-
- This is partially adapted from https://github.com/MhLiao/DB.
-
- Args:
- loss_prob (dict): The loss config for probability map. Defaults to
- dict(type='MaskedBalancedBCEWithLogitsLoss').
- loss_thr (dict): The loss config for threshold map. Defaults to
- dict(type='MaskedSmoothL1Loss', beta=0).
- loss_db (dict): The loss config for binary map. Defaults to
- dict(type='MaskedDiceLoss').
- weight_prob (float): The weight of probability map loss.
- Denoted as :math:`\alpha` in paper. Defaults to 5.
- weight_thr (float): The weight of threshold map loss.
- Denoted as :math:`\beta` in paper. Defaults to 10.
- shrink_ratio (float): The ratio of shrunk text region. Defaults to 0.4.
- thr_min (float): The minimum threshold map value. Defaults to 0.3.
- thr_max (float): The maximum threshold map value. Defaults to 0.7.
- min_sidelength (int or float): The minimum sidelength of the
- minimum rotated rectangle around any text region. Defaults to 8.
- """
-
- def __init__(self,
- loss_prob: Dict = dict(
- type='MaskedBalancedBCEWithLogitsLoss'),
- loss_thr: Dict = dict(type='MaskedSmoothL1Loss', beta=0),
- loss_db: Dict = dict(type='MaskedDiceLoss'),
- weight_prob: float = 5.,
- weight_thr: float = 10.,
- shrink_ratio: float = 0.4,
- thr_min: float = 0.3,
- thr_max: float = 0.7,
- min_sidelength: Union[int, float] = 8) -> None:
- super().__init__()
- self.loss_prob = MODELS.build(loss_prob)
- self.loss_thr = MODELS.build(loss_thr)
- self.loss_db = MODELS.build(loss_db)
- self.weight_prob = weight_prob
- self.weight_thr = weight_thr
- self.shrink_ratio = shrink_ratio
- self.thr_min = thr_min
- self.thr_max = thr_max
- self.min_sidelength = min_sidelength
-
- def forward(self, preds: Tuple[Tensor, Tensor, Tensor],
- data_samples: Sequence[TextDetDataSample]) -> Dict:
- """Compute DBNet loss.
-
- Args:
- preds (tuple(tensor)): Raw predictions from model, containing
- ``prob_logits``, ``thr_map`` and ``binary_map``.
- Each is a tensor of shape :math:`(N, H, W)`.
- data_samples (list[TextDetDataSample]): The data samples.
-
- Returns:
- results(dict): The dict for dbnet losses with loss_prob, \
- loss_db and loss_thr.
- """
- prob_logits, thr_map, binary_map = preds
- gt_shrinks, gt_shrink_masks, gt_thrs, gt_thr_masks = self.get_targets(
- data_samples)
- gt_shrinks = gt_shrinks.to(prob_logits.device)
- gt_shrink_masks = gt_shrink_masks.to(prob_logits.device)
- gt_thrs = gt_thrs.to(thr_map.device)
- gt_thr_masks = gt_thr_masks.to(thr_map.device)
- loss_prob = self.loss_prob(prob_logits, gt_shrinks, gt_shrink_masks)
-
- loss_thr = self.loss_thr(thr_map, gt_thrs, gt_thr_masks)
- loss_db = self.loss_db(binary_map, gt_shrinks, gt_shrink_masks)
-
- results = dict(
- loss_prob=self.weight_prob * loss_prob,
- loss_thr=self.weight_thr * loss_thr,
- loss_db=loss_db)
-
- return results
-
- def _is_poly_invalid(self, poly: np.ndarray) -> bool:
- """Check if the input polygon is invalid or not. It is invalid if its
- area is smaller than 1 or the shorter side of its minimum bounding box
- is smaller than min_sidelength.
-
- Args:
- poly (ndarray): The polygon.
-
- Returns:
- bool: Whether the polygon is invalid.
- """
- poly = poly.reshape(-1, 2)
- area = Polygon(poly).area
- if abs(area) < 1:
- return True
- rect_size = cv2.minAreaRect(poly)[1]
- len_shortest_side = min(rect_size)
- if len_shortest_side < self.min_sidelength:
- return True
-
- return False
-
- def _generate_thr_map(self, img_size: Tuple[int, int],
- polygons: ArrayLike) -> np.ndarray:
- """Generate threshold map.
-
- Args:
- img_size (tuple(int)): The image size (h, w)
- polygons (Sequence[ndarray]): 2-d array, representing all the
- polygons of the text region.
-
- Returns:
- tuple:
-
- - thr_map (ndarray): The generated threshold map.
- - thr_mask (ndarray): The effective mask of threshold map.
- """
- thr_map = np.zeros(img_size, dtype=np.float32)
- thr_mask = np.zeros(img_size, dtype=np.uint8)
-
- for polygon in polygons:
- self._draw_border_map(polygon, thr_map, mask=thr_mask)
- thr_map = thr_map * (self.thr_max - self.thr_min) + self.thr_min
-
- return thr_map, thr_mask
-
- def _draw_border_map(self, polygon: np.ndarray, canvas: np.ndarray,
- mask: np.ndarray) -> None:
- """Generate threshold map for one polygon.
-
- Args:
- polygon (np.ndarray): The polygon.
- canvas (np.ndarray): The generated threshold map.
- mask (np.ndarray): The generated threshold mask.
- """
-
- polygon = polygon.reshape(-1, 2)
- polygon_obj = Polygon(polygon)
- distance = (
- polygon_obj.area * (1 - np.power(self.shrink_ratio, 2)) /
- polygon_obj.length)
- expanded_polygon = offset_polygon(polygon, distance)
- if len(expanded_polygon) == 0:
- print(f'Padding {polygon} with {distance} gets {expanded_polygon}')
- expanded_polygon = polygon.copy().astype(np.int32)
- else:
- expanded_polygon = expanded_polygon.reshape(-1, 2).astype(np.int32)
-
- x_min = expanded_polygon[:, 0].min()
- x_max = expanded_polygon[:, 0].max()
- y_min = expanded_polygon[:, 1].min()
- y_max = expanded_polygon[:, 1].max()
-
- width = x_max - x_min + 1
- height = y_max - y_min + 1
-
- polygon[:, 0] = polygon[:, 0] - x_min
- polygon[:, 1] = polygon[:, 1] - y_min
-
- xs = np.broadcast_to(
- np.linspace(0, width - 1, num=width).reshape(1, width),
- (height, width))
- ys = np.broadcast_to(
- np.linspace(0, height - 1, num=height).reshape(height, 1),
- (height, width))
-
- distance_map = np.zeros((polygon.shape[0], height, width),
- dtype=np.float32)
- for i in range(polygon.shape[0]):
- j = (i + 1) % polygon.shape[0]
- absolute_distance = self._dist_points2line(xs, ys, polygon[i],
- polygon[j])
- distance_map[i] = np.clip(absolute_distance / distance, 0, 1)
- distance_map = distance_map.min(axis=0)
-
- x_min_valid = min(max(0, x_min), canvas.shape[1] - 1)
- x_max_valid = min(max(0, x_max), canvas.shape[1] - 1)
- y_min_valid = min(max(0, y_min), canvas.shape[0] - 1)
- y_max_valid = min(max(0, y_max), canvas.shape[0] - 1)
-
- if x_min_valid - x_min >= width or y_min_valid - y_min >= height:
- return
-
- cv2.fillPoly(mask, [expanded_polygon.astype(np.int32)], 1.0)
- canvas[y_min_valid:y_max_valid + 1,
- x_min_valid:x_max_valid + 1] = np.fmax(
- 1 - distance_map[y_min_valid - y_min:y_max_valid - y_max +
- height, x_min_valid - x_min:x_max_valid -
- x_max + width],
- canvas[y_min_valid:y_max_valid + 1,
- x_min_valid:x_max_valid + 1])
-
- def get_targets(self, data_samples: List[TextDetDataSample]) -> Tuple:
- """Generate loss targets from data samples.
-
- Args:
- data_samples (list(TextDetDataSample)): Ground truth data samples.
-
- Returns:
- tuple: A tuple of four tensors as DBNet targets.
- """
-
- gt_shrinks, gt_shrink_masks, gt_thrs, gt_thr_masks = multi_apply(
- self._get_target_single, data_samples)
- gt_shrinks = torch.cat(gt_shrinks)
- gt_shrink_masks = torch.cat(gt_shrink_masks)
- gt_thrs = torch.cat(gt_thrs)
- gt_thr_masks = torch.cat(gt_thr_masks)
- return gt_shrinks, gt_shrink_masks, gt_thrs, gt_thr_masks
-
- def _get_target_single(self, data_sample: TextDetDataSample) -> Tuple:
- """Generate loss target from a data sample.
-
- Args:
- data_sample (TextDetDataSample): The data sample.
-
- Returns:
- tuple: A tuple of four tensors as the targets of one prediction.
- """
-
- gt_instances = data_sample.gt_instances
- ignore_flags = gt_instances.ignored
- for idx, polygon in enumerate(gt_instances.polygons):
- if self._is_poly_invalid(polygon):
- ignore_flags[idx] = True
- gt_shrink, ignore_flags = self._generate_kernels(
- data_sample.img_shape,
- gt_instances.polygons,
- self.shrink_ratio,
- ignore_flags=ignore_flags)
-
- # Get boolean mask where Trues indicate text instance pixels
- gt_shrink = gt_shrink > 0
-
- gt_shrink_mask = self._generate_effective_mask(
- data_sample.img_shape, gt_instances[ignore_flags].polygons)
- gt_thr, gt_thr_mask = self._generate_thr_map(
- data_sample.img_shape, gt_instances[~ignore_flags].polygons)
-
- # to_tensor
- gt_shrink = torch.from_numpy(gt_shrink).unsqueeze(0).float()
- gt_shrink_mask = torch.from_numpy(gt_shrink_mask).unsqueeze(0).float()
- gt_thr = torch.from_numpy(gt_thr).unsqueeze(0).float()
- gt_thr_mask = torch.from_numpy(gt_thr_mask).unsqueeze(0).float()
- return gt_shrink, gt_shrink_mask, gt_thr, gt_thr_mask
-
- @staticmethod
- def _dist_points2line(xs: np.ndarray, ys: np.ndarray, pt1: np.ndarray,
- pt2: np.ndarray) -> np.ndarray:
- """Compute distances from points to a line. This is adapted from
- https://github.com/MhLiao/DB.
-
- Args:
- xs (ndarray): The x coordinates of points of size :math:`(N, )`.
- ys (ndarray): The y coordinates of size :math:`(N, )`.
- pt1 (ndarray): The first point on the line of size :math:`(2, )`.
- pt2 (ndarray): The second point on the line of size :math:`(2, )`.
-
- Returns:
- ndarray: The distance matrix of size :math:`(N, )`.
- """
- # Consider a triangle with edges a, b, c, where c is the segment from pt1 to pt2.
- # a^2
- a_square = np.square(xs - pt1[0]) + np.square(ys - pt1[1])
- # b^2
- b_square = np.square(xs - pt2[0]) + np.square(ys - pt2[1])
- # c^2
- c_square = np.square(pt1[0] - pt2[0]) + np.square(pt1[1] - pt2[1])
- # -cosC=(c^2-a^2-b^2)/2(ab)
- neg_cos_c = (
- (c_square - a_square - b_square) /
- (np.finfo(np.float32).eps + 2 * np.sqrt(a_square * b_square)))
- # clip -cosC value to [-1, 1]
- neg_cos_c = np.clip(neg_cos_c, -1.0, 1.0)
- # sinC^2=1-cosC^2
- square_sin = 1 - np.square(neg_cos_c)
- square_sin = np.nan_to_num(square_sin)
- # distance=a*b*sinC/c=a*h/c=2*area/c
- result = np.sqrt(a_square * b_square * square_sin /
- (np.finfo(np.float32).eps + c_square))
- # set result to minimum edge if C < pi/2
- result[neg_cos_c < 0] = np.sqrt(np.fmin(a_square, b_square))[neg_cos_c < 0]
- return result
- # Convert to mono.
- if len(data.shape) > 1:
- data = np.mean(data, axis=1)
- # Resample to the rate assumed by VGGish.
- if sample_rate != vggish_params.SAMPLE_RATE:
- data = resampy.resample(data, sample_rate, vggish_params.SAMPLE_RATE)
-
- # Compute log mel spectrogram features.
- log_mel = mel_features.log_mel_spectrogram(
- data,
- audio_sample_rate=vggish_params.SAMPLE_RATE,
- log_offset=vggish_params.LOG_OFFSET,
- window_length_secs=vggish_params.STFT_WINDOW_LENGTH_SECONDS,
- hop_length_secs=vggish_params.STFT_HOP_LENGTH_SECONDS,
- num_mel_bins=vggish_params.NUM_MEL_BINS,
- lower_edge_hertz=vggish_params.MEL_MIN_HZ,
- upper_edge_hertz=vggish_params.MEL_MAX_HZ)
-
- # Frame features into examples.
- features_sample_rate = 1.0 / vggish_params.STFT_HOP_LENGTH_SECONDS
- example_window_length = int(round(
- vggish_params.EXAMPLE_WINDOW_SECONDS * features_sample_rate))
- example_hop_length = int(round(
- vggish_params.EXAMPLE_HOP_SECONDS * features_sample_rate))
- log_mel_examples = mel_features.frame(
- log_mel,
- window_length=example_window_length,
- hop_length=example_hop_length)
- return log_mel_examples
-
-
-def wavfile_to_examples(wav_file):
- """Convenience wrapper around waveform_to_examples() for a common WAV format.
-
- Args:
- wav_file: String path to a file, or a file-like object. The file
- is assumed to contain WAV audio data with signed 16-bit PCM samples.
-
- Returns:
- See waveform_to_examples.
- """
- wav_data, sr = wav_read(wav_file)
- assert wav_data.dtype == np.int16, 'Bad sample type: %r' % wav_data.dtype
- samples = wav_data / 32768.0 # Convert to [-1.0, +1.0]
- return waveform_to_examples(samples, sr)
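For completeness, a small usage sketch of the feature extraction above. It is not from the original repository; it assumes the module containing these functions (whose path is not shown above) is importable, for example under the conventional name `vggish_input`, with `numpy`, `resampy`, `mel_features`, and `vggish_params` available.

```python
import numpy as np
# from vggish_input import waveform_to_examples  # assumed module name for the code above

sr = 16000
t = np.arange(3 * sr) / sr                       # three seconds of audio
samples = 0.5 * np.sin(2 * np.pi * 440.0 * t)    # mono waveform in [-1.0, +1.0]

examples = waveform_to_examples(samples, sr)
print(examples.shape)  # (num_examples, frames, mel_bins) per the VGGish parameters
```

`wavfile_to_examples` is just the WAV-file convenience wrapper around the same function, rescaling signed 16-bit PCM samples into the expected [-1.0, +1.0] range.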
diff --git a/spaces/Nee001/bing0/README.md b/spaces/Nee001/bing0/README.md
deleted file mode 100644
index d65eafbc8431818f738e8e086455fa6159f101bb..0000000000000000000000000000000000000000
--- a/spaces/Nee001/bing0/README.md
+++ /dev/null
@@ -1,196 +0,0 @@
----
-title: bingo
-emoji: 📉
-colorFrom: red
-colorTo: red
-sdk: docker
-license: mit
-duplicated_from: hf4all/bingo
----
-
-
-
-# Bingo
-
-Bingo, a New Bing that lets you breathe easy.
-
-A faithful recreation of the main features of the New Bing web UI, usable inside mainland China, compatible with most Microsoft Bing AI features, and able to be self-hosted.
-
-
-
-[Docker Hub](https://hub.docker.com/repository/docker/weaigc/bingo/) · [MIT License](https://github.com/weaigc/bingo/blob/main/license)
-
-
-
-English | [简体中文](https://github.com/opendilab/LLMRiddles/blob/main/README_zh.md)
-
-## :thinking: What's This
-Welcome to LLM Riddles! This is a game of wits and courage played against language models. In each level you construct questions for the model and try to elicit answers that meet the level's requirements, using whatever approach you can think of.
-
-## :space_invader: How to Play
-We provide an online version for players to directly access and try out.
-- [ChatGPT + English(w/o key)](https://huggingface.co/spaces/OpenDILabCommunity/LLMRiddlesChatGPTEN)
-- [ChatGPT + Chinese(w/o key)](https://huggingface.co/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN)
-- [Mistral + English(w/ key)](https://d9b451a97791dd8ef3.gradio.live)
-- [ChatGPT + Chinese(w/ key)](http://llmriddles.opendilab.net/)
-
-Local deployment can be done in the following ways:
-## Installation
-### Use ChatGPT / ChatGLM API
-```shell
-pip3 install -r requirements.txt
-```
-### Deploy Mistral-7B-Instruct-v0.1 for local inference
-```shell
-pip3 install -r requirements-dev.txt
-```
-## Launch
-### ChatGPT + Chinese
-```shell
-QUESTION_LANG=cn QUESTION_LLM='chatgpt' QUESTION_LLM_KEY= python3 -u app.py
-```
-### ChatGPT + English
-```shell
-QUESTION_LANG=en QUESTION_LLM='chatgpt' QUESTION_LLM_KEY= python3 -u app.py
-```
-### ChatGLM + Chinese
-```shell
-QUESTION_LANG=cn QUESTION_LLM='chatglm' QUESTION_LLM_KEY= python3 -u app.py
-```
-### ChatGLM + English
-```shell
-QUESTION_LANG=en QUESTION_LLM='chatglm' QUESTION_LLM_KEY= python3 -u app.py
-```
-### Mistral-7B-Instruct-v0.1 + English
-```shell
-QUESTION_LANG=en QUESTION_LLM='mistral-7b' python3 -u app.py
-```
-## :technologist: Why Doing This
-
-Our goal is to use this game to give participants a deeper understanding of the fascinating aspects of prompt engineering and natural language processing. Playing it shows how cleverly constructed prompts can trigger surprising responses from AI systems, and helps players appreciate the power of deep learning and natural language processing technologies.
-
-## :raising_hand: How to Submit a Custom Level
-If you have interesting questions or ideas, you are welcome to submit your own levels: [initiate a Pull Request](https://github.com/opendilab/LLMRiddles/compare) and we will include it in the game after review.
-A level submission should include the following:
-- Pull Request title, example: feature(username): Chapter X-Level Design
- The ID you would like to be credited as
-- Modify the corresponding chapter question files
-- Modification of \__init__.py
-
-For a complete example, please refer to: [Submit your own level design](https://github.com/opendilab/LLMRiddles/pull/6)
-
-## :writing_hand: Roadmap
-
-- [x] Support custom levels
-- [x] Online trial link
-- [x] Hugging Face Space link
-- [x] Support Mistral-7B(English version)
-- [x] Support ChatGLM(Chinese and English version)
-- [ ] Support Baichuan2-7B(Chinese version)
-- [ ] Support LLaMA2-7B(English version)
-- [ ] LLM inference speed optimization
-- [ ] More question levels and solution blogs
-
-## :speech_balloon: Feedback and Contribution
-- [Start an Issue](https://github.com/opendilab/CodeMorpheus/issues/new/choose) on GitHub
-- Contact us by email (opendilab@pjlab.org.cn)
-- Discuss on OpenDILab's WeChat group (i.e. add us on WeChat: ding314assist)
-
-
-## :star2: Special Thanks
-- Thanks to [Haoqiang Fan](https://www.zhihu.com/people/haoqiang-fan) for his original idea and title, which provided inspiration and motivation for the development and expansion of this project.
-- Thanks to [HuggingFace](https://huggingface.co) for supporting and assisting the game.
-- Thanks to [ChatGLM](https://chatglm.cn) for supporting and assisting the game, especially sufficient inference token support.
-- Thanks to [LLM Riddles contributors](https://github.com/opendilab/LLMRiddles/graphs/contributors) for their implementation and support.
-
-## :label: License
-All code within this repository is under [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
-
-